\section{Introduction} Plastic pollution poses an imminent threat to the marine environment, food safety \cite{BARBOZA2018336}, human health, and eco-tourism, and contributes to climate change \cite{schmidt2017export}. Global plastic production has exceeded 500 million tons, and projections indicate that 30\% of all produced plastic will end up discarded in the oceans \cite{nollkaemper1994land, epa2014municipal}. Researchers have documented a five-fold increase in plastic debris within the Central Pacific Gyre and have shown that plastic pieces now outnumber native plankton 6:1 in abundance \cite{clapp2012rising}. A significant amount of marine plastic (about 80\%) originates from land-based sources \cite{Windom_1992}, most commonly in the form of food containers, such as plastic bags and bottles, and packaging materials. The remaining $\sim$20\% stems from shipping vessel discharges and discarded commercial fishing gear \cite{Windom_1992}. Studies have shown that removing plastic from the oceans substantially benefits ecosystems: it prevents the movement of invasive species between regions \cite{carlton2017tsunami}, prevents degradation into micro-plastics \cite{andrady2011microplastics}, and decreases emissions of greenhouse gases, thereby decelerating climate change \cite{royer2018production}. Understanding the spatiotemporal distribution of plastic requires more accurate methods with reliable and low-cost deployment strategies. Various in situ approaches to ocean plastic monitoring have been proposed, including mapping plastic debris with SONAR/LIDAR \cite{valdenegrotoro2019deep}, human counting via visual methods \cite{van2018methodology}, and debris sampling using fishing nets \cite{rech2014rivers}. However, these methods are labor-intensive, incur high financial costs, and do not cover large surface areas. Furthermore, polymers such as polyethylene and polypropylene develop a biofilm when submerged in water that influences their sinking behavior \cite{Kaiser_Kowalski_Waniek_2017}. Any polymer whose density is raised past a certain point by biofilm sinks beyond the reach of surface sampling devices such as manta trawls. These surface sampling limitations therefore lead to underestimates of the quantity of floating plastic. Creating an accurate estimate of marine plastic debris requires developing alternative methods that investigate the distribution of positively buoyant plastic across the entire water column. Recently, several methods using computer vision and modern deep learning have been suggested to quantify marine plastic debris without physical removal \cite{oceancleanup, fulton2018robotic}. A study in Earth and Space Science illustrates a method using the two-stage Faster R-CNN model to actively monitor and identify surface plastic as it floats down a river \cite{oceancleanup}. This approach does not account for the sinking-polymer problem, but shows that, on average, an automated method detects 34.6\% more plastic than human visual counting does. Remote sensing of plastic litter provides a promising new and less labor-intensive tool for the quantification and characterization of ocean plastic pollution \cite{rs13173401}.
A research team at the University of Minnesota developed a computer vision model specialized for marine plastic detection in deep-sea environments \cite{fulton2018robotic}, demonstrating that quantification across the water column can be achieved. It also shows that object detection models and AUVs can be combined to great effect. The AquaVision project \cite{PANWAR2020100026} shows that object detection models can reach high levels of precision utilizing open-source datasets and one-stage approaches such as RetinaNet. Since AquaVision was trained on the TACO dataset, it also indicates that a computer vision model trained on land-based images of plastic can detect similar types of plastic in a marine environment. The Ocean Cleanup group has demonstrated that computer vision models can detect floating plastic debris via cameras attached to above-water vessels \cite{rs13173401}. Their results show that macroplastics can be successfully quantified for comparisons across methods. Unlike these recently proposed approaches, which specialize in monitoring floating marine plastic \cite{oceancleanup}, deep-sea environments \cite{fulton2018robotic}, or plastic on land \cite{PANWAR2020100026}, our object detection model (DeepPlastic) utilizes a training set composed exclusively of marine-based plastic images and performs equally well across the entire water column. \begin{figure}[ht] \centering \includegraphics[width=0.90\linewidth]{figures/OceanIllustration_Sharp.jpg} \caption{Concept of real-time plastic detection via AUVs equipped with cameras and DeepTrash vision} \label{fig_concept} \end{figure} In this study, we tested four state-of-the-art deep-learning architectures, Faster R-CNN, Single Shot MultiBox Detector, YOLOv4-Tiny, and YOLOv5-S, and report their performance for inferring marine plastic debris in real-time. The main results are: 1) the model's precision and accuracy in identifying plastic debris, 2) evidence that this method can successfully distinguish marine plastic debris from similar-looking non-plastic objects, and 3) a generalized model capable of detecting marine plastic in most oceanic environments. The results show that deep learning models can identify plastic with significant accuracy while operating at a rate that supports real-time applications such as autonomous underwater vehicles (AUVs) for at-scale marine plastic quantification and monitoring. \section{Related Work} Increasing demand for identifying and removing plastic from the world's waterways has led to a surge of research in computer vision and AUV solutions. A team of researchers at the University of Minnesota robotics lab recently experimented with AUV deployments for identifying deep-ocean marine plastic debris \cite{fulton2018robotic}. Another growing trend has been to utilize deep learning and computer vision to automatically identify floating marine plastic on river and ocean surfaces \cite{PANWAR2020100026}. Additionally, AUVs have been used as a means for environmental surveillance \cite{5509604}, mapping \cite{5603860}, and localization of marine plastic debris [20]. Underwater vision technology has been pushed forward by the work of Ge et al. \cite{Ge_Shi_Mei_Dai_Li_2016}, who used LIDAR to localize and map marine plastic debris on coastal beaches.
Further research into implementing LIDAR in conjunction with forward-facing SONAR image models trained by deep convolutional neural networks was conducted by Howell et al. \cite{Kurz_Buckley_Howell_Schneider_2009} and Valdenegro-Toro et al. \cite{valdenegrotoro2019deep}, resulting in a model capable of detecting underwater debris with 80\% accuracy. Unfortunately, these methods incur high expenses due to retrofitting sonar and requiring an in-house water tank for evaluation. The University of Minnesota robotics lab \cite{fulton2018robotic} annotated and published a dataset of images collected by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) \cite{JAMSTEC}. JAMSTEC released the J-EDI (JAMSTEC E-Library of Deep-Sea Images) dataset, which contains images and videos of marine plastic debris dating back to 1982. The work presented in this paper benefited from the University of Minnesota team's release of close to 3000 annotated images from the JAMSTEC J-EDI dataset, which were used to train our convolutional neural networks (CNNs) to identify features of plastic debris. Cameras, especially video cameras, have found common application as environmental monitoring systems \cite{mock1995underwater, premkumardeepak2017intelligent}. Underwater cameras provide a globally accessible and low-cost quantification aid. Combining object detection models with underwater cameras mounted on vehicles such as AUVs makes it possible to observe and monitor sub-surface plastics in known hotspots worldwide \cite{fulton2018robotic}. By mounting video cameras on AUVs, buoys, and other submersibles, institutions could feasibly quantify macro-plastics, which constitute 90\% of the total plastic mass in the oceans. \begin{figure}[ht] \centering \subfigure[Ocean]{\label{fig:ocean_plastic}\includegraphics[width=0.24\textwidth]{figures/plastic-1.jpg}} \subfigure[Lake]{\label{fig:lake_plastic}\includegraphics[width=0.24\textwidth]{figures/plastic-2.jpg}} \caption{Example images of marine plastic debris from the DeepTrash dataset in different marine environments} \label{fig_plastic} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=0.95\textwidth]{figures/dp.pdf} \caption{Methodology for Marine Plastic Detection} \label{fig_met} \end{figure*} \section{Network Architecture} Four state-of-the-art object detection models were selected for this work. Each architecture has different benefits and drawbacks, with the main trade-off being speed versus accuracy. \begin{itemize} \item \textit{Faster R-CNN Inception v2} Faster R-CNN \cite{10.5555/2969239.2969250} is an improvement on R-CNN \cite{DBLP:journals/corr/GirshickDDM13} that introduces a Region Proposal Network to make the network trainable end to end. The network uses the convolutional feature maps to produce region proposals, which are fed to the fully connected layers (in our case softmax layers) for detection. The original Faster R-CNN uses VGG-16 \cite{DBLP:journals/corr/SimonyanZ14a} for feature extraction, while we instead use the newer Inception v2 \cite{tensorflowmodelgarden2020} as the feature extractor because of its known ability to enhance object detection. \item \textit{Single Shot Multibox Detector MobileNet v2} Single Shot MultiBox Detector (SSD) \cite{DBLP:journals/corr/abs-1805-09501} is another well-known detection model that performs object localization and classification in a single forward pass of the network.
This architecture introduces additional convolutional layers on top of the base network to improve performance. We use a MobileNetv2 implementation \cite{NAGRATH2021102692} for faster performance. \item \textit{YOLOv5-S} \cite{Jocher_Stoken_Borovec_NanoCode012_ChristopherSTAN_Changyu_Laughing_Tkianai_Hogan_Lorenzomammana_et_al._2020} Unlike the officially released YOLOv4, YOLOv5 is currently under active development; therefore, all YOLOv5-related code and models may be subject to modification or deletion without notice. YOLOv5-S has 7.5 million parameters and 140 layers, and operates at a lightweight 7MB (14MB for weights pre-trained on COCO). This architecture uses the Cross Stage Partial Network (CSP) \cite{wang2019cspnet} as the processing backbone, trained on MS COCO, to extract rich and informative features from an input image. YOLOv5 also uses PANet \cite{liu2018path} as the model neck to generate feature pyramids, and the computationally friendly LeakyReLU and Sigmoid activation functions. The model uses SGD as the default optimizer, but these tests were performed with the ADAM adaptive learning rate enabled \cite{kingma2017adam}. \item \textit{YOLOv4-Tiny} \cite{bochkovskiy2020yolov4} Inference speeds on YOLOv4-Tiny can reach upwards of 400 frames per second on a 1080Ti GPU, with accuracy, precision, and recall that meet the demands of a production-ready robotics platform. YOLOv4-Tiny uses a CSPDarknet53-Tiny backbone as opposed to the full CSPDarknet53 network. To simplify computation, the YOLOv4-Tiny model uses LeakyReLU as the activation function. \end{itemize} \section{Methodology} \subsection{Dataset Construction} The dataset was curated by collecting videos of marine plastic in the field in California (South Lake Tahoe, Bodega Bay, San Francisco Bay). The videos vary significantly in quality, depth, and visibility to better represent the harshness of marine environments. After recording, marine plastic captured in the still images was manually identified, with an emphasis on choosing images containing complex object detection scenarios such as variable illumination, noise, and occlusion. Each image was then annotated to prepare it for object detection with the deep learning models. This curation approach ensured that the dataset closely conforms to real-world conditions. To further increase the representation of marine plastics in different locations, images were also sourced from datasets created by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) \cite{JAMSTEC}. Annotations were performed using the free tool Supervisely \cite{drozdov} to create the final dataset, which contains $\sim$3200 images. Ocean environments provide a wide variety of visual challenges, so all plastic instances were consolidated into a single class labeled ``trash\_plastic''. We call our final dataset DeepTrash \cite{tata_gautam_2021_5562940} and have open-sourced it to further research in this field. \subsection{Enhancements of Custom Dataset} The following procedures were implemented for the deep learning models to detect marine plastic: \begin{enumerate}[label={\alph*})] \item \textit{Dataset Formatting} The input data consisted of images and bounding-box annotation labels, converted into TFRecords (Faster R-CNN and SSD), PyTorch (YOLOv5-S), or Darknet (YOLOv4-Tiny) format for each respective model. The bounding boxes delimit each image's regions of interest via 2D coordinates stored in the respective annotation file; a minimal sketch of such a conversion is given below.
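The following sketch is illustrative only (it is not the released DeepTrash tooling) and assumes corner-style pixel annotations; the \texttt{to\_darknet} helper and its example values are hypothetical.

\begin{verbatim}
# Hedged sketch: convert a corner-style pixel annotation
# (xmin, ymin, xmax, ymax) into the normalized Darknet/YOLO
# label format "class cx cy w h". Class 0 is "trash_plastic".
def to_darknet(xmin, ymin, xmax, ymax, img_w, img_h, cls=0):
    cx = (xmin + xmax) / 2.0 / img_w   # box center in [0, 1]
    cy = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / float(img_w)   # box size in [0, 1]
    h = (ymax - ymin) / float(img_h)
    return f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# Example: a hypothetical box in a 416x416 image.
print(to_darknet(50, 60, 150, 260, 416, 416))
\end{verbatim}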
\item \textit{Image Pre-processing} To ensure that learning occurs on consistent image properties, auto-orientation was applied to strip images of their Exchangeable Image File Format (EXIF) metadata \cite{9108753} so that the models interpret images uniformly regardless of capture orientation. Finally, the input images were resized to 416$\times$416 pixels and the bounding boxes adjusted accordingly. \item \textit{Data Augmentation} To keep the model from generalizing towards undesired features and to replicate underwater conditions such as variable illumination, occlusion, and color, the dataset was further enhanced by randomly changing the brightness and saturation of the images via PyTorch's built-in Transforms augmentations (see the sketch after this list). These modified images were then added back into the dataset, effectively tripling its size. \end{enumerate}
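A minimal sketch of such an augmentation step, assuming torchvision; the jitter ranges below are placeholders, not the exact settings used for DeepTrash.

\begin{verbatim}
# Illustrative augmentation pipeline (jitter ranges are
# placeholder assumptions, not the study's exact settings).
import torchvision.transforms as T

augment = T.Compose([
    T.ColorJitter(brightness=0.4, saturation=0.4),  # illumination/color
    T.RandomHorizontalFlip(p=0.5),
])
# aug_img = augment(img)  # 'img' is a PIL image; augmented copies
#                         # are added back alongside the originals
\end{verbatim}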
\subsection{Object Detection} We used four state-of-the-art neural network architectures, Faster R-CNN with Inception v2, Single Shot Multibox Detector with MobileNet v2, YOLOv5-S, and YOLOv4-Tiny, downloaded from their respective repositories. The following software versions were used: TensorFlow 1.5, PyTorch v1.8.1, Darknet, OpenCV 3.2.0, and CUDA 11.2. \subsubsection{Fine Tuning Hyperparameters} The object detection models use ADAM \cite{kingma2017adam} as the adaptive learning-rate optimizer, with a decaying learning rate over a set number of epochs. The final layer of each network uses Softmax and reflects the usage of a single class. \subsubsection{GPU Hardware} We tested two state-of-the-art GPUs, the NVIDIA P100 and the NVIDIA Tesla V100 (driver version 460.32.03), chosen for their proven parallel computing capability. \subsubsection{Training} After every 1000 iterations of training, the model was evaluated on the validation dataset to calculate precision, recall, and mean average precision (mAP). Training was paused to check the following: \begin{itemize} \item When accuracy stops increasing, the model no longer needs additional training, and stopping prevents overfitting. \item Depending on performance, hyperparameters receive adjustments to optimize the evaluation metrics. \end{itemize} \subsubsection{Evaluation Metrics} After training, the testing and validation datasets, whose images are mutually exclusive from the training dataset, were used as input to evaluate the network's performance. The model draws a bounding box around successfully detected objects with a confidence score of 0.50 or higher. The numbers of true positive bounding boxes drawn around marine plastic debris and of true negatives provide the basis of evaluation. The following performance metrics were used (a compact sketch of their computation is given at the end of this section): \begin{itemize} \item \textit{\textbf{True positive and true negative values}}: a true positive represents an outcome in which the model correctly predicts the positive class, and conversely, a true negative represents the model correctly predicting the negative class. \item \textit{\textbf{Precision and Recall}} -- represent whether the model successfully detected plastic in an image. \[Recall = \frac{TP}{TP+FN}\] \[Precision = \frac{TP}{TP+FP}\] \item \textit{\textbf{Mean Average Precision}} -- evaluates how often the network can recognize plastic in a group of images. After collecting the values for true and false positives, a precision-recall curve is generated using the Intersection over Union (IoU): \[ \textit{IoU}=\frac{BBox_{predicted} \cap BBox_{groundTruth}}{BBox_{predicted} \cup BBox_{groundTruth}} \] where \(BBox_{predicted}\) and \(BBox_{groundTruth}\) are the areas of the predicted and ground-truth bounding boxes, respectively. A high threshold for confidence and IoU must be set to ensure accuracy, with a detection counted as correct when the threshold is exceeded. The mAP is then obtained by integrating the precision-recall curve \cite{10.1007/978-3-642-40994-3_29}: \[ mAP = \int_{0}^{1} p(x) dx \] \item \textit{\textbf{F1-Score}} -- evaluates the balance between precision and recall. \item \textit{\textbf{GPU Speed (ms/img)}} -- represents how fast the network can infer marine plastic debris contained within an input image. \end{itemize} \subsubsection{Visualizing results} For each processed image, the network populates arrays containing the confidence scores for the predicted boxes and the total number of detections made per image. The following equation converts the normalized box coordinates into image coordinates for rendering bounding boxes on top of images: \begin{equation} imgCoord_k = Box_{i}^{j}\cdot Width \end{equation} where $k\in$~(left, right, top, bottom), $i$ is the box index, $j \in(0,1,2,3)$ indexes the normalized coordinates, and $Width$ is the width of the image. These image coordinates were used to visualize the predicted bounding boxes in Figure \ref{fig_initial_results}. \iffalse \begin{itemize} \item \textit{\(left\_coordinate=boxes[index][1] * image\_width\)} \item \textit{\(right\_coordinate=boxes[index][3] * image\_width\)} \item \textit{\(top\_coordinate=boxes[index][0] * image\_width\)} \item \textit{\(low\_coordinate=boxes[index][2] * image\_width\)} \end{itemize} \fi
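The sketch below illustrates, under simplifying assumptions (a single class and VOC-style all-point interpolation), how the IoU and mAP defined above can be computed; it is not the evaluation code used in this study, and the exact matching protocol may differ.

\begin{verbatim}
# Hedged sketch of the metrics above (single class; the exact
# matching protocol of the original evaluation may differ).
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def average_precision(recalls, precisions):
    """Integrate the precision-recall curve (equals mAP here)."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]  # monotone envelope
    return float(np.sum(np.diff(r) * p[1:]))
\end{verbatim}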
\begin{figure}[ht] \centering \subfigure[Detection near the water surface]{\label{fig:near_water}\includegraphics[width=0.24\textwidth]{figures/fig-1.jpg}} \subfigure[Detection of partially buried debris]{\label{fig:partial_occlusion}\includegraphics[width=0.24\textwidth]{figures/fig-2.jpg}} \caption{Results generated by the model with bounding boxes and confidence scores rendered over marine plastic debris.} \label{fig_initial_results} \end{figure} \begin{table}[t] \centering \vspace{3mm} \begin{tabular}{l|c|c|c} Network& mAP&F1-Score&Precision \\ \hline YOLOv5s&\textbf{85.0}&\textbf{0.89}&0.93\\ Tiny-YOLOv4&84.0&0.80&\textbf{0.96}\\ Faster R-CNN&79.0&0.76&0.84\\ SSD&76.0&0.71&0.83\\ \end{tabular} \caption{Detection metrics in mAP, F1, and Precision.} \label{tab:detection_one} \end{table} \begin{table} \centering \begin{tabular}{l|c|c} Network & P100 & V100 \\ \hline YOLOv5s & 2.8 & 1.4 \\ Tiny-YOLOv4 & \textbf{1.9} & \textbf{1.2} \\ Faster R-CNN & 2.4 & 1.5 \\ SSD & 2.5 & 2.1 \\ \end{tabular} \caption{Performance metrics for inference (ms/img).} \label{tab:fps} \vspace{1mm} \end{table} \section{Results} All results in Table \ref{tab:detection_one} were produced on the validation dataset presented in the Methodology section. Since the images in the training dataset were not isolated laboratory creations but real-world images taken directly in the field, the general object detection model represents marine plastic debris more accurately. This approach comes with a set of trade-offs: \begin{itemize} \item The model performs more strongly in real-world deployments, and therefore the evaluation results in Table \ref{tab:fps} do not significantly differ from near-real-time measurements taken in the field. \item Peak performance of the object detection model in a perfectly controlled environment could not be measured, so the highest possible benchmark of a single detection remains unknown. \item These trade-offs indicate that the results of this paper better approximate long-term performance across a wider variety of marine environments, leading to a more substantial evaluation of the object detection model's performance in the field. \end{itemize} \subsection{Quantitative Results} The results captured in Table \ref{tab:detection_one} demonstrate that near-real-time object detection of marine plastic debris in the epipelagic layer of the ocean is both feasible and close to real-world execution. The tested models demonstrate high average precision, mAP, and F1 scores relative to their inference speed, and repeated testing of the model produced low variance in the results. Usually, evaluation results between models showcase a clear trade-off, such as sacrificing significant inference speed for increased accuracy. However, the results presented in this paper show that both YOLOv4-Tiny and YOLOv5-S produce high debris localization metrics when identifying epipelagic plastic in near real-time. YOLOv5-S provides a significantly higher F1 score in exchange for a slight dip in inference performance. Reducing the number of classes to one, i.e., ``trash\_plastic'', ensures an even distribution of class examples within the training dataset. The singular nature of this object detection model may reduce the total number of use cases the model can serve, but guarantees strong performance on use cases within its domain. A single classification also builds upon the pre-trained weights' performance during transfer learning, as it means less skewing towards unrelated classifications. \subsection{Evaluation Results} \subsubsection{Object Detection} The mAP values obtained from the object detection models on the validation dataset are given in Table \ref{tab:detection_one}. All models demonstrate high accuracy in plastic localization, and the YOLOv5-S model achieves a higher mAP than the YOLOv4-Tiny, Faster R-CNN, and SSD models. \subsubsection{Inference Speed} These speeds were dictated by the GPU (NVIDIA P100 and V100, using a batch size of 32) and include image preprocessing. The YOLOv4-Tiny model provided the highest inference speed-to-mAP ratio on the provided dataset. \subsection{Qualitative Results} This study focused on determining the feasibility of detecting marine plastic debris for near-real-time monitoring and quantification. To that end, the results in Table \ref{tab:detection_one} demonstrate that general object detection models can fill this much-needed role. Since a relatively high level of performance can be maintained at such fast inference speeds, we believe that models such as the one presented in this paper can be applied to AUVs and other tools for real-world solutions. Equally important, these solutions now have a near-future timeline for implementation and have been proven to be low-cost. \section{Discussion} \label{sec:disc} In this study, we built a computer vision model that detects marine plastic debris with high precision, visualizes the detections with bounding boxes, and operates at near-real-time speeds.
These conditions match the requirements for robotic platforms such as AUVs or buoys. As one of the first object detection models specialized for the epipelagic layer, direct comparisons cannot readily be performed; however, relative performance comparisons between DeepPlastic and object detection models geared towards plastic detection in deep-sea and river environments reveal DeepPlastic's state-of-the-art performance. The article mentioned above in Earth and Space Science \cite{https://doi.org/10.1029/2019EA000960} describes a two-stage reference model, utilizing cameras positioned above water, capable of detecting plastic floating on rivers. It utilized 1272 images in its training set and the Faster R-CNN architecture for its second stage. Across multiple experiments, this model's highest precision rate was 68\% when employing image flipping and the ADAM adaptive learning rate. DeepPlastic was trained on a dataset using image flipping, in addition to other data augmentation techniques, and also uses the ADAM learning rate--but achieves a precision rate of 93\% when detecting marine plastic debris submerged in the ocean via underwater cameras. The University of Minnesota's (UoM) computer vision model \cite{fulton2018robotic}, specialized for marine plastic detection in deep-sea environments, utilized 5720 images in its training set and three classes. The DeepTrash training dataset shares many of the same images, as both include samples from JAMSTEC \cite{JAMSTEC}. The UoM model achieved an mAP of 82.3\% for its plastic-images class using the YOLOv2 architecture and a high of 83.3\% when using Faster R-CNN. DeepPlastic achieves an mAP of 93\% when using the YOLOv4 architecture on input images of marine plastic debris from the same dataset under similar conditions. AquaVision \cite{PANWAR2020100026} was trained on three datasets totaling $\sim$4400 images, including images of both land-based and marine-based debris, and four classes. AquaVision's highest performance for the plastic class was an average precision of 81.5\% when using the one-stage RetinaNet method. DeepPlastic performs at an mAP of 85\% when using YOLOv5. The specific training datasets used by the three models described above are either not public or lie outside the domain of DeepPlastic (i.e., their images are not underwater). Therefore, comparing performances on a common dataset is not possible in this study. \subsection{Points of Improvement} This model can efficiently monitor and quantify marine plastic. Improvements can be made in the following areas: \subsubsection{Data Augmentation Improvements} While grayscale, saturation, and vertical/horizontal flipping are proven data augmentation techniques, emerging techniques such as AutoAugment [35] could be explored to improve the model's variability in the future (a brief sketch follows). Other methods, such as shear and the cutout regularization technique, would be worth utilizing once integration tooling improves.
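As a pointer only, the sketch below shows how such a policy could be slotted into the existing pipeline, assuming a recent torchvision release ($\geq$0.11) that ships \texttt{AutoAugment}; it was not part of the training runs reported above.

\begin{verbatim}
# Exploratory sketch: AutoAugment from recent torchvision
# releases; not used in the experiments reported here.
from torchvision.transforms import AutoAugment, AutoAugmentPolicy

auto_aug = AutoAugment(policy=AutoAugmentPolicy.IMAGENET)
# aug_img = auto_aug(img)  # 'img' is a PIL image or uint8 tensor
\end{verbatim}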
\subsubsection{Dataset Improvements} The dataset used in this study is unique and one of the first of its kind. For the dataset, we see three main improvements that can be made to enhance the deep learning model: \begin{itemize} \item Adding more images from different locations \item Using more images from variable water conditions \item Acquiring a more extensive set of underwater plastic images \end{itemize} As more plastic images from different locations and oceanic conditions become available, they will increase marine plastic debris representation, providing a more comprehensive dataset for model training. We believe this will improve the mAP and overall robustness of the object detection model. \subsubsection{Camera Improvements} Readily available off-the-shelf cameras have come a long way but still suffer from certain limitations. The first and most substantial limitation is that most underwater cameras only work during the daytime. To continue the monitoring process at night, better night-vision underwater sensors need to be developed. The second limitation stems from the common H.265 video compression \cite{Lu_2019_CVPR} that underwater cameras utilize, which induces encoding artifacts. This impedes real-time detection by deteriorating image quality. Developments in end-to-end deep learning video compression techniques \cite{Lu_2019_CVPR} could lead to solutions for this limitation once ready for implementation. \section{Code and Dataset Availability} All code, the dataset, and instructions to build and utilize the DeepPlastic object detection model can be found \href{https://zenodo.org/record/5562940#.YWSe39nMI-S}{online via Zenodo -- DeepPlastic}. \section{Conclusion} \label{sec:con} This work's objective was to develop a deep learning vision model capable of consistently identifying and quantifying marine plastic in near real-time. To attain this objective, general object detection models were constructed using state-of-the-art deep learning architectures built for inference speed, to measure which performed best. \\ \indent This study concludes that a marine plastic debris detection system based on the YOLOv5-S model would be fast, accurate, and robust enough to enable real-time marine plastic debris detection. This study also shows that effective object detection models can be constructed at reasonable cost using readily available GPUs. \\ \indent Furthermore, the dataset created for and utilized by this general detection model demonstrates that massive, highly curated datasets can be used in conjunction with domain-relevant samples and web scraping to produce promising results. \textit{This computer vision system enables multiple deployment methods to detect and monitor marine plastic and allows researchers to quantify marine plastic debris without physical removal.} \section{Future Work} Improvement of the dataset would have the highest impact on performance, but collecting additional images would require human labor in fieldwork or preprocessing. A technology capable of producing synthetic images containing marine plastic debris in an ocean environment could provide an automated solution to dataset creation; this could be accomplished with a two-stage autoencoder [37]. Object detection models trained to identify jellyfish (or other objects similar to marine plastic debris), paired with our object detection model, could lead to a decrease in false positives.
Inference speed could be improved through specialized GPU technology or by tailoring models towards more powerful GPUs than those used in this study. An end-to-end video compression technique explicitly developed for near-real-time object detection could lead to a better ratio of true positives to true negatives and an improved detection range. Tailoring this object detection model for vision-equipped AUVs could result in automated identification and plastic removal devices capable of scalable deployment across large bodies of water, as shown in Figure~\ref{fig_concept}. Further optimizations could add support for stationary monitoring devices such as buoys. We hope that such a system will facilitate scalable adoption by researchers and civilians to detect and clean up marine plastic. \section{Acknowledgements} We gratefully acknowledge the help and support of Nikhil Deshmudre for his efforts and help with the deployment of this computer vision system. The authors would also like to thank Joseph Nelson, Co-Founder of Roboflow.com, for providing us with Roboflow Pro free of charge, making it easier to iterate on the deep learning models. Some of the images in this dataset were sourced from the TrashCan dataset, whose researchers hand-annotated and open-sourced over 5000 images from the JAMSTEC J-EDI dataset. The authors would like to thank the researchers from the University of Minnesota Robotics Lab and the Japan Agency for Marine-Earth Science and Technology for open-sourcing this data to contribute to the advancement of science. Finally, we would like to thank Rae Rose Lowe for her support throughout this process.
{ "timestamp": "2021-10-22T02:05:54", "yymm": "2105", "arxiv_id": "2105.01882", "language": "en", "url": "https://arxiv.org/abs/2105.01882" }
\section{Impact of channel access mechanisms} \label{sec:access_channel} This section presents the experiments carried out to assess the impact of the channel access mechanism in a GEO-satellite backhaul system. The PEP mechanisms are not activated. \subsection{Dynamic SATCOM access and uncongested network} The characteristics of this test scenario are as follows: \begin{itemize} \item Number of connected UEs: 10; \item Same application for all UEs: data transfer in download; \item All UEs start downloading at the same time; \item OpenSAND limits: 20 Mbps Forward / 1 Mbps Return; \item Flow limitation per UE (SLA) in download: 2 Mbps; \item OpenSAND access (total return channel at 1 Mbps): \begin{itemize} \item CRA = 50 kbps / RBDC = 1000 kbps \item CRA = 100 kbps / RBDC = 900 kbps \item CRA = 500 kbps / RBDC = 500 kbps \item CRA = 1000 kbps / RBDC = 0 kbps \end{itemize} \end{itemize} The metric reported here is the rate convergence time, \textit{i.e.} the time required for the UE to reach the rate of its SLA. This enables the analysis of the impact of the access mechanism on the speed of convergence of congestion control. \begin{table}[h] \caption{Rate convergence time when all users start at the beginning of the connection} \label{table:access_uncongested} \begin{tabularx}{\linewidth}{c|c} Access Method & Rate convergence time (s) \\ \hline CRA = 50 kbps / RBDC = 1000 kbps & 10 \\ \hline CRA = 100 kbps / RBDC = 900 kbps & 9 \\ \hline CRA = 500 kbps / RBDC = 500 kbps & 6 \\ \hline CRA = 1000 kbps / RBDC = 0 kbps & 7 \\ \end{tabularx} \end{table} The results of this experiment are presented in Table \ref{table:access_uncongested}. A low value of CRA increases the end-user rate convergence time. That being said, once the threshold of 50\% of the return-link capacity is reached, increasing the CRA further does not seem to bring significant gains. \subsection{Dynamic SATCOM access and congested network} The previous test has shown that a low value of CRA, for example at 10\% of the capacity, increases the rate convergence time. That being said, an actual system will likely be loaded. The test presented in this section considers 9 UEs whose connections are established before the last UE starts downloading. The characteristics of this test scenario are as follows: \begin{itemize} \item Number of connected UEs: 10; \item Same application for all UEs: data transfer in download; \item 9 UEs start downloading from the start, and one UE starts its download 10 seconds later; \item OpenSAND limits: 20 Mbps Forward / 1 Mbps Return; \item Flow limitation per UE (SLA) in download: 2 Mbps; \item OpenSAND access (total return channel at 1 Mbps): \begin{itemize} \item CRA = 50 kbps / RBDC = 1000 kbps \item CRA = 100 kbps / RBDC = 900 kbps \item CRA = 500 kbps / RBDC = 500 kbps \item CRA = 1000 kbps / RBDC = 0 kbps \end{itemize} \end{itemize} The metric reported here is the rate convergence time, \textit{i.e.} the time required for the 10$^{th}$ UE to reach the rate of its SLA; a sketch of how this metric can be extracted from a throughput trace is given below. This enables the analysis of the impact of the access mechanism on the speed of convergence of congestion control.
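The following post-processing sketch is illustrative only (it is not part of OpenBACH); the sampling format and the tolerance on the convergence criterion are assumptions.

\begin{verbatim}
# Hedged sketch: estimate the rate convergence time from a
# sampled throughput trace. The study defines convergence as
# reaching the SLA rate; the 'fraction' tolerance below is an
# assumption to absorb measurement noise.
def convergence_time(times, rates_bps, sla_bps=2e6, fraction=0.95):
    target = fraction * sla_bps
    for t, r in zip(times, rates_bps):
        if r >= target:
            return t          # first instant the target is reached
    return None               # never converged within the trace
\end{verbatim}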
\begin{table}[h] \caption{Rate convergence time for the 10$^{th}$ UE when all other UEs started at the beginning of the connection} \label{table:access_congested} \begin{tabularx}{\linewidth}{c|c} Access Method & Rate convergence time (s) \\ \hline CRA = 50 kbps / RBDC = 1000 kbps & 4 \\ \hline CRA = 100 kbps / RBDC = 900 kbps & 11 \\ \hline CRA = 500 kbps / RBDC = 500 kbps & 10 \\ \hline CRA = 1000 kbps / RBDC = 0 kbps & 10 \\ \end{tabularx} \end{table} The results of this experiment are presented in Table \ref{table:access_congested}. The case `CRA = 50 kbps / RBDC = 1000 kbps' shows a very short convergence time. This may be due to the variable link load; a larger number of tests, which could not be carried out due to lack of time, might have absorbed this phenomenon. Moreover, the comparison of the other cases complements the results of the previous section. When the CRA is set to a value greater than 100 kbps, the convergence performance is the same. Once the network is loaded, it is not necessary to dynamically adapt the use of the resource on the return path. \subsection{Dynamic SATCOM access and mixed upload and download traffic} To complete the results observed in the previous sections, tests with mixed download and upload traffic were carried out, and a subset of the results is presented in this section. The characteristics of this test scenario are as follows: \begin{itemize} \item Number of connected UEs: 10; \item Application: data transfer in download for 8 UEs and in upload for 2 UEs; \begin{itemize} \item Long flows in download and upload last 30 seconds; \item Short flows are 1 MB in download and 300 kB in upload; \item 7 UEs start long flows (download) and 1 UE starts long flows (upload) at the beginning of the experiment; \item Short flows (1 UE in download and 1 UE in upload) start 10 seconds later; \end{itemize} \item OpenSAND limits: 20 Mbps Forward / 1 Mbps Return; \item Flow limitation per UE (SLA): 2 Mbps (download) and 300 kbps (upload); \item OpenSAND access (total return channel at 1 Mbps): \begin{itemize} \item CRA = 50 kbps / RBDC = 1000 kbps \item CRA = 100 kbps / RBDC = 900 kbps \item CRA = 500 kbps / RBDC = 500 kbps \item CRA = 1000 kbps / RBDC = 0 kbps \end{itemize} \end{itemize} \begin{table}[h] \caption{Short-flow download and upload times in a congested environment} \label{table:up-dw-acc} \begin{tabularx}{\linewidth}{c|c|c} Access Method & 1 MB download time (s) & 300 kB upload time (s) \\ \hline CRA = 50 kbps & 10.3 & 7.5 \\ RBDC = 1000 kbps & & \\ \hline CRA = 100 kbps & 10.7 & 7.2 \\ RBDC = 900 kbps & & \\ \hline CRA = 500 kbps & 9.9 & 7.1 \\ RBDC = 500 kbps & & \\ \hline CRA = 1000 kbps & 11.6 & 7.5 \\ RBDC = 0 kbps & & \\ \end{tabularx} \end{table} The results of this experiment are presented in Table~\ref{table:up-dw-acc}. Once the network is loaded, the different access mechanisms do not bring substantial gains for the file sizes considered. \subsection{Discussion} The main conclusions of this activity are: \begin{itemize} \item Once the network is loaded, \begin{itemize} \item The choice of access method has little impact; \item CRA / RBDC or SCPC combinations (i.e. all capacity in CRA) show similar performance. \end{itemize} \item If the network is not loaded, \begin{itemize} \item A CRA value that is too low (lower than 10\% of the maximum capacity) can impact performance; \item A CRA / RBDC approach can reduce costs.
\end{itemize} \end{itemize} \section{Acknowledgments} \label{sec:ackno} This study was funded by the CNES SMILE project. The authors would like to thank all those who participated in making this study possible. \section{Discussion} \label{sec:discussion} The main contributions of these studies are as follows: \begin{itemize} \item Different proofs of concept for the GEO backhaul service have been implemented; \item If the system is congested, the protocol optimizations offered by a PEP (to improve QoE) or the adaptation of access mechanisms (to reduce costs) do not bring significant gains; \item If the system is not congested: \begin{itemize} \item Protocol optimizations bring significant gains for file transfer; \item Adaptation of access mechanisms, \textit{i.e.} the reduction of the constantly allocated throughput, reduces the cost of access while having a negligible impact on services. \end{itemize} \end{itemize} The tests carried out do not take into account the complexity of the operator's core network. The introduction of WAN accelerator equipment (i.e., PEPs) ensures performance in the segment for which the operator is responsible and neglects the impact of network conditions between the operator's network and the data servers. These devices isolate error-prone segments, perform local retransmissions, and tune the protocols to the network where they are deployed. They also implement caching mechanisms. Although only negligible gains were measured when introducing this equipment, this observation does not allow us to conclude that it is useless, given that many of its functions were not considered and evaluated. The results nevertheless show the importance of studying the impact of the different protocol layers in SATCOM systems offering a backhauling service for mobile networks. This involves many multi-layered technical interactions, an understanding of which is necessary to optimally size and implement such systems. \section{Introduction} \label{sec:introduction} Mobile Network Operators (MNOs) regularly need to optimize their communication infrastructure in order to better manage congestion, guarantee the quality of the service, and maximize income. The end-user demand may not be located close to the already-deployed core network of an MNO, and serving it may not be economically viable. The deployment of LTE cells and their connection to the core network through a satellite system has proved to be an efficient solution to this issue. The satellite backhauling service represented 36,750 sites served by satellite in 2018, twice the number served in 2012. As this service is growing strongly and is a source of significant revenue for operators, it is important to present the issues related to this use case and the considerations necessary to define future satellite systems. \begin{figure}[h] \centering \includegraphics[width =\linewidth]{figures/archi_backhaul_lte.png} \caption{Backhaul services through satellite systems} \label{fig:archi-backhaul} \end{figure} Figure~\ref{fig:archi-backhaul} shows the architecture of an LTE backhauling service through satellite. End users (User Equipment, UE) access the Radio Access Network (RAN) of an MNO using LTE standards. The satellite system connects the RAN to the core network of the MNO. Backhauling services have relatively strong end-to-end Quality of Service (QoS) requirements.
In general, variable data rates should be guaranteed by the QoS mechanisms; voice (from 64~kbps) and video (from 256~kbps) should also be guaranteed. There may also be strong requirements on round-trip time, jitter, and packet loss rate. \begin{itemize} \item[$\blacktriangleright$] GEO satellite backhauling accesses face a trade-off between the quality of the network service, the quality of the user experience, and the price of the access. \end{itemize} Satellite systems may allocate Constant Rate Assignment (CRA) for backhauling services (as opposed to Rate-Based Dynamic Capacity, RBDC). With CRA, a portion of the satellite resource is dedicated to a user even if it is not actually used. Reducing the CRA would let the satellite resource management mechanisms allocate the unused resource to the systems that actually need it. However, decreasing the CRA and increasing the RBDC may reduce the Quality of Experience (QoE) due to the request-allocation loop inherent to RBDC mechanisms. This study measures the relevance of using dynamic resource allocation mechanisms for backhaul services through satellite systems and their impact on the QoE. The satellite system is emulated with OpenSAND~\cite{opensand,opensand-site}, the LTE system with Amarisoft~\cite{amarisoft}, and the experiments are orchestrated by OpenBACH~\cite{openbach}. We compare the relevance of applying PEP~\cite{RFC3135} mechanisms and dynamic resource allocations when the system is loaded by measuring the QoE for Web browsing, data transfer, and VoIP applications. The main conclusions are the following. \begin{itemize} \item[$\blacktriangleright$] When the system is congested, PEP and layer-2 access mechanisms do not provide significant improvements. \item[$\blacktriangleright$] When the system is not congested, data transfer can be greatly improved through TCP optimizations. \item[$\blacktriangleright$] Tuning the Constant Rate Assignment can help reduce the cost of the resource and provide QoE improvements when the network is not loaded. \end{itemize} \section{Platform validation} \label{sec:platform} This section provides details on how the exploited platform has been validated. \subsection{Validation strategy} Experience from previous activities shows that the platform needs to be set up carefully, especially when it integrates so many elements provided by different entities. The following step-by-step procedure was used: \begin{itemize} \item Prepare the test architecture with the Amarisoft platform and two OpenBACH agents (to emulate the clients); \item Launch OpenBACH scenarios allowing an end-to-end QoS analysis and QoE measurements to validate the Amarisoft component; \item Add the PEPs in ``deactivated'' mode (TCP acceleration disabled) at the ends of the system; \item Launch OpenBACH scenarios allowing an end-to-end QoS analysis to validate Amarisoft along with the ``deactivated'' PEPs; \item Add OpenSAND between the eNodeB and the Amarisoft core; \item Launch OpenBACH scenarios allowing an end-to-end QoS analysis to validate Amarisoft along with OpenSAND and the ``deactivated'' PEPs; \item Activate the PEPs; \item Launch OpenBACH scenarios allowing an end-to-end QoS analysis to validate Amarisoft along with OpenSAND and the activated PEPs.
\end{itemize} \subsection{Validation results} \begin{figure}[h] \centering \includegraphics[width =\linewidth]{figures/phase1_lteospep_qos_rate_fwd.png} \caption{Forward link throughput (b/s)} \label{fig:forward_qos_data-rate} \end{figure} \begin{figure}[h] \centering \includegraphics[width =\linewidth]{figures/phase1_lteospep_qos_rate_rtn.png} \caption{Return link throughput (b/s)} \label{fig:return_qos_data-rate} \end{figure} \begin{figure}[h] \centering \includegraphics[width =\linewidth]{figures/phase1_lteospep_qos_rtt.png} \caption{Round Trip Time} \label{fig:rtt_qos} \end{figure} The results of the QoS measurements are presented in Figures~\ref{fig:forward_qos_data-rate}, \ref{fig:return_qos_data-rate}, and~\ref{fig:rtt_qos}. The traffic was generated by iperf3 (TCP), nuttcp (TCP/UDP), fping (ICMP ping), and hping (TCP/IP ping). The forward channel rate is limited to 20~Mbps and the return channel to 10~Mbps. The variations in TCP throughput already illustrate the impact of congestion losses on TCP throughput and its ability to use all of the available capacity. Moreover, the use of OpenBACH makes it possible to obtain these curves with less effort and greater control, thus ensuring the consistency of the results. The delay measured by hping is $0$ because the traffic is intercepted by the PEP, which gives an illusion of low latency. \section{Platform details} \label{sec:platform_details} This section provides details on the exploited platform. \subsection{On the need for a controlled emulation} The exploitation of an emulated platform lets us consider mechanisms and algorithms~\cite{when-emulation} that are close to those implemented in deployed systems. When it comes to QoE measurements, simulations may not reflect actual protocol performance~\cite{vtc-trustable}. However, using different proprietary equipment may produce outputs that are difficult to understand and analyse. To address this issue, we exploit as much open-source software as possible, towards reproducible and controlled tests, while considering systems as close as possible to real ones. \subsection{End-to-end emulated platform} \begin{figure}[h] \centering \includegraphics[width =\linewidth]{figures/archi_platform.png} \caption{Platform architecture} \label{fig:archi-platform} \end{figure} Figure~\ref{fig:archi-platform} presents the different pieces of equipment that compose the platform: \begin{itemize} \item Proprietary Performance Enhancing Proxies (PEP); \item The cellular network is emulated with Amarisoft~\cite{amarisoft}; \item The satellite system is emulated with OpenSAND~\cite{opensand,opensand-site}; \item The tests are orchestrated by OpenBACH~\cite{openbach}. \end{itemize} The PEPs are not deployed within the LTE emulation network since our equipment could not deal with packets encapsulated within GTP-U tunnels. A conceptual sketch of the connection-splitting principle behind such PEPs is given below.
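The sketch below is purely conceptual and far simpler than the proprietary PEPs used in this platform; it only illustrates the split-TCP principle (terminating the client connection locally and opening a second, separately tuned leg towards the server) on which such devices build their local retransmission, tuning, and caching functions.

\begin{verbatim}
# Conceptual split-TCP relay (illustration only; real PEPs add
# local retransmissions, protocol tuning and caching on top).
import socket, threading

def pipe(src, dst):
    while True:
        data = src.recv(65536)
        if not data:
            break
        dst.sendall(data)

def relay(listen_port, server_addr):
    ls = socket.socket()
    ls.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    ls.bind(("", listen_port))
    ls.listen(1)
    client, _ = ls.accept()              # first TCP leg (client side)
    upstream = socket.create_connection(server_addr)  # second leg
    threading.Thread(target=pipe, args=(client, upstream)).start()
    pipe(upstream, client)
\end{verbatim}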
\section{Impact of transport protocol mechanisms} \label{sec:transport} This section presents the experiments carried out to assess the impact of transport protocol mechanisms in a GEO-satellite backhaul system. \subsection{Experiment set up} This section presents the results for web access and short file transfers. The characteristics of the scenarios presented in this section are the following: \begin{itemize} \item Number of connected UEs: 10; \item Test duration: 30 seconds; \item Data transfer is started for 9 UEs to load the link: 7 in download and 2 in upload; \item The 10$^{th}$ UE consumes a given type of service (VoIP, video, Web, or file transfer); \item The 9 UEs that load the link start their activity at the same time; \item The 10$^{th}$ UE that consumes the service starts a few seconds later, once the link is loaded; \item OpenSAND limits: 20 Mbps Forward / 1 Mbps Return; \item OpenSAND access: CRA = 100 kbps / RBDC = 900 kbps or CRA = 500 kbps / RBDC = 500 kbps; \item Flow limitation per UE (SLA): 2 Mbps in download and 100 kbps in upload. \end{itemize} The results presented in this section do not take into account the diversity of web pages and protocols used. For example, a page using HTTP/1 with multiple objects has very different characteristics, and probably different performance, than a page using HTTP/2. The tests using application traffic representative of voice over IP or video transmission did not show different results, whether the WAN accelerators were activated or not, and whatever the access configuration (CRA = 100 kbps / RBDC = 900 kbps, or CRA = 500 kbps / RBDC = 500 kbps). \subsection{Focus on web transfer} To limit the complexity of the analysis, it was decided to test a single web page with the following characteristics: 6.5 MB page size, served over HTTP. On a 2 Mbps link, the optimal transfer time would be 26 seconds. \begin{table}[h] \centering \caption{Simple web page downloading time} \label{table:pep_web} \begin{tabular}{c|c|c|c|c} Access Method & \multicolumn{2}{c|}{CRA=100 kbps} & \multicolumn{2}{c}{CRA=500 kbps} \\ & \multicolumn{2}{c|}{RBDC=900 kbps} & \multicolumn{2}{c}{RBDC=500 kbps} \\ \cline{2-5} & No PEP & PEP & No PEP & PEP \\ \hline Average (s) & 33.9 & 36.1 & 34.5 & 34.8 \\ Max (s) & 35.9 & 40.8 & 41.1 & 38.0 \\ Min (s) & 31.0 & 32.2 & 31.6 & 32.4 \\ \end{tabular} \end{table} A statistical analysis based on 10 experiments is presented in Table~\ref{table:pep_web}. The transfer times of the page vary between 31 and 41 seconds, all configurations combined. The introduction of a PEP does not bring a significant gain for web browsing in a congested context. It is worth pointing out that these experiments do not exploit the PEP where it should provide the most benefit, \textit{i.e.} within the SATCOM system. This was not possible due to the lack of GTP-U-capable PEPs. \subsection{Focus on file transfer} The client performs two downloads during the experiment. During the first one (fetch 1), the network is loaded with cross-traffic. The second download (fetch 2) occurs when no other UEs are using the satellite resource. Each configuration is tested five times.
\begin{table}[h] \centering \caption{File download time with CRA=100 kbps, RBDC=900 kbps} \label{table:pep_file_100} \begin{tabular}{c|c|c|c|c} & \multicolumn{2}{c|}{No PEP} & \multicolumn{2}{c}{PEP} \\ \cline{2-5} & Fetch 1 & Fetch 2 & Fetch 1 & Fetch 2 \\ Average download time (s) & 11.67 & 9.44 & 11.34 & 8.16 \\ \end{tabular} \end{table} \begin{table}[h] \centering \caption{File download time with CRA=500 kbps, RBDC=500 kbps} \label{table:pep_file_500} \begin{tabular}{c|c|c|c|c} & \multicolumn{2}{c|}{No PEP} & \multicolumn{2}{c}{PEP} \\ \cline{2-5} & Fetch 1 & Fetch 2 & Fetch 1 & Fetch 2 \\ Average download time (s) & 11.91 & 9.32 & 11.49 & 6.88 \\ \end{tabular} \end{table} Tables~\ref{table:pep_file_100} and~\ref{table:pep_file_500} present the results of this experiment. In general, all fetch 1 results show similar values: whatever the channel access characteristics and whatever the transport-layer optimizations, when the network is loaded the results are the same. However, when the network is not loaded (fetch 2), including a PEP yields 3\% to 26\% performance improvements (these figures can be reproduced from the tables; see the sketch at the end of this section). The gains are larger when the CRA is high. \subsection{Discussion} Regarding file transfer, the gains brought by WAN acceleration are significant in the absence of congestion and amplify the conclusions on the adaptation of the access method: the higher the CRA, the higher the gain brought by the PEP. Regarding web browsing, for a simple web page under congestion, the WAN accelerator does not bring significant gains.
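As a sanity check, the relative gains quoted above can be recomputed directly from the average download times in Tables~\ref{table:pep_file_100} and~\ref{table:pep_file_500}:

\begin{verbatim}
# Relative PEP gain, computed from the tabulated averages.
def gain(no_pep_s, pep_s):
    return 100.0 * (no_pep_s - pep_s) / no_pep_s

print(gain(11.67, 11.34))  # fetch 1, CRA=100 kbps: ~2.8 %
print(gain(9.44, 8.16))    # fetch 2, CRA=100 kbps: ~13.6 %
print(gain(9.32, 6.88))    # fetch 2, CRA=500 kbps: ~26.2 %
\end{verbatim}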
{ "timestamp": "2021-05-06T02:11:50", "yymm": "2105", "arxiv_id": "2105.01901", "language": "en", "url": "https://arxiv.org/abs/2105.01901" }
\section{Introduction} \label{sec-intro} Turbulent multiphase flows are ubiquitous in nature and technology. Examples are raindrops \citep{rain, zaleski}, ocean waves \cite{spray}, fuel sprays \cite{fuel}, and the transmission of virus-laden droplets during respiratory events \cite{covid,covid-steven,covid-cs}, just to name a few. In order to gain deeper insights into their complex and rich behavior, efficient, high-fidelity computations are crucial. For turbulent multiphase flows, direct numerical simulations (DNSs) present far greater challenges than for single-phase flows \cite{rev-tmf}. The reasons are the much finer length scales and faster time scales induced by the presence of the second phase, especially when the deformable interfaces between the fluids break up or coalesce. To date, many numerical methods have been developed, such as phase-field (also known as diffuse-interface) methods \cite{soldati1, soldati2, breakup}, volume-of-fluid methods \cite{pop16, luka19jfm}, level-set methods \cite{ls-tmf}, front-tracking methods \cite{ft-tmf}, Lattice-Boltzmann methods \cite{LB19JFM}, and immersed boundary methods \cite{roberto-jcp, cs20}. Among them, the phase-field method is an approach in which a scalar (the volume fraction of one fluid) is tracked by the Cahn-Hilliard equation and the sharp fluid-fluid interface is replaced by a narrow mixed layer \cite{jacqmin99}. In the past decade, the application of the phase-field method has become increasingly appealing because of its versatility. For example, the method has been applied to the simulation of turbulent flows \cite{soldati1, soldati2, soldati4, breakup}, flows with moving contact lines \cite{liu15,sui14,ding07jfm,zy}, fluid-structure interaction \cite{chen1,liu17,chen,liu20}, melting flows \cite{melt1,melt2,melt3}, ternary flows \cite{liu18}, and even brittle fracture simulation \cite{brittle}. In the phase-field method, the two immiscible phases are represented by their volume fractions $C$ and $1-C$, respectively. The spatial distribution of $C$ is determined by the Cahn-Hilliard equation \cite{pfm96,jacqmin99,ding07jcp}: \begin{equation} \frac {\partial C} {\partial t} + \nabla \cdot ({\bf u} C) = M \nabla^2 \left[a_1\nabla^2 C+a_2\psi'(C)\right]. \label{eq-ch-d} \end{equation} The quantity in square brackets is the chemical potential, defined as the variation of the free energy with respect to $C$. It includes an excess free energy term (the first term) and a bulk energy term (the second term), with $\psi=\frac{1}{4}\,C^2(C-1)^2$ being the simplest non-singular form that has two equal energy minima, namely at $C=0$ and $C=1$ \cite{pfm96,jacqmin99,ding07jcp}. Physically, $\psi$ represents the bulk energy density due to the inhomogeneous distribution of the volume fraction in the interfacial region. We will give more technical details in Section \ref{sec-ch}.
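For later reference, note that the derivative of this double-well potential evaluates to the cubic polynomial that appears explicitly in the dimensionless Cahn-Hilliard equation of Section \ref{sec-ch}:
\begin{equation*}
\psi'(C) = \frac{d}{dC}\left[\frac{1}{4}C^2(C-1)^2\right]
= \frac{1}{2}\,C(C-1)(2C-1)
= C^3 - \frac{3}{2}C^2 + \frac{1}{2}C.
\end{equation*}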
In Refs.~\cite{jcp12} and \cite{jcp14}, the FFT-based approach is extended to multiphase flows by employing a split method, meaning that the variable-coefficient pressure-gradient term is split into an implicit constant term and an explicit variable term. As a result, the Poisson equation can be solved up to $40$ times faster than without the split method \cite{jcp14}. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{fig0} \hspace{0.05\linewidth} \includegraphics[width=0.45\linewidth]{fig0-2} \caption{\label{fig-dis} (a) $25$ points used to discretize the biharmonic terms in the Cahn-Hilliard equation (\ref{eq-ch}) in the scheme of Eq.~(\ref{eq-disbi}). Symbols in different colors represent points in different $z$ planes. In the new discretization scheme of Eq.~(\ref{eq-new}), the spherical points are replaced by the cubic ones. (b) Two-dimensional case of the discretization of the biharmonic terms. The circular points are replaced by the square ones.} \end{figure} However, with the application of FFTs in multiphase flows, the computational cost of the biharmonic term becomes the new bottleneck for the phase-field method. The reason for this is that the common solution technique for the biharmonic term in the phase-field method involves an implicit solution that requires $25$ grid points for a second-order spatial discretization, see Fig.~\ref{fig-dis}(a) (details in Section \ref{sec-bi}). Therefore, in this study, we focus in particular on an optimal discretization of the biharmonic term. We propose a novel discretization scheme for the biharmonic term in the phase-field method, coupled with the approximate-factorization method, which is an efficient way to implicitly solve hyperbolic systems \cite{axayaz} and is easy to parallelize. We implement the phase-field method \cite{ding07jcp} with this novel scheme into our open-source DNS package AFiD (\href{https://github.com/PhysicsofFluids/AFiD}{www.afid.eu}) \cite{jcp96,cf15}, which is a second-order finite difference solver that has been well-validated in many studies of turbulent flows \cite{zhu-tc,shan,qi3}. AFiD is highly parallelized with a pencil distributed strategy \cite{cf15,gpu}, and includes an FFT-based Poisson solver \cite{jcp96}. In addition, we apply a split method \cite{jcp12,jcp14} to the pressure solver to deal with large density differences between the two phases. To validate the present approach, we simulate cases of drop deformation in a shear flow and of a rising buoyant bubble. Our results are compared to previous studies and are further assessed using a grid convergence study. Finally, we simulate the breakup of one big drop as well as the coalescence of $O(10^3)$ drops in turbulent Rayleigh-B\'enard convection, and show the good performance of the present approach for large-scale computation. The paper is organized as follows. The governing equations are introduced in Section \ref{sec-ge}. Then we address the numerical methodology in Section \ref{sec-num}. In Section \ref{sec-case}, we simulate several test cases to validate our approach and show its ability to deal with turbulent multiphase flows in large-scale computation. We conclude our study in Section \ref{sec-con}. \section{Governing Equations} \label{sec-ge} \subsection{Cahn-Hilliard (CH) equation} \label{sec-ch} Turbulent flows with two incompressible immiscible fluids are investigated here. We use the phase-field method \cite{jacqmin00,ding07jcp} to capture the interface between the two fluids.
Here, the sharp interface is modeled by a diffuse one with finite thickness, represented by contours of the volume fraction $C$ of fluid $1$; the volume fraction of fluid $2$ is thus $1-C$. The evolution of the volume fraction $C$ is governed by the Cahn-Hilliard equation, \begin{equation} \begin{array}{ll} \displaystyle \frac {\partial C} {\partial t} + \nabla \cdot ({\bf u} C)&\displaystyle=\frac{1}{\Pe}\left[-\Cn ^{2} \nabla^4 C+\nabla^2 \left(C^{3} - 1.5 C^{2}+ 0.5 C\right)\right], \end{array} \label{eq-ch} \end{equation} where $\bf u$ is the flow velocity. We choose the P\'eclet number (the ratio of advection to diffusion) and the Cahn number (a dimensionless measure of the thickness of the diffuse interface) the same as in Ref.~\cite{liu15}, i.e. $\Pe=0.9\Cn$ and $\Cn=0.75h/L$, with $h$ and $L$ the uniform mesh size and the characteristic length, respectively. To enforce mass conservation, the correction method proposed in Ref.~\cite{shu} is used. This correction method resembles that of Ref.~\cite{soldati3} and exhibits good performance (see Section \ref{sec-rb}). \subsection{Navier-Stokes (NS) equations} \label{sec-ns} The fluid motion is governed by the momentum and continuity equations, \begin{equation} \rho\left(\frac {\partial {\bf u}} {\partial t} + {\bf u} \cdot \nabla {\bf u}\right)= - \nabla P + \frac{1}{\Re} \nabla \cdot \mu (\nabla {\bf u}+ \nabla {\bf u}^{T}) + \frac{\bf F_{st}}{\We}+ {\bf G}, \label{eq-ns} \end{equation} \begin{equation} \nabla \cdot {\bf u}= 0, \label{eq-con} \end{equation} which have been made dimensionless using the properties of fluid $1$. Here, $\bf u$ is the velocity and $P$ the pressure. $\rho$ and $\mu$ are the density and the dynamic viscosity, respectively, which are both functions of $C$ defined as, \begin{equation} \rho =C + \lambda_\rho(1-C), \label{eq-rho} \end{equation} \begin{equation} \mu =C + \lambda_\mu(1-C), \label{eq-mu} \end{equation} where $\lambda_\rho=\rho_2/\rho_1$ and $\lambda_\mu=\mu_2/\mu_1$ are the ratios of the densities and viscosities of the two phases (denoted by the subscripts), respectively. The surface tension force ${\bf F}_{st}$ is computed as in \cite{ding07jcp}, \begin{equation} {\bf F}_{st} =6\sqrt{2}\phi \nabla C / \Cn. \label{eq-fst} \end{equation} In Eq.~(\ref{eq-ns}), the gravity force is ${\bf G}=-\rho/\Fr \, {\bf j}$ with $\bf j$ being the vertical direction. The dimensionless numbers controlling the problem are thus the Reynolds number $\Re=\rho_1UL/\mu_1$, the Weber number $\We=\rho_1 U^2 L/\sigma$, and the Froude number $\Fr=U^2/(gL)$, where $\sigma$ is the surface tension coefficient, $g$ the gravitational acceleration, and $U$ the characteristic velocity. \section{Numerical method} \label{sec-num} We use staggered meshes and solve the CH equation on a uniform mesh with size $h$ in all three directions and the NS equations on a stretched mesh: the procedure for the coupling of the two meshes (uniform and stretched) is based on that reported in \cite{rodolfo} and is described in Section \ref{sec-mesh}. A low-storage third-order Runge-Kutta method \cite{rk} is used to temporally advance all the equations. The biharmonic term in Eq.~(\ref{eq-ch}), the viscosity term in Eq.~(\ref{eq-ns}), and the diffusion term in Eq.~(\ref{eq-t}) are implicitly solved with the Crank-Nicolson scheme, while the other terms are solved explicitly.
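For illustration, the sketch below implements one common low-storage third-order Runge-Kutta scheme of Wray type for the explicitly treated terms, with coefficients $\gamma=(8/15,\,5/12,\,3/4)$ and $\zeta=(0,\,-17/60,\,-5/12)$; the specific coefficients of the scheme in Ref.~\cite{rk} may differ, and the implicit Crank-Nicolson treatment of the stiff terms is omitted for brevity. A step-halving test on $u'=-u$ confirms third-order convergence (the error shrinks by a factor of about eight per halving):
\begin{verbatim}
import numpy as np

gamma = (8/15, 5/12, 3/4)
zeta  = (0.0, -17/60, -5/12)

def rk3_step(u, f, dt):
    # one full low-storage RK3 step: three substeps, keeping only the previous RHS
    f_old = 0.0
    for g, z in zip(gamma, zeta):
        f_new = f(u)
        u = u + dt*(g*f_new + z*f_old)
        f_old = f_new
    return u

# order check on u' = -u, u(0) = 1, integrated to t = 1
for dt in (0.1, 0.05, 0.025):
    u = 1.0
    for _ in range(round(1.0/dt)):
        u = rk3_step(u, lambda x: -x, dt)
    print(dt, abs(u - np.exp(-1.0)))   # errors shrink ~8x per halving of dt
\end{verbatim}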
In spatial discretization, central second-order accurate finite-difference schemes are used for all terms (details can be found in \cite{jcp96,ding07jcp}), except for two: one is the advection term of the volume fraction $C$ in the CH equation (\ref{eq-ch}), which is solved by a fifth-order WENO scheme \cite{ding07jcp}, and the other is the biharmonic term, which is solved by a novel scheme proposed in Section \ref{sec-bi}. \subsection{Discretization of biharmonic term in CH equation} \label{sec-bi} To accurately advance the CH equation (\ref{eq-ch}) with a large time step, we need to implicitly solve the biharmonic term $\Cn^2\nabla^4 C$ on the right-hand side of Eq.~(\ref{eq-ch}). At the same time, its discretization scheme should retain the same order of error as the term $\nabla^2(C^3-1.5C^2+0.5C)$, which also appears on the right-hand side of Eq.~(\ref{eq-ch}) and is discretized by central second-order finite-difference schemes of $O(h^2/L^2)$. Typically, the biharmonic term is discretized according to Fig.~\ref{fig-dis}(b) (we restrict the expression to the 2D case for ease of representation), \begin{equation} \begin{array}{rl} (\Cn^2\nabla^4 C)_{i,j}=&\displaystyle \left(\frac{0.75h}{L}\right)^2\left(\frac{\partial^4 C}{\partial x^4}+\frac{\partial^4 C}{\partial y^4}+\frac{2\partial^4 C}{\partial x^2 \partial y^2}\right)_{i,j}\\ \\ =&\displaystyle \left(\frac{0.75h}{L}\right)^2\left(\frac{L}{h}\right)^4(C_{i-2,j} -8 C_{i-1,j} +20 C_{i,j} -8 C_{i+1,j} + C_{i+2,j} \\ & +C_{i,j+2}-8 C_{i,j+1}-8C_{i,j-1}+C_{i,j-2} \\ & +2C_{i-1,j+1} +2 C_{i+1,j+1}+2C_{i-1,j-1}+2C_{i+1,j-1})\\ \\ & +O(h^4/L^4). \end{array} \label{eq-disbi} \end{equation} When we implicitly solve this expression, the presence of mixed partial derivatives poses challenges for computational cost and code parallelisation. To circumvent the use of mixed partial derivatives when solving Eq.~(\ref{eq-disbi}), we propose a new discretization scheme, shown in Eqs.~(\ref{eq-new}), (\ref{eq-a2d}) and (\ref{eq-a3d}), which splits the discretization into two one-dimensional parts, $A_x C$ involving $C_{i+m,j}$ and $A_y C$ involving $C_{i,j+n}$, \begin{equation} \Cn^2\nabla^4 C=(A_x+A_y)C, \label{eq-divbi} \end{equation} which means that only the points on the axes remain (Fig.~\ref{fig-dis}b). Then, we can use the approximate-factorization method (described at the end of this section) to efficiently solve $\Cn^2\nabla^4 C$ implicitly. Our main idea is to replace $C_{i\pm1,j\pm1}$ in Eq.~(\ref{eq-disbi}) with $C_{i+m,j}$ and $C_{i,j+n}$ (Fig.~\ref{fig-dis}b), where $m$ and $n=-2,-1,0,1,2$. The replacement is justified by the Taylor series expansions, \begin{equation} \left\{\begin{array}{lr} C_{i+m,j}= C_{i,j} \quad + m (h/L) C'_x \quad+ m^2 (h/L)^2 C''_x/2 & +\,m^3 (h/L)^3 C'''_x/6\quad+O(h^4/L^4),\\ & m=-2,-1,0,1,2;\\ \\ C_{i,j+n}=\ C_{i,j}\quad + n (h/L) C'_y\quad +n^2 (h/L)^2 C''_y/2 & +\,n^3 (h/L)^3 C'''_y/6\quad+O(h^4/L^4),\\ & n=-2,-1,0,1,2;\\ \\ C_{i+m,j+n}=C_{i,j}\ +\sqrt{2}m (h/L) C'_s\ +m^2(h/L)^2 C''_s & +\sqrt{2}m^3 (h/L)^3 C'''_s/3\ +O(h^4/L^4),\\ & (m,n)=(-1,1), (1,-1);\\ \\ C_{i+m,j+n}=C_{i,j}\ +\sqrt{2}m (h/L) C'_\tau\ +m^2(h/L)^2 C''_\tau & +\sqrt{2}m^3 (h/L)^3 C'''_\tau/3\ +O(h^4/L^4),\\ & (m,n)=(1,1), (-1,-1);\\ \end{array}\right. \label{eq-taylor} \end{equation} where we define $C'_e=(\partial C/\partial e)_{i,j}$ with $e=x$, $y$, $s$ and $\tau$, and similarly for $C''_e$ and $C'''_e$.
The directions $x$ and $y$ are the perpendicular axis directions in Cartesian coordinates, and the directions $s$ and $\tau$ are obtained by rotating $x$ and $y$ by $45^\circ$. Since the Laplacian operator is rotationally invariant, we have \begin{equation} \nabla^2 C = C''_s+C''_\tau=C''_x+C''_y, \label{eq-c2} \end{equation} so we have the relations, \begin{equation} \begin{array}{rl} &C_{i+1,j+1}+C_{i+1,j-1}+C_{i-1,j+1}+C_{i-1,j-1}\\ =&4C_{i,j}+2(h/L)^2(C''_s+C''_\tau)+O(h^4/L^4)\\ =&4C_{i,j}+2(h/L)^2(C''_x+C''_y)+O(h^4/L^4)\\ =&2C_{i,j}+0.5\{[C_{i,j}+2^2 (h/L)^2 C''_x/2+O(h^4/L^4)]+[C_{i,j}+(-2)^2 (h/L)^2 C''_x/2\\&+O(h^4/L^4)]\}\\&+0.5\{[C_{i,j}+2^2 (h/L)^2 C''_y/2+O(h^4/L^4)]+[C_{i,j}+(-2)^2 (h/L)^2 C''_y/2\\&+O(h^4/L^4)]\}\\ =&0.5(2C_{i,j}+C_{i+2,j}+C_{i-2,j})+0.5(2C_{i,j}+C_{i,j+2}+C_{i,j-2})+O(h^4/L^4), \end{array} \label{eq-trans1} \end{equation} where the first and third-order derivatives are eliminated since the points are symmetric about $(i,j)$. Thus, $C_{i\pm1,j\pm1}$ can be replaced by $C_{i+m,j}$ and $C_{i,j+n}$ as shown in Fig.~\ref{fig-dis}(b). Substituting Eq.~(\ref{eq-trans1}) into Eq.~(\ref{eq-disbi}), we get the new discretization scheme, \begin{equation} \begin{array}{rl} (\Cn^2\nabla^4 C)_{i,j}=&\displaystyle \left(\frac{0.75h}{L}\right)^2\left(\frac{L}{h}\right)^4(C_{i-2,j} -8 C_{i-1,j} +20 C_{i,j} -8 C_{i+1,j} + C_{i+2,j} \\ & +C_{i,j+2}-8 C_{i,j+1} -8C_{i,j-1}+C_{i,j-2}\\ & +2C_{i,j}+C_{i+2,j}+C_{i-2,j}+2C_{i,j}+C_{i,j+2}+C_{i,j-2})\\ \\ & +O(h^2/L^2)\\ \\ =&\displaystyle \left(\frac{0.75h}{L}\right)^2\left(\frac{L}{h}\right)^4(2C_{i-2,j} -8 C_{i-1,j} +12 C_{i,j} -8 C_{i+1,j} + 2C_{i+2,j} \\ & +2C_{i,j+2}-8 C_{i,j+1}+12 C_{i,j} -8C_{i,j-1}+2C_{i,j-2}) \\ \\ & +O(h^2/L^2), \end{array} \label{eq-new} \end{equation} where the error $O(h^2/L^2)$ is of the same order as that of the term $\nabla^2(C^3-1.5C^2+0.5C)$ on the right-hand side of Eq.~(\ref{eq-ch}). Comparing Eq.~(\ref{eq-new}) and Eq.~(\ref{eq-divbi}), we obtain the following pentadiagonal matrix, \begin{equation} A_x=A_y=\displaystyle \left(\frac{0.75L}{h}\right)^2 \left[\begin{array}{cccccccc} \cdots &&&\cdots&&&&\cdots\\ 2&-8&12&-8&2 &&&0\\ 0&\ddots&\ddots&\ddots&\ddots&\ddots &&\vdots\\ \vdots&&\ddots&\ddots&\ddots&\ddots&\ddots &0\\ 0&& &2&-8&12&-8&2 \\ \cdots &&&&\cdots&&&\cdots\\ \end{array}\right], \label{eq-a2d} \end{equation} for 2D, where the values in the first and last rows are determined by the boundary conditions. Now, with the convenient form of Eq.~(\ref{eq-a2d}), the approximate-factorization method can be employed to solve the biharmonic term implicitly. The same idea can be directly extended to three dimensions, where the points used in the mixed partial derivatives are replaced as shown in Fig.~\ref{fig-dis}(a). Thus, we get the operators, \begin{equation} A_x=A_y=A_z=\displaystyle \left(\frac{0.75L}{h}\right)^2 \left[\begin{array}{cccccccc} \cdots &&&\cdots&&&&\cdots\\ 4&-16&24&-16&4&&&0\\ 0&\ddots&\ddots&\ddots&\ddots&\ddots &&\vdots\\ \vdots&&\ddots&\ddots&\ddots&\ddots&\ddots &0\\ 0&& &4&-16&24&-16&4 \\ \cdots &&&&\cdots&&&\cdots\\ \end{array}\right], \label{eq-a3d} \end{equation} for 3D.
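As a sanity check on Eqs.~(\ref{eq-new}) and (\ref{eq-a2d}), the following self-contained Python sketch (our own illustration, not part of AFiD) applies the per-axis stencil $(2,-8,12,-8,2)$ to a smooth periodic test function in 2D and verifies that $(A_x+A_y)C$ approximates $\Cn^2\nabla^4 C$ with an $O(h^2)$ error, i.e. the maximum error decreases by a factor of about four per mesh refinement by two:
\begin{verbatim}
import numpy as np

def A_axis(C, h, axis):
    # one-dimensional operator of Eq. (eq-a2d): Cn^2/h^4 * (2,-8,12,-8,2),
    # with Cn = 0.75 h and L = 1 (periodic wrap via np.roll)
    s = lambda k: np.roll(C, -k, axis=axis)   # s(k)[i] = C[i+k]
    return (0.75*h)**2/h**4*(2*s(-2) - 8*s(-1) + 12*C - 8*s(1) + 2*s(2))

def max_error(n):
    h = 1.0/n
    x = (np.arange(n) + 0.5)*h
    X, Y = np.meshgrid(x, x, indexing='ij')
    C = np.sin(2*np.pi*X)*np.sin(4*np.pi*Y)
    # exact Cn^2 * biharmonic: nabla^4 sin(ax)sin(by) = (a^2+b^2)^2 sin(ax)sin(by)
    exact = (0.75*h)**2*((2*np.pi)**2 + (4*np.pi)**2)**2*C
    return np.abs(A_axis(C, h, 0) + A_axis(C, h, 1) - exact).max()

for n in (64, 128, 256):
    print(n, max_error(n))   # successive error ratios approach 4, i.e. O(h^2)
\end{verbatim}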
With $A_x$, $A_y$ and $A_z$, we can use the approximate-factorization method \cite{axayaz,jcp96} to efficiently solve the following equation, with $q^l$ known from the previous time step and $q^{l+1}$ unknown at the next time step, \begin{equation}\label{eq-example} \frac{q^{l+1}-q^l}{\delta t} = E+\beta(A_x +A_y +A_z)\frac{q^{l+1}+q^l}{2}, \end{equation} where $E$ represents the terms calculated explicitly, $\beta$ is a constant coefficient, $A_x$, $A_y$ and $A_z$ are discretization operators, and $(q^{l+1}+q^l)/2$ originates from the Crank-Nicolson scheme. Eq.~(\ref{eq-example}) can be rewritten as, \begin{equation}\label{eq-re} \left[1-\frac{\delta t \beta}{2}(A_x +A_y +A_z)\right](q^{l+1}-q^l) = \delta t E+ \delta t \beta(A_x +A_y +A_z) q^l. \end{equation} Then we factorize the operator on the left-hand side, \begin{equation}\label{eq-fac} \left[1-\frac{\delta t \beta}{2}(A_x +A_y +A_z)\right] = \left(1-\frac{\delta t \beta}{2}A_x\right) \left(1-\frac{\delta t \beta}{2}A_y\right) \left(1-\frac{\delta t \beta}{2}A_z\right)+O(\delta t^2 \beta^2). \end{equation} After factorization, the computation only requires inversions of separate narrow-banded (here pentadiagonal) matrices rather than the inversion of a large sparse matrix, which leads to a significant reduction in computational cost and memory \cite{axayaz,jcp96}. Then, Eq.~(\ref{eq-re}) can be solved by the following steps, \begin{equation} \label{eq-fac-1} \left(1- \frac{\delta t \beta}{2}A_x\right)\delta q^* = \delta t E+ \delta t \beta(A_x +A_y +A_z) q^l, \end{equation} \begin{equation} \label{eq-fac-2} \left(1- \frac{\delta t \beta}{2}A_y\right)\delta q^{**} = \delta q^*, \end{equation} \begin{equation} \label{eq-fac-3} \left(1- \frac{\delta t \beta}{2}A_z\right)(q^{l+1}-q^l) = \delta q^{**}, \end{equation} where the superscripts $*$ and $**$ denote intermediate quantities. In Eqs.~(\ref{eq-fac-1}), (\ref{eq-fac-2}) and (\ref{eq-fac-3}), the matrix inversions are extremely cheap since $A_x$, $A_y$ and $A_z$ each involve only points along a single dimension. \subsection{FFT-based solver with a split method for Poisson equation with large density contrast} \label{sec-fft} The NS equation (\ref{eq-ns}) is solved here by a projection method, \begin{equation} \frac{{\bf u}^{l+1}-{\bf u}^{*}}{\delta t}=-\frac{1}{\rho^{l+1}}\nabla P^{l+1}, \label{eq-pro} \end{equation} where ${\bf u}^*$ is an intermediate velocity field calculated from Eq.~(\ref{eq-ns}) without the pressure term. Considering $\nabla \cdot {\bf u}^{l+1}=0$, we have, \begin{equation} \nabla \cdot \left(\frac{1}{\rho^{l+1}}\nabla P^{l+1}\right)=\frac{1}{\delta t}\nabla \cdot {\bf u}^{*}. \label{eq-poi} \end{equation} To solve this Poisson equation with large density variations, we use the split method proposed in Ref.~\cite{jcp14}, which allows a fast Poisson solver to be applied to Eq.~(\ref{eq-poi}). In the split method \cite{jcp14}, the Poisson equation (\ref{eq-poi}) with the variable coefficient $1/\rho^{l+1}$ is split into an implicit constant density part and an explicit variable part, \begin{equation} \frac{1}{\rho^{l+1}}\nabla P^{l+1}=\frac{1}{\rho_2}\nabla P^{l+1}+\left(\frac{1}{\rho^{l+1}}-\frac{1}{\rho_2}\right)\nabla (2P^l-P^{l-1}), \label{eq-split} \end{equation} where we define $\rho_2 \le \rho_1$. Substituting Eq.~(\ref{eq-split}) into Eq.~(\ref{eq-poi}) gives, \begin{equation} \nabla^2 P^{l+1}=\nabla \cdot \left[\left(1-\frac{\rho_2}{\rho^{l+1}}\right)\nabla (2P^l-P^{l-1})\right]+\frac{\rho_2}{\delta t}\nabla \cdot {\bf u}^{*}.
\label{eq-newpoi} \end{equation} Then, a standard fast Poisson solver can be used here. After obtaining $P^{l+1}$, the velocity field is updated as, \begin{equation} {\bf u}^{l+1}={\bf u}^{*}-\delta t\left[\frac{1}{\rho_2}\nabla P^{l+1}+\left(\frac{1}{\rho^{l+1}}-\frac{1}{\rho_2}\right)\nabla (2P^l-P^{l-1})\right]. \label{eq-u} \end{equation} \subsection{Pencil distributed parallel strategy} \label{sec-para} The parallel method in the present approach is a pencil distributed parallel strategy (details in \cite{cf15}). Here, the computational domain is split in two dimensions, and this strategy allows us to use more CPU cores for large-scale computation, such as $70$ billion points with $64K$ cores as reported in \cite{cf15}. The other advantage is that this strategy couples well with the approximate-factorization method used to implicitly solve the equations. The high performance of this parallel method has been extensively validated in \cite{cf15} and \cite{gpu}. Moreover, it has already been used in many large-scale studies of turbulent flows \cite{richard,zhu-tc,shan,zhu}. \subsection{Multi-resolution meshes for $C$ and $\bf u$} \label{sec-mesh} One feature of our method is that the volume fraction field $C$ can be integrated on a refined uniform mesh, even if the momentum field $\bf u$ is integrated on a non-uniform mesh. For the $C$ field, a uniform mesh is the recommended choice, for the following reasons: The computation of the surface tension force is key to simulating multiphase flows. To ensure that the truncation error of the surface tension in space is of the same order in all directions, uniform mesh spacing in each direction is necessary near the interface. Furthermore, considering that drops in turbulent flows are likely to break up into smaller drops and disperse throughout the domain, a uniform mesh can easily handle the spatially dispersed drops. Therefore, the uniform mesh is a good choice for the $C$ field. On the other hand, in wall-bounded turbulence, the resolution requirements of the $\bf u$ field are more restrictive at the walls, where very thin kinematic boundary layers need to be resolved. The same strict requirements apply for Rayleigh--B\'{e}nard convection, where a large number of near-wall nodes are required to resolve thin thermal boundary layers \cite{olga}. Therefore, a stretched non-uniform mesh is a good choice for resolving $\bf u$ or the temperature field. This multi-resolution treatment of the mesh allows for large computational savings \cite{rodolfo, liu-dual}, since the operations on the momentum field, which require elliptic solves, are performed on the coarser mesh, while only the single scalar Cahn-Hilliard equation, which involves no elliptic equation, is integrated on the fine mesh. The multi-resolution method that decouples $\bf u$ and $C$ works as follows. $\bf u$ is projected from a base mesh, which is non-uniform, to a refined uniform mesh on which $C$ resides. The projection employs a tri-cubic Hermite spline interpolation, with a stencil of four points in each direction, for a total of sixty-four points in three dimensions. Here, the Hermitian interpolation is preferred since its accuracy has been proven sufficient for turbulent flows, and it is considerably cheaper than other methods such as B-splines \cite{rodolfo}. This stencil is generated only once at the start of the simulation and is reused throughout.
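To illustrate the four-point interpolation underlying the projection step, the sketch below implements a one-dimensional cubic Hermite interpolant with centered-difference slopes (a Catmull-Rom-type rule); the tri-cubic version is the tensor product of such a rule in the three directions. This is a minimal illustration of the idea, not the actual AFiD implementation:
\begin{verbatim}
import numpy as np

def hermite4(f0, f1, f2, f3, t):
    # cubic interpolation between f1 (t = 0) and f2 (t = 1), with slopes from
    # centered differences over the 4-point stencil (Catmull-Rom form)
    return (f1 + 0.5*t*(f2 - f0)
            + t**2*(f0 - 2.5*f1 + 2.0*f2 - 0.5*f3)
            + t**3*(1.5*(f1 - f2) + 0.5*(f3 - f0)))

# interpolate sin(x) halfway between two nodes of a uniform mesh (h = 0.1)
f = np.sin([0.0, 0.1, 0.2, 0.3])
print(hermite4(*f, 0.5), np.sin(0.15))   # agree to about 1e-6
\end{verbatim}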
To preserve the solenoidal properties of the momentum field, instead of directly projecting $\bf u$, the normal velocity gradients on the base mesh are first computed and the projection is then applied to these gradients. Finally, with a refined 2D velocity field interpolated at a reference location (in each direction), the refined velocities are integrated over the entire domain using the interpolated gradients. For the back-coupling of the $C$ field, the refined uniform mesh is directly projected to the stretched mesh since there is no solenoidal requirement for $C$. This down-sampling projection step is used to obtain $\mu$, $\rho$ and ${\bf F}_{st}$. The present method is an improvement over the previous method used in \cite{rodolfo}, since here the stretched mesh can contain an arbitrary number of nodes employing different stretching parameters. \section{Results and discussion} \label{sec-case} In Section \ref{sec-shear}, we test the convergence of the results with mesh refinement and the performance of the new discretization scheme for the biharmonic term. Section \ref{sec-buble} shows the ability of the present approach to deal with large density and viscosity contrasts. In Section \ref{sec-rb0}, a possible application of multiphase turbulence is simulated, namely Rayleigh-B\'enard convection with drops, where the performance of the multi-resolution meshes is also tested. \subsection{Drop deformation in shear flow} \label{sec-shear} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{fig1-1}% \caption{\label{fig-shear1} Configuration for drop deformation in shear flow.} \end{figure} In order to test the mesh refinement convergence of our approach and the performance of the new discretization scheme for the biharmonic term, we consider the deformation of a drop in a shear flow with matched density and viscosity. A drop of radius $R$ is initially placed at the center of a domain of $8R \times 8R \times 8R$, as shown in Fig.~\ref{fig-shear1}. In the domain, there are two no-slip plates moving at a speed of $U$ in opposite directions, and periodic boundary conditions are used in the other directions. Due to the shear stress exerted by the surrounding fluid, the drop elongates until the surface tension counteracts the resulting load. We define the deformation ratio $\Gamma = (L - B)/(L + B)$ as in \cite{dual, shear1, shear2} to quantify the degree of drop deformation, where $B$ and $L$ are the lengths of the minor and major axes of the deformed drop at equilibrium, respectively, see Fig.~\ref{fig-shear1}. The governing dimensionless parameters are the capillary number $\Ca = \mu\dot{\gamma} R/\sigma$, the Reynolds number $\Re= \rho\dot{\gamma} R^2/\mu$, and the Weber number $\We=\rho(\dot{\gamma} R)^2 R/\sigma=\Ca \, \Re$, where $\dot{\gamma} =2U/H$ is the shear rate and $H$ the thickness of the fluid layer. Gravity is not considered here. With $\Ca \ll 1$ and $\Re \ll 1$, $\Gamma$ is expected to depend linearly on $\Ca$ according to $\Gamma\approx (35/32)\Ca$ \cite{shear0}. \begin{figure} \centering \includegraphics[width=0.4\linewidth]{fig1-2a}% \hspace{0.1\linewidth} \includegraphics[width=0.4\linewidth]{fig1-2b}% \caption{\label{fig-shear2} (a) Comparison of the present results ($\nabla$) with the theoretical approach in \cite{shear0} (black line) and the previous numerical results in \cite{dual} ($\Delta$) in terms of the drop deformation ratio $\Gamma$ at various capillary numbers $\Ca$.
(b) Convergence study with mesh refinement at $\Ca=0.1$ in terms of the error $E_h$ of $\Gamma$. The slope of the solid line is $k=1.4$.} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig1-3a}% \hspace{0pt} \includegraphics[width=0.7\linewidth]{fig1-3b}% \caption{\label{fig-shear3} Breakup of a spherical drop in shear flow at $\Ca=0.39$ and $\Re=1$. The snapshots are at $t=20$ (upper) and $t=29$ (lower).} \end{figure} Fig.~\ref{fig-shear2}(a) shows the variation of the deformation ratio $\Gamma$ as a function of $\Ca$ at $\Re=0.03$, for simulations performed on a grid with $h=0.005$. The comparison with the theoretical prediction \cite{shear0} and the previous numerical results \cite{dual} gives good agreement. With increasing $\Ca$, the deformation ratio $\Gamma$ becomes larger than the theoretical prediction \cite{shear0} since the assumption of $\Ca \ll 1$ for this prediction is no longer satisfied. As reported in the previous studies \cite{dual, shear1, shear2}, the drop breaks up at $\Ca = 0.39$ and $\Re=1$. We also perform this simulation in a domain of $12R \times 8R \times 8R$, as shown in Fig.~\ref{fig-shear3}. The drop breaks up into three smaller ones, as expected. Fig.~\ref{fig-shear2}(b) shows the results of the convergence study with different mesh sizes $h=0.0031$, $0.0042$, $0.0050$, $0.0063$ and $0.0100$ at $\Ca=0.1$. The numerical error $E_h$ is calculated by comparing $\Gamma$ to the value obtained with the finest mesh ($h=0.0031$). The convergence rate is $1.4$, which is between $1$ and $2$, as expected, since the phase-field method \cite{jacqmin00, ding07jcp} for the interface used here is first-order accurate while the NS solver is second order \cite{jcp96}. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{fig1-4}% \caption{\label{fig-shear4} Convergence study with mesh refinement in terms of $E_{max}$. Red symbols ($\nabla$) represent the data obtained with the explicit scheme of Eq.~(\ref{eq-disbi}) and $\delta t=5 \times 10^{-5}$, and blue symbols ($\Delta$) the data obtained with the new implicit scheme of Eq.~(\ref{eq-new}) and $\delta t=2 \times 10^{-3}$. The slope of the solid line is $k=1.4$.} \end{figure} We have also tested the performance of the explicit discretization scheme in Eq.~(\ref{eq-disbi}) and the new implicit scheme in Eq.~(\ref{eq-a3d}) for the biharmonic term $\Cn^2\nabla^4 C$ described in Section \ref{sec-bi}. Since the explicit scheme requires a small time step, here we consider the quantity $\Gamma_{max}$, which is reached around $t=0.4$, instead of $\Gamma$ at equilibrium, which is attained only around $t=30$. In Fig.~\ref{fig-shear4} we show a convergence study with mesh refinement at $\delta t = 5\times 10^{-5}$ (the largest value that maintains numerical stability) for the explicit scheme and $\delta t = 2\times 10^{-3}$ for the new scheme; the results agree well. This shows that the new implicit scheme is highly efficient and accurately discretizes the biharmonic term. Thanks to this, we can perform large-scale simulations of turbulent multiphase flows.
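The observed order is obtained from a least-squares fit of $\log E_h$ against $\log h$. A minimal Python sketch of this fit follows; since the measured errors are shown in the figures rather than tabulated, the error values below are synthetic, generated with the reported slope purely for illustration:
\begin{verbatim}
import numpy as np

h = np.array([0.0042, 0.0050, 0.0063, 0.0100])
E = 2.0*h**1.4                           # synthetic errors with the reported slope
k, logC = np.polyfit(np.log(h), np.log(E), 1)
print(f"observed order k = {k:.2f}")     # recovers k = 1.40
\end{verbatim}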
\subsection{Rising bubble with buoyancy} \label{sec-buble} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{fig2-1}% \caption{\label{fig-bubble1} Configuration for rising bubble with buoyancy.} \end{figure} In this subsection, we test the performance of the present approach by simulating a three-dimensional bubble rising in liquid water with large density and viscosity contrasts of up to $1000$ and $100$ times, respectively, in the same configuration as previous axisymmetric studies \cite{ding07jcp,rising}. Initially, we place a bubble (fluid 2) of radius $R$ in a domain of $8R \times 8R \times 8R$, with the bubble center at a distance of $1.6R$ from the bottom plate, as shown in Fig.~\ref{fig-bubble1}. No-slip and non-penetration boundary conditions are enforced at all boundaries. The dimensionless parameters controlling this problem are the Reynolds number $\Re=\rho U R/\mu=100$, the Bond number $\Bo=\rho g R^2/\sigma=200$, and the density and viscosity ratios $\lambda_\rho=0.001$ and $\lambda_\mu=0.01$, respectively. Note that $\Fr=1$ and $\We=\Bo$ due to the characteristic velocity $U=\sqrt{gR}$. The mesh used here is $400 \times 400 \times 400$, where the mesh size is the same as in the axisymmetric simulations \cite{ding07jcp,rising}. \begin{figure} \centering \includegraphics[width=\linewidth]{fig2-2}% \caption{\label{fig-bubble2} (a) Interface shape of the rising bubble of the present study. (b) Comparison of the results in the $x-z$ plane of the present study (black) with those of the previous studies \cite{ding07jcp} (right half, green dashed line) and \cite{rising} (left half, red dashed line) at $t=0.8$, $1.6$ and $2.4$, respectively.} \end{figure} Thanks to buoyancy, the bubble rises. For $\Bo \gg 1$, with the surface tension not large enough to counteract buoyancy, the bottom of the bubble rises faster than the top, as shown in Fig.~\ref{fig-bubble2}. Therefore, eventually, the bubble breaks up from the tip and evolves into a toroid. Although a three-dimensional case is performed here to test the performance of our code, the flow is indeed axisymmetric, so that we can compare our results with the previous axisymmetric simulations \cite{ding07jcp,rising}. In our numerical simulations, the breakup occurs at $t = 1.61$ and $y = 4.1R$, which agrees well with previous simulations using different numerical approaches: $t = 1.60$ and $y = 4.05R$ with the level set method \cite{rising}, and $t = 1.61$ and $y = 4.09R$ with the diffuse-interface method \cite{ding07jcp}. In addition, Fig.~\ref{fig-bubble2} presents comparisons of the bubble shape at the different time instants $t=0.8$, $1.6$ and $2.4$. It shows that the shape of the bubble interface in the present study is also in good agreement with the previous results \cite{ding07jcp,rising}. \subsection{Multiphase turbulent Rayleigh-B\'enard convection} \label{sec-rb0} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{fig3-1}% \caption{\label{fig-rb1} Configuration for breakup of one big drop in turbulent Rayleigh-B\'enard convection in a domain with a hot bottom plate ($\theta=1$) and a cold top plate ($\theta=0$).} \end{figure} Here we consider a possible application of multiphase turbulent flows using the present approach: turbulent Rayleigh-B\'enard convection with drops, as shown in Fig.~\ref{fig-rb1}. Rayleigh-B\'enard convection is the motion of a fluid in a cell heated from below and cooled from above \cite{rev1,rev2,chilla}.
For Rayleigh-B\'enard convection, the temperature advection equation reads \begin{equation} \rho c_p \left(\frac{\partial {\theta}}{\partial t} + {\bf u} \cdot \nabla \theta \right)= \frac{1}{\sqrt {\Pr \Ra }}\nabla \cdot (k_d\nabla \theta), \label{eq-t} \end{equation} where $c_p=k_d/(\kappa \rho)$ is the specific heat capacity. The thermal conductivity $k_d$ is defined as \begin{equation} k_d =C + \lambda_{k_d}(1-C), \label{eq-k} \end{equation} where $\lambda_{k_d}=k_{d2}/k_{d1}$ is the ratio of the thermal conductivities. We choose the distance between the hot and cold plates as the characteristic length, and the free-fall velocity $U=\sqrt{\alpha_1 g L \Delta}$ as the characteristic velocity. The relevant dimensionless groups of the configuration are the Rayleigh number $\Ra=\alpha_1 \rho_1 g L^3 \Delta/(\mu_1 \kappa_1)$ and the Prandtl number $\Pr=\mu_1/(\rho_1 \kappa_1)$, where $\alpha$ is the thermal expansion coefficient, $\Delta$ the temperature difference and $\kappa$ the thermal diffusivity, in addition to the dimensionless numbers controlling the droplets. For this case the gravity force ${\bf G}$ in Eq.~(\ref{eq-ns}) depends on both $C$ and the dimensionless temperature $\theta$, whose effects on the density are considered within the Boussinesq approximation, \begin{equation} {\bf G}=\left\{[C+\lambda_\alpha \lambda_\rho (1-C)] \, \theta-\frac{\rho}{\Fr}\right\} {\bf j}, \label{eq-g} \end{equation} where $\lambda_\alpha=\alpha_2/\alpha_1$ is the ratio of the thermal expansion coefficients $\alpha$. \subsubsection{Breakup of one big drop in turbulent Rayleigh-B\'enard convection} \label{sec-rb} Initially, a drop of radius $0.23H$ (represented by $C=1$), whose density and viscosity match those of the ambient fluid, is placed at the center of the domain $H \times H \times H$, with a linear temperature profile and zero velocity. The boundary conditions at the top and bottom plates are set as $C=0$, no-slip conditions, and fixed temperatures $\theta=0$ (top) and $1$ (bottom). Periodic boundary conditions are used in the horizontal directions. The chosen dimensionless parameters are $\Ra=10^8$, $\Pr=1$ and $\We=8000$. Note that $\We$ here is large because it is defined with the system height instead of the droplet size. For the local Weber number, defined using the droplet size, we find values of $O(1)$ after the droplet breakup, which is consistent with the Kolmogorov-Hinze theory \cite{kol,hinze}. The chosen Rayleigh number is large enough for the flow to enter the turbulent regime. The mesh is $500 \times 500 \times 500$, which is consistent with the grid resolution checks in \cite{zhou}. \begin{figure} \centering \includegraphics[width=\linewidth]{fig3-2}% \caption{\label{fig-rb2} Snapshots of the interface shape of drops at $\Ra=10^8$, $\Pr=1$ and $\We=8000$. The times are (a) $t=9$, (b) $t=12$, (c) $t=15$ and (d) $t=100$. Temperature on the surface is shown in different colors (hot in red and cold in blue).} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\linewidth]{fig3-3}% \caption{\label{fig-rb3} Probability distribution function (PDF) of the drop size $D/H$ calculated from the drop volume. The solid and dashed lines indicate the scaling laws $-10/3$ \cite{pdf} and $-4/3$ \cite{pdfjfm}, respectively. The red circles denote the results on the single-resolution meshes, and the blue crosses those on the multi-resolution meshes.} \end{figure} Fig.~\ref{fig-rb2} shows snapshots of the drops in Rayleigh-B\'enard convection.
The drop first deforms due to buoyancy (see Fig.~\ref{fig-rb2}a) and then breaks up because of the small surface tension (see Fig.~\ref{fig-rb2}b). As time evolves, hundreds of drops of various sizes are advected in the turbulent field (Figs.~\ref{fig-rb2}c and \ref{fig-rb2}d). The drop size is characterized by an effective diameter $D$, defined through $\frac{4\pi}{3} (D/2)^3=V$, with $V$ being the drop volume. The resulting distribution of the drop sizes is shown in Fig.~\ref{fig-rb3}. We observe that the probability distribution function (PDF) of the large drops follows the scaling $(D/H)^{-10/3}$ while that of the small drops obeys the scaling $(D/H)^{-4/3}$, both of which originate from previous theoretical studies for the respective regimes \cite{pdf,pdfjfm}: First, in turbulent flows, the distribution of the drop size has been studied extensively. The well-known $-10/3$ scaling law for the large drops in turbulence was proposed in Ref.~\cite{pdf} and validated by many experimental and numerical studies \cite{nature02,pipe,pop16,LB19JFM}. Second, the derivation of the $-4/3$ scaling law for the relatively small drops originates from a recent study \cite{pdfjfm}; it is based on the energy balance in a regime dominated by surface tension. Fig.~\ref{fig-rb3} shows that the present numerical simulations and the theoretical analyses \cite{pdf,pdfjfm} give consistent results. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{fig3-4}% \caption{\label{fig-rb4} Time evolution of the mass loss $l_{loss}$ of drops in turbulent Rayleigh-B\'enard convection. The blue and red curves represent the results on the single and multi-resolution meshes, respectively.} \end{figure} The mass conservation is also tested in this section. Fig.~\ref{fig-rb4} shows the normalized mass loss $l_{loss}= (m_t-m_0)/m_0$, where $m_t$ is the mass of fluid $1$ (drops) at time $t$ and $m_0$ is the initial mass of the drop. We see that the maximal mass loss is of the order of $10^{-5}$ and the value of $l_{loss}$ does not increase in time. This demonstrates the good mass conservation of the present approach, which is consistent with other phase-field studies \cite{ding07jcp, shu03, soldati3}. \begin{figure} \centering \includegraphics[width=0.4\linewidth]{fig3-5a}% \hspace{0.1\linewidth} \includegraphics[width=0.4\linewidth]{fig3-5b}% \caption{\label{fig-rb5} (a) Wall time and (b) speedup of the computation time compared to that with a single core, as functions of the number of CPU cores, with gridpoints of $1000^3$ ($\Delta$) and $2000^3$ ($\nabla$). The empty symbols are the present data, and the filled symbols the data of turbulent single-phase flows \cite{gpu}.} \end{figure} We also simulated the case on multi-resolution meshes with otherwise unchanged parameters: a uniform mesh of $500^3$ for the CH equation and a stretched mesh of $250^3$ for the NS equations, i.e. the same resolution for the volume fraction $C$ and a coarser one for the velocity $\bf u$ and temperature $\theta$ compared to the single-resolution grid. The consistent results obtained on the multi- and single-resolution meshes are shown in Figs.~\ref{fig-rb3} and \ref{fig-rb4} in terms of the PDF of $D/H$ and the time evolution of $l_{loss}$. We also test the computational efficiency of the method on the supercomputer MareNostrum at the Barcelona Computing Center (2 sockets Intel Xeon Platinum 8160 CPU with 24 cores each @ 2.10GHz, for a total of 48 cores per node). Two sets of gridpoints are used, i.e.
$1000^3$ and $2000^3$; the multi-resolution option is not used here, to match the setting of the previous study. The wall clock time per step and the speedup compared with a single core, as functions of the number of CPU cores, are presented in Fig.~\ref{fig-rb5}. Compared to the AFiD code for single-phase flows \cite{gpu}, the computational cost of the present approach for multiphase flows is less than $1.5$ times higher. Moreover, the parallel efficiency remains quite good up to $3072$ CPU cores. These data show that the computational performance of the present approach for turbulent multiphase flows is nearly as good as that of the solver for turbulent single-phase flows. \subsubsection{Coalescence of $O(10^3)$ drops in Rayleigh-B\'enard convection} \label{sec-1000} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig4-1}% \caption{\label{fig-1000i} Initial configuration for the study of coalescence of $O(10^3)$ drops in turbulent Rayleigh-B\'enard convection. The color code represents the temperature.} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{fig4-2}% \caption{\label{fig-1000t} Snapshots of the interface shape of drops at $\Ra=10^8$, $\Pr=1$ and $\We=1000$ with initially (a) $1000$ drops and (b) only one drop. Temperature on the surface is shown with the same color bar as in Fig.~\ref{fig-rb2}. } \end{figure} The topological change of the interface includes the breakup and coalescence of drops. In Section \ref{sec-rb}, we clearly observed the breakup of drops. In this section, we show the coalescence of $O(10^3)$ drops in turbulent Rayleigh-B\'enard convection. The initial setup is presented in Fig.~\ref{fig-1000i}, where we placed $1014$ drops with a uniform diameter of $0.08H$ in a domain of $2H\times 2H \times H$. The simulation was performed on a mesh of $1000\times 1000\times500$ with $2048$ CPU cores. The Weber number was set to $\We=1000$, which is smaller than that in Section \ref{sec-rb}. The other dimensionless parameters and boundary conditions are the same as in Section \ref{sec-rb}. As seen from the snapshots at $t=10$, $40$ and $150$ in Fig.~\ref{fig-1000t}(a), most of the drops coalesce into larger ones. Since the Weber number here is smaller than that in Section \ref{sec-rb}, the surface tension is stronger and can resist inertia, leading to larger drop sizes. We also simulated a case with a different initialization, where only one big drop with a diameter of $0.8H$ is placed at the center of the domain. Although different initial conditions are used, similar statistically equilibrium states were obtained after sufficiently long times (see Fig.~\ref{fig-1000t}). \section{Conclusion} \label{sec-con} In this study we have shown how to efficiently implement the phase-field method into the single-phase DNS solver AFiD. A new discretization scheme for the biharmonic term $\Cn^2\nabla^4 C$ of the Cahn-Hilliard equation has been proposed. Together with the approximate-factorization method, the FFT-based Poisson solver, and a pencil distributed parallel strategy, massive DNSs of turbulent multiphase flows (using up to $8$ billion gridpoints and $3072$ CPU cores) can be performed. The suggested new approach has been validated by comparisons with several numerical experiments. In the case of drop deformation in shear flow, the results agree well with theoretical and previous numerical results, and the convergence study with mesh refinement shows an accuracy between first and second order, as expected.
For the case of a rising bubble with buoyancy, good agreement with previous simulations is also achieved, even with large density and viscosity contrasts of up to $1000$ and $100$ times, respectively. Furthermore, in the cases of breakup and coalescence of drops in turbulent Rayleigh-B\'enard convection, we observe good performance of our approach in dealing with turbulent multiphase flows, including good mass conservation and high computational efficiency, thus establishing that our scheme performs reliable large-scale simulations of turbulent multiphase flows. The new scheme and code therefore offer great opportunities to better understand the physics of turbulent two-phase flow with coalescence and breakup of droplets and bubbles. \section*{Acknowledgments} This work was financially supported by ERC-Advanced Grant under the project no. 740479. We acknowledge PRACE for awarding us access to MareNostrum in Spain at the Barcelona Computing Center (BSC) under the project 2020225335, and Irene at Tr\`es Grand Centre de calcul du CEA (TGCC) under the project 2019215098. This work was also carried out on the national e-infrastructure of SURFsara, a subsidiary of SURF cooperation, the collaborative ICT organization for Dutch education and research. \bibliographystyle{model1-num-names}
{ "timestamp": "2021-05-06T02:09:39", "yymm": "2105", "arxiv_id": "2105.01865", "language": "en", "url": "https://arxiv.org/abs/2105.01865" }
\section{Introduction} Space-times containing plane gravitational waves have seen extensive analytical study over the years, and many closed-form solutions, which necessarily assume certain symmetries or wave profiles, now exist and their properties are known (see \cite{griffiths2016colliding} for an excellent overview). While there are a number of analytic solutions for the propagation and collision of waves assuming a vanishing cosmological constant \cite{brinkmann1923riemann, peres1959some, takeno1961mathematical,khan1971scattering, penrose1972geometry, nutku1977colliding}, the non-vanishing cosmological constant analogues pale in number, and there are no closed-form solutions for colliding waves in this case. Penrose's cut-and-paste method \cite{penrose1972geometry, penrose1968twistor}, which cuts Minkowski space-time along a null hyperplane, shunts one half along the same surface and then pastes the two halves back together, gives rise to a space-time with one impulsive gravitational wave (i.e. with a Dirac delta function wave profile). This has been generalized to non-zero, constant curvature backgrounds \cite{podolsky1999nonexpanding, podolsky1999expanding, griffiths2000exact, podolsky2000collision, podolsky2002exact, podolsky2019cut}, where the wave fronts are topologically spherical for $\lambda>0$ and hyperboloidal for $\lambda<0$. There do not exist, however, closed-form solutions to the full non-linear Einstein equations with $\lambda\neq0$ that contain gravitational waves with \emph{plane symmetric} wave fronts. De Sitter space-time, the unique solution to the Einstein vacuum equations with constant positive scalar curvature, can be thought of as a model of a universe which is expanding at an accelerated rate due to the positive $\lambda$ contribution. Quantum gravitational back-reaction on inflation \cite{tsamis1997quantum} allows for the creation of cosmic-scale gravitational radiation, which, if one does not account for its creation, can be modelled completely classically through gravitational perturbations of de Sitter space-time \cite{tsamis2013pure,tsamis2014classical}. It is theorized that such a background of radiation may weaken the expansion and even halt it completely. Analytical calculations have been done to explore this hypothesis by studying how an expansion parameter and its time derivative could be manipulated through such a field at an initial instant of time. The question of what happens away from this surface remains unanswered, and attempting to answer it in the full non-linear regime analytically would be very difficult, if not impossible. In this paper, we numerically evolve the Einstein vacuum equations with positive cosmological constant in plane symmetry with the goal of shedding light on the above topics. To do so, we implement an initial boundary value problem following Friedrich and Nagy \cite{friedrich1999initial}, which is well-posed, and allows us to generate gravitational perturbations through boundary conditions rather than solving the constraints. This framework has already been implemented and numerically validated in previous work \cite{frauendiener2014numerical} for $\lambda=0$. We generalize this to an arbitrary cosmological constant as well as the inclusion of matter terms through components of $\Phi_{ab} = -(1/2)R_{ab} + (1/8)Rg_{ab}$ and scalar curvature $\Lambda = (1/24)R$ for completeness. We follow the conventions of Penrose and Rindler \cite{penrose1986spinors,penrose1988spinors} throughout.
\section{Review of plane gravitational waves with $\lambda=0$} Here we briefly present the space-times of a single impulsive gravitational plane wave and of the collision of two collinearly polarized ones, with $\lambda=0$. This can be accomplished by summarizing the Khan-Penrose solution \cite{khan1971scattering}, which describes the latter. \begin{figure}[H] \centering \includegraphics[width=0.4\linewidth]{kpsoln.png} \caption{The structure of the Khan-Penrose solution.} \label{fig:kpsoln} \end{figure} Fig.~\ref{fig:kpsoln} showcases the Khan-Penrose solution in null coordinates $u,\,v$, where the two spatial dimensions that span the planes are suppressed, so that each point represents a plane. Null curves are represented by lines with slope $\pm1$ and the impulsive waves are given by $\Psi_0 = \delta(v),\,\Psi_4=\delta(u)$, where $\delta$ is the Dirac delta function, so their paths are given by the dashed lines. These lines split the space-time into four regions. The lower region is Minkowski space-time, the two side regions are space-times containing one propagating wave only, and the top region is the interaction region after scattering. All four regions can be represented by the single line element \begin{eqnarray}\label{eq:kpsoln} \textrm{d}s^2 &= \frac{2(1 - p^2 - q^2)^{3/2}}{\sqrt{1 - p^2}\sqrt{1 - q^2}(pq + \sqrt{1 - p^2}\sqrt{1 - q^2})^2}\textrm{d}u\textrm{d}v \nonumber \\ &\quad -(1 - p^2 - q^2)\Big{(}\frac{\sqrt{1 - p^2} + q}{\sqrt{1 - p^2} - q}\Big{)}\Big{(}\frac{\sqrt{1 - q^2} + p}{\sqrt{1 - q^2} - p}\Big{)}\textrm{d}x^2 \nonumber \\ &\quad -(1 - p^2 - q^2)\Big{(}\frac{\sqrt{1 - p^2} - q}{\sqrt{1 - p^2} + q}\Big{)}\Big{(}\frac{\sqrt{1 - q^2} - p}{\sqrt{1 - q^2} + p}\Big{)}\textrm{d}y^2, \end{eqnarray} where $p := u\,\Theta(u)$ and $q := v\,\Theta(v)$. The interaction region contains a spacelike curvature singularity on the surface $u^2 + v^2 = 1$, which can be recognized as such from the divergence of, for example, the Weyl invariant $I$. The region containing only the $\Psi_0$ wave is where $u<0$ and $v\geq0$, where the line element Eq.~\eref{eq:kpsoln} reduces to \begin{equation}\label{eq:OneImpulsiveWave} \textrm{d}s^2 = 2\textrm{d}u\textrm{d}v - (1 + q)^2\textrm{d}x^2 - (1 - q)^2\textrm{d}y^2. \end{equation} This region and its $\Psi_4$ counterpart contain a \emph{fold singularity} along $v=1$ resp. $u=1$. As Eq.~\eref{eq:OneImpulsiveWave} can be transformed to Minkowski space-time by a coordinate transformation, one would think this is merely a coordinate singularity. However, looking closer, one sees that this is not the case, as there does not exist a $C^1$ extension from this region to $v=1$ resp. $u=1$ \cite{matzner1984metaphysics}. Further, it is found that a certain projection of the $u=\;$constant, $v=\;$constant surfaces into Minkowski space in standard null coordinates converges at $v=1$. This has the consequence, discussed in more detail in Sec.~\ref{sec:AnalysisOfSingleWave}, that the spin-coefficients $\rho$ and $\rho'$, which, when positive, represent the convergence of a null geodesic congruence along $l^a$ and $n^a$ respectively, diverge to positive infinity, showing an ever-strengthening contraction of null rays in both null directions.
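The degeneracy at the fold can be made explicit by evaluating the metric coefficients of Eq.~\eref{eq:OneImpulsiveWave}; the following short Python sketch (purely illustrative) shows the $y$-plane coefficient $(1-q)^2$ collapsing to zero as $v\rightarrow1$ while the $x$-plane coefficient grows:
\begin{verbatim}
import numpy as np

v = np.linspace(0.0, 1.0, 6)
q = np.where(v > 0, v, 0.0)      # q = v * Theta(v)
for vi, qi in zip(v, q):
    # coefficients of dx^2 and dy^2 in the single-wave line element
    print(f"v = {vi:.1f}:  -g_xx = {(1 + qi)**2:.2f},  -g_yy = {(1 - qi)**2:.2f}")
\end{verbatim}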
Further, we include matter coming from an energy momentum tensor $T_{ab}$. A detailed explanation of this process in vacuum with vanishing cosmological constant has been laid out in \cite{frauendiener2014numerical}. We only give a brief summary here, emphasising the differences when including a non-vanishing cosmological constant and matter. The Einstein equations take the form \begin{equation} \Phi_{ab} + (3\Lambda - \frac12\lambda)g_{ab} = 4\pi T_{ab}, \end{equation} where \begin{equation} R_{ab} = 6\Lambda g_{ab} - 2\Phi_{ab}, \end{equation} and $\Lambda$ and $\Phi_{ab}$ correspond to the trace and trace-free parts of the Ricci tensor $R_{ab}$, respectively, and $\lambda$ is the cosmological constant. To start setting up our gauge, we first assume our space-time can be foliated by planes. We then define the coordinates $t,z$ for time and the direction of wave propagation respectively, both being constant within the planes. Using the holonomic basis we define the null tetrad \begin{eqnarray} l^a &= \frac{1}{\sqrt{2}}\Big{(}(1+B)(\partial_t)^a + A(\partial_z)^a\Big{)},\\ n^a &= \frac{1}{\sqrt{2}}\Big{(}(1-B)(\partial_t)^a - A(\partial_z)^a\Big{)},\\[4pt] m^a &= \xi(\partial_x)^a+\eta(\partial_y)^a, \end{eqnarray} where $A,B,\xi,\eta$ are functions of $(t,z)$ only. This leads to the metric \begin{equation} g = \mathrm{d} t^2 - 2 \frac{B}{A}\, \mathrm{d} t\mathrm{d} z - \frac{1-B^2}{A^2}\, \mathrm{d} z^2 + \frac2{(\xi\bar\eta - \bar\xi\eta)^2} \left(\eta\,\mathrm{d} x - \xi\,\mathrm{d} y\right)\left(\bar\eta\,\mathrm{d} x - \bar\xi\,\mathrm{d} y\right). \end{equation} To obtain equations for the metric functions and find algebraic relations for the spin-coefficients (due to the plane symmetry assumption), we apply the commutator equations (see \cite{penrose1986spinors} Eq. (4.11.11)) to the coordinates. To obtain equations for the spin-coefficients we use the curvature equations (see \cite{penrose1986spinors} Eq. (4.11.12)). To obtain equations and algebraic relations for the components of the Weyl tensor $C_{abcd}$, $\Phi_{ab}$ and $\Lambda$, we use the equations coming from the Bianchi identity (see \cite{penrose1986spinors} Eqs (4.12.36-4.12.41)). The algebraic conditions are found to be \begin{eqnarray} \rho = \bar\rho,\quad \rho' = \bar\rho',\quad \kappa = \kappa' = \alpha = \beta = \tau = \tau' = 0, \\[4pt] \Psi_1 = \Psi_3 = 0,\quad \Psi_2 = \sigma\sigma' - \rho\rho' + \Lambda + \Phi_{11}, \\[4pt] \Phi_{01} = \Phi_{10} = \Phi_{12} = \Phi_{21} = 0. \end{eqnarray} Following Friedrich and Nagy, we set \begin{equation} \epsilon = \frac12(\rho - \rho' + F - \mu),\qquad \gamma = \frac12(\rho - \rho' + F + \mu), \end{equation} where $F = \chi + i f$ is a freely specifiable gauge source function and $\mu$ is taken as a system variable. $\chi$ is the mean extrinsic curvature of the $z=$ constant hypersurfaces and $f$ determines the rotation of the $m^a$ frame vector along $(\partial_t)^a$. The geometrical interpretation of the new variable $\mu$ can be explained in the gauge $F = \rho' - \rho$, which is the gauge used for most of our results and turns out to be the Gau\ss\; gauge. Although this gauge is predisposed to develop caustics, the expanding universe considered here acts to counter this tendency.
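As a consistency check, the $(t,z)$ block of the metric displayed above can be recovered symbolically from the tetrad via $g^{ab}=l^an^b+n^al^b-m^a\bar m^b-\bar m^a m^b$ (the $m^a$ terms do not contribute to this block). A small sympy sketch of this verification (our own illustration):
\begin{verbatim}
import sympy as sp

A = sp.symbols('A', positive=True)
B = sp.symbols('B', real=True)
# (t, z) components of l^a and n^a from the null tetrad above
l = sp.Matrix([(1 + B)/sp.sqrt(2),  A/sp.sqrt(2)])
n = sp.Matrix([(1 - B)/sp.sqrt(2), -A/sp.sqrt(2)])

g_inv = l*n.T + n*l.T            # inverse metric on the (t, z) block
g = sp.simplify(g_inv.inv())
print(g)  # Matrix([[1, -B/A], [-B/A, -(1 - B**2)/A**2]]), as in the line element
\end{verbatim}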
The fact that we are in the Gau\ss\; gauge can be seen immediately by noticing that the only non-vanishing component of the acceleration of the unit time-like vector $(\partial_t)^a$ along itself is proportional to \begin{equation} \gamma + \bar{\gamma} + \epsilon + \bar{\epsilon} = F + \bar{F} + 2(\rho - \rho') = 0 \end{equation} for this choice of $F$. The ``acceleration'' $z^a\nabla_az^b$ of the space-like unit vector $z^a := A(\partial_z)^a$ along itself is proportional to $\mu + \bar{\mu}$, which gives an interpretation for the real part of $\mu$. The imaginary part just corresponds to a phase change of $m^a$. It is found that the equations for $\eta,\xi$ decouple from the others, and as they are superfluous to the results subsequently presented, we do not include them in the system. The evolution equations are \numparts \begin{eqnarray} \sqrt2 \partial_t A &= (\mu + \bar\mu)\,A, \label{ee:1}\\ \sqrt2 \partial_t B &= (2\rho - 2\rho' + F + \bar F) + (\mu + \bar\mu) B,\label{ee:2}\\ \sqrt2 \partial_t \rho &= 3\rho^2 + \sigma \bar\sigma + \rho(F + \bar F) + \Phi_{00} - \Phi_{11} - 3\Lambda,\label{ee:3}\\ \sqrt2 \partial_t \rho' &= 3\rho^{\prime2} + \sigma' \bar\sigma' - \rho'(F + \bar F) - \Phi_{11} + \Phi_{22} - 3\Lambda,\label{ee:4}\\ \sqrt2 \partial_t \sigma &= 4\rho\sigma - \rho'\sigma + \rho\bar\sigma' + \sigma(3F - \bar F) + \Psi_0,\label{ee:5}\\ \sqrt2 \partial_t \sigma' &= 4\rho'\sigma' - \rho\sigma' + \rho'\bar\sigma - \sigma'(3 F - \bar F) + \Psi_4,\label{ee:6}\\ \sqrt2 \partial_t \mu &= \mu^2 + \mu\bar \mu - 3 (\rho - \rho')^2 + (\mu + \bar \mu) (\rho + \rho') - \sigma \bar\sigma - \sigma'\bar\sigma' + 2 \sigma\sigma'\nonumber\\ & - (\rho - \rho')(\bar F + 3F) - F^2 - F \bar F - \sqrt2 A\partial_z F - \sqrt2 B\partial_t F \nonumber \\ & - \Phi_{00} + 2\Phi_{11} - \Phi_{22} - 6\Lambda, \label{ee:7}\\[5pt] &\hspace{-3.9cm}(1-B) \partial_t \Psi_0 - A \partial_z \Psi_0 = \sqrt2 \left((2\rho - \rho' + 2 F + 2\mu)\Psi_0 + \sigma(3\Psi_2 + 2\Phi_{11}) + \bar\sigma'\Phi_{00}\right),\label{ee:8}\\ &\hspace{-3.9cm}(1+B) \partial_t \Psi_4 + A \partial_z \Psi_4 = \sqrt2 \left((2\rho' - \rho - 2 F + 2\mu)\Psi_4 + \sigma'(3\Psi_2 + 2\Phi_{11}) + \bar\sigma\Phi_{22}\right),\label{ee:9} \end{eqnarray} \endnumparts while the constraints take the form \numparts \begin{eqnarray} 0=C_1 &:= \sqrt2 A\partial_z\rho - (1 - 3 B) \rho^2 - (1 - B) \sigma\bar\sigma + \rho (\mu + \bar\mu + 2\rho')\nonumber \\ &\quad + \rho B (F + \bar F) -(1-B)\Phi_{00} - (1+B)\Phi_{11} - 3(1+B)\Lambda,\label{ce:1}\\[4pt] 0=C_2 &:= \sqrt2 A\partial_z\rho' + (1 + 3 B) {\rho'}^2 + (1 + B) \sigma'\bar\sigma' - \rho' ( \mu + \bar\mu + 2\rho) \nonumber \\ &\quad - \rho' B (F+\bar F) + (1-B)\Phi_{11} + (1+B)\Phi_{22} + 3(1-B)\Lambda,\label{ce:2}\\[4pt] 0=C_3&:=\sqrt2 A\partial_z\sigma + (1+B) \rho\bar\sigma' - 2 (1-2B)\rho\sigma + (1-B) \rho'\sigma \nonumber\\ &\hskip8em + \sigma(3\mu - \bar\mu) + B\sigma(3F - \bar F) - (1-B) \Psi_0 ,\label{ce:3}\\ 0=C_4&:= \sqrt2 A\partial_z\sigma' - (1-B) \rho'\bar\sigma + 2 (1+2B)\rho'\sigma' - (1+B) \rho\sigma' \nonumber\\ &\hskip8em - \sigma'(3\mu - \bar\mu) - B\sigma'(3F - \bar F) + (1+B) \Psi_4. \label{ce:4} \end{eqnarray} \endnumparts To supplement the above, the divergence-free condition on the energy-momentum tensor (equivalently the Bianchi identity, given in \cite{penrose1986spinors}, Eq.
4.12.40) gives \numparts \begin{eqnarray} &(1-B)\partial_t\Phi_{00} + (1+B)(\partial_t\Phi_{11} + 3\partial_t\Lambda) \nonumber \\ &= \sqrt{2}(2\rho + \mu + \bar\mu + F + \bar F)\Phi_{00} + 4\sqrt{2}\rho\Phi_{11} \nonumber \\ &\quad+ A(\partial_z\Phi_{00} - \partial_z\Phi_{11} - 3\partial_z\Lambda),\label{dfe:1}\\ &(1+B)\partial_t\Phi_{22} + (1-B)(\partial_t\Phi_{11} + 3\partial_t\Lambda) \nonumber \\ &= \sqrt{2}(2\rho' + \mu + \bar\mu - F - \bar F)\Phi_{22} + 4\sqrt{2}\rho'\Phi_{11}\nonumber \\ &\quad+ A(\partial_z\Phi_{11} - \partial_z\Phi_{22} + 3\partial_z\Lambda).\label{dfe:2} \end{eqnarray} \endnumparts Considering only the vacuum equations with cosmological constant, i.e. $\Phi_{ab}=0$, Eqs~\eref{dfe:1}--\eref{dfe:2} are identically satisfied and Eqs~\eref{ee:1}--\eref{ee:9}, Eqs~\eref{ce:1}--\eref{ce:4} comprise a closed system of equations, where the evolution equations are symmetric hyperbolic and the constraints propagate. When matter terms are present and one takes into account Eqs~\eref{dfe:1}--\eref{dfe:2}, it is still found that the above system is symmetric hyperbolic and the constraints propagate. The resulting subsidiary system is \numparts \begin{eqnarray} \sqrt{2}\partial_tC_1 &= (6\rho + F + \bar F)C_1 + \bar\sigma C_3 + \sigma \overline{C_3}, \\ \sqrt{2}\partial_tC_2 &= (6\rho' - F - \bar F)C_2 + \bar\sigma' C_4 + \sigma' \overline{C_4}, \\ \sqrt{2}\partial_tC_3 &= (4\sigma + \bar\sigma')C_1 - \sigma C_2 + (4\rho - \rho' + 3F - \bar F)C_3 + \rho\overline{C_4}, \\ \sqrt{2}\partial_tC_4 &= (4\sigma' + \bar\sigma)C_2 - \sigma' C_1 + (4\rho' - \rho - 3F + \bar F)C_4 + \rho'\overline{C_3}. \end{eqnarray} \endnumparts In order to close the system, one must in general couple it to equations describing the evolution of matter. There is a lot of freedom in this choice and it depends very much on the physical situation one wants to model. In general this choice will alter the principal part and, as a consequence, symmetric hyperbolicity and constraint propagation could be lost. Two useful quantities are now introduced for monitoring the behaviour of the evolved space-time. First, we note that the extrinsic curvature of our $t=\;$constant surfaces is $K_{ab}=-h_a^ch_b^d\nabla_ct_d$, where $h_{ab} = g_{ab} - t_at_b$ is the induced 3-metric on the surfaces and $t_a = (1-B^2)^{-1/2}(\textrm{d}t)_a$ is the unit conormal. We then define a local expansion parameter proportional to the mean extrinsic curvature $K_a{}^a$ as \begin{equation} \mathcal{H} := -\frac13K_a{}^a = \frac{\sqrt{2}(B^2-1)(B(F + \bar{F}) + \mu + \bar{\mu} + 2(\rho + \rho')) - 2 A \partial_zB}{6(1-B^2)^{3/2}}, \end{equation} which is used to monitor the expansion rate of the space-time along the time coordinate vector field. The Weyl scalar curvature invariants are useful tools for identifying whether a singularity is a curvature singularity. In the absence of matter and with our plane symmetry assumptions, the real part of $C_{abcd}C^{abcd}$ is the Weyl scalar curvature invariant \begin{equation}\label{eq:KretschmannScalar} I := 2\Psi_0\Psi_4 + 6\Psi_2^2. \end{equation} We define the wave profile \begin{equation} p(x) = \cases { 32a\sin(bx)^8 & $\displaystyle0<x<\frac{\pi}{b}$ \cr 0 & otherwise }, \end{equation} where $b=35\pi/4$ so that the area of the profile is $a=\int_0^{\pi/b} p(x)\textrm{d}x$ and the amplitude is $32a$. We take $a$ as a measure of the strength of the wave. The boundary conditions for $\Psi_0$ and $\Psi_4$ will make use of $p(x)$ and are chosen in the subsequent sections.
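The normalization can be checked directly: $\int_0^{\pi/b}32a\sin^8(bx)\,\textrm{d}x=(32a/b)(35\pi/128)$, which equals $a$ precisely for $b=35\pi/4$. A two-line numerical confirmation in Python:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a, b = 1.0, 35*np.pi/4
area, _ = quad(lambda x: 32*a*np.sin(b*x)**8, 0.0, np.pi/b)
print(area)   # 1.0 (= a), confirming the stated normalization
\end{verbatim}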
\subsection{De Sitter space-time} We investigate a variety of cases of plane gravitational waves propagating in de Sitter space-time (dS). The unperturbed metric in inflationary coordinates can be written \cite{hawking1973large} \begin{equation} \mathrm{d} s^2 = \mathrm{d} t^2 - A_0^{-2}e^{2Ht}(\mathrm{d} x^2 + \mathrm{d} y^2 + \mathrm{d} z^2), \qquad H^2=\lambda/3,\label{eq:dSLineElement} \end{equation} which covers half of the space-time and matches our setup for plane symmetry. This represents an expanding universe of the FLRW type. The constant $A_0 := A(0,z)$ sets the scale of the spatial directions and will be useful later. It is useful to write dS in terms of null coordinates as \begin{eqnarray} \mathrm{d} s^2 &= e^{2Ht}\Big{(}2\mathrm{d} u\,\mathrm{d} v - (\mathrm{d} x^2 + \mathrm{d} y^2)\Big{)} \\ &= 2\Big{(}\sqrt{2} - H(u + v + \sqrt{2})\Big{)}^{-2}\Big{(}2\mathrm{d} u\,\mathrm{d} v - (\mathrm{d} x^2 + \mathrm{d} y^2)\Big{)}, \end{eqnarray} with transformations \begin{eqnarray} u &= \frac{1}{\sqrt{2}}[H^{-1}(1 - e^{-Ht}) - A_0^{-1}(1+z)],\label{id:u}\quad\\ v &= \frac{1}{\sqrt{2}}[H^{-1}(1 - e^{-Ht}) - A_0^{-1}(1-z)].\label{id:v} \end{eqnarray} The Minkowskian analogue of the above can be found in the limit $H\rightarrow0$. In our formalism Eq.~\eref{eq:dSLineElement} gives the initial data \begin{eqnarray}\label{eq:dSID} A = A_0,\quad \rho = \rho' = \mu = \pm\sqrt{\lambda/6}, \end{eqnarray} with the remaining system variables, gauge quantities and matter terms vanishing. We will use the negative sign for the non-vanishing initial data, corresponding to a \emph{future expanding} universe, and set $\Phi_{ab}=0$. We incorporate into the system null coordinates $u(t,z),v(t,z)$ which satisfy $l^a\nabla_au=0$ and $n^a\nabla_av=0$ respectively. Their initial and boundary data are fixed by Eqs~\eref{id:u}, \eref{id:v} so that when no wave is present we reproduce the same null coordinates as in Eq.~\eref{eq:dSLineElement} when $F$ is chosen appropriately. The above expressions for $u,v$ were chosen so that initially $u(0,-1) = 0 = v(0,1)$. Having $u,v$ available allows us to define the semi-invariant coordinates $(T,Z)$ by $T:=\sqrt{2}(v+u)$ and $Z:=\sqrt{2}(v-u)$ with which we can produce Penrose-Carter diagrams, i.e. diagrams where null curves are lines with slope $\pm1$. When in exact dS, as $t\rightarrow\infty$ we obtain $T\rightarrow2(H^{-1} - A_0^{-1})$ and $Z\rightarrow2zA_0^{-1}$. \section{Numerical setup} We utilize the Python package COFFEE \cite{doulis2019coffee}, which contains all the necessary functionality to perform a numerical evolution using the method of lines. We discretize the $z$-direction into equi-distant points in the interval $[-1,1]$ and approximate the $z$-derivative using Strand's finite difference stencil \cite{strand1994summation}, which is fourth order in the interior, third order on the boundary and has the summation-by-parts property \cite{gustafsson1995time}. We march in time using the explicit fourth order Runge-Kutta scheme with a timestep determined by $\Delta t= c\,\Delta z$, where $\Delta z$ is the step size in the $z$-direction and $c$ is the CFL constant. Unless otherwise stated we take $c=0.5$. Boundary conditions are imposed using the Simultaneous Approximation Term (SAT) method \cite{carpenter1999stable} with $\tau=1$. This particular selection of numerical methods within COFFEE has proven to be numerically sound for a variety of different systems (see for example \cite{frauendiener2014numerical,beyer2017numerical}).
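To make the time-marching loop concrete, the following is a minimal method-of-lines sketch in the spirit of the setup just described. It evolves a toy advection equation with a plain fourth-order central stencil and classical RK4 with $\Delta t = c\,\Delta z$; the actual COFFEE-based code instead applies Strand's SBP operator to the full evolution system and imposes boundary conditions via SAT penalty terms:
\begin{verbatim}
import numpy as np

# Equi-distant grid on [-1, 1] and CFL-limited timestep, as in the text.
N  = 201
z  = np.linspace(-1.0, 1.0, N)
dz = z[1] - z[0]
c  = 0.5                      # CFL constant
dt = c * dz

def Dz(f):
    """Fourth-order central z-derivative (lower order at the edges)."""
    df = np.gradient(f, dz, edge_order=2)
    df[2:-2] = (f[:-4] - 8*f[1:-3] + 8*f[3:-1] - f[4:]) / (12*dz)
    return df

def rhs(u):
    return -Dz(u)             # toy stand-in for the evolution equations

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5*dt*k1)
    k3 = rhs(u + 0.5*dt*k2)
    k4 = rhs(u + dt*k3)
    return u + dt*(k1 + 2*k2 + 2*k3 + k4)/6

u = np.exp(-50*z**2)          # smooth initial profile
for _ in range(200):
    u = rk4_step(u, dt)
\end{verbatim}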
In the subsequent situations, all constraints are verified to converge at the expected order everywhere. \section{A single wave}\label{sec:SingleWave} \subsection{An analytical view}\label{sec:AnalysisOfSingleWave} Before analyzing the numerical results, it is worthwhile to perform a small analytic study of the propagation of one wave when either Minkowski or de Sitter initial data are taken. Firstly, the evolution equations for $\rho$ and $\sigma$ (Eqs~\eref{ee:3} and \eref{ee:5}), which have a close relationship to Sachs' optical equations, give with vanishing $\Phi_{ab}$ \begin{eqnarray} \sqrt{2}\partial_t\rho &= \rho(F + \bar{F}) + 3\rho^2 + \sigma\bar{\sigma} - 3\Lambda,\qquad \\ \sqrt{2}\partial_t\sigma &= \sigma(3F - \bar{F} + 4\rho - \rho') + \rho\bar{\sigma}' + \Psi_0. \end{eqnarray} For the case of Minkowski initial data, which is obtained by setting $\lambda=0$ in the de Sitter initial data, and where we choose $F(t,z)=0$ to extend the exact gauge of dS to the whole space-time, we find the following: The introduction of a non-zero $\Psi_0$ on the right boundary causes $\sigma$ to become non-zero there. This in turn causes $\rho$ to become non-zero. As $\partial_t\rho>0$, we find that $\rho$ will inevitably diverge (indeed, once $\rho>0$ the inequality $\sqrt{2}\partial_t\rho \geq 3\rho^2$ forces a finite-time, Riccati-type blow-up). Further, one can see, by looking at the evolution equations for the primed spin-coefficients, that all primed spin-coefficients stay zero throughout the space-time, since $\Psi_4$ remains zero. Further, this implies that $\Psi_2=0$ everywhere and thus the Weyl invariant $I$ given by Eq.~\eref{eq:KretschmannScalar} also remains zero everywhere. These are well-known results for the propagation of a single plane gravitational wave in Minkowski space-time, see \cite{griffiths2016colliding} for an overview (in a different gauge). The case of expanding de Sitter initial data, with non-vanishing $\lambda$ and again choosing $F(t,z)=0$, is quite different. A non-zero $\Psi_0$ leads to a non-zero $\sigma$ as before, but now a non-zero $\sigma$ leads to a non-zero $\sigma'$ as well as a non-zero $\rho$. This non-zero $\sigma'$ then makes $\rho'$ and even $\Psi_4$ non-zero, implying the non-linear back-reaction effect is realized. This in turn leads to a non-zero Weyl invariant. The added complexity of the non-zero $\lambda$, which couples all system variables together in a complicated, non-linear way, prevents us from drawing conclusions analogous to those in the Minkowski case, emphasising the need for numerics. \subsection{Numerical analysis} We now fix $\lambda=3$ and choose the boundary conditions to be \begin{equation} \Psi_4(t,-1) = 0,\qquad \Psi_0(t,1) = p(v(t)), \end{equation} where $p(v)$ has the area of the wave packet as a parameter, and the change of area is realized by a change in amplitude. We perform evolutions with wave areas $a$ taking the values $1.67,\,1.6765,\,1.6769105,\,1.6769106,\,1.676912$, $1.67695,\,1.68$ for reasons that will become apparent shortly. In all these cases, once the wave has entered and subsequently left the computational domain, the space-time is fully excited in that all system variables have evolved away from their original values. For the four smallest values of $a$ we find that the space-time asymptotes back to the de Sitter space-time everywhere. This indicates that the wave has been wiped out by the accelerated expansion, already in stark contrast to the Minkowskian analogue where a future singularity is guaranteed.
Fig.~\ref{fig:Psi0andHOneWaveNoBlowup} shows a contour plot of $\Psi_0$ and $\mathcal{H}$ over the entire space-time. It is clear that $\mathcal{H}$ decreases due to the addition of the gravitational wave, but then settles back down to its original value of one. The only remaining effect after the wave has passed is the time delay between different regions of space-time, such as the left and right boundaries. \begin{figure}[H] \centering \subfloat[\centering $\Psi_0$] {{\includegraphics[width=0.5\linewidth]{Psi0ContourSingleWaveNoBlowup.png}}} \qquad \subfloat[\centering $\mathcal{H}$] {{\includegraphics[width=0.5\linewidth]{HContourSingleWaveNoBlowup.png}}} \caption{Contour plots of $\Psi_0$ and $\mathcal{H}$ plotted with respect to the semi-invariant $T$ and $Z$ coordinates where $a=1.67$. The dashed line represents the last timeslice and the crossed lines are $u=0$ and $v=0$.} \label{fig:Psi0andHOneWaveNoBlowup} \end{figure} To see how the representation of the null directions $l^a$ and $n^a$ in the coordinate basis changes during the simulation, we look at the metric functions $A$ and $B$. It is seen that $A\rightarrow0$ as in the exact de Sitter case, representing the exponential expansion, and although initially $B$ increases to some value less than one, it asymptotes back to zero. Notably, the rate at which $A\rightarrow0$ and $B\rightarrow0$ causes the $\textrm{d}t\textrm{d}z$ metric coefficient to asymptote to a constant non-zero value and the $\textrm{d}z^2$ metric coefficient to diverge to positive infinity. The fact that $A$ and $B$ never actually reach zero implies our gauge remains regular. \begin{figure}[H] \centering \includegraphics[width=0.5\linewidth]{NullVec1.png} \caption{Two diagrams showcasing how the vectors $l^a$, $n^a, \partial^a_t$ and $\partial^a_z$ behave on the right boundary when a future singularity occurs.} \label{fig:RightBoundaryDiagram} \end{figure} For the four largest values of $a$ we find that the simulation crashes after some time due to $A\rightarrow0$ and $B\rightarrow1$ in finite time on the right boundary, the same as in the Minkowskian case. Fig.~\ref{fig:RightBoundaryDiagram} shows how this affects the relevant frame vectors there, where we note the relationships \begin{equation} t^a := \partial_t^a = \frac{1}{\sqrt{2}}\Big{(}l^a + n^a\Big{)},\qquad z^a := A \partial_z^a = \frac{1}{\sqrt{2}}\Big{(}(1-B)l^a - (1+B)n^a\Big{)}, \end{equation} where $t^a$ and $z^a$ are normalised. The left diagram is with respect to the $\{l^a,n^a\}$ null basis defined in the tangent space and exemplifies the fact that $t^a = \partial_t^a$ and is always normalised to one. It also showcases that the evolution of $z^a$ can cause trouble. This can be seen by noting that as $B\rightarrow1$ the $z=\;$constant surfaces become characteristic. The right diagram looks at another potential issue, this time in our $(T,Z)$-coordinates. In this case $t^a$ is no longer given by a vertical line, but $l^a$ and $n^a$ remain as lines with slope $\pm1$ from the definition of $u$ and $v$. The ``shrinking'' of $n^a$ and the ``growing'' of $l^a$ is due to both coefficients of $n^a$ in the coordinate basis approaching zero, and enforces that $t^a$ is proportional to the sum of the two and that their normalisation conditions are maintained. This behaviour affects the expansion rate $\mathcal{H}$, which we find decreases and in fact diverges to $-\infty$ on the right boundary, as shown in Fig.~\ref{fig:IandHOneWaveBlowup}.
\begin{figure}[H] \centering \subfloat[\centering $I$] {{\includegraphics[width=0.5\linewidth]{IContourSingleWaveBlowup.png}}} \qquad \subfloat[\centering $\mathcal{H}$] {{\includegraphics[width=0.5\linewidth]{HContourSingleWaveBlowup.png}}} \caption{Contour plots of $I$ and $\mathcal{H}$ plotted with respect to the semi-invariant $T$ and $Z$ coordinates where $a=3$. The dashed line represents the last timeslice.} \label{fig:IandHOneWaveBlowup} \end{figure} The features discussed above indicate that for these larger wave areas, the expansion rate of the space-time is not strong enough to overcome the contractivity of the wave, and a future singularity is formed. In the Minkowski case the analogue is a \emph{fold singularity} as discussed in Sec.~\ref{sec:AnalysisOfSingleWave}. In our de Sitter case, Fig.~\ref{fig:IandHOneWaveBlowup} and Fig.~\ref{fig:IAlongRightBoundary} show that the Weyl invariant $I$ is diverging on the right boundary (and similarly close to the right boundary), unlike the Minkowski case, supporting its classification as a curvature singularity. \begin{figure}[H] \centering \includegraphics[width=0.5\linewidth]{I_AlongBoundary_a3p0.png} \caption{The Weyl invariant $I$ along the right boundary with $a=3$ for multiple $z$-resolutions which all fall within the same drawn curve.} \label{fig:IAlongRightBoundary} \end{figure} A final note is that changing the polarization of the wave, implemented by replacing $p(x)$ with $e^{i\phi}p(x)$ for some real constant $\phi$, does not affect the expansion rate or Weyl invariant as seen in Fig.~\ref{fig:Psi0andHOneWaveNoBlowup}, Fig.~\ref{fig:IandHOneWaveBlowup} and Fig.~\ref{fig:IAlongRightBoundary}. \subsection{Critical behaviour}\label{sec:criticalbehaviour} An obvious question arises: what is the critical behaviour when the ingoing wave has the critical wave area $a_c$ that separates these two distinct futures? One can obtain $a_c$ by using a simple binary search (sketched below). For $\lambda=3$ this is found to be $1.67691055 < a_c < 1.67691056$. Fig.~\ref{fig:rhosigma_AlongRightBoundary} shows $\rho$ and $\sigma$ along the right boundary for various wave areas close to $a_c$. It is clear that as $a\rightarrow a_c$ an interval appears where $\rho$ and $\sigma$ are constant in time and the interval becomes longer the closer $a$ is to $a_c$. This indicates that a special critical behaviour may exist. \begin{figure}[H] \centering \subfloat[\centering $\rho$] {{\includegraphics[width=0.5\linewidth]{rhoAlongRightBoundary.png}}} \qquad \subfloat[\centering $\sigma$] {{\includegraphics[width=0.5\linewidth]{sigmaAlongRightBoundary.png}}} \caption{Plots of $\rho$ and $\sigma$ along the right boundary for different wave areas close to $a_c$. The curves corresponding to the first four values of $a$ from smallest to largest are the curves asymptoting back to their initial values from left to right. The curves corresponding to the larger four values of $a$ from smallest to largest are the curves which diverge from right to left.} \label{fig:rhosigma_AlongRightBoundary} \end{figure} All system variables except $A$ become constant in a finite $t$-interval on the boundary which becomes larger the closer to $a_c$ we take our wave area, and $\mu,\rho,\rho',\sigma,\sigma'$ take on values different from their initial ones. This implies a steady state solution, different from the de Sitter space-time.
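For reference, the bisection used to bracket $a_c$ can be sketched as follows. Here \texttt{forms\_singularity} is a hypothetical stand-in for a full run of the evolution at the given wave area (reporting whether the run ends in a future singularity); it is not a function of the actual code:
\begin{verbatim}
def bracket_critical_area(a_lo, a_hi, forms_singularity, tol=1e-8):
    """Bisect for the critical wave area a_c in (a_lo, a_hi)."""
    # Precondition: a_lo disperses back to dS, a_hi collapses.
    assert not forms_singularity(a_lo) and forms_singularity(a_hi)
    while a_hi - a_lo > tol:
        a_mid = 0.5 * (a_lo + a_hi)
        if forms_singularity(a_mid):
            a_hi = a_mid      # still collapses: a_c lies below a_mid
        else:
            a_lo = a_mid      # still disperses: a_c lies above a_mid
    return a_lo, a_hi         # e.g. 1.67691055 < a_c < 1.67691056
\end{verbatim}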
It turns out we can solve for unconstrained steady state solutions (but with $A$ a function of time) algebraically by setting all time derivatives except that of $A$ to zero in our evolution system, as well as taking $\Psi_0 = \Psi_4 = F = 0$. One of these solutions is found to match the values we see numerically. However, this exact solution \emph{does not} satisfy the constraint equations, and is thus a ``false'' steady state. This can be seen explicitly during our evolution, by noticing that the constraints do not converge and are wildly violated during this steady state period, see Fig.~\ref{fig:BifurcationConstraintViolation}. This is a consequence of our free evolution scheme, which, by definition, is ``free'' from enforcing the constraints to be satisfied. It is found that the only free steady state solution (with $A$ varying in time) that also satisfies the constraints in the case of a positive cosmological constant with $\Psi_0 = \Psi_4 = F = 0$ is the de Sitter space-time. \begin{figure}[H] \centering \includegraphics[width=0.65\linewidth]{ConstraintViolationOneWaveBifurcation} \caption{A convergence test for the constraint $C_1$ along the right boundary for the case of a single wave where $a=1.6769105$ and $\lambda=3$.} \label{fig:BifurcationConstraintViolation} \end{figure} The fact that no critical behaviour exists for this wave profile ansatz will be important when attempting to find a solution where the expansion is halted with gravitational radiation, and will be discussed in detail in Sec.~\ref{sec:SupressingExpansion}. \subsection{An impulsive wave}\label{sec:OneImpulsiveWave} Many analytical solutions describing gravitational waves in the literature have an impulsive wave profile, i.e. $\Psi_0 = \delta(v)$ where $v$ is a null coordinate, which is a consequence of the cut-and-paste method of Penrose \cite{penrose1972geometry}. An example is the propagation of a single impulsive gravitational plane wave with $\lambda=0$ given by Eq.~\eref{eq:OneImpulsiveWave}, where $\Psi_2 = 0 = \Psi_4$. To date, an exact solution for a single propagating plane gravitational wave with $\lambda>0$ has not been found. One cannot use Penrose's cut-and-paste method to find such a solution because this leads to wavefronts that are spherical or hyperboloidal when $\lambda>0$ or $\lambda<0$ respectively \cite{podolsky2019cut}. Thus, to try to shed some light on a possible analytic solution, we numerically evolve our system with $\lambda>0$ and with one ingoing wave, whose wave profile approximates the Dirac delta function. We set $\Psi_0(v,1) = q(v)$ where \begin{equation}\label{eq:qBC} q(x) := \cases { a\sin(bx)^8 & $\displaystyle0<x<\frac{\pi}{b}$ \cr 0 & otherwise }, \end{equation} where $b = 35\pi a / 128$, so that $q$ has unit area for every $a$ while its support $[0,\pi/b]$ shrinks as $a$ grows, and hence $q(x)$ has the property that $\displaystyle\lim_{a\rightarrow\infty}q(x)=\delta(x)$. We also change our gauge and fix $F$ by the condition that $\partial_t B=0$. This matches the gauge of the exact solution given by Eq.~\eref{eq:OneImpulsiveWave} and yields $F = \rho' - \rho$. We choose $a=128,\,256,\,512,\,1024$, populate our spatial interval $z\in[-1,1]$ with $6401$ equi-distant points to accurately resolve these steep wave profiles and choose $\lambda=0.6$ and $\lambda=1.2$ to exemplify futures that do and do not have a singularity respectively. For $\lambda=1.2$, to see the effect of the limit $a\rightarrow\infty$, Fig.~\ref{fig:ImpulsiveWaveAlongz0Lambda0p2} shows the Weyl components along $z=0$. These seem to indicate that in this limit, they all vanish for $v>0$ along $z=0$.
By inspection it is clear that this happens along any $z=\;$constant curve once the wave has passed and thus in the whole region $v>0$. Further, all system variables asymptote back to dS after the wave has passed, and no singularity is formed. Note the numerical error in Fig.~\ref{fig:ImpulsiveWaveAlongz0Lambda0p2} (a) just before $t=1$. This is due to the steep wave profile and its interaction with the left boundary propagating back into the computational domain. This phenomenon is discussed in detail in Sec.~\ref{sec:CollidingImpulsiveWaves}. \begin{figure}[H] \centering \subfloat[\centering $\Psi_0$] {{\includegraphics[width=0.33\linewidth]{psi0_OneImpulsiveWave_Alongz0_Lambda0p2.png}}} \qquad \subfloat[\centering $\Psi_2$] {{\includegraphics[width=0.33\linewidth]{psi2_OneImpulsiveWave_Alongz0_Lambda0p2.png}}} \qquad \subfloat[\centering $\Psi_4$] {{\includegraphics[width=0.33\linewidth]{psi4_OneImpulsiveWave_Alongz0_Lambda0p2.png}}} \caption{Plots along $z=0$ of $\Psi_0, \Psi_2$ and $\Psi_4$ as $a\rightarrow\infty$ with $\lambda=1.2$.} \label{fig:ImpulsiveWaveAlongz0Lambda0p2} \end{figure} Fig.~\ref{fig:ImpulsiveWaveAlongRightBoundaryLambda0p10p2} shows the $\Psi_2$ and $\Psi_4$ components along the right boundary for $\lambda=0.6$ and $\lambda=1.2$, where a future singularity is formed when $\lambda=0.6$. \begin{figure}[H] \centering \subfloat[\centering $\Psi_2$ for $\lambda=0.6$] {{\includegraphics[width=0.5\linewidth]{psi2_OneImpulsiveWave_AlongRightBoundary_Lambda0p1.png}}} \qquad \subfloat[\centering $\Psi_2$ for $\lambda=1.2$] {{\includegraphics[width=0.5\linewidth]{psi2_OneImpulsiveWave_AlongRightBoundary_Lambda0p2.png}}} \\ \subfloat[\centering $\Psi_4$ for $\lambda=0.6$] {{\includegraphics[width=0.5\linewidth]{psi4_OneImpulsiveWave_AlongRightBoundary_Lambda0p1.png}}} \qquad \subfloat[\centering $\Psi_4$ for $\lambda=1.2$] {{\includegraphics[width=0.5\linewidth]{psi4_OneImpulsiveWave_AlongRightBoundary_Lambda0p2.png}}} \caption{Plots along $z=1$ of $\Psi_2$ and $\Psi_4$ as $a\rightarrow\infty$ with $\lambda=0.6$ and $\lambda=1.2$.} \label{fig:ImpulsiveWaveAlongRightBoundaryLambda0p10p2} \end{figure} As in the $\lambda=0$ case, this is a curvature singularity. It is much easier to see this in the $\lambda > 0$ case as the Weyl invariant $I$ diverges to positive infinity. \section{Two waves} We now present results pertaining to the scattering of two collinearly polarized gravitational waves. The setup is analogous to the single wave case of Sec.~\ref{sec:SingleWave} with the exception of the boundary condition for $\Psi_4$, which is now taken to be $\Psi_4(t,-1) = p(u(t))$. We continue to use the gauge $F=\rho'-\rho$ which corresponds to the gauge used in the Khan-Penrose solution for colliding collinearly polarized impulsive gravitational plane waves with $\lambda=0$ \cite{khan1971scattering}. It is found that many features are similar to the case of one wave. \subsection{Comparison against $\lambda=0$} The general behaviour can be explained by looking at contour plots of $I$ in Fig.~\ref{fig:CollidingWavesI} for varying $\lambda$ (so that we can see how $\lambda>0$ differs from $\lambda=0$) and fixing $a=1$ in the wave profiles. If $\lambda$ is small enough ($\lambda=0$ or $\lambda=0.06$), we obtain a future curvature singularity. As $\lambda$ gets larger ($\lambda=0.6$), the expansion increases the time before this singularity occurs.
If we increase $\lambda$ more ($\lambda=6$), we get to the situation where the expansion has wiped out the waves and the effect of their scattering on the curvature, and we asymptote back to dS again. \begin{figure}[H] \centering \subfloat[\centering $\lambda=0$] {{\includegraphics[width=0.5\linewidth]{I_Lambda0.png}}} \qquad \subfloat[\centering $\lambda=0.06$] {{\includegraphics[width=0.5\linewidth]{I_Lambda0p01.png}}} \\ \subfloat[\centering $\lambda=0.6$] {{\includegraphics[width=0.5\linewidth]{I_Lambda0p1.png}}} \qquad \subfloat[\centering $\lambda=6$] {{\includegraphics[width=0.5\linewidth]{I_Lambda1.png}}} \caption{Penrose-Carter contour plots of the Weyl invariant $I$ for the case of colliding waves with varying $\lambda$.} \label{fig:CollidingWavesI} \end{figure} Fig.~\ref{fig:CollidingWavesH} shows the expansion rate $\mathcal{H}$ decreasing the most in the centre of the collision, $u=v$, where the Weyl invariant $I$ attains a local maximum (in time and space). \begin{figure}[H] \centering \subfloat[\centering $\lambda=0.06$] {{\includegraphics[width=0.5\linewidth]{ContourPlotHTwoCollidingWavesLambda0p01}}} \qquad \subfloat[\centering $\lambda=6$] {{\includegraphics[width=0.5\linewidth]{ContourPlotHTwoCollidingWavesLambda1}}} \caption{Penrose-Carter contour plots of the expansion rate $\mathcal{H}$ for the case of colliding waves with varying $\lambda$.} \label{fig:CollidingWavesH} \end{figure} \subsection{Critical behaviour}\label{sec:criticalbehaviourtwowaves} Now we fix $\lambda=0.6$ and see how varying the area of the wave profile $a$ affects the evolution. We find the following three scenarios, where $a_1$ and $a_2$ are given later in the section, and are found with binary search: \begin{itemize} \item[1] $a\lessapprox a_1$: asymptote back to dS. \item[2] $a_1\lessapprox a \lessapprox a_2$: $\mu\rightarrow\infty$ but $I\rightarrow0$. \item[3] $a\gtrapprox a_2$: $\mu\rightarrow\infty$ and $I\rightarrow\infty$. \end{itemize} Only in case 3 do $\rho,\,\rho',\,\sigma,\,\sigma'$ diverge; in the other two they asymptote back to their initial values. Due to the evolution equation $\sqrt{2}\partial_tA = (\mu + \bar{\mu})A$, in cases 2 and 3 we have that $A\rightarrow\infty$ also, so that the $t,z$ portion of the line element approaches $\textrm{d}t^2$, i.e., an infinite contraction in the $z$-direction. This is represented in $l^a$ and $n^a$ as shown in Fig.~\ref{fig:AToInfinityDiagram}, where the $t=\;$constant surfaces approach being null. Further, as we discovered in Sec.~\ref{sec:general-setup}, the real part of $\mu$ is essentially the acceleration of the unit conormal to the $z=\;$constant surfaces and the fact that this acceleration diverges to negative infinity agrees with the contraction in this direction. We are in the Gau\ss\; gauge, and along spatially constant curves, which are in this case geodesics, the proper time and the time $t$ are equivalent. Our gauge can then be thought of as adapted to free falling observers. This lends a physical interpretation to the caustic singularity in case 2. The three possible futures occurring after the interaction of the gravitational waves with these observers can then be described as follows: \begin{itemize} \item Case 1: The gravitational contraction is not strong enough to cause the timelike geodesics to converge or the curvature to diverge. \item Case 2: The gravitational contraction is strong enough to cause the timelike geodesics to converge and create a coordinate singularity.
However, it is not strong enough to cause the curvature to diverge, and the curvature returns to zero. \item Case 3: The gravitational contraction is strong enough to cause both the timelike geodesics to converge and the curvature to diverge, resulting in a physical curvature singularity. \end{itemize} \begin{figure}[H] \centering \includegraphics[width=0.3\linewidth]{NullVec2.png} \caption{The effect of $A\rightarrow\infty$ as $t\rightarrow\infty$ on the null vectors along a $z=\;$constant curve.} \label{fig:AToInfinityDiagram} \end{figure} It is noted that in the gauge $B=0$ the characteristic speeds of the waves are $\pm A$. In the cases where $A\rightarrow\infty$ we decrease the CFL number $c$ dynamically to avoid instabilities and settle for smaller timesteps instead. As we now have \emph{two} bifurcations, which we call $a_1$ and $a_2$, it remains to be seen whether these will have critical behaviours. We find, again using binary search, that $a_1 \approx 0.852548$. Unlike in Sec.~\ref{sec:criticalbehaviour}, the constraints do not diverge as the wave area $a$ approaches $a_1$. Fig.~\ref{fig:HIMuCrit} and Fig.~\ref{fig:AHAlongz0MuCrit} show that the expansion rate drops to around $25\%$ of its original value at its minimum, for a long time, before asymptoting back to dS again. This implies that we can cause, with just two colliding waves, the expansion rate to locally decrease substantially for a certain period, without causing a future singularity. Further, it is noticed that although $\mu$ differs substantially in the above cases, $\rho, \rho', \sigma$ and $\sigma'$ change very little, and if drawn differ by an amount smaller than the width of the drawn curve. \begin{figure}[H] \centering \subfloat[\centering $\mathcal{H}$] {{\includegraphics[width=0.5\linewidth]{HContourCollidingWavesMuCrit.png}}} \qquad \subfloat[\centering $I$] {{\includegraphics[width=0.5\linewidth]{IContourCollidingWavesMuCrit.png}}} \caption{The expansion rate $\mathcal{H}$ and Weyl invariant $I$ with $a\approx a_1$ and $\lambda=0.6$.} \label{fig:HIMuCrit} \end{figure} \begin{figure}[H] \centering \subfloat[\centering $A$] {{\includegraphics[width=0.4\linewidth]{A_CollidingWavesAlongz0MuCrit.png}}} \qquad \subfloat[\centering $\mathcal{H}$] {{\includegraphics[width=0.4\linewidth]{H_CollidingWavesAlongz0MuCrit.png}}} \caption{The metric function $A$ and the expansion rate $\mathcal{H}$ with $\lambda=0.6$ along $u=v$, i.e. $z=0$, for multiple values of $a$ close to $a_1$.} \label{fig:AHAlongz0MuCrit} \end{figure} We find, again using binary search, that $a_2\approx0.9595$, and find that taking $a$ close to this value results in the constraints remaining well behaved. Fig.~\ref{fig:IAlongz0ICrit} shows that the Weyl invariant $I$ diverges for $a>a_2$, goes to zero for $a<a_2$ and goes to some other value when $a\approx a_2$. In all these cases $\mu$ diverges to infinity and thus so does $A$. This implies that to maintain a stable evolution our timestep must decrease to compensate, and the simulations shown in Fig.~\ref{fig:IAlongz0ICrit} stop when the timestep becomes smaller than $10^{-8}$. It is likely that the simulation with $a=0.9595$ does not actually converge to a constant value other than zero; rather, we cannot march far enough in time to see $I$ either diverge to infinity or approach zero.
\begin{figure}[H] \centering \subfloat[] {{\includegraphics[width=0.4\linewidth]{I_CollidingWavesAlongz0ICrit.png}}} \qquad \subfloat[] {{\includegraphics[width=0.4\linewidth]{Crit_I_a0p9595.png}}} \caption{(a) The Weyl invariant $I$ with $\lambda=0.6$ along $u=v$, i.e. $z=0$, for multiple values of $a$ close to $a_2$ and (b) a contour plot of $I$ for $a=0.9595$.} \label{fig:IAlongz0ICrit} \end{figure} \subsection{Impulsive waves}\label{sec:CollidingImpulsiveWaves} As in Sec.~\ref{sec:OneImpulsiveWave} we mimic the Dirac delta function wave profiles of the $\lambda=0$ solutions. For colliding waves, this is when $\Psi_0 = \delta(v)$ and $\Psi_4 = \delta(u)$. We thus choose our wave profiles as $\Psi_0(v,1) = q(v)$, $\Psi_4(u,-1) = q(u)$, where $q(x)$ is given in Eq.~\eref{eq:qBC} and approximates the Dirac delta function. Our results in Sec.~\ref{sec:criticalbehaviourtwowaves} indicate that we should explore three possible regions, namely regions where we asymptote back to dS, $\mu$ diverges but not $I$, and where $I$ diverges. These still exist for the approximately impulsive wave profiles and are exemplified by choosing $\lambda=6,\,0.72$ and $0.06$ respectively. \begin{figure}[H] \centering \subfloat[\centering $\Psi_2$ for $\lambda=0.06$] {{\includegraphics[width=0.5\linewidth]{psi2_CollidingImpulsiveWaves_Alongz0_Lambda0p01.png}}} \qquad \subfloat[\centering $\Psi_4$ for $\lambda=0.06$] {{\includegraphics[width=0.5\linewidth]{psi4_CollidingImpulsiveWaves_Alongz0_Lambda0p01.png}}} \\ \subfloat[\centering $\Psi_2$ for $\lambda=0.72$] {{\includegraphics[width=0.5\linewidth]{psi2_CollidingImpulsiveWaves_Alongz0_Lambda0p12.png}}} \qquad \subfloat[\centering $\Psi_4$ for $\lambda=0.72$] {{\includegraphics[width=0.5\linewidth]{psi4_CollidingImpulsiveWaves_Alongz0_Lambda0p12.png}}} \\ \subfloat[\centering $\Psi_2$ for $\lambda=6$] {{\includegraphics[width=0.5\linewidth]{psi2_CollidingImpulsiveWaves_Alongz0_Lambda1.png}}} \qquad \subfloat[\centering $\Psi_4$ for $\lambda=6$] {{\includegraphics[width=0.5\linewidth]{psi4_CollidingImpulsiveWaves_Alongz0_Lambda1.png}}} \caption{Plots along $u=v$, i.e. $z=0$, of $\Psi_2$ and $\Psi_4$ as $a\rightarrow\infty$ with $\lambda=0.06,\,0.72$ and $6$.} \label{fig:CollidingImpulsiveWavesAlongz0} \end{figure} Fig.~\ref{fig:CollidingImpulsiveWavesAlongz0} shows $\Psi_2$ and $\Psi_4$ over time along $u=v$ for the different values of $\lambda$. In particular, we see that they do not converge to zero for $u,v>0$ as $a\rightarrow\infty$, in contrast to the case of one wave. This is to be expected from comparison with the Khan-Penrose solution, which already has non-vanishing $\Psi_0,\,\Psi_2$ and $\Psi_4$ in the region after scattering, as well as from a theorem by Szekeres \cite{szekeres1965gravitational}. Of particular note is the abrupt change in sign of the first time derivative of $\Psi_4$ for $\lambda=0.72$. This sharp turn, which is smooth with a small enough timestep, does not appear this distinctly in any other system variables, except for $\Psi_0$ due to symmetry. Fig.~\ref{fig:Psi4ContourCollidingWaves} shows that this turning point occurs not only at some point along $u=v$ but along an entire null surface which follows the characteristic of $\Psi_4$ from the point where the left boundary hits $v=0$. This is the result of the vanishing boundary condition for $\Psi_4$ on the left boundary being in disagreement with the non-vanishing $\Psi_4$ tail generated by $\Psi_0$ as it passes through the boundary.
While at first sight it makes sense to impose a no ingoing radiation condition, this is blatantly unphysical when the evolution itself creates ingoing modes. Between the boundary condition and the evolution equation it is the latter which is fundamental. The boundary condition is nearly completely free to choose and is put in ``by hand''. A common question in a non-linear regime with boundaries that contain both ingoing and outgoing modes is then: How does one make the ``corner condition'' consistent, i.e. the physical compatibility between the data on a timeslice induced via evolution and the boundary data, so as to yield a physically meaningful result? The answer is simply that there is no clear way to prescribe boundary conditions that match the values in the interior unless one already has an exact solution. \begin{figure}[H] \centering \includegraphics[width=0.5\linewidth]{Psi4ContourLambda0p12a128.png} \caption{A contour plot of $\Psi_4$ for $a=128$ and $\lambda=0.72$ in the case of colliding impulsive waves.} \label{fig:Psi4ContourCollidingWaves} \end{figure} \section{Suppressing the expansion with a train of waves}\label{sec:SupressingExpansion} In \cite{tsamis2014classical}, the rate of change of an expansion rate parameter $H_{TW}$ with respect to time on a space-like initial value surface (IVS) was calculated to be $N(H^2 - (1/3)K^{ab}K_{ab})$, where $N$ is the lapse in their coordinate system, $K_{ab}$ is the extrinsic curvature to the IVS and $H$ is as per our definition of dS in inflationary coordinates. They hypothesize that there should be no reason why initial data cannot be chosen to satisfy $K^{ab}K_{ab} > 3H^2$ so that the expansion is slowed down and even completely halted\footnote{$N$ is a lapse and should always be positive.}. We can investigate this numerically without solving the constraints by simply choosing dS initial data together with a variety of boundary conditions and seeing how the space-time evolves. We thus explore how a train of waves, generated by choosing the boundary conditions for $\Psi_0$ and $\Psi_4$ appropriately, might accomplish this. To do so, we fix $\lambda=0.6$ and define a new function \begin{equation}\label{eq:streambc} p_{stream}(x) = \cases { 32a\cos(c\,x^2)^8\sin(b\,x)^8 & $\displaystyle0<x<\sqrt{\frac{\pi}{2c}}$ \cr 0 & otherwise }, \end{equation} where $a=0.894,\,b=3129\pi/128000,\,c=1/3$ and we choose $\Psi_0(v,1) = p_{stream}(v),\,\Psi_4(u,-1) = p_{stream}(u)$. These constants were chosen through trial and error to give the largest decrease in the expansion while maximizing the interval of time over which this occurred, before either a singularity is formed or the space-time starts to approach dS again. The cosine factor has the effect of decreasing the amplitude of the wave until it completely vanishes at $c\,x^2=\pi/2$. This is to hold off a future singularity forming, while still decreasing the expansion rate $\mathcal{H}$. \begin{figure}[H] \centering \subfloat[$\mathcal{H}$] {{\includegraphics[width=0.5\linewidth]{HContourStream.png}}} \qquad \subfloat[$I$] {{\includegraphics[width=0.5\linewidth]{IContourStream.png}}} \caption{Our expansion rate $\mathcal{H}$ and the Weyl invariant $I$ where boundary conditions were chosen using Eq.~\eref{eq:streambc} and with $\lambda=0.6$.} \label{fig:HIContourStream} \end{figure} Fig.~\ref{fig:HIContourStream} shows the expansion rate $\mathcal{H}$ and Weyl invariant $I$ as contour plots.
We see that the expansion rate slowly declines across the entire spatial domain and, after a long time ($t\approx14$), a coordinate singularity forms as in case 2. The Weyl invariant clearly shows where the collision regions are, and after a time the waves begin to drag more and more curvature along with them as a tail. \begin{figure}[H] \centering \includegraphics[width=0.5\linewidth]{HAlongz0.png} \caption{Our expansion rate $\mathcal{H}$ along $z=0$ where boundary conditions were chosen using Eq.~\eref{eq:streambc} and with $\lambda=0.6$.} \label{fig:HStreamAlongZ0} \end{figure} Fig.~\ref{fig:HStreamAlongZ0} shows for how long we can decrease the expansion while holding off the formation of a singularity. Note that along a spatially constant curve the simulation time $t$ is the proper time of a free falling observer along this curve, and so when we talk about trying to maximize the length of time before a singularity occurs, it is inherently physical. In previous sections we have found that if a singularity was to form after a collision of two waves (with our wave profile), this happens within a few units of $t$. Here, however, we can decrease the expansion considerably, without forming a singularity, for up to $t=13$. We cannot, however, find boundary conditions that lower the expansion rate to zero without very quickly forming a singularity. It is certainly possible that such finely tuned boundary conditions exist, but our studies suggest that they would be very special. \section{Summary and discussion} \label{sec:summary} In this paper we have put forward the full non-linear Einstein equations with cosmological constant and non-vanishing energy-momentum with the assumption of plane symmetry. These equations were realized through the Newman-Penrose formalism and the imposition of the Friedrich-Nagy gauge, leading to a well-posed initial boundary value problem with timelike boundaries. We specialized to vacuum where $\lambda>0$ and chose initial data to be that induced by the de Sitter space-time in inflationary coordinates. This allowed the exploration of how this space-time is affected by gravitational perturbations, which we generated through appropriate boundary conditions for $\Psi_0$ and $\Psi_4$. It was found that when only one of the waves was non-vanishing the space-time either wiped out the wave via expansion, or the wave was too strong and a future singularity was produced. The bifurcation was studied and did not produce any critical behaviour. The wave profile was taken to approximate the Dirac delta function to analogize with a known exact solution for $\lambda=0$. With both waves non-vanishing, and in the physically motivated Gau\ss\; gauge, we found three distinct situations: The waves were not strong enough to cause a contraction of our timelike curves to create a singularity, a coordinate singularity is formed but the curvature remains finite, or a curvature singularity is formed. The second case shows that we can create a singularity where the Weyl invariant $I$ does not diverge, but our expansion parameter diverges to negative infinity along, and close to, the surface $u=v$. The critical behaviour of the two bifurcations separating these futures was explored. Impulsive wave profiles were approximated and it was shown that two bifurcations occur in this case as well. We encountered two numerical pitfalls during our exploration. Firstly, our free evolution resulted in a false steady state solution close to the bifurcation of the single wave case.
As we chose our wave area closer to the bifurcation value, our free evolution approached a steady state (while $A$ was still evolving in time) that did not satisfy the constraints. This happened even though the constraints were satisfied and converged above and below this critical value, showing how careful one must be in monitoring constraints during a free evolution. Secondly, we found that the combination of the evolution system and our non-radiating boundary conditions became unphysical in the colliding wave case after the waves left the computational domain through the boundaries. This was due to the backreaction of the waves creating tails of ingoing radiation, at odds with the boundary conditions. This is, however, independent of the fact that our system is well-posed and numerically stable. The question as to how one could ``guess'' the right boundary conditions is delicate and creates a problem that all non-linear simulations, in particular in numerical relativity, face. We presented how the above situations affected the local expansion rate $\mathcal{H}$, which was taken to be the mean extrinsic curvature of our timeslices up to a constant factor. It was shown that for the case of two waves colliding, we could decrease $\mathcal{H}$ substantially for a long period of time, where the cut-off was determined by numerical limitations, before the space-time asymptoted back to dS. We could do a similar thing with a continuous stream of waves, making the expansion rate drop more uniformly over the computational domain. This showcased the potential to lower the expansion rate over a wider spatial interval. Although we were not able to find boundary conditions that completely halted expansion for a period before either asymptoting back to de Sitter space-time or forming a singularity, we could still lower it substantially for a long time. Even so, our results do not violate the hypothesis of Tsamis and Woodard, namely that our universe may be in an unstable gravitationally bound state. It would be interesting to see whether, with further testing, we may be able to find boundary conditions that do completely halt expansion for a period. Now that the exploration of the behaviour of plane gravitational waves with $\lambda>0$ has started and details have been uncovered, it would also be interesting to see whether one can use the results as hints toward an exact solution for impulsive waves. For the case of one propagating impulsive wave, knowing that the Weyl components vanish in the region after the wave has passed should already be a good start. \section*{References} \bibliographystyle{elsarticle-num}
\section{Introduction} 29P/Schwassmann-Wachmann 1 (SW1) is a continuously active Centaur at the inner cusp of the Centaur-to-Jupiter-Family transition region and presents a rare opportunity to investigate activity drivers and ongoing material processing that occurs in a region too cold for vigorous water-ice sublimation. Recent dynamical simulations have shown that its current nearly-circular trans-Jovian orbit (eccentricity, semi-major axis and perihelion distance, respectively: $e$ = 0.04, $a$ = 6.03 au and $q$ = 5.77 au)\footnote{Minor Planet Circular (MPC) 111773.} is typical for Centaurs in a short-lived transitional ``gateway'' from the outer solar system to the Jupiter-family comets (JFCs) population \citep{sarid_2019}. Interestingly, despite SW1's modest variation in energy input from the Sun, it frequently undergoes major outbursts superimposed on its normally-present background, or ``quiescent'', coma \citep{whipple_1980, larson_1980, jewitt_1990_sw1, 2010MNRAS.409.1682T, 2013Icar..225..111K, hosek_2013, miles_2016, schambeau_2017, schambeau_2019}. Additionally, the CO-production rate during periods of quiescent activity is more similar to long-period comets at similar heliocentric distances than JFCs \citep{bauer_2015, kacper_2017, womack_2017, bockelee_2021}, and its dust outbursts may be uncorrelated with large fluctuations of its CO outgassing rate \citep{wierzchos_2020}. Questions thus naturally arise: what activity drivers explain its enigmatic activity, and do all JFCs experience a period of similar behavior while in the gateway region? Are SW1's activity behaviors reflective of outer solar system materials being thermally activated in the gateway, after a long period of cryogenic storage? Or are they a property intrinsic to SW1 alone? In 2015, we reported a new analysis of 2003 November {\it Spitzer} Infrared Array Camera (IRAC) 5.8 $\mu$m \& 8.0 $\mu$m and Multiband Imaging Photometer (MIPS) 24 $\mu$m \& 70 $\mu$m imaging, originally published by \cite{stansberry_2004}. Using a new Spitzer data pipeline and intensive image processing techniques, the 2015 paper presented a new nucleus radius, beaming parameter, and infrared geometric albedo of SW1 \citep{schambeau_2015}. Subsequently, we determined that the {\it Spitzer} ``blue'' (i.e., 16 $\mu$m) images obtained in the 2003 dataset have a sufficiently well-detected coma for analysis, modeling, and removal, and thus can provide new physical insights and constraints to SW1 models. Here, for the first time, we present the {\it Spitzer} 16 $\mu$m images, and analyze them in the context of the 5.8 $\mu$m, 8.0 $\mu$m, 24 $\mu$m, and 70 $\mu$m data. We describe relevant observational details of the UT 2003 Nov. epoch in Section \ref{sec:observations}. In Section \ref{sec:image_analysis}, we present a characterization of the thermal infrared emission using the 16 $\mu$m, 24 $\mu$m, and 70 $\mu$m imaging data through coma morphology analysis, estimates of the $\epsilon f \rho$ parameter, coma modeling of the dust grain size distribution and dust-production rates for micron-sized and larger grains, and derivation of a coma color temperature map. In Section \ref{sec:neatm} we present analysis of these images to provide a fifth nucleus photometry measurement at 16 $\mu$m.
Using the five spectral flux density measurements of the nucleus, we implemented the Near-Earth Asteroid Thermal Model \citep[NEATM;][]{harris_1998} to derive a new measurement of the nucleus' effective size and infrared beaming parameter ($\eta$; a proxy for nucleus surface thermal inertia and/or surface roughness). In Section \ref{sec:conclusion} we summarize our results and implications for SW1's nucleus, quiescent large grain coma, and activity state. \section{Observations} \label{sec:observations} This work analyzes the {\it Spitzer} imaging data obtained with the 16 $\mu$m IRS blue peak-up (PU) imager and the 24 $\mu$m and 70 $\mu$m MIPS channels. Here we address the observational details of the 16 $\mu$m data, and direct readers to our earlier work, \cite{schambeau_2015}, for information about the 24 $\mu$m and 70 $\mu$m images. During the {\it Spitzer} in-orbit checkout and science verification phase \citep{werner_2004_spitzer} SW1 was observed with the InfraRed Spectrograph (IRS; AORKEY: 6068992; \citealt{houck_2004_IRS}). Shortly before the IRS observations, blue-channel PU images were acquired in order to center SW1's position on the detector's ``sweet spot'' (the detector pixel location of the target's centroid peak enabling optimal alignment and centering for the IRS slit). The blue PU channel of IRS's Si:As array detector has dimensions of 44 $\times$ 31 pixels, an effective monochromatic wavelength of 15.8 $\mu$m, and an effective pixel scale of 1$''$.85/pixel in the detector X direction and 1$''$.82/pixel in the detector Y direction. A total of six independent blue PU images were acquired: three images with SW1's peak located on the center of the detector and three on the detector's sweet spot, approximately 3 pixels away from the center of the array. Level 1 basic calibrated images were downloaded from the Spitzer Heritage Archive (SHA). An example image of SW1 located on the sweet spot is shown in Figure \ref{fig:images} along with enhanced images to highlight the coma's morphology \citep{larson_1984, samarasinha_larson_2014}. Table \ref{tab:geometry} provides a summary of the observational circumstances. The coma is slightly enhanced in the south-southeast direction and has a similar morphology to that seen in the MIPS 24 $\mu$m images, suggesting that the same particles are being measured in both bandpasses. Overall, aside from the slight increase in dust emission on the south-southeast side of the coma, as indicated by the division-by-azimuthal-average enhanced image (Figure \ref{fig:images}(b)), the coma lacks any defining morphology. A faint linear feature can be seen from approximately the 1 o'clock to 7 o'clock positions.
\begin{deluxetable*}{ l c } \tablenum{1} \label{tab:geometry} \tablecaption{Observations and Geometry Summary for UT 2003 November 23} \tablewidth{0pt} \tablehead{ \colhead{Parameter} & \colhead{Value} } \startdata Observations [Start:Stop] & [(07:15:32.960):(07:17:13.409)] \\ Exposure Time Per Image & 9.44 s \\ Heliocentric Distance to SW1 ($R_H$) & 5.73 au \\ Spitzer-SW1 Distance ($\Delta$) & 5.54 au \\ Solar Phase Angle of SW1 ($\alpha$) & 10.0$^{\circ}$ \\ True Anomaly of SW1 & 342.8$^{\circ}$ \\ Position Angle of Skyplane Projected Sun Direction$^{\textrm{a}}$ & 248.2$^{\circ}$ \\ Position Angle of Skyplane Projected Heliocentric Velocity Vector$^{\textrm{a}}$ & 59.9$^{\circ}$ \\ \enddata \tablecomments{ $^{\textrm{a}}$ The position angle is measured counterclockwise from north through east.} \end{deluxetable*} \begin{figure} \gridline{ \fig{figures/1-Figure.png}{0.8\textwidth}{} } \caption{One of the {\it Spitzer} 16 $\mu$m blue PU images (with color scale black to blue to white indicating increasing surface brightness): (a) original image, (b) division by azimuthal average, (c) 1/$\rho$ profile removal, and (d) rotational shift differencing of 18$^{\circ}$ \citep{larson_1984, samarasinha_larson_2014}. Equatorial north and east are indicated. The skyplane projected directions for the Sun and SW1's heliocentric velocity vector are indicated by the yellow and red arrows, respectively. A black arrow on panel (c) indicates the slight coma enhancement in the south-southeast direction that is the south-southeast end of the 1 to 7 o'clock linear feature. \label{fig:images}} \end{figure} For reference, the filter bandpasses of the 16, 24, and 70 $\mu$m images are respectively: 13.3$-$18.7 $\mu$m, 20.8$-$26.1 $\mu$m, and 60.9$-$80.6 $\mu$m. \section{Image Analysis and Discussion} \subsection{Thermal Infrared Coma Analysis}\label{sec:image_analysis} Thermal infrared imaging of cometary dust comae allows for preferential probing of grain sizes on the order of microns and larger, such as those recorded with the IRS PU and MIPS, because smaller grains with $2\pi a/\lambda < 1$, where $a$ is the grain radius, are inefficient emitters in the infrared \citep[see][]{hanner_1994_apj, lisse_1998, lisse_2004}. Our {\it Spitzer} 16 $\mu$m, 24 $\mu$m, and 70 $\mu$m images were analyzed to characterize the continuum emission created by $\mu$m-sized and larger grains in SW1's quiescent dust coma. We note that micron- and sub-micron-sized grains also exhibit silicate emission bands between $\sim 8-13$ $\mu$m and at $\sim$ 20 $\mu$m, which probably contribute a few percent to the flux in the 24 $\mu$m images \citep[see][Figures 13 and 14]{schambeau_2015}. However, a detailed analysis of these emission features and their relatively minor impacts on the 24 $\mu$m imaging is beyond the scope of our current work. In this section we take advantage of these thermal infrared images in combination with {\it Spitzer}'s stable and well characterized point spread function (PSF) in order to accurately isolate SW1's dust coma flux contributions in each image. We assumed that the dominant grain sizes contributing to the detected flux in each band were approximately the size of their effective monochromatic bandpass wavelengths: 15.80 $\mu$m, 23.68 $\mu$m, and 71.42 $\mu$m (as used by e.g., \cite{bauer_2015, bauer_2017}).
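As a simple illustration of the emission-efficiency criterion above (a back-of-the-envelope check, not part of the reduction pipeline), the grain radius at which $2\pi a/\lambda = 1$ can be evaluated for the three effective wavelengths:
\begin{verbatim}
import numpy as np

# Grain radii below which 2 pi a / lambda < 1, i.e. grains that are
# inefficient emitters at each effective bandpass wavelength.
for lam_um in (15.80, 23.68, 71.42):
    a_min_um = lam_um / (2 * np.pi)
    print(f"{lam_um:6.2f} um band: efficient for a >~ {a_min_um:5.2f} um")
# -> roughly 2.5, 3.8 and 11.4 um, consistent with each band
#    preferentially probing grains comparable to its wavelength or larger.
\end{verbatim}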
To aid in the analysis of comae morphology, it is useful to reference an idealized ``canonical'' coma, with isotropic, steady-state emission of dust grains from the nucleus and negligible dust grain fragmentation and solar radiation pressure. This canonical coma has a surface brightness profile following a $1/\rho$ behavior (where $\rho$ is the skyplane projected cometocentric distance from the nucleus's position), and is assumed in the derivation of the often-used $A f \rho$ and $\epsilon f \rho$ parameters \citep{ahearn_1984, lisse_dust_2002, kelley_2013} that are described in more detail in Section \ref{sec:efrho}. In practice the assumptions used to derive $\epsilon f \rho$ break down for real comae, but its calculation provides a first-order estimate of coma dust production behaviors. SW1 experienced quiescent activity for at least two months surrounding the UT 2003 Nov. epoch of {\it Spitzer} observations, based on Minor Planet Center (MPC) reported magnitude measurements\footnote{Minor Planet Circulars: 49762, 49871, 49872, 49873, 50347, 50348.}, so the canonical coma assumption is reasonable for these observations. \subsubsection{16 $\mu$m and 24 $\mu$m Coma Morphology} \label{sec:coma_morphology} \begin{figure} \gridline{ \fig{figures/6-Figure.png}{0.8\textwidth}{} } \caption{Shown is a cropped version of the 24 $\mu$m image (a), along with enhanced images: (b) division by an azimuthal average, (c) 1/$\rho$ profile removal, and (d) rotational shift differencing of 18$^{\circ}$. Equatorial north and east are indicated. The sky-plane projected directions for the Sun and SW1's heliocentric velocity vector are indicated by the yellow and red arrows. The 16 $\mu$m image (Figure \ref{fig:images}(a)) is shown to highlight the difference in the fields of view between the 16 $\mu$m and 24 $\mu$m images. The large scale coma morphology shows an increased brightness in the south-west direction, possibly indicating preferential sunward emission. Also present are a more compact curved feature initially directed towards the south-southwest, curving towards the south-east, and a linear feature from 1 o'clock to 7 o'clock similar to that in the 16 $\mu$m image. \label{fig:24um_images}} \end{figure} The 16 $\mu$m blue PU images (Fig. \ref{fig:images}) were obtained 1.3 days before the 24 $\mu$m MIPS images (Fig. \ref{fig:24um_images}), which were obtained on UT 2003-11-24 15:05. The 16 $\mu$m image's coma did not display any clearly distinguishable large-scale radial or azimuthal features in either the un-enhanced or enhanced images. A slight enhancement on the south-southeast through south-west side of the coma is detected in the division by azimuthal average and the 1/$\rho$-removed enhanced images (Figure \ref{fig:images}(b) and (c)). This is further confirmed in Figure \ref{fig:16um_profiles}, which displays radial surface-brightness profiles for position angles (PA) at 45$^{\circ}$ spacings for the 16 $\mu$m image. The radial profiles were generated by taking the median pixel value at a given radial position using 10$^{\circ}$ wide wedges centered on the indicated PA. For comparison, each PA plot includes a radial profile for a scaled STINYTIM-generated point spread function (PSF; \citealt{kirst_2006_tinytim}) representing how a detection of SW1's bare nucleus would behave in the absence of a coma.
A $C/\rho^n$ functional form was fit to the profiles for $\rho$ values between 14$''$ and 30$''$ for each PA (i.e., beyond any significant influence from the nucleus point source contribution), where $C$ is a scaling constant representing the peak coma flux near the nucleus and $n$ is the power index of the coma's profile. The fitted profile power law indices are listed in Table \ref{tab:profile_slopes}. Profiles for PAs spanning from the south-through-west directions, approximately centered on the projected sunward direction, have nearly canonical 1/$\rho$ coma profiles, whereas profiles in the northeast have profile powers of approximately $n$ = 2. This asymmetric profile behavior is consistent with preferential emission of grains in the sunward direction (south-west). \begin{figure}[h!] \gridline{ \fig{figures/5-Figure.png}{0.9\textwidth}{} } \caption{Radial profiles of the 16 $\mu$m image for different position angles in SW1. The 16 $\mu$m image is shown in the center of the plots for reference with the location of the nucleus indicated by a black circle; orientation of the image is equatorial north up and east to the left. Best-fit profiles are indicated by the yellow dashed lines for each PA and provided in Table \ref{tab:profile_slopes}. For reference, included in each plot are two black lines representing a 1/$\rho$ and 1/$\rho^2$ coma behavior. The ``roller coaster''-shaped profile for the PSF is the result of the Airy diffraction pattern of the space-based telescope. \label{fig:16um_profiles}} \end{figure} \FloatBarrier \begin{deluxetable*}{ c | c c c c }[h!] \tablenum{2} \label{tab:profile_slopes} \tablecaption{16 $\mu$m and 24 $\mu$m Coma Profile Power Law Indices} \tablewidth{0pt} \tablehead{ \colhead{Position Angle} & \colhead{16 $\mu$m (14$''$ - 30$''$)} & \colhead{24 $\mu$m (14$''$ - 30$''$)} & \colhead{24 $\mu$m (30$''$ - 130$''$)} & \colhead{24 $\mu$m (200$''$ - 470$''$)} \\ \colhead{} & \colhead{(56,000 - 120,000 km)} & \colhead{(56,000 - 120,000 km)} & \colhead{(120,000 - 520,000 km)} & \colhead{(800,000 - 1,900,000 km)}} \startdata 0$^{\circ}$ & -1.7 & -1.1 & -0.8 & -1.4 \\ 45$^{\circ}$ & -2.1 & -0.9 & -0.7 & -1.5 \\ 90$^{\circ}$ & -1.6 & -0.7 & -0.7 & -1.5 \\ 135$^{\circ}$ & -1.1 & -0.6 & -0.6 & -1.7 \\ 180$^{\circ}$ & -1.0 & -0.7 & -0.6 & -1.1 \\ 225$^{\circ}$ & -1.0 & -0.6 & -0.8 & -0.8$^{\textrm{a}}$ \\ 270$^{\circ}$ & -1.0 & -0.9 & -0.9 & -0.9$^{\textrm{a}}$ \\ 315$^{\circ}$ & -1.8 & -0.9 & -1.0 & -1.0$^{\textrm{a}}$ \\ \enddata \tablecomments{ $^{\textrm{a}}$ The coma surface brightness profile power index between 30$''$ - 470$''$ was best fitted to a single value indicated in the column to the left.} \end{deluxetable*} The overall coma morphology as seen in the unenhanced 24 $\mu$m image (Fig. \ref{fig:24um_images}(a)) similarly shows an increased brightness in the southwest direction. This is further confirmed by the division by an azimuthal average and 1/$\rho$-removed enhanced images. The rotational-shift-differenced enhanced image (Fig. \ref{fig:24um_images}(d)) contains a curved wing feature that \cite{stansberry_2004} attribute to a rotating jet and from which they derived an $\sim$ 60 day rotation period for SW1's nucleus.
The great similarity between the 16 $\mu$m and 24 $\mu$m coma morphologies, imaged 1.3 days apart, together with the relationship between the projected nucleus-Sun vector and the curved wing's structure, suggests that this feature is possibly not the result of nucleus rotation, but is instead due to solar radiation pressure turning back micron-sized dust grains emitted in the sunward direction to form the dust tail in the north-east direction \citep{jian-yang_2014, farnham_2005, mueller_2013}. While SW1 may in fact possess a long rotation period \citep{miles_2016, schambeau_2017, schambeau_2019}, making the $\sim$ 60 day period derived by the earlier work coincidentally plausible, we propose that this curved wing feature is not the result of a slowly rotating nucleus. Interestingly, the wing would be symmetric around the skyplane projected nucleus-Sun axis for the case of isotropic emission from a localized nucleus surface area. Instead it is asymmetric, indicating a possible preferential direction for dust lofting from this source region. Similarly asymmetric curved features have long been seen in broadband visible imaging data of SW1 while undergoing major outbursts. Accounts of these coma morphologies have been reported in the early works of \cite{jeffers_1956} and \cite{roemer_1958_sw1}. \cite{whipple_1980} presents a detailed analysis of SW1's outburst coma morphology as detected over a 50 year baseline, resulting in the descriptive term of ``ringtailed snorter'' for this often-seen curved feature. While it may at first seem appropriate to compare the outburst and quiescent coma morphologies, detailed analyses of SW1's dust coma in both phases of activity \citep{hosek_2013, miles_2016, schambeau_2017, schambeau_2019} have shown that the underlying processes in the two phases are different. The morphology of the 24 $\mu$m quiescent coma's wing may resemble that of SW1's outburst coma; however, it was produced by different mechanisms (i.e., slow, sustained dust lofting with expansion velocities in the range of 10-50 m/s while quiescent \citep{jewitt_1990_sw1}, vs. impulsive, short-lived dust emission at high velocities in the 100-300 m/s range during major outbursts \citep{2010MNRAS.409.1682T, schambeau_2017, schambeau_2019, feldman_1995}). The outer edge of the wing feature seen in the 24 $\mu$m image in the south-west direction (Fig. \ref{fig:24um_images}(d)) may indicate an approximate projected length for the turn-back distance of the grains from solar radiation pressure.
Using a projected cometocentric distance of $\sim$ 90$''$ (352,000 km) for the turning point of the wing as the approximate turn-back distance and the \cite{mueller_2013} equation for turn-back distance due to solar radiation pressure, we estimate the dust coma's expansion velocity: \begin{equation} v = \Bigg[ \frac{2 \rho_g \beta g \sin{\alpha} }{(\cos{\gamma})^2 } \Bigg]^{1/2}, \end{equation} \noindent where $\rho_g$ is the projected sky-plane turn-back distance of the dust grains, $\gamma$ is the angle between the initial direction of the dust grains and the sky-plane, $\beta$ is the ratio of radiation pressure acceleration to acceleration due to solar gravity, $\alpha$ is the solar phase angle of the observations, and $g = G M_{\odot} / R_H^2$ is the solar gravitational acceleration on the dust grains ($G$ is the gravitational constant, $M_{\odot}$ is the Sun's mass and $R_H$ is the heliocentric distance of the dust grains). We estimate a $\beta$ value based on equations from \cite{finson_probstein_1968} and \cite{fulle_2004}: \begin{equation} \beta = \frac{C_{pr} Q_{pr}}{\rho_d d} \end{equation} \noindent where $C_{pr}$ is a collection of constants equal to $3 E_{\odot} / (8 \pi c G M_{\odot})$, where $E_{\odot}$ is the Sun's mean radiated power. The parameter $Q_{pr}$ is the scattering efficiency for radiation pressure for a dust grain of diameter $d$. \cite{burns_1979} provide a thorough description of $Q_{pr}$ and explain that a value of $Q_{pr} \approx 1$ is appropriate for the $d = 24$ $\mu$m grains assumed here. We use a value for the dust grain bulk density based on recent in situ measurements of comae visited by spacecraft: $\rho_d = 500$ kg/m$^3$ \citep{fulle_2016}. With these assumptions we arrive at an estimated value of $\beta = 0.096$. The exact value of $\gamma$ for the dust grains contributing most to the wing feature is unknown; most probably the feature is the result of dust grains emitted over a continuum of angles. For this reason we calculate the outflow velocity for a range of sky-plane projected dust grain angles: $\gamma = 0^{\circ}$ ($v = 50$ m/s), $\gamma = 45.0^{\circ}$ ($v = 65$ m/s) and $\gamma = 80.0^{\circ}$ ($v = 270$ m/s). A similar radial surface brightness profile analysis for the 24 $\mu$m image is shown in Figure \ref{fig:24um_profiles}. The overall appearance of the coma morphology is similar to that seen in the 16 $\mu$m image; however, the larger field of view (FOV) and higher S/N coma detection in the 24 $\mu$m image allow a more detailed investigation of the underlying processes ongoing within the dust coma. A change in slope of the profiles at a cometocentric distance of $\sim$ 130$''$ (a projected 520,000 km) for PAs between 0 - 180$^{\circ}$ is suggestive of possible ongoing fragmentation of larger grains interior to this distance. This view is supported by the coma profile's power law index being shallower than $-1$ interior to 520,000 km, suggesting an overabundance of dust grains interior to this projected distance when compared to canonical steady-state dust emission. This behavior is possibly explained by larger grains being emitted from the nucleus and subsequently fragmenting as they expand outward in the coma, or by larger icy grains shrinking via sublimation as they lose their volatile content. In Section \ref{sec:color_map} we discuss the possibility of icy grains in more detail.
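For concreteness, the $\beta$ and expansion velocity estimates above can be reproduced numerically. The sketch below adopts illustrative values for the heliocentric distance ($R_H \approx 5.7$ au) and solar phase angle ($\alpha \approx 10^{\circ}$), which are not restated in this excerpt; with these inputs it returns $v \approx$ 46, 65, and 266 m/s for $\gamma = 0^{\circ}$, $45^{\circ}$, and $80^{\circ}$, close to the quoted values:
\begin{verbatim}
import numpy as np

G, M_sun, c = 6.674e-11, 1.989e30, 2.998e8   # SI units
L_sun = 3.828e26                             # E_sun taken as solar luminosity (W)
AU = 1.496e11

# beta = C_pr * Q_pr / (rho_d * d), with Q_pr = 1, rho_d = 500 kg/m^3, d = 24 um
C_pr = 3 * L_sun / (8 * np.pi * c * G * M_sun)
beta = C_pr * 1.0 / (500.0 * 24e-6)
print(f"beta = {beta:.3f}")                  # ~0.096, as in the text

# v = sqrt(2 rho_g beta g sin(alpha) / cos(gamma)^2)
R_H = 5.7 * AU                               # assumed; not given in this excerpt
alpha = np.radians(10.0)                     # assumed phase angle
g = G * M_sun / R_H**2
rho_g = 3.52e8                               # 352,000 km turn-back distance (m)
for gam in (0.0, 45.0, 80.0):
    v = np.sqrt(2 * rho_g * beta * g * np.sin(alpha)
                / np.cos(np.radians(gam))**2)
    print(f"gamma = {gam:4.1f} deg -> v = {v:5.0f} m/s")
\end{verbatim}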
These larger (0.1 - 1.0 mm) grain populations would not contribute significantly to the 24 $\mu$m coma cross section close to the nucleus because of their relative lack of surface area, but could still easily support the observed number density of 24 $\mu$m sized grains through a fragmentation cascade (N.B. - as long as there are particles $\gg$ 24 $\mu$m in radius, they can always fragment/disrupt into many smaller particles and keep the observed particle size distribution (PSD) going) and thus maintain the coma's enhanced 24 $\mu$m surface brightness. \begin{figure}[h!] \gridline{ \fig{figures/7-Figure.png}{0.9\textwidth}{} } \caption{Radial profiles of the 24 $\mu$m image for different position angles. The coma morphology for radial profiles between 0$^{\circ}$ - 180$^{\circ}$ contains a knee-shaped feature at $\rho \sim$ 130$''$ (520,000 km) that is suggestive of a projected skyplane length for ongoing coma grain fragmentation and/or the projected turn-back distance of dust grains from solar radiation pressure. Fitted power law indices corresponding to the yellow, red and orange curves are presented in Table \ref{tab:profile_slopes}. The location of the nucleus is indicated by the black circle in the center image. For reference, included in each plot are two black lines representing a 1/$\rho$ and 1/$\rho^2$ coma behavior. The ``roller coaster''-shaped profile for the PSF is the result of the Airy diffraction pattern of the space-based telescope. \label{fig:24um_profiles}} \end{figure} Similar to the 16 $\mu$m image, the coma's profiles at 24 $\mu$m close to the projected sunward direction (PAs: 225$^{\circ}$, 270$^{\circ}$, and 315$^{\circ}$) all have a single profile index close to $-1$. A possible explanation for this single, canonical profile could be preferential sunward emission of dust grains. \FloatBarrier \subsubsection{$\epsilon f \rho$ Measurements and Dust Production Estimates}\label{sec:efrho} For this analysis we calculated the $\epsilon f \rho$ parameter \citep{lisse_dust_2002, kelley_2013}, an often-used proxy for dust production rates using infrared emission that is analogous to the $A f \rho$ parameter for reflected dust flux in the visible \citep{ahearn_1984}. While the canonical dust coma assumed in deriving $\epsilon f \rho$ is not valid for many comets, the utility of $\epsilon f \rho$ comes from establishing a standard procedure for estimating comae dust production rates, allowing relative comparisons between individual comets. The expression for $\epsilon f \rho$ used is \begin{equation} \label{eq:efrho} \epsilon f \rho (\lambda) = \frac{F_{th}(\lambda)}{\pi B(\lambda, T_c)} \times \frac{\Delta^2}{\rho}, \end{equation} \noindent where $\epsilon$ is the emissivity of the dust grains at wavelength $\lambda$, $f$ is a filling factor expressing the fraction of the photometry aperture containing dust grains, $\rho$ is the linear radius of the aperture, centered on the nucleus, used to measure the flux, $F_{th}(\lambda)$ is the flux measured in the photometric aperture at wavelength $\lambda$, $B(\lambda, T_c)$ is the Planck function calculated at the color temperature $T_c$ of the dust grains, and $\Delta$ is the geocentric distance during the observation. For the 2003 epoch of {\it Spitzer} SW1 imaging, we used properties for the dust coma derived from our earlier analysis of IRS observations of SW1.
This analysis indicated the coma was dominated by sub-$\mu$m to $\mu$m-sized amorphous silicate and amorphous carbon grains at a color temperature of $\sim$ 140 K \citep{schambeau_2015}. The color temperature map shown in Section \ref{sec:color_map} also indicates dust grains at similar color temperatures, but also that there is color temperature structure present in the coma, complicating the interpretation of a derived $\epsilon f \rho$ based on an assumed dust coma of uniform temperature. With these understood limitations, we used Equation \ref{eq:efrho} to calculate $\epsilon f \rho$ values for each of the three bands containing extracted coma flux measurements. Additionally, we calculated $\epsilon f \rho$ values using an expression for dust coma color temperature ($T_c = 300\,\textrm{K}/\sqrt{R_H} = 125$ K) based on the results of the Survey of Ensemble Physical Properties of Cometary Nuclei (SEPPCoN) observations of JFCs by {\it Spitzer} \citep{kelley_2013}, and for the case of grains at an ideal blackbody temperature ($T_{bb} = 278\,\textrm{K}/\sqrt{R_H} = 117$ K) for comparison. The IRS-derived and SEPPCoN-derived dust color temperatures are slightly hotter than an ideal blackbody at the same heliocentric distance. Most probably this is the result of super-heated sub-$\mu$m sized amorphous carbon grains present in the dust coma \citep{hanner_1997} and/or of the many emission features present in the thermal infrared region \citep{wooden_2002, markkanen_2019}. For $F_{th}(\lambda)$ we subtracted the nucleus' contribution to SW1's overall flux in each aperture based on the scaled PSFs found during the coma removal process presented in Section \ref{sec:neatm}. Additionally, flux from background sources (some of them serendipitously detected asteroids) was removed by interpolating the dust coma behavior for regions around each background source. Figure \ref{fig:efrho} shows plots of the 16 and 24 $\mu$m measured spectral flux density values for an array of aperture radii, along with their associated $\epsilon f \rho$ measurements for the three color temperature assumptions. Table \ref{tab:dust_rates} reports the measured flux and $\epsilon f \rho$ values along with their associated uncertainties for the largest photometry apertures used for each image. The nearly constant 16 $\mu$m $\epsilon f \rho$ value for aperture radii larger than $\sim$ 5$''$ indicates that the 3-D shape of the dust coma primarily contributing to this image maintains a nearly canonical spherical shape \citep{fink_2012}. On the other hand, the 24 $\mu$m $\epsilon f \rho$ profile has a slight positive slope, indicating deviations from a canonical 1/$\rho$ coma's expected aperture-independent constant value. This slope behavior supports the possibility of an overabundance of 24 $\mu$m sized dust grains at larger cometocentric distances. The steep decrease of the $\epsilon f \rho$ profiles at small apertures is an artifact of the coma's image being the convolution of the coma's intrinsic surface-brightness distribution with the telescope's PSF (e.g., the intrinsic surface-brightness is spread over a larger projected surface area by the convolution process, resulting in a decrease in integrated flux for apertures smaller than the PSF). \begin{figure} \gridline{ \fig{figures/7a-Figure.png}{0.95\textwidth}{} } \gridline{ \fig{figures/7b-Figure.png}{0.95\textwidth}{} } \caption{Top panel: SW1's coma spectral flux density measurements and associated $\epsilon f \rho$ measurements for the 16 $\mu$m image.
Bottom panel: Similar to top, but for the 24 $\mu$m image. The 16 $\mu$m image appears to behave as a canonical dust coma, with $\epsilon f \rho$ nearly independent of aperture size, while the 24 $\mu$m value increases with aperture size beyond 5$''$. \label{fig:efrho}} \end{figure} To verify that the difference in aperture photometry for the coma between the 16 $\mu$m and 24 $\mu$m images is not the result of the local infrared background in each image, we compared coadded Wide-field Infrared Survey Explorer ({\it WISE}; \cite{wright_2010}) backgrounds retrieved from the W3 (12 $\mu$m) and W4 (22 $\mu$m) intensity images downloaded from the NASA/IPAC Infrared Science Archive. W3 and W4 coadded images centered on SW1's nucleus position during each epoch of imaging were compared, and we found no significant differences that could explain the different photometric behaviors. The 70 $\mu$m image's low S/N surface brightness coma detection did not allow a similar radial profile analysis. Instead, we report in Table \ref{tab:dust_rates} an updated $9''$ radius aperture coma flux measurement. Our earlier reported 70 $\mu$m flux density value \citep{schambeau_2015} did not include an aperture correction for the measurement, so the earlier reported flux measurement is an underestimate. Based on the new measurement of 103 $\pm$ 50 mJy, we calculated an $\epsilon f \rho$ value. The large uncertainty in the derived 70 $\mu$m coma flux measurement is due to the low S/N present in the mosaicked image and SW1's proximity to one of the jail-bar artifacts often present in MOPEX generated mosaicked images \citep[see][Figure 2(b)]{schambeau_2015}. We use the measured $\epsilon f \rho$ values to estimate dust production rates during the {\it Spitzer} imaging according to: \begin{equation} \dot{M} = (\epsilon f \rho) \times \frac{8 a \rho_d v}{3 \epsilon}, \end{equation} where $a$ is the radius of the grains, $\rho_d$ is the density of the grains, and $v$ is the radial velocity of the grains lofted from the nucleus' surface. For our calculations we assumed that the diameter of the grains dominating the emitted flux for each band is equal to the effective wavelength of each band: 15.8, 23.68, and 71.42 $\mu$m. For the density of the grains we used the same value of $\rho_d$ = 500 kg/m$^3$ \citep{fulle_2016} that was used for the estimate of the dust expansion velocity. The velocity of the emitted dust grains was chosen to be 50 m/s, based on the lower end of the dust expansion velocities estimated from the 24 $\mu$m coma morphology and the turn-back distance from solar radiation pressure. While it is likely that larger grains have slower radial velocities than smaller grains, we adopt the same value for each band due to the observational uncertainties of the measurements. We use a value for the dust emissivity of $\epsilon=$ 0.95. Estimated dust production rates are presented in Table \ref{tab:dust_rates}.
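As a worked check of Equation \ref{eq:efrho} and the $\dot{M}$ expression above, the following sketch reproduces the 16 $\mu$m, $\rho = 20''$ entries of Table \ref{tab:dust_rates}. The geocentric distance $\Delta \approx 5.5$ au is an assumption inferred from the $\sim$ 4000 km per arcsecond scale implied by Table \ref{tab:profile_slopes}; note that taking $a$ equal to the band's full effective wavelength (rather than half of it) is what reproduces the tabulated rate:
\begin{verbatim}
import numpy as np

h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8    # SI units
AU = 1.496e11

def planck_nu(lam, T):
    """Planck function B_nu at wavelength lam (m) and temperature T (K)."""
    nu = c / lam
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

# 16 um band, rho = 20'' aperture, T_c = 140 K (Table 3, first row)
lam, T_c = 15.8e-6, 140.0
F = 145e-29                    # 145 mJy in W m^-2 Hz^-1
Delta = 5.5 * AU               # assumed geocentric distance
rho = 20 * 4000e3              # 20'' at ~4000 km per arcsec (m)

efrho = F / (np.pi * planck_nu(lam, T_c)) * Delta**2 / rho
print(f"efrho ~ {efrho * 100:.0f} cm")       # ~2600 cm (Table 3)

# Mdot = efrho * 8 a rho_d v / (3 eps); a set to the effective wavelength
a, rho_d, v, eps = 15.8e-6, 500.0, 50.0, 0.95
Mdot = efrho * 8 * a * rho_d * v / (3 * eps)
print(f"Mdot ~ {Mdot:.0f} kg/s")             # ~28.8 kg/s (Table 3)
\end{verbatim}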
\begin{deluxetable*}{ c c c | c c | c c | c c } \tablenum{3} \label{tab:dust_rates} \tablecaption{SW1 Thermal Infrared Dust Coma Measurements} \tablewidth{0pt} \tablehead{ \colhead{Band} & \colhead{$\rho^{\textrm{a}}$} & \colhead{Flux} & \colhead{ $\epsilon f \rho$ } & \colhead{$\dot{M}$ } & \colhead{ $\epsilon f \rho$ } & \colhead{$\dot{M}$ } & \colhead{ $\epsilon f \rho$ } & \colhead{$\dot{M}$ } \\ \colhead{($\mu$m)} & \colhead{($''$)} & \colhead{(mJy)} & \colhead{(cm)} & \colhead{(kg/s)} & \colhead{(cm)} & \colhead{(kg/s)} & \colhead{(cm)} & \colhead{(kg/s)} } \startdata & & & \multicolumn{2}{c|}{$T_c$ = 117 K} & \multicolumn{2}{c|}{$T_c$ = 125 K} & \multicolumn{2}{c}{$T_c$ = 140 K} \\ \hline 16 & 20 & 145 $\pm$ 2 & 9400 $\pm$ 150 & 104 $\pm$ 2 & 5600 $\pm$ 90 & 124 $\pm$ 2 & 2600 $\pm$ 43 & 28.8 $\pm$ 0.5 \\ 24 & 20 & 570 $\pm$ 24 & 8700 $\pm$ 360 & 144 $\pm$ 6 & 6100 $\pm$ 260 & 101 $\pm$ 4 & 3700 $\pm$ 150 & 61 $\pm$ 3 \\ 24 & 200 & 8403 $\pm$ 90 & 13700 $\pm$ 150 & 227 $\pm$ 3 & 9700 $\pm$ 105 & 322 $\pm$ 4 & 5800 $\pm$ 63 & 96 $\pm$ 1 \\ 70 & 9 & 102 $\pm$ 50 & 2600 $\pm$ 1300 & 130 $\pm$ 65 & 2300 $\pm$ 1100& 113 $\pm$ 55 & 1800 $\pm$ 900 & 90 $\pm$ 45 \\ \enddata \tablecomments{ $^{\textrm{a}}$ The radius of the sky-plane projected photometry aperture.} \end{deluxetable*} We have collected similar $\epsilon f \rho$ measurements based on the SEPPCoN \citep{fernandez_2013, kelley_2013} and WISE/NEOWISE \citep{bauer_2017, bauer_data_2017} surveys of comets in order to compare with SW1's measured values. Results from the WISE/NEOWISE survey \citep{bauer_2017, bauer_data_2017} enabled the development of an empirical expression relating the expected thermal dust activity of an individual comet to its nucleus size: \begin{equation} \log \Bigg( \frac{\epsilon f \rho}{\textrm{1 cm}} \Bigg) = 3.5 \Bigg( 1 - \textrm{exp}\Bigg( - \frac{D_N}{\textrm{5.3 km}} \Bigg) \Bigg) + N(0, 0.25), \end{equation} \noindent where $D_N$ is the nucleus diameter in km and $N(0, 0.25)$ is a Gaussian distribution with mean of 0 and variance of 0.25. In Figure \ref{fig:efrho_panel}, we plot measurements from both infrared surveys, the empirical expression developed by \cite{bauer_2017}, and SW1's measurements from this work. As Figure 6 shows, Equation 5 fits the SEPPCoN $\epsilon f \rho$ values and the SW1 values presented in this work. Interestingly, the expression implies that comets with nuclei diameters larger than $\sim$ 20 km have a flattening of activity levels when compared to the steep increase of dust activity vs. diameter for comets between 1 km and 10 km in diameter. This may be partly due to an observational bias in favor of detecting larger nuclei at larger heliocentric distances, in combination with the distant activity being driven by a process other than water ice sublimation. This comparison between SEPPCoN, NEOWISE and SW1 values is new, and the good fit of Equation 5 to the observations indicates that the equation is a robust estimator of a comet's larger grain coma activity level. \begin{figure} \gridline{ \fig{figures/efrho_compare.png}{0.75\textwidth}{} } \caption{Comparison of measured $\epsilon f \rho$ values vs. nucleus diameter for comets and Centaurs from two infrared surveys and data presented here for SW1. The solid black curve indicates the empirically derived relation between $\epsilon f \rho$ vs. nucleus diameter presented in \cite{bauer_2017} using the WISE/NEOWISE detected comets; Equation 5 in this paper.
Points for SW1 are based on the values from Table \ref{tab:dust_rates} for the dust temperature of $T = 140$ K. Uncertainties for the 16 $\mu$m and 24 $\mu$m points are smaller than data markers. The colors of individual markers of the WISE/NEOWISE and SEPPCoN values indicate the comet's heliocentric distance at the time of the $\epsilon f \rho$ measurement. A color bar to the right of the figure indicates the heliocentric distance color scale.} \label{fig:efrho_panel} \end{figure} \FloatBarrier Reports of SW1's dust production rate as derived from visible observations during periods of quiescent activity indicate a typical mass loss rate for sub-micron sized grains on the order of 1 - 50 kg/s. We arrived at these typical quiescent dust production rates using reported $A f \rho$ measurements from \cite{2010MNRAS.409.1682T} and \cite{hosek_2013}, but here we use a value of grain density $\rho_d$ = 500 kg/m$^3$ in order to be consistent with our $\epsilon f \rho$ derived dust production rates. We note that these dust production rates are upper limits due to their calculated $A f \rho$ values containing nucleus flux contributions. When compared to the estimated dust production rates as derived from the {\it Spitzer} data, which have nucleus flux contributions removed, the estimated dust production rates for grains in the range of 16 $\mu$m to 70 $\mu$m have a higher mass loss rate (Table \ref{tab:dust_rates}) than the sub-micron sized coma ($<$ 1 $\mu$m grains). It would be interesting to see if this trend of higher mass loss rate for the tens of micron sized grains is also seen during periods of major dust coma outburst (i.e., is the bulk of SW1's outburst mass loss coming from grains that are on the order of 10s of microns to 100 microns or from sub-micron sized grains), enabling investigations of the quiescent vs. outburst comae activity mechanisms. \FloatBarrier \subsubsection{Coma Modeling} Another approach to determine the dust production rate is to model the thermal emission of an ensemble of particles defined by its size distribution. We used the model described in \cite{bockelee_2017} which computes the wavelength-dependent absorption coefficient and temperature of dust particles as a function of grain size using the Mie theory combined with an effective medium theory in order to consider mixtures of different materials. Effective medium theories (EMT) allow us to calculate an effective refractive index for a medium made of a matrix with inclusions of another material. The Maxwell-Garnett mixing rule is used in this model, and is also applied to consider the porosity of the grains, set to be 50\% at maximum \citep{bockelee_2017}. The infrared thermal spectrum is computed by summing the contributions of the individual dust particles. The size distribution of the dust particles is described by a power-law $n$($a$) $\propto$ $a^{-\beta}$, where $\beta$ is the size index and the particle radius takes values from $a_{\rm min}$ to $a_{\rm max}$. The dust density is taken equal to 500 kg/m$^3$. The effect of ice sublimation on the equilibrium grain temperature was not taken into account, as it has been shown that radiative cooling dominates over cooling by sublimation at far heliocentric distances \citep{beer_2006}. 
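To make concrete how the $(a_{\rm min}, \beta)$ parameters control the model color temperatures discussed below, the following schematic sketch sums Planck contributions over a power-law size distribution. It deliberately replaces the Mie/EMT machinery of \cite{bockelee_2017} with a placeholder temperature law $T(a)$, so only the size-distribution bookkeeping is illustrated, not the optical physics:
\begin{verbatim}
import numpy as np

h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8

def planck_nu(lam, T):
    nu = c / lam
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def band_flux(lam, a_min, beta, a_max=250e-6):
    """Relative band flux: pi a^2 B_nu(T(a)) weighted by n(a) ~ a^-beta.
    T(a) below is a placeholder law (smaller grains slightly hotter),
    standing in for the Mie/EMT-derived grain temperatures."""
    a = np.logspace(np.log10(a_min), np.log10(a_max), 400)
    T = 130.0 * (a / 10e-6) ** -0.1
    return np.trapz(a ** -beta * np.pi * a**2 * planck_nu(lam, T), a)

def color_T(a_min, beta, lams=(15.8e-6, 23.68e-6)):
    """Color temperature T_16/24 of the summed spectrum."""
    r = band_flux(lams[0], a_min, beta) / band_flux(lams[1], a_min, beta)
    Ts = np.linspace(80, 250, 2000)
    ratios = planck_nu(lams[0], Ts) / planck_nu(lams[1], Ts)
    return Ts[np.argmin(np.abs(ratios - r))]

# Smaller a_min shifts weight to hotter small grains -> higher T_16/24
for a_min, beta in [(0.5e-6, 3.0), (5e-6, 4.3)]:
    print(f"a_min = {a_min*1e6:.1f} um, beta = {beta}:"
          f" T_16/24 ~ {color_T(a_min, beta):.0f} K")
\end{verbatim}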
We consider in this paper three different mixtures \citep[see][for the references for optical constants]{bockelee_2017}: 1) a matrix of amorphous carbon with inclusions of amorphous olivine with a Fe:Mg composition of 50:50; 2) a matrix of crystalline ice with inclusions of amorphous carbon; 3) a matrix of amorphous carbon with inclusions of crystalline ice. For mixture 1) the carbon/olivine mass ratio is 1, a value consistent with the organic mass fraction measured in comet 67P dust particles \citep{bardyn_2017}. Mixtures 2) and 3) have the same ice mass fraction of $\sim$ 45\%, but different optical properties. Other parameters set in the model are the maximum dust size $a_{\rm max}$ and the dust velocity as a function of particle size, described as varying $\propto$ $a^{-0.5}$, with a value of 60 m/s for 10-$\mu$m particles. The maximum liftable size from the surface of SW1's nucleus is estimated to be $a_{\rm max}$ = 250 $\mu$m, for CO-driven activity restricted to a spherical segment with half-angle of 45$^{\circ}$ and a total CO production rate of 4 $\times$ 10$^{28}$ s$^{-1}$, assuming our nucleus radius estimate of 32.3 km (Section \ref{sec:neatm}) and a nucleus density of 500 kg/m$^3$ \citep[V. Zakharov, personal communication, see][]{zakharov_2018, zahkarov_2021}. This CO outgassing description is consistent with CO millimeter observations \citep{gunnarsson_2008, wierzchos_2020, bockelee_2021}. The model was applied to simulate the flux density in a 9$''$ FOV radius at 16, 24 and 70 $\mu$m, for comparison with the {\it Spitzer} data. Simulations were made for minimum dust particle sizes $a_{\rm min}$ in the range 0.5--50 $\mu$m and size indices in the range 2.5--4.6. These two parameters indeed have a strong influence on the dust thermal spectrum, with, e.g., a larger contribution from small particles for low $a_{\rm min}$ and high $\beta$ values resulting in a higher dust color temperature. The {\it Spitzer} constraints are flux densities in a 9$''$ FOV radius of 64 $\pm$ 2 mJy, 198 $\pm$ 14 mJy, and 103 $\pm$ 50 mJy at 16, 24 and 70 $\mu$m, respectively. This corresponds to color temperatures of $T_{16/24}$ = 129 $\pm$ 5 K, based on the 16 \& 24 $\mu$m fluxes, and $T_{24/70}$ = 177$^{+52}_{-47}$ K, based on the 24 \& 70 $\mu$m fluxes. $T_{16/24}$ and $T_{24/70}$ are consistent within 1-$\sigma$ with a value of $\sim$ 130 K, but the high central value of $T_{24/70}$, resulting from the relatively faint 70 $\mu$m flux, might suggest an excess of small particles poorly radiating at long wavelengths. Figure~\ref{fig-Tcol} shows iso-contours of $T_{16/24}$ (solid black lines) and $T_{24/70}$ (dashed blue lines) as a function of $a_{\rm min}$ and $\beta$. Domains consistent with the measured $T_{16/24}$ and $T_{24/70}$ values are filled in orange and blue, respectively. We only show results for ice-carbon mixtures 2) and 3), since results for the carbon-silicate mixture 1) are similar to those obtained for mixture 3). For mixtures 1) (not shown) and 3), the orange and blue domains overlap for $a_{\rm min}$ = 2--5 $\mu$m, whereas no overlap is observed for mixture 2) for any set of ($a_{\rm min}$,$\beta$). Grains made of mixture 2) are hotter than those of the other mixtures for sizes below 30 $\mu$m (Fig.~\ref{fig-Tdust}), and this explains the different infrared spectra. \begin{figure}[h!]
\begin{center} \includegraphics[width=0.49\textwidth, angle = 0]{figures/tcol_4.png} \includegraphics[width=0.49\textwidth, angle = 0]{figures/tcol_7.png} \end{center} \caption{Modelled dust color temperatures $T_{16/24}$, $T_{24/70}$ as a function of minimum particle size and size index, for ice/carbon mixtures 2 (panel A) and 3 (panel B). Solid black lines show contours at constant $T_{16/24}$, in steps of 10 K. Blue dashed lines show contours at constant $T_{24/70}$, in steps of 10 K, for $T_{24/70}$ $\geq$ 130 K. Color temperatures consistent with the Spitzer measured $T_{16/24}$ and $T_{24/70}$ values are colored in orange and blue, respectively. The assumed maximum particle size is $a_{\rm max}$ = 250 $\mu$m.}\label{fig-Tcol} \end{figure} In Figure~\ref{fig-Qdust}, we show dust production rates derived from the 24 $\mu$m flux density using the ($a_{\rm min}$,$\beta$) parameters that provide $T_{16/24}$ values consistent with the measured value, i.e., those defining the orange region in Fig.~\ref{fig-Tcol}. For mixtures 1) and 3) with matrices of amorphous carbon, the range is 50--200 kg/s. The low end is obtained for the highest ($a_{\rm min}$,$\beta$) values (= (5 $\mu$m, 4.1--4.4)), that is, a steep size distribution where 5--10 $\mu$m grains dominate the infrared emission. For size distributions with $a_{\rm min}$ = 4--5 $\mu$m, the dust production rates deduced from the 24 and 70-$\mu$m fluxes are consistent, and in the range 50--100 kg/s. However, this is not the case for size distributions with small $a_{\rm min}$ values (and consequently low $\beta$ values, Fig.~\ref{fig-Tcol}), for which 70-$\mu$m derived dust production rates are a factor of 2--3 lower than those deduced from the 24-$\mu$m flux. For the ice/carbon mixture 2 (matrix of crystalline ice), the dust production rate inferred from the 24 $\mu$m flux is between 130--240 kg/s (Fig.~\ref{fig-Qdust}). The values derived from the 70 $\mu$m flux are more than 2 times lower for all sets of ($a_{\rm min}$,$\beta$) parameters. This is an expected result since, for this composition, the model fails to reproduce both the $T_{16/24}$ and $T_{24/70}$ values. \begin{figure}[h!] \begin{center} \includegraphics[width=0.75\textwidth, angle = 0]{figures/Td_select.png} \end{center} \caption{Temperature of the dust particles as a function of particle radius. Results for mixtures 1 (matrix of carbon with olivine inclusions), 2 (matrix of ice with carbon inclusions), and 3 (matrix of carbon with ice inclusions) are shown in blue, turquoise and red, respectively. }\label{fig-Tdust} \end{figure} The dust production rates derived with model parameters leading to a satisfactory fit to the data (50--100 kg/s) are in overall agreement with those estimated in Section \ref{sec:efrho} using a simple approach. The Mie-scattering model shows that measuring dust fluxes at several wavelengths in the thermal IR can provide constraints on the particle size distribution and thermal properties. The results obtained here are limited by the low S/N of the 70 $\mu$m dust coma flux. The present analysis is also limited by the known shortcomings of Mie-scattering theory and of the Maxwell-Garnett mixing rule for modelling dust spectra \citep{lien_1990, mishchenko_2008}. \begin{figure}[h!]
\begin{center} \includegraphics[width=0.75\textwidth, angle = 0]{figures/Qd_select.png} \end{center} \caption{Dust production rates derived from the 24-$\mu$m flux density measured in a 9$''$ FOV radius, using ($a_{\rm min}$,$\beta$) parameters providing color temperature $T_{16/24}$ values consistent with the measured value of 129 $\pm$ 5 K. The range of production rate values for a given minimum size reflects the range of $\beta$ values fulfilling the requirement, and the uncertainty in the 24-$\mu$m flux. Results for mixtures 1, 2, and 3 are shown in blue, turquoise and red, respectively. }\label{fig-Qdust} \end{figure} \FloatBarrier \subsubsection{Coma Color Temperature Map} \label{sec:color_map} In Figure \ref{fig:temp_map}, a color temperature map of the coma based on the 16 $\mu$m and 24 $\mu$m images is shown. This was generated using the spectral flux density values of the coma after removal of the flux contributions from the nucleus; the procedure for separating nucleus and coma flux contributions is described in Section \ref{sec:neatm} for the 16 $\mu$m image, and in our earlier work \citep{schambeau_2015} for the 24 $\mu$m data. Masked pixels identified by the teal square near the center represent regions where the PSF subtraction may have resulted in significant over- or under-subtraction of individual pixels. The white pixels on the top-left and top-right of the color map are not ``hot'', but instead are masked as white due to the low S/N 16 $\mu$m detections resulting in negative spectral flux density pixel values after background subtraction. These pixels have been excluded from the color temperature fitting procedure. We note that the actual temperatures of the grains most probably differ from the values derived by fitting a Planck blackbody profile to the individual pixel values of the 16 $\mu$m and 24 $\mu$m images, due to the silicate emission features present in the 24 $\mu$m bandpass and the dust coma PSD \citep{wooden_2002, markkanen_2019}. The peak grain temperature of $\sim$ 140 K close to the nucleus is in agreement with the color temperature derived from the IRS spectrum analyzed in \cite{schambeau_2015}. \begin{figure}[h!] \gridline{ \fig{figures/color_temp_annotated.png}{0.75\textwidth}{} } \caption{Coma color temperature map based on the 16 $\mu$m and 24 $\mu$m images. \label{fig:temp_map}} \end{figure} Overall, the general trend is a decreasing color temperature with increasing projected distance away from the nucleus. The eastern half of the coma has a higher temperature than the western side by $\sim$ 20 degrees. The interpretation of this behavior is uncertain based on the current {\it Spitzer} imaging data. We mention here plausible explanations for these color temperature behaviors based on properties of the dust coma. One possible explanation could be a population of relatively smaller grains on the eastern side of the coma, composing the tail, that are less efficient at radiating their stored thermal energy. Another possibility is that the western side of the coma has a higher abundance of sub-micron sized grains, resulting in enhanced 24 $\mu$m emission above that of an ideal blackbody due to the silicate emission bands around 20 $\mu$m; the overall impact of this behavior would be a slightly lower color temperature for the western side of the coma. Future modeling efforts may be able to distinguish between the processes driving the observed color temperature, but this is beyond the scope of the current work.
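For reference, the per-pixel color-temperature fit behind Figure \ref{fig:temp_map} reduces to solving a one-dimensional Planck-ratio equation. A minimal sketch, using the disk-integrated 9$''$ coma fluxes quoted in the coma modeling discussion as example inputs, returns $\approx$ 130 K, consistent with $T_{16/24}$ = 129 $\pm$ 5 K:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8

def planck_nu(lam, T):
    nu = c / lam
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def color_temperature(F16, F24, lam16=15.8e-6, lam24=23.68e-6):
    """Solve B_nu(16um, T) / B_nu(24um, T) = F16 / F24 for T.
    The same solve is applied pixel by pixel for the map."""
    f = lambda T: planck_nu(lam16, T) / planck_nu(lam24, T) - F16 / F24
    return brentq(f, 30.0, 500.0)

# Example inputs: the 9'' aperture coma fluxes quoted in the text (mJy)
print(f"T_16/24 ~ {color_temperature(64.0, 198.0):.0f} K")   # ~130 K
\end{verbatim}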
Using the color temperature as a proxy for the approximate dust grain temperatures and the results of \cite{beer_2006} indicates that, for grain sizes on the order of tens of microns, as we have here for the 16 $\mu$m and 24 $\mu$m images, the grains have a dust mass fraction for water ice ($X$, where $X = 1$ for pure water ice) in the range of 25-50\%, with smaller grains having a higher ice content. We calculated the expected lifetimes for the water ice content of assumed spherical icy grains with diameters equal to 16 $\mu$m, 24 $\mu$m, and 70 $\mu$m and dust mass fractions $X_{16}$ = 0.5, $X_{24}$ = 0.40, $X_{70}$ = 0.25 \citep{mukai_1986-icy-lifetime, beer_2006, lien_1990}. The lifetimes of the water ice content of the grains are, respectively, 112 days, 154 days, and 373 days. For these estimated lifetimes we have ignored the increased temperatures of grains as their sizes decrease due to ongoing water ice sublimation, so our derived lifetimes are upper limits. The presence of grains containing water ice has been inferred from the increased emissivity at longer wavelengths as derived from the modeling of SW1's {\it Spitzer} IRS spectrum \citep{schambeau_2015}. Additionally, SW1's H$_2$O production rates as derived from {\it Herschel}/HIFI observations indicate a non-nuclear extended source that is explained by the sublimation of an icy grain coma \citep{bockelee_2021}. The H$_2$O production rates measured from {\it AKARI} and {\it Herschel} observations are in the range of $Q_{\textrm{H}_2\textrm{O}}$ $\sim$ 3 - 7$\times$10$^{27}$ molecules/s \citep{ootsubo_2012_CO+CO2, bockelee_2021}. These measured production rates are of the same order of magnitude as what would be produced by the sublimation of icy grains, if we use the dust-to-ice mass fractions constrained from their color temperature and the dust production rates derived from $\epsilon f \rho$. As a first order estimate of the coma's $Q_{\textrm{H}_2\textrm{O}}$ due to the sublimation of icy grains, we calculated the production rate that would be produced from sublimation of the water ice content of icy grains following the dust production rates presented in Table \ref{tab:dust_rates}. Assuming that all of the water ice content of individual grains fully sublimates, we arrive at an estimated range of $Q_{\textrm{H}_2\textrm{O}} \sim$ (1 - 3)$\times10^{27}$ molecules/s, supporting the argument that the measured water production rates may be explained by a non-nuclear source of icy grains in the coma. \FloatBarrier \subsection{Nucleus Spectral Flux Density Measurements and a new NEATM}\label{sec:neatm} To obtain nucleus photometry measurements from the blue PU images, the flux from SW1's coma was modeled and removed. We used a well-established coma modeling technique \citep{lamy-toth_1995, lisse_1999, yan_phd_1999} for this procedure, in which the azimuthal coma behavior is measured in regions outside of significant contribution from the nucleus' PSF in order to generate a synthetic coma model. The model coma's flux contribution is then subtracted from the observations, resulting in an approximately bare-nucleus residual image. The residual image is then used to scale an STINYTIM-generated PSF \citep{kirst_2006_tinytim} representing the nucleus's total flux. The reader is referred to our previous work \citep{schambeau_2015} for a detailed description of this procedure.
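The synthetic-coma removal just described can be sketched schematically. The following simplified version assumes an azimuthally symmetric $C/\rho$ coma and unit-pixel annuli (the actual procedure measures the azimuthal behavior sector by sector; all names and the toy image below are illustrative only):
\begin{verbatim}
import numpy as np

def remove_synthetic_coma(img, x0, y0, r_in=10, r_out=30):
    """Fit an azimuthally averaged C/rho profile between r_in and r_out
    (pixels, outside the nucleus PSF core), synthesize a coma image,
    and subtract it, leaving an approximately bare-nucleus residual."""
    y, x = np.indices(img.shape)
    rho = np.hypot(x - x0, y - y0)
    radii = np.arange(r_in, r_out)
    prof = np.array([img[(rho >= r) & (rho < r + 1)].mean()
                     for r in radii])
    # Least-squares scale C for a 1/rho model: prof ~ C / rho
    C = np.sum(prof / radii) / np.sum(1.0 / radii**2)
    model = C / np.maximum(rho, 1.0)   # avoid divide-by-zero at center
    return img - model

# Toy usage: a 1/rho coma plus a point-like "nucleus"
im = 50.0 / np.maximum(np.hypot(*(np.indices((101, 101)) - 50)), 1.0)
im[50, 50] += 200.0
resid = remove_synthetic_coma(im, 50, 50)
print(f"residual peak ~ {resid[50, 50]:.0f}")   # ~200: nucleus flux
\end{verbatim}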
The coma modeling and removal procedure was applied to each of the PU images, resulting in six independent nucleus photometry measurements at an effective wavelength of 15.8 $\mu$m. The individual color-corrected measurements are: 84.1, 85.0, 85.0, 87.5, 89.6, and 88.4 mJy, with a typical uncertainty of $\pm$ 7 mJy. The final measurement used for the thermal modeling analysis was taken as the average of the individual measurements: 86 $\pm$ 2 mJy, with the stated 1-$\sigma$ uncertainty being the standard deviation of the six measurements. Figure \ref{fig:neatm} shows the new 15.8 $\mu$m measurement plotted along with the other four {\it Spitzer} nucleus photometry values that we reported earlier \citep{schambeau_2015}. We also plot the best-fitting 4-band thermal model (NEATM, \cite{harris_1998}) that we used in the earlier work to extract the nucleus' effective radius $R$ = $30.2^{+3.7}_{-2.9}$ km and beaming parameter $\eta$ = 0.99$^{+0.26}_{-0.19}$. A re-fit using the now five spectral flux density measurements produces a nucleus size estimate and infrared beaming parameter that are slightly larger, but within the 1-$\sigma$ uncertainties of the earlier results: $R = 32.3 \pm 3.1$ km and $\eta = 1.1 \pm 0.2$. We propose that these new values be used in future investigations of SW1 in lieu of our earlier analysis \citep{schambeau_2015}, because of the reduced uncertainty afforded by modeling with five, rather than four, points. For our new NEATM analysis, assumptions similar to those used in our previous work and in (e.g.) SEPPCoN \citep{fernandez_2013} were adopted: bolometric Bond albedo $A = 0.012$ (assuming a visible-wavelength geometric albedo $p = 0.04$, phase integral relation $q = 0.290 + 0.684 G$ \citep{harris_lagerros_2002}, and slope parameter $G = 0.05$), and emissivity $\epsilon = 0.95$. \begin{figure} \gridline{ \fig{figures/2020_new_NEATM_PSJ_2.png}{0.9\textwidth}{} } \caption{Five spectral flux density measurements of SW1, incorporating the new blue PU data at 16 $\mu$m, all acquired during 2003 November with {\it Spitzer}. Also shown is the new 5-band NEATM, which produces a nucleus radius estimate and infrared beaming parameter of $R = 32.3 \pm 3.1$ km and $\eta = 1.1 \pm 0.2$, along with the previous 4-band NEATM from \cite{schambeau_2015}. Uncertainties are 1-$\sigma$. The consistently higher-than-fit value of the 8 $\mu$m measurement may be the result of enhanced emission from silicate emission bands in this region. \label{fig:neatm}} \end{figure} \FloatBarrier \section{Summary and Conclusions} \label{sec:conclusion} A more detailed analysis of the November 2003 {\it Spitzer} observations of SW1 \citep{schambeau_2015} is presented, which incorporates the 16 $\mu$m data for the first time and significantly improves characterization of the Centaur's tens-of-microns dust coma during a period of quiescent activity. The 16 $\mu$m blue PU images were remarkably symmetric, with evidence for a $\sim$ 70 percent coma enhancement in the south-southeast direction, which may be reflective of tail formation. The 16 $\mu$m coma's morphology indicated preferential sunward emission of dust grains. No signs of grain fragmentation were indicated by the data within the image FOV (273,000 $\times$ 386,000 km). Re-analysis of the 24 $\mu$m images reveals a large scale coma morphology of increased brightness in the southwest direction, consistent with preferential sunward emission.
These data also show a more compact wing feature initially directed toward the south-southwest out to a projected cometocentric distance of 352,000 km (90$''$) and curving toward the southeast. This feature has previously been interpreted as being due to the nucleus' rotation, but we propose instead that it is the result of solar radiation pressure and solar gravity acting on micron sized dust grains that were emitted in the sunward direction and turned back to form a dust tail. Further analysis of this feature is encouraged. Interestingly, analysis of the 24 $\mu$m surface brightness radial profiles shows a noticeable change of slope at $\sim$ 520,000 km cometocentric distance at position angles $\sim$ 0 through 180 degrees. This change in slope is consistent with the projected distance to the outer edge of the curved feature. We used measurements of this turn-back point of the curved feature to estimate a dust grain outflow velocity in the range of 50$-$270 m/s, depending on the ejection direction of the grains. Using the improved 140 K color temperature measured from the IRS spectrum \citep{schambeau_2015} and in this work (Section \ref{sec:color_map}), we calculated the $\epsilon f \rho$ parameters: 16 $\mu$m (2600 $\pm$ 43 cm), 24 $\mu$m (5800 $\pm$ 63 cm), and 70 $\mu$m (1800 $\pm$ 900 cm). SW1's values were found to follow the $\epsilon f \rho$ vs. nucleus size relation observed for the WISE/NEOWISE comets \citep{bauer_2017}. Additionally, for the first time, we compare the WISE/NEOWISE and SEPPCoN \citep{kelley_2013} derived $\epsilon f \rho$ measurements and see agreement between the two surveys, strengthening the argument for the empirically derived relationship's application as a predictor of cometary coma activity. A coma model \citep{bockelee_2017} was used to constrain the coma's dust grain size distribution and mass loss rate. The model was constrained by 9$''$ radius aperture photometry measurements of the 16 $\mu$m, 24 $\mu$m, and 70 $\mu$m coma flux density. Models with a dust grain composition of a matrix of amorphous carbon with inclusions of (1) amorphous olivine or (2) crystalline water ice were in agreement with the {\it Spitzer} data. The two models had similar ranges for the best-fit grain size distributions: power-law index $\beta$ ranging from 4.1 to 4.4, minimum grain size $a_{\rm min}$ ranging from 4 $\mu$m to 5 $\mu$m, and maximum grain size $a_{\rm max}$ = 250 $\mu$m. The dust production rates derived with model parameters leading to a satisfactory fit to the data (50--100 kg/s) are in overall agreement with those estimated using the measured $\epsilon f \rho$ values. Using the 16 $\mu$m and 24 $\mu$m images we constructed a coma color-temperature map, which also peaks at $\sim$ 140 K, decreases with increasing cometocentric distance, and shows an east-west asymmetry, with the eastern coma being $\sim$ 20 K warmer. This behavior is likely the result of a particle size distribution of grains of varying compositions. Future analyses of these data are encouraged to better constrain SW1's large grain coma environment. We used the 140 K color temperature as a plausible physical temperature for individual grains. This assumption is supported by our earlier analysis of the IRS spectrum \citep{schambeau_2015}. Using the dust production rates measured here, we estimated an H$_2$O production rate from the sublimation of icy coma grains: $Q_{\textrm{H}_2\textrm{O}} \sim$ (1 - 3)$\times10^{27}$ molecules/s.
This range agrees with other measurements of SW1's water production rate \citep{ootsubo_2012_CO+CO2, bockelee_2021}. Coma modeling and its removal from the IRS blue PU imaging data at 16 $\mu$m were used, along with measurements at other infrared wavelengths, to produce a nucleus radius of $R$ = 32.3 $\pm$ 3.1 km for SW1, which is within 1-$\sigma$ of, and has smaller uncertainties than, prior measurements using {\it Spitzer} data \citep{stansberry_2004, stansberry_2008, schambeau_2015}. This analysis also yields a slightly higher NEATM-derived beaming parameter ($\eta = 1.1 \pm 0.2$). The size of SW1 places it on the smaller end of the currently-known Centaur size distribution \citep{bauer_2013, duffard_2014, lellouch_2013}, but on the larger end for small bodies with known cometary activity \citep{stansberry_2008, fernandez_2013}. With the refined nucleus size estimate presented here, we encourage future modeling efforts to better understand the bound inner coma environment of SW1. The Centaur SW1's large size among active objects, in combination with an orbital history indicating it has not spent a significant amount of time interior to Jupiter \citep{sarid_2019}, positions it as a high-priority target for future observational and in situ investigations of moderately sized, relatively pristine planetesimals, and of the thermal evolution experienced during the gateway transition from Centaur to JFC. We encourage the community to undertake new observations of SW1, and to list any existing and planned observations on the SW1 observing campaign website: \href{https://wirtanen.astro.umd.edu/29P/29P_obs.shtml}{wirtanen.astro.umd.edu/29P/29P\_obs.shtml}. Additionally, we provide here links to the following resources emphasizing the importance of continued observations of SW1 and best practices for new observations: (1) the call for observations from \cite{womack_2020} and (2) a guide for new observations provided by the British Astronomical Association \citep{miles_2019}. \FloatBarrier \acknowledgments We would like to thank Dr. Richard Miles for his helpful discussions during the preparation of this manuscript. Additionally, we would like to thank our two anonymous reviewers, whose thorough reviews of the manuscript improved the presentation of the analyses. We also thank the NASA Earth and Space Science Fellowship (NNX16AP41H) and the Center for Lunar and Asteroid Surface Science (CLASS, NNA14AB05A) for support of this work. This work is based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This research made use of Tiny Tim/Spitzer, developed by John Krist for the Spitzer Science Center. The Center is managed by the California Institute of Technology under a contract with NASA. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.
{ "timestamp": "2021-05-19T02:20:21", "yymm": "2105", "arxiv_id": "2105.01789", "language": "en", "url": "https://arxiv.org/abs/2105.01789" }
\subsection{Similarity join under $\ell_2$ metric} \label{sec:l2} In this section, we consider the similarity join between two point sets $A$ and $B$ in $\Re^d$ under the $\ell_2$ metric. \paragraph{Reduction to halfspace containment.} We use the {\em lifting transformation}~\cite{de1997computational} to convert an instance of the similarity join problem under the $\ell_2$ metric to the halfspace-containment problem in $\Re^{d+1}$. For any two points $a=(a_1,\ldots, a_d)\in A$ and $b=(b_1, \ldots, b_d)\in B$, $\lVert a-b \rVert_2\leq 1$ if and only if $(a_1-b_1)^2+\ldots +(a_d-b_d)^2\leq 1$, i.e., $a$ lies in the unit sphere centered at $b$. The above condition can be rewritten as \[ a_1^2+b_1^2 +\cdots + a_d^2 + b_d^2 - 2a_1 b_1 - \cdots - 2a_d b_d - 1 \le 0.\] We map the point $a$ to a point $a'=(a_1, \dots, a_d, a_1^2 + \cdots + a_d^2)$ in $\Re^{d+1}$ and the point $b$ to a halfspace $b'$ in $\Re^{d+1}$ defined as \[b': -2 b_1 z_1 - \cdots - 2b_d z_d +z_{d+1}+b_1^2 +\cdots + b_d^2 - 1\le 0.\] Note that $\lVert a-b \rVert_2\leq 1$ if and only if $a'\in b'$. Set $A'=\{a'\mid a\in A\}$ and $B'=\{b'\mid b\in B\}$. Thus, in the following, we study the halfspace-containment problem: given a set of points $A'$ and a set of halfspaces $B'$, construct a dynamic data structure that reports all pairs $(a\in A', b\in B')$ such that the point $a$ lies in the halfspace $b$, with a delay guarantee. \paragraph{Partition tree.} A partition tree on a set $P$ of points in $\Re^d$~\cite{chan2012optimal,matouvsek1992efficient, willard1982polygon} is a tree data structure formed by recursively partitioning a set into subsets. Each point is stored in exactly one leaf, and each leaf usually contains a constant number of points. Each node $u$ of the tree is associated with a simplex $\Delta_u$ and the subset $P_u=P\cap \Delta_u$; the subtree rooted at $u$ is a partition tree of $P_u$. We assume that the simplices associated with the children of a node $u$ are pairwise disjoint and lie inside $\Delta_u$, as in~\cite{chan2012optimal}. In general, the degree of a node is allowed to be non-constant. Given a query simplex $\Delta$, a partition tree finds a set of $O(n^{1-1/d})$ \emph{canonical} nodes whose cells contain the points of $P\cap \Delta$. Roughly speaking, a node $u$ is a canonical node for $\Delta$ if $\Delta_u\subset \Delta$ and $\Delta_{p(u)}\not\subseteq \Delta$, where $p(u)$ is the parent of $u$. A simplex counting (resp. reporting) query can be answered in $O(n^{1-1/d})$ (resp. $O(n^{1-1/d}+k)$) time using a partition tree. Chan~\cite{chan2012optimal} proposed a randomized algorithm that constructs a linear-size, constant-degree partition tree in $O(n\log n)$ time, with $O(n^{1-1/d})$ query time with high probability. \paragraph{Data structure.} For simplicity, and with a slight abuse of notation, let $A$ be a set of points in $\Re^d$ and $B$ a set of halfspaces in $\Re^d$, each lying below its bounding hyperplane; our goal is to build a dynamic data structure for the halfspace-containment join on $A,B$. The overall structure of the data structure is the same as for rectangle containment described in Section~\ref{sec:linfty}, so we simply highlight the differences. Instead of constructing a range tree, we construct a dynamic partition tree $\mathcal{T}_A$ for $A$ so that the points of $A$ lying in a halfspace can be represented as the union of $O(n^{1-1/d})$ canonical subsets.
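A quick numerical check of the lifting reduction (hypothetical helper names; the $\le 0$ convention matches the halfspace definition above):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 3

def lift_point(a):
    """a -> a' = (a_1, ..., a_d, sum a_i^2) in R^{d+1}."""
    return np.append(a, np.dot(a, a))

def halfspace(b):
    """Coefficients (w, w0) of b': w.z + w0 <= 0."""
    return np.append(-2 * b, 1.0), np.dot(b, b) - 1.0

for _ in range(10000):
    a, b = rng.uniform(-2, 2, d), rng.uniform(-2, 2, d)
    w, w0 = halfspace(b)
    # w.a' + w0 = ||a - b||^2 - 1, so containment <=> distance <= 1
    in_halfspace = np.dot(w, lift_point(a)) + w0 <= 0
    assert in_halfspace == (np.linalg.norm(a - b) <= 1)
print("lifting check passed")
\end{verbatim}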
For the hyperplane bounding a halfspace $b\in B$, let $\bar{b}$ denote its dual point in $\Re^d$ (see \cite{de1997computational} for the definition of the duality transform). Note that a point $a$ lies in $b$ if and only if the dual point $\bar{b}$ lies in the halfspace below the hyperplane dual to $a$. Set $\bar{B}=\{\bar{b}\mid b\in B\}$. We construct a multi-level dynamic partition tree $\mathcal{T}_B$ on $\bar{B}$, so that for a pair of simplices $\Delta_1$ and $\Delta_2$, it returns the number of halfspaces of $B$ that satisfy the following two conditions: (i) $\Delta_1\subseteq b$ and (ii) $\Delta_2\cap \partial b\neq \emptyset$, where $\partial b$ is the bounding hyperplane of the halfspace $b$. This data structure uses $O(n)$ space, can be constructed in $\O(n)$ time, and answers a query in $\O(n^{1-1/d})$ time. For each node $u \in \mathcal{T}_A$, we issue a counting query to $\mathcal{T}_B$ and get the number of halfspaces in $B$ that have $u$ as a canonical node. Hence, $\mathcal{T}_A$ can be built in $\O(n^{2-1/d})$ time. For a node $u$, $\mu_A(u)$ can be computed in $O(1)$ time by storing $A_u$ at each node $u\in \mathcal{T}_A$. Recall that $\mu_B(u)$ is the number of halfspaces $b$ of $B$ for which $u$ is a canonical node, i.e., $\Delta_u \subseteq b$ and $\Delta_{p(u)}\cap \partial b\neq \emptyset$, where $p(u)$ is the parent of $u$. Using $\mathcal{T}_B$, $\mu_B(u)$ can be computed in $\O(n^{1-1/d})$ time. \paragraph{Update and enumeration.} The update procedure is the same as that in Section~\ref{sec:linfty}; however, the query time on $\mathcal{T}_A$ or $\mathcal{T}_B$ is now $\O(n^{1-\frac{1}{d}})$, so the amortized update time is $\O(n^{1-\frac{1}{d}})$. The enumeration query is also the same as in Section~\ref{sec:linfty}, but a reporting query in $\mathcal{T}_B$ takes $\O(n^{1-\frac{1}{d}}+k)$ time (with delay at most $\O(n^{1-\frac{1}{d}})$), so the overall delay is $\O(n^{1-\frac{1}{d}})$. \begin{theorem} \label{thm:points-halfspaces} Let $A$ be a set of points and $B$ be a set of half-spaces in $\mathbb{R}^d$ with $|A| + |B| = n$. A data structure of $\O(n)$ size can be built in $ \O(n^{2-\frac{1}{d}})$ time and updated in $\O(n^{1-\frac{1}{d}})$ amortized time while supporting $\O(n^{1-\frac{1}{d}})$-delay enumeration of halfspace-containment query. \end{theorem} Using Theorem~\ref{thm:points-halfspaces} and the lifting transformation described at the beginning of this section, we conclude with Corollary~\ref{cor:ell2}. \begin{corollary} \label{cor:ell2} Let $A, B$ be two sets of points in $\Re^d$, where $d\geq 1$ is a constant, with $|A|+|B|=n$. A data structure of $\O(n)$ size can be constructed in $ \O(n^{2-\frac{1}{d+1}})$ time and updated in $\O(n^{1-\frac{1}{d+1}})$ amortized time, while supporting $\O(n^{1-\frac{1}{d+1}})$-delay enumeration of similarity join under the $\ell_{2}$ metric. \end{corollary} \paragraph{Lower bound.} We show a lower bound for similarity join under the $\ell_2$ metric in the pointer-machine model, based on the hardness of the unit-sphere reporting problem. Let $P$ be a set of $n$ points in $\Re^d$ for $d>3$. The unit-sphere reporting problem asks for a data structure on the points of $P$ such that, given any unit sphere $b$, all points of $P\cap b$ are reported. For $d\geq 4$, no data structure using $\O(n)$ space can answer unit-sphere reporting queries in $\O(k+1)$ time in the pointer-machine model, where $k$ is the output size \cite{afshani2012improved}.
For any instance of the unit-sphere reporting problem, we construct an instance of similarity join over two sets, with $A = \emptyset$, $B = P$, and $r=1$. Given a query unit sphere with center $q$, we insert the point $q$ into $A$, issue an enumeration query, and then remove $q$ from $A$. All results enumerated (if any) are the results of the unit-sphere reporting query. Hence, if there existed a data structure for enumerating similarity join under the $\ell_2$ metric using $\O(n)$ space, with $\O(1)$ update time and $\O(1)$ delay, we would break the lower bound above. \begin{theorem} \label{theorem:lowerboundSphere} Let $A, B$ be two sets of points in $\mathbb{R}^d$ for $d > 3$, with $|A| + |B| = n$. Using $\O(n)$ space, there is no data structure in the pointer-machine model that can be updated in $\O(1)$ time while supporting $\O(1)$-delay enumeration of similarity join under the $\ell_{2}$ metric. \end{theorem} \section{Approximate Enumeration} \label{sec:approximate} In this section we propose a dynamic data structure for answering approximate similarity-join queries under any $\ell_p$ metric. For simplicity, we use the $\ell_2$ norm to illustrate the main idea and assume $\phi(a,b)=||a-b||_2$. Recall that all pairs $(a,b)\in A\times B$ with $\phi(a,b)\leq r$ must be reported, along with (potentially) some pairs $(a', b')$ with $\phi(a',b')\leq (1+\varepsilon)r$, but no pair $(a,b)$ with $\phi(a,b)>(1+\varepsilon)r$ is reported. We start with the setting where the distance threshold $r$ is not fixed and is specified as part of a query, and then move to the simpler scenario where $r$ is fixed. \begin{figure} \centering \minipage{0.45\textwidth} \centering \vspace{3.2em} \includegraphics[scale=0.6]{wspd2.pdf} \vspace{0em} \caption{An example pair of an $\varepsilon$-WSPD.} \label{fig:wspd} \endminipage\hfill \minipage{0.55\textwidth} \centering \includegraphics[scale=0.5]{Grid.pdf} \caption{An example of an active cell $c$ in the grid.} \label{fig:Grid} \endminipage \end{figure} \subsection{Variable Similarity Threshold} \label{sec:wspd} We describe the data structure when $r$ is part of the query. In this subsection we assume that the spread of $A\cup B$ is polynomially bounded, i.e., $sp(A\cup B)=\frac{\max_{p,q \in A\cup B}\phi(p,q)}{\min_{p\neq q\in A\cup B}\phi(p,q)}=n^{O(1)}$. We use a quad tree and a well-separated pair decomposition (WSPD) for our data structure. We describe them briefly here and refer the reader to~\cite{har2011geometric, samet1989spatial} for details. \paragraph{Quad tree and WSPD.} A $d$-dimensional quad tree over a point set $P$ is a tree data structure $\mathcal{T}$ in which each node $u$ is associated with a hypercube $\square_u$ in $\Re^d$, called a \emph{cell}, and each internal node has $2^d$ children. The root is associated with a hypercube containing $P$. For a node $u$, let $P_u=P\cap \square_u$. A node $u$ is a leaf if $|P_u|\leq 1$. The tree recursively subdivides space into $2^d$ congruent hypercubes until a cell contains at most one point of $P$. If $sp(P)=n^{O(1)}$, the height of $\mathcal{T}$ is $O(\log n)$.
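A minimal sketch of the quad tree construction described above, for $d = 2$ with leaves holding at most one point (dynamic maintenance and the WSPD construction are omitted):
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class Node:
    x: float                      # cell is [x, x+size) x [y, y+size)
    y: float
    size: float
    points: List[Point] = field(default_factory=list)
    children: Optional[List["Node"]] = None

def build(node: Node) -> None:
    """Recursively split a cell into 2^d = 4 congruent child cells
    until each leaf holds at most one (distinct) point."""
    if len(node.points) <= 1:
        return
    half = node.size / 2
    node.children = [Node(node.x + i * half, node.y + j * half, half)
                     for i in (0, 1) for j in (0, 1)]
    for p in node.points:
        for ch in node.children:
            if (ch.x <= p[0] < ch.x + ch.size
                    and ch.y <= p[1] < ch.y + ch.size):
                ch.points.append(p)
                break
    node.points = []              # interior nodes store no points
    for ch in node.children:
        build(ch)

pts: List[Point] = [(0.1, 0.2), (0.8, 0.7), (0.15, 0.22), (0.6, 0.9)]
root = Node(0.0, 0.0, 1.0, points=list(pts))
build(root)
\end{verbatim}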
Given two point sets $A, B\subset \Re^d$, with $|A|+|B|=n$, and a parameter $0< \varepsilon < \frac{1}{2}$, a family of pairs $\mathcal{W}=\{(A_1, B_1), (A_2, B_2), \cdots, (A_s, B_s)\}$ is an $\varepsilon$-WSPD if the following conditions hold: (1) for any $i\leq s$, $A_i\subseteq A$ and $B_i\subseteq B$; (2) for each pair of points $(a,b) \in A\times B$, there exists a unique pair $(A_j, B_j)\in \mathcal{W}$ such that $a \in A_j$ and $b\in B_j$; (3) for any $i\leq s$, $\max\{\operatorname{diam}(A_i), \operatorname{diam}(B_i)\} \leq \varepsilon \cdot \phi(A_i, B_i)$, where $\operatorname{diam}(X) = \max_{x,y\in X} \phi(x,y)$ and $\phi(X, Y) = \min_{x \in X, y\in Y} \phi(x,y)$ (see Figure~\ref{fig:wspd}). As shown in~\cite{har2011geometric, har2006fast}, if $sp(A\cup B)=n^{O(1)}$, a quad tree $\mathcal{T}$ on $A\cup B$ can be used to construct, in $O(n\log n + \varepsilon^{-d}n)$ time, a WSPD $\mathcal{W}$ of size $O(\varepsilon^{-d}n)$ such that each pair $(A_i, B_i)\in \mathcal{W}$ is associated with a pair of cells $(\square_i, \boxplus_i)$ in $\mathcal{T}$, where $A_i=A\cap \square_i$ and $B_i=B\cap \boxplus_i$. It is also known that for each pair $(A_i, B_i)\in \mathcal{W}$, (i) $\square_i\cap \boxplus_i=\emptyset$ and (ii) $\max\{\operatorname{diam}(\square_i), \operatorname{diam}(\boxplus_i)\}\leq \varepsilon\phi(\square_i, \boxplus_i)$, and that each cell appears in $O(\varepsilon^{-d}\log n)$ pairs (see Figure~\ref{fig:wspd}). We will use $\mathcal{W}=\{(\square_1,\boxplus_1), \ldots, (\square_s,\boxplus_s)\}$ to denote the WSPD, with $A_i, B_i$ being implicitly defined from the cells. Using the techniques in~\cite{callahan1995dealing, fischer2005dynamic}, the quad tree $\mathcal{T}$ and the WSPD $\mathcal{W}$ can be maintained under insertions and deletions of points in $\O(\varepsilon^{-d})$ time. \paragraph{Data structure.} We construct a quad tree $\mathcal{T}$ on $A\cup B$. For each node $u \in \mathcal{T}$, we store a pointer $A_u$ (resp. $B_u$) to the leftmost leaf of the subtree $\mathcal{T}_u$ that contains a point from $A$ (resp. $B$). Furthermore, we store sorted lists $L_A$ and $L_B$ of the leaves that contain points from $A$ and $B$, respectively. We use these pointers and lists to report points in $\square_u$ with $O(1)$ delay. Using $\mathcal{T}$, we can construct an $(\varepsilon/2)$-WSPD $\mathcal{W}=\{(\square_1, \boxplus_1),\ldots, (\square_s, \boxplus_s)\}$, $s=O(\varepsilon^{-d}n)$; the factor $1/2$ in the separation parameter is what yields the $(1+\varepsilon)$ guarantee in the enumeration analysis below. For each $i$, let $\Delta_i=\min_{p\in \square_i, q\in\boxplus_i}\phi(p,q)$. We store all pairs $(\square_i, \boxplus_i)$ in a red-black tree $\mathcal{Z}$ using $\Delta_i$ as the key. The data structure has $O(\varepsilon^{-d}n)$ size and $O(\varepsilon^{-d}n\log n)$ construction time. \paragraph{Update.} After inserting or deleting an input point, the quad tree $\mathcal{T}$ and the WSPD $\mathcal{W}$ can be updated in $\O(\varepsilon^{-d})$ time, following the standard techniques in~\cite{callahan1995dealing, fischer2005dynamic}. As at most $\O(\varepsilon^{-d})$ pairs change, we can update the tree $\mathcal{Z}$ in $\O(\varepsilon^{-d})$ time. Furthermore, we note that there are only $O(1)$ changes in the structure of the quad tree $\mathcal{T}$ and the height of $\mathcal{T}$ is $O(\log n)$, so we can update all necessary pointers $A_u, B_u$ and the sorted lists $L_A, L_B$ in $O(\log n)$ time. \paragraph{Enumeration.} Let $r$ be the threshold parameter specified as part of a query. We traverse the tree $\mathcal{Z}$ in order and report pairs of cells until we reach a pair $(\square_j, \boxplus_j)$ with $\Delta_j>r$.
\paragraph{Enumeration.} Let $r$ be the threshold specified as part of the query. We traverse the tree $\mathcal{Z}$ in order and report pairs of cells until we reach a pair $(\square_j, \boxplus_j)$ with $\Delta_j>r$. For each pair $(\square_i, \boxplus_i)$ reported, we enumerate $(a,b)\in (A\cap \square_i) \times (B\cap \boxplus_i)$ using the stored pointers and the sorted lists $L_A, L_B$. The delay guarantee is $O(1)$. Let $(a, b) \in A \times B$ be a pair with $\phi(a,b)\leq r$. By the definition of a WSPD, there exists a unique pair $(A_i, B_i)\in \mathcal{W}$ such that $a\in A_i$ and $b\in B_i$. Notice that $\phi(\square_i, \boxplus_i) \leq\phi(a,b) \leq r$. Thus, all results of $A_i \times B_i$ will be reported, including $(a,b)$. Next, let $(\square_i, \boxplus_i)$ be a pair reported by the enumeration procedure in $\mathcal{Z}$, i.e., $\phi(\square_i, \boxplus_i)\leq r$. Constructing the WSPD with parameter $\varepsilon/2$ (which changes the bounds above only by constant factors), for any pair of points $x \in \square_i, y \in \boxplus_i$ we have $\phi(x,y) \le \phi(\square_i,\boxplus_i)+\operatorname{diam}(\square_i)+\operatorname{diam}(\boxplus_i) \le (1+2\cdot \frac{\varepsilon}{2}) \cdot \phi(\square_i,\boxplus_i) \le (1+\varepsilon)r$; thus $\phi(a,b)\leq (1+\varepsilon)r$ for every pair $(a,b)\in A_i\times B_i$. \begin{theorem} Let $A, B$ be two sets of points in $\mathbb{R}^d$ for constant $d$, with $n^{O(1)}$ spread and $|A| + |B| = n$. A data structure of $O(\varepsilon^{-d}n)$ space can be built in $\O(\varepsilon^{-d}n)$ time and updated in $\O(\varepsilon^{-d})$ time, while supporting $\varepsilon$-approximate enumeration for similarity join under any $\ell_p$ metric with $O(1)$ delay, for any query similarity threshold $r$. \end{theorem} \subsection{Fixed distance threshold} \label{sec:grid} Without loss of generality we assume that $r=1$. We use a grid-based data structure for enumerating similarity join with a fixed distance threshold $r$. \paragraph{Data structure.} Let $\mathcal{G}$ be an infinite uniform grid\footnote{When extending to any $\ell_p$ norm, the side length of each grid cell is $\varepsilon/(2d^{1/p})$, so that its diameter remains $\frac{\varepsilon}{2}$.} in $\Re^d$, where the side length of each grid cell is $\frac{\varepsilon}{2\sqrt{d}}$ and hence its diameter is $\frac{\varepsilon}{2}$. For a pair of cells $c,c' \in \G$, define $\phi(c,c') = \min_{p \in c, q \in c'} \phi(p,q)$. Each grid cell $c\in \mathcal{G}$ is associated with (1) $A_c=A \cap c$; (2) $B_c=B \cap c$; and (3) the counter $m_c=\sum_{c': \phi(c,c')\leq 1} |B_{c'}|$, i.e., the number of points of $B$ that lie in a cell $c'$ within distance $1$ of cell $c$. Let $\mathcal{C}_{NE}\subseteq \mathcal{G}$ be the set of all non-empty cells, $\mathcal{C}_{NE}=\{c\in \mathcal{G}\mid A_c\cup B_c\neq \emptyset\}$. A grid cell $c\in \mathcal{C}_{NE}$ is {\em active} if and only if $A_c\neq \emptyset$ and $m_c > 0$ (see Figure~\ref{fig:Grid} for an example). Let $\mathcal{C} \subseteq \mathcal{C}_{NE}$ be the set of active grid cells. Notice that a grid cell is stored only when at least one point of $A$ or $B$ lies inside it, so $|\mathcal{C}_{NE}|\leq n$. Finally, we build a balanced search tree on $\mathcal{C}$ so that whether a cell $c$ is stored in $\mathcal{C}$ can be answered in $O(\log n)$ time; similarly, we build another balanced search tree storing the set of non-empty cells $\mathcal{C}_{NE}$. \paragraph{Update.} Assume point $a \in A$ is inserted into cell $c\in \mathcal{G}$. If $c$ is already in $\mathcal{C}_{NE}$, we simply add $a$ to $A_c$. Otherwise, we add $c$ to $\mathcal{C}_{NE}$ with $A_c=\{a\}$ and compute $m_c$ as follows: we visit each cell $c' \in \mathcal{C}_{NE}$ with $\phi(c,c') \le 1$ and add $|B_{c'}|$ to $m_c$. In either case, if $A_c$ was previously empty and $m_c>0$, we add $c$ to $\mathcal{C}$.
A point of $A$ is deleted in a similar manner. Next, assume point $b \in B$ is inserted into cell $c\in \mathcal{G}$. If $c \notin \mathcal{C}_{NE}$, we add it to $\mathcal{C}_{NE}$. In either case, we first insert $b$ into $B_c$ and, for every cell $c'\in \mathcal{C}_{NE}$ with $\phi(c,c') \le 1$, we increase $m_{c'}$ by $1$ and add $c'$ to $\mathcal{C}$ if $c'$ turns from inactive to active. A point of $B$ is deleted in a similar manner. As there are $O(\varepsilon^{-d})$ cells within distance $1$ of $c$, this procedure takes $\O(\varepsilon^{-d})$ time. \paragraph{Enumeration.} For each active cell $c\in \mathcal{C}$, we visit each cell $c'\in \mathcal{C}_{NE}$ within distance $1$. If $B_{c'}\neq \emptyset$, we report all pairs of points in $A_c \times B_{c'}$. Clearly, each pair of points is enumerated at most once. For an active cell $c$, there must exist a pair $(a\in A_c, b\in B_{c'})$ for some cell $c' \in \mathcal{C}_{NE}$ such that $\phi(a,b) \leq \phi(c,c')+\operatorname{diam}(c)+\operatorname{diam}(c')\leq 1+\varepsilon$. Hence, it takes at most $O(\varepsilon^{-d}\log n)$ time before finding at least one result for $c$, so the delay is $O(\varepsilon^{-d}\log n)$. Furthermore, consider any pair of points $a,b$ with $\phi(a,b)\leq 1$, and assume $a\in c$ and $b\in c'$. By definition, $c$ must be an active grid cell, so $(a,b)$ is certainly enumerated by this procedure, which guarantees the correctness of $\varepsilon$-approximate enumeration. \begin{theorem} Let $A, B$ be two sets of points in $\mathbb{R}^d$ for some constant $d$, with $|A| + |B| = n$. A data structure of $O(n)$ size can be constructed in $O(n\varepsilon^{-d}\log n)$ time and updated in $O(\varepsilon^{-d}\log n)$ time, while supporting $\varepsilon$-approximate enumeration of similarity join under any $\ell_p$ metric with $O(\varepsilon^{-d}\log n)$ delay. \end{theorem} Note that if, for each active cell $c\in \mathcal{C}$, we store the cells within distance $1$ that contain at least one point of $B$, i.e., $\{c'\in \mathcal{C}_{NE}\mid \phi(c,c')\leq 1, B_{c'}\neq \emptyset\}$, then the delay can be further reduced to $O(1)$, but the space becomes $O(\varepsilon^{-d}n)$.
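The following Python sketch (our own simplification, not the paper's implementation) illustrates the grid index for the fixed threshold $r=1$ under the $\ell_2$ metric; it omits the counters $m_c$ and the active-cell bookkeeping, and scans all non-empty cells of $B$ instead of only the $O(\varepsilon^{-d})$ nearby ones.
\begin{verbatim}
# Simplified fixed-threshold grid index under l2 with r = 1 (our own toy).
import math
from collections import defaultdict

class GridIndex:
    def __init__(self, d, eps):
        self.side = eps / (2 * math.sqrt(d))   # cell diameter is eps/2
        self.A = defaultdict(set)              # cell -> points of A inside it
        self.B = defaultdict(set)              # cell -> points of B inside it

    def cell(self, p):
        return tuple(int(math.floor(x / self.side)) for x in p)

    def min_dist(self, c1, c2):
        # minimum l2 distance between two grid cells
        return math.hypot(*(max(abs(i - j) - 1, 0) * self.side
                            for i, j in zip(c1, c2)))

    def insert(self, side, p):
        (self.A if side == 'A' else self.B)[self.cell(p)].add(p)

    def enumerate(self):
        # every pair within distance 1 is reported; a reported pair is within
        # 1 + eps, since two cell diameters add at most eps of slack
        for c, As in self.A.items():
            for c2, Bs in self.B.items():
                if self.min_dist(c, c2) <= 1:
                    for a in As:
                        for b in Bs:
                            yield (a, b)

g = GridIndex(d=2, eps=0.5)
g.insert('A', (0.0, 0.0)); g.insert('B', (0.5, 0.5)); g.insert('B', (9.0, 9.0))
print(list(g.enumerate()))   # only the nearby pair is reported
\end{verbatim}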
\section{Exact Similarity Join} \label{sec:exact} In this section, we describe data structures for exact similarity join under the $\ell_\infty, \ell_1$, and $\ell_2$ metrics, assuming $d$ is a constant. We first describe the data structure for the $\ell_{\infty}$ metric. We then show that similarity join under the $\ell_1$ metric in $\Re^d$ can be reduced to similarity join under the $\ell_{\infty}$ metric in $\Re^{d+1}$. Finally, we describe the data structure for the $\ell_2$ metric. Throughout this section, the threshold $r$ is fixed and assumed to be $1$ without loss of generality. \subsection{Similarity join under $\ell_\infty$ metric} \label{sec:linfty} Let $A$ and $B$ be two point sets in $\Re^d$ with $|A|+|B|=n$. For a point $p\in\Re^d$, let $\mathcal{B}(p)=\{x\in \Re^d\mid \norm{p-x}_{\infty}\leq 1\}$ be the hypercube of side length $2$ centered at $p$. We wish to enumerate all pairs $(a,b)\in A\times B$ such that $a\in \mathcal{B}(b)$. \paragraph{Data structure.} We build a $d$-dimensional dynamic range tree $\mathcal{T}$ on the points of $A$. For $d=1$, the range tree on $A$ is a balanced binary search tree $\mathcal{T}$ of $O(\log n)$ height. The points of $A$ are stored at the leaves of $\mathcal{T}$ in increasing order, while each internal node $v$ stores the smallest and largest values, $\alpha_v^-$ and $\alpha_v^+$, respectively, contained in its subtree. The node $v$ is associated with the interval $I_v=[\alpha_v^-, \alpha_v^+]$ and the subset $A_v=I_v\cap A$. For $d>1$, $\mathcal{T}$ is constructed recursively: we build a $1$D range tree $\mathcal{T}_d$ on the $x_d$-coordinates of the points of $A$. Next, for each node $v\in \mathcal{T}_d$, we recursively construct a $(d-1)$-dimensional range tree $\mathcal{T}_v$ on $A_v^*$, defined as the projection of $A_v$ onto the hyperplane $x_d=0$, and attach $\mathcal{T}_v$ to $v$ as its secondary tree. The size of $\mathcal{T}$ in $\Re^d$ is $O(n\log^{d-1} n)$ and it can be constructed in $O(n\log^d n)$ time; see~\cite{de1997computational} for details. For a node $v$ of a level-$i$ tree, let $p(v)$ denote its parent in that tree; if $v$ is the root of that tree, $p(v)$ is undefined. With each node $u$ of a level-$d$ tree of $\mathcal{T}$, we associate a $d$-tuple $\pi(u)=\langle u_1, u_2, \ldots, u_d=u\rangle$, where $u_i$ is the node of the level-$i$ tree of $\mathcal{T}$ to which the level-$(i+1)$ tree containing $u_{i+1}$ is attached. We associate the rectangle $\square_u=\prod_{j=1}^d I_{u_j}$ with the node $u$. For a rectangle $\rho=\prod_{i=1}^d \delta_i$, a level-$d$ node $u$ is called a \emph{canonical node} of $\rho$ if for every $i\in [1,d]$, $I_{u_i}\subseteq \delta_i$ and $I_{p(u_i)}\not\subseteq \delta_i$. For any rectangle $\rho$, there are $O(\log^d n)$ canonical nodes in $\mathcal{T}$, denoted by $\mathcal{N}(\rho)$, and they can be computed in $O(\log^d n)$ time~\cite{de1997computational}. $\mathcal{T}$ can be maintained dynamically, as points are inserted into or deleted from $A$, using the standard partial-reconstruction method, which periodically reconstructs various bottom subtrees; the amortized update time is $O(\log^d n)$, see~\cite{overmars1987design} for details. We query $\mathcal{T}$ with $\mathcal{B}(b)$ for all $b\in B$ and compute $\mathcal{N}(b):=\mathcal{N}(\mathcal{B}(b))$, the set of its canonical nodes. For each level-$d$ tree node $u$ of $\mathcal{T}$, let $B_u=\{b\in B\mid u\in \mathcal{N}(b)\}$; we have $\sum_{u}|B_u|=O(n\log^d n)$. By construction, $\norm{a-b}_{\infty}\leq 1$ for all pairs $(a,b)\in A_u\times B_u$, so $(A_u, B_u)$ is a bi-clique of join results. We call $u$ \emph{active} if both $A_u\neq\emptyset$ and $B_u\neq \emptyset$. A naive approach for reporting the join results is to maintain $A_u, B_u$ for every level-$d$ node $u$ of $\mathcal{T}$, as well as the set $\mathcal{C}$ of all active nodes; whenever an enumeration query is issued, we traverse $\mathcal{C}$ and return $A_u\times B_u$ for all $u\in \mathcal{C}$ (in the tripartite-graph framework mentioned in the introduction, $C$ is the set of all level-$d$ nodes of $\mathcal{T}$). The difficulty with this approach is that when $A$ changes and $\mathcal{T}$ is updated, some level-$d$ nodes change, and we would have to construct $B_u$ for each new level-$d$ node $u\in \mathcal{T}$; it is too expensive to scan the entire set $B$ at each update. Furthermore, although the average size of $B_u$ is small, it can be very large for a particular $u$, and this node may appear and disappear several times. So we need a different approach; the following lemma is the key observation. \begin{lemma} \label{lem:A} Let $u$ be a level-$d$ node, and let $\pi(u)=\langle u_1, \ldots, u_d=u\rangle$.
Then there is a $d$-dimensional rectangle $\mathcal{R}(u)=\prod_{i=1}^d\delta_i$, where the endpoints of $\delta_i$, for $i\in [1,d]$, are defined by the endpoints of $I_{u_i}$ and $I_{p(u_i)}$, such that for any $x\in \Re^d$, $u\in \mathcal{N}(x)$ if and only if $x\in \mathcal{R}(u)$. Given the $u_i$'s and $p(u_i)$'s, $\mathcal{R}(u)$ can be constructed in $O(1)$ time. \end{lemma} \begin{figure}[t] \centering \includegraphics[scale=0.4]{figure.png} \caption{Left: Two levels of the range tree. Right: Definition of $\mathcal{R}(u)$.} \label{fig:example} \end{figure} \begin{proof} Notice that $\mathcal{B}(x)$ is the hypercube of side length $2$ centered at $x$. Let $I_{u_i}=[\alpha_{u_i}^-, \alpha_{u_i}^+]$ for each $i\in[1,d]$. Recall that $u\in \mathcal{N}(x)$ if and only if for each $i\in [1,d]$, \[I_{u_i}\subseteq [x_i-1, x_i+1] \textrm{ and } I_{p(u_i)}\not\subseteq [x_i-1, x_i+1]. \ \ \ (*)\] Fix a value of $i$. From the construction of a range tree, either $\alpha_{u_i}^-=\alpha_{p(u_i)}^-$ or $\alpha_{u_i}^+=\alpha_{p(u_i)}^+$. Without loss of generality, assume $\alpha_{u_i}^-=\alpha_{p(u_i)}^-$; the other case is symmetric. Then ($*$) can be written as $x_i\leq \alpha_{u_i}^-+1$ and $\alpha_{u_i}^+-1\leq x_i<\alpha_{p(u_i)}^+-1$. Therefore $x_i$ has to satisfy three $1$D linear constraints, whose feasible region is an interval $\delta_i$, i.e., $x_i\in \delta_i$ (see also Figure~\ref{fig:example}). Hence, $u$ is a canonical node of $\mathcal{B}(x)$ if and only if $x_i\in \delta_i$ for all $i\in [1,d]$; in other words, $x=(x_1,\ldots, x_d)\in \prod_{i=1}^d\delta_i := \mathcal{R}(u)$. The endpoints of $\delta_i$ are endpoints of $I_{u_i}$ or $I_{p(u_i)}$. In order to construct $\mathcal{R}(u)$, we only need the intervals $I_{u_i}$ and $I_{p(u_i)}$ for each $i\in [1,d]$, so it can be constructed in $O(d)=O(1)$ time. \end{proof} In view of Lemma~\ref{lem:A}, we proceed as follows. We build a dynamic range tree $\mathcal{Z}$ on $B$. Furthermore, we augment the range tree $\mathcal{T}$ on $A$ as follows: for each level-$d$ node $u\in \mathcal{T}$, we compute and store $\mathcal{R}(u)$ and $\beta_u=|B_u|$. By construction, $|A_u|\geq 1$ for all $u$. We also store at $u$ a pointer to the leftmost leaf of the subtree of $\mathcal{T}$ rooted at $u$, and we thread all the leaves of each level-$d$ tree so that, for a node $u$, $A_u$ can be reported in $O(|A_u|)$ time. Updating these pointers as $\mathcal{T}$ is updated is straightforward. Whenever a new node $u$ of $\mathcal{T}$ is constructed, we query $\mathcal{Z}$ with $\mathcal{R}(u)$ to compute $\beta_u$. Finally, we store $\mathcal{C}$, the set of all active nodes of $\mathcal{T}$, in a red-black tree so that a node can be inserted or deleted in $O(\log n)$ time. The total size of the data structure is $O(n\log^{d-1} n)$, and it can be constructed in $O(n\log^d n)$ time. \paragraph{Update and Enumerate.} Updating $A$ is straightforward: we update $\mathcal{T}$, query $\mathcal{Z}$ with $\mathcal{R}(u)$ for every newly created level-$d$ node $u$ of $\mathcal{T}$ to compute $\beta_u$, and update $\mathcal{C}$ to delete all active nodes that are no longer in $\mathcal{T}$ and to insert the new active nodes. Since the amortized time to update $\mathcal{T}$ as a point is inserted or deleted is $O(\log^d n)$, the amortized update time for a point of $A$ is $O(\log^{2d} n)$: we spend $O(\log^d n)$ time to compute $\beta_u$ for each of the $O(\log^d n)$ newly created nodes. If a point $b$ is inserted into (resp.
deleted from) $B$, we update $\mathcal{Z}$ and query $\mathcal{T}$ with $\mathcal{B}(b)$. For each canonical node $u \in \mathcal{N}(b)$, we increment (resp. decrement) $\beta_u$. If $u$ becomes active (resp. inactive), we insert (resp. delete) $u$ in $\mathcal{C}$ in $O(\log n)$ time. The amortized update time for $b$ is $O(\log^{d+1} n)$. Finally, to enumerate the pairs in the join results, we traverse the active nodes $\mathcal{C}$ and, for each $u\in \mathcal{C}$, we first query $\mathcal{Z}$ with $\mathcal{R}(u)$ to recover $B_u$. Recall that $B_u$ is reported as a set of $O(\log^d n)$ canonical nodes of $\mathcal{Z}$ whose leaves contain the points of $B_u$. We simultaneously traverse the leaves of the subtree of $\mathcal{T}$ rooted at $u$ to compute $A_u$ and report $A_u\times B_u$. The traversals can be performed with $O(\log^{d} n)$ maximum delay. Putting everything together, we obtain: \begin{theorem} \label{the:rectangle-point} Let $A, B$ be two sets of points in $\Re^d$, where $d\geq 1$ is a constant, with $|A|+|B|=n$. A data structure of $\O(n)$ size can be built in $\O(n)$ time and updated in $\O(1)$ amortized time, while supporting $\O(1)$-delay enumeration of similarity join under the $\ell_{\infty}$ metric. \end{theorem} \subsection{Similarity join under $\ell_1$ metric} For $d\leq 2$, it is straightforward to reduce similarity join under the $\ell_1$ metric to the $\ell_\infty$ metric. For $d=1$, the $\ell_1$ metric is identical to the $\ell_\infty$ metric. For $d=2$, notice that the $\ell_1$ ball is a diamond, while the $\ell_\infty$ ball is a square; hence, given an instance of similarity join under the $\ell_1$ metric, we can rotate $A\cup B$ by $45$ degrees to create an equivalent instance of similarity join under the $\ell_\infty$ metric. Next, we focus on $d\geq 3$. The data structure proposed in Section~\ref{sec:linfty} for the $\ell_\infty$ norm extends straightforwardly to the \emph{rectangle-containment} problem, in which each $b\in B$ is associated with an arbitrary axis-aligned hyper-rectangle $\mathcal{B}(b)$ centered at $b$, and the goal is to report all pairs $(a,b)\in A\times B$ such that $a\in \mathcal{B}(b)$. Lemma~\ref{lem:A} can be extended so that $\mathcal{R}(u)$ is a $2d$-dimensional rectangle. Overall, Theorem~\ref{the:rectangle-point} remains the same assuming the $\mathcal{B}(b)$'s are hyper-rectangles (and not hypercubes). Given an instance of similarity join under the $\ell_1$ metric in $\Re^d$, we next show how to reduce it to $2^d$ $(d+1)$-dimensional rectangle-containment problems. As above, assume $r= 1$, so our goal is to report all pairs $a=(a_1,\ldots, a_d)\in A$, $b=(b_1,\ldots, b_d)\in B$ such that $\sum_{i=1}^d|a_i-b_i|\leq 1$. Let $E=\{-1,+1\}^d$ be the set of all $2^d$ vectors in $\Re^d$ with coordinates $+1$ or $-1$. For each vector $e\in E$, we construct an instance of the rectangle-containment problem. For each $e=(e_1,\ldots, e_d)\in E$, we map each point $a=(a_1, \ldots, a_d)\in A$ to the point $\bar{a}_e=(a_1, \ldots, a_d, \sum_{i=1}^d e_{i}a_i) \in \Re^{d+1}$, and let $\bar{A}_e=\{\bar{a}_e\mid a\in A\}$. For each point $b=(b_1, \ldots, b_d)\in B$, we construct the axis-aligned rectangle $\bar{b}_e=\prod_{i=1}^{d+1} b_e^{(i)}$ in $\Re^{d+1}$, where $b_e^{(i)}$ is the interval $[b_i, \infty)$ if $e_{i}=1$ and $(-\infty, b_i]$ if $e_{i}=-1$, for each $i=1,\ldots, d$, and $b_e^{(d+1)}=(-\infty, 1+\sum_{i=1}^d e_{i}b_{i}]$. Let $\bar{B}_e=\{\bar{b}_e\mid b\in B\}$. See Figure~\ref{fig:l1transformation}.
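The mapping can be checked mechanically; the following Python sketch (our own illustration, with hypothetical helper names) lifts a point of $A$, builds the possibly unbounded rectangle of a point of $B$, and verifies on a small example that containment holds for exactly one sign vector.
\begin{verbatim}
# Sketch of the l1 -> rectangle-containment reduction (our own helper names).
from itertools import product

INF = float('inf')

def lift_a(a, e):
    # a point of A lifted to R^{d+1}
    return tuple(a) + (sum(ei * ai for ei, ai in zip(e, a)),)

def rect_b(b, e, r=1.0):
    # the (possibly unbounded) rectangle of b in R^{d+1} for sign vector e
    box = [(bi, INF) if ei == 1 else (-INF, bi) for ei, bi in zip(e, b)]
    box.append((-INF, r + sum(ei * bi for ei, bi in zip(e, b))))
    return box

def contains(box, p):
    return all(lo <= x <= hi for (lo, hi), x in zip(box, p))

# ||a - b||_1 = 0.6 <= 1: containment holds for exactly one sign vector e*.
a, b = (0.2, 0.9), (0.5, 0.6)
hits = [e for e in product((-1, 1), repeat=2)
        if contains(rect_b(b, e), lift_a(a, e))]
assert hits == [(-1, 1)]   # e* = (sign(a1-b1), sign(a2-b2))
\end{verbatim}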
\begin{figure*} \centering \includegraphics[scale=0.7]{l1transformation.pdf} \caption{An illustration of mapping each $b$ to rectangles.} \label{fig:l1transformation} \end{figure*} For each $e \in E$, we construct the dynamic rectangle-containment data structure for $\bar{A}_e, \bar{B}_e$. Whenever $A$ or $B$ is updated, we update all $2^d$ rectangle-containment data structures. A similarity-join enumeration query on $A, B$ is answered by enumerating the containment pairs of $(\bar{A}_e, \bar{B}_e)$ for each $e\in E$: whenever a pair $(\bar{a}_e, \bar{b}_e)$ is reported, we report $(a,b)$. The update time and delay are $\O(1)$. The correctness of the algorithm follows from the next lemma. Let $\sign(x)=+1$ if $x\geq 0$ and $-1$ otherwise. \begin{lemma} \label{lem:ell1} Let $a=(a_1,\ldots, a_d)\in A$, $b=(b_1,\ldots, b_d)\in B$ be an arbitrary pair of points, and let $e^*=(e^*_1,\ldots, e^*_d)$, where $e^*_i=\sign(a_i-b_i)$ for $1\leq i\leq d$. Then $\bar{a}_e\notin \bar{b}_e$ for all $e\in E\setminus\{e^*\}$. Furthermore, $\bar{a}_{e^*}\in \bar{b}_{e^*}$ if and only if $\lVert a-b \rVert_1\leq 1$. \end{lemma} \begin{proof} First, note that for any $e \in E \setminus \{e^*\}$, there must exist some index $j$ with $e_j \neq e^*_j$. Without loss of generality, assume $e_j = 1$ while $a_j < b_j$ (i.e., $e^*_j=-1$). By the definition of $\bar{a}_e, \bar{b}_e$, we have $a_j\notin [b_j, \infty)$, and thus $\bar{a}_e \notin \bar{b}_e$. Next, we show that $\bar{a}_{e^*}\in \bar{b}_{e^*}$ if and only if $\lVert a-b \rVert_1\leq 1$. On one hand, assume $\bar{a}_{e^*}\in \bar{b}_{e^*}$. By definition, $\sum_{i=1}^d e^*_i a_i$ lies in the interval $b^{(d+1)}_{e^*}$, i.e., $\sum_{i=1}^d e^*_i a_i \le 1 + \sum_{i=1}^d e^*_i b_i$, or $\sum_{i=1}^d e^*_i (a_i -b_i) \le 1$. Since $\lVert a-b\rVert_1 = \sum_{i=1}^d e^*_{i}(a_i-b_i)$, we have $\lVert a-b\rVert_1 \le 1$. On the other hand, assume $\lVert a-b\rVert_1 \le 1$. Similarly, we have $\lVert a-b\rVert_1 = \sum_{i=1}^d e^*_{i}(a_i-b_i) \le 1 \Leftrightarrow \sum_{i=1}^d e^*_{i}a_i \le 1 + \sum_{i=1}^d e^*_{i} b_i$, i.e., $\sum_{i=1}^d e^*_{i}a_i \in (-\infty, 1 + \sum_{i=1}^d e^*_{i} b_i]$. Moreover, for every $i \in \{1,\ldots, d\}$: (1) if $e^*_i = 1$, then $a_i \ge b_i$, i.e., $a_i \in [b_i, \infty)$; (2) if $e^*_i = -1$, then $a_i \le b_i$, i.e., $a_i \in (-\infty, b_i]$. Hence, $\bar{a}_{e^*} \in \bar{b}_{e^*}$. \end{proof} \begin{figure}[h] \centering \includegraphics[scale=0.4]{l1ball.pdf} \caption{An illustration of the $\ell_1$ ball in $\Re^3$. It is decomposed into $2^d=8$ types of simplices.} \label{fig:l1ball} \end{figure} \paragraph{Remark.} Roughly speaking, we partition the $\ell_1$ ball centered at the origin into $2^d$ simplices $\Delta_1,\ldots, \Delta_{2^d}$ (see Figure~\ref{fig:l1ball}) and build a separate data structure for each simplex $\Delta_i$: letting $\mathcal{B}_i=\{b+\Delta_i\mid b\in B\}$, we report all pairs $(a,b)\in A\times B$ such that $a\in b+\Delta_i$. If $\lVert a-b \rVert_1\leq 1$, then $a$ lies in exactly one translate $b+\Delta_i$. We map each simplex to a rectangle in $\Re^{d+1}$ and use the previous data structure. \medskip Using Theorem~\ref{the:rectangle-point}, we obtain: \begin{theorem} \label{the:pyramid-point} Let $A, B$ be two sets of points in $\Re^d$, where $d\geq 1$ is a constant, with $|A|+|B|=n$. A data structure of $\O(n)$ size can be built in $\O(n)$ time and updated in $\O(1)$ amortized time, while supporting $\O(1)$-delay enumeration of similarity join under the $\ell_{1}$ metric. \end{theorem}
\section{Triangle Similarity Join} \label{sec:trianglejoin} In this section we propose data structures for approximate triangle join queries. Our results can be extended to $m$-clique join queries for constant $m$; for simplicity, we describe them for the triangle join ($m=3$). Let $A, B, S\subset \Re^d$ be three sets of points with $|A|+|B|+|S|=n$. We first consider the case where the distance threshold $r$ is fixed, and then lift this assumption.
\subsection{Fixed distance threshold} \label{appndx:approxtriangle} The data structure we construct works for any $\ell_p$ norm; for simplicity, we describe it for the $\ell_2$ metric first and extend it to any $\ell_p$ metric at the end. In this subsection, we use $\phi(a,b)=||a-b||_2$.
As in Section~\ref{sec:grid}, let $\mathcal{G}$ be an infinite uniform grid in $\Re^d$ where the side length of each grid cell is $\varepsilon/(2\sqrt{d})$, so its diameter is $\varepsilon/2$ (under the $\ell_2$ distance). For each grid cell $c\in \mathcal{G}$, we store $A_c=A\cap c$, $B_c=B\cap c$, and $S_c=S\cap c$. Furthermore, we store a counter $m_c=|\{(b,s)\in B\times S\mid \exists c_1, c_2 \in \mathcal{G} \text{ s.t. } b\in c_1, s\in c_2, \phi(c_1,c_2)\leq 1, \phi(c_1,c)\leq 1, \phi(c_2,c)\leq 1\}|$, i.e., the number of pairs $(b,s)$ whose cells, together with the cell $c$, are pairwise within distance $1$. Let $\mathcal{C}_{NE}$ be the set of non-empty cells, i.e., $\mathcal{C}_{NE}=\{c\in \mathcal{G}\mid A_c\cup B_c\cup S_c\neq \emptyset\}$. A grid cell $c\in \mathcal{C}_{NE}$ is active if and only if $A_c\neq \emptyset$ and $m_c>0$. Let $\mathcal{C}\subseteq \mathcal{C}_{NE}$ be the set of active grid cells. We construct a balanced search tree to answer efficiently whether a cell is already in $\mathcal{C}$; similarly, we create a balanced search tree for the cells in $\mathcal{C}_{NE}$. Our data structure uses $O(n)$ space. We first describe the updates. Assume that we insert a point $a\in A$. If $a$ lies in a cell $c\in \mathcal{C}_{NE}$, then we insert $a$ into $A_c$. If $a$ is inserted into a cell that does not exist yet, we create $c$, add it to $\mathcal{C}_{NE}$, and set $A_c=\{a\}$. Then we need to compute the value $m_c$. The algorithm visits every existing cell $c_1\in \mathcal{C}_{NE}$ within distance $1$ of $c$ such that $B_{c_1}\neq \emptyset$ or $S_{c_1}\neq \emptyset$. We need to count all points of $B$ and $S$ that lie in cells within distance $1$ of both $c$ and $c_1$. Notice that these cells lie inside a rectangle $R$: indeed, if $R_1$ is the hypercube of radius $1$ around $c$ and $R_2$ is the hypercube of radius $1$ around $c_1$, then $R=R_1\cap R_2$ is a hyper-rectangle. We visit all grid cells inside $R$ and find the number of points of $B, S$ in $R$; let $m_B=|B\cap R|$ and $m_S=|S\cap R|$. We update $m_c$ to $m_c+|B_{c_1}|\cdot m_S+|S_{c_1}|\cdot m_B$. In the end, it is easy to verify that $m_c$ is positive if and only if a qualifying pair $(b,s)$ exists (each pair is counted a constant number of times, once through its $B$-cell and once through its $S$-cell, which does not affect the activeness test). Next, assume that we remove a point $a\in A$, and let $c$ be the cell of $a$. We remove $a$ from $A_c$; if $A_c=\emptyset$ and $c\in \mathcal{C}$, then we remove $c$ from $\mathcal{C}$, and if $A_c=B_c=S_c=\emptyset$, we remove $c$ from $\mathcal{C}_{NE}$. Since there are $O(\varepsilon^{-d})$ grid cells in a hypercube of radius $1$, we need $O(\varepsilon^{-2d}\log n)$ time to insert $a$ and $O(\varepsilon^{-d}\log n)$ time to remove $a$. We continue with the update of a point $b\in B$ (the update of a point $s\in S$ is symmetric). Assume that we add $b\in B$ to a cell $c$ (if $c$ does not exist, we create it) and insert it into $B_c$. The goal is to update all counters $m_{c_1}$ of cells within distance $1$ of $c$. We visit every cell $c_1\in \mathcal{C}_{NE}$ within distance $1$ of $c$ and update the value of $m_{c_1}$: in particular, we need to count the number of points of $S$ that lie in cells within distance $1$ of both $c$ and $c_1$. This is similar to what we did for the insertion of $a$, so we can compute this count, $m_S$, by visiting all grid cells within distance $1$ of both $c$ and $c_1$. Then, we update $m_{c_1}\leftarrow m_{c_1}+m_S$; if $A_{c_1}\neq\emptyset$ and $c_1$ turns from inactive to active, we add $c_1$ to $\mathcal{C}$.
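As a sanity check, the following runnable Python toy (our own, with $d=2$ and $\varepsilon=0.5$ as assumptions) recomputes the counter $m_c$ of a cell from scratch; the actual data structure maintains it incrementally instead.
\begin{verbatim}
# Toy recomputation of m_c (assumptions: d = 2, eps = 0.5, our own naming).
import math
from collections import defaultdict

SIDE = 0.5 / (2 * math.sqrt(2))               # eps / (2*sqrt(d))
cell = lambda p: (int(p[0] // SIDE), int(p[1] // SIDE))

def min_dist(c1, c2):
    # minimum l2 distance between two grid cells
    return math.hypot(*(max(abs(x - y) - 1, 0) * SIDE
                        for x, y in zip(c1, c2)))

def m_counter(c, B, S):
    """Number of (b, s) pairs whose cells, with c, are pairwise within 1."""
    Bc, Sc = defaultdict(int), defaultdict(int)
    for b in B: Bc[cell(b)] += 1
    for s in S: Sc[cell(s)] += 1
    return sum(nb * ns
               for c1, nb in Bc.items() if min_dist(c, c1) <= 1
               for c2, ns in Sc.items()
               if min_dist(c, c2) <= 1 and min_dist(c1, c2) <= 1)

B = [(0.1, 0.1), (5.0, 5.0)]
S = [(0.2, 0.3)]
print(m_counter(cell((0.0, 0.0)), B, S))      # 1: only the nearby pair counts
\end{verbatim}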
Finally, assume that we remove a point $b\in B$ from a cell $c$. We remove $b$ from $B_c$ and, again, visit all cells $c_1$ within distance $1$ and update their counters by $m_{c_1}\leftarrow m_{c_1}-m_S$ (where $m_S$ is computed as in the previous case). If $c_1\in \mathcal{C}$ and $m_{c_1}=0$, we remove $c_1$ from $\mathcal{C}$. In the end, if $A_c=B_c=S_c=\emptyset$, we remove $c$ from $\mathcal{C}_{NE}$. Again, it is easy to observe that the counters $m_c$ remain correct for all $c\in \mathcal{C}_{NE}$, and hence $\mathcal{C}$ is the correct set of active cells. We need $O(\varepsilon^{-2d}\log n)$ time to insert or remove a point of $B$. Next, we describe the enumeration procedure. For each $c\in \mathcal{C}$, we consider every $a\in A_c$. We visit each cell $c_1\in \mathcal{C}_{NE}$ within distance $1$ of $c$, and then each cell $c_2\in \mathcal{C}_{NE}$ within distance $1$ of both $c_1$ and $c$, and we report (if any) the triples in $\{a\}\times B_{c_1}\times S_{c_2}$. Since the ordered pairs $(c_1,c_2)$ are visited in both orders, all combinations of a $B$-cell and an $S$-cell are covered, and each triple of points is enumerated exactly once. We now show the correctness of our method. Let $(a, b, s)\in A\times B\times S$ be a triple with pairwise distances at most $1$, and let $a\in c_1, b\in c_2, s\in c_3$ for $c_1, c_2, c_3\in \mathcal{C}_{NE}$. Notice that $\phi(c_1,c_2),\phi(c_1,c_3), \phi(c_2,c_3)\leq 1$. From the update procedure we have $m_{c_1}>0$, hence $c_1\in \mathcal{C}$. The algorithm visits $c_1$; it also considers $c_2$ since $\phi(c_1,c_2)\leq 1$, and then $c_3$ since $\phi(c_1,c_3)\leq 1$ and $\phi(c_2,c_3)\leq 1$. Hence our enumeration procedure returns the triple $(a,b,s)$. Furthermore, it is straightforward to see that (i) our enumeration algorithm never reports a triple $(a,b,s)$ in which some pairwise distance is greater than $1+\varepsilon$, and (ii) whenever $c\in \mathcal{C}$, there is always a triple $(a\in A_c, b\in B, s\in S)$ to report. Indeed, since our enumeration algorithm reports points that lie in cells of pairwise distance at most $1$, any reported triple $(a,b,s)$ satisfies $\phi(a,b)\leq \phi(c_1,c_2)+\operatorname{diam}(c_1)+\operatorname{diam}(c_2)\leq 1+\varepsilon$, $\phi(a,s)\leq 1+\varepsilon$, and $\phi(b,s)\leq 1+\varepsilon$. The delay is $O(\varepsilon^{-2d}\log n)$. The same result can be extended to any $\ell_p$ norm by considering grid cells of side length $\varepsilon/(2d^{1/p})$. \begin{theorem} Let $A, B, S$ be three sets of points in $\mathbb{R}^d$, with $|A| + |B| +|S| = n$. A data structure of $O(n)$ space can be constructed in $O(n\varepsilon^{-2d}\log n)$ time and updated in $O(\varepsilon^{-2d}\log n)$ time, while supporting $\varepsilon$-approximate enumeration of triangle similarity join queries under any $\ell_p$ metric with $O(\varepsilon^{-2d}\log n)$ delay. \end{theorem} For the $\ell_1$ and $\ell_\infty$ metrics, we can slightly improve the result using a data structure that finds $m_B, m_S$ more efficiently. Skipping the details, we can obtain a data structure of $O(n\log^{d-1} n)$ space that can be built in $O(n\log^{d-1} n + n\cdot\min\{\varepsilon^{-d}\log^{d-1} n, \varepsilon^{-2d}\}\log n)$ time and updated in $O(\min\{\varepsilon^{-d}\log^{d-1} n, \varepsilon^{-2d}\}\log n)$ time, while supporting $\varepsilon$-approximate enumeration of triangle similarity join under the $\ell_1/\ell_\infty$ metrics with $O(\min\{\varepsilon^{-d}\log^{d-1} n, \varepsilon^{-2d}\}\log n)$ delay.
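Continuing the toy above (and reusing its \texttt{cell} and \texttt{min\_dist} helpers, so this is again our own simplification rather than the actual structure), the enumeration step can be sketched as follows; it scans all non-empty cells instead of only the $O(\varepsilon^{-d})$ neighboring ones.
\begin{verbatim}
# Enumeration sketch, chained to the previous toy (reuses cell and min_dist).
from collections import defaultdict

def group(points):
    g = defaultdict(list)
    for p in points:
        g[cell(p)].append(p)
    return g

def enumerate_triples(A_by, B_by, S_by):
    for c, As in A_by.items():
        for c1 in B_by:                      # candidate cells for b
            if min_dist(c, c1) > 1: continue
            for c2 in S_by:                  # candidate cells for s
                if min_dist(c, c2) <= 1 and min_dist(c1, c2) <= 1:
                    for a in As:
                        for b in B_by[c1]:
                            for s in S_by[c2]:
                                yield (a, b, s)

print(list(enumerate_triples(group([(0.0, 0.0)]), group(B), group(S))))
\end{verbatim}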
\subsection{Variable distance threshold} We describe two data structures for this case: one grid-based, using $O(\varepsilon^{-1}n\log n)$ space, and one WSPD-based, using $O(\varepsilon^{-2d}n)$ space. \paragraph{Grid-based data structure.} Assume that the spread $sp(A\cup B\cup S)=n^{O(1)}$ and that all points lie in a box with diagonal length $R$. The high-level idea is to build multiple grids as described in Section~\ref{appndx:approxtriangle}. Recall that for each cell $c\in \mathcal{C}_{NE}$, we need to store the sets $A_c, B_c, S_c$ and the counter $m_c$. However, the definition of $m_c$ depends on the threshold $r$, which is not known upfront in this case; hence we consider multiple thresholds $r_i$. In particular, for each $i\in[0, \log_{1+\varepsilon/4}sp(A\cup B\cup S)]$, we construct a grid for $r_i=\frac{R}{sp(A\cup B\cup S)}(1+\varepsilon/4)^i$ as in Section~\ref{appndx:approxtriangle}. Hence, for each $i$ we maintain the counter $m_c^i$ defined as $m_c^i=|\{(b,s)\in B\times S\mid \exists c_1, c_2 \in \mathcal{G} \text{ s.t. } b\in c_1, s\in c_2, \phi(c_1,c_2)\leq r_i, \phi(c_1,c)\leq r_i, \phi(c_2,c)\leq r_i\}|$,\footnote{For each $i$, we scale everything so that $r_i=1$, as in Section~\ref{appndx:approxtriangle}.} as well as the set of active cells $\mathcal{C}_i$. Notice that there are $O(\varepsilon^{-1}\log n)$ different values of $i$. Upon a point insertion or deletion, the algorithm updates all necessary counters $m_c^i$ and active-cell sets $\mathcal{C}_i$ for all $i$. For an enumeration query, let $r$ be the query threshold; we may assume $\frac{R}{sp(A\cup B\cup S)}\leq r\leq R$, since otherwise the result is trivial. Running a binary search on the values of $i$, we find the smallest $i$ such that $r\leq r_i$. Then, using only the active cells $\mathcal{C}_i$ and the counters $m_c^i$, we enumerate all triangles within distance $r_i$. The delay guarantee is the same as in Section~\ref{appndx:approxtriangle}, namely $O(\varepsilon^{-2d}\log n)$. We conclude with the next theorem. \begin{theorem} Let $A, B, S$ be three sets of points in $\mathbb{R}^d$ for constant $d$, with $n^{O(1)}$ spread and $|A| + |B| +|S| = n$, where $A\cup B\cup S$ lies in a hyper-rectangle with diagonal length $R$. A data structure of $O(\varepsilon^{-1}n\log n)$ space can be constructed in $O(n\varepsilon^{-2d-1}\log^2 n)$ time and updated in $O(\varepsilon^{-2d-1}\log^2 n)$ time, while supporting $\varepsilon$-approximate enumeration of triangle similarity join queries under any $\ell_p$ metric with $O(\varepsilon^{-2d}\log n)$ delay, for any query distance threshold $r$. \end{theorem}
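The threshold levels and the query-time binary search can be sketched as follows (a runnable Python illustration with assumed values of $R$, the spread, and $\varepsilon$; the variable names are ours).
\begin{verbatim}
# Sketch: geometric threshold levels r_i and the binary search for the
# smallest r_i >= r (illustrative values; names are our own).
import bisect

def levels(R, spread, eps):
    r, out = R / spread, []
    while r <= R * (1 + eps / 4):      # r_i = (R/spread) * (1 + eps/4)^i
        out.append(r)
        r *= 1 + eps / 4
    return out

rs = levels(R=100.0, spread=1000.0, eps=0.5)
r_query = 3.7
i = bisect.bisect_left(rs, r_query)    # smallest i with r_query <= r_i
print(i, rs[i])
\end{verbatim}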
\paragraph{WSPD-based data structure.} We describe the main idea here. Assume that $sp(A\cup B\cup S)=n^{O(1)}$. Let $\mathcal{W}_{A,B}$ be the WSPD construction on $A, B$ as in Section~\ref{sec:wspd}; similarly, we consider $\mathcal{W}_{A,S}$ and $\mathcal{W}_{B,S}$. For each pair $(A_i, B_i)\in \mathcal{W}_{A,B}$, let $\phi(\square_i, \boxplus_i)=r_i$, let $c_i$ be the center of $\square_i$, and let $c_i'$ be the center of $\boxplus_i$. Let $\mathcal{L}_i$ be the lune (intersection) of the two spheres of radius $r_i$ centered at $c_i$ and $c_i'$. We query a quad tree $\mathcal{T}_S$ on the points of $S$ with $\mathcal{L}_i$ and obtain $O(\varepsilon^{-d})$ quad tree boxes. We then construct the triples $\mathcal{W}_{A,B}'=\{(A_1,B_1,S_1),\ldots, (A_{\xi}, B_{\xi}, S_{\xi})\}$, where $\xi=O(\varepsilon^{-2d}n)$. Similarly, we construct $\mathcal{W}_{A,S}'$ and $\mathcal{W}_{B,S}'$, and let $\mathcal{W}'=\mathcal{W}_{A,B}'\cup \mathcal{W}_{A,S}'\cup \mathcal{W}_{B,S}'$. We can show that each triple $(a,b,s)\in A\times B \times S$ is covered by at least one triple $(A_i, B_i, S_i)$ of $\mathcal{W}'$. In particular, let $(a,b,s)\in A\times B\times S$ be a triple such that (without loss of generality) $\phi(a,b)\geq \phi(a,s)\geq \phi(b,s)$. From the definition of the WSPD $\mathcal{W}_{A,B}$, there exists a unique pair $(A_i, B_i)$ such that $a\in A_i$ and $b\in B_i$. Since $\phi(a,s), \phi(b,s)\leq\phi(a,b)$, the point $s$ must lie in the lune $\mathcal{L}_i$, so there exists a triple $(A_i, B_i, S_i)\in \mathcal{W}_{A,B}'\subseteq \mathcal{W}'$ such that $a\in A_i, b\in B_i, s\in S_i$. In addition, due to the bounded spread, each node participates in at most $O(\varepsilon^{-2d}\log n)$ triples of $\mathcal{W}'$ and each point belongs to at most $O(\varepsilon^{-2d}\log^2 n)$ triples of $\mathcal{W}'$. Hence, each update takes $\O(\varepsilon^{-2d})$ time. Using a tree $\mathcal{Z}$ as in Section~\ref{sec:wspd} and following a deduplication method as in Section~\ref{sec:highd}, we can execute $\varepsilon$-approximate enumeration of all triples $(a,b,s)$ within distance $r$ with $\O(\varepsilon^{-2d})$ delay. \begin{theorem} Let $A, B, S$ be three sets of points in $\mathbb{R}^d$ for constant $d$, with $n^{O(1)}$ spread and $|A| + |B| + |S| = n$. A data structure of $O(\varepsilon^{-2d}n)$ space can be built in $\O(\varepsilon^{-2d}n)$ time and updated in $\O(\varepsilon^{-2d})$ time, while supporting $\varepsilon$-approximate enumeration of triangle similarity join under any $\ell_p$ metric with $\O(\varepsilon^{-2d})$ delay, for any query distance threshold $r$. \end{theorem} \input{lsh} \subsection{Approximate Similarity Join under $\ell_2$ metric} \label{appendix:l2} In this subsection, we use the \emph{BBD tree} to derive our new index. We start by introducing BBD trees. \paragraph{BBD tree.} A balanced box-decomposition (BBD for short) tree~\cite{arya2000approximate, arya1998optimal} is a variant of the \emph{quad tree}~\cite{har2011geometric, samet1989spatial}. A BBD tree $\mathcal{T}$ on a set $P$ of $n$ points in $\mathbb{R}^d$ is a balanced binary tree of size $O(n)$ and height $O(\log n)$ with $n$ leaves; it can be constructed in $O(n\log n)$ time. Every box has bounded aspect ratio and, with every $2d$ levels of descent in the tree, the size of the associated cells decreases by at least a factor of $2$, where the size of a cell is defined as the length of its longest side. Each node $v$ is associated with a box in $\mathbb{R}^d$ or the set-theoretic difference of two nested boxes $\square_v^O, \square_v^I$ (a rectangular annulus), with $\square_v^I$ possibly empty; hence each node $v$ is associated with the annular region $\square_v=\square_v^O\setminus \square_v^I$. Given a sphere $\mathcal{B}$ of radius $1$, the BBD tree of~\cite{arya2000approximate, arya1998optimal} returns $O(\log n + \varepsilon^{1-d})$ nodes such that the union of their regions completely covers $\mathcal{B}\cap P$ and might also cover parts of $(1+\varepsilon)\mathcal{B}$. Given $x\in \Re^d$, $r\geq 0$, and $\varepsilon>0$, the BBD tree $\mathcal{T}$ can answer an approximate spherical range query with the following guarantees: \begin{itemize} \item all points of $P$ within distance $r$ from $x$ are returned; \item no point of $P$ farther than $(1+\varepsilon)r$ from $x$ is returned. \end{itemize} Points at distance between $r$ and $(1+\varepsilon)r$ from $x$ may or may not be reported. The time complexity of such a query is $O(\log n + \varepsilon^{1-d}+k)$, where $k$ is the output size. In~\cite{eppstein2005skip, mount2010dynamic}, dynamic versions of the BBD tree were proposed, with $\O(1)$ time per update.
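For intuition, the following Python stand-in (a brute-force scan, not a BBD tree) honors the same query contract; only these guarantees matter for the constructions below.
\begin{verbatim}
# Brute-force stand-in honoring the approximate range query contract:
# every point within r is returned, nothing beyond (1+eps)*r, and points
# in the slack band (r, (1+eps)*r] may go either way.
import math, random

def approx_range_query(points, x, r, eps):
    out = []
    for p in points:
        d = math.dist(p, x)
        if d <= r or (d <= (1 + eps) * r and random.random() < 0.5):
            out.append(p)
    return out

pts = [(0.0, 0.0), (1.05, 0.0), (2.0, 0.0)]
# always contains (0,0); never (2,0); (1.05,0) may or may not appear
print(approx_range_query(pts, (0.0, 0.0), r=1.0, eps=0.1))
\end{verbatim}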
\paragraph{Index.} We build two dynamic BBD trees $\mathcal{T}_A$ and $\mathcal{T}_B$ on the points of $A$ and $B$, respectively. The dynamic property of $\mathcal{T}_A$ is obtained by applying the transformation described in Section~\ref{sec:prelim}: we partition the points of $A$ into $m=\lceil\log |A|\rceil$ groups $A^{(1)}, A^{(2)}, \ldots, A^{(m)}$ and construct a static BBD tree $\mathcal{T}^{(i)}$ for each $i=1,2,\ldots, m$. The construction of $\mathcal{T}_B$ uses the standard techniques of~\cite{eppstein2005skip, mount2010dynamic}, which support $\O(1)$ amortized update time. These two indexes use $\O(n)$ space and can be built in $\O(n)$ time. \paragraph{Tripartite graph representation.} Next, we show how to define a tripartite graph representation $G =(A \cup C \cup B, E_1 \cup E_2)$. For a node $u$ of $\mathcal{T}_A$, let $p(u)$ be the parent of $u$, let $\square_u$ be the box associated with $u$, and let $\textrm{size}(\square_u)$ be the longest side of the box $\square_u$. Define \[C =\{u\in \mathcal{T}_A \mid \textrm{size}(\square_u)\leq \varepsilon/\sqrt{d}< \textrm{size}(\square_{p(u)})\},\] i.e., the set of nodes whose size is at most $\varepsilon/\sqrt{d}$ but whose parents have size strictly greater than $\varepsilon/\sqrt{d}$. Notice that $C$ defines a grid on the point set $A$ with sufficiently small cells; the BBD tree is used, in our case, to efficiently derive the cells of $C$ that are contained in a query ball. The high-level intuition behind choosing these nodes of $\mathcal{T}_A$ as $C$ is the following. When a ball is inserted into or deleted from $B$, we need to find all nodes of $C$ whose union covers the ball (because of the error $\varepsilon$ of the BBD tree, their union can also contain area outside the ball). The nodes of $C$ are small enough that the area covered outside the ball is bounded, and large enough that, for any unit ball $b \in B$, the number of nodes of $C$ needed to cover $b$ is bounded. Consider an arbitrary node $u \in C$. For any point $a \in A$, the edge $(a,u) \in E_1$ exists if and only if $a \in \square_u$. For a point $b \in B$, however, the edge $(b,u)$ is not determined deterministically; instead, it obeys the following rules, due to the approximate nature of range queries over the BBD tree: if $\lVert x-b \rVert_2 \le (1+\varepsilon)^2$ for some corner $x$ of $\square_u$, then $(b,u)$ exists; if $\lVert x-b \rVert_2 > (1+\varepsilon)^3$ for every corner $x$ of $\square_u$, then $(b,u)$ does not exist; otherwise, $(b,u)$ may or may not exist. \paragraph{Preprocessing.} Let $\mathcal{Q}_\varepsilon(x,1+\varepsilon)$ denote an approximate range query on a BBD tree, where the query range is the ball of center $x$ and radius $1+\varepsilon$, and the error parameter is $\varepsilon$. We assume that $\mathcal{Q}_\varepsilon(x,1+\varepsilon)$ returns the set of nodes of the BBD tree whose union covers the ball $\mathcal{B}(x,1+\varepsilon)$. We define the active/inactive status of the nodes of $C$: a node $u \in C$ is {\em active} if $\square_u \cap A\neq \emptyset$ and there exists a corner $x$ of $\square_u$ such that the query $\mathcal{Q}_\varepsilon(x, 1+\varepsilon)$ on $\mathcal{T}_B$ returns at least one point of $B$; it is {\em inactive} otherwise. Notice that if a node $u\in C$ is active, then $\square_u$ contains at least one point of $A$, and there exist a corner $x$ of $\square_u$ and a point $b\in B$ with $\lVert x-b\rVert_2\leq (1+\varepsilon)^2$. All active nodes are maintained in $\mathcal{C}$.
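In code, the activity test can be sketched as follows (our own helper names; \texttt{range\_query} is any procedure honoring the contract illustrated earlier, such as the brute-force stand-in).
\begin{verbatim}
# Sketch of the activity test for a node u of C. `box` is a list of (lo, hi)
# intervals, one per dimension; `range_query` obeys the approximate contract.
from itertools import product

def corners(box):
    return product(*box)   # all 2^d corners of the box

def is_active(box, points_A_in_box, B, eps, range_query):
    if not points_A_in_box:
        return False
    return any(range_query(B, x, 1 + eps, eps) for x in corners(box))
\end{verbatim}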
\begin{lemma} \label{lem:bbd-active} For any node $u \in C$: if $u$ is active, then for any point $a \in A \cap \square_u$ there exists a point $b \in B$ such that $\lVert a-b\rVert_2\leq (1+\varepsilon)^2 +\varepsilon$; otherwise, there exists no pair of points $(a\in \square_u \cap A, b \in B)$ such that $\lVert a-b\rVert_2 \leq 1$. \end{lemma} \begin{proof} If $u$ is active, there exists a corner $x$ of $\square_u$ such that $\mathcal{Q}_\varepsilon(x, 1+\varepsilon)\neq \emptyset$. Let $b \in \mathcal{Q}_\varepsilon(x, 1+\varepsilon)$ be an arbitrary point; then $\lVert x-b\rVert_2 \le (1+\varepsilon)^2$. As $\textrm{size}(\square_u) \le \varepsilon/\sqrt{d}$, the distance between any pair of points inside $\square_u$ is at most $\varepsilon$. Then, for any point $a \in \square_u \cap A$, $\lVert a-b\rVert_2 \le \lVert x-b\rVert_2 + \lVert x-a\rVert_2 \le (1+\varepsilon)^2 + \varepsilon$. If $u$ is inactive, assume for contradiction that there is a pair of points $a\in \square_u\cap A, b\in B$ with $\lVert a-b\rVert_2 \leq 1$. As $\textrm{size}(\square_u)\leq \varepsilon/\sqrt{d}$, any pair of points in $\square_u$ has distance at most $\varepsilon$. Let $x$ be an arbitrary corner of $\square_u$; then $\lVert x-b\rVert_2 \le \lVert a-b\rVert_2 + \lVert x-a\rVert_2 \leq 1+\varepsilon$, contradicting the fact that $u$ is inactive, i.e., that no corner of $\square_u$ has a neighbor in $B$ within distance $1+\varepsilon$. \end{proof} Note that the status of $u$ can be decided in $O(\varepsilon^{1-d} \log n)$ time, so this step takes $O(n \varepsilon^{1-d}\log n)$ time in total. \subsubsection{Update} Note that the dynamic BBD tree $\mathcal{T}_B$ can be updated in $O(\log n)$ time, following the standard techniques in~\cite{eppstein2005skip, mount2010dynamic}. Next, we focus on the updates of $\mathcal{T}_A$ and $G$. In the first case, when an update happens to a point $a$ of $A$, we follow the standard techniques in \cite{bentley1980decomposable, erickson2011static}. \paragraph{Insertion of $a$.} Let $i$ be the smallest index such that $A^{(i)} =\emptyset$. We delete the BBD trees of all groups $j \leq i$ and create a new BBD tree $\mathcal{T}^{(i)}$ on the points of $\{a\} \cup (\bigcup_{j\leq i} A^{(j)})$. Following the observations in \cite{bentley1980decomposable, erickson2011static}, the amortized cost is $O(\varepsilon^{1-d}\log n)$. \paragraph{Deletion of $a$.} Assume $a \in A^{(i)}$, and let $u$ be the leaf node of $\mathcal{T}^{(i)}$ containing $a$. We simply remove $a$ from $u$. Let $v \in C$ be the node lying on the path from the root to $u$. If $v$ is active and $\square_v \cap A^{(i)} = \emptyset$, i.e., there is no point other than $a$ lying inside $\square_v$, we remove $v$ from $\mathcal{C}$. Note that $\mathcal{T}^{(i)}$ is static, so no operations are performed on the tree structure. When the number of deleted points of $A$ exceeds the number of remaining points of $A$, we stop updating the existing index and rebuild the set of $O(\lceil \log |A| \rceil)$ static BBD trees from scratch. As this index can be constructed in $O(n \cdot \varepsilon^{1-d}\log n)$ time, the amortized cost is $O(\varepsilon^{1-d}\log n)$.
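The group maintenance for $A$ is the standard logarithmic method of Bentley and Saxe~\cite{bentley1980decomposable}; the following runnable Python sketch (with a placeholder \texttt{build} standing in for the static BBD-tree construction, an assumption of ours) shows the binary-counter rebuild pattern.
\begin{verbatim}
# Logarithmic-method sketch: groups[i] is None or a static structure on
# 2^i points; an insertion merges the full prefix of groups into one rebuild.
def insert_logarithmic(groups, a, build):
    buffer_, i = [a], 0
    while i < len(groups) and groups[i] is not None:
        buffer_ += groups[i].points   # assumption: structures expose .points
        groups[i] = None
        i += 1
    if i == len(groups):
        groups.append(None)
    groups[i] = build(buffer_)        # one static BBD tree on 2^i points
    return groups

class Static:                          # placeholder for a static BBD tree
    def __init__(self, pts): self.points = list(pts)

gs = []
for x in range(5):
    gs = insert_logarithmic(gs, x, Static)
print([g.points if g else None for g in gs])  # sizes follow a binary counter
\end{verbatim}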
\medskip In the second case, when an update comes from $B$, say a point $b$, we perform the following procedure for each non-empty group $A^{(i)}$. The high-level idea is to find all nodes of $C$ whose status could be changed by $b$, and to update them if necessary. Let $U$ be the set of nodes returned by issuing the range query $\mathcal{Q}_\varepsilon(b,(1+\varepsilon)^2)$ to $\mathcal{T}^{(i)}$. The following lemma states that every node of $U$ is located ``above'' the nodes of $C$. \begin{lemma} \label{lem:above} For each $u\in U$, $\textrm{size}(\square_{p(u)})\geq \varepsilon/\sqrt{d}$. \end{lemma} \begin{proof} Let $v\in C$ be a node visited by the search procedure $\mathcal{Q}_\varepsilon(b, (1+\varepsilon)^2)$ in $\mathcal{T}^{(i)}$. We argue that the search algorithm does not continue searching in the children of $v$. The search procedure of \cite{arya2000approximate, arya1998optimal} first checks whether $\square_v\subset \mathcal{B}(b, (1+\varepsilon)^3)$; if so, the search procedure does not consider the children of $v$, and $v\in U$. If $\square_v\not\subset \mathcal{B}(b, (1+\varepsilon)^3)$, it then checks whether $\square_v\cap \mathcal{B}(b, (1+\varepsilon)^2) = \emptyset$; if so, it does not consider the children of $v$, and $v\notin U$. Hence, the only remaining case to consider is when $\square_v\not\subset \mathcal{B}(b, (1+\varepsilon)^3)$ and $\square_v\cap \mathcal{B}(b, (1+\varepsilon)^2) \neq \emptyset$. However, this case cannot happen because $\textrm{size}(\square_v)\leq \varepsilon/\sqrt{d}$: any pair of points in $\square_v$ has distance at most $\varepsilon$, while the gap between $\mathcal{B}(b, (1+\varepsilon)^2)$ and the boundary of $\mathcal{B}(b, (1+\varepsilon)^3)$ is more than $\varepsilon$, so if $\square_v\not\subset \mathcal{B}(b, (1+\varepsilon)^3)$, then $\square_v\cap \mathcal{B}(b, (1+\varepsilon)^2) = \emptyset$. \end{proof} For each $u\in U$, we find all nodes of $C$ that reside in the subtree of $\mathcal{T}^{(i)}$ rooted at $u$, denoted by $V_u$, and we check the status of each node $v \in \bigcup_{u \in U} V_u$. To see why this procedure is correct, it suffices to show that every node of $C$ whose status may be changed by the update of $b$ is included in $\bigcup_{u \in U} V_u$. In particular, it suffices to show that, when a point $b$ is inserted into or deleted from $B$, we visit every node $u\in C$ for which there exists a corner $x$ of $\square_u$ with $\lVert x-b\rVert_2\leq (1+\varepsilon)^2$. Let $u$ be such a node of $C$. Notice that $\square_u\cap \mathcal{B}(b, (1+\varepsilon)^2)\neq \emptyset$; as $\textrm{size}(\square_u) \le \varepsilon/\sqrt{d}$, we observe that $\square_u \subset \mathcal{B}(b, (1+\varepsilon)^3)$. Hence the search procedure for the query $\mathcal{Q}_\varepsilon(b, (1+\varepsilon)^2)$ on $\mathcal{T}^{(i)}$ always finds a node $w$ lying on the path from the root to $u$ such that $\square_w\subseteq \mathcal{B}(b, (1+\varepsilon)^3)$. Notice that $\square_u\subseteq \square_w$; hence, the procedure always visits node $u$. \paragraph{Insertion of $b$.} For each node $v\in \bigcup_{u\in U} V_u$: if $\square_v \cap A^{(i)} \neq \emptyset$ and $v$ is inactive, we compute the distances between all corners of $\square_v$ and $b$; if any corner has distance at most $1+\varepsilon$ from $b$, we add $v$ to $\mathcal{C}$. \paragraph{Deletion of $b$.} For each node $v\in \bigcup_{u\in U} V_u$: if $v$ is active and $\square_v \cap A^{(i)} \neq \emptyset$, we issue a range query $\mathcal{Q}_\varepsilon(x, 1+\varepsilon)$ to $\mathcal{T}_B$ for every corner $x$ of $\square_v$; if all queries return empty results, we remove $v$ from $\mathcal{C}$.
\medskip We next analyze the complexity of this update procedure for $b$. A first observation, on the number of nodes we check, is stated in Lemma~\ref{lem:numnodes}. \begin{lemma} \label{lem:numnodes} $|\bigcup_{u\in U} V_u|=O\left(\varepsilon^{-2d}(\log 2^i + \varepsilon^{1-d})\right)$. \end{lemma} \begin{proof} From \cite{arya2000approximate, arya1998optimal}, $|U|=O(\log 2^i +\varepsilon^{1-d})$. For each node $u\in U$, we need to bound $|V_u|$. Notice that $\textrm{size}(\square_u)\leq 2(1+\varepsilon)^3$. Recall that every $2d$ levels of the BBD tree $\mathcal{T}^{(i)}$, the size of the cells decreases by at least a factor of $2$. Hence, $2d\log(\frac{2(1+\varepsilon)^3\sqrt{d}}{\varepsilon})$ levels below the level of $u$, the size of the cells is at most $\varepsilon/\sqrt{d}$. So, $|V_u|=O\left(2^{2d\log(\frac{2(1+\varepsilon)^3\sqrt{d}}{\varepsilon})}\right)=O\left((\frac{2(1+\varepsilon)^3\sqrt{d}}{\varepsilon})^{2d}\right)$. \end{proof} The range query over $\mathcal{T}^{(i)}$ takes $O(\log 2^i + \varepsilon^{1-d})$ time, following~\cite{arya2000approximate, arya1998optimal}. Over all groups, it takes $O(\log n(\log n+\varepsilon^{1-d}))$ time to find all nodes in $U$. By Lemma~\ref{lem:numnodes}, the insertion time is $O(\varepsilon^{-2d}\log n(\log n+\varepsilon^{1-d}))$. Moreover, each range query over $\mathcal{T}_B$ takes $O(\varepsilon^{1-d}\log n)$ time. Again by Lemma~\ref{lem:numnodes}, we execute $O(\varepsilon^{-2d}\log n(\log n+\varepsilon^{1-d}))$ such queries when deleting a point of $B$, so the overall update time for a deletion is $O(\varepsilon^{1-3d}\log^2 n(\log n+\varepsilon^{1-d}))$. \subsubsection{Enumeration} We visit each group $A^{(i)}$ with $A^{(i)} \neq \emptyset$. For any node $u \in \mathcal{C}$, we invoke the following procedure. For each point $a \in A^{(i)} \cap \square_u$, let $B_a$ be the result of issuing a ball query $\mathcal{Q}_\varepsilon(a, (1+\varepsilon)^3)$ to $\mathcal{T}_B$. We report $(a,b)$ for all $b \in B_a$. Each range reporting query over $\mathcal{T}_B$ takes $O(\varepsilon^{1-d}\log n)$ time to locate the output, after which all results can be reported with $O(1)$ delay. The delay of this enumeration phase is therefore $O(\varepsilon^{1-d}\log n)$. \begin{lemma} \label{lem:bbd-approximate} The index supports $(4\varepsilon+6\varepsilon^2+4\varepsilon^3 + \varepsilon^4)$-approximate enumeration. \end{lemma} \begin{proof} It suffices to show that all pairs of points within distance $1$ are reported, and that no pair of points at distance strictly larger than $(1+\varepsilon)^4$ is reported. Consider an arbitrary pair of points $(a, b) \in A \times B$ with $\lVert a-b \rVert_2 \leq 1$. Let $u \in C$ be the unique node such that $a \in \square_u$. Observe that there exists a corner $x$ of $\square_u$ such that $\lVert x-b \rVert_2 \le \lVert a-b \rVert_2 + \lVert x-a \rVert_2 \leq 1+\varepsilon$, thus $u$ is active. So the enumeration query visits node $u$, and the query $\mathcal{Q}_\varepsilon(a,(1+\varepsilon)^3)$ in $\mathcal{T}_B$ finds the point $b$. Moreover, whenever a node $u\in C$ is active, we always return at least one pair $(a\in \square_u, b\in B)$: there exist a corner $x$ of $\square_u$ and a point $b$ with $\lVert x-b\rVert_2\leq (1+\varepsilon)^2$, hence $\lVert a-b\rVert_2\leq (1+\varepsilon)^2+\varepsilon<(1+\varepsilon)^3$. 
Finally, by definition, a reporting query $\mathcal{Q}_\varepsilon(a,(1+\varepsilon)^3)$ in $\mathcal{T}_B$ never returns a point $b$ with $\lVert a-b\rVert_2>(1+\varepsilon)^4$. \end{proof} \section{Conclusion} In this paper, we consider how to use indexes for enumerating answers to similarity join queries; these indexes also support data updates. We present several efficient indexes for similarity join under the $\ell_1$, $\ell_2$, and $\ell_\infty$ metrics (some of our results can be extended to other metrics). Note that our indexes provide worst-case delay guarantees for arbitrary updates and arbitrary input data. In practice, most real-world update sequences are ``nice'' and far from these worst-case scenarios, and the input points of the two sets are often correlated or follow certain parameterized distributions. A more fine-grained analysis of the intrinsic difficulty of dynamic enumeration over similarity joins is interesting but still open. Similar instance-dependent analysis has been considered in~\cite{wang2020maintaining}. Another interesting question is to consider variants of similarity join queries under more general metrics (for example, metrics of constant doubling dimension) and other distance functions (for example, the cosine function used for measuring similarity when the input vectors are sparse). \section{Framework} \label{sec:framework} All our algorithms for solving similarity join in different settings are based on a common framework. Intuitively, we model the similarity join as a bipartite graph $G' = (A\cup B, E)$, where an edge $(a,b) \in E$ exists if and only if $a$ can be joined with $b$, i.e., $\phi(a,b) \le r$. To obtain a data structure for poly-logarithmic delay enumeration, it suffices to find a compact representation of $G'$ with a set $\mathcal{F}=\{(A_1, B_1), (A_2, B_2),\ldots, (A_u,B_u)\}$ of edge-disjoint bi-cliques such that (i) $A_i\subseteq A$, $B_i\subseteq B$ for any $i$, (ii) $E = \bigcup_{i=1}^u A_i \times B_i$, and (iii) $(A_i \times B_i) \cap (A_j \times B_j) = \emptyset$ for any $i \neq j$. It remains to store and maintain these bi-cliques efficiently under updates. Take equi-join as an example. After representing the input tuples as sets of vertices $A$ and $B$, there is an edge between $a \in A$ and $b \in B$ if and only if $a$ and $b$ have the same join attribute value. In this way, the bipartite graph itself is a set of edge-disjoint bi-cliques. However, this bipartite representation is not resilient to updates: inserting or deleting a single tuple may incur $\Omega(n)$ edge changes in the worst case. A simple way around this problem is to introduce a middle layer of vertices $C$, one for each distinct join attribute value, with an edge between $a \in A$ (resp. $b \in B$) and $c \in C$ if and only if the join attribute value of $a$ (resp. $b$) equals $c$. An example is illustrated in Figure~\ref{fig:equi-join}. \begin{figure} \centering \includegraphics[scale = 0.8]{equi-join} \caption{An example of equi-join $R_1(X,C) \Join R_2(C,Y)$.} \label{fig:equi-join} \end{figure} Let $A_c \subseteq A, B_c \subseteq B$ be the sets of vertices in $A,B$ with join attribute value $c$. In our construction, $A_c, B_c$ are exactly the lists of vertices that share an edge with $c$; thus the bi-clique $A_c \times B_c \subseteq E$ is implicitly represented by the cross product of the two neighbor lists of $c$. 
To achieve constant-delay enumeration, we need to maintain the list of {\em active} values in $C$: $c \in C$ is active if and only if $c$ is connected to at least one value in $A$ and one value in $B$. Obviously, we only visit active values in $C$, each emitting at least one result. This improves the original representation in the following ways: (1) it decreases the space complexity, as well as the pre-processing time complexity, from $O(n+k)$ to $O(n)$, where $k$ is the output size; (2) the update time decreases from $O(n)$ to $O(1)$. We extend this simple example to general similarity joins with $r > 0$. A formal description of the framework follows. \begin{definition}[Tripartite Graph Representation] A tripartite graph $G = (A \cup C \cup B, E_1 \cup E_2)$, where $E_1 \subseteq A \times C$ and $E_2 \subseteq C \times B$, is a representation of the similarity join over $A, B$ under the metric $\phi(\cdot)$ with threshold $r$ if for each pair of points $(a, b) \in A \times B$ with $\phi(a,b) \le r$, there exists a unique $c \in C$ such that $(a,c) \in E_1, (c, b) \in E_2$. \end{definition} Note that $G$ is only conceptual and is not stored explicitly. The only extra information we maintain for $G$ is the {\em status} of the vertices in $C$. Let $A_c, B_c$ be the sets of vertices in $A, B$ connected to $c \in C$, respectively. To support constant-delay enumeration, we define a vertex $c \in C$ as {\em active} if it witnesses at least one result, i.e., $A_c \neq \emptyset$ and $B_c \neq \emptyset$, and {\em inactive} otherwise. All active vertices are maintained in $\mathcal{C} \subseteq C$. \begin{lemma}[$\delta$-delay Enumeration] \label{lem:fw-enumerate} In $G = (A \cup C \cup B, E_1 \cup E_2)$ with the set of active vertices of $C$ maintained as $\mathcal{C}$, if for any $c \in C$ the vertices in $A_c$ (resp. $B_c$) can be enumerated with $\delta$ delay, then all join results can be enumerated with $\delta$ delay. \end{lemma} The proof of Lemma~\ref{lem:fw-enumerate} follows directly from the algorithm: we visit each vertex $c \in \mathcal{C}$ and enumerate every pair $(a,b)\in A_c \times B_c$ by a nested loop over $A_c, B_c$. It remains to show how to compute and update the active/inactive status of the vertices of $C$. The proof of Lemma~\ref{lem:status} is given in Appendix~\ref{appendix:framework}. \begin{lemma}[Status Maintenance] \label{lem:status} In $G = (A \cup C \cup B, E_1 \cup E_2)$ with $|C| = O(\eta)$, if (1) for any $c \in C$, whether $A_c = \emptyset$ and whether $B_c = \emptyset$ can be decided in $\delta$ time; (2) for any $c \in C$, $|A_c|$ and $|B_c|$ can be computed in $\zeta$ time; (3) for any $a \in A$ (resp. $b \in B$), $C_a$ (resp. $C_b$) can be reported in $\lambda$ time, then we have two possible options for maintaining $\mathcal{C}$: \begin{itemize} \item $\mathcal{C}$ can be computed in $O(\eta \delta)$ time and updated in $O(\lambda \delta)$ time; \item $\mathcal{C}$ can be computed in $O(\eta \zeta)$ time and updated in $O(\lambda)$ time. \end{itemize} \end{lemma} In the remainder of this paper, we will see different instantiations of this framework. The details might differ slightly from what has been described here, due to the inherent difficulties of similarity joins in different settings. 
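To illustrate the framework on the equi-join example above, the following Python sketch (ours; it assumes hashable join attribute values and in-memory sets) maintains the middle layer together with the active set, and enumerates join results by a nested loop over the neighbor lists of each active value.

\begin{verbatim}
from collections import defaultdict

class EquiJoinIndex:
    def __init__(self):
        self.A_c = defaultdict(set)  # join value c -> tuples of A with value c
        self.B_c = defaultdict(set)
        self.active = set()          # values c with A_c and B_c both non-empty

    def _refresh(self, c):
        if self.A_c[c] and self.B_c[c]:
            self.active.add(c)
        else:
            self.active.discard(c)

    def insert(self, side, c, t):    # O(1) update
        (self.A_c if side == 'A' else self.B_c)[c].add(t)
        self._refresh(c)

    def delete(self, side, c, t):    # O(1) update
        (self.A_c if side == 'A' else self.B_c)[c].discard(t)
        self._refresh(c)

    def enumerate(self):
        # every visited c emits at least one result, so the delay is O(1)
        for c in self.active:
            for a in self.A_c[c]:
                for b in self.B_c[c]:
                    yield (a, b)

# idx = EquiJoinIndex(); idx.insert('A', 5, 'a1'); idx.insert('B', 5, 'b1')
# list(idx.enumerate())  ->  [('a1', 'b1')]
\end{verbatim}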
\section{Introduction} \label{sec:intro} There has been extensive work in many areas, including theoretical computer science, computational geometry, and database systems, on designing efficient dynamic data structures that store a set $\mathcal{D}$ of objects so that certain queries on $\mathcal{D}$ can be answered quickly and objects can be inserted into or deleted from $\mathcal{D}$ dynamically. A query $\mathcal{Q}$ is specified by a set of constraints, and the goal is to report the subset $\mathcal{Q}(\mathcal{D}) \subseteq \mathcal{D}$ of objects that satisfy the constraints, the so-called {\em reporting} or {\em enumeration} queries. More generally, $\mathcal{Q}$ may be specified on $k$-tuples of objects in $\mathcal{D}$, and we return the subset of $\mathcal{D}^k$ whose tuples satisfy $\mathcal{Q}$. One may also ask to return certain statistics of $\mathcal{Q}(\mathcal{D})$ instead of $\mathcal{Q}(\mathcal{D})$ itself, but here we focus on enumeration queries. As an example, $\mathcal{D}$ is a set of points in $\Re^d$ and a query $\mathcal{Q}$ specifies a simple geometric region $\Delta$ (e.g., box, ball, simplex) and asks to return $\mathcal{D} \cap \Delta$, the so-called {\em range-reporting} problem. As another example, $\mathcal{D}$ is again a set of points in $\Re^d$, and $\mathcal{Q}$ now specifies a value $r \ge 0$ and asks to return all pairs $(p,q) \in \mathcal{D} \times \mathcal{D}$ with $\lVert p-q\rVert \le r$. Traditionally, the performance of a data structure has been measured by its size, the time needed to update the data structure when an object is inserted or deleted, and the {\em total time} spent in reporting $\mathcal{Q}(\mathcal{D})$. In some applications, especially in exploratory or interactive data analysis of large datasets, it is desirable to report $\mathcal{Q}(\mathcal{D})$ incrementally, one result at a time, so that users can start exploiting the first answers while waiting for the remaining ones. To offer guarantees on the regularity of the enumeration process, the maximum {\em delay} between the enumeration of two consecutive objects has emerged as an important complexity measure of a data structure~\cite{bagan2007acyclic}. Formally speaking, {\em $\delta$-delay enumeration} requires that the time from the start of the enumeration process to the first result, the time between any consecutive pair of results, and the time from the last result to the termination of the enumeration process each be at most $\delta$. In this paper, we are interested in dynamic data structures for {\em (binary) similarity join} queries, which have numerous applications in data cleaning, data integration, collaborative filtering, etc. Given two sets of points $A$ and $B$ in $\Re^d$, a metric $\phi(\cdot)$, and a distance threshold $r > 0$, the similarity join asks to report all pairs $(a,b) \in A \times B$ with $\phi(a,b) \le r$. Similarity joins have been extensively studied in the database and data mining literature~\cite{chaudhuri2006primitive,jacox2008metric,paredes2009solving,silva2010similarity, wang2012can}, but it is still unclear how to enumerate similarity join results efficiently when the underlying data is updated. Our goal is to design a dynamic data structure that can be efficiently updated when an input point is inserted or deleted, and from which, whenever an enumeration query is issued, all join results can be enumerated with a {\em worst-case delay} guarantee. \subsection{Previous results} We briefly review the previous work on similarity join and related problems. 
See the surveys~\cite{al2016survey, augsten2013similarity, silva2016experimental} for more results. \paragraph{Enumeration of Conjunctive Queries.} Conjunctive queries are built upon the natural join ($\Join$), which is a special case of similarity join with $r = 0$, i.e., two tuples can be joined if and only if they have the same value on the join attributes. Enumeration of conjunctive queries has been extensively studied in the static setting~\cite{bagan2007acyclic,segoufin2013enumerating, carmeli2019enumeration}. In 2017, two simultaneous papers~\cite{berkholz2017answering, idris2017dynamic} initiated the study of dynamic enumeration of conjunctive queries. Both obtained a dichotomy: a linear-size data structure that can be updated in $O(1)$ time while supporting $O(1)$-delay enumeration exists for a conjunctive query if and only if the query is {\em q-hierarchical} (e.g., the degenerate natural join over two tables is q-hierarchical). For non-q-hierarchical queries with input size $n$, however, they showed a lower bound of $\Omega(n^{\frac{1}{2} - \varepsilon})$ on the update time for any small constant $\varepsilon > 0$, if aiming at $O(1)$ delay. This result is very negative, since q-hierarchical queries are a very restricted class; for example, the matrix multiplication query $\pi_{X,Z} R_1(X,Y) \Join R_2(Y,Z)$, where $\pi_{X,Z}$ denotes the projection on attributes $X, Z$, and the triangle join $R_1(X,Y) \Join R_2(Y,Z) \Join R_3(Z,X)$ are already non-q-hierarchical. Later, Kara et al.~\cite{kara2019counting} designed optimal data structures supporting $O(\sqrt{n})$-time maintenance for some selected non-q-hierarchical queries, such as the triangle query. However, it is still unclear whether a data structure with $O(\sqrt{n})$-time maintenance can be obtained for a large class of queries. Some additional trade-off results have been obtained in~\cite{kara2020trade, wang2020maintaining}. \paragraph{Range search.} A widely studied problem related to similarity join is {\em range searching}~\cite{agarwal2017simplex, agarwal1999geometric, bentley1979data, willard1996applications}: preprocess a set $A$ of points in $\Re^d$ with a data structure so that for a query range $\gamma$ (e.g., rectangle, ball, simplex), all points of $A \cap \gamma$ can be reported quickly. A particular instance of range searching, the so-called {\em fixed-radius-neighbor} searching, in which the range is a ball of fixed radius centered at the query point, is especially relevant for similarity joins. For a given metric $\phi$, let $\mathcal{B}_\phi(x,r)$ be the ball of radius $r$ centered at $x$. A similarity join between two sets $A, B$ can be answered by querying $A$ with the ranges $\mathcal{B}_\phi(b,r)$ for all $b \in B$. Notwithstanding this close relationship between range searching and similarity join, the data structures for the former cannot be used directly for the latter: it is too expensive to query $A$ with $\mathcal{B}_\phi(b,r)$ for every $b \in B$ whenever an enumeration query is issued, especially since many such range queries may return an empty set, and it is not clear how to maintain the query results as the input set $A$ changes dynamically. \paragraph{Reporting neighbors.} The problem of reporting neighbors is identical to our problem in the offline setting. In particular, given a set $P$ of $n$ points in $\Re^d$ and a parameter $r$, the goal is to report all pairs of points of $P$ within distance $r$. 
The algorithm proposed in~\cite{lenhof1995sequential} can be modified to solve the problem of reporting neighbors under the $\ell_\infty$ metric in $O(n+k)$ time, where $k$ is the output size. Aiger et al.~\cite{aiger2014reporting} proposed randomized algorithms for reporting neighbors under the $\ell_2$ metric in $O((n+k)\log n)$ time, for constant $d$. \paragraph{Scalable continuous query processing.} There has been some work on scalable {\em continuous query} processing, especially in the context of data streams~\cite{chen2000niagaracq, chandrasekaran2002streaming, wu2004interval} and publish/subscribe~\cite{fabret2001filtering}, where the queries are standing queries and, whenever a new data item arrives, the goal is to report all queries that are affected by the new item~\cite{agarwal2006scalable, agarwal2005monitoring}. In the context of similarity join, one can view $A$ as the data stream and the $\mathcal{B}_\phi(b,r)$ as standing queries, and update the results of the queries as new points of $A$ arrive. There are, however, significant differences with similarity joins: arbitrary deletions are not handled; continuous queries do not need to return previously produced results; and basing enumeration queries on a solution for continuous queries would require accessing previous results, which can be prohibitive if stored explicitly. \subsection{Our results} \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Enumeration} & \multirow{2}{*}{Metric} & \multirow{2}{*}{Properties} & \multicolumn{3}{c|}{Data Structures} \\ \cline{4-6} & & & Space & Update & Delay \\ \hline \multirow{2}{*}{Exact} & $\ell_1/\ell_\infty$ & $r$ is fixed & $\O(n)$ & $\O(1)$ & $\O(1)$ \\ \cline{2-6} & $\ell_2$ & $r$ is fixed & $\O(n)$ & $\O(n^{1-\frac{1}{d+1}})$ & $\O(n^{1-\frac{1}{d+1}})$ \\ \hline \multirow{3}{*}{$\varepsilon$-} & \multirow{3}{*}{$\ell_p$} & $r$ is fixed & $O(n)$ & $\O(\varepsilon^{-d})$ & $\O(\varepsilon^{-d})$ \\ \cline{3-6} \multirow{3}{*}{Approximate} & & $r$ is variable & \multirow{2}{*}{$O(\varepsilon^{-d}n)$} & \multirow{2}{*}{$\O(\varepsilon^{-d})$} & \multirow{2}{*}{$O(1)$} \\ & & spread is $\operatorname{poly}(n)$ & & & \\ \cline{2-6} & $\ell_1, \ell_2,$ & $r$ is fixed & \multirow{2}{*}{$\O(dn+n^{1+\rho})$} & \multirow{2}{*}{$\O(dn^{2\rho})$} & \multirow{2}{*}{$\O(dn^{2\rho})$} \\ & Hamming & high dimension & & & \\ \hline \end{tabular} \caption{Summary of results: $n$ is the input size; $r$ is the distance threshold; $d$ is the dimension of the input points; $\smash{\rho\leq \frac{1}{(1+\varepsilon)^2}+o(1)}$ is the quality of the LSH family for the $\ell_2$ metric (for the $\ell_1$ and Hamming metrics, $\rho\leq \frac{1}{1+\varepsilon}$). The $\O$ notation hides a $\log^{O(1)} n$ factor; for the results where $d$ is constant, the $O(1)$ exponent is at most linear in $d$, while for the high-dimensional case the exponent is at most $3$.} \label{tab:summary} \end{table} We present several dynamic data structures for enumerating similarity joins under different metrics. Table~\ref{tab:summary} summarizes our main results. It turns out that dynamic similarity join is hard for some metrics, e.g., the $\ell_2$ metric. Therefore, we also consider the {\em approximate similarity join}, where the distance threshold $r$ is a soft constraint. 
Formally, given parameters $r, \varepsilon > 0$, the \emph{$\varepsilon$-approximate similarity join} relaxes the distance threshold as follows: (1) all pairs $(a,b) \in A \times B$ with $\phi(a,b) \le r$ must be returned; (2) no pair $(a,b)\in A \times B$ with $\phi(a,b)>(1+\varepsilon)r$ is returned; (3) some pairs $(a,b) \in A \times B$ with $r < \phi(a,b) \le (1+\varepsilon)r$ may be returned. We classify our results into four broad categories: \paragraph{Exact similarity join.} Here we assume that $d$ is constant and the distance threshold is fixed. Our first result (Section~\ref{sec:linfty}) is an $\O(n)$-size data structure for similarity join under the $\ell_1/\ell_\infty$ metrics that can be updated in $\O(1)$ time whenever $A$ or $B$ is updated, and ensures $\O(1)$ delay during enumeration. Based on range trees~\cite{bentley1978decomposable, de1997computational}, the data structure stores the similarity join pairs \emph{implicitly} so that they can be enumerated without probing every input point. We extend these ideas to construct a data structure for similarity join under the $\ell_2$ metric (Section~\ref{sec:l2}) with $\O(n^{1-\frac{1}{d+1}})$ amortized update time that supports $\O(n^{1-\frac{1}{d+1}})$-delay enumeration. Lower bounds on ball range searching~\cite{afshani2012improved, chazelle1996simplex} rule out the possibility of a linear-size data structure with $\O(1)$ delay. \paragraph{Approximate similarity join in low dimensions.} Due to the negative result for the $\ell_2$ metric, we shift our attention to the $\varepsilon$-approximate similarity join. We now allow the distance threshold to be part of the query, but the value of the error parameter $\varepsilon$ is fixed. We present a simple linear-size data structure, based on quad trees and the notion of well-separated pair decomposition, with $O(\varepsilon^{-d})$ update time and $O(1)$ delay. If we fix the distance threshold, the data structure can be further simplified, and somewhat improved, by replacing the quad tree with a simple uniform grid. \paragraph{Approximate similarity join in high dimensions.} So far we assumed $d$ to be constant, and the big-$O$ notation in some of the previous bounds hides a constant that is exponential in $d$. Our final result is an LSH-based~\cite{gionis1999similarity} data structure for similarity joins in high dimensions. Two technical issues arise when enumerating join results from LSH: one is to ensure bounded delay, because we do not want to enumerate false positive results identified by the hash functions; the other is to remove duplicate results, as one join result could be identified by multiple hash functions. For the $\ell_2$ metric (the results also extend to the $\ell_1$ and Hamming metrics), we propose a data structure of $\O(nd +n^{1+\rho})$ size and $\O(dn^{2\rho})$ amortized update time that supports $(1+2\varepsilon)$-approximate enumeration with $\O(dn^{2\rho})$ delay with high probability, where $\rho \le \frac{1}{(1+\varepsilon)^2}+o(1)$ is the quality of the LSH family. Alternatively, we present a data structure with $\O(dn^{\rho})$ amortized update time and $\O(dn^{3\rho})$ delay. Our data structure can be extended to the case where the distance threshold $r$ is variable. If we allow a worse approximation error, we can improve the results for the Hamming distance. Finally, we show a lower bound by relating similarity join to the {\em approximate nearest neighbor} query. We also consider similarity joins beyond binary joins. 
\paragraph{Triangle similarity join in low dimensions.} Given three sets of points $A, B, S$ in $\Re^d$, a metric $\phi(\cdot)$, and a distance threshold $r > 0$, the {\em triangle similarity join} asks to report all triples $(a,b,s) \in A \times B \times S$ with $\phi(a,b) \le r, \phi(a,s) \le r, \phi(b,s) \le r$. The {\em $\varepsilon$-approximate triangle similarity join} is defined analogously by taking the distance threshold $r$ as a soft constraint. We extend our data structures to the approximate triangle similarity join by paying a factor of $\log^{O(1)} n$ in the performance. \paragraph{High-level framework.} All our data structures rely on the following common framework. We model the (binary) similarity join as a bipartite graph $G' = (A\cup B, E)$, where an edge $(a,b) \in E$ exists if and only if $\phi(a,b) \le r$. A naive solution that maintains all edges of $G'$ explicitly leads to a data structure of $\Theta(n^2)$ size that can be updated in $\Theta(n)$ time while supporting $O(1)$-delay enumeration. To obtain a data structure with poly-logarithmic update time and delay, we find a compact representation of $G'$ with a set $\mathcal{F}=\{(A_1, B_1), (A_2, B_2),\ldots, (A_u,B_u)\}$ of edge-disjoint bi-cliques such that (i) $A_i\subseteq A$, $B_i\subseteq B$ for any $i$, (ii) $E = \bigcup_{i=1}^u A_i \times B_i$, and (iii) $(A_i \times B_i) \cap (A_j \times B_j) = \emptyset$ for any $i \neq j$. We represent $\mathcal{F}$ using a tripartite graph $\mathcal{G}=(A\cup B\cup C, E_1\cup E_2)$, where $C=\{c_1, \ldots, c_u\}$ has a node for each bi-clique in $\mathcal{F}$ and, for every $i\leq u$, we have the edges $(a_j,c_i)\in E_1$ for all $a_j\in A_i$ and $(b_k,c_i)\in E_2$ for all $b_k\in B_i$. We \emph{cannot} afford to maintain $E_1$ and $E_2$ explicitly. Instead, we store some auxiliary information for each $c_i$ and use geometric data structures to recover the edges incident to a vertex $c_i\in C$. We also use data structures to maintain the set $C$ and the auxiliary information dynamically as $A$ and $B$ are updated. We will not refer to this framework explicitly, but it provides the intuition behind all our data structures. Section~\ref{sec:exact} describes the data structures supporting this framework for exact similarity join, and Section~\ref{sec:approximate} presents simpler, faster data structures for approximate similarity join. Both Sections~\ref{sec:exact} and~\ref{sec:approximate} assume $d$ to be constant. Section~\ref{sec:highd} describes the data structure for approximate similarity join when $d$ is not constant. \section{Similarity Join in High Dimensions} \label{sec:lsh} So far, we have treated the dimension $d$ as a constant. In this section, we describe a data structure for approximate similarity join using the \emph{locality sensitive hashing} (LSH) technique, so that the dependency on the dimension is a small polynomial in $d$; in particular, we remove the exponential dependency on $d$ hidden in the poly-logarithmic factors. For simplicity, we describe our data structure assuming that $r$ is fixed, and in the end we extend it to the case where $r$ is also part of the similarity join query. For $\varepsilon > 0$ and $1\geq p_1 > p_2>0$, recall that a family $H$ of hash functions is $(r, (1+\varepsilon)r, p_1, p_2)$-sensitive if, for any uniformly chosen hash function $h \in H$ and any two points $x$, $y$, we have (1) $\Pr[h(x) = h(y)] \ge p_1$ if $\phi(x, y) \le r$; and (2) $\Pr[h(x) = h(y)] \le p_2$ if $\phi(x, y) \ge (1+\varepsilon)r$. 
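As a concrete example of such a family (our illustration; the paper does not commit to a specific construction at this point), the classical bit-sampling family for the Hamming metric sets $h(x) = x_i$ for a random coordinate $i$, so that $\Pr[h(x)=h(y)] = 1 - \phi(x,y)/d$; concatenating $k$ independent such functions yields the bucket identifiers used by the data structures below.

\begin{verbatim}
import random

def sample_bucket_fn(d, k, rng=random):
    # g(x) concatenates k bit-sampling hashes h_i(x) = x[coords[i]], so
    # Pr[g(x) = g(y)] = (1 - phi(x, y)/d)^k for x, y in {0,1}^d
    coords = [rng.randrange(d) for _ in range(k)]
    return lambda x: tuple(x[i] for i in coords)

# illustrative parameters, not the paper's exact choices
d, k = 16, 4
g = sample_bucket_fn(d, k)
x = [1, 0] * 8
y = list(x)
y[3] ^= 1                    # Hamming distance 1 from x
print(g(x) == g(y))          # True with probability (1 - 1/d)^k
\end{verbatim}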
The quality of a hash function family is measured by $\rho = \frac{\ln p_1}{\ln p_2} < 1$, which is upper bounded by a number that depends only on $\varepsilon$; $\rho = \frac{1}{1+\varepsilon}$ for many common distance functions~\cite{gionis1999similarity,andoni2006near,datar2004locality,har2012approximate}. For $\ell_2$, the best known bound is $\rho\leq \frac{1}{(1+\varepsilon)^2}+o(1)$~\cite{andoni2006near}. The essence of LSH is to hash ``similar'' points into the same buckets with high probability. A simple approach to using LSH for similarity join is to (\romannumeral 1) hash points into buckets; (\romannumeral 2) probe each bucket and check, for each pair of points $(a,b) \in A \times B$ inside the same bucket, whether $\phi(a,b) \le r$; and (\romannumeral 3) report $(a,b)$ if the inequality holds. However, two challenges arise with this approach. First, without any knowledge of the false positive results inside each bucket, checking every pair of points could lead to a huge delay. Our key insight is that after checking a specific number of pairs of points in one bucket (this number will be determined later), we can safely skip the bucket, since any join result missed in this bucket will be found in another one with high probability. Second, one pair of points may collide under multiple hash functions, so an additional step is necessary in the enumeration to remove duplicates. If we wish to keep the size of the data structure near-linear, we cannot store the reported pairs, so detecting duplicates requires some care. As a warm-up exercise to gain intuition, we first present a relatively easy special case in which the input points, as well as the inserted points, are chosen uniformly from the domain universe. Afterwards, we focus on the general case without any assumption on the input distribution. Our data structure and algorithm use a parameter $M$, whose value will be determined later. Since we do not define new hash functions, all results presented in this section hold for the Hamming, $\ell_2$, and $\ell_1$ metrics. \subsection{With Uniform Assumption} \label{appendix:uniform} Under this strong assumption, the LSH technique can be used with a slight modification. We adopt an LSH family $\H$ with quality parameter $\rho$ and randomly choose $\tau = O(n^{\rho})$ hash functions $g_1, g_2,\cdots, g_\tau$. To ensure our high-probability guarantee (as shown later), we maintain $O(\log n)$ copies of this data structure. \paragraph{Data structure.} Let $C$ be the set of all buckets over all $\tau$ hash functions. For each bucket $\square$, let $A_\square, B_\square$ be the sets of points from $A, B$ falling into bucket $\square$, respectively. A nice property of $A_\square$ and $B_\square$, which follows directly from the standard balls-into-bins result, is stated in the following lemma. \begin{lemma} \label{lem:ball-into-bin} If the input points are randomly and uniformly chosen from the domain universe, then with probability at least $1 - \frac{1}{n}$, every bucket receives $O(\log n/\log \log n)$ points. \end{lemma} As the number of points colliding in each bucket is bounded by $O(\log n)$, it is affordable to check all pairs of points inside one bucket in $O(\log^2 n)$ time, thus resolving the first challenge. Moreover, we introduce a variable $\square_{\text{out}}$ for each bucket $\square \in C$ indicating the number of pairs of points within distance $r$ colliding inside $\square$. Obviously, a bucket $\square$ is {\em active} if $\square_{\text{out}} > 0$, and {\em inactive} otherwise. 
All active buckets are maintained in $\mathcal{C} \subseteq C$, in increasing order of the index of the hash function they come from. \paragraph{Update.} Assume a point $a \in A$ is inserted. We visit each hash bucket $\square$ into which $a$ is hashed. We insert $a$ into $A_\square$, count the number of points $b \in B_\square$ with $\phi(a,b) \le r$, and add this quantity to $\square_{\text{out}}$. Deletions are handled similarly. \paragraph{Enumeration.} Assume a pair $(a,b)$ is found in a bucket $\square$ of hash function $g_i$ and is about to be reported. We check whether $a,b$ have collided in any earlier bucket: if there exists no index $j < i$ such that $g_j(a) = g_j(b)$, we report the pair. Then, we need to notify every bucket that also witnesses $(a,b)$ but comes after $\square$. More specifically, for every $j> i$, if $g_j(a) = g_j(b)$ in a bucket $\square'$, we decrease $\square'_{\text{out}}$ by $1$, and remove $\square'$ from $\mathcal{C}$ if $\square'_{\text{out}}$ becomes $0$. The pseudocode is given below. \begin{algorithm} \caption{{\sc UniEnumLSH}} All buckets in $\mathcal{C}$ are sorted by the index of their hash functions\; \ForEach{$\square \in \mathcal{C}$}{ Let $g_i$ be the hash function generating $\square$\; \ForEach{$(a,b) \in A_\square \times B_\square$}{ \If{$\phi(a,b) \le r$}{ \textrm{flag} = \textrm{true}\; \ForEach{$j \in \{1,2,\cdots,i-1\}$}{ \If{$g_j(a) = g_j(b)$}{ flag $ = \textrm{false}$\; } } \If{\textrm{flag} = \textrm{true}}{ {\sc Emit} $(a,b)$\; \ForEach{$j \in \{i+1, i+2, \cdots, \tau\}$}{ \If{$g_j(a) = g_j(b)$ in bucket $\square'$}{ $\square'_{\text{out}} \gets \square'_{\text{out}} -1$\; \If{$\square'_{\text{out}} = 0$}{ $\mathcal{C} \gets \mathcal{C} - \{\square'\}$\; } } } } } } } \end{algorithm} \begin{theorem} \label{thm:uniform-property} Let $A, B$ be two sets of points in $\mathbb{R}^d$, with $|A| + |B| = n$, and let $\varepsilon, r$ be positive parameters. Under the uniform assumption, a data structure of $\O(nd)$ size can be constructed in $\O(nd)$ time and updated in $\O(d)$ time, while with probability $1-2/n$ supporting exact enumeration of similarity join with $\O(d)$ delay. \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:uniform-property}] We first prove the correctness of the algorithm. It can be easily checked that no pair of points with distance larger than $r$ is emitted. Consider any pair of points $(a,b)$ within distance $r$. Let $i$ be the smallest index such that $g_i(a) = g_i(b)$, in bucket $\square$. In the algorithm, $(a,b)$ is reported by $\square$ and by no later bucket. Thus, each join result is enumerated exactly once, without duplication. In the case of the Hamming distance, we have $k = \log_2 n$ and $p_1^k = (1 - \frac{r}{d})^{\log n} \in [1/e, 1]$, since $d/r > \log n$ by padding some zeros at the end of all points\footnote{A similar assumption was made in the original paper~\cite{gionis1999similarity} on nearest neighbor search in Hamming distance.}; thus $\tau = 3 \cdot 1/p_1^k \cdot \ln n = \O(1)$. We next analyze the complexity of our data structure. It can be built in $O(nk\tau)$ time with $O(nk\tau)$ space, since there are $n$ vertices in $A \cup B$, at most $O(n\tau)$ non-empty buckets in $C$, and each point of $A \cup B$ is incident to exactly $\tau$ buckets in $C$. By the same argument, it takes $O(nk\tau)$ time to construct the tripartite graph representation. 
Moreover, it takes $O(\sum_\square |A_\square| \cdot |B_\square|)$ time to compute the quantity $\square_{\text{out}}$ for all buckets, which can be further bounded by \[\sum_{\square} |A_\square| \cdot |B_\square| < n \cdot \max_\square (|A_\square| + |B_\square|) = O(n \log n),\] as implied by Lemma~\ref{lem:ball-into-bin}. Consider any bucket $\square$ from hash function $g_j$. If the algorithm visits it during the enumeration, at least one pair of points within distance $r$ is emitted that has not been emitted by any bucket from a hash function $g_i$ with $i < j$. Checking all pairs of points inside any bucket takes at most $O((d + k\tau) \cdot \max_{\square} |A_\square| \cdot |B_\square|)$ time, where it takes $O(d)$ time to compute the distance between any pair of points and $O(k\tau)$ time to check whether this pair has been emitted before or to mark the buckets that also witness this pair later. Thus, the delay between any two consecutive results is bounded by $O((d + k\tau) \cdot \max_{\square} |A_\square| \cdot |B_\square|)$, which is $\O(d)$ under the uniform assumption. Moreover, each pair of points within distance $r$ collides under any fixed hash function with probability at least $p_1^k$. The probability that the pair does not collide under any of the hash functions is at most $(1-p_1^k)^{3 \cdot 1/p_1^k \cdot \ln n} \le 1/n^3$. As there are at most $n^2$ such pairs of points, the probability that any one of them is not reported by our data structure is at most $1/n$. By a union bound, the probability that the uniform assumption fails or that some join result is not reported is at most $\frac{1}{n} + \frac{1}{n} = \frac{2}{n}$. Thus, the result holds with probability at least $1 - \frac{2}{n}$. \end{proof} \subsection{Without Uniform Assumption} In general, without the uniform assumption, we need to explore more properties of the LSH family to obtain an efficient data structure. Our key insight is that after checking some pairs of points in one bucket (the specific number of pairs will be determined later), we can safely skip the bucket, since with high probability any join result missed in this bucket will be found in another one. In this way, we avoid spending too much time in one bucket before finding any join result. Given a set $P$ of points and a distance threshold $r$, let $\overline{\mathcal{B}}(q, P, r) = \{p \in P\mid \phi(p,q) >r\}$. The next lemma follows from~\cite{indyk1998approximate, har2011geometric}. \begin{lemma} \label{lem:nnq} For a set $P$ of $n$ points in Hamming space $\mathbb{H}^d$ and a distance threshold $r$, if $k=O(\log n)$ and $\tau=O(n^{\rho})$, then for any point $p\in P$ the following condition holds with constant probability $\gamma$: for any $q\in P$ such that $\phi(p,q)\leq r$, there exists a bucket $\square$ such that $p, q$ collide in $\square$ and $|\square \cap \overline{\mathcal{B}}(p,P,(1+\varepsilon)r)| \le M$, for $M=O(n^{\rho})$. \end{lemma} \subsubsection{Data structure} We adopt an LSH family $\H$ with quality parameter $\rho$ and randomly choose $\tau = O(n^{\rho})$ hash functions $g_1, g_2,\cdots, g_\tau$. To ensure our high-probability guarantee (as shown later), we maintain $m = \frac{3}{\log(1/\gamma)} \log n= O(\log n)$ copies of this data structure, denoted $\mathbb{I}_1, \mathbb{I}_2, \cdots, \mathbb{I}_m$. For each bucket $\square$, we store and maintain sets of $M$ arbitrary points $\bar{A}_\square\subseteq A_\square$ and $\bar{B}_\square\subseteq B_\square$. 
For each point $a\in \bar{A}_\square$, we maintain a counter $a_c=|\{b\in \bar{B}_\square\mid \phi(a,b)\leq 2(1+\varepsilon)r\}|$. A bucket $\square$ is \emph{active} if there exists a pair $(a,b)\in \bar{A}_\square\times \bar{B}_\square$ such that $\phi(a,b)\leq 2(1+\varepsilon)r$; equivalently, $\square$ is active if there exists $a\in \bar{A}_\square$ with $a_c>0$. All active buckets are maintained in a list $\mathcal{C}$. For each bucket $\square\in \mathcal{C}$, we store a \emph{representative} pair $(a_\square,b_\square)\in \bar{A}_\square\times \bar{B}_\square$ such that $\phi(a_\square, b_\square)\leq 2(1+\varepsilon)r$. For a pair of points $(a,b) \in A \times B$ and a hashing bucket $\square$, we refer to $\square$ as a {\em proxy bucket} for $(a,b)$ if (\romannumeral 1) $a \in A_\square, b \in B_\square$; and (\romannumeral 2) $|\bar{\mathcal{B}}(a, A_\square \cup B_\square, (1+\varepsilon)r)| \le M$. Lemma~\ref{lem:non-uniform-property} implies that, with high probability, each join result $(a,b)$ with $\phi(a,b) \le r$ has at least one proxy bucket. \begin{lemma} \label{lem:non-uniform-property} With probability at least $1-1/n$, for any pair of points $(a, b) \in A\times B$ with $\phi(a,b) \le r$, there exists a data structure $\mathbb{I}_j$ that contains a bucket $\square$ such that: \begin{itemize} \item $a,b$ collide in $\square$; \item $|\square \cap \overline{\mathcal{B}}(a, A, (1+\varepsilon)r)| \le M$ and $|\square \cap \overline{\mathcal{B}}(a, B, (1+\varepsilon)r)| \le M$. \end{itemize} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:non-uniform-property}] Consider any pair of points $(a\in A, b\in B)$ within distance $r$ and an arbitrary copy of the data structure constructed as described above. From Lemma~\ref{lem:nnq}, with probability at least $\gamma$ there exists a bucket in this copy that contains both $a, b$ and in which the number of collisions of $a$ (with the rest of the points in $A\cup B$) is bounded by $M$. Let $F_j$ be the event that there is such a bucket in $\mathbb{I}_j$. Since $F_i, F_j$ are independent for $i\neq j$, we have $\Pr[\bar{F}_1 \cap \ldots \cap \bar{F}_m]=\Pr[\bar{F}_1]\cdot \ldots \cdot \Pr[\bar{F}_m] \leq \gamma^{\frac{3}{\log(1/\gamma)}\log n}\leq 1/n^3$. Let $Z\leq n^2$ be the number of pairs with distance at most $r$. Let $G_j$ be the event that, for the $j$-th pair of points $(a', b')$ with distance at most $r$, at least one copy of the data structure contains a bucket that contains both $a', b'$ and in which the number of collisions of $a'$ is bounded by $M$. Then $\Pr[G_1 \cap \ldots \cap G_Z]=1-\Pr[\bar{G}_1\cup \ldots \cup \bar{G}_Z]\geq 1-\Pr[\bar{G}_1]-\ldots-\Pr[\bar{G}_Z]\geq 1-n^2/n^3\geq 1-1/n$. Hence, with high probability, for any pair $a\in A, b\in B$ with distance at most $r$, there is at least one bucket in the data structure that contains both $a, b$ and in which the number of collisions of $a$ is bounded by $M$. \end{proof} \begin{lemma} \label{lem:proxy} For any bucket $\square$, if there exist $M$ points from $A_\square$ and $M$ points from $B_\square$ such that none of the $M^2$ pairs has distance within $2(1+\varepsilon)r$, then $\square$ is not a proxy bucket for any pair $(a, b) \in A_\square \times B_\square$ with $\phi(a,b) \le r$. \end{lemma} \begin{proof} Let $A',B'$ be two such sets of $M$ points from $A_\square, B_\square$, respectively. 
We assume that all pairs of points in $A' \times B'$ have distance larger than $2(1+\varepsilon)r$. Observe first that $\square$ is not a proxy bucket for any pair $(a\in A', b\in B')$. It remains to consider pairs $(a\in A_\square\setminus A', b\in B_\square)$ with $\phi(a,b) \le r$. First, assume $b\in B_\square \setminus B'$. If $A' \subseteq \bar{\mathcal{B}}(a,A,(1+\varepsilon)r)$ or $B' \subseteq \bar{\mathcal{B}}(a,B,(1+\varepsilon)r)$, then $\square$ is not a proxy bucket for $(a,b)$. Otherwise, there must exist a point $a' \in A'$ as well as a point $b' \in B'$ such that $\phi(a,a') \le (1+\varepsilon)r$ and $\phi(a,b') \le (1+\varepsilon)r$, so by the triangle inequality $\phi(a',b') \le \phi(a,a') + \phi(a,b') \le 2(1+\varepsilon)r$. Thus, $(a', b') \in A' \times B'$ is a pair within distance $2(1+\varepsilon)r$, a contradiction. Next, assume $b\in B'$. If $A' \subseteq \bar{\mathcal{B}}(a,A,(1+\varepsilon)r)$, then $\square$ is not a proxy bucket for $(a,b)$. Otherwise, there must exist a point $a' \in A'$ such that $\phi(a,a') \le (1+\varepsilon)r$, so by the triangle inequality $\phi(a',b) \le \phi(a,a') + \phi(a,b) \le (2+\varepsilon)r\leq 2(1+\varepsilon)r$. Thus, $(a', b) \in A' \times B'$ is a pair within distance $2(1+\varepsilon)r$, a contradiction. \end{proof} Later, we will see that our enumeration phase reports each join result only in one of its proxy buckets. This guarantees the completeness of the query results, but de-duplication is still necessary if a pair of points has more than one proxy bucket. \subsubsection{Update} We handle insertions and deletions separately, assuming the updated point is $a\in A$; updates from $B$ are handled similarly. The pseudocode is given below. \begin{algorithm} \caption{{\sc Insert}$(a \in A)$} \ForEach{hash function $g$ in the data structure}{ $\square \gets$ the bucket with hash value $g(a)$\; Insert $a$ into $A_\square$\; \If{$|\bar{A}_\square|<M$}{ Insert $a$ into $\bar{A}_\square$\; Compute $a_c$ by computing $\phi(a,b)$ for each $b\in \bar{B}_\square$\; \If{$a_c>0$ AND $\square\notin \mathcal{C}$}{ $\mathcal{C}\leftarrow \mathcal{C}\cup\{\square\}$\; $(a_\square, b_\square)=(a,b)$ for a point $b\in \bar{B}_\square$ with $\phi(a,b)\leq 2(1+\varepsilon)r$\; } } } \end{algorithm} \begin{algorithm} \caption{{\sc Delete}$(a \in A)$} \ForEach{hash function $g$ in the data structure}{ $\square \gets$ the bucket with hash value $g(a)$\; Delete $a$ from $A_\square$\; \If{$a\in \bar{A}_\square$}{ Remove $a$ from $\bar{A}_\square$\; Insert an arbitrary point $a'\in A_\square\setminus \bar{A}_\square$ into $\bar{A}_\square$ and compute $a_c'$\; \If{$\square\notin \mathcal{C}$ AND $a_c'>0$}{ $\mathcal{C}\gets \mathcal{C}\cup \{\square\}$\; $(a_\square,b_\square)=(a', b)$ where $b\in \bar{B}_\square$ and $\phi(a',b)\leq 2(1+\varepsilon)r$\; } \ElseIf{$\square\in \mathcal{C}$ AND $a_\square=a$}{ \If{$\exists a''\in \bar{A}_\square$ with $a_c''>0$}{ $(a_\square,b_\square)=(a'', b)$ where $b\in \bar{B}_\square$ and $\phi(a'',b)\leq 2(1+\varepsilon)r$\; }\Else{ $\mathcal{C}\gets \mathcal{C}\setminus \{\square\}$ } } } } \end{algorithm} \paragraph{Insertion of $a$.} We compute $g(a)$ for each chosen hash function $g$. Assume $\square$ is the bucket with hash value $g(a)$. We first insert $a$ into $A_\square$. If $|\bar{A}_\square|<M$, we add $a$ to $\bar{A}_\square$ and compute the counter $a_c$ by visiting all points in $\bar{B}_\square$. 
If $\square$ was inactive and $a_c>0$, we add $\square$ to $\mathcal{C}$, find a point $b\in \bar{B}_\square$ with $\phi(a,b)\leq 2(1+\varepsilon)r$, and set the representative pair $(a_{\square}, b_\square)=(a,b)$. If $|\bar{A}_\square|=M$ before the insertion, nothing further is done. \paragraph{Deletion of $a$.} Similarly, we compute $g(a)$ for each chosen hash function $g$. Assume $\square$ is the bucket with hash value $g(a)$. We first remove $a$ from $A_{\square}$. If $a\in \bar{A}_\square$, we also remove it from $\bar{A}_\square$ and replace it with an arbitrary point $a'\in A_{\square}\setminus \bar{A}_\square$, computing its counter $a_c'$. If $a$ was a point of the representative pair of $\square$, we update the representative pair by finding any point $a''\in \bar{A}_\square$ with $a_c''>0$. If there is no such point, we remove $\square$ from $\mathcal{C}$. \medskip After $n/2$ updates, we reconstruct the entire data structure from scratch. \subsubsection{Enumeration} The high-level idea is to enumerate the representative pair of each bucket in $\mathcal{C}$. Assume a representative pair $(a,b)$ is found in a bucket $\square \in \mathcal{C}$. Next, the algorithm enumerates all pairs containing the point $a$. Initially, all active buckets containing $a$ are maintained in $\mathcal{C}(a) \subseteq \mathcal{C}$. Algorithm~\ref{alg:enumerate-non-uniform} visits every bucket $\square \in \mathcal{C}(a)$ and checks the distances between $a$ and the points in $B_\square$ that are not marked in $X(\square, a)$ (we explain below when a point is marked). Each time a pair $(a,b)$ within distance $2(1+\varepsilon)r$ is found, it reports this pair and calls the procedure {\sc Deduplicate} on $(a,b)$ (details are given below). If more than $M$ points far away from $a$ (i.e., at distance $>2(1+\varepsilon)r$) are encountered, we stop enumerating results with point $a$ in this bucket and remove the bucket\footnote{In the enumeration phase, ``remove'' always means conceptually marked, instead of changing the data structure itself.} $\square$ from $\mathcal{C}(a)$. We also update $\bar{A}_\square$: if $a\in \bar{A}_\square$, we replace $a$ with another point of $A_\square\setminus\bar{A}_\square$. Once the enumeration is finished for $a$, i.e., when $\mathcal{C}(a)$ becomes empty, it can be easily checked that $a$ has been (conceptually) removed from all buckets. Next, we explain the de-duplication step, presented as Algorithm~\ref{alg:deduplicate}. Once a pair of points $(a,b)$ within distance $2(1+\varepsilon)r$ is reported, Algorithm~\ref{alg:deduplicate} goes over all buckets witnessing the collision of $a,b$ and marks $b$ in $X(\square, a)$ to avoid repeated enumeration (line 2). Moreover, for any bucket $\square$ with $a \in A_\square$ and $b \in B_\square$, if $(a,b)$ is also its representative pair, Algorithm~\ref{alg:deduplicate} performs more updates for $\square$. It first decides whether $\square$ is still an active bucket for $a$ by checking the distances between $a$ and $M$ points of $B_\square$ unmarked in $X(\square, a)$. If a pair within distance $2(1+\varepsilon)r$ is found, it sets this pair as the new representative pair of $\square$. Otherwise, it is safe to skip all results with point $a$ in this bucket, and a new representative pair for $\square$ is computed using $\bar{A}_\square, \bar{B}_\square$. If no representative pair can be found, it is safe to skip bucket $\square$ entirely. 
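A compact executable sketch of this per-bucket bookkeeping (ours and deliberately simplified: a single bucket, with the metric \texttt{phi} supplied by the caller and points assumed hashable) complements the formal pseudocode below.

\begin{verbatim}
class Bucket:
    def __init__(self, M, t, phi):
        self.M, self.t, self.phi = M, t, phi  # t = 2(1+eps)r
        self.A, self.B = set(), set()         # all points hashed here
        self.Abar, self.Bbar = set(), set()   # samples of size <= M each
        self.beta = {}   # beta[a] = |{b in Bbar : phi(a, b) <= t}|

    def _count(self, a):
        return sum(1 for b in self.Bbar if self.phi(a, b) <= self.t)

    def representative(self):
        # a sampled pair within distance t, or None if the bucket is inactive
        for a in self.Abar:
            if self.beta[a] > 0:
                return a, next(b for b in self.Bbar if self.phi(a, b) <= self.t)
        return None

    def insert_a(self, a):
        self.A.add(a)
        if len(self.Abar) < self.M:
            self.Abar.add(a)
            self.beta[a] = self._count(a)

    def delete_a(self, a):
        self.A.discard(a)
        if a in self.Abar:
            self.Abar.discard(a)
            self.beta.pop(a, None)
            spare = next(iter(self.A - self.Abar), None)  # refill the sample
            if spare is not None:
                self.Abar.add(spare)
                self.beta[spare] = self._count(spare)
\end{verbatim}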
\begin{algorithm} \caption{{\sc EnumerateLSH}} \label{alg:enumerate-non-uniform} \While{$\mathcal{C} \neq \emptyset$}{ $(a,b) \gets$ the representative pair of any bucket in $\mathcal{C}$\; $\mathcal{C}(a) \gets \{\square \in \mathcal{C}: a \in A_\square\}$\; \While{$\mathcal{C}(a) \neq \emptyset$}{ Pick one bucket $\square \in \mathcal{C}(a)$; $i \gets 0$\; \ForEach{$b \in B_\square - X(\square, a)$}{ \If{$\phi(a,b) \le 2(1+\varepsilon)r$}{ {\sc Emit}$(a,b)$\; {\sc Deduplicate}$(a,b)$\; } \Else{ $i \gets i + 1$\; \bf{if} {$i > M$} \bf{then break\;} } } $A_\square \gets A_\square - \{a\}$\; $\mathcal{C}(a) \gets \mathcal{C}(a) - \{\square\}$\; Replace $a$ in $\bar{A}_\square$ (if $a\in \bar{A}_\square$) and update the representative pair of $\square$\; } } \end{algorithm} \begin{algorithm} \caption{{\sc Deduplicate$(a,b)$}} \label{alg:deduplicate} \ForEach{$\square \in C$ with $a \in A_\square$ and $b \in B_\square$ }{ $X(\square, a) \gets X(\square, a) \cup \{b\}$\; \If{$(a_\square,b_\square)=(a,b)$}{ $B' \gets M$ arbitrary points in $B_\square - X(\square, a)$\; \If{there is $b' \in B'$ with $\phi(a, b') \le 2(1+\varepsilon)r$}{ $(a_\square,b_\square)=(a, b')$\; } \Else{ $\mathcal{C}(a) \gets \mathcal{C}(a) - \{\square\}$\; $A_\square \gets A_\square - \{a\}$\; \If{$a\in \bar{A}_\square$}{ Replace it with a new point $a'\in A_{\square}\setminus \bar{A}_\square$\; Compute $a_c'$\; \If{$\exists a''\in \bar{A}_\square$ with $a_c''>0$}{ $(a_{\square},b_{\square})=(a'', b'')$ where $b''\in \bar{B}_\square$ and $\phi(a'',b'')\leq 2(1+\varepsilon)r$\; }\Else{ $\mathcal{C}\gets \mathcal{C}\setminus\{\square\}$\; } } } } } \end{algorithm} For any bucket $\square$, we can maintain the points in $A_\square, B_\square, X(\square, a)$ in balanced binary search trees, so that the points in any set can be listed or moved to a different set with $O(\log n)$ delay. Moreover, to avoid conflicts among the markers made by different enumeration queries, we generate the markers randomly and delete stale values by lazy updates~\cite{erickson2011static, overmars1981worst, overmars1987design} after finding new pairs to report. \begin{lemma} \label{lem:non-uniform-approximate} The data structure supports $(1+2\varepsilon)$-approximate enumeration with high probability. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:non-uniform-approximate}] It can be easily checked that no pair of points with distance more than $2(1+\varepsilon)r$ is enumerated. Also, each result is reported at most once, by Algorithm~\ref{alg:deduplicate}. Next, we show that, with high probability, all pairs of points within distance $r$ are reported. Consider any pair of points $(a,b)$ within distance $r$. By Lemma~\ref{lem:non-uniform-property}, there must exist a proxy bucket $\square$ for $(a,b)$. By Lemma~\ref{lem:proxy}, there exist no subsets of $M$ points $\bar{A}_\square \subseteq A_\square$ and $\bar{B}_\square \subseteq B_\square$ such that all pairs in $\bar{A}_\square \times \bar{B}_\square$ have distance larger than $2(1+\varepsilon)r$; hence $\square$ is active. Similarly, there exists no subset of $M$ points $\bar{B}_\square \subseteq B_\square$ such that all pairs $(a, b'\in \bar{B}_\square)$ have distance larger than $2(1+\varepsilon)r$; hence $\square$ is an active bucket for $a$. In Algorithm~\ref{alg:enumerate-non-uniform}, when $\square$ is visited in lines 7--18, $(a,b)$ is either reported from $\square$ or has been reported previously. 
\end{proof} We next analyze the complexity of the data structure. The size of the data structure is $\O(dn + n k \tau)=\O(dn+n^{1+\rho})$. The insertion time is $\O(d \tau M)=\O(dn^{\rho}M)$, and we can therefore bound the construction time of the data structure by $\O(d n\tau M)=\O(dn^{1+\rho}M)$. The deletion time is $\O(d \tau M)=\O(d n^{\rho}M)$. The delay is bounded by $\O(d \tau M)=\O(d n^{\rho}M)$, since after reporting a pair $(a, b)$ we may visit $\O(\tau)$ buckets and spend $O(dM)$ time in each to update its representative pair. Putting everything together with $M=O(n^{\rho})$, we conclude the next theorem. \begin{theorem} \label{the:high-non-uniform} Let $A$ and $B$ be two sets of points in $\mathbb{R}^d$, where $|A|+|B|=n$, and let $\varepsilon, r$ be positive parameters. For $\rho=\frac{1}{(1+\varepsilon)^2}+o(1)$, a data structure of $\O(dn+n^{1+\rho})$ size can be constructed in $\O(dn^{1+2\rho})$ time and updated in $\O(dn^{2\rho})$ amortized time, while supporting $(1+2\varepsilon)$-approximate enumeration for similarity join under the $\ell_2$ metric with $\O(dn^{2\rho})$ delay. \end{theorem} Alternatively, we can insert or delete points of $A\cup B$ without maintaining the sets $\bar{A}_\square, \bar{B}_\square$ for every bucket $\square$. In the enumeration phase, given a bucket $\square$, we visit $M$ arbitrary points from $A_\square$ and $M$ arbitrary points from $B_\square$ and compute their pairwise distances. If we find no pair $(a\in A_\square, b\in B_\square)$ with $\phi(a,b)\leq 2(1+\varepsilon)r$, we skip this bucket. Otherwise, we report the pair $(a,b)$ and run the de-duplication procedure. In this case, the update time is $\O(dn^{\rho})$ and the delay is $\O(dn^{3\rho})$. \begin{theorem} \label{the:high-non-uniform2} Let $A$ and $B$ be two sets of points in $\mathbb{R}^d$, where $|A|+|B|=n$, and let $\varepsilon, r$ be positive parameters. For $\rho=\frac{1}{(1+\varepsilon)^2}+o(1)$, a data structure of $\O(dn+n^{1+\rho})$ size can be constructed in $\O(dn^{1+\rho})$ time and updated in $\O(dn^{\rho})$ amortized time, while supporting $(1+2\varepsilon)$-approximate enumeration for similarity join under the $\ell_2$ metric with $\O(dn^{3\rho})$ delay. \end{theorem} Notice that the complexities of the theorems above depend on the parameter $M$ from Lemma~\ref{lem:nnq}; hence, a better bound on $M$ would improve the results of our data structure. In the original paper~\cite{indyk1998approximate} (Section 4.2) for the Hamming metric, the authors choose $\rho=\frac{\log(1/p_1)}{\log(p_1/p_2)}$, showing that for any $p,q\in P$ with $\phi(p,q)\leq r$ there exists, with constant probability $\gamma$, a bucket in which $p,q$ collide and the number of points of $\overline{\mathcal{B}}(p,P,(1+\varepsilon)r)$ colliding with $p$ is at most $M=O(1)$. For $\varepsilon>1$ they show that $\rho<\frac{1}{\varepsilon}$. Equivalently, replacing $\varepsilon$ by $\varepsilon-1$, we can take $M=O(1)$. Using this result, we obtain the next theorem. \begin{theorem} \label{the:Shigh-non-uniform} Let $A,B$ be two sets of points in $\mathbb{H}^d$, where $|A|+|B|=n$, and let $\varepsilon, r$ be positive parameters. For $\rho=\frac{1}{1+\varepsilon}$, a data structure of $\O(dn+n^{1+\rho})$ size can be constructed in $\O(dn^{1+\rho})$ time and updated in $\O(dn^{\rho})$ amortized time, while supporting $(3+2\varepsilon)$-approximate enumeration for similarity join under the Hamming metric with $\O(dn^{\rho})$ delay. 
\end{theorem} In the next remarks, we show that our results can be extended to the case where $r$ is part of the query (variable), and that our result is near-optimal. \paragraph{Remark 1.} Similar to the LSH~\cite{indyk1998approximate} used for the ANN query, we can extend our current data structure to the case where $r$ is also part of the query. For simplicity, we focus on the Hamming metric. For $\mathbb{H}^d$, it holds that $1\leq r\leq d$. Hence, we build $Z=O(\log_{1+\varepsilon} d)=O(\varepsilon^{-1}\log d)$ data structures as described above, each corresponding to a similarity threshold $r_i=(1+\varepsilon)^i$ for $i=1,\ldots, Z$. Given a query with threshold $r$, we first run a binary search to find $r_j$ such that $r\leq r_j\leq (1+\varepsilon)r$. Then, we use the $j$-th data structure to answer the similarity join query. Overall, the data structure has $\O(dn+\varepsilon^{-1}n^{1+\rho}\log d)$ size, can be constructed in $\O(\varepsilon^{-1}dn^{1+\rho}\log d)$ time, and can be updated in $\O(\varepsilon^{-1}dn^{\rho}\log d)$ amortized time. After finding the value $r_j$ in $O(\log (\varepsilon^{-1}\log d))$ time, the delay guarantee remains $\O(dn^{\rho})$. We can also extend this result to the $\ell_2$ or $\ell_\infty$ metrics using known results~\cite{gionis1999similarity, har2011geometric, indyk1998approximate}. \paragraph{Remark 2.} It is known that an algorithm for similarity join can be used to answer the ANN query. Let $P$ be a set of points in $\Re^d$, where $d$ is a large number, and let $\varepsilon, r$ be parameters. The ANN query asks that (1) if there exists a point within distance $r$ from $q$, any one of them should be returned with high probability; and (2) if there is no point within distance $(1+\varepsilon)r$ from $q$, it returns ``no'' with high probability. For any instance of the ANN query, we can construct an instance of similarity join by setting $A = P$ and $B = \emptyset$. Whenever a query point $q$ is issued for the ANN problem, we insert $q$ into $B$, invoke the enumeration query until the first result is returned (if there is any), and then remove $q$ from $B$. Our data structure of $\O(dn + n^{1+\rho})$ size can answer the $(1+2\varepsilon)$-approximate ANN query in $\O(dn^{2\rho})$ time under $\ell_2$, which is worse only by a factor of $n^\rho$ than the best data structure for answering the $\varepsilon$-approximate ANN query. \section{Similarity Join in High Dimensions} \label{sec:highd} So far, we have treated the dimension $d$ as a constant. In this section, we describe a data structure for approximate similarity join using the \emph{locality sensitive hashing} (LSH) technique, so that the dependency on $d$ is a small polynomial. For simplicity, we assume that $r$ is fixed; however, our results can be extended to the case in which $r$ is part of the enumeration query. For $\varepsilon > 0$ and $0<p_2<p_1\leq 1$, a family $\H$ of hash functions is $(r, (1+\varepsilon)r, p_1, p_2)$-sensitive if, for any uniformly chosen hash function $h \in \H$ and any two points $x,y$: (\romannumeral 1) $\Pr[h(x) = h(y)] \ge p_1$ if $\phi(x, y) \le r$; and (\romannumeral 2) $\Pr[h(x) = h(y)] \le p_2$ if $\phi(x, y) \ge (1+\varepsilon)r$. The quality of $\H$ is measured by $\rho= \frac{\ln p_1}{\ln p_2} <1$, which is upper bounded by a number that depends only on $\varepsilon$; $\rho = \frac{1}{1+\varepsilon}$ for many common distance functions~\cite{gionis1999similarity,datar2004locality,har2012approximate}. 
For $\ell_2$ the best result is $\rho\leq \frac{1}{(1+\varepsilon)^2}+o(1)$~\cite{andoni2006near}. The essence of LSH is to hash ``similar'' points into the same buckets with high probability. A simple approach based on LSH is to (\romannumeral 1) hash points into buckets; (\romannumeral 2) probe each bucket and check for each pair of points $(a,b) \in A \times B$ inside the same bucket whether $\phi(a,b) \le r$; and (\romannumeral 3) report $(a,b)$ if the inequality holds. However, two challenges arise for enumeration. First, without any knowledge of the false positive results inside each bucket, checking every pair of points could lead to a huge delay. Our key insight is that after checking a specific number of pairs of points in one bucket (this number will be determined later), we can safely skip the bucket, since any result pair missed in this bucket will be found in another one with high probability. Second, one pair of points may collide under multiple hash functions, so an additional step is necessary in the enumeration to remove duplicates. Since we wish to keep the size of the data structure near-linear, we are not allowed to store the reported pairs, so detecting duplicates requires some care. Our data structure and algorithm use a parameter $M$, whose value will be determined later. Since we do not define new hash functions, our results hold for any metric for which LSH works, in particular for the Hamming, $\ell_2$, and $\ell_1$ metrics. \paragraph{Data Structure.} We fix an LSH family $\H$. Let $\rho$ be its quality parameter. We randomly choose $\tau = O(n^{\rho})$ hash functions. Let $\Xi$ be the set of buckets over all hash functions, each corresponding to one possible value in the range of the hash functions. We maintain some extra statistics for the buckets in $\Xi$. For a bucket $\square$, let $A_\square = A \cap \square$ and $B_\square = B \cap \square$. We choose two arbitrary subsets $\bar{A}_\square, \bar{B}_\square$ of $A_\square, B_\square$, respectively, of $M$ points each. We choose $M=O(n^{\rho})$. For each point $a\in \bar{A}_\square$, we maintain a counter $\beta_a=|\{b\in \bar{B}_\square\mid \phi(a,b)\leq 2(1+\varepsilon)r\}|$, i.e., the number of points in $\bar{B}_\square$ with distance at most $2(1+\varepsilon)r$ from $a$. We store the points of $\bar{A}_\square$ in increasing order of their $\beta$ values. If there exists $a\in \bar{A}_\square$ with $\beta_a>0$, we call $\square$ {\em active} and store an arbitrary pair $(a,b)\in \bar{A}_\square \times \bar{B}_\square$ with $\phi(a,b)\leq 2(1+\varepsilon)r$ as its \emph{representative} pair, denoted as $(a_\square, b_\square)$. Let $\mathcal{C}$ denote the set of active buckets. To ensure a high-probability guarantee, we maintain $O(\log n)$ copies of this data structure. \medskip Before diving into the details of update and enumeration, we give some intuition about active buckets. Given a set $P$ of points and a distance threshold $r$, let $\overline{\mathcal{B}}(q, P, r) = \{p \in P\mid \phi(p,q) >r\}$. For any pair of points $(a,b) \in A \times B$ and a hashing bucket $\square$, we refer to $\square$ as the {\em proxy bucket} for $(a,b)$ if (\romannumeral 1) $a \in A_\square, b \in B_\square$; and (\romannumeral 2) $|\bar{\mathcal{B}}(a, A_\square \cup B_\square, (1+\varepsilon)r)| \le M$.
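As an illustration of this bookkeeping (a simplified sketch under our own naming, not a full implementation), each bucket stores the samples $\bar{A}_\square, \bar{B}_\square$, the counters $\beta_a$, and exposes a representative pair whenever some counter is positive:
\begin{verbatim}
# Sketch of the per-bucket state: samples Abar, Bbar of at most M
# points each, counters beta[a] = |{b in Bbar : phi(a,b) <= thresh}|
# with thresh = 2(1+eps)r, and a representative close pair (if any).

class Bucket:
    def __init__(self, M, thresh, phi):
        self.M, self.thresh, self.phi = M, thresh, phi
        self.A, self.B = [], []        # all points hashed here
        self.Abar, self.Bbar = [], []  # samples of size <= M
        self.beta = {}                 # counters for points of Abar

    def insert_a(self, a):
        self.A.append(a)
        if len(self.Abar) < self.M:
            self.Abar.append(a)
            self.beta[a] = sum(1 for b in self.Bbar
                               if self.phi(a, b) <= self.thresh)

    def insert_b(self, b):
        self.B.append(b)
        if len(self.Bbar) < self.M:
            self.Bbar.append(b)
            for a in self.Abar:        # keep the counters consistent
                if self.phi(a, b) <= self.thresh:
                    self.beta[a] += 1

    def representative(self):
        """An arbitrary close pair of Abar x Bbar, or None."""
        for a in self.Abar:
            if self.beta.get(a, 0) > 0:
                for b in self.Bbar:
                    if self.phi(a, b) <= self.thresh:
                        return (a, b)
        return None
\end{verbatim}
In this sketch, a bucket is active exactly when \texttt{representative()} returns a pair.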
A crucial property of proxy buckets is captured by Lemma~\ref{lem:proxy-2:Main} (the proof will be given later), which implies that it is safe (with high probability) to skip a bucket after we have seen up to $M^2$ far-away pairs of points inside it, since the truly similar pairs of points inside it will be reported from other buckets. In this way, we only need to enumerate join results from active buckets. \begin{lemma} \label{lem:proxy-2:Main} For any bucket $\square$, if there exist $M$ points from each of $A_\square$ and $B_\square$ such that none of the $M^2$ pairs has its distance within $2(1+\varepsilon)r$, then $\square$ is not a proxy bucket for any pair $(a, b) \in A_\square \times B_\square$ with $\phi(a,b) \le r$. \end{lemma} \paragraph{Update.} When a point is inserted, say $a\in A$, we visit every bucket $\square$ into which $a$ is hashed and insert $a$ into $A_\square$. If $|\bar{A}_\square|\geq M$, we do nothing. If $|\bar{A}_\square|<M$, we insert $a$ into $\bar{A}_\square$ and compute its counter $\beta_a$ by visiting all points in $\bar{B}_\square$. If $\beta_a>0$ and $\square\notin \mathcal{C}$, we add $\square$ to $\mathcal{C}$ and store the representative pair of $\square$ defined as $(a,b)$, where $b\in \bar{B}_\square$ is an arbitrary point of $\bar{B}_\square$ with $\phi(a,b)\leq 2(1+\varepsilon)r$. Notice that there always exists such a point $b$ because $\beta_a>0$. When a point is deleted, say $a\in A$, we visit every bucket $\square$ into which $a$ is hashed and delete $a$ from $A_\square$. If $a\in \bar{A}_\square$, we delete it from $\bar{A}_\square$ and insert an arbitrary point (if any) from $A_\square\setminus \bar{A}_\square$ into $\bar{A}_\square$. If $a = a_\square$, i.e., $a$ participates in the representative pair of $\square$, we find a new representative pair by considering an arbitrary point $a'\in \bar{A}_\square$ with $\beta_{a'}>0$. If no such point exists, we remove $\square$ from $\mathcal{C}$. The insertion or deletion of a point $b \in B$ is similar. After performing $n/2$ updates, we reconstruct the entire data structure from scratch. \paragraph{Enumeration.} The high-level idea is to enumerate the representative pair from every active bucket and recompute new representative pairs for it. Assume a representative pair $(a,b)$ is found in a bucket $\square \in \mathcal{C}$. Next, we enumerate all pairs that involve the point $a$. For any bucket $\square'$ such that $a \in A_{\square'}$, let $X(\square', a)\subseteq B_{\square'}$ be the set of \emph{marked} points of $B_{\square'}$ whose pairs with $a$ the enumeration procedure has already reported. For example, if $(a,b)$ is reported and both $a,b$ lie in a set of buckets $C$, then $b\in X(\square', a)$ for each $\square'\in C$. Let $\mathcal{C}(a) \subseteq \mathcal{C}$ be the set of active buckets containing $a$. We visit every bucket $\square \in \mathcal{C}(a)$, and check the distances between $a$ and the points in $B_\square\setminus X(\square, a)$. Each time a pair $(a,b)$ with $\phi(a,b) \le 2(1+\varepsilon)r$ is found, we report it and invoke a de-duplication step on $(a,b)$ to make sure that we will not report $(a,b)$ again. Details of the de-duplication procedure will be given later.
When more than $M$ points from $B_\square$ have been checked without finding a pair with distance less than $2(1+\varepsilon)r$ (or if all points in $B_\square$ have been considered), we remove\footnote{In the enumeration, ``remove'' means ``conceptually mark'' instead of changing the data structure itself.} $a$ from $A_\square$, remove $\square$ from $\mathcal{C}(a)$, and skip this bucket. If $a\in \bar{A}_\square$,\footnote{For simplicity, at the beginning of the enumeration procedure we can construct a copy of $\bar{A}_\square$ for each bucket $\square$ so that the original points in $\bar{A}_\square$ remain the same when the next enumeration query is executed. This does not affect the asymptotic complexity of the delay guarantee.} we remove $a$ from $\bar{A}_\square$ and insert another point from $A_\square\setminus \bar{A}_\square$ that we have not visited before, so that the next representative pair of $\square$ (if any) does not contain any point from $A$ that we have already visited in the current enumeration phase. Once all buckets in $\mathcal{C}(a)$ have been visited, we can just pick an arbitrary active bucket in $\mathcal{C}$ with its representative pair $(a',b')$ (it will always be the case that $a' \neq a$), and start the enumeration for $a'$. Finally, we avoid reporting a pair more than once, as follows. Once a pair $(a,b)$ is enumerated, we go over each bucket $\square$ into which both $a,b$ are hashed, and insert $b$ into $X(\square, a)$ to avoid further repeated enumeration. Moreover, if $(a,b)$ is the representative pair of $\square$, we check at most $M$ points from $B_\square$ to determine whether there exists $b'\in B_\square$ such that $\phi(a,b')\leq 2(1+\varepsilon)r$. If such a pair exists, we store it as the new representative pair (with respect to $a$) for $\square$. Otherwise, we remove $a$ from $A_\square$, remove $\square$ from $\mathcal{C}(a)$, and if $a\in \bar{A}_\square$ we update $\bar{A}_\square$ accordingly. \paragraph{Correctness analysis.} The de-duplication procedure guarantees that each pair of points is enumerated at most once. It remains to show that $(1+2\varepsilon)$-approximate enumeration is supported. To prove it, we first point out some important properties of our data structure. \begin{proof}[Proof of Lemma~\ref{lem:proxy-2:Main}] Let $A',B'$ be two sets of $M$ points from $A_\square, B_\square$, respectively. We assume that all pairs of points in $A' \times B'$ have their distances larger than $2(1+\varepsilon)r$. Observe that $\square$ is not a proxy bucket for any pair $(a\in A', b\in B')$. It remains to show that $\square$ is not a proxy bucket for any pair $(a\in A_\square\setminus A', b\in B_\square)$. Assume $b\in B_\square \setminus B'$ (the case is similar if $b\in B'$). If $A' \subseteq \bar{\mathcal{B}}(a,A,(1+\varepsilon)r)$ or $B' \subseteq \bar{\mathcal{B}}(a,B,(1+\varepsilon)r)$, then $\square$ is not a proxy bucket for $(a,b)$. Otherwise, there must exist at least one point $a' \in A'$ as well as $b' \in B'$ such that $\phi(a,a') \le (1+\varepsilon)r$ and $\phi(a,b') \le (1+\varepsilon)r$, so $\phi(a',b') \le \phi(a,a') + \phi(a,b') \le 2(1+\varepsilon)r$. Thus, $(a', b') \in A' \times B'$ is a pair within distance $2(1+\varepsilon)r$, a contradiction. \end{proof} We show that $(1+2\varepsilon)$-approximate enumeration is supported with probability $1-1/n$. It can be easily checked that any pair of points farther than $2(1+\varepsilon)r$ will not be enumerated.
Hence, it suffices to show that all pairs within distance $r$ are enumerated with high probability. From~\cite{gionis1999similarity, har2011geometric, indyk1998approximate} it holds that for $M=O(n^\rho)$, any pair $(a,b)$ with $\phi(a,b)\leq r$ has a proxy bucket with probability $1-1/n$. Let $\square$ be a proxy bucket for the pair $(a,b)$. As implied by Lemma~\ref{lem:proxy-2:Main}, there exist no $M$ points from $A_\square$ (for example $\bar{A}_\square$) and $M$ points from $B_\square$ (for example $\bar{B}_\square$) such that all $M^2$ pairs have their distance larger than $2(1+\varepsilon)r$, so $\square$ is active. Moreover, there exist no $M$ points from $B_\square$ such that all of them have distance at least $2(1+\varepsilon)r$ from $a$, so $\square$ is an active bucket for $a$. Hence, our enumeration algorithm will report $(a,b)$. \paragraph{Complexity analysis.} Recall that $\tau, M=O(n^\rho)$. The data structure uses $O(dn+ n\tau\log n)$ space since we only use linear space with respect to the points in each bucket. The update time is $\O(dM \cdot \tau)$, as there are $\O(\tau)$ buckets to be investigated and it takes $\O(dM)$ time to update the representative pair. After $n/2$ updates we re-build the data structure, so the update time is amortized. The delay is $\O(dM\cdot \tau)$: consider the enumeration for a point $a$. It takes $\O(dM \cdot \tau)$ time to check all buckets and $\O(dM \cdot \tau)$ time for de-duplication. Alternatively, we can insert or delete points from $A\cup B$ without maintaining the sets $\bar{A}_\square, \bar{B}_\square$ for every bucket $\square$. In the enumeration phase, given a bucket $\square$, we can visit $M$ arbitrary points from $A_\square$ and $M$ arbitrary points from $B_\square$ and compute their pairwise distances. If we find no pair $(a\in A_\square, b\in B_\square)$ with $\phi(a,b)\leq 2(1+\varepsilon)r$, then we skip this bucket. Otherwise we report the pair $(a,b)$ and we run the de-duplication procedure. In this case the update time is $\O(dn^\rho)$ but the delay is $O(dn^{3\rho})$. We conclude the following result: \begin{theorem} \label{the:high-non-uniform:Main} Let $A$ and $B$ be two sets of points in $\mathbb{R}^d$, where $|A|+|B|=n$ and let $\varepsilon, r$ be positive parameters. For $\rho=\frac{1}{(1+\varepsilon)^2}+o(1)$, a data structure of $\O(dn+n^{1+\rho})$ size can be constructed in $\O(dn^{1+2\rho})$ time, and updated in $\O(dn^{2\rho})$ amortized time, while supporting $(1+2\varepsilon)$-approximate enumeration for similarity join under the $\ell_2$ metric with $\O(dn^{2\rho})$ delay. Alternatively, a data structure of $\O(dn+n^{1+\rho})$ size can be constructed in $\O(dn^{1+\rho})$ time, and updated in $\O(dn^{\rho})$ amortized time, while supporting $(1+2\varepsilon)$-approximate enumeration with $\O(dn^{3\rho})$ delay. \end{theorem} The same result holds for the Hamming and $\ell_1$ metrics with $\rho=\frac{1}{1+\varepsilon}$. Using~\cite{indyk1998approximate}, for the Hamming metric and $\varepsilon>1$ we can get $M=O(1)$. Skipping the details, we have: \begin{theorem} \label{the:Shigh-non-uniform:Main} Let $A$ and $B$ be two sets of points in $\mathbb{H}^d$, where $|A|+|B|=n$ and let $\varepsilon, r$ be positive parameters. For $\rho=\frac{1}{1+\varepsilon}$, a data structure of $\O(dn+n^{1+\rho})$ size can be built in $\O(dn^{1+\rho})$ time, and updated in $\O(dn^{\rho})$ amortized time, while supporting $(3+2\varepsilon)$-approximate enumeration for similarity join under the Hamming metric with $\O(dn^{\rho})$ delay.
\end{theorem} In Appendix~\ref{sec:lsh}, we give the full description of the algorithms and proofs. We also show that our results can be extended to the case where $r$ is part of the enumeration query. Finally, we show a lower bound relating similarity join to the approximate nearest neighbor query. \section{Thoughts} \stavros{Here I just repeat the text we have in the lsh but arguing that the number of conflicts of a point $p$ is not $O(n^\rho)$, but it is $O(1)$. Again, that follows because the probability that $p, q$ have large distance and collide at the same bucket is $\beta^k=1/n$. For example check this presentation (slide 45) https://graphics.stanford.edu/courses/cs468-06-fall/Slides/aneesh-michael.pdf or even Sariel's book} In general, without this uniform assumption, we need to explore more properties of the LSH family for an efficient index. Our observation is that after checking some pairs of points in one bucket (the specific number of pairs will be determined later), we can safely skip the bucket, since with high probability any join result missed in this bucket will be found in another one. In this way, we avoid spending too much time in one bucket before finding any join result. By choosing $k = O(\log n)$ and $\tau=O(n^{\rho})$, a nice property of this combination of parameters was proved for answering the ANN query, which will be the starting point of our index. \begin{lemma}[\cite{gionis1999similarity}] \label{lem:nnq_2} For a set of $n$ points $P$ in the Hamming space $\mathbb{H}^d$ and a distance threshold $r$, by choosing $k=O(\log n)$ and $\tau=O(n^{\rho})$, the following two properties hold for every point $q \in P$: \begin{itemize} \item Let $p$ be a point such that $\phi(p, q) \leq r$. With probability at least $3/4$, the point $q$ will collide in some bucket with the point $p$; \item Let $P(q, (1+\epsilon)r)$ be the set of points in $P$ with distance more than $(1+ \epsilon)r$ from $q$. With probability at least $3/4$, the total number of distinct points in $P(q, (1+\epsilon)r)$ that will collide in a bucket with $q$ is at most $M=4=O(1)$. \end{itemize} \end{lemma} \subsubsection{Index} We construct $m = \frac{3}{\log(4/3)} \log n$ copies of the index above (choosing $k = O(\log n)$ and $\tau = O(n^{\rho})$), denoted as $\mathbb{I}_1, \mathbb{I}_2, \cdots, \mathbb{I}_m$. Note that each index $\mathbb{I}_j$ has $2^k \cdot \tau$ buckets. An important property of this set of indexes is stated by the following lemma, and its proof is given in Appendix~\ref{appendix:lsh-proof}. \begin{lemma} \label{lem:non-uniform-property_2} With probability at least $1-1/n$, for any pair of points $(a, b) \in A\times B$ with $\phi(a,b) \le r$, there exists an index $\mathbb{I}_j$ such that \begin{itemize} \item $a,b$ will collide in some bucket of $\mathbb{I}_j$; \item the number of points in $A(a, (1+\epsilon)r)$ as well as $B(a, (1+\epsilon)r)$ colliding with $a$ is $M=O(1)$; \item the number of points in $A(b, (1+\epsilon)r)$ as well as $B(b, (1+\epsilon)r)$ colliding with $b$ is $M=O(1)$. \end{itemize} \end{lemma} \begin{proof} Consider any pair of points $(a, b) \in A \times B$ with $\phi(a,b) \le r$ and an arbitrary index constructed as described above. From Lemma~\ref{lem:nnq}, with probability at least $1/4$ there exists a bucket in the index that contains both $a, b$ and the number of collisions of both $a$ and $b$ (with the rest of the points in $A\cup B$) is bounded by $O(1)$.
Let $F_j$ be the event that is true if there is a bucket in $\mathbb{I}_j$ that witnesses the collision of $a, b$ and the number of collisions of both $a, b$ is bounded by $O(1)$. Since $F_i, F_j$ are independent for $i\neq j$, we have $\Pr[\bar{F}_1 \cap \ldots \cap \bar{F}_m]=\Pr[\bar{F}_1]\cdot \ldots \cdot \Pr[\bar{F}_m] \leq (3/4)^{\frac{3}{\log(4/3)}\log n}\leq 1/n^3$. Let $Z$ be the number of pairs with distance at most $r$. We have $Z\leq n^2$. Let $G_i$ be the event which is true if, for the $i$-th pair of points $a', b'$ with distance at most $r$, there is at least one copy of the index such that there exists a bucket that contains both $a', b'$ and the number of collisions of both $a', b'$ is bounded by $O(1)$. Then $\Pr[G_1 \cap \ldots \cap G_Z]=1-\Pr[\bar{G}_1\cup \ldots \cup \bar{G}_Z]\geq 1-\Pr[\bar{G}_1]-\ldots-\Pr[\bar{G}_Z]\geq 1-n^2/n^3 = 1-1/n$. Hence, with high probability, for any pair $a\in A, b\in B$ with distance at most $r$ there will be at least one bucket in the index such that both $a, b$ are contained in the bucket and the number of collisions of both $a, b$ in the bucket is bounded by $M=O(1)$. \end{proof} In plain language, for each join result $(a,b)$, there exists at least one bucket such that $a,b$ will collide and the number of ``noisy'' points (with distance more than $(1+\epsilon)r$ from $a$ or $b$) inside this bucket is also bounded by $O(1)$, since points inside each bucket must be distinct. For simplicity, we refer to such a bucket, as described by Lemma~\ref{lem:non-uniform-property_2}, as a {\em proxy bucket} for the join result $(a,b)$. Later, we will see in the enumeration phase that each join result is reported by only one of its proxy buckets, thus guaranteeing the completeness of the query results. However, an inherent challenge of duplication remains, i.e., de-duplication is necessary if a pair of points has more than one proxy bucket. \stavros{If we run the current algorithm we have, but instead of checking $O(n^\rho)$ points in a bucket when we insert a point or $O(n^{2\rho})$ points in a bucket when we delete a point we just check $O(1)$ points, then we would have the next result.} \begin{theorem} \label{the:high-non-uniform_2} Let $A$ and $B$ be two sets of points in the Hamming space $\mathbb{H}^d$, where $|A|+|B|=n$, and let $\varepsilon > 2, r$ be positive parameters. For $\rho=1/(1+\varepsilon)$, an index of $\O(dn+n^{1+\rho})$ space can be constructed in $\O(dn^{1+\rho})$ time such that all pairs of points within distance $r$ will be reported (with high probability) along with some pairs of points within distance $2(1+\varepsilon)r$, with $\O(dn^{\rho})$ delay. The index can be updated in $\O(dn^{\rho})$ amortized time. \end{theorem} ==================================================== \stavros{The next is an attempt to also improve the $2(1+\varepsilon)$ approximation factor. After arguing as before and having proved Lemma~\ref{lem:non-uniform-property_2}, we can continue as follows before we describe the algorithm.} \begin{lemma} Let $p\in P$ be a point in $\mathbb{H}^d$ and let $P(p, (1+\varepsilon)r)$ be the set of points in $P$ with distance more than $(1+\varepsilon)r$ from $p$. With probability at least $3/4$, the number of points in $P(p,(1+\varepsilon)r)$ that collide with $p$ in any single bucket is at most $2\log (4\tau) + 1=O(\log n)$. \end{lemma} \begin{proof} Let $h$ be one of the $\tau$ hash functions and let $q\in P(p,(1+\varepsilon)r)$. From~\cite{} we have that $\Pr[h(p)=h(q)]\leq 1/n$.
Let $X(q)$ be a random variable which is $1$ if $h(p)=h(q)$, and $0$ otherwise. Let $X=\sum_{q\in P(p, (1+\varepsilon)r)}X(q)$. We have $\mu=\mathbf{E}[X]\leq 1$. We have $\Pr[X\geq 2\log (4\tau) +1]\leq \Pr[X\geq (1+\frac{2\log (4\tau)}{\mu})\mu]$. From the Chernoff inequality\footnote{We use the version of the Chernoff inequality where $\Pr[X\geq (1+\delta)\mu]\leq e^{-\delta^2\mu/(2+\delta)}$, for $\delta\geq 0$.} we have $\Pr[X\geq (1+\frac{2\log (4\tau)}{\mu})\mu]\leq e^{-\frac{2^2\log^2 (4\tau)/\mu}{2+2\log (4\tau)/\mu}}$. Notice that $2+2\log (4\tau)/\mu\leq 4\log (4\tau) /\mu$, so $\Pr[X\geq 2\log (4\tau)+1]\leq e^{-\frac{\log^2 (4\tau)/\mu}{\log (4\tau)/\mu}}=e^{-\log (4\tau)} = \frac{1}{4\tau}$. Let $G_i$ be the event which is true if point $p$ has at most $2\log (4\tau) + 1$ conflicts with points in $P(p,(1+\varepsilon)r)$ at hash function $h_i$. Then $\Pr[\cap_{i=1}^{\tau} G_i]=1-\Pr[\bigcup_{i=1}^{\tau}\bar{G}_i]\geq 1-\frac{\tau}{4\tau}=3/4$. \end{proof} Let $p$ be a point in $P$ and $P(p, (1+\epsilon)r)$ be the set of points in $P$ with distance more than $(1+\varepsilon)r$ from $p$. Let $h$ be one hash function and let $q\in P(p,(1+\varepsilon)r)$. From \cite{} we have that $\Pr[h(p)=h(q)]=\beta^k\leq \frac{1}{n}$. Let $X(q)$ be a random variable which is $1$ if $h(p)=h(q)$, and $0$ otherwise. Let $X=\sum_{q\in P(p, (1+\varepsilon)r)}X(q)$. We have $\mu=\mathbf{E}[X]\leq 1$. Let $c$ be a constant that we will decide later. We have $\Pr[X\geq c\log n+1]\leq \Pr[X\geq (1+\frac{c\log n}{\mu})\mu]$. From the Chernoff inequality we have $\Pr[X\geq (1+\frac{c\log n}{\mu})\mu]\leq e^{-\frac{c^2\log^2 (n)/\mu}{2+c\log (n)/\mu}}$. Notice that $2+c\log (n)/\mu\leq 2c\log (n) /\mu$, so $\Pr[X\geq c\log n+1]\leq e^{-\frac{c\log^2 (n)/\mu}{2\log (n)/\mu}}=e^{-\frac{c}{2}\log n} = \frac{1}{n^{c/2}}$. Hence with probability at least $1-\frac{1}{n^{c/2}}$, the point $p$ will have at most $O(\log n)$ conflicts with points in $P(p, (1+\varepsilon)r)$ in one hash function $h$. Let $G_i$ be the event which is true if point $p$ has $O(\log n)$ conflicts with points in $P(p,(1+\varepsilon)r)$ at hash function $h_i$. Then $\Pr[\cap_{i=1}^{n^{\rho+1}} G_i]=1-\Pr[\bigcup_{i=1}^{n^{\rho+1}}\bar{G}_i]\geq 1-\frac{1}{n^{c/2-\rho-1}}$. (The number of buckets we construct in Lemma~\ref{lem:non-uniform-property_2} is $O(n^{\rho}\log n)$, which is also bounded, for simplicity, by $n^{\rho+1}$ in the bound above.) Let $F_i$ be the event which is true if point $p_i\in P$ has $O(\log n)$ conflicts in all the $O(n^{\rho}\log n)$ hash functions. $\Pr[\cap_{i=1}^{n} F_i]=1-\Pr[\bigcup_{i=1}^{n}\bar{F}_i]\geq 1-\frac{1}{n^{c/2-\rho-2}}$. If we set $c=2(\rho +3)\leq 8$, then with probability at least $1-1/n$ every point $p\in P$ will have at most $8\log n$ conflicts with $P(p, (1+\varepsilon)r)$ in every hash function/bucket. \stavros{Then we could use this result in several ways. For example, when we insert a point $a$ in a bucket $\square$ we could argue that we check all points in $B_\square$ until we find a point within distance $r$ or until there are no points within distance $r$. Notice that according to our proven argument this procedure will take $O(\log n)$ time (in one bucket) with high probability. Similarly, we can argue for deletions. In that way we can say that with high probability the delay/update is $O(dn^{\rho})$ and we report all pairs within distance $r$ with high probability and might report pairs with distance $(1+\varepsilon)r$.
(Notice that now we have a better approximation error and better complexities, but the complexities hold with high probability, which was not the case before. If we want to argue that the delay/update is not randomized, then we can stop the procedure after checking more than $O(\log n)$ elements in $B_\square$ when we insert a new point $a$ in a bucket $\square$ (similarly for deletions). Since we have argued that with high probability it is not possible to have more than $O(\log n)$ conflicts of a point in a bucket, with high probability we will report everything, the delay/update will be deterministic, and the error will be $(1+\varepsilon)$.} \section{Preliminaries} \label{sec:prelim} \mparagraph{Quadtree} A $d$-dimensional quadtree~\cite{har2011geometric, samet1989spatial} over a point set $P$ is a tree data structure $\mathcal{T}$ in which each node $u$ is associated with a box $\square_u$ in $\Re^d$ and each internal node has exactly $2^d$ children. It recursively subdivides the space into $2^d$ equal-size boxes until a box contains at most one point from $P$. Let $\mathcal{S}=\frac{\max_{x,y\in P}\lVert x-y \rVert_2}{\min_{x,y\in P}\lVert x-y \rVert_2}$ be the \emph{spread} of $P$. $\mathcal{T}$ can be constructed in $O(n\log \mathcal{S})$ time, it has $O(n\log\mathcal{S})$ space, and $O(\log\mathcal{S})$ height. \mparagraph{Well Separated Pair Decomposition (WSPD) \cite{callahan1995decomposition, har2006fast, har2011geometric}} Given a set $P$ of $n$ points in $\mathbb{R}^d$ and a parameter $0< \varepsilon < \frac{1}{2}$, a set of $s$ pairs $W=\{(X_1, Y_1), (X_2, Y_2), \cdots, (X_s, Y_s)\}$ is an $\varepsilon$-WSPD if the following conditions hold: (1) for any $i \in \{1,2,\cdots,s\}$, $X_i, Y_i \subseteq P$ and $X_i\cap Y_i=\emptyset$; (2) for each pair of points $x, y \in P$, there exists a unique pair $(X_j, Y_j)\in W$ such that $x \in X_j$ and $y\in Y_j$; (3) $s=O(\varepsilon^{-d}n)$; and (4) for any $i \in \{1,2,\cdots,s\}$, $\max\{\operatorname{diam}(X_i), \operatorname{diam}(Y_i)\} \leq \varepsilon \cdot \phi(X_i, Y_i)$, where $\operatorname{diam}(P) = \max_{x,y\in P} \lVert x-y\rVert_2$ and $\phi(P_1, P_2) = \min_{x \in P_1, y\in P_2} \lVert x -y\rVert_2$. In~\cite{har2011geometric, har2006fast}, a compressed quadtree is used to construct an $\varepsilon$-WSPD efficiently. In particular, an $\varepsilon$-WSPD $W$ can be constructed in $O(n\log n +\varepsilon^{-d}n)$ time such that each pair $(X_i, Y_i)\in W$ is a pair of nodes in a quadtree over the point set $P$. It is also known that for each pair $(X_i, Y_i) \in W$, (i) $\square_{X_i}\cap \square_{Y_i}=\emptyset$, and (ii) $\max\{\operatorname{diam}(\square_{X_i}), \operatorname{diam}(\square_{Y_i})\}\leq \varepsilon \phi(\square_{X_i}, \square_{Y_i})$. Furthermore, given a set $P$ of $n$ points with $O(\operatorname{poly}(n))$ spread and its WSPD $W$ built on a quadtree $\mathcal{T}$, the tree $\mathcal{T}$ has height $\O(1)$ and every node in $\mathcal{T}$ participates in $\O(\varepsilon^{-d})$ pairs of $W$ (see~\cite{har2011geometric} for details). \subsection{Variable Similarity Threshold} \label{sec:wspd} We now describe the data structure when the threshold $r$ is not fixed but instead specified as part of the query. We assume that the spread of $A\cup B$ is polynomially bounded, i.e., $sp(A\cup B)=\frac{\max_{p,q \in A\cup B}\phi(p,q)}{\min_{p,q\in A\cup B}\phi(p,q)}=n^{O(1)}$. We use a quadtree and a well-separated pair decomposition (WSPD) for our data structure.
We describe them briefly here and refer the reader to~\cite{har2011geometric}. \paragraph{Quadtree and WSPD.} A $d$-dimensional quadtree~\cite{har2011geometric, samet1989spatial} over a point set $P$ is a tree data structure $\mathcal{T}$ in which each node $u$ is associated with a hypercube $\square_u$ in $\Re^d$ and each internal node has exactly $2^d$ children. The root is associated with a hypercube containing $P$. For a node $u$, let $P_u=P\cap \square_u$. A node $u$ is a leaf if $|P_u|\leq 1$. The tree recursively subdivides the space into $2^d$ congruent hypercubes until a box contains at most one point from $P$. If $sp(P)=n^{O(1)}$, the height of $\mathcal{T}$ is $O(\log n)$. Given two point sets $A, B\subset \Re^d$, with $|A|+|B|=n$, and a parameter $0< \varepsilon < \frac{1}{2}$, a family of pairs $\mathcal{W}=\{(A_1, B_1), (A_2, B_2), \cdots, (A_s, B_s)\}$ is an $\varepsilon$-WSPD if the following conditions hold: (1) for any $i\leq s$, $A_i\subseteq A$ and $B_i\subseteq B$; (2) for each pair of points $(a,b) \in A\times B$, there exists a unique pair $(A_j, B_j)\in \mathcal{W}$ such that $a \in A_j$ and $b\in B_j$; and (3) for any $i\leq s$, $\max\{\operatorname{diam}(A_i), \operatorname{diam}(B_i)\} \leq \varepsilon \cdot \phi(A_i, B_i)$, where $\operatorname{diam}(X) = \max_{x,y\in X} \phi(x,y)$ and $\phi(X, Y) = \min_{x \in X, y\in Y} \phi(x,y)$ (see Figure~\ref{fig:wspd}). As shown in~\cite{har2011geometric, har2006fast}, if $sp(A\cup B)=n^{O(1)}$, a quadtree $\mathcal{T}$ on $A\cup B$ can be used to construct, in time $O(n\log n + \varepsilon^{-d}n)$, a WSPD $\mathcal{W}$ of size $O(\varepsilon^{-d}n)$ such that each pair $(A_i, B_i)\in \mathcal{W}$ is associated with a pair of nodes $(\square_i, \boxplus_i)$ in $\mathcal{T}$, where $A_i=A\cap \square_i$ and $B_i=B\cap \boxplus_i$. It is also known that for each pair $(A_i, B_i)\in \mathcal{W}$, (i) $\square_i\cap \boxplus_i=\emptyset$, and (ii) $\max\{\operatorname{diam}(\square_i), \operatorname{diam}(\boxplus_i)\}\leq \varepsilon\phi(\square_i, \boxplus_i)$ (see Figure~\ref{fig:wspd}). We will use $\mathcal{W}=\{(\square_1,\boxplus_1), \ldots, (\square_s,\boxplus_s)\}$ to denote the WSPD, with $A_i, B_i$ being implicitly defined from their nodes. Using the techniques in~\cite{callahan1995dealing, fischer2005dynamic}, the quadtree $\mathcal{T}$ and the WSPD $\mathcal{W}$ can be maintained under insertions and deletions of points in $\O(\varepsilon^{-d})$ time. \paragraph{Data structure.} We construct a quadtree $\mathcal{T}$ on $A\cup B$. For each node $u \in \mathcal{T}$, we store a pointer $A_u$ (and $B_u$) to the leftmost leaf of the subtree $\mathcal{T}_u$ that contains a point from $A$ (and $B$). Furthermore, we store a sorted list $A_{\mathcal{T}}$ (and $B_{\mathcal{T}}$) of the leaves that contain points from $A$ (and $B$). We use these pointers and lists to report the points in $\square_u$ with $O(1)$ delay. Using $\mathcal{T}$, we can construct a WSPD $\mathcal{W}=\{(\square_1, \boxplus_1),\ldots, (\square_s, \boxplus_s)\}$, $s=O(\varepsilon^{-d}n)$. For each $i$, let $\Delta_i=\min_{p\in \square_i, q\in\boxplus_i}\phi(p,q)$. We store all pairs $(\square_i, \boxplus_i)$ in a red-black tree $\mathcal{Z}$ using $\Delta_i$ as the key. The data structure has $O(\varepsilon^{-d}n)$ space and $O(\varepsilon^{-d}n\log n)$ construction time.
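As an illustration of this organization, the following minimal sketch (our own naming; a list kept sorted by $\Delta_i$ stands in for the red-black tree $\mathcal{Z}$) maintains the WSPD pairs keyed by their box distances:
\begin{verbatim}
import bisect

# Sketch of the pair store Z: WSPD pairs (box_i, boxplus_i) keyed by
# Delta_i, the minimum distance between the two boxes. A sorted list
# stands in for the red-black tree; identifiers are illustrative.

class PairStore:
    def __init__(self):
        self.keys = []   # Delta_i values, kept sorted
        self.pairs = []  # WSPD pairs, aligned with self.keys

    def insert(self, delta, pair):
        i = bisect.bisect_left(self.keys, delta)
        self.keys.insert(i, delta)
        self.pairs.insert(i, pair)

    def delete(self, delta, pair):
        i = bisect.bisect_left(self.keys, delta)
        while self.pairs[i] != pair:   # scan entries with equal keys
            i += 1
        del self.keys[i]
        del self.pairs[i]
\end{verbatim}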
\paragraph{Update.} After inserting or deleting an input point, the quadtree $\mathcal{T}$ and the WSPD $\mathcal{W}$ can be updated in $\O(\varepsilon^{-d})$ time, following the standard techniques in~\cite{callahan1995dealing, fischer2005dynamic}. As at most $\O(\varepsilon^{-d})$ pairs change, we can update $\mathcal{Z}$ in $\O(\varepsilon^{-d})$ time. Furthermore, we note that there are only $O(1)$ changes in the structure of the quadtree $\mathcal{T}$ and the height of $\mathcal{T}$ is $O(\log n)$, so we can update all necessary pointers $A_u, B_u$ and the sorted lists $A_{\mathcal{T}}, B_{\mathcal{T}}$ in $O(\log n)$ time. \paragraph{Enumeration.} We traverse $\mathcal{Z}$ in order until we reach a pair $(\square_j, \boxplus_j)$ with $\Delta_j>r$. For each pair $(\square_i, \boxplus_i)$ that we traverse, we enumerate $(a,b)\in (A\cap \square_i) \times (B\cap \boxplus_i)$ using the stored pointers and the sorted lists $A_\mathcal{T}, B_\mathcal{T}$. The delay guarantee is $O(1)$. Let $(a, b) \in A \times B$ be a pair with $\phi(a,b)\leq r$. By the definition of the WSPD, there exists a unique pair $(A_i, B_i)\in \mathcal{W}$ such that $a\in A_i$ and $b\in B_i$. Notice that $\phi(\square_i, \boxplus_i) \leq\phi(a,b) \leq r$. Thus, all join results across $A_i, B_i$ will be reported, including $(a,b)$. Next, let $(A_i, B_i)$ be a pair that is found by the enumeration procedure in $\mathcal{Z}$, with $\phi(\square_i, \boxplus_i)\leq r$. For any pair of points $x \in \square_i, y \in \boxplus_i$, we have $\phi(x,y) \le \phi(\square_i,\boxplus_i)+\operatorname{diam}(\square_i)+\operatorname{diam}(\boxplus_i) \le (1+2\cdot \frac{\varepsilon}{2}) \cdot \phi(\square_i,\boxplus_i) \le (1+\varepsilon)r$; thus any pair of points with distance strictly larger than $(1+\varepsilon)r$ will not be reported. \begin{theorem} Let $A, B$ be two sets of points in $\mathbb{R}^d$ for constant $d$, with $O(\operatorname{poly}(n))$ spread and $|A| + |B| = n$. A data structure of $O(\varepsilon^{-d}n)$ space can be built in $\O(\varepsilon^{-d}n)$ time and updated in $\O(\varepsilon^{-d})$ time, while supporting $\varepsilon$-approximate enumeration for similarity join under any $\ell_p$ metric with $O(1)$ delay, for any query similarity threshold $r$. \end{theorem}
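For illustration, the enumeration described above can be sketched as follows (a simplified sketch with our own naming; \texttt{points\_in} stands in for the quadtree pointers and leaf lists that report the points of a box with $O(1)$ delay):
\begin{verbatim}
# Sketch of the enumeration query: scan the WSPD pairs in increasing
# Delta_i and report (A in box_i) x (B in boxplus_i) until the key
# exceeds r; every reported pair has distance at most (1+eps)r.

def enumerate_join(store, r, points_in):
    for delta, (box, boxplus) in zip(store.keys, store.pairs):
        if delta > r:
            break                       # all later pairs are farther
        for a in points_in(box, "A"):   # points of A inside box_i
            for b in points_in(boxplus, "B"):
                yield (a, b)
\end{verbatim}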
{ "timestamp": "2021-05-06T02:07:38", "yymm": "2105", "arxiv_id": "2105.01818", "language": "en", "url": "https://arxiv.org/abs/2105.01818" }
\section{\textbf{Introduction}} Fractional calculus (FC) and fractal geometry (FG) have become rapidly growing fields in both theory and applications. Since random fractals are typical examples of highly irregular functions, fractional calculus is a natural tool for analyzing such functions. Tatom gave a general relation between fractional calculus and fractal functions in \cite{FB}. FG is better suited than classical geometry for studying irregular sets. The theory of FG is presented in \cite{MF}. For various notions and definitions of fractional integrals and derivatives, the reader may refer to \cite{KIL,Samko}.\\ The box dimension plays an important role in studying the smoothness of an irregular function. A connection between FC and fractal dimensions can be found in \cite{KIM,L2,Liang11,L3,WU2}. Liang \cite{L1} proved that the box dimension of a function which is of bounded variation and continuous on $[0,1]$ is $1$, and also that the box dimension of its Riemann-Liouville fractional integral $I^{\nu} g$ on $[0,1]$ is $1$, where $$I^\nu g(x)= \frac{1}{\Gamma(\nu)}\int_{0}^x(x-s)^{\nu-1}g(s)ds.$$ Liang \cite{Liang11} investigated the fractal dimension of the fractional integral of Riemann-Liouville type of a continuous function having box dimension $1$. We try to establish more general results for the fractional integral of mixed Katugampola type. The box dimension of a bivariate function which is of bounded variation in the Arzel\'{a} sense and continuous on $[a,b]\times[c,d]$ has been investigated in \cite{V1}: it has been shown that the box dimension of such a function is $2$. Also, some examples of $2$-dimensional functions are given which are not of bounded variation. Additionally, they have proved that the box dimension of the fractional integral of mixed Riemann-Liouville type of a continuous function which is of bounded variation in the Arzel\'{a} sense is $2.$ Analogous results can be seen for the mixed Katugampola fractional integral $(\mathfrak{I}^{\alpha}f)$ in \cite{V2}, where $$ (\mathfrak{I}^{\alpha}f)(x,y)=\frac{(\rho_1+1)^{1-\alpha_1}(\rho_2+1)^{1-\alpha_2}}{\Gamma (\alpha_1) \Gamma (\alpha_2)} \int_a ^x \int_c ^y (x^{\rho_1+1}-s^{\rho_1+1})^{\alpha_1-1} (y^{\rho_2+1}-t^{\rho_2+1})^{\alpha_2-1}s^{\rho_1}t^{\rho_2}f(s,t)dsdt. $$ The fractional integral of mixed Katugampola type is the unification of the fractional integral of mixed Riemann-Liouville type and the fractional integral of mixed Hadamard type. The above results are analytical in the sense that they rely on the bounded variation property in the Arzel\'{a} sense. In this work we do not require this condition. We have seen that the bounded variation property plays an important role for bivariate functions. We emphasize that passing from the bivariate to the univariate case is easy, but the converse is not, because the bivariate case requires additional notions of bounded variation, such as those of Arzel\'{a}, Pierpont, and Hahn; for more details, we refer to \cite{CK3}. Sometimes it is also difficult to maintain continuity in the bivariate case. From the above discussion, a natural question arises: what is the box dimension of the mixed Katugampola fractional integral of a $2$-dimensional continuous function?\\ In this article, we prove that the box dimension of the mixed Katugampola fractional integral of a $2$-dimensional continuous function is also $2$.
Furthermore, we give a similar result for the mixed Hadamard fractional integral. \section{\textbf{Preliminaries}} In this section, we give the definitions of the fractional integrals and the terminology required in this paper. \textit{2.1. Mixed Katugampola Fractional Integral:} \begin{definition} Let $f$ be a function defined on a closed rectangle $[a,b] \times [c,d]$ with $a\geq 0,c\geq 0.$ Assuming that the following integral exists, the mixed Katugampola fractional integral of $f$ is defined by $$ (\mathfrak{I}^{\alpha}f)(x,y)=\frac{(\rho_1+1)^{1-\alpha_1}(\rho_2+1)^{1-\alpha_2}}{\Gamma (\alpha_1) \Gamma (\alpha_2)} \int_a ^x \int_c ^y (x^{\rho_1+1}-s^{\rho_1+1})^{\alpha_1-1} (y^{\rho_2+1}-t^{\rho_2+1})^{\alpha_2-1}s^{\rho_1}t^{\rho_2}f(s,t)dsdt ,$$ where $\alpha = ( \alpha_1, \alpha_2 )$ with $ \alpha_1 >0 , \alpha_2 >0$ and $(\rho_1,\rho_2)\neq(-1,-1).$ \end{definition} \textit{2.2. Mixed Hadamard Fractional Integral:}\\\\ By using L'H\^{o}pital's rule and letting $\rho_1,\rho_2 \to -1^+$, the mixed Katugampola fractional integral reduces to the mixed Hadamard fractional integral. \begin{definition}\label{Def2} Let $f$ be a function defined on a closed rectangle $[a,b] \times [c,d]$ with $a\geq 0,c\geq 0.$ Assuming that the following integral exists, the mixed Hadamard fractional integral of $f$ is defined by $$ (\mathfrak{I}^{\gamma}f)(x,y)=\frac{1}{\Gamma (\gamma_1) \Gamma (\gamma_2)} \int_a ^x \int_c ^y (\log\frac{x}{u})^{\gamma_1-1} (\log\frac{y}{v})^{\gamma_2-1}\frac{f(u,v)}{uv} dudv ,$$ where $\gamma = ( \gamma_1, \gamma_2 )$ with $ \gamma_1 >0 , \gamma_2 >0.$ \end{definition} The reader is referred to \cite{V2} for the above definitions of fractional integrals.\\ \textit{2.3. Range of $f$:} \begin{definition} Let $A=[a,b]\times [c,d]$ be a rectangle. For a function $f:A\to \mathbb{R}$, the maximum range of $f$ over $A$ is given by $$ R_f[A]:=\sup_{(t,u),(x,y)\in A}\lvert f(t,u)-f(x,y)\rvert.$$ \end{definition} \textit{2.4. Box Dimension:}\\\begin{definition} Let $S\neq \emptyset$ be a bounded subset of $\mathbb{R}^n$, and let $N_{\delta}(S)$ denote the smallest number of sets of diameter at most $\delta$ that can cover $S.$ Then \begin{equation} \underline{\dim}_B(S)=\mathop{\underline{\lim}}_{\delta \to 0}\frac{\log N_\delta(S)}{-\log\delta}~~~~~\text{(lower box dimension)} \end{equation} and \begin{equation} \overline{\dim}_B(S)=\overline{\lim_{\delta \to 0}}\frac{\log N_\delta(S)}{-\log\delta}~~~~~\text{(upper box dimension)}. \end{equation} If $\underline{\dim}_B(S)=\overline{\dim}_B(S),$ the common value is called the box dimension of $S.$ That is, \begin{equation*} \dim_B(S)=\lim_{\delta \to 0}\frac{\log N_\delta(S)}{-\log\delta}. \end{equation*} \end{definition} Let $C(I\times J)$ be the set of continuous functions on $I\times J=[0,1]\times [0,1]$. Let $C$ denote an absolute positive constant whose value may differ between occurrences, even within the same line. Sometimes we use the term fractional integral of mixed Katugampola type in place of mixed Katugampola fractional integral. \section{\textbf{Main Results}} In this section, we first give the following lemmas, which act as a prelude to our main theorem.
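Before turning to the lemmas, we record a simple numerical illustration of the box-counting definition above (a sketch under the standard simplification that the covering sets are the cells of an axis-aligned grid of side $\delta$ and that the graph is sampled only at grid points; all names are ours):
\begin{verbatim}
import math

# Numerical sketch of box counting for the graph of f on [0,1]^2:
# count the delta-cubes of an axis-aligned grid met by sampled graph
# points, then read off log N_delta / (-log delta) as delta shrinks.

def count_boxes(f, delta):
    m = math.ceil(1.0 / delta)
    boxes = set()
    for i in range(m + 1):
        for j in range(m + 1):
            x, y = min(i * delta, 1.0), min(j * delta, 1.0)
            boxes.add((int(x / delta), int(y / delta),
                       int(f(x, y) / delta)))
    return len(boxes)

f = lambda x, y: x * y   # a smooth function: the ratio tends to 2
for k in range(2, 7):
    delta = 2.0 ** (-k)
    n = count_boxes(f, delta)
    print(delta, n, math.log(n) / (-math.log(delta)))
\end{verbatim}
For a continuous function whose graph has box dimension $2$, such as this smooth example, the printed ratios approach $2$ as $\delta\to 0$.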
\begin{lemma}\label{lmm} Let $f:[0,1]\times [0,1] \to \mathbb{R}$ be continuous and let $0<\delta<1$ and $m,n \in \mathbb{N}$ with $\frac{1}{\delta} <m,n<1+\frac{1}{\delta}$. If $N_{\delta}(Gr(f))$ denotes the number of $\delta$-cubes that intersect the graph $Gr(f)$ of the function $f$, then \begin{equation} \sum_{j=1}^n \sum_{i=1}^m \max \left \{\frac{R_f[A_{ij}]}{\delta},1 \right \} \leq N_\delta(Gr(f))\leq 2mn+\frac{1}{\delta} \sum_{j=1}^n \sum_{i=1}^m R_f[A_{ij}], \end{equation} where $A_{ij}$ is the $(i,j)$-th cell corresponding to the net under consideration. \end{lemma} \begin{proof} If $f(x,y)$ is continuous on $I\times J$, the number of cubes of side $\delta$ in the part above $A_{ij}$ that intersect $Gr(f,I\times J)$ is at least $$\max \left \{\frac{R_f[A_{ij}]}{\delta},1 \right \}$$ and at most $$2+\frac{R_f[A_{ij}]}{\delta}.$$ Summing over all such parts, we get the required result. \end{proof} \begin{lemma} Let $f(x,y)\in C(I\times J)$ and $0<\alpha_1<1,~0<\alpha_2<1.$ If $h_1>0,h_2>0$ and $x+h_1\le 1,y+h_2\le 1,$ then \begin{dmath*} (\mathfrak{I}^{\alpha}f)(x+h_1,y+h_2)-(\mathfrak{I}^{\alpha}f)(x,y)=\\\frac{(\rho_1+1)^{-\alpha_1}(\rho_2+1)^{-\alpha_2}}{\Gamma (\alpha_1) \Gamma (\alpha_2)} \int_0^1 \int_0^1 (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1}.\\\left[\left((x+h_1)^{\rho_1+1}\right)^{\alpha_1}\left((y+h_2)^{\rho_2+1}\right)^{\alpha_2}f\left((x+h_1)s^{\frac{1}{\rho_1+1}},(y+h_2)t^{\frac{1}{\rho_2+1}}\right)\\-\left(x^{\rho_1+1}\right)^{\alpha_1}\left(y^{\rho_2+1}\right)^{\alpha_2}f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right)\right]dsdt. \end{dmath*} \end{lemma} \begin{proof} Under the conditions of the lemma, \begin{dmath}\label{2.2} (\mathfrak{I}^{\alpha}f)(x+h_1,y+h_2)-(\mathfrak{I}^{\alpha}f)(x,y)=\\\frac{(\rho_1+1)^{1-\alpha_1}(\rho_2+1)^{1-\alpha_2}}{\Gamma (\alpha_1) \Gamma (\alpha_2)} \int_0 ^{x+h_1} \int_0 ^{y+h_2} ((x+h_1)^{\rho_1+1}-s^{\rho_1+1})^{\alpha_1-1} ((y+h_2)^{\rho_2+1}-t^{\rho_2+1})^{\alpha_2-1}.\\s^{\rho_1}t^{\rho_2}f(s,t)dsdt\\ -\frac{(\rho_1+1)^{1-\alpha_1}(\rho_2+1)^{1-\alpha_2}}{\Gamma (\alpha_1) \Gamma (\alpha_2)} \int_0 ^x \int_0 ^y (x^{\rho_1+1}-s^{\rho_1+1})^{\alpha_1-1} (y^{\rho_2+1}-t^{\rho_2+1})^{\alpha_2-1}s^{\rho_1}t^{\rho_2}f(s,t)dsdt \end{dmath} We apply the change of variables $$\left(\frac{s}{x+h_1}\right)^{\rho_1+1}=u \quad\text{and}\quad \left(\frac{t}{y+h_2}\right)^{\rho_2+1}=v.$$ Then $$dsdt =|J|dudv,$$ where $$ J= \det\begin{bmatrix} \frac{\partial s}{\partial u} & \frac{\partial s}{\partial v}\\ \frac{\partial t}{\partial u} & \frac{\partial t}{\partial v} \end{bmatrix} =\frac{(x+h_1)^{\rho_1+1}(y+h_2)^{\rho_2+1}}{(\rho_1+1)(\rho_2+1)s^{\rho_1}t^{\rho_2}}.$$ Thus, we have \begin{dmath*} \frac{(\rho_1+1)^{1-\alpha_1}(\rho_2+1)^{1-\alpha_2}}{\Gamma (\alpha_1) \Gamma (\alpha_2)} \int_0 ^{x+h_1} \int_0 ^{y+h_2} ((x+h_1)^{\rho_1+1}-s^{\rho_1+1})^{\alpha_1-1} ((y+h_2)^{\rho_2+1}-t^{\rho_2+1})^{\alpha_2-1}.\\s^{\rho_1}t^{\rho_2}f(s,t)dsdt\\=\frac{(\rho_1+1)^{-\alpha_1}(\rho_2+1)^{-\alpha_2}}{\Gamma (\alpha_1) \Gamma (\alpha_2)} \int_0^1 \int_0^1 (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1} \left((x+h_1)^{\rho_1+1}\right)^{\alpha_1}\left((y+h_2)^{\rho_2+1}\right)^{\alpha_2}.\\f\left((x+h_1)s^{\frac{1}{\rho_1+1}},(y+h_2)t^{\frac{1}{\rho_2+1}}\right)dsdt.
\end{dmath*} Similarly, \begin{dmath*} \frac{(\rho_1+1)^{1-\alpha_1}(\rho_2+1)^{1-\alpha_2}}{\Gamma (\alpha_1) \Gamma (\alpha_2)} \int_0 ^x \int_0 ^y (x^{\rho_1+1}-s^{\rho_1+1})^{\alpha_1-1} (y^{\rho_2+1}-t^{\rho_2+1})^{\alpha_2-1}s^{\rho_1}t^{\rho_2}f(s,t)dsdt=\\\frac{(\rho_1+1)^{-\alpha_1}(\rho_2+1)^{-\alpha_2}}{\Gamma (\alpha_1) \Gamma (\alpha_2)} \int_0^1 \int_0^1 (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1} \left(x^{\rho_1+1}\right)^{\alpha_1}\left(y^{\rho_2+1}\right)^{\alpha_2} f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right)dsdt. \end{dmath*} Consequently, we get the desired result by using these two values in Equation \ref{2.2}. \end{proof} Now, we will establish our main result. \begin{theorem}\label{Thmain} Let $f(x,y)\in C(I\times J)$ be a non-negative function and let $0<\alpha_1<1,~0<\alpha_2<1,~\rho_1<-1,~\rho_2<-1.$\\ If \begin{equation}\label{11} \dim_B Gr(f, I\times J)=2, \end{equation} then the box dimension of the mixed Katugampola fractional integral of $f(x,y)$ of order $\alpha=(\alpha_1,\alpha_2)$ exists and is equal to $2$ on $I\times J$, that is, \begin{equation}\label{eq12} \dim_B Gr(\mathfrak{I}^{\alpha}f,I\times J)=2. \end{equation} \end{theorem} \begin{proof} Since $f(x,y)\in C(I\times J)$, $(\mathfrak{I}^{\alpha}f)(x,y)$ is also continuous on $I\times J$ (from Theorem 3.4 in \cite{V2}). From the definition of the box dimension, we get \begin{equation}\label{12} \underline{\dim}_B Gr(\mathfrak{I}^{\alpha}f,I\times J) \geq 2. \end{equation} To prove Equation \ref{eq12}, we have to show Inequality \ref{13}: \begin{equation}\label{13} \overline{\dim}_B Gr(\mathfrak{I}^{\alpha}f,I\times J) \leq 2. \end{equation} Suppose that $0<\delta<\frac{1}{2}$, $ \frac{1}{\delta} <m,n<1+\frac{1}{\delta}$, and $N_{\delta}(Gr(f))$ is the number of $\delta$-cubes that intersect $Gr(f)$. From Equation \ref{11}, it holds that \begin{equation*} \lim_{\delta \to 0} \frac{\log N_{\delta}(Gr(f))}{-\log\delta}=2. \end{equation*} Let $N_{\delta}(Gr(\mathfrak{I}^\alpha f))$ be the number of $\delta$-cubes that intersect $Gr(\mathfrak{I}^\alpha f)$. Thus, Inequality \ref{13} can be written as \begin{equation}\label{14} \overline{\lim_{\delta \to 0} } \frac{\log N_{\delta}(Gr(\mathfrak{I}^\alpha f))}{-\log\delta} \leq 2. \end{equation} Now, we have to prove Inequality \ref{14}.\\ For $0<\delta<\frac{1}{2}$ and $ \frac{1}{\delta} <m,n<1+\frac{1}{\delta},$ let $i$ and $j$ be non-negative integers such that $0\le i \le m$, $0\le j \le n$.
Then \begin{dmath*} \left| \frac{(\rho_1+1)^{-\alpha_1}(\rho_2+1)^{-\alpha_2}}{\Gamma (\alpha_1)\Gamma (\alpha_2)} \right| R_{\mathfrak{I}^\alpha f}[A_{ij}]=\sup _{(x+h_1,y+h_2),(x,y)\in A_{ij}} \lvert (\mathfrak{I}^\alpha f)(x+h_1,y+h_2)-(\mathfrak{I}^\alpha f)(x,y)\rvert, \end{dmath*} where $A_{ij}=[i\delta,(i+1)\delta]\times [j\delta,(j+1)\delta].$\\ Here, \begin{dmath*} \lvert (\mathfrak{I}^\alpha f)(x+h_1,y+h_2)-(\mathfrak{I}^\alpha f)(x,y)\rvert=\left| \int_0^1 \int_0^1 (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1} \left((x+h_1)^{\rho_1+1}\right)^{\alpha_1}\left((y+h_2)^{\rho_2+1}\right)^{\alpha_2}.\\f\left((x+h_1)s^{\frac{1}{\rho_1+1}},(y+h_2)t^{\frac{1}{\rho_2+1}}\right)dsdt\\-\int_0^1 \int_0^1 (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1}\left((x+h_1)^{\rho_1+1}\right)^{\alpha_1}\left((y+h_2)^{\rho_2+1}\right)^{\alpha_2}f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right)dsdt\\ +\int_0^1 \int_0^1 (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1} \left((x+h_1)^{\rho_1+1}\right)^{\alpha_1}\left((y+h_2)^{\rho_2+1}\right)^{\alpha_2}f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right)dsdt\\ - \int_0^1 \int_0^1 (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1} \left(x^{\rho_1+1}\right)^{\alpha_1}\left(y^{\rho_2+1}\right)^{\alpha_2} f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right)dsdt \right |\\ \le \left |\int_0^1 \int_0^1 (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1} \left((x+h_1)^{\rho_1+1}\right)^{\alpha_1}\left((y+h_2)^{\rho_2+1}\right)^{\alpha_2}. \left[f\left((x+h_1)s^{\frac{1}{\rho_1+1}},(y+h_2)t^{\frac{1}{\rho_2+1}}\right)-f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right) \right] dsdt\right |\\ +\int_0^1 \int_0^1 (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1}f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right).\\\left[ \left((x+h_1)^{\rho_1+1}\right)^{\alpha_1}\left((y+h_2)^{\rho_2+1}\right)^{\alpha_2}-\left(x^{\rho_1+1}\right)^{\alpha_1}\left(y^{\rho_2+1}\right)^{\alpha_2} \right]dsdt \leq \left((x+h_1)^{\rho_1+1}\right)^{\alpha_1}\left((y+h_2)^{\rho_2+1}\right)^{\alpha_2} \left |\int_0^1 \int_0^1 (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1}.\\ \left[f\left((x+h_1)s^{\frac{1}{\rho_1+1}},(y+h_2)t^{\frac{1}{\rho_2+1}}\right)-f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right) \right] dsdt \right |\\ +\int_0^1 \int_0^1 (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1}f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right).\\\left[ \left((x+h_1)^{\rho_1+1}\right)^{\alpha_1}\left((y+h_2)^{\rho_2+1}\right)^{\alpha_2}-\left(x^{\rho_1+1}\right)^{\alpha_1}\left(y^{\rho_2+1}\right)^{\alpha_2} \right]dsdt .
\end{dmath*} Let $i\geq 1,j\geq 1.$ On the one hand, \begin{dmath*} \left |\int_0^1 \int_0^1 (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1} \left[f\left((x+h_1)s^{\frac{1}{\rho_1+1}},(y+h_2)t^{\frac{1}{\rho_2+1}}\right)-f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right) \right] dsdt \right |\\ =\Bigl| \int_0^{\frac{1}{i+1}} \int_0^{\frac{1}{j+1}} (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1} \left[f\left((x+h_1)s^{\frac{1}{\rho_1+1}},(y+h_2)t^{\frac{1}{\rho_2+1}}\right)-f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right) \right] dsdt \Bigr|\\ +\sum_{l=1}^j \Bigl| \int_0^{\frac{1}{i+1}} \int_{\frac{l}{j+1}}^{\frac{l+1}{j+1}} (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1}.\\ \left[f\left((x+h_1)s^{\frac{1}{\rho_1+1}},(y+h_2)t^{\frac{1}{\rho_2+1}}\right)-f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right) \right] dsdt \Bigr|\\ +\sum_{r=1}^i\Bigl| \int_{\frac{r}{i+1}}^{\frac{r+1}{i+1}} \int_0^{\frac{1}{j+1}} (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1}.\\ \left[f\left((x+h_1)s^{\frac{1}{\rho_1+1}},(y+h_2)t^{\frac{1}{\rho_2+1}}\right)-f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right) \right] dsdt \Bigr|\\ +\sum_{r=1}^i \sum_{l=1}^j\Bigl| \int_{\frac{r}{i+1}}^{\frac{r+1}{i+1}} \int_{\frac{l}{j+1}}^{\frac{l+1}{j+1}} (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1}.\\ \left[f\left((x+h_1)s^{\frac{1}{\rho_1+1}},(y+h_2)t^{\frac{1}{\rho_2+1}}\right)-f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right) \right] dsdt \Bigr| \leq \frac{1}{(i+1)(j+1)}R_f\left[[0,\delta]\times [0,\delta]\right]\\ +\sum_{l=1}^j \frac{1}{(i+1)(j+1)}\left( R_f\left[[0,\delta]\times [(l-1)\delta,l\delta]\right]+R_f\left[[0,\delta]\times [l\delta,(l+1)\delta]\right]\right)\\ +\sum_{r=1}^i\frac{1}{(i+1)(j+1)}(R_f\left[[(r-1)\delta,r\delta]\times [0,\delta]\right]+R_f\left[[r\delta,(r+1)\delta]\times [0,\delta]\right])\\ +\sum_{r=1}^i \sum_{l=1}^j\frac{1}{(i+1)(j+1)}(R_f\left[[(r-1)\delta,r\delta]\times [(l-1)\delta,l\delta]\right]+R_f\left[[(r-1)\delta,r\delta]\times [l\delta,(l+1)\delta]\right]\\+R_f\left[[r\delta,(r+1)\delta]\times [(l-1)\delta,l\delta]\right]+R_f\left[[r\delta,(r+1)\delta]\times [l\delta,(l+1)\delta]\right]). \end{dmath*} By using Bernoulli's inequality $(1+u)^{r'}\leq 1+r'u$ for $0\leq r' \leq 1$ and $u\geq -1$, we can see that $$\int_0^{\frac{1}{i+1}} \int_0^{\frac{1}{j+1}} (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1}dsdt \le \frac{1}{(i+1)(j+1)}. $$\\ On the other hand, for a suitable constant $C$, we have \begin{dmath*} \int_0^1 \int_0^1 (1-s)^{\alpha_1-1}(1-t)^{\alpha_2-1}f\left(xs^{\frac{1}{\rho_1+1}},yt^{\frac{1}{\rho_2+1}}\right)\left[ \left((x+h_1)^{\rho_1+1}\right)^{\alpha_1}\left((y+h_2)^{\rho_2+1}\right)^{\alpha_2}-\left(x^{\rho_1+1}\right)^{\alpha_1}\left(y^{\rho_2+1}\right)^{\alpha_2} \right]dsdt\\ \leq \frac{ C \max _{0\leq (x,y) \leq 1} f(x,y)}{\alpha_1\alpha_2}.
\end{dmath*} From Lemma \ref{lmm}, we have \begin{dmath*} N_\delta(Gr(\mathfrak{I}^\alpha f))\leq 2mn+\frac{1}{\delta} \sum_{j=1}^n \sum_{i=1}^m R_{\mathfrak{I}^\alpha f}[A_{ij}] \leq 2mn+\frac{1}{\delta} \sum_{j=1}^n \sum_{i=1}^m \left(\frac{1}{(i+1)(j+1)}R_f\left[[0,\delta]\times [0,\delta]\right]\\ +\sum_{l=1}^j \frac{1}{(i+1)(j+1)}\left( R_f\left[[0,\delta]\times [(l-1)\delta,l\delta]\right]+R_f\left[[0,\delta]\times [l\delta,(l+1)\delta]\right]\right)\\ +\sum_{r=1}^i\frac{1}{(i+1)(j+1)}(R_f\left[[(r-1)\delta,r\delta]\times [0,\delta]\right]+R_f\left[[r\delta,(r+1)\delta]\times [0,\delta]\right])\\ +\sum_{r=1}^i \sum_{l=1}^j\frac{1}{(i+1)(j+1)}(R_f\left[[(r-1)\delta,r\delta]\times [(l-1)\delta,l\delta]\right]+R_f\left[[(r-1)\delta,r\delta]\times [l\delta,(l+1)\delta]\right]+R_f\left[[r\delta,(r+1)\delta]\times [(l-1)\delta,l\delta]\right]+R_f\left[[r\delta,(r+1)\delta]\times [l\delta,(l+1)\delta]\right]) \right)\\ +\frac{1}{\delta} \sum_{j=1}^n \sum_{i=1}^m \frac{ C \max _{0\leq (x,y) \leq 1} f(x,y)}{\alpha_1\alpha_2} \leq \frac{1}{\delta}\left(C+ \sum_{j=1}^n \sum_{i=1}^m \left( \frac{1}{(i+1)(j+1)}R_f\left[[0,\delta]\times [0,\delta]\right]\\ +\sum_{l=1}^j \frac{1}{(i+1)(j+1)}\left( R_f\left[[0,\delta]\times [(l-1)\delta,l\delta]\right]+R_f\left[[0,\delta]\times [l\delta,(l+1)\delta]\right]\right)\\ +\sum_{r=1}^i\frac{1}{(i+1)(j+1)}(R_f\left[[(r-1)\delta,r\delta]\times [0,\delta]\right]+R_f\left[[r\delta,(r+1)\delta]\times [0,\delta]\right])\\ +\sum_{r=1}^i \sum_{l=1}^j\frac{1}{(i+1)(j+1)}(R_f\left[[(r-1)\delta,r\delta]\times [(l-1)\delta,l\delta]\right]+ R_f\left[[(r-1)\delta,r\delta]\times\\ [l\delta,(l+1)\delta]\right]+R_f\left[[r\delta,(r+1)\delta]\times [(l-1)\delta,l\delta]\right]+R_f\left[[r\delta,(r+1)\delta]\times [l\delta,(l+1)\delta]\right]) \right)\right) \leq\frac{C}{\delta} \left( \sum_{j=0}^n \sum_{i=0}^m \frac{1}{(i+1)(j+1)} \right)\left( \sum_{j=1}^n \sum_{i=1}^m R_f[A_{ij}] \right) \leq \frac{C}{\delta} (\log m) (\log n) \sum_{j=0}^n \sum_{i=0}^m R_f[A_{ij}] \leq C (\log m) (\log n) N_\delta(Gr(f)). \end{dmath*} Therefore, \begin{dmath*} \frac{\log N_\delta(Gr(\mathfrak{I}^\alpha f))}{-\log \delta}\leq \frac{\log \left \{ C (\log m) (\log n) N_\delta(Gr(f))\right\}}{-\log \delta} \leq \frac{\log C}{-\log \delta}+\frac{\log(\log m)}{-\log \delta}+\frac{\log(\log n)}{-\log \delta}+\frac{\log N_\delta(Gr(f))}{-\log \delta}. \end{dmath*} So, we obtain \begin{dmath*} \overline{\dim}_B Gr(\mathfrak{I}^{\alpha}f,I\times J)=\overline{\lim_{\delta \to 0}}\frac{\log N_\delta(Gr(\mathfrak{I}^\alpha f))}{-\log \delta} \leq \overline{\lim_{\delta \to 0}} \left( \frac{\log C}{-\log \delta}+\frac{\log(\log m)}{-\log \delta}+\frac{\log(\log n)}{-\log \delta}+\frac{\log N_\delta(Gr(f))}{-\log \delta}\right) \leq \overline{\lim_{\delta \to 0}}\frac{\log N_\delta(Gr(f))}{-\log \delta} =\lim_{\delta \to 0}\frac{\log N_\delta(Gr(f))}{-\log \delta}=2. \end{dmath*} So, Inequality \ref{14} holds. From Inequalities \ref{12} and \ref{14}, we get Equation \ref{eq12}. \end{proof} \begin{corollary} Let $0<\alpha_1<1,~0<\alpha_2<1$, ~$\rho_1<-1,~\rho_2<-1$ and let $f$ be a continuous function of bounded variation in the Arzel\'{a} sense on $[0,1]\times[0,1].$ Then $$\dim_BGr(\mathfrak{I}^{\alpha}f)=2.$$ \end{corollary} \begin{proof} From Lemma 3.7 in \cite{V2}, for a function $f$ which is continuous and of bounded variation in the Arzel\'{a} sense on $[0,1]\times[0,1]$, we have $$\dim_BGr(f)=2.$$ So, from Theorem \ref{Thmain}, we obtain $$\dim_BGr(\mathfrak{I}^{\alpha}f)=2.$$ This completes the proof.
\end{proof} \begin{remark} In \cite{V2}, Verma and Viswanathan proved that the fractional integral of mixed Katugampola type of a function of bounded variation is again of bounded variation in the Arzel\'{a} sense. Using this result, they deduced that the box dimension of the fractional integral of mixed Katugampola type is $2$. Their results are more analytical in nature. In contrast, we have proved that if $f$ is a continuous function having box dimension $2$, then the box dimension of the fractional integral of mixed Katugampola type of $f$ is also $2.$ So, our results are more dimensional in nature. \end{remark} \begin{theorem}\label{thH} Let $f(x,y)\in C(I\times J)$ be a non-negative function and let $0<\gamma_1<1,~0<\gamma_2<1.$\\ If \begin{equation} \dim_B Gr(f, I\times J)=2, \end{equation} then the box dimension of the mixed Hadamard fractional integral of $f(x,y)$ of order $\gamma=(\gamma_1,\gamma_2)$ exists and is equal to $2$ on $I\times J$, that is, \begin{equation} \dim_B Gr(\mathfrak{I}^{\gamma}f,I\times J)=2. \end{equation} \end{theorem} The proof of Theorem \ref{thH} is analogous to that of Theorem \ref{Thmain}. \begin{lemma} \cite{V1} \label{lmV2} Let $h:[c,d] \to \mathbb{R}$ be a continuous function and let $a<b$. Define the set $H=\{(x,y,h(y)):x\in [a,b], y \in[c,d]\}$. Then, $\overline{\dim}_B(H) \leq \overline{\dim}_B(Gr(h))+1.$ \end{lemma} \begin{remark}\label{rmk1} Let $g:[a,b] \to \mathbb{R}$ and $h:[c,d] \to \mathbb{R}$ be two continuous maps. Now, define $g_1,g_2:[a,b]\times [c,d]\to \mathbb{R}$ such that $$g_1(x,y)=g(x)+h(y), ~~\text{and}~~~g_2(x,y)=g(x)h(y).$$ By using Lemma \ref{lmV2}, we have $\overline{\dim}_BGr(g_1)\le \overline{\dim}_BGr(h)+1$ and $\overline{\dim}_BGr(g_2)\le \overline{\dim}_BGr(h)+1.$ \end{remark} In the following remark, we corroborate Theorem \ref{thH} and establish a relation with the univariate case. \begin{remark} Let $h:[a,b]\to \mathbb{R}$ be a continuous function having box dimension $1$. We define a bivariate continuous function $f:[a,b]\times [c,d] \to \mathbb{R}$ such that $f(x,y)=h(x).$ From Definition \ref{Def2}, we have $$ (\mathfrak{I}^{\gamma}f)(x,y)=\frac{1}{\Gamma (\gamma_1) \Gamma (\gamma_2)} \int_a ^x \int_c ^y (\log\frac{x}{u})^{\gamma_1-1} (\log\frac{y}{v})^{\gamma_2-1}\frac{f(u,v)}{uv} dudv.$$ For $\gamma_2=1,$ we get $$ (\mathfrak{I}^{\gamma}f)(x,y)=\frac{1}{\Gamma (\gamma_1)} \int_a ^x \int_c ^y (\log\frac{x}{u})^{\gamma_1-1}\frac{f(u,v)}{uv} dudv.$$ From the definition of $f$, we obtain $$ (\mathfrak{I}^{\gamma}f)(x,y)=\frac{\log(\frac{y}{c})}{\Gamma (\gamma_1)} \int_a ^x(\log\frac{x}{u})^{\gamma_1-1}\frac{h(u)}{u} du.$$ Now, we have the following relation between the fractional integral of Hadamard type and that of mixed Hadamard type: $$ (\mathfrak{I}^{\gamma}f)(x,y)=\log(\frac{y}{c})(\mathfrak{I}^{\gamma_1}h)(x),$$ where the Hadamard fractional integral is given by $$(\mathfrak{I}^{\gamma_1}h)(x)=\frac{1}{\Gamma (\gamma_1)} \int_a ^x(\log\frac{x}{u})^{\gamma_1-1}\frac{h(u)}{u}du.$$ From Remark \ref{rmk1}, we have $\overline{\dim}_BGr(\mathfrak{I}^{\gamma}f)\le \overline{\dim}_BGr(\mathfrak{I}^{\gamma_1}h)+1.$ Since $\dim_BGr(h)=1,$ it follows from \cite{JY} that $\dim_BGr(\mathfrak{I}^{\gamma_1} h)=1$, and hence $\dim_BGr(\mathfrak{I}^\gamma f)=2.$ This corroborates Theorem \ref{thH}. \end{remark} \subsection*{Acknowledgements} The first author thanks CSIR, India, for the grant with file number 09/1058(0012)/2018-EMR-I. \bibliographystyle{amsplain}
{ "timestamp": "2021-05-06T02:10:35", "yymm": "2105", "arxiv_id": "2105.01885", "language": "en", "url": "https://arxiv.org/abs/2105.01885" }
\section{Introduction} Cold atom systems play a key role in both fundamental and applied aspects of quantum sensing as they provide a well isolated and controllable platform, whilst still being sensitive to fundamentally interesting interactions such as gravity or magnetic fields \cite{Bloch2008,Degen2017}. The high level of uniformity and homogeneity of cold atomic ensembles also provides a platform for high-accuracy time standards \cite{bidel2013,bize2005}. Recent experiments have demonstrated atomic quantum sensors in precision accelerometers \cite{Gustavson1997}, clocks \cite{Elgin2019}, and in measuring magnetic fields with an unprecedented combination of high sensitivity (nT), spatial resolution ($\SI{}{\micro\metre}$), and field of view ($\sim \SI{100}{\micro\metre}$) \cite{wildermuth2005,wildermuth2006,Patent,Romalis2007,Lev1,Lev2,Shah2018,Lev3}. As a result, there is now worldwide activity on the development of cold-atom based quantum sensing and timing technologies \cite{UKRev,EURev}. \\ Miniaturizing and integrating cold-atom quantum systems for fundamental experiments and technology development has advanced through the creation of atom-chips, which use micro-fabricated current-carrying wires to trap and control the atoms in an ultra-high vacuum, typically $1$-$\SI{100}{\micro\metre}$ from the chip surface. Such chips enable coherent manipulation of the atoms' internal and external degrees of freedom \cite{Folman2002,Fortagh2007} leading, for example, to on-chip formation of Bose-Einstein Condensates (BECs) \cite{ott2001, Hansel2001}, atom interferometers \cite{schumm2005, Wang2005}, and interfacing of quantum gases with nanomechanical oscillators \cite{Hunger_Camerer_2010}, carbon nanotubes \cite{schneeweiss2012} and cryogenic surfaces \cite{Huf2009,dikovsky2009,Nirren2006,Roux2008,Cano2011,Minniberger2014}. However, commonly-used metal wires with a typical thickness of $\sim \SI{1}{\micro\metre}$, mounted on bulk insulating substrates, have adverse effects when trapped atom clouds approach the surface. Spatial imperfections in the wires roughen the trapping potential, Johnson noise currents induce spin-flip transitions that eject atoms from the trap, and the strong Casimir-Polder (CP) attraction between the atoms and the chip produces tunneling losses. Together, these loss mechanisms prevent the formation of long-lived microtraps at distances closer than several microns from the chip surface \cite{Henkel2003, Lin2004}.\\ Overcoming these limitations is needed to advance both the fundamental and technological applications of micro- and nano-engineered environments for cold atoms. Trapping atoms closer to the chip offers a number of advantages. Higher magnetic field gradients and trap frequencies can be attained for a given current, thereby facilitating fast initial cooling, i.e. before three-body collisions become relevant, as required for creating BECs under less stringent vacuum requirements. Higher trapping frequencies will also produce atomic gases that are closer to the 1D limit and thus better suited to studying the thermodynamics of low-dimensional gases. Sub-micron trapping has been realized by balancing the attractive and repulsive forces of light in nano-fibres \cite{Vetsch2010,schneeweiss2014}, but has proven difficult for magnetic trapping \cite{Meng_2018}.
Enabling this offers a pathway to creating hybrid quantum devices comprising coherently-coupled atomic and solid-state elements \cite{Verdu2009,Bernon2013}.\\ Achieving long lifetimes for atom clouds trapped within $\SI{1}{\micro\metre}$ of the chip surface will enable quantum gases to be controlled, entangled, and addressed by potential landscapes whose spatial features are finer than the intrinsic length scales of atomic gases, for example the healing length. Sub-micron atom-surface trapping distances will also advance chip-based sensors in which atoms are used to measure external fields and forces. One such sensor, the BEC microscope, can image current flow patterns in planar conductors with a spatial resolution limited primarily by the distance of the BEC from the conductor \cite{wildermuth2005,wildermuth2006,Patent,Lev1,Lev2,Lev3}.\\ Here, we show that atom-chips containing two-dimensional conductors can, in principle, overcome the present limitations on the atom-surface separation and lifetime of the trapped atom cloud. Such trapping structures can be fabricated using, for example, graphene membranes that are either free standing or enclosed by two-dimensional insulating layers so as to form a van der Waals heterostructure \cite{vdW1,vdW2}. This opens a route to achieving sub-micron trapping distances and, hence, fine features in the trapping potential landscape that are not attainable when conventional metallic wires are used. We demonstrate that van der Waals heterostructures can be used to form traps just a few hundred nanometers from their surface whilst maintaining trap lifetimes of at least ten seconds. This exceeds the duration of most experiments on the atom clouds and of typical active operation cycles in cold-atom quantum sensors. In previous work on the possible use of two-dimensional electron gases as conductors in atom chips \cite{SinucoA,SinucoB}, the lifetime of nearby atomic gases was estimated by extrapolating from the rates of tunneling losses and Johnson noise-induced spin flips near metallic conductors \cite{SinucoB}. Here, we present detailed calculations of the atom cloud lifetimes, in which the same Green function formalism is used to determine both the CP potential and Johnson noise lifetimes, thereby ensuring a fully consistent picture of atom loss rates.\\ The paper is organized as follows: In Section II, we describe the layers of two-dimensional materials that comprise the atom-chip and explain how the chip affects the lifetime and minimum practical atom-surface separation of trapped atom clouds. In Section III, we compare the limiting factors, specifically atom cloud lifetime and spatial roughness, for traps formed less than $\SI{1}{\micro\metre}$ away from the surface of chips containing graphene or metallic trapping wires. Specifically, we present detailed calculations that quantify how two-dimensional conductors such as graphene can reduce both the spin-flip atom losses resulting from Johnson noise in the conductor and the tunneling losses due to CP atom-surface attraction sufficiently to enable stable sub-micron trapping with atom-cloud lifetimes $> 10$ s. In Section IV, we propose specific routes to the realization of graphene-based atom-chips that operate under realistic experimental conditions. 
Finally, in Section V we conclude with an overview of possible further device geometries and experiments to demonstrate the performance and versatility of graphene-based atom-chips.\\ \section{atom-chips: structure and influence on trapped atoms}\label{sec:lifetime} \begin{figure}[ht] \includegraphics[width = \linewidth]{Figures/A_chip_diagram.png} \caption{Schematic diagram of the proposed graphene-based atom-chip showing the Z-shaped graphene conducting channel (hexagonal graphene lattice pattern) carrying current, $I$ (pink arrow), encased by thin, protective, hBN cladding layers (upper and lower green slabs). The orientations of the applied magnetic bias field and the offset field are shown by blue and orange arrows, respectively. These fields combine with that produced by the conducting channel (current $I$) to trap a nearby atomic BEC (red). \label{fig:chip_diagram}} \end{figure} We consider micro-fabricated atom-chip structures that produce magnetic traps for clouds of rubidium-87 ($^{87}$Rb) atoms, as this approach will facilitate comparison to previous experiments. The implications for other species of alkali atoms are straightforward to derive and do not differ qualitatively from the $^{87}$Rb case. Our proposed graphene-based atom-chip is shown schematically in Fig.~\ref{fig:chip_diagram}. The chip comprises a Z-shaped graphene wire encased by two cladding layers of $\SI{10}{\nano\metre}$-thick hexagonal boron nitride (hBN). Electrical current, $I$, through the Z-shaped wire generates an inhomogeneous magnetic field, which is supplemented by a constant applied bias field, $\mathbf{B}_{b}$, and an offset (Ioffe) field, $\mathbf{B}_{0}$, to create a magnetic field minimum at $\mathbf{r}_{0} = (x_{0}, y_{0}, z_{0})$. An ultracold atom cloud is trapped near this magnetic field minimum, whose value is non-zero due to the offset field, which suppresses atom losses due to Majorana spin-flip transitions \cite{Folman2002}. The potential energy, $U_{\mathrm{mag}}$, of the trapped atoms equals the interaction energy between the atomic magnetic moment $\boldsymbol{\mu}$ and the net magnetic field, $\mathbf{B}(\mathbf{r})$, where $\mathbf{r}$ is the spatial position with respect to the coordinate origin, i.e. \begin{equation} U_\mathrm{mag}(\mathbf{r}) = -\boldsymbol{\mu}\cdot\mathbf{B}(\mathbf{r}) = m_{F} \mu_{B} g_{F} \abs{\mathbf{B}(\mathbf{r})}. \label{equ:MagPot} \end{equation} Here, $\mu_{B}$ is the Bohr magneton and $g_F$ is the Land\'e factor of the relevant hyperfine state. For the $^{87}$Rb atoms considered here, this is typically the $\ket{F, m_{F}} = \ket{2, 2}$ level of the $5^2S_{1/2}$ ground state. Provided that the magnetic quantum number, $m_F$, is a good quantum number, atoms in metastable low-field seeking states, whose magnetic moment is aligned anti-parallel to the magnetic field orientation, will be trapped near the magnetic field minimum, $\mathbf{r}_{0}$, where $U_{\mathrm{mag}}$ is also minimal.\\ The interplay between three energy scales determines the lifetime of the trapped atomic gas. The first energy scale is the trap depth given by $V_{0} = \abs{\boldsymbol{\mu}\cdot\Delta\mathbf{B}}$, where $\Delta\mathbf{B}$ is the difference between the maximum and the minimum values of the magnetic field. The second scale is the thermal energy, $k_{B}T_{\mathrm{cloud}}$, related to the temperature, $T_{\mathrm{cloud}}$, of the trapped atoms, where $k_{B}$ is the Boltzmann constant. The last energy scale is the ground-state energy of the trapped atoms, $E_{0}$.
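As an illustrative aside, these three energy scales are easy to evaluate numerically; in the minimal Python sketch below, the field variation ($\SI{1}{G}$), cloud temperature ($\SI{1}{\micro\kelvin}$) and trap frequency ($2\pi\times\SI{20}{kHz}$) are assumed example values rather than parameters of a specific experiment.

\begin{verbatim}
# Energy scales of the magnetic trap (Eq. for U_mag): trap depth V_0,
# thermal energy, and ground-state energy, for 87Rb in |F=2, mF=2>.
import numpy as np

mu_B, k_B, hbar = 9.274e-24, 1.381e-23, 1.0546e-34  # SI units
g_F, m_F = 0.5, 2                                   # |2,2> state

V_0  = m_F*g_F*mu_B*1e-4         # trap depth for a 1 G field variation (J)
E_th = k_B*1e-6                  # thermal energy at T_cloud = 1 uK
E_0  = 0.5*hbar*2*np.pi*20e3     # ground-state energy at 2 pi x 20 kHz

print(V_0/k_B*1e6)               # ~67 uK of trap depth per Gauss
print(V_0/E_th, V_0/E_0)         # both >> 1: the deep-trap condition holds
\end{verbatim}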
Provided that a sufficiently deep magnetic trap, i.e. $V_{0} \gg k_{B}T_{\mathrm{cloud}}$, $E_{0}$, is located far away from any surface, the overall lifetime of the trapped atoms is limited by collisions with the background gas. In typical atom-chip experiments, pressures of $10^{-10}$ to $10^{-11}$ mbar or below can be reached, for which the background pressure-limited lifetime of the trapped atoms is of the order of tens of seconds or better \cite{Folman2002}. As the atom cloud approaches the chip surface, its lifetime is reduced by modification of the trapping potential due to CP interactions with the surface (see Figs. \ref{fig:dipole_interface} and \ref{fig:CP_Gr_hBN_Gold}) and by Johnson noise in the conductor, which can cause the atoms to undergo spin-flip transitions into untrapped states. Lower-frequency Johnson noise, comparable with the trap frequencies, can also potentially cause atom losses due to parametric heating of the atom cloud. However, the rates of such heating are orders of magnitude lower than the spin-flip loss rates \cite{Henkel2003}. Due to the low Johnson noise and CP potential near graphene-based atom-chips, the lifetime of atom clouds trapped near such chips will, beyond a certain trapping distance, be limited only by the background pressure, as we quantify below.\\ As the atom-surface trapping distance decreases, the trap frequencies must be increased in order to reduce depletion by the CP interaction. In turn, this increases the density of the trapped atom cloud and, therefore, also increases the rate of three-body collision losses discussed in Section III. In order to determine the optimal trapping distance, the interplay of three-body losses, the minimum detectable density of the atom cloud, and the CP interaction all have to be considered, as discussed in Section III. \subsection{CP potential and resulting atom tunneling towards the chip surface}\label{sec:tunnelling towards the chip surface} The Casimir-Polder potential is essentially a position-dependent shift of the atomic energy level structure, induced by the interaction of the atom with the surrounding surface-modified electromagnetic radiation \cite{casimir1948,Wylie84}. In general, the presence of an object modifies a system's electromagnetic density of states, due to the boundary conditions that the field has to satisfy on the surface of the object. The extent of the modification, and therefore the strength of the CP potential, depends on the object's specific position in space, on its form, and on the material(s) from which it is made.\\ Generally, an atom experiences an attractive CP force towards metallic and dielectric surfaces. In atom-chip systems, this behavior effectively lowers the barrier at the side of a magnetic trap that is nearest the surface, as shown in Fig. \ref{fig:CP_many_structures}. In turn, this enables atoms to tunnel out of the trap and be lost from the atom cloud. Tunneling losses induced by the CP potential affect key atom-chip performance parameters such as the integration time for sensor applications and the coherence time for quantum memories. In the present generation of atom-chips, metal wires used to generate the magnetic field lead to a large CP attraction on trapped atoms located within $\approx \SI{1}{\micro\metre}$ of the surface. This imposes a minimum trapping distance of $10$-$\SI{100}{\micro\metre}$ for typical atom-chip experiments \cite{Folman2002, Jones2003, Harber2003}.
Our proposed 2D material-based atom-chips are expected to exert very low CP attraction, due to their extremely small ($<$ 100 nm) thickness and their specific material properties, thereby opening a new route to entering the sub-micron atom-surface trapping regime. \begin{figure}[ht] \centering \includegraphics[width = \linewidth]{Figures/A_interface.png} \caption{Schematic diagram of a dipole (electric in the case of our CP potential calculation but magnetic for our Johnson noise analysis) near an $n$-layer system, where each layer is designated by index $l = 1, 2,..., n$. Each layer is characterized by thickness, $t_{l}$, permeability, $\mu_{l}$, and permittivity, $\epsilon_{l}$. The top surface of upper material layer 2 (yellow) coincides with the origin of the $y$ coordinate. The dipole (depicted by the arrow labeled $\mathbf{d}$) is located at $\mathbf{r}^{\prime} = (x^{\prime}, y^{\prime}, z^{\prime})$ in layer 1, above the solid material layers, and acts as a point source. The arrow labeled $\mathbf{E}$ represents the orientation of the electric field of frequency, $\omega$, at point $\mathbf{r}$, which is related to the dipole $\mathbf{d}$ via a total Green's tensor. Note also that $t_{1}$ and $t_{n}$ are infinite, corresponding to semi-infinite top and bottom layers.} \label{fig:dipole_interface} \end{figure} For a system in equilibrium at a temperature $T$, consisting of an atom located at position $\mathbf{r}^{\prime}$ near a material body (see Fig.~\ref{fig:dipole_interface}), both interacting with the electromagnetic field, the CP potential is given by \cite{Buhmann_thermal,Intravaia11} \begin{equation} U_{CP}(\mathbf{r}^{\prime}) = \mu_{0}k_{B}T\sum_{j=0}^{\infty}{}^{'}\xi_{j}^{2}\alpha(\mathrm{i}\xi_{j})\,\mathrm{tr}\big[\mathbf{G}^{(1)}(\mathbf{r}^{\prime}, \mathbf{r}^{\prime}, \mathrm{i}\xi_{j})\big], \label{eq:CP_for_flatsurface} \end{equation} where $\mu_{0}$ is the permeability of vacuum, $\hbar$ is the reduced Planck constant, $\alpha(\omega)$ is the atomic polarizability, and $\xi_{j} = 2\pi k_{B}Tj/\hbar$ are the Matsubara frequencies \cite{Matsubara1955}. The prime on the Matsubara sum in Eq.~\eqref{eq:CP_for_flatsurface} indicates that the $j = 0$ term carries half weight \cite{Intravaia11}. In Eq.~\eqref{eq:CP_for_flatsurface}, $\mathbf{G}^{(1)}(\mathbf{r}^{\prime}, \mathbf{r}^{\prime}, \omega)$ is the scattering Green's tensor, which contains the information about the material's optical properties and the geometry of the system.\\ For atom-surface separations shorter than the size of the components of an actual atom-chip (typically of the order of a few tens of micrometers or larger), we can consider that the atoms are interacting with a large layered surface. In this case, the expression for the Green tensor is known, and we present it explicitly in Appendix \ref{supplement: Green's function}. As shown there, in order to determine the Green tensor, we need to evaluate the reflection coefficients of the electromagnetic field incident on the atom-chip structure. In our case, the multi-layer configurations allow their determination using the scattering or the transfer-matrix approach \cite{Yariv83,Zhan2013} in combination with models describing the optical properties of graphene, hBN and gold (see Appendix \ref{supplementary: Optical properties} for details).
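The structure of the Matsubara sum in Eq.~\eqref{eq:CP_for_flatsurface} is captured by the short sketch below. This is an outline under stated assumptions rather than a full implementation: the polarizability and the trace of the scattering Green's tensor (whose multilayer form is given in Appendix \ref{supplement: Green's function}) are left as user-supplied functions, and the half-weight $j = 0$ term, which requires the $\xi \to 0$ limit of $\xi^{2}\,\mathrm{tr}\,\mathbf{G}^{(1)}$, is omitted for brevity.

\begin{verbatim}
# Skeleton of the Matsubara sum for U_CP, Eq. (CP_for_flatsurface).
# alpha(xi): atomic polarizability at imaginary frequency i*xi.
# tr_G1(y, xi): trace of the scattering Green's tensor at height y.
import numpy as np

hbar, k_B = 1.0546e-34, 1.381e-23
mu_0 = 4e-7*np.pi

def U_CP(y, alpha, tr_G1, T=300.0, n_terms=500):
    s = 0.0
    for j in range(1, n_terms + 1):   # j = 0 term omitted (see text)
        xi = 2*np.pi*k_B*T*j/hbar     # Matsubara frequency xi_j
        s += xi**2*alpha(xi)*tr_G1(y, xi)
    return mu_0*k_B*T*s
\end{verbatim}

At room temperature $\xi_{1} \approx \SI{2.5e14}{rad/s}$, so for sub-micron atom-surface distances the decay of the scattering Green's tensor at imaginary frequencies truncates the sum after a modest number of terms; convergence should nevertheless be checked against \texttt{n\_terms}.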
In this work, we take the Fermi energy and electron relaxation rate of graphene to be $E_{F} = \SI{0.1}{eV}$ and $\gamma = \SI{4}{THz}$, respectively, corresponding to typical values found both theoretically \cite{gric_2019,andryieuski_lavrinenko_2013,Goncalves2016,amorim_2017} and in experiments \cite{ju_geng_horng_2011}.\\ In this paper, we consider a simple model of alkali atoms with atomic polarizability of the form \cite{JuddA,Intravaia11} \begin{equation} \alpha(\mathrm{i}\xi_{j}) = \alpha_{0}\frac{\omega_{\mathrm{T}}^{2}}{\omega_{\mathrm{T}}^{2} + \xi_{j}^{2}}, \label{eq:polarisability} \end{equation} where $\alpha_{0}$ is the ground-state static polarizability and $\omega_{\mathrm{T}}$ is the frequency of the dominant atomic transition. In the case of \textsuperscript{87}Rb atoms, $\alpha_{0} = \SI{5.27e-39}{\farad\metre^{2}}$ \cite{Schwerdt2018}, and $\omega_{\mathrm{T}} = 2\pi\times\SI{384}{THz}$ is the D2 line transition frequency corresponding to a wavelength of $\SI{780}{\nano\metre}$ \cite{Steck2020}. Eq. \eqref{eq:CP_for_flatsurface} is sufficiently generic to enable the CP potential to be calculated for our atom-chip system and compared consistently with other materials and structures.\\ \begin{figure}[ht] \includegraphics[width = \linewidth]{Figures/A_CP_Gr_hBN_Gold.png} \caption{CP potential, $U_{CP}$, calculated versus the position, $y^{\prime}$, of an \textsuperscript{87}Rb atom from the surface of: a graphene monolayer (dashed red curve), a heterostructure comprising a graphene monolayer encased by two $10$-$\SI{}{\nano\metre}$-thick hBN layers (solid green curve), and a $125$-$\SI{}{\nano\metre}$-thick gold slab (solid yellow curve), at temperature $T = \SI{300}{\kelvin}$.} \label{fig:CP_Gr_hBN_Gold} \end{figure} In Fig.~\ref{fig:CP_Gr_hBN_Gold}, we compare the CP potential, $U_{CP}$, calculated versus the separation, $y^{\prime}$, of an \textsuperscript{87}Rb atom from three different material systems: a graphene monolayer (dashed red curve) for which CP potential calculations have been reported previously \cite{Churkin,MosteA,MosteB,JuddA,ScheelA,AntezzaA,mostepanenko_2020}, a heterostructure comprising a graphene monolayer encased by two $10$-$\SI{}{\nano\metre}$-thick hBN layers (solid green curve) and a $125$-$\SI{}{\nano\metre}$-thick gold slab (solid yellow curve), all at $T = \SI{300}{\kelvin}$. We choose the thickness of the gold slab to be $\SI{125}{\nano\metre}$ because this is among the smallest reported thicknesses at which gold wires in atom-chip experiments \cite{salem_japha2010} have a conductivity that still behaves as in the bulk. Even in this limit of metallic conductor thickness, at $y^{\prime} \approx \SI{1}{\micro\metre}$, the CP potential for the heterostructure is approximately 40$\%$ of that for the thin gold slab.\\ The effect of the CP potential on the total trapping potential, $U_{\mathrm{tot}}$, can be illustrated by modelling the magnetic trapping potential, $U_H$, as simple harmonic and adding the CP potential, giving \begin{equation} U_{\mathrm{tot}}(y) = U_{\mathrm{H}}(y) + U_{\mathrm{CP}}(y).
\label{eq:U_tot} \end{equation} The simple harmonic potential takes the form \begin{equation} U_{\mathrm{H}}(y) = \frac{1}{2}m\omega_{r}^{2}(y-y_{c})^{2}, \label{eq:U_Harmonic} \end{equation} where $m$ is the mass of the trapped atom, $\omega_{r}$ is the radial trapping frequency, and $y_{c}$ is the position of the centre of the simple harmonic trap measured from the surface; note that $y_{c}$ is not necessarily equal to the minimum of the total potential.\\ \begin{figure} \centering \includegraphics[width =\linewidth]{Figures/A_CP_many_structures2.png} \caption{Total potential, $U_{\mathrm{tot}}$, calculated versus distance, $y$, of an \textsuperscript{87}Rb atom from the surface of: a free-standing graphene monolayer (dashed red curve); a graphene monolayer encased on each side by a $10$-$\SI{}{\nano\metre}$-thick hBN sheet (solid green curve); a $125$-$\SI{}{\nano\metre}$-thick gold sheet (solid yellow curve). The total potential is the sum of the CP potential and the harmonic model trapping potential, which is centred at $y = \SI{0.5}{\micro\metre}$ with radial trapping frequency, $\omega_{r} = 2\pi\times\SI{20}{kHz}$. All curves are for $T = \SI{300}{\kelvin}$.} \label{fig:CP_many_structures} \end{figure} Fig.~\ref{fig:CP_many_structures} shows the resulting total trapping potential, $U_{\mathrm{tot}}$, calculated versus distance, $y$, of an \textsuperscript{87}Rb atom from a graphene monolayer (dashed red curve), an hBN-graphene monolayer-hBN heterostructure (solid green curve) and a $125$-$\SI{}{\nano\metre}$-thick gold slab (solid yellow curve), taking (see Eq.~\eqref{eq:U_Harmonic}) $\omega_{r} = 2\pi\times\SI{20}{kHz}$, $y_{c} = \SI{0.5}{\micro\metre}$, $T = \SI{300}{\kelvin}$, and the mass of an \textsuperscript{87}Rb atom, $m = \SI{1.44e-25}{kg}$. It is apparent that the CP potential distorts the simple harmonic trap: an energy barrier of finite height and width appears near the surface for $y < y_{c}$. The height and width effectively scale with the distance of the trap centre from the surface, thereby giving rise to tunneling losses, which deplete the trapped atom cloud. Since graphene creates a weaker CP attraction than even the thin gold conductor, the tunneling loss rates for graphene-based atom-chips are lower than for conventional atom-chips, and we quantify this benefit below. Consequently, graphene-based atom-chips offer a performance advantage over the present generation of atom-chips, which use metallic conductors as current-carrying wires. \subsubsection{Tunneling loss rate} In order to estimate the tunneling loss rate, $\Gamma_{\mathrm{tun}}$, of an atom cloud trapped in the finite potential well shown schematically by the solid black curve in Fig.~\ref{fig:WKB_2}, we employ Gamow's theory of alpha decay \cite{harper_anderson_1997}. In this model, the atom is considered to oscillate inside the potential well and can escape by tunneling through the finite barrier nearest the surface each time it is incident on that barrier. The tunneling rate is determined by the frequency at which the atom approaches the barrier, $f$, and the transmission probability $\Tilde{T}$ that the atom tunnels out at each attempt. Mathematically, we have \begin{equation} \Gamma_{\mathrm{tun}} = f\times\Tilde{T}.
\label{eq:tunnelling rate} \end{equation} \begin{figure} \includegraphics[width = \linewidth]{Figures/A_WKB_2.png} \caption{Schematic diagram of the total trapping potential, $U_{\mathrm{tot}}(y)$ (solid black curve) plotted versus distance, $y$, of an \textsuperscript{87}Rb atom from an atom-chip surface. The solid blue line indicates the ground-state energy of the quantum harmonic oscillator, $E = \hbar\omega_{\mathrm{eff}}/2$, where $\omega_{\mathrm{eff}}$ is the effective characteristic frequency of the simple harmonic trap (dashed red curve), which is perturbed by the CP potential, as described in the text, and approximates $U_{\mathrm{tot}}(y)$ near the minimum. The positions $y_{0}$, $y_{1}$, $y_{2}$, indicated by arrows, are, respectively, the actual trap centre and the two classical turning points for the left-hand potential energy barrier, where $U(y_{1}) - U(y_{0}) = U(y_{2}) - U(y_{0}) = \hbar\omega_{\mathrm{eff}}/2$. The dash-dotted yellow curve is the potential of the unperturbed simple harmonic trap.} \label{fig:WKB_2} \end{figure} Fig.~\ref{fig:WKB_2} shows that the deformation of the unperturbed harmonic magnetic potential (dot-dashed yellow curve) by the CP interaction also yields an effective perturbed harmonic potential (dashed red curve) with a lower trapping frequency, $\omega_{\rm eff}$, and whose minimum shifts from $y=y_{c}$ to a new position, $y_{0}$. The perturbed harmonic potential therefore takes the form \begin{equation} U_{\mathrm{eff}}(y) = \frac{1}{2}m\omega_{\mathrm{eff}}^{2}(y-y_{0})^{2}, \label{eq:U_Harmonic_eff} \end{equation} where a Taylor expansion of $U_{\rm tot}(y)$ about $y = y_{0}$ yields \begin{equation} y_{0}\approx y_c-\frac{U'_{\rm CP}(y_c)}{m \omega_r^2}, \quad \omega_{\rm eff}^2\approx \omega_r^2+\frac{U''_{\rm CP}(y_0)}{m}. \end{equation} Atoms in the ground-state of this effective potential have an energy $E = \hbar\omega_{\mathrm{eff}}/2$ and approach the barrier at frequency $f = \omega_{\mathrm{eff}}/2\pi$. Using the Wentzel–Kramers–Brillouin (WKB) approximation, the transmission probability through the tunnel barrier is given by \cite{shankar2011,karnakov2013,griffiths_2005} \begin{equation} \Tilde{T} = \mathrm{exp}\bigg(-2\int_{y_{1}}^{y_{2}}\kappa(y)\mathrm{d}y\bigg), \label{eq:transmission probability} \end{equation} where $y_{1}$ and $y_{2}$ are the two classical turning points for the potential barrier, $\kappa(y) = \sqrt{2m(U_{\mathrm{tot}}(y) - E)}/\hbar$, and $U_{\mathrm{tot}}(y)$ is the form of the barrier in the total potential energy curve.\\ The average tunneling-limited lifetime of a trapped atom is then defined by \begin{equation} \tau_{\mathrm{tun}}(y_{0}) = \frac{1}{\Gamma_{\mathrm{tun}}(y_{0})}. \label{eq:tunnelling loss lifetime} \end{equation} \begin{figure}[ht] \includegraphics[width = \linewidth]{Figures/A_Tunnel_lifetime_main.png} \caption{Tunneling lifetime, $\tau_{\mathrm{tun}}$, calculated versus the position of the harmonic trap center, $y_{0}$, for an \textsuperscript{87}Rb atom trapped near a graphene monolayer (dashed red curve) and a $125$-$\SI{}{\nano\metre}$-thick gold slab (solid yellow curve). The weaker CP attraction for graphene gives rise to a higher, wider tunnel barrier and, consequently, a higher tunneling lifetime. Parameters: $T = \SI{300}{\kelvin}$, $\omega_{r} = 2\pi\times\SI{20}{kHz}$.} \label{fig:Tunnel_lifetime_main} \end{figure} We calculated the tunneling loss rates for our model systems using the Gamow formalism described above.
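To make the procedure concrete, the following sketch implements Eqs.~\eqref{eq:tunnelling rate} and \eqref{eq:transmission probability} for a toy total potential. The $-C_{3}/y^{3}$ term, with an assumed coefficient, is an illustrative stand-in for the CP potential; the full multilayer $U_{\mathrm{CP}}$ described above should be substituted for quantitative results.

\begin{verbatim}
# Gamow/WKB tunnelling-rate sketch for a harmonic trap distorted by an
# illustrative -C3/y^3 surface attraction (NOT the paper's full CP model).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbar, m = 1.0546e-34, 1.44e-25       # 87Rb
w_r, y_c = 2*np.pi*20e3, 0.5e-6      # trap frequency (rad/s), centre (m)
C3 = 5e-49                           # assumed CP-like coefficient (J m^3)

def U_tot(y):
    return 0.5*m*w_r**2*(y - y_c)**2 - C3/y**3

def Gamma_tun(y0, w_eff):
    E = U_tot(y0) + 0.5*hbar*w_eff                     # ground-state energy
    ys = np.linspace(0.05*y0, y0, 4000)
    y_b = ys[np.argmax(U_tot(ys))]                     # barrier maximum
    y1 = brentq(lambda y: U_tot(y) - E, 0.05*y0, y_b)  # outer turning point
    y2 = brentq(lambda y: U_tot(y) - E, y_b, y0)       # inner turning point
    S, _ = quad(lambda y: np.sqrt(max(2*m*(U_tot(y)-E), 0.0))/hbar, y1, y2)
    return (w_eff/(2*np.pi))*np.exp(-2*S)              # Gamma_tun = f x T

print(1.0/Gamma_tun(y_c, w_r))       # tunnelling-limited lifetime (s)
\end{verbatim}

The root brackets assume a well-formed barrier between the surface and the trap minimum; close to the minimum trapping distance, where the barrier disappears, they must be chosen more carefully, and the effective-frequency correction of Eq.~\eqref{eq:U_Harmonic_eff} should be included.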
Figure \ref{fig:Tunnel_lifetime_main} shows the resulting lifetimes, $\tau_{\mathrm{tun}}$, calculated versus the position of the trap center, $y_{0}$, from a graphene monolayer (dashed red curve) and a thin gold slab (solid yellow curve). The minimum distance that atoms can be trapped from the surface is marked by the left-hand ends of the two curves, where the tunnel barrier vanishes. Comparison of the curves shows that using a graphene monolayer reduces this distance to $\sim \SI{0.3}{\micro\metre}$ compared to the value of $\sim\SI{0.45}{\micro\metre}$ for the gold layer. For $y_{0} > \SI{0.5}{\micro\metre}$, the tunneling lifetime for the single layer of graphene is orders of magnitude higher than for the gold slab due to the weaker CP potential. \subsection{Atom losses due to Johnson Noise} \label{sec:Temporal_Johnson Noise} Magnetically trapped atoms only remain trapped when they are in a low-field seeking state with the magnetic moment aligned anti-parallel to the direction of the magnetic field. In order to keep $m_{F}$ a good quantum number, an offset magnetic field of a few Gauss is typically maintained at the trapping position in atom-chip experiments \cite{Lin2004}. Given a Zeeman splitting of, for example, 0.7 MHz/G for the $^{87}$Rb ground state, this produces transition frequencies of a few MHz between hyperfine states with different $m_{F}$ values, thus making the trapped atoms susceptible to magnetic fields in that frequency range and therefore to noise in the radio frequency domain.\\ Johnson noise arises from electrical noise currents within a conductor, which produce fluctuations of the magnetic field \cite{henkel1999}. For near-surface traps formed between $\approx \SI{1}{\micro\metre}$ and 1 mm from a metallic conductor on a conventional atom-chip, Johnson noise is usually the main limitation on the lifetime of the atom cloud \cite{Lin2004,Jones2003,Harber2003}. For example, the measured lifetime of atoms trapped $\approx \SI{1}{\micro\metre}$ from thick metallic conductors is limited to only $\approx 0.1$ s by the effects of Johnson noise \cite{Lin2004}.\\ The key advantage of using trapping wires made from graphene, or other two-dimensional conductors, is their reduced level of Johnson noise \cite{SinucoA, SinucoB}. This advantage originates from their orders-of-magnitude lower sheet electron density compared with metals, which dominates over the tendency of higher carrier mobility to increase current fluctuations. We now explore this advantage using two models of increasing sophistication. Firstly, we make a crude estimate based on previous models for metallic conductors, derived using the fluctuation-dissipation theorem \cite{Lin2004,henkel1999,henkel2001}. Secondly, we present a rigorous quantum field theoretical calculation for 2D materials involving the full Green's function for the system, determined from the reflection coefficients for graphene and 2D multilayers.\\ Comparing graphene and gold wires with a given top surface area, $A$, the ratio of the number of free electrons in graphene, $N_{G}$, to that in gold, $N_{Au}$, is $N_{G}/N_{Au} = n_{G}/(n_{Au}t_{Au})$, where $t_{Au}$ is the thickness of the gold wire and $n_{G}$, $n_{Au}$ are, respectively, the sheet and volume electron densities of undoped graphene and gold. Taking $n_{Au} = \SI{5.9e28}{\metre^{-3}}$ for bulk gold and typical values of $t_{Au} = \SI{1}{\micro\metre}$ and $n_{G} = \SI{9e14}{\metre^{-2}}$ \cite{fang_konar_2007} gives $N_{G}/N_{Au} = 1.5 \times 10^{-8}$.
The carrier mobility of gold is $\mu_{Au}\approx 4.3 \times 10^{-3}$ m$^2$/Vs and, in graphene, mobilities up to $\mu_G\approx 20$ m$^2$/Vs have been reported in free-standing membranes \cite{bolotin2008, chen_jang_2008}. For graphene on a substrate, the electron mobility is typically at least an order of magnitude lower, leading to the estimate $\mu_{Au}/\mu_G \gtrsim 2 \times 10^{-4}$.\\ These results allow us to anticipate that the Johnson noise will be far smaller in graphene than in gold. Indeed, to make a rough initial estimate of this intuitive advantage, we now use the model presented in \cite{Lin2004,henkel1999,henkel2001} to evaluate the expected spin-flip lifetime enhancement. For an atom trapped at distance $d$ from a metal film of thickness $t$, width $w \gg t$, and resistivity $\rho$ at temperature $T$, the $\vert F,m \rangle \rightarrow \vert F,m-1 \rangle$ spin-flip rate, given in $\SI{}{\second^{-1}}$, is $\Gamma = C(T/\rho)\times[d(1+d/t)(1+2d/w)]^{-1}$, where $C$ is a constant that depends on the Clebsch-Gordan coefficient for the transition and on the transition frequency \cite{Lin2004,henkel1999,henkel2001}. Assuming, as a crude initial approximation, that this formula can also be used to estimate the rate of spin flips induced by electrons in graphene, the ratio of the lifetimes, $\tau_G$ and $\tau_{Au}$, of atom clouds trapped at a distance $d$ above graphene and gold wires, respectively, is \be \label{eq:ratio} \frac {\tau_G}{\tau_{Au}}=\frac{\Gamma_{Au}}{\Gamma_G}=\frac{\rho_{G}}{\rho_{Au}}\frac{\left(1+\frac{d}{t_G}\right)}{\left(1+\frac{d}{t_{Au}}\right)}, \ee where $t_G=0.345$ nm is the thickness of a graphene monolayer and $\rho_{Au}=1/(n_{Au}e\mu_{Au})$, $\rho_G=t_G/(n_G e\mu_G)$ are the resistivities of gold and graphene, and $e$ is the electron charge.\\ Since $d \gg t_G$, it follows that\\ \be \begin{split} \frac {\tau_G}{\tau_{Au}} \approx \left(\frac{n_{Au}}{n_G}\right)\left(\frac{\mu_{Au}}{\mu_G}\right)\frac{t_{Au}d}{\left(d+t_{Au}\right)}\\ = \left(\frac{N_{Au}}{N_G}\right)\left(\frac{\mu_{Au}}{\mu_G}\right)\frac{d}{\left(d+t_{Au}\right)}. \end{split} \ee Unless $d \ll t_{Au}$, which is not the case for presently-attainable trapping distances, the final term in the above equation is of order unity and so $\tau_G/ \tau_{Au}$ depends primarily on the relative number of free electrons in gold and in graphene and on their mobility ratio.\\ Using the values for the carrier mobility and density given above, we arrive at $\tau_G/\tau_{Au} \gtrsim 1.3 \times 10^4 d/(d+t_{Au})$. We thus predict that for atoms trapped $\approx \SI{1}{\micro\metre}$ away from a conductor, the lifetime will increase from $\approx 0.1$ s for a 1-$\SI{}{\micro\metre}$-thick metallic wire, similar to that reported in \cite{Lin2004}, to $\gtrsim 600$ s for graphene, i.e. an increase by a factor of $\tau_G/\tau_{Au} \gtrsim 6.3 \times 10^3$. The physical reason for this is that although electrons in graphene have a higher mobility than in a metal, and so produce more Johnson noise per carrier, this is more than compensated by the far lower number of charge carriers in the graphene.\\ In the next section, we derive an expression for the Johnson noise produced by van der Waals heterostructures. To quantify the advantages of using 2D conductors, rather than metal wires, to reduce noise in atom-chips, we consider the particular case of graphene conduction channels. However, similar advantages are expected from other 2D materials due to their low number of electric current carriers.
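A short numerical check of this estimate, using the carrier densities and the lower-bound mobility ratio quoted above (the small difference from the quoted $\gtrsim 6.3\times10^{3}$ reflects rounding):

\begin{verbatim}
# Quick check of Eq. (ratio) in the d >> t_G limit.
n_Au, n_G = 5.9e28, 9e14    # gold (m^-3) and graphene sheet (m^-2) densities
mu_ratio, t_Au = 2e-4, 1e-6 # mu_Au/mu_G lower bound; gold thickness (m)

def lifetime_ratio(d):
    return (n_Au*t_Au/n_G)*mu_ratio*d/(d + t_Au)

print(lifetime_ratio(1e-6)) # ~6.6e3 at d = 1 micron
\end{verbatim}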
\subsubsection{Transition rates in terms of dyadic Green's functions} The magnetic moment vector associated with the transition $\ket{i} \to \ket{f}$ of an atom is given by \cite{Buschow2013, Rekdal2004} \begin{equation} \boldsymbol{\mu} = -\bra{i}\frac{\mu_{B}}{\hbar}\Big(g_{S}\hat{\mathbf{S}} + g_{L}\hat{\mathbf{L}} - g_{I}\frac{m_{\mathrm{e}}}{m_{\mathrm{nuc}}}\hat{\mathbf{I}}\Big)\ket{f}, \label{eq:magnetic moment} \end{equation} where $\hat{\mathbf{S}}$, $\hat{\mathbf{L}}$, and $\hat{\mathbf{I}}$ are the electron spin operator, the electron orbital angular momentum operator, and the total nuclear angular momentum operator, respectively, with their corresponding Land\'{e} g-factors $g_{S}$, $g_{L}$, and $g_{I}$; $m_{\mathrm{e}}$ is the electron mass and $m_{\mathrm{nuc}}$ is the nuclear mass. Here, the magnitude of the angular momentum, for example, $\hat{\mathbf{S}}$, is $\sqrt{S(S+1)}\hbar$ and the eigenvalue of the $z$-component of $\hat{\mathbf{S}}$, i.e. $\hat{\mathbf{S}}_{z}$, is $m_{S}\hbar$, where $S$ and $m_{S}$ are the corresponding quantum numbers for $\hat{\mathbf{S}}$ and $\hat{\mathbf{S}}_{z}$, respectively.\\ Taking $L = 0$ for the electronic ground-state and neglecting the term containing the total nuclear angular momentum operator $\hat{\mathbf{I}}$ in Eq. \eqref{eq:magnetic moment} because $m_{\mathrm{e}} \ll m_{\mathrm{nuc}}$, the magnetic moment vector becomes $\boldsymbol{\mu} = -\mu_{B}g_{S}\bra{i}\hat{\mathbf{S}}\ket{f}/\hbar$, where $g_{S} = 2$, and the rate of magnetic spin-flip transitions from an initial hyperfine magnetic state $\ket{i}$ to another state $\ket{f}$ is given by \cite{Rekdal2004} \begin{multline} \Gamma_{\mathrm{JN}} = \mu_{0}\frac{2(\mu_{B}g_{S})^{2}}{\hbar^{2}}\sum_{j,k}\Big\{\bra{f}\hat{\mathbf{S}}_{j}\ket{i}\bra{i}\hat{\mathbf{S}}_{k}\ket{f}\\ \times \mathrm{Im}[\curl{\curl{\mathbf{G}(\mathbf{r}_{0}, \mathbf{r}_{0}, \omega_{if})}}]_{jk}(\bar{n}_{\mathrm{th}}+1)\Big\}, \label{eq:Splin-flip rate} \end{multline} where $\hat{\mathbf{S}}_{j,k}$ denotes the $j$ and $k$ components of the electron spin operator $\hat{\mathbf{S}}$, and $\mathbf{G}(\mathbf{r}_{0}, \mathbf{r}_{0}, \omega_{if})$ is the total dyadic Green's function describing the electromagnetic field at the transition frequency $\omega_{if}$ and position $\mathbf{r}_{0}$ due to a \emph{magnetic} dipole located at $\mathbf{r}_{0}$ (see Appendix \ref{supplement: Green's function}). The mean thermal photon occupation number is given by \begin{equation} \bar{n}_{\mathrm{th}} = \frac{1}{\mathrm{e}^{\hbar\omega_{if}/k_{B}T} - 1}, \label{eq:mean_thermal occupation} \end{equation} where $T$ is the temperature of the electromagnetic field system that causes the spin-flip transitions, rather than of the trapped atoms, and $\omega_{if}$ is the angular frequency of the radiation due to magnetic spin-flip transitions. The Johnson noise lifetime of a single atom is defined as \begin{equation} \tau = \frac{1}{\Gamma_{\mathrm{JN}}}.
\label{eq:Lifetime} \end{equation} Note that, mathematically, $\mathbf{G}(\mathbf{r}_{0}, \mathbf{r}_{0}, \omega_{if})$ can be written as the sum of a Green's tensor, describing the field due to a dipole in an infinitely extended homogeneous bulk medium, vacuum for example, and a scattering Green's tensor describing the reflected field in the presence of reflective bodies, so that \begin{equation} \mathbf{G}(\mathbf{r}_{0}, \mathbf{r}_{0}, \omega_{if}) = \mathbf{G}^{(0)}(\mathbf{r}_{0}, \mathbf{r}_{0}, \omega_{if}) + \mathbf{G}^{(1)}(\mathbf{r}_{0}, \mathbf{r}_{0}, \omega_{if}). \label{eq:Total_green_function} \end{equation} The explicit forms of $\mathbf{G}^{(0)}(\mathbf{r}_{0}, \mathbf{r}_{0}, \omega_{if})$ can be found in \cite{Novotny2006, Buhmann_ii} and are summarized in Appendix \ref{supplement: Green's function}. Owing to the general properties of a Green's tensor, we have \begin{equation} \mathrm{Im}[\curl{\curl{\mathbf{G}(\mathbf{r}, \mathbf{r}_{0}, \omega)}}] = \frac{\omega^{2}}{c^{2}}\mathrm{Im}[\epsilon(\omega)\mu(\omega)\mathbf{G}(\mathbf{r}, \mathbf{r}_{0}, \omega)], \label{eq:Green function properties} \end{equation} where $\epsilon(\omega)$ and $\mu(\omega)$ are, respectively, the permittivity and permeability of the medium in which the field and source points are located. The imaginary part of the Green's tensor in vacuum has a simple form: \begin{equation} \mathrm{Im}[\mathbf{G}^{(0)}(\mathbf{r}_{0}, \mathbf{r}_{0}, \omega)]_{jk} = \frac{1}{6\pi}\frac{\omega}{c}\delta_{jk}, \label{eq:imaginary part of vaccum Green} \end{equation} where $\delta_{jk}$ is the Kronecker delta, which allows us to determine the spin-flip rates in vacuum. \subsubsection{Johnson noise lifetime calculation} In this section, we use Eq. \eqref{equ:MagPot} to determine an appropriate value of the atomic transition frequency for input into our Johnson noise lifetime calculations in the presence of the magnetic trapping field. For the $\ket{F, m_{F}} = \ket{2, 2}$ $5^{2}S_{1/2}$ ground-state of the \textsuperscript{87}Rb atoms considered here, $g_{F} = 1/2$ \cite{Steck2020, Buschow2013}. Here, we only consider the Zeeman transition from $\ket{2, 2}$ to $\ket{2, 1}$ to facilitate direct comparison with the results of Ref. \cite{Lin2004}. The angular frequency of the radiation is then given by \begin{equation} \omega_{if} = \frac{\mu_{B}\abs{\mathbf{B(\mathbf{r}_{0})}}}{2\hbar}. \label{eq:omega_spin_flip} \end{equation} Taking $\abs{\mathbf{B(\mathbf{r}_{0})}} = \SI{0.8e-4}{\tesla}$ gives $\omega_{if} = 2\pi\times\SI{560}{kHz}$. Since this is far smaller than the hyperfine splitting frequency for the ground-state of the \textsuperscript{87}Rb atom, which is $2\pi\times\SI{6.83}{GHz}$, our assumption that $m_F$ is a good quantum number is justified. The method for calculating the Clebsch-Gordan coefficients associated with the spin-flip transition matrix elements, $\bra{f}\hat{\mathbf{S}}_{j,k}\ket{i}$, can be found in \cite{Zettili2009}. For completeness, we note that these matrix elements are \begin{equation} \label{eq: Clebsch-Gordon coeff} \begin{split} \bra{2,2}\hat{\mathbf{S}}_{x}\ket{2,1} &=\phantom{-}\bra{2,1}\hat{\mathbf{S}}_{x}\ket{2,2} = \frac{1}{4},\\ \bra{2,2}\hat{\mathbf{S}}_{y}\ket{2,1} &=-\bra{2,1}\hat{\mathbf{S}}_{y}\ket{2,2} = \frac{\mathrm{i}}{4},\\ \bra{2,2}\hat{\mathbf{S}}_{z}\ket{2,1} &=\phantom{-}\bra{2,1}\hat{\mathbf{S}}_{z}\ket{2,2} = 0.
\end{split} \end{equation} To proceed with our calculations of the transition rates and comparisons for different surface materials, we consider the typical thickness of the metallic wires used to generate the magnetic field in atom-chips, which is $\sim \SI{1}{\micro\metre}$ \cite{Lin2004}. Figure \ref{fig:Johnson_Gr_gold} shows the Johnson noise-limited lifetimes of the atom cloud, $\tau$, calculated versus atom-surface distance, $y_{0}$, for a $1$-$\SI{}{\micro\metre}$-thick gold slab (solid yellow curve), a $125$-$\SI{}{\nano\metre}$-thick gold slab (solid green curve), a doped graphene monolayer with $E_{F} = \SI{0.1}{eV}$ (dashed red curve), an undoped graphene monolayer (dashed blue curve) and a heterostructure consisting of a graphene monolayer encased by two 10-nm-thick hBN layers (solid black curve). As in \cite{Lin2004}, these Johnson noise lifetimes are calculated using Eqs. (\ref{eq:Splin-flip rate}) and (\ref{eq:Lifetime}). The graphene monolayers yield far longer lifetimes than the gold wires do, even for a small wire thickness of $\SI{125}{\nano\metre}$. Making gold wires thinner than $125$ $\SI{}{\nano\metre}$ is possible, but their resistivities then become higher than for bulk gold \cite{salem_japha2010}. At $y_{0} = \SI{1}{\micro\metre}$, the lifetimes for the undoped graphene monolayer and the $1$ $\SI{}{\micro\metre}$-thick gold slab are $\sim \SI{2500}{\second}$ and $\SI{0.34}{\second}$, respectively, giving a lifetime ratio of $\sim \SI{7.4e3}{}$, which is broadly consistent with the estimate of $\sim \SI{6.3e3}{}$ obtained from Eq. \eqref{eq:ratio}. The lifetime for the heterostructure is slightly longer than that for the doped graphene layer. \begin{figure}[ht] \includegraphics[width = \linewidth]{Figures/A_Johnson_Gr_gold.png} \caption{Johnson noise lifetimes, $\tau$, calculated (using Eqs. \eqref{eq:Splin-flip rate} and \eqref{eq:Lifetime}) versus the position of the harmonic trap center, $y_{0}$, for an ultra-cold gas of \textsuperscript{87}Rb atoms trapped above: an undoped graphene monolayer (dashed blue curve); a doped graphene monolayer with Fermi energy $E_{F} = \SI{0.1}{eV}$ (dashed red curve); an hBN-encased graphene-based heterostructure (solid black curve); a $1$ $\SI{}{\micro\metre}$-thick gold slab (solid yellow curve); a $125$ $\SI{}{\nano\metre}$-thick gold slab (solid green curve). The graphene monolayers yield orders of magnitude longer lifetimes than the $1$ $\SI{}{\micro\metre}$-thick gold slab because they have far lower electromagnetic reflectance than gold (see text). Parameters: $T = \SI{300}{\kelvin}$, $\omega = 2\pi\times\SI{560}{kHz}$.} \label{fig:Johnson_Gr_gold} \end{figure} We conclude that Johnson noise in graphene conductors will produce negligible spin-flip losses compared to the thick ($t_{Au}\sim \SI{1}{\micro\metre}$) metal wires typically used in atom-chips, where it dominates the loss rate. Consequently, our analysis of graphene atom-chips will, henceforth, focus on the effects of tunneling and three-body losses and of spatial imperfections. We note, however, from Eq. (\ref{eq:ratio}) that the lifetime above metallic conductors can be increased by decreasing their thickness $t_{Au}$ and, hence, $N_{Au}$. For wires with $t_{Au}=125$ nm, $\tau_G/\tau_{Au} \sim 1400$. Taking the limit of the gold layer thickness to its lattice constant, 0.4 nm, gives $\tau_G/\tau_{Au} \sim 5$. So the advantage of graphene over metallic conductors persists even if the metal wire could be thinned close to the theoretical limit of a monolayer. 
To our knowledge, though, graphene and other exfoliated van der Waals materials are the only monolayers so far produced. Moreover, their hexagonal crystal structure and resulting light-like linear energy band dispersion relations ensure that they can carry high currents despite their low thickness and carrier density. However, if high-quality metallic monolayers could be produced, their low electron density may reduce the Johnson noise and Casimir-Polder potential to levels comparable with exfoliated 2D materials. \subsection{Negligible corrugation effects} Spatial meandering of the current stream lines can, in principle, be created in four ways: deviation from strictly two-dimensional current flow (analogous to surface roughness of 3D conductors); edge roughness resulting from imperfect lithography; electrons scattering from one another or from phonons; spatial variations in the electron potential energy created by impurities or imperfections in, or near, the conducting channel \cite{SinucoA,SinucoB,kruger_andersson_2007, schumm_esteve_2005, wang_lukin_2004}. We now consider the importance of each potential source of roughness in turn.\\ When graphene is encapsulated in hBN, surface roughness and non-two-dimensionality in graphene are only of order 12 pm \cite{thomsen_gunst_2017} because the hBN provides an ultraflat surface for the graphene and is closely lattice matched to it \cite{dean_young_meric_2010, decker_wang_2011, xue_sanchez_2011}. Such low roughness is consistent with that of an individual graphene layer in bulk graphite and will have a negligible effect on the atom trapping potential landscape. Edge roughness will be determined by the quality of the lithography used to define and create the conducting channels. Since graphene is two-dimensional, there will be negligible vertical fluctuations in the channel wall. Edge fluctuations along the channel will be determined by the lithographic process used, and will be comparable to those in existing atom-chips with metallic conductors. For electron beam lithography, the edge fluctuations will be of order 35 nm \cite{xu_lee_2016}, whereas for helium ion beams, values below 5 nm are attainable \cite{aigner_pietra_2008}.\\ In metallic conductors, grain boundaries give rise to local electron scattering processes, which can be detected via their effect on the current flow pattern and resulting modulation of the trapping potential and BEC atom density \cite{SinucoA,SinucoB,aigner_pietra_2008,japha_entin_2008}. By contrast, graphene monolayers contain no grains to induce position-specific scattering processes and resulting atom density fluctuations. Electron-electron and electron-phonon scattering events do occur, but these are spatio-temporally stochastic, rather than occurring at particular fixed positions within the conducting channel, and will therefore not produce roughness in the trapping potential and BEC density profile because of time averaging. Moreover, since their characteristic length scales are shorter than the typical dimensions of atom-chip wires, ballistic transport effects do not need to be considered.
However, electron scattering mechanisms do affect the diffusive electron mobility and, hence, the Johnson noise-limited spin-flip lifetime of the trapped atom cloud.\\ Spatial fluctuations in the electronic potential energy created by imperfections and impurities that are either within the graphene or accumulate at interfaces in hBN-encased graphene structures have been studied theoretically and measured in resonant-tunneling experiments \cite{decker_wang_2011, britnell_2013, martin_akerman_2007, li_hwang_rossi_2011, yan_fuhrer_2011, yankowitz_xue_cormode_2012}. Self-consistent calculations \cite{li_hwang_rossi_2011, yan_fuhrer_2011}, which give excellent quantitative agreement with measurements of graphene's electron mobility, $\mu_G$, versus impurity density and with scanning probe surface studies \cite{martin_akerman_2007}, predict that the correlation length of these potential fluctuations is $\approx 10$ nm. Recent experiments on graphene-boron nitride tunnel transistors have shown that for graphene monolayers encased by several layers of hexagonal boron nitride (hBN), the correlation length is $\approx 12$ nm \cite{britnell_2013,greenaway_vdovin_mishchenko_2015}. Consequently, the associated small-angle current meander will have negligible effect on the potential landscape of atoms trapped even as close as 150 nm from the graphene, and will therefore not influence the minimum atom-surface trapping distance.\\ When graphene is placed or grown epitaxially on hBN, the small lattice mismatch between the two materials gives rise to a strain-induced moir\'{e} pattern and superlattice potential, which can modify the properties of electrons within the graphene. Moir\'{e} periods up to 80 nm have been realized \cite{davies_albar_2017}, and further increases in period may modulate the current flow on a length scale long enough to produce detectable variation in the density profile of a BEC trapped nearby. Such variations could yield information about the superlattice potential and the underlying strain mechanisms. \section{Lifetime of a trapped atomic BEC} In this section, we find an analytical expression for the total loss rate of an elongated atomic BEC trapped in the vicinity of an atom-chip. We consider contributions from atom tunneling towards the chip surface (Sec. \ref{sec:tunnelling towards the chip surface}), Johnson-noise induced losses (Sec. \ref{sec:Temporal_Johnson Noise}), and the 3-body loss mechanism. \subsection{Methodology} First, we consider a harmonic magnetic trapping field, $\mathbf{B}(\mathbf{r})$, formed near the surface of an atom-chip in the coordinate system shown in Fig.~\ref{fig:chip_diagram}, where $\mathbf{r} = (x,y,z)$, $\omega_{x}$, $\omega_{y}$, and $\omega_{z}$ are the characteristic trapping frequencies in the $x$-, $y$-, and $z$-axes, respectively, and the trap center is located at $\mathbf{r}_{c} = (0,y_{c},0)$. The potential energy profile of an atom interacting with this magnetic field is modelled by an anisotropic three-dimensional harmonic-oscillator potential \begin{equation} U(x, y, z) = \frac{1}{2}m\big(\omega_{x}^{2}x^{2} + \omega_{y}^{2}(y-y_{c})^{2} + \omega_{z}^{2}z^{2}\big). \label{eq:U_Harmonic_main_text} \end{equation} Note that this magnetic potential originates from the interaction of the magnetic moment of the trapped atom and the magnetic field given in Eq. \eqref{equ:MagPot} and that the actual potential profile of an atom-chip trap is determined by the wire and current configurations.
Equation \eqref{eq:U_Harmonic_main_text} gives a good approximation for the potential landscape generated by the Z-shaped trapping wires often used in atom-chip experiments.\\ We assume that such a magnetic trap has cylindrical symmetry and is elongated along the $z$-axis, so that $\omega_{r} = \omega_{x,y}$, and $\omega_{r} \gg \omega_{z}$, where $\omega_{r}$ denotes the trapping frequency in the radial direction (i.e. in the $x$-$y$ plane). We also assume that $\omega_{r}$ is so high that the trapped atoms only occupy the ground state in the radial direction. To include the perturbing effect of the CP potential on the effective trapping frequency in the $y$-direction, henceforth we approximate the radial trapping frequency as $\omega_{r} = \sqrt{\omega_{x}\omega_{\mathrm{eff}}}$. An additional offset magnetic field, $\mathbf{B}_{0} = (0, 0, B_{z})$, of order $\SI{}{mT}$, is added in the $z$-direction to ensure that the magnetic field is non-zero at the trap center. Together, these assumptions enable us to treat the magnetic potential energy landscape as a highly elongated, quasi-one-dimensional trap.\\ It follows from the above assumptions about the trapping frequencies that the chemical potential, $\mu$, of the condensate must satisfy the following constraint \begin{equation} 5\hbar\omega_{z} < \mu < \frac{3}{2}\hbar\omega_{r}, \label{eq:chemical_potential_condition} \end{equation} which allows us to further assume that the mean atom density profile of the condensate can be described by a one-dimensional Thomas-Fermi distribution in the elongated ($z$) direction, and by the Gaussian ground-state wave function of a quantum harmonic oscillator in the tightly-confining radial ($r$) direction \cite{pethick_smith_2008}. The atom density profile is then given by \begin{equation} \rho_{0}(r, z) = \frac{1}{U_{0}}\Big(\mu_{\mathrm{eff}} - \frac{m\omega_{z}^{2}}{2}z^{2}\Big)e^{-r^{2}/2a_{r}^2}, \label{eq:Thomas-Fermi distribution_main_text} \end{equation} where $U_{0} = 4\pi\hbar^{2}a_{T}/m$ is the effective interaction strength for a pair of slowly moving atoms with s-wave scattering length $a_{T}$ \cite{olshanii_1998}, $\mu_{\mathrm{eff}} = \mu - \hbar\omega_{r}$, $m = \SI{1.44e-25}{kg}$ is the mass of an \textsuperscript{87}Rb atom, $r = \sqrt{x^{2} + (y-y_{c})^{2}}$ is the radial distance relative to the center of the trap, and $a_{r} = \sqrt{\hbar/m\omega_{r}}$ is the characteristic harmonic oscillator length.\\ Integrating Eq. \eqref{eq:Thomas-Fermi distribution_main_text} over the radial co-ordinate gives the mean line density profile along the $z$-axis (see Appendix \ref{sec:3b_loss_appendix}) \begin{equation} n_{0}(z) = \frac{2\pi a_{r}^{2}}{U_{0}}\Big(\mu_{\mathrm{eff}} - \frac{m\omega_{z}^{2}}{2}z^{2}\Big), \label{eq:line_density_main_text} \end{equation} where the chemical potential $\mu$ of the trapped atom cloud is determined by the peak mean line density, i.e. at $z = 0$, as follows \cite{menotti_stringari_2002}: \begin{equation} \mu = \big(2a_{T}n_{0}(0) + 1\big)\hbar\omega_{r}, \label{eq:chemical_potential_expression_line_den} \end{equation} where $a_{T} = \SI{5.6}{\nano\metre}$ is the scattering length for \textsuperscript{87}Rb atoms in the $\ket{F,m_{F}} = \ket{2,2}$ state \cite{roberts_1998}.\\ \begin{figure}[ht] \centering \includegraphics[width = \linewidth]{Figures/A_volume_density.png} \caption{Color map of the atom volume density calculated for the trapped atom cloud using the Thomas-Fermi distribution in Eq. \eqref{eq:Thomas-Fermi distribution_main_text}.
The color bar scale is in units of $\SI{}{m^{-3}}$. Parameters: $\omega_{r} = 2\pi\times\SI{20}{kHz}$, $\omega_{z} = 0.006\times\omega_{r}$, $N = 750$.} \label{CP_many_structures} \end{figure} We now define the total lifetime of the trapped atom cloud, $\tau_{\mathrm{tot}}$, to be the time taken for the initial peak atom line density, $n_0(z = 0)$, to drop below the smallest experimentally-detectable line density, which we take to be $n_{\mathrm{min}} = \SI{3e6}{m^{-1}}$ \cite{smith_aigner_2011}. We determine the upper limit of $\tau_{\mathrm{tot}}$ by taking $n_0(z = 0)$ to be the maximum possible value, $n_{\mathrm{max}}=14.8\times n_{\mathrm{min}}$, satisfying inequality \eqref{eq:chemical_potential_condition}.\\ Let us now consider the density-dependent loss rates originating from three distinct atom loss mechanisms: Johnson noise-induced spin flips, quantum tunneling to the chip surface, and three-body processes. The Johnson noise-induced loss rate is \begin{equation} \frac{\mathrm{d}n_{0}(z)}{\mathrm{d}t}\biggr\rvert_{\mathrm{JN}} = -\Gamma_{\mathrm{JN}} n_{0}(z), \label{eq:Johnson noise induced loss rate} \end{equation} where $\Gamma_{\mathrm{JN}}$ is the Johnson noise-induced spin-flip transition rate given in Eq. \eqref{eq:Splin-flip rate}. As described above, the tunneling loss rate has a similar form \begin{equation} \frac{\mathrm{d}n_{0}(z)}{\mathrm{d}t}\biggr\rvert_{\mathrm{tun}} = -\Gamma_{\mathrm{tun}} n_{0}(z), \label{eq:tunnelling loss rate} \end{equation} where $\Gamma_{\mathrm{tun}} = \omega_{\mathrm{eff}}\Tilde{T}/(2\pi)$.\\ By contrast, the three-body loss rate is proportional to the cube of the mean line density \cite{bouchoule_schemmer_henkel_2018,schemmer2018}, \begin{equation} \frac{\mathrm{d}n_{0}(z)}{\mathrm{d}t}\biggr\rvert_{\mathrm{3b}} = -\Gamma_{\mathrm{3b}}n_{0}(z)^{3}, \label{eq:1D loss rate_main_text} \end{equation} where $\Gamma_{\mathrm{3b}} = \kappa_{\mathrm{Rb}}/(12 \pi^{2} a_{r}^{4})$ and $\kappa_{\mathrm{Rb}} = \SI{1.8e-41}{m^{6}s^{-1}}$ is the three-body recombination rate for \textsuperscript{87}Rb in the $F = m_{F} = 2$ state \cite{soding1999}.\\ Combining all three distinct loss rates gives the total loss rate \begin{equation} \frac{\mathrm{d}n_{0}(z)}{\mathrm{d}t}\biggr\rvert_{\mathrm{tot}} = -\Gamma_{\mathrm{3b}}n_{0}(z)^{3} - (\Gamma_{\mathrm{tun}}+\Gamma_{\mathrm{JN}})n_{0}(z). \label{eq:total loss rate_main_text} \end{equation} In this paper, we will only consider losses occurring at $z = 0$, where the line density peaks and so the total loss rate is maximal. Hence, we determine the lower limit on the total lifetime given by the integral: \begin{equation} \tau_{\mathrm{tot}} = \int_{n_\mathrm{max}}^{n_{\mathrm{min}}}\frac{\mathrm{d}n_{0}(z)}{-\Gamma_{\mathrm{3b}}n_{0}(z)^{3} - (\Gamma_{\mathrm{tun}}+\Gamma_{\mathrm{JN}})n_{0}(z)}\Big\rvert_{z = 0}, \label{eq:total life time} \end{equation} which can be integrated analytically to yield: \begin{equation} \tau_{\mathrm{tot}} = \frac{\log{\Bigg[\frac{\Gamma_{\mathrm{3b}}n_\mathrm{min}^{2} + (\Gamma_{\mathrm{tun}}+\Gamma_{\mathrm{JN}})}{\alpha^{2}\Gamma_{\mathrm{3b}}n_\mathrm{min}^{2} + (\Gamma_{\mathrm{tun}}+\Gamma_{\mathrm{JN}})}\Bigg]} + 2\log{(\alpha)}}{2(\Gamma_{\mathrm{tun}}+\Gamma_{\mathrm{JN}})}, \label{eq:total_life_time_analytic} \end{equation} where $\alpha = n_{\mathrm{max}}/n_{\mathrm{min}} = 14.8$.
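Equation \eqref{eq:total_life_time_analytic} is straightforward to evaluate numerically; the sketch below reproduces the three-body-limited lifetime quoted in the next subsection. The small one-body rate passed in stands for $\Gamma_{\mathrm{tun}}+\Gamma_{\mathrm{JN}}$ and is an assumed placeholder value.

\begin{verbatim}
# Total-lifetime formula, Eq. (total_life_time_analytic), with the
# three-body coefficient Gamma_3b = kappa_Rb/(12 pi^2 a_r^4).
import numpy as np

hbar, m = 1.0546e-34, 1.44e-25
kappa_Rb = 1.8e-41                 # m^6/s, 87Rb |2,2> three-body rate
n_min, alpha = 3e6, 14.8           # m^-1, and n_max/n_min
w_r = 2*np.pi*20e3

a_r = np.sqrt(hbar/(m*w_r))        # harmonic-oscillator length
G3b = kappa_Rb/(12*np.pi**2*a_r**4)

def tau_tot(G1):                   # G1 = Gamma_tun + Gamma_JN (1/s)
    A = G3b*n_min**2
    return (np.log((A + G1)/(alpha**2*A + G1)) + 2*np.log(alpha))/(2*G1)

print(tau_tot(1e-9))  # G1 -> 0 limit: ~12.4 s, the 3-body-limited lifetime
\end{verbatim}

In the $\Gamma_{\mathrm{tun}}+\Gamma_{\mathrm{JN}} \to 0$ limit this expression reduces to $(1-\alpha^{-2})/(2\Gamma_{\mathrm{3b}}n_{\mathrm{min}}^{2})$, the pure three-body result.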
\subsection{Results} In this section, we calculate and compare atom-cloud lifetimes for three different surface structures: a $\SI{1}{\micro\metre}$-thick gold slab, a graphene monolayer, and a graphene monolayer encased by two $\SI{10}{\nano\metre}$-thick hBN layers. The first structure is representative of the present generation of atom-chips, and provides typical lifetimes for comparison with graphene-based devices. The second structure represents the theoretical limit for 2D materials and exemplifies the predicted improvements in performance and functionality. The third atom-chip structure is within the scope of existing fabrication techniques for vdW heterostructures. The graphene conductor is encased within hBN multilayers, which support it and shield it from adsorbates.\\ Fig.~\ref{fig:Total_lifetimes_Gr_and_Gold} shows the lifetimes resulting from each of the three loss mechanisms considered in the previous section, together with the total lifetime, calculated versus the position of the harmonic trap center, $y_{0}$, for \textsuperscript{87}Rb atoms near (a) a graphene monolayer and (b) the $\SI{1}{\micro\metre}$-thick gold slab. Note that in this figure and henceforth, all the lifetimes are calculated using Eqs. \eqref{eq:Johnson noise induced loss rate} to \eqref{eq:total_life_time_analytic}, which account for the minimum experimentally-detectable atom density, whereas those shown in Figs. \ref{fig:Tunnel_lifetime_main} and \ref{fig:Johnson_Gr_gold} are calculated using Eqs. \eqref{eq:tunnelling loss lifetime}, \eqref{eq:Splin-flip rate} and \eqref{eq:Lifetime} to facilitate comparison with the corresponding lifetimes reported in \cite{Lin2004}. The unperturbed transverse trapping frequency, $\omega_{y} = 2\pi\times\SI{20}{kHz}$, is used in all cases.\\ Considering Fig.~\ref{fig:Total_lifetimes_Gr_and_Gold}, we firstly note that the 3-body loss lifetimes (dashed red curves) are identical for the two structures, as expected from Eq. \eqref{eq:1D loss rate_main_text}. Secondly, as a consequence of weaker CP attraction, the tunneling loss lifetime for the graphene monolayer is longer than for the gold slab (compare dot-dashed green curves) and the minimum atom-surface trapping distance is smaller. Thirdly, a significant improvement in the Johnson noise-limited lifetime is apparent for the graphene monolayer. Whereas Johnson noise in the metal wire limits the total atom lifetime, for graphene the 3-body lifetime of $\sim \SI{12.5}{\second}$ is the limiting factor and Johnson noise is insignificant. \begin{figure}[ht] \includegraphics[width = \linewidth]{Figures/A_Total_lifetimes_Gr_US.png} \includegraphics[width = \linewidth]{Figures/A_Total_lifetimes_Gold_US.png} \caption{Lifetimes calculated (from Eqs. \eqref{eq:Johnson noise induced loss rate} to \eqref{eq:total_life_time_analytic}) versus the position of the harmonic trap center, $y_{0}$, for an \textsuperscript{87}Rb quasi-condensate trapped near (a) a graphene monolayer and (b) a $\SI{1}{\micro\metre}$-thick gold slab, for three different loss mechanisms: 3-body processes (dashed red curves); tunneling losses (dot-dashed green curves); Johnson noise-induced losses (solid blue curves). The total lifetime is shown by the solid black curves. For the graphene monolayer, the total lifetime is limited by 3-body losses to $\sim \SI{12.5}{\second}$, whereas for the gold slab the total lifetime is limited by Johnson noise. 
Parameter: $\omega_{y} = 2\pi\times\SI{20}{kHz}$.} \label{fig:Total_lifetimes_Gr_and_Gold} \end{figure} Figure \ref{fig:Total_lifetimes_hBN_Gr} shows lifetimes calculated for \textsuperscript{87}Rb atoms near the hBN-graphene heterostructure, taking the same trapping frequency as in Fig.~\ref{fig:Total_lifetimes_Gr_and_Gold}. The only noticeable difference in the lifetimes compared with those for a graphene monolayer alone relates to tunneling loss (dot-dashed green curve). For given $y_0$, the hBN-graphene structure generates a stronger CP potential and, hence, a shorter tunneling lifetime and a larger minimum distance from the trap center to the surface. The Johnson noise is insensitive to the addition of the hBN cladding layers, because such layers change neither the number nor the mobility of the free charge carriers in the graphene, and the total lifetime is still limited by the 3-body loss mechanism. \begin{figure}[ht] \includegraphics[width = \linewidth]{Figures/A_Total_lifetimes_Multi_US.png} \caption{Lifetimes calculated (from Eqs. \eqref{eq:Johnson noise induced loss rate} to \eqref{eq:total_life_time_analytic}) versus the position of the harmonic trap center, $y_{0}$, for an \textsuperscript{87}Rb quasi-condensate trapped near an hBN-encased graphene heterostructure for three different loss mechanisms: 3-body processes (dashed red curve); tunneling losses (dot-dashed green curve); Johnson noise-induced losses (solid blue curve). The total lifetime is shown by the solid black curve. For $y_{0} > \SI{0.5}{\micro\metre}$, where the magnetic trap has a well-defined barrier on the side near the surface, the lifetimes are virtually identical to those for a single layer of graphene. Parameters: $\omega_{y} = 2\pi\times\SI{20}{kHz}$, hBN thickness $= \SI{10}{\nano\metre}$.} \label{fig:Total_lifetimes_hBN_Gr} \end{figure} Figure \ref{fig:2D_total_lifetime_GrhBN_Gold} shows color maps of the total lifetimes, calculated versus the position of the trap center from the surface and the transverse trapping frequency for (a) the hBN-graphene heterostructure and (b) the $\SI{1}{\micro\metre}$-thick gold slab. The color scale is logarithmic and is common to (a) and (b). Whereas the total lifetime for the thin gold slab is mainly below $\SI{1}{\second}$ (yellow-green on the color scale), for the hBN-graphene structure it exceeds $\SI{100}{\second}$ (red shading) for high $y_0$ and low $\omega_y$ values. \begin{figure}[ht] \includegraphics[width = \linewidth]{Figures/A_2D_total_lifetime_GrhBN_Gold.png} \caption{Color maps of total lifetimes, calculated (from Eqs. \eqref{eq:Johnson noise induced loss rate} to \eqref{eq:total_life_time_analytic}) versus $y_0$ and $\omega_{y}/2\pi$ for (a) an hBN-encased graphene heterostructure and (b) a $\SI{1}{\micro\metre}$-thick gold slab, with a common color scale (right). The lifetime is not defined in the white region because the CP potential distorts the harmonic magnetic potential (i.e. reduces the barrier nearest to the surface) so strongly that the trap cannot be formed. For any given $y_0$ and $\omega_{y}/2\pi$ values, the lifetime for the hBN-graphene structure is longer than for the thin gold slab.} \label{fig:2D_total_lifetime_GrhBN_Gold} \end{figure} \section{Possible Implementations} To see whether a magnetic trap close to the surface can be realized using graphene wires, we assume a simple side-guide configuration, consisting of a graphene wire carrying a current, $I$, and a bias magnetic field of magnitude $\abs{\mathbf{B}_{b}}$. 
This forms a magnetic field minimum at a distance \begin{equation} y_{0} = \frac{\mu_{0}}{2\pi}\frac{I}{\abs{\mathbf{B}_{b}}} \label{eq:trap_distance} \end{equation} from the graphene sheet. Using the derivation given in \cite{Folman2002} and assuming that the trapping distance is larger than the width of the graphene wire, the trapping frequency is approximated by \begin{equation} \omega_{y} = 2\pi \sqrt{\frac{\mu_{B}g_{F}m_{F}}{m \abs{\mathbf{B}_{0}}}} \frac{\abs{\mathbf{B}_{b}}^2}{I\mu_{0}}, \label{eq:trap_freq} \end{equation} where $\abs{\mathbf{B}_{0}}$ is the magnitude of the offset magnetic field parallel to the direction of current flow, used to avoid Majorana spin-flips. A trap frequency of $\omega_{y} \approx 2\pi\times\SI{20}{kHz}$ is therefore realizable at a distance of $\SI{400}{\nano\metre}$ with a total current of $\SI{70}{\micro\ampere}$ in addition to a bias field of $\SI{35}{\micro\tesla}$ and an offset (Ioffe) field of $\SI{80}{\micro\tesla}$. Since exfoliated and epitaxial graphene on bulk substrates can support current densities in excess of $\sim$ 1000 A/m even in an ultra-high vacuum \cite{Breakdown1,Breakdown2}, and current densities as high as $\sim$ 700 A/m have been reported for free-standing monolayer graphene \cite{Bolotin2}, a graphene conducting channel only 50 nm wide would be sufficient to ensure trap operation with negligible heating. Wires with larger widths could increase the possible trapping frequencies or enable trapping further from the surface, which may assist with loading the trap. We note, however, that such a trap could not be loaded directly but would need to be mounted on a carrier chip featuring thick metal wires, which generate the field used initially to cool and trap the atoms. This carrier chip must be placed far enough from the graphene and the atoms to produce negligible Johnson noise and CP attraction effects, but also close enough to create a sufficiently compressed trap. Given a $\SI{50}{\micro\metre}$ separation between the atom cloud and the carrier chip, gold wires carrying a current of $\SI{1}{\ampere}$ could produce the transverse trap frequency of $\omega_{r} = 2\pi\times\SI{20}{kHz}$ needed for the traps shown in Figs. 4, 6, 9 and 10. Since thin van der Waals heterostructures are almost transparent, laser light can pass through them and be retro-reflected from a metal coating on the carrier chip in order to form a mirror MOT.\\
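As a quick numerical check of the figures quoted above (an illustrative sketch of our own, not code from any of the cited works; we take $g_{F} = 1/2$ and $m_{F} = 2$ for the $\ket{2,2}$ state), Eqs. \eqref{eq:trap_distance} and \eqref{eq:trap_freq} can be evaluated directly in Python:
\begin{verbatim}
import numpy as np

mu_0 = 4e-7 * np.pi    # vacuum permeability (T m / A)
mu_B = 9.274e-24       # Bohr magneton (J / T)
m_Rb = 1.44e-25        # mass of 87Rb (kg)
g_F, m_F = 0.5, 2      # Lande factor and projection for |2,2>

def trap_distance(I, B_bias):
    # Eq. (trap_distance): height of the field minimum above the wire
    return mu_0 * I / (2.0 * np.pi * B_bias)

def trap_frequency(I, B_bias, B_offset):
    # Eq. (trap_freq): transverse trapping frequency (rad / s)
    return (2.0 * np.pi
            * np.sqrt(mu_B * g_F * m_F / (m_Rb * B_offset))
            * B_bias**2 / (I * mu_0))

I, B_b, B_0 = 70e-6, 35e-6, 80e-6      # A, T, T
print(trap_distance(I, B_b))           # ~4e-7 m, i.e. 400 nm
print(trap_frequency(I, B_b, B_0))     # of order 2 pi x 10 kHz
\end{verbatim}
With these inputs the trap height reproduces the quoted $\SI{400}{\nano\metre}$, while the transverse frequency comes out on the $2\pi\times\SI{10}{kHz}$ scale assumed throughout this section.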
In an alternative configuration, the potential for trapping the atoms could be provided by optical fields, for example an electromagnetic standing wave generated by on-chip mounted optics. In this case, the graphene wires could be used to perturb the optical potential strongly, or to act as a source of magnetic fields enabling, for example, tuning of the scattering length via Feshbach resonances.\\ Another issue that has been observed when trapping atoms close to a surface, especially a metal one, is the effect of stray electric fields originating from the polarization of adsorbed atoms \cite{Hunger_Camerer_2010}. Although this effect has not yet been measured for graphene surfaces, covering the graphene layer with a dielectric layer such as hBN is expected to suppress these effects by limiting the polarization of any adsorbates and keeping them away from the graphene layer(s), thus preventing them from doping it.\\ Graphene-based atom chips could be fabricated by MBE growth of graphene \cite{MBEgrowth1,MBEgrowth2,MBEgrowth3}, or by deposition of exfoliated graphene on hBN, followed by selective etching of the graphene to define the conducting channel and, finally, deposition of capping layers of hBN, either by epitaxial growth or by placing exfoliated hBN layers, as is now widely done to create van der Waals heterostructures \cite{vdW1,vdW2}.\\ \section{Conclusions} In summary, we have presented a general formalism for calculating how the lifetime of an atomic quantum gas is affected by the Johnson noise and atom-surface CP attraction of van der Waals heterostructures comprising arbitrary configurations of 2D materials such as graphene. The electromagnetic reflection coefficients and corresponding electrical conductivities of the 2D layers are of crucial importance in determining both the Johnson noise and the CP potential. Since both of these quantities are lower for graphene than for the metal layers generally used in atom-chips, so too are the Johnson noise and the CP atom-surface attraction. Consequently, for a given atom-surface separation, the spin-flip and tunneling loss rates are both lower near graphene-based van der Waals heterostructures than near metal wires, meaning that such heterostructures can improve the performance of atom-chips. For example, although high Johnson noise limits the lifetime of atom clouds trapped between $0.4$ and $\SI{2}{\micro\metre}$ from a metallic chip surface to less than $\SI{1}{\second}$, such noise is negligible for atoms trapped near graphene, whose lifetime can, in principle, reach $\sim\SI{100}{\second}$, limited only by 3-body losses and background gas collisions. For atom-surface separations below $\SI{0.4}{\micro\metre}$, the lifetime of the atom cloud is limited by tunneling losses for both metallic and 2D conductors. However, due to the weak CP atom-surface attraction, such losses are lower near van der Waals heterostructures; around 4 orders of magnitude lower for atoms held $\SI{0.4}{\micro\metre}$ from the surface.\\ As a result of their favourable noise and CP characteristics, van der Waals heterostructures offer a solid-state solution to the long-standing challenge of holding ultracold atom clouds closer than $\SI{1}{\micro\metre}$ from an atom-chip surface for long enough (up to $\SI{100}{\second}$) to perform various experiments and measurements on the atom cloud. Moreover, van der Waals heterostructures that enable robust sub-\si{\micro\metre} atom trapping would control atomic condensates on length scales that are smaller than presently achievable optically and below the healing length, thereby providing access to new regimes of quantum many-body physics. The ability to achieve long lifetimes for ultracold atoms held as close as $\SI{400}{\nano\metre}$ to an electronic device also opens a route to creating new hybrid atomic-solid state quantum systems, for example a Rydberg atom coupled to a quantum dot formed within a 2D conductor \cite{Mancsdot,Mancsdot2}. Since the micron-scale confinement length of electrons in the quantum dot is similar to that of the excited electron in the Rydberg atom, new types of electron orbital, shared between the atomic and condensed-matter parts of the system, may be created. 
Such hybrid states may yield new regimes of quantum control and information storage/processing, for example relating to side-band cooling of graphene \cite{Miskeen_Khan}.
{ "timestamp": "2021-05-06T02:12:04", "yymm": "2105", "arxiv_id": "2105.01907", "language": "en", "url": "https://arxiv.org/abs/2105.01907" }
\section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. 
When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. 
Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} Always use midrule to separate table header rows from data rows, and use it only for this purpose. This enables assistive technologies to recognise table headers and support their users in navigating tables more easily. \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{A woman and a girl in white dresses sit in an open car.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Every figure should also have a figure description unless it is purely decorative. These descriptions convey what's in the image to someone who cannot see it. They are also used by search engine crawlers for indexing images, and when images cannot be loaded. 
A figure description must be unformatted plain text less than 2000 characters long (including spaces). {\bfseries Figure descriptions should not repeat the figure caption -- their purpose is to capture important information that is not already provided in the caption or the main text of the paper.} For figures that convey important and complex new information, a short text description may not be adequate. More complex alternative descriptions can be placed in an appendix and referenced in a short figure description. For example, provide a data table capturing the information in a bar chart, or a structured list representing a graph. For additional information regarding how best to write figure descriptions and why doing this is so important, please see \url{https://www.acm.org/publications/taps/describing-figures/}. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \bibliography{bibfile} \end{verbatim} \section{Introduction} \begin{figure} \centering \includegraphics[width=3.3in]{fig/subfig.pdf} \caption{\textbf{The brief architecture of our backbone Siamese vision transformer.} For an image pair (A,B), we cut both images into several patches. Then every patch is flattened and projected to a fixed-size embedding with a fully connected layer, resulting in a sequence of embeddings. Subsequently, we add a classification token at the front of each sequence. Then, the two sequences are fed into the Siamese transformer architecture. At last, we add a hash layer projecting the features into $B$-bit hash vectors. The Bayesian learning module is employed to preserve the similarity in the hashing space for each pair.} \label{fig:subfig} \end{figure}\par The past decade has been characterized by the explosive amount of high-dimensional data generated by countless end-users and organizations, resulting in a surge of research attention on accurate and efficient information retrieval methods. Among these tasks, large-scale image retrieval has gained growing traction for its pervasive uses in various scenarios, e.g., recommendation systems, search engines, and remote sensing systems. Among all the methods proposed for this challenging task \cite{jegou2010product,ge2013optimized,malkov2018efficient,fu2017fast}, Hamming hash-based methods have achieved pronounced successes. These methods aim to learn a hash function mapping images from the high-dimensional pixel space into a low-dimensional Hamming space while preserving their visual similarity in the original pixel space. Scores of works have been introduced. 
Based on the way they extract features, existing hashing-based works can be divided into two categories, namely, shallow methods and deep learning-based methods. Shallow methods~\cite{charikar2002similarity,indyk1997locality,weiss2008spectral} learn their hash functions from hand-crafted visual descriptors (e.g. \textit{GIST}~\cite{oliva2001modeling}). Nonetheless, handcrafted features do not guarantee accurate preservation of the semantic similarities of raw image pairs, resulting in degraded performance in the subsequent hash function learning process. Deep learning-based methods~\cite{xia2014supervised,erin2015deep} generally achieve significant performance improvements over their shallow counterparts. The common learning paradigm involves two phases. The first phase aims to learn discriminative feature representations with deep convolutional neural networks (CNNs), e.g. \textit{AlexNet}. The second phase involves designing diversified non-linear functions to squash the continuous features into binary Hamming codes and devising various losses~\cite{liu2016deep,he2018hashing,cao2018deep,cakir2019hashing,fan20deep} to preserve the similarity of the raw pixel space. \par \begin{figure*} \centering \includegraphics[width=6.7in]{fig/architecture.pdf} \caption{The detailed architecture of the proposed \textbf{TransHash}. The upper part denotes the training stage. Specifically, we follow the same protocols as ViT by feeding the patch embedding together with the position embedding into the transformer encoder. At the last layer of the transformer, we design two parallel transformer blocks: global and local transformer blocks. For the global feature and each local feature, we design a specific hashing layer. In the testing stage, the global and all the local hash vectors are concatenated and quantized into one hash code.} \label{fig:mainfig} \end{figure*} Recently, transformers~\cite{vaswani2017attention} have demonstrated great success in natural language processing~\cite{devlin2018bert,brown2020language}. With the advent of the \textit{Vision Transformer}, a variant of the transformer tailored for computer vision tasks, transformers have trumped numerous CNN-based methods in various computer vision tasks (e.g. image classification~\cite{dosovitskiy2020image} and object re-identification~\cite{he2021transreid}). As is shown in Fig.~\ref{fig:subfig}, the \textit{Vision Transformer} works by first reshaping the input images into a sequence of 2D patches. In the later stage, the 2D patches are transformed into $D$-dimensional vectors with a trainable linear projection matrix. Then, the sequence of 1D vectors is fed into the standard transformer architecture to learn a usable feature representation. Inspired by the pronounced performance of \textit{ViT} in other vision tasks, we ponder the possibility of building novel deep hashing methods with pure transformers. \par In this paper, we build up a novel transformer-based hashing method, dubbed \textbf{TransHash}, which is the very first deep hashing method that does not adopt a convolutional neural network (CNN) as the backbone architecture. Specifically, targeting pairwise deep hashing learning, we design a Siamese transformer backbone, which is essentially two identical transformers sharing the same weight parameters \cite{bromley1993signature}. 
On top of this innovation, inspired by \cite{he2021transreid}, we design a dual-stream feature learning module by changing the last layer of the two Siamese transformers into two parallel branches. Concretely, for the first branch, we learn a global feature representation. In parallel, we reorder the sequence of output features from the second-to-last layer into $K$ groups. The $K$ groups are concatenated with the shared output token and then fed into another transformer layer to generate $K$ local features. The primary merits are stated as follows. Firstly, the model can simultaneously learn fine-grained global and local features with the joint global and local stream design. Secondly, similar to \cite{lai2015simultaneous}, which employs a divide-and-encode module to reduce the redundancy of the learned feature representation, our method can achieve similar effects. Since the final learned representation is a concatenation of the global representation and several local representations, the subsets of the final feature vector are loosely correlated, resulting in increased independence and minimized redundancy. To further preserve the semantic similarity of image pairs in the feature space, we propose to adopt a Bayesian learning framework to pull similar pairs close and push dissimilar pairs apart in the embedding space for all the global and local features. Finally, since the learned feature representations are continuous in nature, we need to adopt the \textit{sign} function $h = \textit{sign}(f)$ to generate binary Hamming hash codes in the test stage. However, owing to the sizable gap between the continuous feature representation $f$ and the hash code $h$ after the \textit{sign} function, known as the \textit{quantization error}~\cite{zhu2016deep}, directly generating hash codes with the $sign$ function in the testing stage leads to sub-optimal retrieval performance. In an effort to bridge this gap in the training stage, we reformulate the similarity-preserving learning problem as a constrained optimization problem. Concretely, on top of the Bayesian learning module, we add a Cauchy quantization loss~\cite{cao2018deep} to statistically bridge the gap between the continuous feature representations and the binary hash codes. \par To sum up, we make the following contributions: \begin{enumerate} \item We design a Siamese transformer backbone based on \textit{ViT}, consisting of two identical vision transformers sharing the same weight parameters. \item We innovate a novel two-stream feature learning module by changing the last layer of the transformer into two independent parallel branches. In this fashion, we can learn global and local features at the same time. Meanwhile, as stated before, it also promotes the independence of the learned final hash code vector while reducing bit redundancy. \item By further adopting the similarity-preserving Bayesian learning module with a quantization constraint, we build up a novel deep hashing framework for large-scale image retrieval with a pure transformer. To the best of our knowledge, this is the very first work on deep learning-based hashing that does not adopt a convolutional neural network as the backbone. \item We conduct comprehensive experiments on three widely-studied datasets: \textbf{CIFAR-10}, \textbf{NUSWIDE} and \textbf{IMAGENET}. The results show that we outperform all the state-of-the-art methods across the three datasets by large margins. 
\end{enumerate} \section{Related Works} \subsection{CNNs in Computer Vision} The convolutional neural network was first introduced in \cite{cnnoriginal} to recognize hand-written digits. It proposed convolutional kernels to capture visual context and achieved notable performance. Nonetheless, it was not until the innovation of \textit{AlexNet} \cite{alexnet} that CNNs started to become the workhorse of almost all mainstream computer vision tasks, e.g. \textit{Instance Segmentation}~\cite{bolya2019yolact,ren2017end}, \textit{Image Inpainting}~\cite{yeh2017semantic,yu2019free}, \textit{Deep Hashing}~\cite{cao2017hashnet,cao2018deep} and \textit{Person Re-identification}~\cite{hermans2017defense,ye2018visible,ye2018hierarchical,chen2020maenet}. To further boost the capability of CNNs, a series of deeper and more effective convolutional neural networks have been proposed, e.g. \textit{VGG}~\cite{vgg}, \textit{GoogleNet}~\cite{googlenet}, \textit{ResNet}~\cite{resnet} and \textit{EfficientNet}~\cite{tan2019efficientnet}. While CNNs are still dominant across various computer vision tasks, the recent shift of attention to transformer-based architectures has opened up possibilities to adopt transformers as potent alternatives to convolutional neural networks. Our work is among the first endeavours to replace CNNs with pure transformer-based architectures in traditional computer vision tasks. \subsection{Transformer in Vision} The \textit{Transformer} was first proposed in \cite{vaswani2017attention} for sequential data in the field of natural language processing (NLP). Since then, many studies have investigated the effectiveness of the Transformer in computer vision tasks by feeding it sequences of feature maps extracted by CNNs~\cite{girdhar2019video,carion2020end,xie2021segmenting}. In 2020, \textbf{Google} proposed the \textit{Vision Transformer}~(ViT)~\cite{dosovitskiy2020image}, which applies a pure transformer directly to a sequence of image patches for image classification. Variants of \textit{ViT} have achieved remarkable successes. For instance, \cite{liu2021swin} proposes a hierarchical vision transformer using shifted windows. \cite{wang2021pyramid} proposes a pyramid vision transformer tailored for dense prediction. Further, \cite{he2021transreid} presents the first pure-transformer architecture for person re-identification. By utilizing side information and innovating a novel jigsaw branch, it achieves state-of-the-art results across multiple object re-identification datasets. The \textit{Vision Transformer} is still in its nascent stages, and mounting research attention is being directed to investigating its potential in diversified computer vision tasks. \subsection{Hashing for Image Retrieval} Deep hashing for large-scale image retrieval has been drawing growing research attention in recent years~\cite{indyk1997locality,weiss2008spectral,gong2012iterative,heo2012spherical,jegou2010product,ge2013optimized}. According to the way they extract features, we can categorize existing hashing methods into two groups: shallow hashing methods and deep learning-based hashing methods. \par Typical shallow methods rely on handcrafted features to learn a hashing function mapping visual images into binary hash codes. A canonical example is \textbf{LSH} (Locality Sensitive Hashing) \cite{indyk1997locality}, which seeks to find a locality-sensitive hash family where the probability of hash collisions for similar objects is much higher than for dissimilar ones. Later, \cite{charikar2002similarity} proposed a variant of LSH (dubbed SimHash) for cosine similarity in Euclidean space. 
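To make the flavor of such shallow methods concrete, the random-hyperplane construction behind SimHash fits in a few lines; the sketch below is purely illustrative and is not taken from any cited implementation:
\begin{verbatim}
import numpy as np

def simhash(features, n_bits, seed=0):
    # Random-hyperplane LSH: each bit records the side of a random
    # hyperplane, so vectors with small cosine distance agree on
    # each bit with high probability.
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((features.shape[1], n_bits))
    return (features @ planes > 0).astype(np.uint8)

codes = simhash(np.random.randn(100, 512), n_bits=64)  # (100, 64)
\end{verbatim}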
Though these handcrafted-feature-based shallow methods achieved success to some extent, when applied to real data with dramatic appearance variation they generally fail to capture discriminative semantic information, leading to compromised performance. In light of this dilemma, a wealth of deep learning-based hashing methods have been proposed which learn the features and hash codes in an end-to-end manner~\cite{cao2017hashnet,fan20deep,li2015feature,zhu2016deep,liu2016deep}. \cite{zhu2016deep} offers a Bayesian learning framework adopting a pairwise loss for similarity preservation. \cite{cao2018deep} further suggests replacing the probability generation function applied to the network output logits with a Cauchy distribution, to penalize similar image pairs with Hamming distances larger than a threshold. \cite{zhang2019improved} innovates a new similarity matrix targeting multi-label image retrieval. \cite{fan20deep} further introduces a deep polarized loss for Hamming code generation, obviating the need for an additional quantization loss. \section{Proposed Method} In this section, we elaborate on the design of our framework. \paragraph{Problem Formulation} Suppose we have a training set $T = \{I_i\}_{i=1}^{N_{T}}$ containing $N_{T}$ training images and the corresponding label set $Y = \{y_i\}_{i=1}^{N_{T}}$. For all pairs of images in the training set, we can construct a similarity matrix $\mathbf{S}$ where $s_{ij} = 1$ if $I_i$ and $I_j$ are from the same class and $s_{ij} = 0$ otherwise. The goal of deep hashing for image retrieval is to learn a non-linear hash function $\mathcal{H}: \mathbf{I} \mapsto \{-1,1\}^B$ which encodes each input image $I_i$ into a binary hash vector $h_i$ with $B$ bits while preserving the similarity information conveyed in $\mathbf{S}$. That is to say, the Hamming distance between $h_i$ and $h_j$ should be small if $s_{ij} = 1$ and large otherwise. \subsection{Siamese Vision Transformer Architecture} \label{sec:siamese} An overview of our architecture is illustrated in Fig.~\ref{fig:mainfig}. For an image pair $(I_i,I_j)$ of size $H \times W \times 3$, we cut each image into identical small patches of size $P \times P \times 3$. In doing so, we obtain $N$ patches in total, where $N = H \times W / P^2$. Note that $N$ is also the effective input sequence length for the transformer. \\ \paragraph{Patch embeddings} For each image patch of size $P \times P \times 3$, we flatten it into a vector of length $P^{2} \cdot 3$. Subsequently, similar to \textit{ViT}, we embed every vector into $D$ dimensions with a trainable linear projection (a fully connected layer), resulting in a sequence $\{x_p^k\} \in \mathbb{R}^{D}, k \in [1, N]$. We further prepend a learnable embedding $x_{class}$ to $x$, whose state at the output of the final layer serves as the image representation. 
In this way, we obtain the final embedding $X_p \in \mathbb{R}^{(N+1) \times D}$. \paragraph{Position embeddings} A positional embedding is adopted to encode the position information of each patch embedding, which is important for the transformer to learn the spatial information of each patch inside the original image. We follow the standard procedure in \textit{ViT} by adding a trainable 1D position embedding to every vector in the sequence. Thus, the input to the transformer encoder, $z_0$, is stated as follows: \begin{equation} z_0 = X_p + E_{pos} = [x_{class}; x_p^1 , ... , x_p^N ] + E_{pos} \end{equation}
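For illustration, the embedding step above can be sketched in a few lines of PyTorch. This is a minimal sketch under our own assumptions (patch size $32$ and hidden size $1024$, matching the implementation details reported later); the strided convolution is the standard equivalent of flattening each patch and applying a shared fully connected layer, and is not necessarily the authors' exact implementation:
\begin{verbatim}
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    # Cut an image into P x P patches, project each to D dimensions,
    # prepend the learnable class token x_class, and add E_pos.
    def __init__(self, img_size=224, patch_size=32, dim=1024):
        super().__init__()
        n = (img_size // patch_size) ** 2     # N = H * W / P^2
        self.proj = nn.Conv2d(3, dim, kernel_size=patch_size,
                              stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n + 1, dim))

    def forward(self, x):                     # x: (batch, 3, H, W)
        x = self.proj(x).flatten(2).transpose(1, 2)   # (batch, N, D)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        return torch.cat([cls, x], dim=1) + self.pos_embed   # z_0

z0 = PatchEmbedding()(torch.randn(2, 3, 224, 224))  # (2, 50, 1024)
\end{verbatim}
Applying one such module (and the shared encoder that follows) to both images of a pair realizes the weight sharing of the Siamese design.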
\paragraph{Self-attention encoder} The transformer encoder consists of $L-1$ blocks, each block containing a multi-headed self-attention layer (\textbf{MSA}) and an \textbf{MLP} layer. A layer norm (\textbf{LN}) is applied before each layer, while residual connections are applied after each layer, as shown in Fig.~\ref{fig:mainfig}. The computation of a block $\mathcal{F}_{block}$ can be formulated as: \begin{equation} \begin{split} z_{l} &= \mathcal{F}_{msa}(\mathcal{F}_{ln}(z_{l-1})) + z_{l-1} \\ z_{l} &= \mathcal{F}_{mlp}(\mathcal{F}_{ln}(z_{l})) + z_{l}, \qquad l = 1, \dots, L-1 \end{split} \end{equation} \paragraph{Dual-stream feature learning} After the aforementioned self-attention encoder, we get the hidden features, denoted as $Z_{L-1} = [z_{L-1}^0;z_{L-1}^1,z_{L-1}^2, ... ,z_{L-1}^N]$. Note that, as stated before, $z_{L-1}^0$ is the hidden feature for the prepended learnable embedding $x_{class}$. Inspired by \cite{he2021transreid}, we design two parallel branches, the global branch $\mathcal{F}^g_{block}$ and the local branch $\mathcal{F}^l_{block}$. The global branch serves as a standard transformer block encoding $Z_{L-1}$ into $Z_{L} = [f_g;z_L^1,z_L^2, ... ,z_L^N]$, where $f_g$ is regarded as the global feature representation. For the local branch, we split $Z_{L-1}$ into $K$ groups and prepend the shared token $z_{L-1}^0$ to each group. In this fashion, $K$ feature groups are derived, denoted as $\{[z_{L-1}^0;z_{L-1}^1,...,z_{L-1}^{N/K}], [z_{L-1}^0;z_{L-1}^{N/K+1},...,z_{L-1}^{2\times N/K}], \dots, [z_{L-1}^0;z_{L-1}^{N-N/K+1},...,z_{L-1}^N]\}$. Then, we feed the $K$ feature groups into $\mathcal{F}^l_{block}$ to learn $K$ local features $\{f_l^1,f_l^2,...,f_l^K\}$. \paragraph{Hash layer} In an effort to learn compact hash codes, we further design several hash layers projecting each feature vector into a hash vector of the appropriate bit length. Concretely, suppose the hash bit length in the retrieval stage is $B$ for each image; then, for the global feature vector of embedding size $M$, we obtain a $B/2$-bit global hash vector through \begin{equation} h_g = \mathcal{F}_h^g(f_g) = f_g W^T + b \end{equation} where $W$ is a weight parameter matrix of size $(B/2,M)$ and $b$ is a bias parameter of length $B/2$. In a similar fashion, for each local feature $f_l \in \{f_l^1,...,f_l^K\}$, we design a specific fully connected layer with $B/(2K)$ output logits, resulting in $K$ hash vectors $\{h_l^1,...,h_l^K\}$. \par In this way, for an image pair $(I_i,I_j)$, the Siamese model outputs two sets of hash vectors: $\{\{h_g\}^i,\{h_l^1\}^i,...,\{h_l^K\}^i\}$ and $\{\{h_g\}^j,\{h_l^1\}^j,...,\{h_l^K\}^j\}$, respectively. \subsection{Similarity-preserving Bayesian Learning} \label{sec:similarity} In this paper, we propose to adopt a Bayesian learning framework for similarity-preserving deep hashing. Given training images $(I_i,I_j,s_{ij})$ with $s_{ij} \in \mathbf{S}$, where $s_{ij} = 1$ if $I_i$ and $I_j$ are from the same class and $0$ otherwise, we can formulate the logarithm of the Maximum a Posteriori (\textbf{MAP}) estimation of the hash codes $\boldsymbol{H} = \{h_1, h_2,...,h_{N_{T}}\}$ for the $N_{T}$ training points as: \begin{equation} \begin{aligned} \log P(\boldsymbol{H} \mid \mathbf{S}) & \propto \log P(\mathbf{S} \mid \boldsymbol{H}) P(\boldsymbol{H}) \\ &=\sum_{s_{i j} \in \mathbf{S}} w_{i j} \log P\left(s_{i j} \mid \boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)+\sum_{i=1}^{N_{T}} \log P\left(\boldsymbol{h}_{i}\right) \end{aligned} \label{eq:map} \end{equation} where $P(\mathbf{S} \mid \boldsymbol{H})$ is the weighted likelihood function and $w_{ij}$ is the corresponding weight for each image pair $(I_i,I_j)$. Since the similarity matrix $\mathbf{S}$ can be very sparse in real retrieval scenarios~\cite{cao2017hashnet}, it can lead to a data imbalance problem, resulting in sub-optimal retrieval performance. The weighted likelihood tackles this problem by assigning a weight to each training pair according to the importance of misclassifying that pair~\cite{dmochowski2010maximum}. To be clear, we set \begin{equation} w_{i j}=\left\{\begin{array}{ll} |\mathbf{S}| /\left|\mathbf{S}_{1}\right|, & s_{i j}=1 \\ |\mathbf{S}| /\left|\mathbf{S}_{0}\right|, & s_{i j}=0 \end{array}\right. \end{equation} where $\mathbf{S}_{1}=\left\{s_{i j} \in \mathbf{S}: s_{i j}=1\right\}$ is the set of similar pairs and $\mathbf{S}_{0}=\left\{s_{i j} \in \mathbf{S}: s_{i j}=0\right\}$ is the set of dissimilar pairs. For a pair $(h_i,h_j)$, $P\left(s_{i j} \mid \boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)$ is the conditional probability of $s_{ij}$ given the pair of hash codes $h_i$ and $h_j$. Since $s_{ij}$ only takes the two values $0$ and $1$, it is natural to define $P\left(s_{i j} \mid \boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)$ as a Bernoulli distribution: \begin{equation} \begin{aligned} P\left(s_{i j} \mid \boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right) &=\left\{\begin{array}{ll} \sigma\left(\mathcal{D}_H\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)\right), & s_{i j}=1 \\ 1-\sigma\left(\mathcal{D}_H\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)\right), & s_{i j}=0 \end{array}\right.\\ &=\sigma\left(\mathcal{D}_H\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)\right)^{s_{i j}}\left(1-\sigma\left(\mathcal{D}_H\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)\right)\right)^{1-s_{i j}} \end{aligned} \label{eq:beyesian} \end{equation} where $\mathcal{D}_H(\cdot)$ is the Hamming distance function and $\sigma$ is a probability function which takes as input the distance between a pair of hash codes and generates the probability that they belong to the same class. Note that, since directly optimizing the discrete binary hash codes is highly challenging, in the training stage we apply a continuous relaxation to the binary constraints $\mathbf{h}_i \in \{-1,1\}^B$, similar to \cite{cao2017hashnet,cao2018deep,zhu2016deep}. 
Thus, we adopt a surrogate $\mathcal{D}_S$ for $\mathcal{D}_H$ in the continuous space, formulated as: \begin{equation} \begin{aligned} \mathcal{D}_S\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right) &=\frac{B}{4}\left\|\frac{\boldsymbol{h}_{i}}{\left\|\boldsymbol{h}_{i}\right\|}-\frac{\boldsymbol{h}_{j}}{\left\|\boldsymbol{h}_{j}\right\|}\right\|_{2}^{2} \\ &=\frac{B}{2}\left(1-\cos \left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)\right) \end{aligned} \label{eq:prob} \end{equation} where $B$ is the bit length of the hash vector. For the probability function $\sigma$, the most common choice is the \textit{sigmoid} function. Nevertheless, as stated in \cite{cao2018deep}, the \textit{sigmoid} probability stays high even when the input Hamming distance is much larger than $2$, and only starts to decrease when the distance approaches $B/2$. This property makes it hard for a deep hashing method to pull similar pairs together sufficiently strongly. In light of this dilemma, we propose to adopt the \textit{Cauchy} distribution function: \begin{equation} \sigma\left(\mathcal{D}_S\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)\right)=\frac{\gamma}{\gamma+\mathcal{D}_S\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)} \label{eq:cauchy} \end{equation} where $\gamma$ denotes the scale parameter of the \textit{Cauchy} distribution. The \textit{Cauchy} distribution has a desirable property: its probability declines very fast even when the Hamming distance is small, enabling the hashing method to pull similar images into a small Hamming radius. Substituting Eqs.~\ref{eq:cauchy}, \ref{eq:prob} and \ref{eq:beyesian} into the \textbf{MAP} estimation in Eq.~\ref{eq:map}, we can derive the optimization objective of the similarity-preserving loss as: \begin{equation} \begin{aligned} L_{s} &= \sum_{s_{i j} \in \mathbf{S}} L_{ce}(\boldsymbol{h}_i,\boldsymbol{h}_j) \\ &=\sum_{s_{i j} \in \mathbf{S}} w_{i j}\left(s_{i j} \log \frac{\mathcal{D}_S\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)}{\gamma}+\log \left(1+\frac{\gamma}{\mathcal{D}_S\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)}\right)\right) \end{aligned} \label{eq:final} \end{equation} From Eq.~\ref{eq:beyesian} and Eq.~\ref{eq:final}, we can observe that $L_{s}$ takes a form similar to logistic regression. By optimizing $L_s$ for a similar pair $(I_i,I_j)$, we increase the value of $P(1|\textbf{h}_i,\textbf{h}_j)$ and hence decrease $\mathcal{D}_S(\textbf{h}_i,\textbf{h}_j)$, since $\sigma$ is a monotonically decreasing \textit{Cauchy} function. \\ The quantization constraint that bridges the gap between continuous features and their binary counterparts ($L_Q$) can be derived from the proposed prior $P\left(\boldsymbol{h}_{i}\right)=\frac{\gamma}{\gamma+\mathcal{D}_S\left(\left|\boldsymbol{h}_{i}\right|, \mathbf{1}\right)}$, where $\gamma$ is the same scale parameter as in Eq.~\ref{eq:cauchy} and $\mathbf{1}$ is a vector of ones. Since we are maximizing $P(\boldsymbol{H})$ in Eq.~\ref{eq:map}, the quantization loss $L_Q$ is stated as: \begin{equation} L_Q = \sum_{i=1}^{N_{T}} Q(\boldsymbol{h}_i)=\sum_{i=1}^{N_{T}} \log \left(1+\frac{\mathcal{D}_S\left(\left|\boldsymbol{h}_{i}\right|, \mathbf{1}\right)}{\gamma}\right) \end{equation} By minimizing the quantization loss $Q$ in the training stage, each dimension of the hash vector $\textbf{h}$ is pushed towards $\pm 1$. 
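As an illustrative reference (our own sketch, not the authors' released code), the two losses above can be written in PyTorch as follows; the scale $\gamma$ and the small constant added for numerical stability are hypothetical defaults, and the bit length is inferred from the input sub-vectors:
\begin{verbatim}
import torch
import torch.nn.functional as F

def cauchy_losses(h_i, h_j, s_ij, w_ij, gamma=10.0, eps=1e-6):
    # h_i, h_j: (n_pairs, bits) continuous hash vectors;
    # s_ij, w_ij: (n_pairs,) pair labels and weights.
    bits = h_i.size(1)
    # Surrogate distance D_S = (B/2)(1 - cos(h_i, h_j)), Eq. (prob)
    d_s = 0.5 * bits * (1.0 - F.cosine_similarity(h_i, h_j, dim=1))
    # Weighted Cauchy cross-entropy, Eq. (final): attracts similar
    # pairs, repels dissimilar ones beyond the scale gamma
    l_s = (w_ij * (s_ij * torch.log(d_s / gamma + eps)
                   + torch.log(1.0 + gamma / (d_s + eps)))).mean()
    # Cauchy quantization loss, Eq. (L_Q): drives |h| towards 1
    d_q = 0.5 * bits * (1.0 - F.cosine_similarity(
        h_i.abs(), torch.ones_like(h_i), dim=1))
    l_q = torch.log(1.0 + d_q / gamma).mean()
    return l_s, l_q
\end{verbatim}
In the full objective, this building block is evaluated for the global hash vectors and for each of the $K$ local hash vectors, and the quantization terms are weighted by $\lambda$, as formulated in the next subsection.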
\subsection{End-to-End Training} In this section, we derive the overall optimization objective of the proposed \textbf{TransHash} method based on Sec.~\ref{sec:siamese} and Sec.~\ref{sec:similarity}. Given training images in pairs such as $(I_i,I_j)$, we obtain a pair of continuous hash vector sets $\{\{h_g\}^i,\{h_l^1\}^i,...,\{h_l^K\}^i\}$ and $\{\{h_g\}^j,\{h_l^1\}^j,...,\{h_l^K\}^j\}$ through the Siamese vision transformer. Subsequently, for the local features, we obtain the Bayesian loss and quantization loss as: \begin{equation} \begin{aligned} L_{B}^{local} = \sum_{s_{i j} \in \mathbf{S}} \sum_{k=1}^{K} L_{ce}(\{\textbf{h}_l^k \}^i,\{\textbf{h}_l^k \}^j) \\ L_{Q}^{local} = \sum_{i=1}^{N_{T}} \sum_{k=1}^{K} Q(\{h_l^k\}^i) \end{aligned} \end{equation} where $N_{T}$ is the total number of training images, $\mathbf{S}$ represents the similarity matrix, and $K$ denotes the number of local features for each image. In a similar fashion, we can derive the losses for the global features. The overall learning objective for \textbf{TransHash} is formulated as: \begin{equation} \min_{\theta} L_B^{global} + L_B^{local} + \lambda (L_Q^{global} + L_Q^{local}) \end{equation} where $\theta$ denotes the set of parameters of the framework, and $\lambda$ is the hyper-parameter controlling the importance of the \textit{Cauchy} quantization loss. \begin{table*} \newcolumntype{Y}{>{\centering\arraybackslash}X} \newlength\mylength \setlength\mylength{\dimexpr 0.8\textwidth-2\tabcolsep} \caption{Mean Average Precision (MAP) of Hamming Ranking for Different Numbers of Bits on Three Datasets} \begin{tabularx}{\textwidth}{ XX| YYYY || YYYY ||YYYY } \hline \rowcolor{black} \multicolumn{2}{l|}{ \textcolor{white}{\textbf{Datasets}}} & \multicolumn{4}{c||}{\textcolor{white}{\textbf{CIFAR-10}@54000 }} &\multicolumn{4}{c||}{\textcolor{white}{\textbf{NUSWIDE}@5000} } &\multicolumn{4}{c|}{\textcolor{white}{\textbf{IMAGENET}@1000}} \\ \hline \multicolumn{2}{l|}{\textbf{Methods}} & 16 bits &32 bits & 48 bits & 64 bits & 16 bits &32 bits & 48 bits & 64 bits & 16 bits & 32 bits & 48 bits & 64 bits\\\hline \multicolumn{2}{l|}{\cellcolor{cellco} \textcolor{white}{\textbf{SH}}~\cite{weiss2008spectral} (NeurIPS)} & - & - & - & - & 0.4058 & 0.4209 & 0.4211 & 0.4104 & 0.2066 & 0.3280 & 0.3951 & 0.4191\\ \multicolumn{2}{l|}{\cellcolor{cellco} \textcolor{white}{\textbf{ITQ}}\cite{gong2012iterative} (TPAMI)} & - & - & - & - & 0.5086 & 0.5425 & 0.5580 & 0.5611 & 0.3255 & 0.4620 & 0.5170 & 0.5520 \\ \multicolumn{2}{l|}{\cellcolor{cellco} \textcolor{white}{\textbf{KSH}}\cite{liu2012supervised} (CVPR)} & - & - & - & - & 0.3561 & 0.3327 & 0.3124 & 0.3368 & 0.1599 & 0.2976 & 0.3422 & 0.3943 \\ \multicolumn{2}{l|}{\cellcolor{cellco} \textcolor{white}{\textbf{BRE}}\cite{kulis2009learning} (NeurIPS)} & - & - & - & - & 0.5027 & 0.5290 & 0.5475 & 0.5546 & 0.0628 & 0.2525 & 0.3300 & 0.3578 \\ \hline \hline \multicolumn{2}{l|}{DSH\cite{liu2016deep12} (CVPR)} & 0.6145 & 0.6815 & 0.6828 & 0.6910 & 0.6338 & 0.6507 & 0.6664 & 0.6856 & 0.4025 & 0.4914 & 0.5254 & 0.5845 \\ \multicolumn{2}{l|}{DHN\cite{zhu2016deep} (AAAI)} & 0.6544 & 0.6711 & 0.6921 & 0.6737 & 0.6471 & 0.6725 & 0.6981 & 0.7027 & 0.4139 & 0.4365 & 0.4680 & 0.5018 \\ \multicolumn{2}{l|}{HashNet\cite{cao2017hashnet} (ICCV) }& 0.5105 & 0.6278 & 0.6631 & 0.6826 & 0.6821 & 0.6953 & 0.7193 & 0.7341 & 0.3287 & 0.5789 & 0.6365 & 0.6656 \\ \multicolumn{2}{l|}{DCH\cite{cao2018deep} (CVPR) }& 0.6680 & 0.6936 & 0.6807 & 0.6775 & 0.7036 & 0.7178 & 0.7106 & 0.7056 & 0.5868 & 0.5862 & 0.5639 & 0.5540 \\ 
\multicolumn{2}{l|}{IDHN\cite{zhang2019improved} (TMM) }& 0.5419 & 0.5695 & 0.5895 & 0.5972 & 0.6999 & 0.7149 & 0.7225 & 0.7256 & 0.2583 & 0.3339 & 0.3708 & 0.4037 \\ \multicolumn{2}{l|}{\cellcolor{cellco} \textcolor{white}{\textbf{DPN}}\cite{fan20deep} (IJCAI) }& 0.825 & 0.838 & 0.830 & 0.829 & - & - & - & - & 0.684 & 0.740 & 0.756 & 0.756 \\ \hline \multicolumn{2}{l|}{\textbf{TransHash} }&\textcolor{red}{\textbf{0.9075}} & \textcolor{red}{\textbf{0.9108}} & \textcolor{red}{\textbf{0.9141}} & \textcolor{red}{\textbf{0.9166}} & \textcolor{red}{\textbf{0.7263}} & \textcolor{red}{\textbf{0.7393}} & \textcolor{red}{\textbf{0.7532}} & \textcolor{red}{\textbf{0.7488}} & \textcolor{red}{\textbf{0.7852}} & \textcolor{red}{\textbf{0.8733}} & \textcolor{red}{\textbf{0.8932}} & \textcolor{red}{\textbf{0.8921}} \\ \hline \hline \end{tabularx} \label{table: mainresults} \end{table*} \begin{figure*} \centering \includegraphics[width=7.0in]{fig/acmmm_mdata.pdf} \caption{The experimental results of \textbf{TransHash} and other competing methods on three datasets} \label{fig:maindata} \end{figure*} \subsection{Retrieval Process} In this section, we elaborate on how to perform efficient image retrieval given a well-trained model. Generally, we are given a query image set $\textbf{Q}$ and a gallery image set $\textbf{G}$. For an image $I_i^q$ in $\textbf{Q}$, we feed it through the backbone transformer and obtain a set of hash vectors $\{\{h_g\}^i,\{h_l^1\}^i,...,\{h_l^K\}^i\}$. Subsequently, we concatenate the global and local hash vectors and obtain the final hash vector $\textbf{h}_i^q$: \begin{equation} \textbf{h}_i^q = \textit{sign}(\textit{Concat}([\{\{h_g\}^i,\{h_l^1\}^i,...,\{h_l^K\}^i\}])) \end{equation} where $\textit{sign}(x)$ is an element-wise thresholding function that returns $1$ if $x > 0$ and $-1$ otherwise, and \textit{Concat} concatenates the global and local features into a $B$-bit hash vector. In a similar fashion, for all the images in $\textbf{G} = \{I_k^g\}_{k=1}^{N_g}$, we obtain the binary hash codes $\textbf{H}^g = \{h_k^g\}_{k=1}^{N_g}$. Then, we rank the binary gallery codes $\textbf{H}^g$ by their Hamming distance to the query hash code $\textbf{h}_i^q$. \subsection{Implementation Details} All the images are first resized to $256 \times 256$. For the training images, we adopt standard image augmentation techniques including \textit{random horizontal flipping} and \textit{random cropping} with cropping size $224$. For testing images, we only apply \textit{center cropping} with cropping size $224$. The batch size is set to $64$. The \textit{SGD} optimizer is adopted with a weight decay of $1e-4$. The learning rate is initialized to $3e-2$ with cosine learning rate decay. The number of warmup steps for the scheduler is set to $500$. The patch size is set to $(32,32)$ for the Siamese transformer model and the hidden size to $1024$. The number of heads for the multi-head attention is set to $16$, and the model consists of $24$ blocks in total. \section{Experiments} \subsection{Datasets and Evaluation Protocols} \paragraph{Datasets.} We conduct experiments on three widely-studied image retrieval datasets: \textbf{CIFAR-10},~\textbf{NUSWIDE}, and~\textbf{IMAGENET}. \\ \textbf{CIFAR-10}~\cite{krizhevsky2009learning} is a dataset with $60,000$ images from $10$ classes. We follow the standard protocol in \cite{cao2018deep,zhu2016deep}.
Specifically, we randomly select $500$ images per class as the training set, resulting in $5,000$ training points. Then, we randomly select $100$ images per class as the query set, with the rest denoted as the database. \\ \textbf{NUSWIDE}~\cite{chua2009nus} is a widely-studied public web image dataset consisting of $269,648$ images in total. Each image is annotated with some of the $81$ ground-truth categories (concepts). For fair comparisons, we follow similar experimental protocols~\cite{cao2017hashnet,zhu2016deep} by randomly sampling $5,000$ images as the query set and taking the rest as the database. Subsequently, we randomly sample $10,000$ images from the database as the training set. \\ \textbf{IMAGENET} is a subset of the dataset for the Large Scale Visual Recognition Challenge (ILSVRC 2015)~\cite{russakovsky2015imagenet}. Specifically, we follow the same protocol as \cite{fan20deep}\cite{cao2017hashnet} by randomly sampling $100$ classes and using all the images of these classes in the validation set as the query set. All the images of these classes in the training set are denoted as the database, while $100$ images per category are sampled as the training set. \paragraph{Evaluation Protocols} We adopt Mean Average Precision (\textbf{mAP}), \textbf{Precision}, and \textbf{Recall} as the evaluation metrics. Concretely, we follow a similar fashion as \cite{cao2018deep,cao2017hashnet}. The \textbf{mAP} is calculated with the top $54,000$ returned images for \textbf{CIFAR-10}, $5,000$ for \textbf{NUSWIDE}, and $1,000$ for \textbf{IMAGENET}. \begin{table*} \setlength\mylength{\dimexpr 0.8\textwidth-2\tabcolsep} \caption{Mean Average Precision (mAP) of Different Variants of TransHash on Three Datasets} \begin{tabularx}{\textwidth}{ XX| YYYY || YYYY ||YYYY } \hline \rowcolor{black} \multicolumn{2}{l|}{ \textcolor{white}{\textbf{Datasets}}} & \multicolumn{4}{c||}{\textcolor{white}{\textbf{CIFAR-10}@54000 }} &\multicolumn{4}{c||}{\textcolor{white}{\textbf{NUSWIDE}@5000} } &\multicolumn{4}{c|}{\textcolor{white}{\textbf{IMAGENET}@1000}} \\ \hline \multicolumn{2}{l|}{\textbf{Methods}} & 16 bits &32 bits & 48 bits & 64 bits & 16 bits &32 bits & 48 bits & 64 bits & 16 bits & 32 bits & 48 bits & 64 bits\\\hline \hline \multicolumn{2}{l|}{\textbf{TransHash} }&\textcolor{red}{\textbf{0.9075}} & \textcolor{red}{\textbf{0.9108}} & \textcolor{red}{\textbf{0.9141}} & \textcolor{red}{\textbf{0.9166}} & \textcolor{red}{\textbf{0.7263}} & \textcolor{red}{\textbf{0.7393}} & \textcolor{red}{\textbf{0.7532}} & \textcolor{red}{\textbf{0.7488}} & \textcolor{red}{\textbf{0.7852}} & \textcolor{red}{\textbf{0.8733}} & \textcolor{red}{\textbf{0.8932}} & \textcolor{red}{\textbf{0.8921}} \\ \multicolumn{2}{l|}{TransHash w/o \textbf{C} }& 0.8406 & 0.8384 & 0.8958 & 0.9062 & 0.7004 & 0.7265 & 0.7336 & 0.7310 & 0.7172 & 0.7808 & 0.8064 & 0.8244 \\ \multicolumn{2}{l|}{TransHash w/o \textbf{P} }& 0.9029 & 0.9053 & 0.9028 & 0.9014 & 0.7190 & 0.7147 & 0.7339 & 0.7167 & 0.7549 & 0.8485 & 0.8635 & 0.8635 \\ \multicolumn{2}{l|}{TransHash w/o \textbf{Q} }& 0.8927 & 0.9023 & 0.9048 & 0.9078 & 0.6540 & 0.6821 & 0.6689 & 0.6915 & 0.7451 & 0.8588 & 0.8689 & 0.8758 \\ \hline \hline \end{tabularx} \label{table: ablationresults} \end{table*} \subsection{Comparison with State-of-the-Art Methods} \par In this section, we compare the results of our proposed \textbf{TransHash} with the state-of-the-art deep hashing methods.
Specifically, the competing methods can be divided into two categories: shallow hashing methods and deep hashing methods. For the shallow hashing methods, we include the most frequently compared methods \textbf{SH}~\cite{weiss2008spectral},~\textbf{ITQ}~\cite{gong2012iterative},~\textbf{KSH}~\cite{liu2012supervised},~and~\textbf{BRE}~\cite{kulis2009learning} for detailed comparisons. For the deep learning-based hashing methods, we further include \textbf{DSH}~\cite{liu2016deep12}, which is among the very first works to tackle the hashing problem for image retrieval with deep convolutional neural networks. In addition, we incorporate other recent deep hashing methods, including~\textbf{DHN}\cite{zhu2016deep},~\textbf{HashNet}~\cite{cao2017hashnet},~\textbf{IDHN}~\cite{zhang2019improved}, and~\textbf{DPN}~\cite{fan20deep}. \par Note that for all the non-deep methods and \textbf{DPN}, we directly quote the results from \cite{cao2017hashnet} and \cite{fan20deep}. For the rest of the competing methods, we conduct experiments with the open-sourced code from the original papers. For fair comparisons, we conform to the original protocols for the hyper-parameters and pre-processing techniques. For example, all the images are resized to $224 \times 224$. \par The Mean Average Precision (\textbf{mAP}) results are reported in Tab.~\ref{table: mainresults}. It is evident that our proposed \textbf{TransHash} is a clear winner compared with the shallow hashing methods across all three datasets. Specifically, we achieve absolute performance boosts of 19.93\% and 39.69\% in terms of average \textbf{mAP} on \textbf{NUSWIDE} and \textbf{IMAGENET}, respectively. The unsatisfactory performance of these non-deep hashing methods can in part be attributed to the fact that they cannot assist in the discriminative feature learning process, resulting in sub-optimal hash codes. Clearly, deep hashing methods exhibit significantly better performance across all the datasets for different hash bit lengths. Still, our method outperforms all the competing methods by large margins. Specifically, on \textbf{CIFAR-10}, we achieve a \textbf{mAP} of 91.66\% with 64 hash bits, surpassing the state-of-the-art result by 8.8\%. The performance improvement is even more pronounced on \textbf{IMAGENET}. The average \textbf{mAP} of \textbf{TransHash} is 86.10\%, exceeding \textbf{DPN} by 12.7\%. The reasons for the notable performance gains are twofold. First, the Siamese architecture and the dual-stream feature learning design assist in learning more discriminative features. Second, the ratio between the number of dissimilar pairs and similar pairs in \textbf{IMAGENET} is much larger than in \textbf{CIFAR-10}; this is known as the data imbalance problem~\cite{cao2017hashnet}, which deteriorates the performance of methods trained on pairwise data~\cite{zhang2019improved,liu2016deep12}. \textbf{TransHash} tackles this problem by dynamically assigning a weight to each pair, as is done in \cite{cao2017hashnet}. On \textbf{NUSWIDE}, our method also consistently exceeds the competing methods across different hash bit lengths. The performance gains are not as sizable as on \textbf{CIFAR-10} and \textbf{IMAGENET}, mainly because \textbf{TransHash} is not tailored for multi-label image retrieval, where each image comprises multiple labels.
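To make the Hamming-ranking evaluation used above concrete, the NumPy sketch below (our own, with hypothetical names, covering only the single-label case; for \textbf{NUSWIDE} two images count as relevant when they share at least one concept) computes \textbf{mAP}@$topk$ for the $\pm 1$ hash codes produced by the \textit{sign} function of the retrieval process.
\begin{verbatim}
import numpy as np

def map_at_k(query_codes, gallery_codes, query_labels,
             gallery_labels, topk):
    # query_codes: (Nq, B), gallery_codes: (Ng, B), entries in {-1, +1}.
    aps = []
    B = gallery_codes.shape[1]
    for q, ql in zip(query_codes, query_labels):
        # For +/-1 codes the Hamming distance is (B - q . g) / 2.
        dist = 0.5 * (B - gallery_codes @ q)
        order = np.argsort(dist)[:topk]
        rel = (gallery_labels[order] == ql).astype(np.float64)
        if rel.sum() == 0:
            aps.append(0.0)
            continue
        prec = np.cumsum(rel) / (np.arange(len(rel)) + 1)
        aps.append((prec * rel).sum() / rel.sum())
    return float(np.mean(aps))
\end{verbatim}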
\par We further plot the Precision-Recall (PR) curves for 16 and 64 hash bits together with precision curves with respect to different numbers of top returned images. As depicted in Fig.~\ref{fig:maindata}, the PR curves of \textbf{TransHash}, colored red, consistently lie above those of all the competing methods by large margins. In terms of precision w.r.t. the number of returned images, as shown in the top-right plots of Fig.~\ref{fig:maindata}, \textbf{TransHash} achieves significantly better results than all the methods. The results on \textbf{NUSWIDE} are shown in the middle row of Fig.~\ref{fig:maindata}. \textbf{TransHash} achieves slightly better results for PR@16 bits and PR@64 bits. For the precision w.r.t. the number of returned images, our method obtains a precision of 76.77\% for 100 returned images, surpassing \textbf{IDHN} by 2.7\%. Pronounced performance gains can also be observed on \textbf{IMAGENET}. Specifically, for the PR curve with 16 bits, \textbf{DCH} obtains second place, while \textbf{HashNet} tops \textbf{DCH} for 48 bits. \textbf{TransHash} still exceeds both methods in the two testing scenarios by considerable margins. For the precision curve, we achieve precisions of 90.35\% and 89.38\% w.r.t. 100 and 1,000 returned images, exceeding \textbf{HashNet} by 24.73\% and 28.18\%, respectively. These superior results demonstrate the effectiveness of our pure-transformer-based hashing method. \begin{figure*} \centering \includegraphics[width=7.0in]{fig/acmmm_ablation.pdf} \caption{Experimental results of different variants of \textbf{TransHash} on three datasets} \label{fig:my_ablation} \end{figure*} \subsection{Ablation Studies} To further analyze the overall design of our proposed method, we conduct a detailed ablation study to demonstrate the effectiveness of each component. Specifically, we investigate three variants of \textbf{TransHash}: \begin{enumerate} \item \textbf{TransHash w/o P}, a variant without the dual-stream feature learning. \item \textbf{TransHash w/o Q}, a variant without the Cauchy quantization loss. \item \textbf{TransHash w/o C}, a variant adopting the \textit{sigmoid} function as the probability function $\sigma$, following the protocols in \cite{zhu2016deep}.
\end{enumerate} \begin{table}[h] \centering { \begin{tabular}{cc||c||c||c||c||c} \hline \hline \rowcolor{black} \multicolumn{2}{c||}{(\textcolor{white}{Groups \textbf{(K)}})} & \multicolumn{1}{c||}{\textcolor{white}{\textbf{2}}} &\multicolumn{1}{c||}{\textcolor{white}{\textbf{3} }} & \multicolumn{1}{c||}{\textcolor{white}{\textbf{4} }} & \multicolumn{1}{c||}{\textcolor{white}{\textbf{5} } } & \multicolumn{1}{c||}{\textcolor{white}{\textbf{6} }} \\ \hline\hline \multicolumn{2}{c||}{16 bits} & 0.9075 & - & - & - & - \\ \multicolumn{2}{c||}{32 bits} & 0.9108 & 0.9013 & - & - & - \\ \multicolumn{2}{c||}{48 bits} & 0.9141 & 0.9017 & 0.9187 & 0.9107 & 0.9143 \\ \multicolumn{2}{c||}{64 bits} & 0.9166 & 0.9103 & 0.9057 & 0.9062 & 0.8994 \\ \hline \end{tabular} } \caption{Analysis of the effect of $K$ on \textbf{CIFAR-10}. Note that $-$ denotes a setting of $K$ for which the model fails to converge, as discussed in the empirical analysis.} \label{teb:ablation} \end{table} As shown in Tab.~\ref{table: ablationresults} and Fig.~\ref{fig:my_ablation}, when the Cauchy quantization loss is removed (\textbf{TransHash w/o Q}), we observe notable performance declines on \textbf{NUSWIDE} and \textbf{IMAGENET}, from 74.88\% to 69.15\% and from 89.21\% to 87.58\% for 64 hash bits, respectively. When the model is deprived of the Cauchy distribution (\textbf{TransHash w/o C}), which makes it similar to \cite{zhu2016deep}, the performance decreases sharply. Specifically, on \textbf{IMAGENET}, it experiences a conspicuous performance drop of 5.55\% average \textbf{mAP}. We also note that the drop for shorter hash codes is more severe than for longer hash codes. The primary reason is that, according to \cite{cao2018deep}, the Cauchy distribution can effectively pull similar pairs into a small Hamming radius, giving it an edge when the hash code length is short. \par More importantly, to test the effectiveness of the proposed dual-stream feature learning, we also include the performance of the Siamese model with only the global feature learning module. As depicted in Tab.~\ref{table: ablationresults}, \textbf{TransHash w/o P} consistently underperforms the model with the dual-stream feature learning design. On \textbf{NUSWIDE} and \textbf{IMAGENET}, the average decline is 2.08\% and 2.83\%, respectively. The above experiments evidence the effectiveness of the design of our pure transformer-based hashing framework. Since the hyper-parameter $K$, which controls how many groups we divide our local features into, is rather important in our design, we further provide an ablation study on the sensitivity of $K$ for various hash bit lengths on \textbf{CIFAR-10}. Note that if the length of the final hash code vector is 16 and $K$ equals 2, the global feature is responsible for learning the first 8 bits and each local feature vector for 4 of the remaining bits. \paragraph{Empirical analysis of \textbf{K}} As depicted in Tab.~\ref{teb:ablation}, the performance is generally not very sensitive to $K$. We also observe that when a local feature vector is responsible for generating fewer than 4 bits, the model fails to converge. In light of the above observations, we empirically set $K$ to $2$ across the four hash bit lengths. \section{Conclusion} In this paper, we have proposed a novel pure transformer-based deep hashing framework (\textbf{TransHash}) to tackle the challenging large-scale image retrieval problem.
Specifically, we innovate a novel Siamese transformer architecture for extracting robust image features with pairwise similarity learning. On top of that, in an attempt to learn more fine-grained features, we add a dual-stream feature learning module to learn global and local features simultaneously. A well-specified Bayesian learning framework is adopted on top of all the pairwise features for similarity-preserving learning. The overall framework is optimized in an end-to-end fashion. We conduct extensive experiments and demonstrate that \textbf{TransHash} yields notable performance gains compared to the state-of-the-art deep hashing methods on the \textbf{CIFAR-10}, \textbf{NUSWIDE}, and \textbf{IMAGENET} datasets. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \begin{figure} \centering \includegraphics[width=3.3in]{fig/subfig.pdf} \caption{\textbf{The brief architecture of our backbone Siamese vision transformer.} For an image pair (A,B), we cut each image into several patches. Every patch is then flattened and projected to a fixed-size embedding with a fully connected layer, resulting in a sequence of embeddings. Subsequently, we prepend a classification token to each sequence. The two sequences are then fed into the Siamese transformer architecture. Finally, we add a hash layer projecting the features into $B$-bit hash vectors. The Bayesian learning module is employed to preserve the similarity in the hashing space for each pair.} \label{fig:subfig} \end{figure}\par The past decade has been characterized by the explosive amount of high-dimensional data generated by countless end-users and organizations, resulting in a surge of research attention on accurate and efficient information retrieval methods. Among them, large-scale image retrieval has gained growing traction for its pervasive uses in various scenarios, e.g., recommendation systems, search engines, and remote sensing systems. Among all the methods proposed for this challenging task \cite{jegou2010product,ge2013optimized,malkov2018efficient,fu2017fast}, Hamming hashing-based methods have achieved pronounced success. They aim to learn a hash function mapping images from the high-dimensional pixel space into a low-dimensional Hamming space while preserving their visual similarity in the original pixel space. Scores of works have been introduced; based on the way they extract features, existing hashing works can be divided into two categories, namely shallow methods and deep learning-based methods. Shallow methods~\cite{charikar2002similarity,indyk1997locality,weiss2008spectral} learn their hash functions via hand-crafted visual descriptors (e.g., \textit{GIST}~\cite{oliva2001modeling}). Nonetheless, handcrafted features do not guarantee accurate preservation of the semantic similarities of raw image pairs, resulting in degraded performance in the subsequent hash function learning process.
Deep learning-based methods~\cite{xia2014supervised,erin2015deep} generally achieve significant performance improvements compared to their shallow counterparts. The common learning paradigm involves two phases. The first phase aims to learn discriminative feature representations with deep convolutional neural networks (CNNs), e.g., \textit{AlexNet}. The second phase involves designing diversified non-linear functions to squash the continuous features into binary Hamming codes and devising various losses~\cite{liu2016deep,he2018hashing,cao2018deep,cakir2019hashing,fan20deep} to preserve the similarity in the raw pixel space. \par \begin{figure*} \centering \includegraphics[width=6.7in]{fig/architecture.pdf} \caption{The detailed architecture of the proposed \textbf{TransHash}. The upper part denotes the training stage. Specifically, we follow the same protocols as ViT by feeding the patch embeddings together with the position embeddings into the transformer encoder. At the last layer of the transformer, we design two parallel transformer blocks: a global and a local transformer block. For the global feature and each local feature, we design a specific hashing layer. In the testing stage, the global and all the local hash vectors are concatenated and quantized into one hash code.} \label{fig:mainfig} \end{figure*} Recently, transformers~\cite{vaswani2017attention} have demonstrated great success in natural language processing~\cite{devlin2018bert,brown2020language}. With the advent of the \textit{Vision Transformer}, a variant of the transformer tailored for computer vision tasks, transformers have trumped numerous CNN-based methods in various computer vision tasks (e.g., image classification~\cite{dosovitskiy2020image} and object re-identification~\cite{he2021transreid}). As shown in Fig.~\ref{fig:subfig}, the \textit{Vision Transformer} works by first reshaping the input image into a sequence of 2D patches. The 2D patches are then transformed into $D$-dimensional vectors with a trainable linear projection matrix, and the resulting sequence of 1D vectors is fed into the standard transformer architecture to learn a usable feature representation. Inspired by the pronounced performance of \textit{ViT} in other vision tasks, we ponder the possibility of innovating novel deep hashing methods with pure transformers. \par In this paper, we build a novel transformer-based hashing method, dubbed \textbf{TransHash}, which is the very first deep hashing method that does not adopt a convolutional neural network (CNN) as the backbone architecture. Specifically, targeting pairwise deep hash learning, we design a Siamese transformer backbone, which is essentially two identical transformers sharing the same weight parameters~\cite{bromley1993signature}. On top of this innovation, inspired by \cite{he2021transreid}, we design a dual-stream feature learning module by changing the last layer of the two Siamese transformers into two parallel branches. Concretely, for the first branch, we learn a global feature representation. In parallel, we reorder the sequence of output features from the second-to-last layer into $K$ groups. The $K$ groups are concatenated with the shared output token and then fed into another transformer layer to generate $K$ local features. The primary merits are stated as follows. Firstly, the model can simultaneously learn fine-grained global and local features with the joint global and local stream design.
Secondly, similar to \cite{lai2015simultaneous}, which employs a divide-and-encode module to reduce the redundancy of the learned feature representation, our method achieves similar effects. Since the final learned representation is a concatenation of the global representation and several local representations, the subsets of the final feature vector are only loosely correlated, resulting in increased independence and minimized redundancy. To further preserve the semantic similarity of image pairs in the feature space, we adopt a Bayesian learning framework to pull similar pairs close and push dissimilar pairs apart in the embedding space for all the global and local features. Finally, since the learned feature representations are continuous in nature, we need to adopt the \textit{sign} function $h = \textit{sign}(f)$ to generate binary Hamming hash codes in the testing stage. However, owing to the sizable gap between the continuous feature representation $f$ and the hash code $h$ after the \textit{sign} function, commonly called the \textit{quantization error}~\cite{zhu2016deep}, directly generating hash codes with the \textit{sign} function in the testing stage leads to sub-optimal retrieval performance. In an effort to bridge this gap in the training stage, we reformulate the similarity-preserving learning problem as a constrained optimization problem. Concretely, on top of the Bayesian learning module, we add a Cauchy quantization loss~\cite{cao2018deep} to statistically bridge the gap between the continuous feature representations and the binary hash codes. \par To sum up, we make the following contributions: \begin{enumerate} \item We design a Siamese transformer backbone based on \textit{ViT}, consisting of two identical vision transformers sharing the same weight parameters. \item We innovate a novel two-stream feature learning module by changing the last layer of the transformer into two independent parallel branches. In this fashion, we can learn global and local features at the same time. Meanwhile, as stated before, this also promotes the independence of the learned final hash code vector while reducing bit redundancy. \item By further adopting the similarity-preserving Bayesian learning module with a quantization constraint, we build a novel deep hashing framework for large-scale image retrieval with a pure transformer. To the best of our knowledge, this is the very first work on deep learning-based hashing that does not adopt a convolutional neural network as the backbone. \item We conduct comprehensive experiments on three widely-studied datasets: \textbf{CIFAR-10}, \textbf{NUSWIDE}, and \textbf{IMAGENET}. The results show that we outperform all the state-of-the-art methods across the three datasets by large margins. \end{enumerate} \section{Related Works} \subsection{CNNs in Computer Vision} The convolutional neural network was first introduced in \cite{cnnoriginal} to recognize hand-written digits. It proposes convolutional kernels to capture the visual context and achieves notable performance.
To further boost the capability of CNNs, a series of deeper and more effective convolutional neural networks have been proposed, e.g., \textit{VGG}~\cite{vgg}, \textit{GoogleNet}~\cite{googlenet}, \textit{ResNet}~\cite{resnet} and \textit{EfficientNet}~\cite{tan2019efficientnet}. While CNNs are still dominant across various computer vision tasks, the recent shift of attention to transformer-based architectures has opened up the possibility of adopting transformers as potent alternatives to convolutional neural networks. Our work is among the first endeavours to replace CNNs with pure transformer-based architectures in traditional computer vision tasks. \subsection{Transformer in Vision} The \textit{Transformer} was first proposed in \cite{vaswani2017attention} for sequential data in the field of natural language processing (NLP). Since then, many studies have investigated the effectiveness of the Transformer in computer vision tasks by feeding it sequences of feature maps extracted by CNNs~\cite{girdhar2019video,carion2020end,xie2021segmenting}. In 2020, \textbf{Google} proposed the \textit{Vision Transformer}~(ViT)~\cite{dosovitskiy2020image}, which applies a pure transformer directly to a sequence of image patches for image classification. Variants of \textit{ViT} have achieved remarkable successes. For instance, \cite{liu2021swin} proposes a hierarchical vision transformer using shifted windows. \cite{wang2021pyramid} proposes a pyramid vision transformer tailored for dense prediction. Further, \cite{he2021transreid} presents the first pure-transformer-based architecture for person re-identification. By utilizing side information and introducing a novel jigsaw branch, it achieves state-of-the-art results across multiple object re-identification datasets. The \textit{vision transformer} is still in its nascent stage, and mounting research attention is being directed towards its potential in diverse computer vision tasks. \subsection{Hashing for Image Retrieval} Hashing for large-scale image retrieval has been drawing growing research attention in recent years~\cite{indyk1997locality,weiss2008spectral,gong2012iterative,heo2012spherical,jegou2010product,ge2013optimized}. According to the way they extract features, the existing hashing methods can be categorized into two groups: shallow hashing methods and deep learning-based hashing methods. \par Typical shallow methods rely on handcrafted features to learn a hashing function mapping visual images into binary hash codes. A canonical example is \textbf{LSH} (Locality-Sensitive Hashing)~\cite{indyk1997locality}, which seeks to find a locality-sensitive hash family where the probability of hash collisions for similar objects is much higher than for dissimilar ones. Later, \cite{charikar2002similarity} proposed another variant of LSH (dubbed SIMHASH) for cosine similarities in Euclidean space. Though these handcrafted-feature-based shallow methods achieved success to some extent, when applied to real data with dramatic appearance variation they generally fail to capture the discriminative semantic information, leading to compromised performance. In light of this dilemma, a wealth of deep learning-based hashing methods have been proposed~\cite{cao2017hashnet,fan20deep,li2015feature,zhu2016deep,liu2016deep}, which learn the features and hash codes in an end-to-end manner.
\cite{zhu2016deep} offers a Bayesian learning framework adopting a pairwise loss for similarity preserving. \cite{cao2018deep} further suggests replacing the probability generation function applied to the network output logits with a Cauchy distribution to penalize similar image pairs whose Hamming distances are larger than a threshold. \cite{zhang2019improved} introduces a new similarity matrix targeting multi-label image retrieval.~\cite{fan20deep} further introduces a deep polarized loss for the Hamming code generation, obviating the need for an additional quantization loss. \section{Proposed Method} In this section, we will elaborate on the design of our framework. \paragraph{Problem Formulation} Suppose we have a training set $T = \{I_i\}_{i=1}^{N_T}$ containing $N_T$ training images and the corresponding label set $Y = \{y_i\}_{i=1}^{N_T}$. For all the pairs of images in the training set, we can construct a similarity matrix $\mathbf{S}$ where $s_{ij} = 1$ if $I_i$ and $I_j$ are from the same class and $s_{ij} = 0$ otherwise. The goal of deep hashing for image retrieval is to learn a non-linear hash function $\mathcal{H}: \mathbf{I} \mapsto \{-1,1\}^B $ which encodes each input image $I_i$ into a binary hash vector $h_i$ with $B$ bits while preserving the similarity information conveyed in $\mathbf{S}$. That is to say, the Hamming distance between $h_i$ and $h_j$ should be small if $s_{ij} = 1$ and large otherwise. \subsection{Siamese Vision Transformer Architecture} \label{sec:siamese} An overview of our architecture is illustrated in Fig.~\ref{fig:mainfig}. For an image pair $(I_i,I_j)$ of size $H \times W \times 3$, we cut each image into identical small patches of size $P \times P \times 3$. In doing so, we obtain $N$ patches in total, where $N = HW / P^2$. Note that $N$ is also the effective input sequence length for the transformer. \\ \paragraph{Patch embeddings} We flatten each image patch of size $P \times P \times 3$ into a vector of size $P^2 \cdot 3$. Subsequently, similar to \textit{ViT}, we embed every vector into $D$ dimensions with a trainable linear projection (a fully connected layer), resulting in a sequence $\{x_p^k\} \in \mathbb{R}^{D}, k \in [1, N]$. We further prepend a learnable embedding $x_{class}$ to the sequence, whose state at the end of the output layer serves as the image representation. In this way, we obtain the final embedding $X_p \in \mathbb{R}^{(N+1) \times D}$. \paragraph{Position embeddings} Positional embedding is adopted to encode the position information of the patch embeddings, which is important for the transformer to learn the spatial information of each patch inside the original image. We follow the standard procedure in \textit{ViT} by adding a trainable 1D position embedding to every vector in the sequence. Thus, the input for the transformer encoder $z_0$ is stated as follows: \begin{equation} z_0 = X_p + E_{pos} = [x_{class}; x_p^1 , ... , x_p^N ] + E_{pos} \end{equation} \paragraph{Self-attention encoder} The transformer encoder consists of $L-1$ blocks, each block containing a multi-headed self-attention layer (\textbf{MSA}) and an \textbf{MLP} layer. A layer norm (\textbf{LN}) is applied before each layer, while residual connections are applied after each layer, as shown in Fig.~\ref{fig:mainfig}.
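For concreteness, the patch and position embedding steps above can be sketched in code. The following is a hedged PyTorch sketch of our reading of this pipeline; the module and parameter names are ours, and the strided convolution is only one standard way to realize the flatten-and-project operation.
\begin{verbatim}
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Sketch of ViT-style patch + position embeddings (names ours)."""
    def __init__(self, img_size=224, patch_size=32, dim=1024):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2   # N = HW / P^2
        # Flatten-and-project each P x P x 3 patch to D dimensions,
        # implemented here as a strided convolution.
        self.proj = nn.Conv2d(3, dim, kernel_size=patch_size,
                              stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))   # x_class
        # Trainable 1D position embeddings E_pos for N patches + class token.
        self.pos_embed = nn.Parameter(
            torch.zeros(1, self.num_patches + 1, dim))

    def forward(self, x):                    # x: (batch, 3, H, W)
        x = self.proj(x)                     # (batch, D, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)     # (batch, N, D)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)       # prepend x_class: (batch, N+1, D)
        return x + self.pos_embed            # z_0 = X_p + E_pos
\end{verbatim}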
The computation of a block $\mathcal{F}_{block}$ can be formulated as: \begin{equation} \begin{split} z_{l} &= \mathcal{F}_{msa}(\mathcal{F}_{ln}(z_{l-1})) + z_{l-1} \\ z_{l} &= \mathcal{F}_{mlp}(\mathcal{F}_{ln}(z_{l} )) + z_{l} \\ \text{where} \\ l & = 1 ... (L-1) \end{split} \end{equation} \paragraph{Dual-stream feature learning} After the aforementioned self-attention encoder, we obtain the hidden features, denoted as $Z_{L-1} = [z_{L-1}^0;z_{L-1}^1,z_{L-1}^2, ... ,z_{L-1}^N]$. Note that, as stated before, $z_{L-1}^0$ is the hidden feature for the prepended learnable embedding $x_{class}$. Inspired by \cite{he2021transreid},~we design two parallel branches, the global branch $\mathcal{F}^g_{block}$ and the local branch $\mathcal{F}^l_{block}$. The global branch serves as a standard transformer block encoding $Z_{L-1}$ into $Z_{L} = [f_g;z_L^1,z_L^2, ... ,z_L^N]$, where $f_g$ is regarded as the global feature representation. For the local branch, we split $Z_{L-1}$ into $K$ groups and prepend the shared token $z_{L-1}^0$ to each group. In this fashion, $K$ feature groups are derived, denoted as $\{[z_{L-1}^0;z_{L-1}^1,...,z_{L-1}^{N/K}],$ $[z_{L-1}^0;z_{L-1}^{N/K+1},...,z_{L-1}^{2\times N/K}],\ldots, [z_{L-1}^0;z_{L-1}^{N-N/K+1}$ $,...,z_{L-1}^N]\}$. Then, we feed the $K$ feature groups into $\mathcal{F}^l_{block}$ to learn $K$ local features $\{f_l^1,f_l^2,...,f_l^K\}$, as sketched below. \paragraph{Hash layer} In an effort to learn compact hash codes, we further design several hash layers projecting every feature vector into hash vectors of different bit sizes. Concretely, suppose the hash bit length in the retrieval stage is $B$ for each image. Then, for the global feature vector of embedding size $M$, we obtain a $B/2$-bit global hash vector through \begin{equation} h_g = \mathcal{F}_h^g(f_g) = f_g W^T + b \end{equation} where $W$ is a weight parameter matrix of size $(B/2,M)$ and $b$ is a bias parameter of size $B/2$. In a similar fashion, for each local feature $f_l \in \{f_l^1,...,f_l^K\}$, we design a specific fully connected layer with $B/(2K)$ output logits, resulting in $K$ hash vectors $\{h_l^1,...,h_l^K\}$. \par In this way, for an image pair $(I_i,I_j)$, the siamese model outputs two sets of hash vectors: $\{\{h_g\}^i,\{h_l^1\}^i,...,\{h_l^K\}^i\}$ and $\{\{h_g\}^j,\{h_l^1\}^j,$ $..., \{h_l^K\}^j\}$, respectively. \subsection{Similarity-preserving Bayesian Learning} \label{sec:similarity} In this paper, we propose to adopt a Bayesian learning framework for similarity-preserving deep hashing. Given training images $(I_i,I_j,s_{ij}): s_{ij} \in \textbf{S}$, where $s_{ij} =1 $ if $I_i$ and $I_j$ are from the same class and $0$ otherwise, we can formulate the logarithm Maximum a Posteriori (\textbf{MAP}) estimation of the hash codes $\boldsymbol{H} = \{ h_1, h_2,...,h_{N_T}\}$ for the $N_T$ training points as: \begin{equation} \begin{aligned} \log P(\boldsymbol{H} \mid \mathbf{S}) & \propto \log P(\mathbf{S} \mid \boldsymbol{H}) P(\boldsymbol{H}) \\ &=\sum_{s_{i j} \in \mathbf{S}} w_{i j} \log P\left(s_{i j} \mid \boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)+\sum_{i=1}^{N_T} \log P\left(\boldsymbol{h}_{i}\right) \end{aligned} \label{eq:map} \end{equation} where $ P(\mathbf{S} \mid \boldsymbol{H})$ is the weighted likelihood function and $w_{ij}$ is the corresponding weight for each image pair $(I_i,I_j)$.
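To make the dual-stream split and the hash layers above concrete, we include a minimal PyTorch-style sketch. The function and variable names are ours, and \texttt{global\_block} and \texttt{local\_block} stand in for the two parallel last transformer blocks $\mathcal{F}^g_{block}$ and $\mathcal{F}^l_{block}$; this is an illustration of the grouping scheme, not a reference implementation.
\begin{verbatim}
import torch

def dual_stream_hash(z, global_block, local_block, g_head, l_heads, K):
    """z: (batch, N+1, D) hidden features Z_{L-1}.
    g_head / l_heads are linear hash layers with B/2 and B/(2K) outputs."""
    cls, patches = z[:, :1], z[:, 1:]            # z^0 and z^1..z^N
    f_g = global_block(z)[:, 0]                  # global feature f_g
    h = [g_head(f_g)]                            # B/2-bit global hash vector
    for k, group in enumerate(patches.chunk(K, dim=1)):
        zk = torch.cat([cls, group], dim=1)      # prepend the shared token
        f_lk = local_block(zk)[:, 0]             # local feature f_l^k
        h.append(l_heads[k](f_lk))               # B/(2K)-bit local hash
    return torch.cat(h, dim=1)                   # B-dim continuous code
\end{verbatim}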
Since the similarity matrix $\mathbf{S}$ can be very sparse in real retrieval scenarios~\cite{cao2017hashnet}, it may lead to the data imbalance problem, resulting in sub-optimal retrieval performance. The weighted likelihood is adopted to tackle this problem by assigning a weight to each training pair according to the importance of misclassifying that pair~\cite{dmochowski2010maximum}. Specifically, we set \begin{equation} w_{i j}=\left\{\begin{array}{ll} |\mathbf{S}| /\left|\mathbf{S}_{1}\right|, & s_{i j}=1 \\ |\mathbf{S}| /\left|\mathbf{S}_{0}\right|, & s_{i j}=0 \end{array}\right. \end{equation} where $\mathbf{S}_{1}=\left\{s_{i j} \in \mathbf{S}: s_{i j}=1\right\}$ is the set of similar pairs and $\mathbf{S}_{0}=\left\{s_{i j} \in \mathbf{S}: s_{i j}=0\right\}$ is the set of dissimilar pairs. For a pair $(h_i,h_j)$, $P\left(s_{i j} \mid \boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)$ is the conditional probability function of $s_{ij}$ given the pair of hash codes $h_i$ and $h_j$. Since $s_{ij}$ only takes the two values $0$ and $1$, it is natural to define $P\left(s_{i j} \mid \boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)$ as a Bernoulli distribution: \begin{equation} \begin{aligned} P\left(s_{i j} \mid \boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right) &=\left\{\begin{array}{ll} \sigma\left(\mathcal{D}_H\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)\right), & s_{i j}=1 \\ 1-\sigma\left(\mathcal{D}_H\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)\right), & s_{i j}=0 \end{array}\right.\\ &=\sigma\left(\mathcal{D}_H\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)\right)^{s_{i j}}\left(1-\sigma\left(\mathcal{D}_H\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)\right)\right)^{1-s_{i j}} \end{aligned} \label{eq:beyesian} \end{equation} where $\mathcal{D}_H(.)$ is the Hamming distance function and $\sigma$ is a probability function which takes as input the distance of a hash code pair and generates the probability that the two codes come from the same class. Note that, since directly optimizing the discrete binary hash codes is highly challenging, in the training stage we apply continuous relaxation to the binary constraints $\mathbf{h}_i \in \{-1,1\}^B$, similar to \cite{cao2017hashnet,cao2018deep,zhu2016deep}. Thus, we adopt a surrogate $\mathcal{D}_S$ for $\mathcal{D}_H$ in the continuous space, formulated as: \begin{equation} \begin{aligned} \mathcal{D}_S\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right) &=\frac{K}{4}\left\|\frac{\boldsymbol{h}_{i}}{\left\|\boldsymbol{h}_{i}\right\|}-\frac{\boldsymbol{h}_{j}}{\left\|\boldsymbol{h}_{j}\right\|}\right\|_{2}^{2} \\ &=\frac{K}{2}\left(1-\cos \left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)\right) \end{aligned} \label{eq:prob} \end{equation} For the probability function $\sigma$, the most commonly used choice is the \textit{sigmoid} function. Nevertheless, as stated in \cite{cao2018deep}, the probability of the \textit{sigmoid} stays high when the input Hamming distance is much larger than $2$ and only starts to decrease when it approaches $B/2$.~This property makes it hard for the deep hashing method to pull similar pairs sufficiently close. In light of this dilemma, we propose to adopt the \textit{Cauchy} distribution function: \begin{equation} \sigma\left(\mathcal{D}_S\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)\right)=\frac{\gamma}{\gamma+\mathcal{D}_S\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)} \label{eq:cauchy} \end{equation} where $\gamma$ denotes the scale parameter of the \textit{Cauchy} distribution.
The \textit{Cauchy} distribution has a desirable property: its probability declines very fast even when the Hamming distance is small, enabling the hashing method to pull similar images into a small Hamming radius. By substituting Eq.~\ref{eq:cauchy},~Eq.~\ref{eq:prob} and Eq.~\ref{eq:beyesian} into the \textbf{MAP} estimation in Eq.~\ref{eq:map}, we can derive the optimization objective of the similarity-preserving loss as: \begin{equation} \begin{aligned} L_{s} &= \sum_{s_{i j} \in \mathbf{S}} L_{ce}(\boldsymbol{h}_i,\boldsymbol{h}_j) \\ &=\sum_{s_{i j} \in \mathbf{S}} w_{i j}\left(s_{i j} \log \frac{\mathcal{D}_S\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)}{\gamma}+\log \left(1+\frac{\gamma}{\mathcal{D}_S\left(\boldsymbol{h}_{i}, \boldsymbol{h}_{j}\right)}\right)\right) \end{aligned} \label{eq:final} \end{equation} From Eq.~\ref{eq:beyesian} and Eq.~\ref{eq:final},~we can observe that $L_{s}$ takes a form similar to logistic regression. By optimizing $L_s$, for a similar pair $(I_i,I_j)$ we increase the value of $P(1|\textbf{h}_i,\textbf{h}_j)$, resulting in a decreased value of $\mathcal{D}_S(\textbf{h}_i,\textbf{h}_j)$, since $\sigma$ is a monotonically decreasing \textit{Cauchy} function. \\ The quantization constraint to bridge the gap between the continuous features and their binary counterparts ($L_Q$) can be derived from the proposed prior $ P\left(\boldsymbol{h}_{i}\right)=\frac{\gamma}{\gamma+\mathcal{D}_S\left(\left|\boldsymbol{h}_{i}\right|, \mathbf{1}\right)}$, where $\gamma$ is the same scale parameter as in Eq.~\ref{eq:cauchy} and $\mathbf{1}$ is a vector of ones. Since we are maximizing $P(\boldsymbol{H})$ in Eq.~\ref{eq:map}, the quantization loss $L_Q$ is stated as: \begin{equation} L_Q = \sum_{i=1}^{N_T} Q(\boldsymbol{h}_i)=\sum_{i=1}^{N_T} \log \left(1+\frac{\mathcal{D}_S\left(\left|\boldsymbol{h}_{i}\right|, \mathbf{1}\right)}{\gamma}\right) \end{equation} By minimizing the quantization loss $L_Q$ in the training stage, each dimension of the hash vector $\textbf{h}$ is pushed to approximate $\pm1$. \subsection{End-to-End Training} In this section, we derive the overall optimization objective of our proposed \textbf{TransHash} method based on Sec.~\ref{sec:siamese} and Sec.~\ref{sec:similarity}. Given training images in pairs such as $(I_i,I_j)$, we obtain a pair of continuous hash vector sets $\{\{h_g\}^i,\{h_l^1\}^i,...,\{h_l^K\}^i\}$ and $\{\{h_g\}^j,\{h_l^1\}^j$ $,...,\{h_l^K\}^j\}$ through the siamese vision transformer. Subsequently,~for the local features, we obtain the Bayesian loss and quantization loss as: \begin{equation} \begin{aligned} L_{B}^{local} = \sum_{s_{i j} \in \mathbf{S} } \sum_{k=1}^K L_{ce}(\{\textbf{h}_l^k \}^i,\{\textbf{h}_l^k \}^j) \\ L_{Q}^{local} = \sum_{i=1}^{N_T} \sum_{k=1}^K Q(\{h_l^k\}^i) \end{aligned} \end{equation} where $N_T$ is the total number of training images, $\mathbf{S}$ represents the similarity matrix, and $K$ denotes the number of local features for each image. In a similar fashion, we can derive the losses for the global features. The overall learning objective for \textbf{TransHash} is formulated as: \begin{equation} \min_{\theta} L_B^{global} + L_B^{local} + \lambda (L_Q^{global} + L_Q^{local}) \end{equation} where $\theta$ denotes the set of parameters of the framework and $\lambda$ is the hyper-parameter controlling the importance of the \textit{Cauchy} quantization loss.
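Before turning to the experiments, we summarize the two training losses in a hedged PyTorch sketch. It mirrors Eq.~\ref{eq:final} and the quantization loss $L_Q$ above under the continuous relaxation; the function names, the default value of $\gamma$ and the $\epsilon$ stabilizer are our choices, not prescribed by the method.
\begin{verbatim}
import torch
import torch.nn.functional as F

def cauchy_distance(hi, hj, bits):
    # Surrogate distance D_S: (bits / 2) * (1 - cos(h_i, h_j)).
    return bits / 2.0 * (1.0 - F.cosine_similarity(hi, hj, dim=-1))

def transhash_losses(hi, hj, s, w, bits, gamma=20.0, eps=1e-6):
    """hi, hj: (pairs, B) continuous codes; s: 0/1 similarity; w: weights."""
    d = cauchy_distance(hi, hj, bits)
    # Similarity-preserving Cauchy cross-entropy loss L_s.
    l_s = (w * (s * torch.log(d / gamma + eps)
                + torch.log(1.0 + gamma / (d + eps)))).sum()
    # Cauchy quantization loss L_Q: push |h| towards the all-ones vector.
    d_q = cauchy_distance(hi.abs(), torch.ones_like(hi), bits)
    l_q = torch.log(1.0 + d_q / gamma).sum()
    return l_s, l_q
\end{verbatim}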
\begin{table*} \newcolumntype{Y}{>{\centering\arraybackslash}X} \newlength\mylength \setlength\mylength{\dimexpr 0.8\textwidth-2\tabcolsep} \caption{Mean Average Precision (MAP) of Hamming Ranking for Different Numbers of Bits on Three Datasets} \begin{tabularx}{\textwidth}{ XX| YYYY || YYYY ||YYYY } \hline \rowcolor{black} \multicolumn{2}{l|}{ \textcolor{white}{\textbf{Datasets}}} & \multicolumn{4}{c||}{\textcolor{white}{\textbf{CIFAR-10}@54000 }} &\multicolumn{4}{c||}{\textcolor{white}{\textbf{NUSWIDE}@5000} } &\multicolumn{4}{c|}{\textcolor{white}{\textbf{IMAGENET}@1000}} \\ \hline \multicolumn{2}{l|}{\textbf{Methods}} & 16 bits &32 bits & 48 bits & 64 bits & 16 bits &32 bits & 48 bits & 64 bits & 16 bits & 32 bits & 48 bits & 64 bits\\\hline
\multicolumn{2}{l|}{\cellcolor{cellco} \textcolor{white}{\textbf{SH}}~\cite{weiss2008spectral} (NeurIPS)} & - & - & - & - & 0.4058 & 0.4209 & 0.4211 & 0.4104 & 0.2066 & 0.3280 & 0.3951 & 0.4191\\
\multicolumn{2}{l|}{\cellcolor{cellco} \textcolor{white}{\textbf{ITQ}}\cite{gong2012iterative} (TPAMI)} & - & - & - & - & 0.5086 & 0.5425 & 0.5580 & 0.5611 & 0.3255 & 0.4620 & 0.5170 & 0.5520 \\
\multicolumn{2}{l|}{\cellcolor{cellco} \textcolor{white}{\textbf{KSH}}\cite{liu2012supervised} (CVPR)} & - & - & - & - & 0.3561 & 0.3327 & 0.3124 & 0.3368 & 0.1599 & 0.2976 & 0.3422 & 0.3943 \\
\multicolumn{2}{l|}{\cellcolor{cellco} \textcolor{white}{\textbf{BRE}}\cite{kulis2009learning} (NeurIPS)} & - & - & - & - & 0.5027 & 0.5290 & 0.5475 & 0.5546 & 0.0628 & 0.2525 & 0.3300 & 0.3578 \\ \hline \hline
\multicolumn{2}{l|}{DSH\cite{liu2016deep12} (CVPR)} & 0.6145 & 0.6815 & 0.6828 & 0.6910 & 0.6338 & 0.6507 & 0.6664 & 0.6856 & 0.4025 & 0.4914 & 0.5254 & 0.5845 \\
\multicolumn{2}{l|}{DHN\cite{zhu2016deep} (AAAI)} & 0.6544 & 0.6711 & 0.6921 & 0.6737 & 0.6471 & 0.6725 & 0.6981 & 0.7027 & 0.4139 & 0.4365 & 0.4680 & 0.5018 \\
\multicolumn{2}{l|}{HashNet\cite{cao2017hashnet} (ICCV) }& 0.5105 & 0.6278 & 0.6631 & 0.6826 & 0.6821 & 0.6953 & 0.7193 & 0.7341 & 0.3287 & 0.5789 & 0.6365 & 0.6656 \\
\multicolumn{2}{l|}{DCH\cite{cao2018deep} (CVPR) }& 0.6680 & 0.6936 & 0.6807 & 0.6775 & 0.7036 & 0.7178 & 0.7106 & 0.7056 & 0.5868 & 0.5862 & 0.5639 & 0.5540 \\
\multicolumn{2}{l|}{IDHN\cite{zhang2019improved} (TMM) }& 0.5419 & 0.5695 & 0.5895 & 0.5972 & 0.6999 & 0.7149 & 0.7225 & 0.7256 & 0.2583 & 0.3339 & 0.3708 & 0.4037 \\
\multicolumn{2}{l|}{\cellcolor{cellco} \textcolor{white}{\textbf{DPN}}\cite{fan20deep} (IJCAI) }& 0.825 & 0.838 & 0.830 & 0.829 & - & - & - & - & 0.684 & 0.740 & 0.756 & 0.756 \\ \hline
\multicolumn{2}{l|}{\textbf{TransHash} }&\textcolor{red}{\textbf{0.9075}} & \textcolor{red}{\textbf{0.9108}} & \textcolor{red}{\textbf{0.9141}} & \textcolor{red}{\textbf{0.9166}} & \textcolor{red}{\textbf{0.7263}} & \textcolor{red}{\textbf{0.7393}} & \textcolor{red}{\textbf{0.7532}} & \textcolor{red}{\textbf{0.7488}} & \textcolor{red}{\textbf{0.7852}} & \textcolor{red}{\textbf{0.8733}} & \textcolor{red}{\textbf{0.8932}} & \textcolor{red}{\textbf{0.8921}} \\ \hline \hline \end{tabularx} \label{table: mainresults} \end{table*} \begin{figure*} \centering \includegraphics[width=7.0in]{fig/acmmm_mdata.pdf} \caption{The experimental results of \textbf{TransHash} and other competing methods on three datasets} \label{fig:maindata} \end{figure*} \subsection{Retrieval Process} In this section, we elaborate on how to perform efficient image retrieval given a well-trained model. Generally, we are given a query image set $\textbf{Q}$ and a gallery image set $\textbf{G}$.
For an image $I_i^q$ in $\textbf{Q}$, we feed it through the backbone transformer and obtain a set of hash vectors $\{\{h_g\}^i,\{h_l^1\}^i,...,\{h_l^K\}^i\}$. Subsequently, we concatenate the global and local hash vectors and obtain the final hash vector $\textbf{h}_i^q$: \begin{equation} \textbf{h}_i^q = \textit{sign}(\textit{Concat}([\{\{h_g\}^i,\{h_l^1\}^i,...,\{h_l^K\}^i\}])) \end{equation} where $\textit{sign}(x)$ is an element-wise thresholding function which returns $1$ if $x > 0$ and $-1$ otherwise, and \textit{Concat} is a function which concatenates the global and local features into a $B$-bit hash vector. In a similar fashion, for all the images in $\textbf{G} = \{I_k^g\}_{k=1}^{N_g}$, we obtain the binary hash codes $\textbf{H}^g = \{h_k^g\}_{k=1}^{N_g}$. Then, we can rank the binary gallery codes $\textbf{H}^g$ by their Hamming distance with respect to the query hash code $\textbf{h}_i^q$. \subsection{Implementation Details} All the images are first resized to $256 \times 256$. For the training images, we adopt standard image augmentation techniques including \textit{random horizontal flipping} and \textit{random cropping} with cropping size $224$. For testing images, we only apply \textit{center cropping} with cropping size $224$. The batch size is set to $64$. The \textit{SGD} optimizer is adopted with a weight decay of $10^{-4}$. The learning rate is initialized to $3\times10^{-2}$ with cosine learning rate decay. The number of warmup steps for the scheduler is set to $500$. The patch size is set to $(32,32)$ for the Siamese transformer model and the hidden size to $1024$. The number of heads for the multi-head attention is set to $16$, and the model consists of $24$ blocks in total. \section{Experimentation} \subsection{Datasets and Evaluation Protocols} \paragraph{Datasets.} We conduct experiments on three widely-studied image retrieval datasets: \textbf{CIFAR-10},~\textbf{NUSWIDE} and~\textbf{IMAGENET}. \\ \textbf{CIFAR-10}~\cite{krizhevsky2009learning} is a dataset with $60,000$ images from $10$ classes. We follow the standard protocol in \cite{cao2018deep,zhu2016deep}. Specifically, we randomly select $500$ images per class as the training set, resulting in $5,000$ training points. Then, we randomly select $100$ images per class as the query set, with the rest denoted as the database. \\ \textbf{NUSWIDE}~\cite{chua2009nus} is a widely-studied public web image dataset consisting of $269,648$ images in total. Each image is annotated with some of the $81$ ground-truth categories (concepts). For fair comparisons, we follow similar experimental protocols~\cite{cao2017hashnet,zhu2016deep} by randomly sampling $5,000$ images as the query set, with the rest as the database. Subsequently, we randomly sample $10,000$ images from the database as the training set. \\ \textbf{IMAGENET} is a subset of the dataset for the Large Scale Visual Recognition Challenge (ILSVRC 2015)~\cite{russakovsky2015imagenet}. Specifically, we follow the same protocol as \cite{fan20deep}\cite{cao2017hashnet} by randomly sampling $100$ classes and using all the images of these classes in the validation set as the query set. All the images of these classes in the training set are denoted as the database, while $100$ images per category are sampled as the training set. \paragraph{Evaluation Protocols} We adopt Mean Average Precision (\textbf{mAP}),~\textbf{Precision}~and \textbf{Recall} as the testing metrics. Concretely, we follow a similar fashion as \cite{cao2018deep,cao2017hashnet}.
The \textbf{mAP} is calculated with the top 54,000 returned images for \textbf{CIFAR-10}, 5,000 for \textbf{NUSWIDE} and 1,000 for \textbf{IMAGENET}. \begin{table*} \newcolumntype{Y}{>{\centering\arraybackslash}X} \setlength\mylength{\dimexpr 0.8\textwidth-2\tabcolsep} \caption{Mean Average Precision (MAP) of Different Variants of TransHash on Three Datasets} \begin{tabularx}{\textwidth}{ XX| YYYY || YYYY ||YYYY } \hline \rowcolor{black} \multicolumn{2}{l|}{ \textcolor{white}{\textbf{Datasets}}} & \multicolumn{4}{c||}{\textcolor{white}{\textbf{CIFAR-10}@54000 }} &\multicolumn{4}{c||}{\textcolor{white}{\textbf{NUSWIDE}@5000} } &\multicolumn{4}{c|}{\textcolor{white}{\textbf{IMAGENET}@1000}} \\ \hline \multicolumn{2}{l|}{\textbf{Methods}} & 16 bits &32 bits & 48 bits & 64 bits & 16 bits &32 bits & 48 bits & 64 bits & 16 bits & 32 bits & 48 bits & 64 bits\\\hline \hline
\multicolumn{2}{l|}{\textbf{TransHash} }&\textcolor{red}{\textbf{0.9075}} & \textcolor{red}{\textbf{0.9108}} & \textcolor{red}{\textbf{0.9141}} & \textcolor{red}{\textbf{0.9166}} & \textcolor{red}{\textbf{0.7263}} & \textcolor{red}{\textbf{0.7393}} & \textcolor{red}{\textbf{0.7532}} & \textcolor{red}{\textbf{0.7488}} & \textcolor{red}{\textbf{0.7852}} & \textcolor{red}{\textbf{0.8733}} & \textcolor{red}{\textbf{0.8932}} & \textcolor{red}{\textbf{0.8921}} \\
\multicolumn{2}{l|}{TransHash w/o \textbf{C} }& 0.8406 & 0.8384 & 0.8958 & 0.9062 & 0.7004 & 0.7265 & 0.7336 & 0.7310 & 0.7172 & 0.7808 & 0.8064 & 0.8244 \\
\multicolumn{2}{l|}{TransHash w/o \textbf{P} }& 0.9029 & 0.9053 & 0.9028 & 0.9014 & 0.7190 & 0.7147 & 0.7339 & 0.7167 & 0.7549 & 0.8485 & 0.8635 & 0.8635 \\
\multicolumn{2}{l|}{TransHash w/o \textbf{Q} }& 0.8927 & 0.9023 & 0.9048 & 0.9078 & 0.6540 & 0.6821 & 0.6689 & 0.6915 & 0.7451 & 0.8588 & 0.8689 & 0.8758 \\ \hline \hline \end{tabularx} \label{table: ablationresults} \end{table*} \subsection{Comparison with State-of-the-Arts} \par In this section, we compare the results of our proposed \textbf{TransHash} with the state-of-the-art deep hashing methods. Specifically, the competing methods can be divided into two categories: shallow hashing methods and deep hashing methods. For the shallow hashing methods, we include the most frequently compared methods \textbf{SH}~\cite{weiss2008spectral},~\textbf{ITQ}~\cite{gong2012iterative},~\textbf{KSH}~\cite{liu2012supervised}~and~\textbf{BRE}~\cite{kulis2009learning} for detailed comparisons. For the deep learning-based hashing methods, we further include \textbf{DSH}~\cite{liu2016deep12}, which is among the very first works tackling the hashing problem for image retrieval with deep convolutional neural networks. In addition, we incorporate other recent deep hashing methods including~\textbf{DHN}\cite{zhu2016deep},~\textbf{HashNet}~\cite{cao2017hashnet},~\textbf{IDHN}~\cite{zhang2019improved} and~\textbf{DPN}~\cite{fan20deep}. \par Note that, for all the non-deep methods and \textbf{DPN}, we directly quote the results from \cite{cao2017hashnet} and \cite{fan20deep}. For the rest of the competing methods,~we conduct experiments with the open-sourced code from the original papers.~For fair comparisons, we conform to the original protocols for the hyper-parameters and the pre-processing techniques. For example, all the images are resized to $224 \times 224$.
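As a concrete reference for this evaluation protocol, the following hedged NumPy sketch (ours) performs Hamming ranking over $\pm1$ codes and computes \textbf{mAP}@$k$ for the single-label case; for \textbf{NUSWIDE}, relevance would instead be defined as sharing at least one label.
\begin{verbatim}
import numpy as np

def map_at_k(query_codes, query_labels, db_codes, db_labels, k):
    """Codes are +/-1 matrices; labels are integer class ids."""
    aps = []
    for q, y in zip(query_codes, query_labels):
        # Hamming distance between +/-1 codes: (B - <h_q, h_g>) / 2.
        dist = 0.5 * (db_codes.shape[1] - db_codes @ q)
        topk = np.argsort(dist, kind="stable")[:k]
        rel = (db_labels[topk] == y).astype(float)    # relevance of top-k
        if rel.sum() == 0:
            aps.append(0.0)
            continue
        prec = np.cumsum(rel) / (np.arange(k) + 1.0)  # precision@i
        aps.append(float((prec * rel).sum() / rel.sum()))
    return float(np.mean(aps))
\end{verbatim}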
\par The Mean Average Precision (\textbf{mAP}) results are reported in Tab.~\ref{table: mainresults}.~It is evident that our proposed \textbf{TransHash} is a clear winner compared with the shallow hashing methods across the three datasets. Specifically, we achieve absolute performance boosts of 19.93\% and 39.69\% in terms of average \textbf{mAP} on \textbf{NUSWIDE} and \textbf{IMAGENET}, respectively. The unsatisfactory performance of these non-deep hashing methods can be partly attributed to the fact that they do not benefit from discriminative feature learning, resulting in the generation of sub-optimal hash codes. Clearly, the deep hashing methods exhibit significantly better performance across all the datasets for different hash bit lengths. Still, our method outperforms all the competing methods by large margins. Specifically,~on \textbf{CIFAR-10}, we achieve a \textbf{mAP} of 91.66\% with 64 hash bits, surpassing the state-of-the-art result by 8.8\%. The performance improvement is even more pronounced on \textbf{IMAGENET}: the average \textbf{mAP} of \textbf{TransHash} is 86.10\%, exceeding \textbf{DPN} by 12.7\%. The reasons for the notable performance gains are twofold. First, the siamese architecture and the dual-stream feature learning design assist in learning more discriminative features. Second, the imbalance between the numbers of similar and dissimilar pairs is much more severe in \textbf{IMAGENET} than in \textbf{CIFAR-10}; this data imbalance problem~\cite{cao2017hashnet} deteriorates the performance of methods trained on pairwise data~\cite{zhang2019improved,liu2016deep12}. \textbf{TransHash} tackles this problem by dynamically assigning a weight to each pair, as is carried out in \cite{cao2017hashnet}. On \textbf{NUSWIDE}, our method also consistently exceeds the competing methods across different hash bit lengths. The performance gains are not as sizable as on \textbf{CIFAR-10} and \textbf{IMAGENET}, mainly because \textbf{TransHash} is not tailored for multi-label image retrieval, where each image carries multiple labels. \par We further plot the Precision-Recall (PR) curves for 16 and 64 hash bits and the Precision curves with respect to different numbers of top returned images. As depicted in Fig.~\ref{fig:maindata},~the PR curves of \textbf{TransHash}, colored in red, consistently lie above all the competing methods by large margins. In terms of precision w.r.t.\ the number of returned images, as shown in the top-right plots of Fig.~\ref{fig:maindata}, \textbf{TransHash} achieves significantly better results than all the other methods. The results on \textbf{NUSWIDE} are shown in the middle of Fig.~\ref{fig:maindata}: \textbf{TransHash} achieves slightly better results for PR@16 bits and PR@64 bits. For the precision w.r.t.\ the number of returned images, our method obtains a precision of 76.77\% for 100 returned images, surpassing \textbf{IDHN} by 2.7\%. Pronounced performance gains can also be observed on \textbf{IMAGENET}. Specifically, for the PR curve with 16 bits, \textbf{DCH} takes second place, while \textbf{HashNet} tops \textbf{DCH} at 48 bits. It is easy to see that \textbf{TransHash} still exceeds both methods in the two testing scenarios by considerable margins. For the precision curve, we achieve precisions of 90.35\% and 89.38\% w.r.t.\ 100 and 1000 returned images, exceeding \textbf{HashNet} by 24.73\% and 28.18\%, respectively.
These superior results sufficiently demonstrate the effectiveness of our pure-transformer-based hashing method. \begin{figure*} \centering \includegraphics[width=7.0in]{fig/acmmm_ablation.pdf} \caption{Experimental results of different variants of \textbf{TransHash} on three datasets} \label{fig:my_ablation} \end{figure*} \subsection{Ablation Studies} To further analyze the overall design of our proposed method, we conduct a detailed ablation study to demonstrate the effectiveness of each component. Specifically, we investigate three variants of \textbf{TransHash}: \begin{enumerate} \item \textbf{TransHash w/o P}, a variant without the dual-stream feature learning. \item \textbf{TransHash w/o Q}, a variant without the Cauchy quantization loss. \item \textbf{TransHash w/o C}, a variant adopting the sigmoid function as the probability function $\sigma$, following the protocols in \cite{zhu2016deep}. \end{enumerate} \begin{table}[h] \centering { \begin{tabular}{cc||c||c||c||c||c} \hline \hline \rowcolor{black} \multicolumn{2}{c||}{\textcolor{white}{Groups \textbf{(K)}}} & \multicolumn{1}{c||}{\textcolor{white}{\textbf{2}}} &\multicolumn{1}{c||}{\textcolor{white}{\textbf{3} }} & \multicolumn{1}{c||}{\textcolor{white}{\textbf{4} }} & \multicolumn{1}{c||}{\textcolor{white}{\textbf{5} } } & \multicolumn{1}{c||}{\textcolor{white}{\textbf{6} }} \\ \hline\hline \multicolumn{2}{c||}{16 bits} & 0.9075 & - & - & - & - \\ \multicolumn{2}{c||}{32 bits} & 0.9108 & 0.9013 & - & - & - \\ \multicolumn{2}{c||}{48 bits} & 0.9141 & 0.9017 & 0.9187 & 0.9107 & 0.9143 \\ \multicolumn{2}{c||}{64 bits} & 0.9166 & 0.9103 & 0.9057 & 0.9062 & 0.8994 \\ \hline \end{tabular} } \caption{Analysis of the effects of \textbf{K} on \textbf{CIFAR-10}.~Note that $-$ denotes that for this value of $K$ the model fails to converge, as discussed in the empirical analysis.} \label{teb:ablation} \end{table} As shown in Tab.~\ref{table: ablationresults} and Fig.~\ref{fig:my_ablation}, when the Cauchy quantization loss is removed (\textbf{TransHash w/o Q}), we observe notable performance declines on \textbf{NUSWIDE} and \textbf{IMAGENET}, from 74.88\% to 69.15\% and from 89.21\% to 87.58\% for 64 hash bits, respectively. When the model is deprived of the Cauchy distribution (\textbf{TransHash w/o C}), which makes it similar to \cite{zhu2016deep}, the performance decreases sharply. Specifically, on \textbf{IMAGENET}, it suffers a conspicuous performance drop of 5.55\% \textbf{mAP} on average. We also note that the drop for shorter hash codes is more severe than for longer hash codes. The primary reason is that, according to \cite{cao2018deep}, the Cauchy distribution can effectively pull similar pairs into a small Hamming radius, giving it an edge when the hash code length is short. \par More importantly, to test the effectiveness of the proposed dual-stream feature learning, we also include the performance of the Siamese model with only the global feature learning module. As depicted in Tab.~\ref{table: ablationresults}, \textbf{TransHash w/o P} consistently underperforms the model with the dual-stream feature learning design. On \textbf{NUSWIDE} and \textbf{IMAGENET}, the average decline is 2.08\% and 2.83\%, respectively. The above experiments evidence the effectiveness of the design of our pure transformer-based hashing framework.
Since the hyper-parameter $K$, which controls how many groups we divide our local features into, is rather important in our design, we further provide an ablation study on the sensitivity of $K$ for various hash bit lengths on \textbf{CIFAR-10}. Note that if the length of the final hash code vector is $16$ and $K$ equals $2$, then the global feature is responsible for the first $8$ bits and each of the two local feature vectors for $4$ of the remaining $8$ bits. \paragraph{Empirical analysis of \textbf{K}} As depicted in Tab.~\ref{teb:ablation}, the performance is generally not very sensitive to $K$. Also, we observe that when a local feature vector is responsible for generating fewer than $4$ bits, the model fails to converge. In light of the above observations, we empirically set $K$ to $2$ across the four different hash bit lengths. \section{Conclusion} In this paper, we have proposed a novel pure transformer-based deep hashing framework (\textbf{TransHash}) to tackle the challenging large-scale image retrieval problem. Specifically, we introduce a novel Siamese transformer architecture for extracting robust image features with pairwise similarity learning. On top of that, in an attempt to learn more fine-grained features, we add a dual-stream feature learning module to learn global and local features simultaneously. A well-specified Bayesian learning framework is adopted on top of all the pairwise features for similarity-preserving learning. The overall framework is optimized in an end-to-end fashion. We conduct extensive experiments and demonstrate that \textbf{TransHash} yields notable performance gains compared to the state-of-the-art deep hashing methods on the \textbf{CIFAR-10}, \textbf{NUSWIDE} and \textbf{IMAGENET} datasets. \bibliographystyle{ACM-Reference-Format}
{ "timestamp": "2021-05-06T02:07:44", "yymm": "2105", "arxiv_id": "2105.01823", "language": "en", "url": "https://arxiv.org/abs/2105.01823" }
\section{Introduction}\label{introduction} \indent A \textit{starter} in an additive abelian group $G$ of odd order $n$ is a partition of the set $G^*$ of all non-zero elements of $G$ into $q=(n-1)/2$ pairs $\{\{s_i,t_i\}\}_{i=1}^q$ such that the elements $\pm(s_i-t_i), i=1,...,q$, comprise $G^*$. Starters exist in any additive abelian group of odd order $n\ge3$. For example, the partition $\{\{x,-x\}\mid x\in G,x\ne0\}$ of $G^*$ is a starter in $G$. For convenience, we will call a partition of a set of even cardinality into pairs a \textit{2-partition} of this set. In this paper, we consider only cyclic additive abelian groups, more precisely, the groups $\mathbb{Z}_n$ of integers modulo $n$, where $n\ge3$ is odd. \begin{df}\label{strong, skew starters} A 2-partition $S=\{\{x_i,y_i\}\}_{i=1}^q$ of $\mathbb{Z}^*_{n}$, $ n=2q+1$, $q\ge 1$, is called (a) a starter in $\mathbb{Z}_n$, if \begin{equation}\label{starter} \{\pm(x_i-y_i)\pmod{n}|\{x_i,y_i\}\in S,\ 1\le i\le q\}=\mathbb{Z}^*_{n}; \end{equation} (b) strong, if \begin{equation}\label{strong} \hat S=\{(x_i+y_i)\pmod{n}|\{x_i,y_i\}\in S,\ 1\le i\le q\}\subset \mathbb{Z}^*_{n}\quad {\rm and} \quad |\hat S|=q;\end{equation} (c) skew, if \begin{equation}\label{skew} \{\pm(x_i+y_i)\pmod{n}|\{x_i,y_i\}\in S,\ 1\le i\le q\}=\mathbb{Z}^*_{n};\end{equation} (d) cardioidal \cite{b21}, if all its pairs are cardioidal of order $n$, that is, if each pair of the partition satisfies \begin{equation}\label{card}\{x,y\}=\{i,2i\pmod{n}\}\end{equation} for some $i\in\mathbb{Z}_n^*$; (e) Skolem, if all its pairs are Skolem of order $n$, that is, if each pair $\{x,y\}$ of the partition is such that \begin{equation} \label{skolem} y-x \le q\pmod{n} \Leftrightarrow y>x. \end{equation} Here we assume $1<2<...<2q$ to be the order of the non-zero integers modulo $n$. \end{df} We will refer to 2-partitions of $\mathbb{Z}_n^*$ as 2-partitions {\it of order} $n$. A {\it strong starter} in $\mathbb{Z}_n$ is a 2-partition of order $n$ that possesses properties (\ref{starter}) and (\ref{strong}). A {\it skew starter} in $\mathbb{Z}_n$ is a 2-partition of order $n$ that possesses properties (\ref{starter}) and (\ref{skew}). Clearly, skew 2-partitions comprise a subset of strong 2-partitions. Consequently, any skew starter is strong. Also, it is known that if a 2-partition is cardioidal, then it is Skolem \cite{b21}. No other implications hold, and a 2-partition may possess any combination of these properties independently of one another. \begin{ex}\label{example of independence of properties} The 2-partition $R=\{\{1,2\},\{3,4\}\}$ of $\mathbb{Z}_5^*$ is strong ($0\ne 1+2\ne 3+4\ne 0\pmod{5}$) and cardioidal $(i=1\, {\rm and}\, 4)$, but not a starter ($2-1=4-3$) and not skew ($\pm 3=\mp 7 \pmod{5}$).\\ The 2-partition $Q=\{\{1,3\},\{2,5\},\{4,6\},\{7,8\}\}$ of $\mathbb{Z}_9^*$ is Skolem and skew: $\{\pm(1+3)\equiv \pm 4\pmod{9},\pm(2+5)\equiv \mp 2\pmod{9}, \pm(4+6)\equiv \pm 1\pmod{9},\pm(7+8)\equiv \mp 3\pmod{9}\}=\mathbb{Z}_9^*$.
But it is not a starter ($3-1=6-4$), nor is it cardioidal, as, for example, the pair $\{1,3\}$ does not satisfy property (\ref{card}).\\ The starter $T=\{\{2,3\},\{4,6\},\{5,1\}\}$ in $\mathbb{Z}_7$ is strong and skew: $\{\pm(2+3)\equiv \pm 5\pmod{7}, \pm(4+6)\equiv \pm 3\pmod{7},\pm(5+1)\equiv \mp1\pmod{7}\}=\mathbb{Z}_7^*$, but not Skolem, as the pair $\{5,1\}$ does not satisfy property (\ref{skolem}).\\ The starter $S=\{\{1,2\},\{10,12\},\{3,6\},\{4,8\},\{11,16\},\{9,15\},\{7,14\},\{5,13\}\}$ in $\mathbb{Z}_{17}$ is strong, as all the pairs yield pairwise different non-zero sums $\pmod{17}$, and Skolem \cite{b09}. However, $S$ is not skew as, for example, the pairs $\{10,12\}$ and $\{4,8\}$ yield the sums $5\pmod{17}$ and $12\pmod{17}$, respectively, and $5\equiv -12\pmod{17}$. Nor is $S$ cardioidal as, for example, the pair $\{5,13\}$ does not satisfy property (\ref{card}). \end{ex} Strong starters were first introduced by Mullin and Stanton in 1968 \cite{b016} for constructing Room squares and Howell designs. In 1969, Mullin and Nemeth \cite{b01} gave a general construction for finding these starters in cyclic groups. The question of the existence (or non-existence) of a strong starter in an abelian group is crucial in the theory of Room squares. We refer readers interested in constructions of strong starters to \cite{b014}, \cite{b05}, \cite{b01} and the references therein. Strong starters in groups of orders 3, 5 and 9 do not exist \cite[p.144]{b06}. It is an open question whether there exists a strong starter in every cyclic group of odd order exceeding 9. In 1981, Dinitz and Stinson \cite{b07} found (by a computer search) strong starters in the cyclic group of order $n$ for all odd $7\le n\le999,\ n\ne 9$. At present, the strongest known general statement on the existence of strong starters is the following \cite[p.625]{b014}: \textit{For any $n>5$ coprime to 6, an abelian group of order $n$ admits a strong starter.} Skew starters give rise to special Room squares called \textit{skew Room squares}, which are important combinatorial designs. It is known \cite[p.627]{b014} that skew starters of order $n$ do not exist if $3\mid n$. \smallskip \smallskip \textit{Skolem starters}, the objects of our close attention, are defined only in $\mathbb{Z}_n$. \begin{df}\label{defn of skolem starter1} Let $n=2q+1$, and let $1<2<...<2q$ be the order of the non-zero integers modulo $n$. A starter in $\mathbb{Z}_n$ is Skolem if it can be written as a set of ordered pairs $\{(s_i,t_i)\}_{i=1}^q$, where $t_i-s_i\equiv i\pmod{n}$ and $t_i>s_i,\ 1\le i\le q$. \end{df} Skolem starters received their name \cite{b09} after Skolem sequences. A \textit{Skolem sequence} of order $q$ is a sequence $(s_1,...,s_{2q})$ of integers from $D=\{1,...,q\}$ such that for each $i\in D$ there is exactly one $j\in \{1,...,2q\}$ such that $s_j=s_{j+i}=i$. Skolem sequences exist iff $q\equiv0$ or $1\pmod{4}$ \cite{b014}. They were originally used by Skolem in 1957 for the construction of Steiner triple systems \cite{b02}. Skolem sequences are widely applied in many areas such as triple systems, balanced ternary designs, factorizations of complete graphs, starters, and graph labelling. Readers interested in these applications may turn to \cite{b23} and the references therein. Given a Skolem sequence $(x_1,x_2,...,x_{2q})$, consider all pairs $\{i_k,j_k\}$ such that $j_k>i_k$ and $x_{i_k}=x_{j_k}=k,\ k=1,...,q$.
This set of pairs forms a partition of the set $\mathbb{Z}_n^*$ of all non-zero elements of $\mathbb{Z}_n$, where $n=2q+1$. Since $j_k-i_k\equiv k\pmod{n}$ (and consequently $i_k-j_k\equiv -k\pmod{n}$), $k=1,...,q$, this set of pairs is a starter in $\mathbb{Z}_n$. \begin{ex}\label{skolem sequence} The sequence $(1,1,5,2,4,2,3,5,4,3)$ is a Skolem sequence of order $5$: the length of the sequence is $2\cdot5=10$, and $x_1=x_2=1,x_4=x_6=2,x_7=x_{10}=3,x_5=x_9=4, x_3=x_8=5$, so the $1$'s, $2$'s, $3$'s, $4$'s and $5$'s are one, two, three, four and five positions apart, respectively. This Skolem sequence yields the starter $T=\{\{1,2\},\{4,6\},\{7,10\},\{5,9\},\{3,8\}\}$ in $\mathbb{Z}_{11}$. \end{ex} \begin{lemma}\label{defn of Skolem starter2} A starter $S$ in $\mathbb{Z}_k$ is Skolem if and only if all its pairs are Skolem of order $k$. In other words, Skolem starters are partitions with properties (\ref{starter}) and (\ref{skolem}). \end{lemma} \begin{proof} The lemma follows from Definitions \ref{strong, skew starters} and \ref{defn of skolem starter1}. \end{proof} Clearly, Skolem starters in $\mathbb{Z}_n$ are in one-to-one correspondence with Skolem sequences of order $q=(n-1)/2$. Therefore, Skolem starters exist in $\mathbb{Z}_n$ iff $n\equiv1$ or $3\pmod{8}$. {\it Strong Skolem starters} are partitions with properties (\ref{starter}), (\ref{strong}) and (\ref{skolem}). The value of strong Skolem starters of order $2q+1$ lies in their applicability to constructing Room squares and cubes of order $2q+2$ on the one hand, and \textit{Steiner triple systems}, STS($6q+1$), on the other. Recall that an STS($v$) is a collection of $3$-subsets, called \textit{blocks}, of a $v$-set $S$, such that every two elements of $S$ occur together in exactly one of the blocks. \begin{theorem}\label{Shalaby theorem} $($Shalaby, $1991\ \cite[pp.60-62]{b09}.)$ For $11\le n\le 57, n\equiv1$ or $3\pmod{8}$, $\mathbb{Z}_n$ admits a strong Skolem starter. \end{theorem} \begin{conjecture}\label{Shalaby conjecture} $($Shalaby, $1991\ \cite[p. 62]{b09}.)$ Every $\mathbb{Z}_n$ with $n\equiv 1$ or $3\pmod{8}$ and $n\ge11$ admits a strong Skolem starter. \end{conjecture} Until 2018, only finitely many strong Skolem starters were known. In 2018, Ogandzhanyants et al. explicitly constructed an infinite family of strong Skolem starters \cite{b21}, proving the following. \begin{theorem}\label{Ogandzhanyants theorem1} Let $n=\prod_{i=1}^m p_i^{k_i}$, where $p_i>3,\ i=1,...,m$, are pairwise distinct primes such that $\mathrm{ord}_{p_i}(2)\equiv 2\pmod{4}$, and $k_i\in\mathbb{N},\ i=1,...,m$. Then $\mathbb{Z}_n$ admits a skew Skolem starter. \end{theorem} In addition, it was shown that all the Skolem starters found in \cite{b21} are cardioidal starters, that is, they possess property (\ref{card}), and that no strong cardioidal starter lies outside of the family fully described in \cite{b21}. The discovery in \cite{b21} drew the attention of other researchers towards the proof of Conjecture \ref{Shalaby conjecture}; see, for example, \cite{b26} and the references therein. They explored alternative approaches to constructing strong Skolem starters, but no infinite family of strong Skolem starters other than the strong cardioidal starters has been found. Theorems \ref{intermediate result} and \ref{main}, stated and proved in this paper, allow the formation of new infinite families of strong (and skew) Skolem starters of composite orders which are not cardioidal, and thus significantly extend the previous result.
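To make the preceding definitions and examples easy to check, we include a short Python sketch (ours, purely illustrative): it tests the starter property (\ref{starter}) and the skew property (\ref{skew}) of Definition \ref{strong, skew starters}, and converts the Skolem sequence of Example \ref{skolem sequence} into the starter $T$ in $\mathbb{Z}_{11}$.
\begin{verbatim}
def is_starter(pairs, n):          # the starter property: +/- differences
    diffs = {d % n for x, y in pairs for d in (x - y, y - x)}
    return diffs == set(range(1, n))

def is_skew(pairs, n):             # the skew property: +/- sums cover Z_n^*
    sums = {s % n for x, y in pairs for s in (x + y, -x - y)}
    return sums == set(range(1, n))

def skolem_to_starter(seq):        # pairs {i_k, j_k} with x_{i_k}=x_{j_k}=k
    n = len(seq) + 1
    pos = {}
    for idx, val in enumerate(seq, start=1):
        pos.setdefault(val, []).append(idx)
    return [tuple(p) for p in pos.values()], n

T, n = skolem_to_starter([1, 1, 5, 2, 4, 2, 3, 5, 4, 3])
print(sorted(T), is_starter(T, n))   # the starter T in Z_11, True
\end{verbatim}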
Gross \cite[p.170]{b22} in 1974 indicated a way to produce a starter for the group $G\oplus H$, the direct sum of two finite abelian groups, given a starter for $H$ and a set of starters for $G$. He showed that under certain conditions, strong starters for $H$ and $G$ give rise to a strong starter for $G\oplus H$. Our constructions of the products given in Definitions \ref{WST} and \ref{dfn of cardioidal product} are inspired by that paper. However, in contrast with Gross, who focused on the existence of strong starters and starters with adders in a general setting, our constructions are explicitly defined in the cyclic groups $\mathbb{Z}_n$, as we are concerned with skew and strong Skolem starters. In addition, most of our statements have a converse. Our construction of products resembles the one given by Turgeon in 1979 \cite{b28} for \textit{additive sequences of permutations} in the general context of difference sets. Indeed, Skolem starters can be treated as a very special case of difference sets. However, in this paper we avoid over-generalization and adapt the presentation specifically to our needs. Thus, we first apply the construction to 2-partitions of $\mathbb{Z}_n^*$ without any restrictions imposed on them. Then we endow the 2-partitions with one of the properties stated in Definition \ref{strong, skew starters}, independently of all the other properties, and deduce the direct and inverse relationships of these properties for the resulting construction. Whereas the direct relations may also be concluded from the previous research on difference sets, the converse statements are our contribution to the topic. The structure of this paper is as follows. In Section 2, we introduce the notion of a product of a pair of 2-partitions of $\mathbb{Z}_{2q+1}^*$ and $\mathbb{Z}_{2p+1}^*$, respectively. In Section 3, we focus our attention on starters and other special classes of 2-partitions. Subsection 3.1 gives some preliminaries. In Subsection 3.2, we prove several properties of the product of two 2-partitions and give an important intermediate result, Theorem \ref{intermediate result}. In Subsection \ref{The parametrized product of starters}, we generalize the initial definition of the product and compare its properties to the initial one. In Subsection \ref{Cardioidal product of two 2-partitions}, we show explicit ways to apply the products of Skolem starters. In Section 4, we conclude with a discussion of several implications of the statements proved in this paper and give the main result of this paper, Theorem \ref{main}. \section{The product of 2-partitions: Construction}\label{section construction} \begin{lemma}\label{orderx} Let $n=2q+1$, $q\ge 1$. From any 2-partition $S=\{\{a_i,b_i\}\}^q_{i=1}$ of $\mathbb{Z}^*_n$, it is possible to make a set of ordered pairs $\bar{S}=\{(x_i,y_i)\}_{i=1}^q$, where either $x_i=a_i, y_i=b_i$ or $x_i=b_i, y_i=a_i$, for all $1\le i\le q$, such that \begin{equation}\label{order} \cup^q_{i=1}\{\pm x_i\}=\mathbb{Z}_n^*. \end{equation} \end{lemma} \begin{proof} Let us denote by $\{a,b\}\mapsto (x,y)$ the operation of making an ordered pair $(x,y)$ from an unordered pair $\{a,b\}$ by setting $x\equiv a,\ y\equiv b\pmod{n}$, then removing $\{a,b\}$ from $S$ and placing $(x,y)$ in $\bar S$. Below we describe an explicit construction of $\bar S$. First, for each pair $\{a_i,b_i\}$ with $b_i \equiv -a_i\pmod{n}$, if any exist, set $\{a_i,-a_i\}\mapsto (x_i,y_i)$ and place it at the end of the list $\bar S$.
From all the other pairs remaining in $S$, pick any pair, say $\{a_1,b_1\}\in S$, and set $\{a_1,b_1\}\mapsto (x_1,y_1)$. Then, find the pair to which the element $-b_1$ belongs. Without loss of generality (WLOG), let $a_k\equiv-b_1\pmod{n}$. Then set $\{-b_1,b_k\}\mapsto (x_2,y_2)$. Then, find the pair to which the element $-b_k$ belongs. WLOG, let $a_j\equiv-b_k\pmod{n}$. Then set $\{-b_k,b_j\}\mapsto (x_3,y_3)$. And so on, until the element $-a_1$ appears in some pair $\{a_m,b_m\}\in S$, which produces $(x_l,y_l)$ with $y_l\equiv -a_1\pmod{n}$. Note that, by the construction, $x_2\equiv -y_1\pmod{n}$, $x_3\equiv -y_2\pmod{n}$, etc. Clearly, such a collection of pairs spans the subset $\{x_1,y_1,-y_1,y_2,-y_2,...,-x_1\}\subset\mathbb{Z}_n^*$. Finally, pick any remaining pair in $S$, give it an order, and continue the process until all the pairs are ordered. \end{proof} \begin{remark} Note that for the set $\bar S$ described in Lemma \ref{orderx}, we automatically have: \begin{equation}\label{ordery} \cup^q_{i=1}\{\pm y_i\}=\mathbb{Z}_n^*. \end{equation} \end{remark} \begin{ex} For $q=6$ and the partition $\{ \{1,12\}, \{2,3\}, \{4,6\}, \{5,7\}, \{8,9\}, \{10,11\}\} $ of $\mathbb{Z}_{13}^*$, we form $\bar S$ by making the following 3 clusters: $$ \bar S=\{ (2,3), (-3,-2), \quad (4,6), (-6,5), (-5,-4),\quad (1,-1)\pmod{13}\}, $$ with property (\ref{order}), as required: $$ \{\pm 2, \mp 3, \pm 4, \mp 6, \mp 5, \pm 1\pmod{13}\}= \mathbb{Z}_{13}^*. $$ \end{ex} Let $S=\{\{a_i,b_i\}\}^q_{i=1}$ be a 2-partition of $\mathbb{Z}_{2q+1}^*$. Denote by $\bar S$ a set of $q$ ordered pairs of $S$ that obeys property (\ref{order}): \begin{equation}\label{SS} \bar S=\{(x_i,y_i)\}_{i=1}^q. \end{equation} The existence of such sets is secured by Lemma \ref{orderx}. In addition, by $\bar S'$ we denote the set $\bar S'=\{(-x_i,-y_i)\}_{i=1}^q$, and by $\tilde S$ we denote a set of arbitrarily ordered pairs of $S$. \begin{df}\label{WST} Given two 2-partitions, $S$ of $\mathbb{Z}_{2q+1}^*$ and $T$ of $\mathbb{Z}_{2p+1}^*$, let us form sets of ordered pairs $\tilde{S}$ and $\bar{T},\ \bar{T'}$ as specified above. Consider the set $W_{ST}=\{\{u_i,v_i\}\}^{k}_{i=1}$ of $k=2qp+q+p$ pairs of the form \begin{equation}\label{uv} \{u,v\},\quad {\rm where}\quad u=(2q+1)r+x,\quad v=(2q+1)t+y, \end{equation} divided into the following types:\\ (i) $(2p+1)q$ pairs: one for each $(r,t)\in \bar{T}\cup \bar{T'}\cup \{(0,0)\}$ and for each $(x,y)\in \tilde S$, and\\ (ii) $p$ pairs: one for each $(r,t)\in \bar{T}$ and $x=y=0$.\\ We will call the set of pairs $W_{ST}$ a product of $S$ and $T$. \end{df} \begin{ex}\label{ex1} Let us construct the set $W_{ST}$ for the 2-partitions $S=T=\{\{1,2\}\}$ of $\mathbb{Z}_3^*$. Here $q=p=1$. Take $\tilde S= \bar T=\{(1,2)\}$ and $\bar T'=\{(2,1)\}$. So we have the pairs of the two types:\\ (i) $\{3\times0+1,3\times0+2\}=\{1,2\}$, $\{3\times 1+1,3\times 2+2\}=\{4,8\}$, $\{3\times 2+1,3\times 1+2\}=\{7,5\}$;\\ (ii) $\{3\times 1+0,3\times 2+0\}=\{3,6\}$.\\ The set of these four pairs constitutes $W_{ST}$. \end{ex} \begin{remark}\label{starred types} The pairs of $W_{ST}$ can be formed in various ways, depending on the choices made in the process of constructing $\tilde S$ and $\bar T$ from the 2-partitions $S$ and $T$. So, a product of two 2-partitions is not unique. Nevertheless, the properties proven below hold for $W_{ST}$ regardless of the ordering choices. We will consider an alternative construction of a product in Sections 3.3 and 3.4, and we will outline further possibilities in Section \ref{Conclusion}. \end{remark}
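As a sanity check of Definition \ref{WST}, the following Python sketch (ours, purely illustrative) assembles a product $W_{ST}$ from given ordered pairs and reproduces Example \ref{ex1}; the input set of ordered pairs for $T$ is assumed to satisfy property (\ref{order}).
\begin{verbatim}
def product(S_ordered, T_bar, n, m):
    """S_ordered: ordered pairs of a 2-partition of Z_n^*;
    T_bar: ordered pairs of a 2-partition of Z_m^* whose first
    components, together with their negatives, exhaust Z_m^*."""
    T_prime = [(-r % m, -t % m) for (r, t) in T_bar]
    W = []
    for (r, t) in T_bar + T_prime + [(0, 0)]:      # pairs of type (i)
        for (x, y) in S_ordered:
            W.append(((n * r + x) % (n * m), (n * t + y) % (n * m)))
    for (r, t) in T_bar:                           # pairs of type (ii)
        W.append((n * r, n * t))
    return W

print(product([(1, 2)], [(1, 2)], 3, 3))
# [(4, 8), (7, 5), (1, 2), (3, 6)]: the four pairs of the example above
\end{verbatim}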
A product $W_{ST}$ of two 2-partitions, $S$ of $\mathbb{Z}_n^*$ and $T$ of $\mathbb{Z}_m^*$, as we will prove, preserves certain properties of the factors. The most general such statement is Theorem \ref{W is a partition} below. Prior to that, we recall the following simple results needed in the proofs. \begin{lemma}\label{modul} Let $m,n$ be any natural numbers and $a,b,c,d$ be integers. \begin{enumerate} \item If $a\equiv b \pmod{mn}$ then $a\equiv b \pmod{n}$ and $a\equiv b \pmod{m}$. \item If $(an+c)\equiv (bn+c) \pmod{mn}$ then $a\equiv b \pmod{m}$. \item If $(an+c)\equiv (bn+d) \pmod{n}$ then $c\equiv d \pmod{n}$. \item Let $X_c^n$ be a finite multiset of integers congruent modulo ${n}$ to a given integer $c$, not necessarily all distinct. If $|X_c^n|>m$ then there exist $b, d \in X_c^n$ such that $b\equiv d \pmod {mn}$. \end{enumerate} \end{lemma} \begin{proof} 1. By definition, $a\equiv b \pmod{mn}$ means $(mn) \mid (a-b)$. But then $m \mid (a-b)$, so $a\equiv b \pmod{m}$, and $n \mid (a-b)$, so $a\equiv b \pmod{n}$. 2. Similarly, $(an+c)\equiv (bn+c) \pmod{mn}$ means $(mn)\mid ((a-b)n)$. Then $m \mid (a-b)$, so $a\equiv b \pmod{m}$. 3. As well, $(an+c)\equiv (bn+d) \pmod{n}$ means $n\mid((a-b)n+(c-d))$. Then $n \mid (c-d)$, so $c\equiv d \pmod{n}$. 4. For any integer $0\le c< mn$ there are exactly $m$ numbers $0\le a < mn$ congruent modulo $n$ to $c$. Thus, for any multiset $X_c^n$ of more than $m$ integers, the Dirichlet principle implies the existence of $b\equiv d \pmod {mn}$. \end{proof} \begin{theorem}\label{W is a partition} Let $n,m\ge 3$ be odd integers and let $S$ and $T$ be 2-partitions of $\mathbb{Z}_n^*$ and $\mathbb{Z}_m^*$, respectively. Their product $W_{ST}$ (Definition \ref{WST}) is a 2-partition of $\mathbb{Z}_{mn}^*$. \end{theorem} \begin{proof} Let $n=2q+1$ and $m=2p+1$, $p,q\ge 1$, in accordance with Definition \ref{WST}. Let us also establish the natural order in $\mathbb{Z}_k:\ 0<1<...<k-1,\ k\in \mathbb{N}$. By definition, $W_{ST}$ consists of $2pq+p+q$ pairs, totaling $4pq+2p+2q=mn-1$ elements, which equals the cardinality of $\mathbb{Z}_{mn}^*$. It remains to show that all these elements of $W_{ST}$ are distinct modulo $mn$. Indeed, all the elements of the pairs of type (ii) are distinct, as $T$ is a 2-partition of $\mathbb{Z}_m^*$, and they are multiples of $n$. All the elements of the pairs with $r=t=0$ and $(x,y)\in \tilde S$ are distinct and less than $n$, because $S$ is a 2-partition of $\mathbb{Z}_n^*$. All the remaining elements of the pairs of type (i) are greater than $n$ and are not multiples of $n$. Assume, for the sake of contradiction, that among them there are pairs $\{u_1,v_1\}$ and $\{u_2,v_2\}$ with a non-empty intersection. Here \begin{equation}\label{u1v1} u_1=r_1n+x_1,\, v_1=t_1n+y_1, \quad u_2=r_2n+x_2,\, v_2= t_2n+y_2, \end{equation} and, WLOG, we let $(x_i,y_i)\in\tilde{S},\ i=1,2,\ (r_1,t_1)\in\bar{T},\ (r_2,t_2)\in\bar{T'}$. Let, for example, $u_1\equiv u_2\pmod{mn}$, that is, $(r_1n+x_1)\equiv (r_2n+x_2)\pmod{mn}$. Then, by Lemma \ref{modul}(1\&3), $x_1\equiv x_2\pmod{n}$. Therefore, by Lemma \ref{modul}(2), $r_1\equiv r_2\pmod{m}$. But this is impossible due to property (\ref{order}). A similar argument leads to a contradiction if one assumes $v_1\equiv v_2\pmod{mn}$, $u_1\equiv v_2\pmod{mn}$ or $u_2\equiv v_1\pmod{mn}$. This proves that all the $4pq+2p+2q=mn-1$ elements appearing in the pairs of $W_{ST}$ are distinct. Therefore $W_{ST}$ is a 2-partition of $\mathbb{Z}^*_{mn}$.
\end{proof} \section{The product of special classes of 2-partitions} \subsection{Preliminaries} The 2-partitions of $\mathbb{Z}_n^*$ we are mainly concerned with are strong and skew Skolem starters. Lemma \ref{orderx} applies to starters in $\mathbb{Z}_{2q+1}$, as they form a 2-partition of $\mathbb{Z}^*_{2q+1}$. Before we get to the properties of the product of two starters, we present some additional definitions and a lemma which will be helpful in the sequel. \begin{df}\label{defn of pairs} A pair $\{x,y\}\in S$ is called a canonical pair of order $k$ if $\{x,y\}=\{i,-i\pmod{k}\}$ for some $i\in\mathbb{Z}_k^*$. If all pairs of $S$ are canonical, then $S$ is called a canonical starter of order $k$. \end{df} \begin{df} Two 2-partitions $S$ and $S'$ in the same group are called conjugate if $\{x,\ y\}\in S$ implies $\{-x,\ -y\}\in S'$. \end{df} Obviously, every 2-partition has a conjugate. Note that a 2-partition of $\mathbb{Z}_n^*$ which is a canonical starter is always conjugate to itself. Moreover, a starter is canonical if and only if it is \textit{self-conjugate}. The following properties of conjugate 2-partitions are rather trivial, as each of them follows immediately from the definitions of its counterparts, but very important. \begin{lemma} If a 2-partition is either a starter, or canonical, or strong, or skew, or Skolem, or cardioidal, so is its conjugate. \end{lemma} \subsection{Properties of the product of two 2-partitions}\label{the properties of two 2-patitions} In Example \ref{ex1}, the two 2-partitions we use are starters in $\mathbb{Z}_3$. (We have no choice, as the only 2-partition of $\mathbb{Z}_3^*$ is a starter in $\mathbb{Z}_3$.) Their product turns out to be a starter in $\mathbb{Z}_{3\cdot 3}=\mathbb{Z}_9$. Consider the product of two starters from different groups. \begin{ex} \label{ex2} Let us construct the set $W_{ST}$ for the starters $S=\{\{1,4\},\{2,3\}\}$ in $\mathbb{Z}_5$ and $T=\{\{1,2\}\}$ in $\mathbb{Z}_3$. In this case $n=5, m=3$ and $q=2, p=1$. Take $\tilde S=\{(1,4), (2,3)\}$, $\bar T=\{(1,2)\}$, $\bar T'=\{(2,1)\}$. Then we have the pairs of the two types:\\ (i) $\{5\times0+1,5\times0+4\}=\{1,4\},\ \{5\times0+2,5\times0+3\}=\{2,3\}$, $\{5\times 1+1,5\times 2+4\}= \{ 6, 14\} $, $\{5\times 1+2,5\times 2+3\}= \{7,13\} $, $\{5\times 2+1,5\times 1+4\}= \{ 11, 9\} $, $\{5\times 2+2,5\times 1+3\}= \{12,8\} $;\\ (ii) $\{5\times 1+0, 5\times 2+0\}=\{5,10\}$.\\ The set of these seven pairs constitutes $W_{ST}$. In fact, $W_{ST}$ is a starter in $\mathbb{Z}_{3\cdot 5}=\mathbb{Z}_{15}$. \end{ex} This is not coincidental: a product of two starters is a starter. It turns out that the converse is true as well; that is, if $W_{ST}$ is a starter, then both $S$ and $T$ are starters. The following theorem secures this property. \begin{theorem}\label{W is a starter} Let $n,m\ge 3$ be odd integers and let $S$ and $T$ be 2-partitions of $\mathbb{Z}_n^*$ and $\mathbb{Z}_m^*$, respectively. Their product $W_{ST}$ (Definition \ref{WST}) is a starter in $\mathbb{Z}_{mn}$ if and only if $S$ is a starter in $\mathbb{Z}_n$ and $T$ is a starter in $\mathbb{Z}_m$. \end{theorem} \begin{proof} (a) Sufficiency. Let $S$ be a starter in $\mathbb{Z}_n$ and $T$ be a starter in $\mathbb{Z}_m$. In order to prove that $W_{ST}$ is a starter in $\mathbb{Z}_{mn}$, we need to show that $W_{ST}$ is a partition of $\mathbb{Z}^*_{mn}$ into pairs $\{\{u_i,v_i\}\}_{i=1}^{(mn-1)/2}$ such that \begin{equation}\label{starter's differences} \{\pm(u_i-v_i)\pmod{mn}|\{u_i,v_i\}\in W_{ST}\}=\mathbb{Z}^*_{mn}.
\end{equation} Now, let us look at the differences $\pm( u_k-v_k)\pmod{mn}$, $1\le k\le \frac{mn-1}2$. Since $T$ is a starter in $\mathbb{Z}_m$, the pairs of type (ii) make all possible $m-1$ differences of the form $n\Delta$, where $\Delta\in \mathbb{Z}_m^*$. The differences of the pairs of type (i) are not divisible by $n$, so they do not interfere with these. Consider two distinct pairs $\{u_1,v_1\}$ and $ \{u_2,v_2\}$ of type (i). Suppose, for the sake of contradiction, that $(u_1-v_1)\equiv (u_2-v_2) \pmod{mn}$. Using notation (\ref{u1v1}), we have \begin{equation}\label{W starter1} [(r_1 n+x_1)-(t_1n+y_1)] \equiv [(r_2n+x_2)- (t_2n+y_2)]\pmod{mn}. \end{equation} By Lemma \ref{modul}(1), equation (\ref{W starter1}) implies $$[(r_1 n+x_1)-(t_1n+y_1)] \equiv [(r_2n+x_2)- (t_2n+y_2)]\pmod{n}.$$ Then, by Lemma \ref{modul}(3), we obtain $$(x_1-y_1) \equiv (x_2- y_2)\pmod{n}.$$ Since $S$ is a starter in $\mathbb{Z}_n$, this is possible only if $\{x_1,y_1\}= \{x_2,y_2\}$. WLOG, assume that this pair is ordered by $(x_1,y_1)= (x_2,y_2)=(x,y)\in \bar{S}$. We have \begin{equation}\label{W starter2} ((r_1 n+x)-(t_1n+y)) \equiv ((r_2n+x)- (t_2n+y))\pmod{mn}. \end{equation} By Lemma \ref{modul}(2), equation (\ref{W starter2}) implies $(r_1-t_1)\equiv (r_2- t_2)\pmod{m}$. Since $T$ is a starter in $\mathbb{Z}_m$, this is possible only if $r_1=r_2$ and $t_1=t_2$, which contradicts our assumption that $\{u_1,v_1\}$ and $ \{u_2,v_2\}$ are distinct pairs. \smallskip (b) Necessity. Suppose that at least one of the 2-partitions $S$ and $T$ is not a starter of the corresponding group. Then to show that $W_{ST}$ is not a starter, it suffices to find at least two pairs of $W_{ST}$ which produce the same differences. If $T$ is not a starter then it contains at least two pairs $\{r_1,t_1\},\ \{r_2,t_2\}$ such that $\{\pm(r_1-t_1)\}=\{\pm(r_2-t_2)\} \pmod{m}$. Consequently, two pairs $\{r_1n,t_1n\},\ \{r_2n,t_2n\}$ in $W_{ST}$ of type (ii) will yield the same differences modulo $mn$. If $S$ is not a starter then it contains at least two pairs $\{x_1,y_1\},\ \{x_2,y_2\}$ such that $\{\pm(x_1-y_1)\}=\{\pm(x_2-y_2)\}\pmod{n}$. Then there are $2m$ pairs in $W_{ST}$ of type (i), which produce differences congruent to $\pm (x_1-y_1)$ modulo $n$. They are $\{x_1,y_1\},\ \{x_2,y_2\}$, $\{r_i n+x_1,t_i n+y_1\},\ \{r_in+x_2,t_in+y_2\}$, $\{-r_i n+x_1,-t_i n+y_1\},\ \{-r_in+x_2,-t_in+y_2\}$, $1\le i\le p$. Hence, by Lemma \ref{modul} (4), we conclude that there are two pairs among these $2m$ pairs that satisfy $\{\pm(u_1-v_1)\}=\{\pm(u_2-v_2)\}\pmod{mn}$. So, (\ref{starter's differences}) is impossible, which means $W_{ST}$ is not a starter in $\mathbb{Z}_{mn}$. This completes the proof of necessity. \end{proof} The next statement clarifies the conditions for obtaining a strong 2-partition. \begin{theorem}\label{W is strong} Let $n,m\ge 3$ be odd integers and $S$ and $T$ be 2-partitions of $\mathbb{Z}_n^*$ and $\mathbb{Z}_m^*$ respectively. Then their product $W_{ST}$ (Definition \ref{WST}) is a strong 2-partition of $\mathbb{Z}_{mn}^*$ if and only if $S$ is strong and $T$ is skew. \end{theorem} \begin{proof} (a) Sufficiency. Let $S$ be strong and $T$ be skew. To show that $W_{ST}$ is strong, we have to show that if $\{u_1,v_1\}$ and $\{u_2,v_2\}$ are two distinct pairs in $W_{ST}$ then $u_1+v_1\not\equiv u_2+v_2\pmod{mn}$, and that $u+v\not\equiv 0\pmod{mn}$ for any $\{u,v\}\in W_{ST}$. Suppose, for the sake of contradiction, \begin{equation}\label{thm2.1} u_1+v_1\equiv u_2+v_2\pmod{mn}.
\end{equation} Using notation (\ref{u1v1}), we have $$(r_1+t_1)n+x_1+y_1\equiv (r_2+t_2)n+x_2+y_2 \pmod{mn}.$$ Consequently, by Lemma \ref{modul}(1), we obtain $$(r_1+t_1)n+x_1+y_1\equiv (r_2+t_2)n+x_2+y_2 \pmod{n}.$$ Then, by Lemma \ref{modul}(3), $x_1+y_1\equiv x_2+y_2\equiv c \pmod{n}$. If $c=0$, the two pairs must be of type (ii), since for pairs of type (i) we have $x_i+y_i\not\equiv 0\pmod n$ ($S$ is strong). Otherwise, since $\{x_i,y_i\}\in S,\ i=1,2$, and $S$ is strong, we conclude $\{x_1,y_1\}=\{x_2,y_2\}$. In either case, by Lemma \ref{modul}(2), (\ref{thm2.1}) implies $r_1+t_1\equiv r_2+t_2\pmod{m}$. Here, the pairs $(r_1,t_1), (r_2,t_2)$ are from either set $\bar T$ or $\bar T'$. By the hypothesis of the theorem, $T$ is skew, which means that all the sums of the pairs of $T$ and of $T'$ are distinct $\pmod{m}$. Thus, there are two options: \begin{enumerate} \item either $r_1=r_2$ and $t_1=t_2$, \item or $r_1=t_2$ and $t_1=r_2$. \end{enumerate} Case 1 contradicts our assumption that $\{u_1,v_1\}$ and $\{u_2,v_2\}$ are two distinct pairs in $W_{ST}$. Case 2 is impossible for the following reason. Let $r_1=t_2=r$ and $t_1=r_2=t$. WLOG, assume $(r,t)\in \bar{T}$ and $(t,r)\in \bar{T'}$. But $(-r,-t)\in \bar{T'}$, and $r-t=(-t)-(-r)$, which implies that $t\equiv -r\pmod{m}$. The latter means that $\{-r,r\}\in T$, which contradicts our assumption that $T$ is skew ($T$ is not even strong in that case, since $-r+r\equiv 0\pmod{m}$). Finally, let $\{u,v\}\in W_{ST}$, $u=rn+x, v=tn+y$. If $u+v\equiv 0\pmod{mn}$, then by Lemma \ref{modul}(1 \& 3), $x+y\equiv 0\pmod{n}$, which is impossible since $S$ is strong. This completes the proof that $W_{ST}$ is a strong 2-partition of $\mathbb{Z}_{mn}^*$. \smallskip (b) Necessity. If $S$ is not strong, then, regardless of the properties of $T$, there are two possible cases: \begin{enumerate} \item $S$ contains a pair $\{x,y\}$ such that $x+y\equiv 0\pmod{n}$. Then consider all the pairs of type (i) of the form $\{rn+x,tn+y\}$. There are exactly $m$ such pairs. These $m$ pairs along with the $(m-1)/2$ pairs of type (ii) yield $(3m-1)/2$ sums in $\mathbb{Z}_{mn}$, which are congruent to 0 modulo $n$. But these sums cannot all be different and non-zero modulo $mn$, by Lemma \ref{modul}(4). Thus $W_{ST}$ is not strong. \item $S$ contains two pairs $\{x_1,y_1\}$ and $\{x_2,y_2\}$ such that $x_1+y_1\equiv x_2+y_2\equiv c\pmod{n}$. Then consider all the pairs of type (i) of the form $\{rn+x_i,tn+y_i\},\ i=1,2$. There are $2m$ of them, and all of them yield sums in $\mathbb{Z}_{mn}$ which are congruent to $c$ modulo $n$. By Lemma \ref{modul} (4), the sums cannot all be different modulo $mn$. Thus $W_{ST}$ is not strong. \end{enumerate} If $T$ is not skew, some of the pairs, say, $(r_1,t_1)\in\bar{T}$ and $(r_2,t_2)\in\bar{T'}$, yield the same sum $\pmod{m}$. Let us take a pair $(x,y)\in\tilde{S}$. Then the pairs $\{r_1n+x,t_1n+y\}$ and $\{r_2n+x,t_2n+y\}$ produce the same sum modulo $mn$. Hence $W_{ST}$ is not strong. \end{proof} \begin{remark} Not every 2-partition of a composite order is a product of two 2-partitions. For example, there are known \cite{b05} strong starters of order $3p$ for some prime $p>3$, but obtaining them by means of a product of two starters of orders 3 and $p$ respectively would contradict Theorem \ref{W is strong}. \end{remark} The following theorem clarifies whether or not $W_{ST}$ is skew. \begin{theorem}\label{skew theorem} Let $n,m\ge 3$ be odd integers and $S$ and $T$ be 2-partitions of $\mathbb{Z}_n^*$ and $\mathbb{Z}_m^*$ respectively.
Then their product $W_{ST}$ (Definition \ref{WST}) is skew if and only if both $S$ and $T$ are skew. \end{theorem} \begin{proof} (a) Sufficiency. Let $S$ and $T$ be skew and let $\{u_1,v_1\}$ and $\{u_2,v_2\}$ be two arbitrary distinct pairs of $W_{ST}$. By Theorem \ref{W is strong}, we know that $W_{ST}$ is strong (a skew 2-partition is, in particular, strong), that is, $u_1+v_1 \not\equiv u_2+v_2 \pmod{mn}$. To show that $W_{ST}$ is skew, it remains to show that there holds \begin{equation}\label{skewness condition} u_1+v_1 \not\equiv - (u_2+v_2) \pmod{mn}. \end{equation} Suppose, for the sake of contradiction, that (\ref{skewness condition}) is not true, that is, in notation (\ref{u1v1}), $$ r_1n+x_1+t_1n+y_1\equiv - (r_2n+x_2+t_2n+y_2) \pmod{mn}. $$ By Lemma \ref{modul}(1 \& 3), we obtain $x_1+y_1 \equiv -(x_2+y_2) \pmod{n}$, which is impossible as $S$ is skew, unless both pairs are of type (ii). But then $(r_1n+t_1n)\equiv -(r_2n+t_2n)\pmod{mn}$, and hence, by Lemma \ref{modul}(2), $(r_1+t_1)\equiv -(r_2+t_2)\pmod{m}$, which is impossible, since $T$ is skew. This contradiction implies that for any two distinct pairs in $W_{ST}$, $\{u_1,v_1\}$ and $\{u_2,v_2\}$, there holds (\ref{skewness condition}). Therefore, $W_{ST}$ is skew. \smallskip (b) Necessity. By Theorem \ref{W is strong}, if $T$ is not skew, then $W_{ST}$ is not skew; likewise, if $S$ is not strong, then $W_{ST}$ is not strong, and hence not skew. Now suppose that $S$ is strong but not skew; then $\bar{S}$ contains at least two pairs $(x_1,y_1)$ and $(x_2,y_2)$ such that $(x_1+y_1)\equiv -(x_2+y_2)\equiv c\pmod{n}$. It is clear that $c\not\equiv 0\pmod{n}$, as $S$ is strong. There are $m$ pairs $\{rn+x_1,tn+y_1\}$ of type (i) in the starter $W_{ST}$. There are also $m$ pairs $\{rn+x_2,tn+y_2\}$ of type (i) in the conjugate 2-partition $W'_{ST}$. These pairs yield $2m$ sums in $\mathbb{Z}_{mn}$, which are congruent to $c$ modulo $n$. But they cannot all be different modulo $mn$, by Lemma \ref{modul} (4). We conclude that there are two pairs among these $2m$ pairs that satisfy $\{\pm(u_1+v_1)\}=\{\pm(u_2+v_2)\}\pmod{mn}$. Hence, $W_{ST}$ is not skew. \end{proof} Finally, we deal with Skolem 2-partitions. \begin{theorem}\label{Skolemness} Let $n,m\ge 3$ be odd integers and $S$ and $T$ be 2-partitions of $\mathbb{Z}_n^*$ and $\mathbb{Z}_m^*$ respectively. Then their product $W_{ST}$ (Definition \ref{WST}) is a Skolem 2-partition of $\mathbb{Z}_{mn}^*$ if and only if $S$ and $T$ are both Skolem 2-partitions of $\mathbb{Z}_{n}^*$ and $\mathbb{Z}_{m}^*$ respectively. \end{theorem} \begin{proof} Let us order $\mathbb{Z}_{k}: 0<1<...<k-1,\ k\in\mathbb{N}$. (a) Sufficiency. Let $S$ and $T$ be Skolem 2-partitions of orders $n$ and $m$ respectively. To show that $W_{ST}$ is Skolem, we have to show that all its pairs $\{u,v\}$ are Skolem pairs of order $mn$, that is, $u<v$ and $v-u\le \frac{mn-1}2$. Let the pair $\{x^*,y^*\}\in S$ make the greatest difference in $S$, that is, $x^*<y^*,\ y^*-x^*\le\frac{n-1}2$. Likewise, consider the pairs $\{r^*,t^*\}\in T$ and $\{-t^*,-r^*\}\in T'$ that make the greatest difference $t^*-r^*=(-r^*)-(-t^*)\le \frac{m-1}2$. Each of the pairs $\{u,v\}$ of the form either $(x_i,y_i)$ or $(r_jn,t_jn)$ is Skolem because for any $n\ge 1$ and $m\ge 1$ we have: $$ v-u\le y^*-x^*\le\frac{n-1}{2}<\frac{mn-1}{2}, \qquad v-u\le (t^*-r^*)n\le\left(\frac{m-1}{2}\right)n=\frac{mn-n}{2}<\frac{mn-1}{2}. $$ For other pairs of type (i) we consider two cases: \begin{enumerate} \item $(x^*,y^*)\in\bar{S}$.
Then a pair $\{u,v\}\in W_{ST}$, where $u=r^*n+x^*<\ v=t^*n+y^*$, makes the greatest possible difference among the pairs of $W_{ST}$, $$v-u=t^*n+y^*-(r^*n+x^*)\le\frac{m-1}{2}n+\frac{n-1}{2}=\frac{mn-1}{2}.$$ \item $(y^*,x^*)\in\bar{S}$. Then a pair $\{u,v\}\in W_{ST}$, where $u=-r^*n+y^*>\ v=-t^*n+x^*$, makes the greatest possible difference among the pairs of $W_{ST}$, $$u-v=-r^*n+y^*-(-t^*n+x^*)=(t^*-r^*)n+(y^*-x^*)\le\frac{m-1}{2}n+\frac{n-1}{2}=\frac{mn-1}{2}.$$ \end{enumerate} All other pairs of type (i) are clearly Skolem, as they make no difference greater than $\frac{mn-1}{2}$. (b) Necessity. If $T$ is not Skolem, then it contains a pair $\{r,t\}$ which is not Skolem, that is, given $r<t$, we have $t-r\ge\frac{m+1}{2}$. Then the corresponding pair of type (ii), $\{rn,tn\}\in W_{ST}$, yields a difference greater than $\frac{mn-1}{2}$: $$tn-rn\ge\frac{m+1}{2}n=\frac{mn+n}{2}>\frac{mn-1}{2}.$$ If $S$ is not Skolem, there is $\{x,y\}\in S$ which is not Skolem, that is, given $x<y,\ y-x\ge\frac{n+1}{2}$. Now, let us take a pair $\{r,t\}\in T$ such that $r<t$ and $t-r\ge\frac{m-1}{2}$. WLOG, assume $(x,y)\in\bar{S}$ and $(r,t)\in \bar{T}$. Then the pair $\{u,v\}\in W_{ST},\ u=rn+x<v=tn+y$ makes the difference $$v-u \ge\frac{m-1}{2}n+\frac{n+1}{2}=\frac{mn-n+n+1}{2}=\frac{mn+1}{2}.$$ That means $\{u,v\}\in W_{ST}$ is not a Skolem pair (\ref{skolem}) of order $mn$ and hence, by Lemma \ref{defn of Skolem starter2}, $W_{ST}$ is not Skolem. \end{proof} Let us summarize the results of this subsection. \begin{theorem}\label{intermediate result} Let $S$ and $T$ be 2-partitions of $\mathbb{Z}_n^*$ and $\mathbb{Z}_m^*$ respectively. 1. If both $S$ and $T$ are Skolem starters and, in addition, $S$ is strong and $T$ is skew, then the product $ W_{ST}$ (Definition \ref{WST}) is a strong Skolem starter in $\mathbb{Z}_{nm}$. Moreover, if $S$ and $T$ are both skew and Skolem in their groups, then $ W_{ST}$ is a skew Skolem starter in $\mathbb{Z}_{nm}$. 2. If the product $W_{ST}$ of partitions $S$ and $T$ is a strong but not skew Skolem starter, then $S$ is a strong but not skew Skolem starter and $T$ is a skew Skolem starter. If the product $W_{ST}$ of partitions $S$ and $T$ is a skew Skolem starter, then both $S$ and $T$ are skew Skolem starters. \end{theorem} \begin{proof}The statement follows from Theorems \ref{W is a starter}, \ref{W is strong}, \ref{Skolemness}, \ref{skew theorem}. \end{proof} \begin{remark}The direct part of Theorem \ref{intermediate result} can be obtained by combining the ideas and constructions of Gross \cite{b22}, Turgeon \cite{b28} and Chen et al.\ \cite{b27}. The converse statement requires a more general consideration of the product of two partitions. \end{remark} \begin{ex}\label{order 187} Let us choose the strong (but not skew) Skolem starter $S$ of order 17 from Example \ref{example of independence of properties}. Using Definition \ref{WST}, it is possible to generate strong Skolem starters of orders $17m$, where $m$ is one of the orders of the known skew Skolem starters. By Theorem \ref{Ogandzhanyants theorem1}, there are infinitely many such starters. The paper \cite{b21} gives an explicit way of constructing a family of cardioidal starters (\ref{card}). It was proven in \cite{b21} that every cardioidal starter is skew unless its order is divisible by 3. Take, for example, the following cardioidal starter of order $11$: $T=\{\{1,2\},\{7,9\},\{3,6\},\{4,8\},\{5,10\}\}$. Since $T$ is a skew Skolem starter, $W_{ST}$ is a strong Skolem starter in $\mathbb{Z}_{17\cdot 11}=\mathbb{Z}_{187}$.
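For the reader who wishes to verify such claims mechanically, here is a minimal sketch (plain Python; the helper logic and names are ours, not part of the constructions above) that confirms $T$ is a skew Skolem starter in $\mathbb{Z}_{11}$:
\begin{verbatim}
T = [(1, 2), (7, 9), (3, 6), (4, 8), (5, 10)]
N = 11

elements = sorted(x for pair in T for x in pair)
assert elements == list(range(1, N))                 # 2-partition of Z_N^*

diffs = sorted(max(pair) - min(pair) for pair in T)
assert diffs == list(range(1, (N - 1) // 2 + 1))     # Skolem starter

sums = [sum(pair) % N for pair in T]
pm = sorted(s % N for x in sums for s in (x, -x))
assert pm == list(range(1, N))                       # skew

print("T is a skew Skolem starter in Z_%d" % N)
\end{verbatim}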
\end{ex} Definition \ref{WST} can be modified in a variety of ways to achieve diversity of the obtained 2-partitions. For example, we could make an alternative construction of the pairs (\ref{uv}) of $W_{ST}$: (i$^*$) $2pq+q$ pairs: one for each $(x,y)\in \bar{S}\cup \bar{S'}$ and each $(r,t)\in \tilde T$, together with $q$ pairs for $r=t=0$ and $(x,y)\in \bar{S}$; (ii$^*$) $p$ pairs: one for each $(r,t)\in \tilde T$ and $x=y=0$.\\ The proofs of the corresponding statements for this way of constructing $W_{ST}$ are analogous. This modification, while significantly expanding the variety of the obtained starters, does not let us produce a strong starter as a product of two strong starters. For example, we cannot obtain a strong Skolem starter of order $17^2=289$ out of the starter $S$ of order 17 used in Example \ref{order 187}. To achieve this objective, we have to further modify Definition \ref{WST}. \subsection{The product of 2-partitions with a nucleus}\label{The parametrized product of starters} In this section, we construct a family of products by introducing the following object. \begin{df}\label{X} The set of ordered pairs $X_m=\{(u_i,v_i)\}_{i=1}^{m-1}\subset \mathbb{Z}_m^*\times\mathbb{Z}_m^*$ is called a nucleus of order $m$ if $\{u_i\}_{i=1}^{m-1}=\{v_i\}_{i=1}^{m-1}=\mathbb{Z}_m^*$. \end{df} Then we define a product with nucleus $X_m$. \begin{df}\label{WXST} Let $S$ and $T$ be 2-partitions of $\mathbb{Z}_n^*$ and $\mathbb{Z}_m^*$ respectively, $ n=2q+1,\ m=2p+1, \, q,p\ge 1$. Let $X_m$ be a nucleus of order $m$. The set $W^X_{ST}$ consisting of pairs of the form (\ref{uv}) divided into the following types: (i$_X$) $mq$ pairs: one for each $(x,y)\in \tilde S$ and $(r,t)\in \{(0,0)\} \cup X_m$; (ii$_X$) $p$ pairs: one for each $(r,t)\in \tilde T$ and $x=y=0$, is called the $X$-generated product of $S$ and $T$. \end{df} Below we show that different choices of the nucleus $X$ lead to different properties of $W_{ST}^X$. It turns out that the use of a proper nucleus allows us to loosen the hypotheses of Theorems \ref{W is strong} and \ref{intermediate result}. Consequently, we can extend the family of strong Skolem starters and extend the list of orders $n$ such that $\mathbb{Z}_n$ admits a strong Skolem starter. \begin{theorem}\label{WX is a partition}The set $W^X_{ST}$ (Definitions \ref{X} and \ref{WXST}) is a 2-partition of $\mathbb{Z}_{mn}^*$. \end{theorem} \begin{proof} The proof is analogous to that of Theorem \ref{W is a partition}, as the set of ordered pairs $X$ has all the properties of $\bar T\cup \bar T'$ used to prove that theorem. (A similar proof of this statement was offered in \cite{b25}.) \end{proof} \begin{df}\label{Xproperties} A nucleus $X_m$ of order $m$ is called \begin{enumerate} \item subtractive in $\mathbb{Z}_m$, if $\{(u_i-v_i)\pmod{m}\}_{i=1}^{m-1}=\mathbb{Z}_m^*$; \item skew in $\mathbb{Z}_m$, if $\{(u_i+v_i)\pmod{m}\}_{i=1}^{m-1}=\mathbb{Z}_{m}^*$; \item Skolem in $\mathbb{Z}_m$, if it consists of Skolem pairs (\ref{skolem}) of order $m$. \end{enumerate} \end{df} \begin{theorem}\label{WX is a starter} The set $W^X_{ST}$ (Definitions \ref{X} and \ref{WXST}) is a starter in $\mathbb{Z}_{mn}$ if and only if $X_m$ is subtractive in $\mathbb{Z}_m$, and $S$ and $T$ are starters in $\mathbb{Z}_n$ and $\mathbb{Z}_m$ respectively. \end{theorem} \begin{proof} (a) Sufficiency. The proof of sufficiency is analogous to that of Theorem \ref{W is a starter}, as the set of ordered pairs $X$ has all the properties of $\bar T\cup \bar T'$ used to prove part (a) of that theorem.
(b) Necessity. Suppose that $X_m$ is not subtractive. Then two of the $m$ ordered pairs of $\{(0,0)\}\cup X_m$, say $(u',v')$ and $(u'',v'')$, satisfy $(u'-v')\equiv (u''-v'')\pmod m$: indeed, if all $m$ of these differences were distinct modulo $m$, the $m-1$ differences of $X_m$ would cover $\mathbb{Z}_m^*$, making $X_m$ subtractive. Consequently, for a pair $(x,y)\in \tilde S$, both $\{u'n+x,v'n+y\}$ and $\{u''n+x,v''n+y\}$ are in $W^X_{ST}$, and these two pairs yield the same differences modulo $mn$. Hence, $W^X_{ST}$ is not a starter. The rest of the proof is analogous to that of Theorem \ref{W is a starter}. \end{proof} Thus, Theorems \ref{W is a partition} and \ref{W is a starter} are particular cases of Theorems \ref{WX is a partition} and \ref{WX is a starter} respectively, where $X$ coincides with $\bar T\cup \bar T'$. The direct part of the statement of Theorem \ref{W is a starter} was proven in \cite{b28}, as a subtractive and Skolem nucleus can be viewed as a special case of a \textit{perfect difference matrix}. \begin{theorem}\label{WXST is strong and skew} The set $W^X_{ST}$ (Definitions \ref{X} and \ref{WXST}) is a strong (skew) 2-partition of $\mathbb{Z}_{mn}^*$ if and only if $X_m$ is skew in $\mathbb{Z}_m$, $S$ is a strong (skew) 2-partition of $\mathbb{Z}_n^*$ and $T$ is a strong (skew) 2-partition of $\mathbb{Z}_m^*$. \end{theorem} \begin{proof} Here, in order to obtain a strong 2-partition $W^X_{ST}$, we do not require the second 2-partition $T$ to be skew (unlike in Theorem \ref{W is strong}), because now the pairs of type ($i_X$) are formed from the skew nucleus $X_m$ and the strong first 2-partition $S$. The rest of the proof is analogous to those of Theorems \ref{W is strong} and \ref{skew theorem}. \end{proof} \begin{theorem}\label{XSkolemness} The set $W^X_{ST}$ (Definitions \ref{X} and \ref{WXST}) is a Skolem 2-partition of $\mathbb{Z}_{mn}^*$ if and only if $X_m$ is Skolem in $\mathbb{Z}_m$ and $S$ and $T$ are Skolem 2-partitions of $\mathbb{Z}_n^*$ and $\mathbb{Z}_m^*$ respectively. \end{theorem} \begin{proof} The proof is analogous to that of Theorem \ref{Skolemness}. \end{proof} Note that a nucleus $X_m$ that is both skew and subtractive does not exist for some odd orders $m\ge 3$, namely, for those divisible by 3. This can be shown using the notion of a \textit{strong permutation} of a set of elements in a group. \begin{remark} Recall that a permutation $\pi$ is called strong if the maps $i\mapsto (\pi(i)-i)$ and $i\mapsto (\pi(i)+ i)$ are permutations, too. In 1973, Wallis and Mullin proved \cite{b25} that if $G$ is a group of odd order $n,\ 3\mid n$, and the 3-Sylow subgroup of $G$ is cyclic, then $G$ does not admit a strong permutation. We adjust this statement to our context. \end{remark} \begin{lemma}\label{strong permutation} A skew and subtractive nucleus $X_m$ of order $m\ge 3$ exists if and only if $3\nmid m$. \end{lemma} \begin{proof} (a) According to Definitions \ref{X} and \ref{Xproperties}, the existence of a skew and subtractive $X_m$ is equivalent to the existence of a strong permutation $i\mapsto\pi(i)$ of the set $\{0,1,...,m-1\}$ with $\pi(0)=0$: \begin{equation} \pi:\ v_i\mapsto u_i,\ 1\le i\le m-1. \end{equation} For the sake of contradiction, assume that $3\mid m$. Then we can write $m=3^tk,\ t\ge 1, \ 3\nmid k$.
Assuming that $\pi$ is a strong permutation of the elements of $\mathbb{Z}_m$, consider the sums: \begin{equation} \begin{split} \sum_{i\in\mathbb{Z}_m^*}i^2\equiv\sum_{i\in\mathbb{Z}_m^*}\pi(i)^2 & \equiv\sum_{i\in\mathbb{Z}_m^*}\pi(i)^2+\sum_{i\in\mathbb{Z}_m^*}i^2+2\sum_{i\in\mathbb{Z}_m^*}i\pi(i) \\& \equiv\sum_{i\in\mathbb{Z}_m^*}\pi(i)^2+\sum_{i\in\mathbb{Z}_m^*}i^2-2\sum_{i\in\mathbb{Z}_m^*}i\pi(i) \pmod m. \end{split} \end{equation} (Each expression is congruent to $\sum_{i\in\mathbb{Z}_m^*}i^2$, since $\pi$, $i\mapsto\pi(i)+i$, and $i\mapsto\pi(i)-i$ are all permutations fixing $0$.) Subtracting the second line from the first, we get $4\sum_{i\in\mathbb{Z}_m^*}i\pi(i)\equiv 0 \pmod m$, and since $m$ is odd, $\sum_{i\in\mathbb{Z}_m^*}i\pi(i)\equiv 0 \pmod m$. Hence, comparing the first two expressions, $\sum_{i\in\mathbb{Z}_m^*}i^2\equiv 2\sum_{i\in\mathbb{Z}_m^*}i^2$, so $\sum_{i\in\mathbb{Z}_m^*}i^2\equiv 0 \pmod m$. But since $k,\ (2\cdot 3^tk-1),\ \frac{3^tk-1}{2}$ are all coprime to $3$, we have $$\sum_{i=1}^{3^tk-1}i^2=\frac{3^tk(3^tk-1)(2\cdot 3^tk-1)}{6}=\frac{3^{t-1}k(2\cdot 3^tk-1)(3^tk-1)}{2}\not\equiv 0\pmod{m}.$$ This is a contradiction. Thus, $3\nmid m$. (b) Let $3\nmid m$. Then the permutation $\pi:\ i\mapsto 2i$ of the elements of $\mathbb{Z}_m$ is clearly strong. (The use of this permutation for constructing strong starters was offered by Gross in \cite{b22}.) The permutation yields the skew and subtractive nucleus, which we denote by $C_m$: \begin{equation}\label{Cm} X_m=C_m=\{(i, 2i\pmod{m}),\ 1\le i\le m-1\}. \end{equation} \end{proof} \begin{remark} A nucleus $X_m$ of order $m$ divisible by 3 may still be skew; it is only the combination of skewness and subtractivity that is ruled out. For example, we can take $X_9=\bar{Q}\cup\bar{Q'}$ of order 9, where $Q$ is the 2-partition of $\mathbb{Z}_9^*$ from Example \ref{example of independence of properties}. In this case, $X_9$ is skew and Skolem, but not subtractive. \end{remark} The next subsection discusses the product with the nucleus $C_m$ found in the proof of Lemma \ref{strong permutation}. \subsection{The cardioidal product}\label{Cardioidal product of two 2-partitions} \begin{df}\label{dfn of cardioidal product} The set $C_m$ (\ref{Cm}) is called a cardioidal nucleus of order $m$. For 2-partitions $S$ of $\mathbb{Z}^*_{n}$ and $T$ of $\mathbb{Z}^*_{m}$, $n=2q+1,\ m=2p+1$, $q,p\ge 1$, the product introduced by Definition \ref{WXST} with nucleus $X_m=C_m$ is called a cardioidal product of $S$ and $T$. It is denoted by $W^c_{ST}$. \end{df} \begin{lemma} \label{w=wc} If $T$ is a cardioidal starter then $W_{ST}=W^c_{ST}$. \end{lemma} \begin{proof} If $T$ is a cardioidal starter (refer to Definition \ref{strong, skew starters}) then $\bar T\cup\bar T'= \{(i,2i\pmod{m}), 1\le i\le m-1\}$. The rest follows from Definitions \ref{WST} and \ref{dfn of cardioidal product}. \end{proof} Note that if $T$ is a cardioidal 2-partition of $\mathbb{Z}_m^*$ but not a starter, Lemma \ref{w=wc} does not apply. The following theorem clarifies when the cardioidal nucleus obeys the conditions of each theorem in Subsection \ref{The parametrized product of starters}. \begin{theorem}\label{The cardioidal parameter} The cardioidal nucleus $C_m$ is subtractive and Skolem for all odd $m\ge 3$. The cardioidal nucleus $C_m$ is skew if and only if $3\nmid m$. \end{theorem} \begin{proof} The statement follows from Lemma 3.2 of \cite{b21} and Lemma \ref{strong permutation}. \end{proof} Then, we have: \begin{theorem}\label{main1} If $3\nmid m$ and $S$ in $\mathbb{Z}_n$ and $T$ in $\mathbb{Z}_m$ are strong (skew) Skolem starters, then so is $W^c_{ST}$. \end{theorem} \begin{proof} The statement follows from Theorems \ref{WXST is strong and skew}, \ref{XSkolemness} and \ref{The cardioidal parameter}. \end{proof} Remarkably, a pair of cardioidal starters does not necessarily produce a cardioidal starter.
The following example demonstrates this idea. \begin{ex}\label{Example 121} Consider the starter $R=\{\{1,2\},\{7,9\},\{3,6\},\{4,8\},\{5,10\}\}$ in $\mathbb{Z}_{11}$. It is cardioidal as $2\equiv 1\cdot 2\pmod{11}$, $7\equiv 9\cdot 2\pmod{11}$, $6\equiv 3\cdot 2\pmod{11}$, $8\equiv 4\cdot 2\pmod{11}$, $10\equiv 5\cdot 2\pmod{11}$. But the product $W_{RR}$ of $R$ with itself contains the pair $\{7,9\}$, which is not cardioidal $\pmod{121}$: $9\cdot 2\equiv 18\not\equiv 7 \pmod{121}$ and $7\cdot 2\equiv 14\not\equiv 9 \pmod{121}$. Note that in this case $W_{RR}=W^c_{RR}$ by Lemma \ref{w=wc}. \end{ex} Let us indicate all possible cases when a product $W_{ST}$ of two cardioidal starters, $S$ and $T$, is cardioidal. \begin{lemma}\label{the cases} A product $W_{ST}$ of two cardioidal starters, $S$ in $\mathbb{Z}_n$ and $T$ in $\mathbb{Z}_m$, is cardioidal if and only if $n=3$ and either $m=3$ or $\tilde S=(1,2)$. \end{lemma} \begin{proof} \begin{enumerate} \item $n=m=3$. The case $\tilde S=(1,2)$ is given in Example \ref{ex1}. Choosing $\tilde S=(2,1)$ does not change $W_{ST}$. \item $n=3,\ m>3$ and $\tilde S=(1,2)$. By Definition \ref{WST}, the pairs of type (ii) are cardioidal as $T$ is cardioidal. Consider a pair $\{u,v\}\in W_{ST}$ of type (i). We have $\{u,v\}=\{in+1,jn+2\}$, where $j\equiv 2i\pmod m$. Then $2(in+1)\equiv jn+2\pmod{mn}$. That means that $\{u,v\}$ is cardioidal. Hence, $W_{ST}$ is a cardioidal starter of order $mn$. \item $n=3,\ m>3$ and $\tilde S=(2,1)$. Then $\{5,7\}\in W_{ST}$. But this pair is not cardioidal of order $mn>9$. Hence, $W_{ST}$ is not cardioidal. \item $n> 3$. Then either $\{-2,-1\}\pmod{n}$ or $\{-4,-2\}\pmod{n}$ is a pair in $S$, as $S$ is cardioidal of order $n$. Regardless of what kind of starter $T$ is, either $\{n-2,n-1\}$ or $\{n-4,n-2\}$ appears in $W_{ST}$. But neither of these pairs is cardioidal of order $mn>9$. Hence, $W_{ST}$ is not cardioidal. \end{enumerate} \end{proof} Note that all strong Skolem starters referred to in Theorem \ref{Ogandzhanyants theorem1} and constructed in \cite{b21} are cardioidal. The paper \cite{b21} establishes that each strong cardioidal starter is skew, and hence, by Theorem \ref{skew theorem}, the product of strong cardioidal starters is a skew starter. For instance, $W_{RR}$ from Example \ref{Example 121} is a skew starter in $\mathbb{Z}_{121}$ (as well as $W_{RR'}$ and $W_{R'R'}$). Theorem \ref{Ogandzhanyants theorem1} says that $\mathbb{Z}_n$ admits a (cardioidal) skew Skolem starter for all $n$ from $\overline{C_2\setminus\{3\}}$, i.e.\ from the multiplicative closure of the set $C_2\setminus\{3\}$, where $C_2=\{p\ \mbox{prime}| \mathrm{ord}_p(2)\equiv 2\pmod{4}\}$. In \cite{b21} it was proved that skew cardioidal starters do not exist for orders other than those indicated in Theorem \ref{Ogandzhanyants theorem1}; it was also shown there that $C_2$ is infinite. Therefore, Theorem \ref{intermediate result} implies an explicit construction of an infinite family of skew Skolem starters of all composite orders from $\overline{C_2\setminus\{3\}}$. This family is entirely new, because it consists of starters that are not cardioidal. This family, in its turn, gives rise to a further explicit construction of infinitely many strong (and even skew) Skolem starters.
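These claims, too, lend themselves to a mechanical check. The following sketch (ours; it relies on Lemma \ref{w=wc} to identify $W_{RR}$ with $W^c_{RR}$, orienting the pairs of $R$ cardioidally as $(x,2x)$) assembles the product $W_{RR}$ of the order-11 starter $R$ of Example \ref{Example 121} with itself and, if the sketch is faithful to the construction, each assertion should pass:
\begin{verbatim}
# Sketch (helper names ours): W_RR should be a skew,
# non-cardioidal starter in Z_121.
n = m = 11
N = n * m
R = [(1, 2), (9, 7), (3, 6), (4, 8), (5, 10)]   # oriented so y = 2x mod 11
Cm = [(i, 2 * i % m) for i in range(1, m)]      # cardioidal nucleus (Cm)

W = list(R)                                                  # r = t = 0
W += [(r * n + x, t * n + y) for (r, t) in Cm for (x, y) in R]  # type (i_X)
W += [(r * n, t * n) for (r, t) in R]                        # type (ii_X)

assert len(W) == (N - 1) // 2
assert sorted(e for pair in W for e in pair) == list(range(1, N))

diffs = sorted(d % N for (u, v) in W for d in (u - v, v - u))
assert diffs == list(range(1, N))                            # starter

sums = sorted(s % N for (u, v) in W for s in (u + v, -(u + v)))
assert sums == list(range(1, N))                             # skew

assert not all(v % N == 2 * u % N or u % N == 2 * v % N for (u, v) in W)
print("W_RR is a skew, non-cardioidal starter in Z_121")
\end{verbatim}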
Meanwhile, Theorems \ref{intermediate result} and \ref{main} do not limit us to these composite orders. If we find, by some means, a strong Skolem starter $S$ of an order $n\not\in \overline{C_2\setminus\{3\}}$, Theorems \ref{W is strong} and \ref{WXST is strong and skew} pave an explicit way to construct a strong Skolem starter of any order $nm$, where $m\in \overline{C_2\setminus\{3\}}$. The following example illustrates this idea. \begin{ex} Consider the strong Skolem starter $S=\{\{25,26\},\{20,22\},\{21,24\},\{8,12\},\\\{18,23\},\{10,16\},\{7,14\},\{1,9\},\{2,11\},\{3,13\},\{4,15\},\{5,17\},\{6,19\}\}$ in $\mathbb{Z}_{27}$, found by Shalaby \cite{b09}, and the skew Skolem starter $T=\{\{1,2\}, \{15,17\},\{13,16\},\{4,8\},\{5,10\},\{6,12\},\{7,14\},\{3,11\},\{9,18\}\}$ in $\mathbb{Z}_{19}$. Then both $W_{ST}$ and $W_{ST}^c$ are strong Skolem starters in $\mathbb{Z}_{27\cdot19}=\mathbb{Z}_{513}$. In general, we can construct a strong Skolem starter in any $\mathbb{Z}_{27m},\ m\in \overline{C_2\setminus\{3\}}$. Hence, we obtain infinitely many strong Skolem starters of orders divisible by 3. \end{ex} Moreover, by Theorem \ref{main1}, given a strong Skolem starter $S$ in $\mathbb{Z}_n$, where $3\nmid n$, we can construct a strong Skolem starter of any order $n^tm,\ t\ge 1, m\in \overline{C_2\setminus\{3\}}$. If, in addition, $S$ is skew, then we can construct a skew Skolem starter of any order $n^tm,\ t\ge 1, m\in \overline{C_2\setminus\{3\}}$. Finally, we note the possibility of a product with a non-cardioidal nucleus, as shown in the following example. \begin{ex} Let $R=W^c_{PQ}$, where $P$ and $Q$ are cardioidal starters of orders $k$ and $l$ respectively, $\{k,l\}\subset \overline{C_2\setminus\{3\}}$. Then by Theorem \ref{main1} and Lemma \ref{the cases}, $R$ is a skew Skolem starter, but not cardioidal. Consider $X=\bar R\cup \bar R'$. Clearly, $X$ is a skew Skolem nucleus of order $n=kl$, and $X$ is not cardioidal. Let also $S$ and $T$ be strong Skolem starters of orders $m$ and $n$ respectively. Then, by Theorems \ref{WXST is strong and skew} and \ref{XSkolemness}, $W^X_{ST}$ is a strong Skolem starter of order $mn$. However, $W^X_{ST}$ is not cardioidal. This can be shown by reasoning similar to that given in the proof of Lemma \ref{the cases}, Case 4. \end{ex} \section{Conclusion}\label{Conclusion} In this paper, we introduced products of two 2-partitions of $\mathbb{Z}^*_{n}$ and $\mathbb{Z}^*_{m}$ that give a 2-partition of $\mathbb{Z}^*_{nm}$. These binary multi-valued operations are interesting in themselves and deserve further investigation. The products reveal a remarkable phenomenon: the resulting partition inherits some properties of the initial ones, such as being a starter, or being strong, skew, or Skolem. Moreover, in many cases, if the resulting partition has these properties then so do the initial ones. Our results, partly relying on findings of Gross \cite{b22}, extend his results in the special case of cyclic groups $\mathbb{Z}_n$. \smallskip Not every property is passed from 2-partitions to their products. The product $W_{ST}$ of two strong starters, by Theorem \ref{W is strong}, does not yield a strong starter unless one of the factors is skew. However, the product $W_{ST}^X$ of two strong starters is a strong starter if the nucleus $X_m$ is subtractive and skew, for example, if it is the cardioidal nucleus $C_m$ (\ref{Cm}) with $3\nmid m$. Pursuing our particular interest in constructing (strong, skew) Skolem starters, we also discuss the reverse side of these useful recursive tools for generating new Skolem starters, and hence new Skolem sequences, out of known ones.
(The latter, as briefly explained in Section \ref{introduction}, are undoubtedly valuable combinatorial objects.) \begin{remark}\label{enumeration} For strong (skew) Skolem starters $S$ and $T$, we consider three specific choices of the nucleus $X$ when forming the pairs of a strong (skew) Skolem $W^X_{ST}$: \begin{enumerate} \item $X=\bar T\cup\bar T'$; \item $X$ is cardioidal of the same order as $T$; \item $X=\bar R\cup\bar R'$, where $R$ is a (skew but not cardioidal) Skolem starter of the same order as $T$. \end{enumerate} \end{remark} Based on Lemma \ref{the cases}, we can state that the main objective of the paper has been achieved: we constructed a new family of strong Skolem starters that are not cardioidal. \begin{theorem}\label{main}There are infinitely many strong and there are infinitely many skew Skolem starters that are not cardioidal. \end{theorem} \begin{proof} The infinitude of the class of strong (skew) cardioidal starters is proven in \cite{b21}. Each of the choices of the nucleus outlined in Remark \ref{enumeration}, applied in the construction of $W^X_{ST}$ to two strong (skew) cardioidal starters, leads to a new strong (skew) Skolem starter which is not cardioidal. \end{proof} Our new results give further support to Shalaby's conjecture stated in 1991. \smallskip Observe that the approach to constructing new strong, skew, and Skolem starters outlined in this paper has its limitations, because not every starter in a group $\mathbb{Z}_{mn}$ is a product of starters in $\mathbb{Z}_n$ and $\mathbb{Z}_m$ respectively. For example, due to the non-existence of Skolem starters of orders 5 and 7 and the non-existence of a strong starter of order 5, the strong Skolem starter of order 35 found in \cite{b09} cannot be constructed as a product $W^X_{ST}$ of two starters with a nucleus, as this would contradict Theorems \ref{XSkolemness} and \ref{WXST is strong and skew}. Since the theorems of this paper establish equivalence conditions for the existence of starters of certain types and orders, they help to cut off some unsuccessful directions in the search for such starters. Finding a successful way in these cases is still an open problem and a subject of further investigation.
{ "timestamp": "2021-05-06T02:11:23", "yymm": "2105", "arxiv_id": "2105.01895", "language": "en", "url": "https://arxiv.org/abs/2105.01895" }
\subsection{Our results} Our results show that, quite surprisingly, the promise of equality up to permutation of the domain fundamentally changes the sample complexity landscape, and is both qualitatively and quantitatively different from what one could expect from the known bounds on identity and tolerant identity testing without this promise. Our first set of results indeed establishes that, in contrast to the $\Theta(\sqrt{n})$ sample complexity of ``regular'' identity testing, identity testing under promise of permutation has sample complexity merely \emph{polylogarithmic} in the domain size: \begin{theorem}[{\cref{theo:ub:testing,theo:testing:lb}, (Informal)}] Identity testing under promise of permutation has sample complexity $\bigTheta{\log^2 n}$, where $n$ is the domain size. \end{theorem} Given the fact that (regular) tolerant identity testing has sample complexity nearly quadratically higher than (regular) identity testing, one could conjecture that the sample complexity of tolerant testing under our promise remains polylogarithmic. Our next set of results shows that this is far from being the case: instead, allowing for some noise tolerance makes the promise of equality up to permutation essentially useless, as the sample complexity blows up \emph{exponentially}, growing from polylogarithmic to nearly linear in the domain size: \begin{theorem}[{\cref{theo:toltesting:ub,theo:testing:lb}, (Informal)}] Tolerant identity testing under promise of permutation has sample complexity $\bigTheta{n^{1-o(1)}}$, where $n$ is the domain size. \end{theorem} We also show that relaxing the tolerance allowed from additive (as in the usual tolerant testing setting) to multiplicative in the distance parameter does not really help, as the sample complexity still remains polynomial: \begin{theorem}[{\cref{thm:tol-mult-main}, (Informal)}] Multiplicative-factor tolerant identity testing under promise of permutation, where one needs to distinguish between $\dst$-close and $C\dst$-far, has sample complexity $\bigOmega{\sqrt{n}}$ for any constant factor $C> 1$, where $n$ is the domain size. \end{theorem} We emphasize once more that those results, and in particular the lower bounds, do not follow from the known results on standard identity testing, as the promise of equality up to permutation, by strengthening the premise, drastically changes the problem. In particular, the case where the reference $\mathbf{q}$ is uniform, while known to be the hardest case for identity and tolerant identity testing, is actually a trivially easy case under our promise (as any distribution promised to be a permutation of the uniform distribution is, of course, the uniform distribution itself). \subsection{Previous work} Distribution testing has a long history in Statistics, which one can trace back to the work of Pearson~\cite{Pearson00}. More recently, from the computer science perspective, Goldreich, Goldwasser, and Ron initiated the field of property testing~\cite{GoldreichGR98}, from which distribution testing emerged through the seminal work of Batu, Fortnow, Rubinfeld, Smith, and White~\cite{BatuFRSW00}. We refer the reader to the survey~\cite{Canonne20} for a review of the area of distribution testing. Among the problems tackled in this field, \emph{identity testing} (also known as goodness-of-fit or one-sample testing), in which the goal is to decide whether an unknown probability distribution $\mathbf{p}$ is equal to a purported model $\mathbf{q}$, has received significant attention.
It is known that for identity testing with any reference distribution $\mathbf{q}$ over a domain of size $n$, $\Theta(\sqrt{n})$ samples are necessary and sufficient~\cite{Paninski08,ChanDVV14,AcharyaDK15,ValiantV17}; moreover, the exact asymptotic dependence on the distance parameter and on the error probability of the test~\cite{HuangM13,DiakonikolasGPP18} is now understood, as is, to a good extent, the dependence on the reference distribution $\mathbf{q}$ itself~\cite{ValiantV17,BlaisCG19}. Further, we also have tight bounds for the harder problem where one seeks to allow for some noise in the data (i.e., perform \emph{tolerant} identity testing, where the algorithm has to accept distributions sufficiently close to the reference $\mathbf{q}$): $\Theta(n/\log n)$ samples, a nearly linear dependence on the domain size, are known to be necessary and sufficient~\cite{valiantvaliant:10lb,valiantvaliant:10ub,ValiantV11,JiaoHW18}. However, how the identity testing problem changes under natural constraints on the input data, or under some variations of the formulation, remains largely unexplored. Among the works concerned with such problems,~\cite{bkr:04,ddsv:13} consider identity testing under monotonicity or $k$-modality constraints; and~\cite{DiakonikolasKN15a} focuses on a broad class of shape constraints on the density. Finally,~\cite{CanonneW20} focuses on a variant of identity testing, ``identity up to binning,'' where two distributions are considered equal if some binning of the domain can make them coincide. To the best of our knowledge, the question considered in the present work, albeit arguably quite natural, has not been previously considered in the Statistics or distribution testing literature.\medskip \noindent\textbf{Organization} We provide in~\cref{sec:testing-upper-bound} our algorithm for testing identity under promise of permutation, before complementing it in~\cref{sec:testing-lower-bound} by our matching lower bound.~\cref{sec:toleranttesting} is then concerned with the upper and lower bounds for the tolerant version of the problem, the bulk of which lies in proving the two lower bounds. \subsection{Upper bound} The claimed upper bound readily follows from the analogous upper bound on tolerant testing \emph{without} the promise of permutation, due to Valiant and Valiant~\cite[Theorem~4]{ValiantV11} (see also~\cite{JiaoHW18}). Indeed, any such estimator can be used for our problem, ignoring the additional promise of identity up to permutation. \begin{theorem} \label{theo:toltesting:ub} There exists an algorithm which, for any reference distribution $\mathbf{q}$ over $[n]$ and any $0\leq \dst,\delta\leq 1$ such that $\delta = \bigOmega{1/\sqrt{\log n}}$, and given $\bigO{\frac{n}{\delta^2\log n}}$ samples from an unknown distribution $\mathbf{p}\in \Pi_n(\mathbf{q})$, distinguishes with probability at least $2/3$ between (i)~$\totalvardist{\mathbf{p}}{\mathbf{q}} \leq \dst$ and (ii)~$\totalvardist{\mathbf{p}}{\mathbf{q}} > \dst+\delta$. \end{theorem} We note that the requirement $\delta = \bigOmega{1/\sqrt{\log n}}$ has been relaxed in~\cite{JiaoHW18}.
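(For intuition only, here is the overall shape of such a tester as a short sketch. This is our own naive plug-in simplification, \emph{not} the estimator of~\cite{ValiantV11}, and it would require many more samples than stated above.)
\begin{verbatim}
from collections import Counter

def naive_tolerant_test(samples, q, eps, delta):
    """Toy plug-in tester: estimate the TV distance to q empirically
    and threshold at eps + delta/2.  Samples are assumed to take
    values in {0, ..., len(q)-1}."""
    counts = Counter(samples)
    size = len(samples)
    tv_hat = 0.5 * sum(abs(counts.get(i, 0) / size - q[i])
                       for i in range(len(q)))
    return tv_hat <= eps + delta / 2   # True = "close", False = "far"
\end{verbatim}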
\subsection{Lower bound} In this section, we prove the theorem below, our lower bound on the sample complexity of tolerant testing under promise of permutation. Before doing so, we emphasize that the known $\bigOmega{\frac{n}{\delta^2\log n}}$ sample complexity lower bound for tolerant testing \emph{absent} this promise does not apply to our setting, as the promise of permutation makes the testing problem easier. In particular, the hard instances used to prove the aforementioned $\bigOmega{\frac{n}{\delta^2\log n}}$ lower bound do not satisfy this promise.\footnote{One can also note that the lower bound for ``standard'' tolerant testing is obtained by choosing the reference distribution to be uniform over $[n]$. Under promise of permutation, this particular instance of the problem is trivial, as any permutation of the uniform distribution is still the uniform distribution.} \begin{theorem} \label{theo:toltesting:lb:1} Any algorithm which, given a reference distribution $\mathbf{q}$ over $[n]$, $0< \dst, \delta \leq 1$, and sample access to an unknown distribution $\mathbf{p}\in \Pi_n(\mathbf{q})$, distinguishes with probability at least $2/3$ between (i)~$\totalvardist{\mathbf{p}}{\mathbf{q}} \leq \dst$ and (ii)~$\totalvardist{\mathbf{p}}{\mathbf{q}} > \dst+\delta$, must have sample complexity $\bigOmega{\delta^2 n^{1-O(1/\log(1/\delta))}}$. \end{theorem} \begin{proof} In what follows, we assume that $\delta = \bigOmega{1/\sqrt{n}}$, as otherwise there is nothing to prove. Let $k\geq 1$ be an integer to be chosen during the course of the analysis (we will set $k=\Theta(1/\delta)$), and write $n=2mk^2$ for some integer $m\geq 1$ (this can be done without loss of generality, as our assumption on $\delta$ ensures that $n\geq 2k^2$). For $1\leq \ell\leq 2k$, we define the integer interval $I_{k,\ell}:=[k]+(\ell-1)k$, so that $[2k^2] = \bigcup_{\ell=1}^{2k}I_{k,\ell}$. Given two distributions $\mathbf{p},\mathbf{q}$ over $[k]$, we define families of distributions $\mathcal{C}_{\mathbf{p},\mathbf{q}}$ and $\mathcal{F}_{\mathbf{p},\mathbf{q}}$ over $[n]$ as follows: first, we consider the distributions $\mathbf{c},\mathbf{f}$, each over $[2k^2]$, obtained by ``repeating and alternating'' $\mathbf{p}$ and $\mathbf{q}$ as follows: \begin{itemize} \item For $1\leq \ell\leq k$ and $j \in \bucket{k}{\ell}$, $\mathbf{c}(j) = \frac{1}{2k}\mathbf{p}(j-(\ell-1)k)$. \item For $1\leq \ell\leq k$ and $j \in \bucket{k}{k+\ell}$, $\mathbf{c}(j) = \frac{1}{2k}\mathbf{q}(j-(k+\ell-1)k)$.
\end{itemize} \begin{figure}[ht]\centering \begin{tikzpicture}[scale=0.60] \begin{axis}[ xmin=0, xmax=50, ymin=0, ytick={0}, width=\textwidth, height=\axisdefaultheight, area style, ] \node[anchor=north west] at (rel axis cs:0,1) {Distribution $\mathbf{c}$}; \foreach \r in {0,...,4}{ \pgfmathtruncatemacro{\i}{10*\r+10} \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=blue!\i,draw=black] plot coordinates { (5*\r, 3/16) (5*\r+1, 2/16) (5*\r+2, 6/16) (5*\r+3, 4/16) (5*\r+4, 1/16) (5*\r+5, 0) }; }\temp } \foreach \r in {5,...,9}{ \pgfmathtruncatemacro{\i}{10*(\r-5+1)+10} \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=red!\i,draw=black] plot coordinates { (5*\r, 4/36) (5*\r+1, 10/36) (5*\r+2, 8/36) (5*\r+3, 8/36) (5*\r+4, 6/36) (5*\r+5, 0) }; }\temp } \end{axis} \end{tikzpicture} \\ \begin{tikzpicture}[scale=0.60] \begin{axis}[ xmin=0, xmax=50, ymin=0, ytick={0}, width=\textwidth, height=\axisdefaultheight, area style, ] \node[anchor=north west] at (rel axis cs:0,1) {Distribution $\mathbf{f}$}; \foreach \r in {5,...,9}{ \pgfmathtruncatemacro{\i}{10*(\r-5+1)+10} \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=blue!\i,draw=black] plot coordinates { (5*\r, 3/16) (5*\r+1, 2/16) (5*\r+2, 6/16) (5*\r+3, 4/16) (5*\r+4, 1/16) (5*\r+5, 0) }; }\temp } \foreach \r in {0,...,4}{ \pgfmathtruncatemacro{\i}{10*\r+10} \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=red!\i,draw=black] plot coordinates { (5*\r, 4/36) (5*\r+1, 10/36) (5*\r+2, 8/36) (5*\r+3, 8/36) (5*\r+4, 6/36) (5*\r+5, 0) }; }\temp } \end{axis} \end{tikzpicture} \\ \begin{tikzpicture}[scale=0.60] \begin{axis}[ xmin=0, xmax=50, ymin=0, ytick={0}, width=\textwidth, height=\axisdefaultheight, area style, ] \node[anchor=north west] at (rel axis cs:0,1) {Distribution $\mathbf{r}$}; \foreach \r in {1,...,5}{ \pgfmathtruncatemacro{\i}{10*\r+10} \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=blue!\i,draw=black] plot coordinates { (\r-1, 3/16) (\r, 0) }; }\temp \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=blue!\i,draw=black] plot coordinates { (5+\r-1, 2/16) (5+\r, 0) }; }\temp \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=blue!\i,draw=black] plot coordinates { (10+\r-1, 6/16) (10+\r, 0) }; }\temp \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=blue!\i,draw=black] plot coordinates { (15+\r-1, 4/16) (15+\r, 0) }; }\temp \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=blue!\i,draw=black] plot coordinates { (20+\r-1, 1/16) (20+\r, 0) }; }\temp } \foreach \r in {1,...,5}{ \pgfmathtruncatemacro{\i}{10*\r+10} \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=red!\i,draw=black] plot coordinates { (25+\r-1, 4/36) (25+\r, 0) }; }\temp \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=red!\i,draw=black] plot coordinates { (25+5+\r-1, 10/36) (25+5+\r, 0) }; }\temp \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=red!\i,draw=black] plot coordinates { (25+10+\r-1, 8/36) (25+10+\r, 0) }; }\temp \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=red!\i,draw=black] plot coordinates { (25+15+\r-1, 8/36) (25+15+\r, 0) }; }\temp \edef\temp{\noexpand% \addplot+[ybar interval,mark=no,color=red!\i,draw=black] plot coordinates { (25+20+\r-1, 6/36) (25+20+\r, 0) }; }\temp } \end{axis} \end{tikzpicture} \caption{An example of $\mathbf{c}$ (top), $\mathbf{f}$ (middle), and $\mathbf{r}$ (bottom) over $[2k^2]$, for $k=5$; here, we took $\mathbf{p}=\frac{1}{16}(3,2,6,4,1)$ and $\mathbf{q}=\frac{1}{18}(2,5,4,4,3)$.} \end{figure} 
\noindent We obtain $\mathbf{f}$ over $[2k^2]$ in a similar fashion, but swapping $\bucket{k}{\ell}$ and $\bucket{k}{k+\ell}$: \begin{itemize} \item For $1\leq \ell\leq k$ and $j \in \bucket{k}{\ell}$, $\mathbf{f}(j) = \frac{1}{2k}\mathbf{q}(j-(\ell-1)k)$. \item For $1\leq \ell\leq k$ and $j \in \bucket{k}{k+\ell}$, $\mathbf{f}(j) = \frac{1}{2k}\mathbf{p}(j-(k+\ell-1)k)$. \end{itemize} \noindent Further, we define our ``reference'' distribution $\mathbf{r}$ over $[2k^2]$ as \begin{itemize} \item For $1 \leq \ell \leq k$ and $j \in \bucket{k}{\ell}$, $\mathbf{r}(j) = \frac{1}{2k}\mathbf{p}(\ell)$. \item For $k+1 \leq \ell \leq 2k$ and $j \in \bucket{k}{\ell}$, $\mathbf{r}(j) = \frac{1}{2k}\mathbf{q}(\ell-k)$. \end{itemize} We also define the reference distribution $\mathbf{r}^\ast_{\mathbf{p},\mathbf{q}}$ over $[n]=[2k^2m]$ by concatenating $m$ copies of $\mathbf{r}$ and normalizing the result; that is, \[ \mathbf{r}^\ast_{\mathbf{p},\mathbf{q}} := \frac{1}{m}(\mathbf{r} \sqcup \mathbf{r} \sqcup \dots \sqcup \mathbf{r}), \] where $\sqcup$ denotes the vector concatenation. Note that both $\mathbf{c}$ and $\mathbf{f}$ are permutations of $\mathbf{r}$, and that $\normone{\mathbf{r}}=\normone{\mathbf{c}}=\normone{\mathbf{f}}=1$. Next, we bound the gap between $\totalvardist{\mathbf{f}}{\mathbf{r}}$ and $\totalvardist{\mathbf{c}}{\mathbf{r}}$, relating it to the distance between $\mathbf{p}$ and $\mathbf{q}$. \begin{claim} \label{claim:distance:gap:c:f:r} $\totalvardist{\mathbf{f}}{\mathbf{r}} \geq \totalvardist{\mathbf{c}}{\mathbf{r}} + \frac{1}{k}\totalvardist{\mathbf{p}}{\mathbf{q}}$ \end{claim} \begin{proof} We will analyze the contributions to $\totalvardist{\mathbf{c}}{\mathbf{r}}$ and $\totalvardist{\mathbf{f}}{\mathbf{r}}$ on $\bucket{k}{\ell}$ and $\bucket{k}{k+\ell}$ for $1 \leq \ell \leq k$. Without loss of generality, we can assume that $\mathbf{p},\mathbf{q}$ are non-decreasing. Then, from our definition of $\mathbf{c}$, $\mathbf{r}$, and $\mathbf{f}$, we have \begin{align*} \totalvardist{\mathbf{f}}{\mathbf{r}} &= \frac{1}{4k} \sum_{i=1}^k \sum_{j=1}^k (|\mathbf{p}(i) - \mathbf{q}(j)| + |\mathbf{q}(i) - \mathbf{p}(j)|) = \frac{1}{2k} \mleft( \sum_{i=1}^k \sum_{j=1}^k |\mathbf{p}(i) - \mathbf{q}(j)| \mright)\\ &= \frac{1}{2k} \mleft( \sum_{i=1}^k |\mathbf{p}(i)-\mathbf{q}(i)| + \sum_{i=1}^k\sum_{j=1}^{i-1} (|\mathbf{p}(i)-\mathbf{q}(j)| + |\mathbf{p}(j)-\mathbf{q}(i)|) \mright)\,\\ \totalvardist{\mathbf{c}}{\mathbf{r}} &= \frac{1}{4k} \sum_{i=1}^k \sum_{j=1}^k (|\mathbf{p}(i) - \mathbf{p}(j)| + |\mathbf{q}(i) - \mathbf{q}(j)|) = \frac{1}{2k} \sum_{i=1}^k\sum_{j=1}^{i-1} ((\mathbf{p}(i)-\mathbf{p}(j))+(\mathbf{q}(i)-\mathbf{q}(j))) \end{align*} where for the last equality we used the assumption that $\mathbf{p},\mathbf{q}$ were non-decreasing to write \[ \sum_{i=1}^k \sum_{j=1}^k |\mathbf{p}(i) - \mathbf{p}(j)| = \sum_{i=1}^k \sum_{j=1}^{i-1} (\mathbf{p}(i)-\mathbf{p}(j)) + \sum_{i=1}^k \sum_{j=i+1}^{k} (\mathbf{p}(j)-\mathbf{p}(i)) = 2\sum_{i=1}^k \sum_{j=1}^{i-1} (\mathbf{p}(i)-\mathbf{p}(j))\,. \] The conclusion then follows from recalling that $\totalvardist{\mathbf{p}}{\mathbf{q}} = \frac{1}{2}\sum_{i=1}^k |\mathbf{p}(i)-\mathbf{q}(i)|$, and observing that $(\mathbf{p}(i)-\mathbf{p}(j))+(\mathbf{q}(i)-\mathbf{q}(j)) = (\mathbf{p}(i)-\mathbf{q}(j))+(\mathbf{q}(i)-\mathbf{p}(j)) \leq |\mathbf{p}(i)-\mathbf{q}(j)| + |\mathbf{p}(j)-\mathbf{q}(i)|$.\qedhere \end{proof}
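\cref{claim:distance:gap:c:f:r} is also easy to check numerically. The following short sketch (ours) does so for the vectors of the figure above, sorted non-decreasingly as the claim assumes:
\begin{verbatim}
import numpy as np

def tv(a, b):
    """Total variation distance between two distribution vectors."""
    return 0.5 * np.abs(np.asarray(a) - np.asarray(b)).sum()

# Vectors from the figure, sorted non-decreasingly (WLOG in the claim).
p = np.sort(np.array([3, 2, 6, 4, 1]) / 16)
q = np.sort(np.array([4, 10, 8, 8, 6]) / 36)
k = len(p)

c = np.concatenate([np.tile(p, k), np.tile(q, k)]) / (2 * k)  # p-copies, q-copies
f = np.concatenate([np.tile(q, k), np.tile(p, k)]) / (2 * k)  # q-copies, p-copies
r = np.concatenate([np.repeat(p, k), np.repeat(q, k)]) / (2 * k)  # bucket-constant

assert np.isclose(c.sum(), 1) and np.isclose(f.sum(), 1) and np.isclose(r.sum(), 1)
assert tv(f, r) >= tv(c, r) + tv(p, q) / k   # the claimed distance gap
\end{verbatim}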
To define $\mathcal{C}_{\mathbf{p},\mathbf{q}}$ and $\mathcal{F}_{\mathbf{p},\mathbf{q}}$, we will need one further piece of notation. We denote by $\mathcal{B}_{k}\subseteq\mathcal{S}_{2k^2}$ the set of all permutations of $[2k^2]$ ``respecting the buckets,'' that is, \[ \mathcal{B}_{k} := \setOfSuchThat{ \pi \in \mathcal{S}_{2k^2}}{ \pi(I_{k,\ell})=I_{k,\ell}\ \forall \ell \in [2k] } \] We then let \[ \mathcal{C}_{\mathbf{p},\mathbf{q}} = \setOfSuchThat{ \frac{1}{m}(\mathbf{c}\circ\pi_1 \sqcup \mathbf{c}\circ\pi_2 \sqcup \dots \sqcup \mathbf{c}\circ\pi_m) }{ \pi_1,\dots, \pi_m \in \mathcal{B}_{k} } \] and \[ \mathcal{F}_{\mathbf{p},\mathbf{q}} = \setOfSuchThat{ \frac{1}{m}(\mathbf{f}\circ\pi_1 \sqcup \mathbf{f}\circ\pi_2 \sqcup \dots \sqcup \mathbf{f}\circ\pi_m) }{ \pi_1,\dots, \pi_m \in \mathcal{B}_{k} } \] where as before $\sqcup$ denotes the vector concatenation; that is, we stitch together $m$ blocks, each consisting of a permuted version of either $\mathbf{c}$ or $\mathbf{f}$. Note that since $n=m\cdot 2k^2$ and each $\mathbf{c}$ (resp.\ $\mathbf{f}$) is a $(2k^2)$-dimensional vector, $\mathcal{C}_{\mathbf{p},\mathbf{q}}$ and $\mathcal{F}_{\mathbf{p},\mathbf{q}}$ are indeed families of probability distributions over $[n]$, and $\mathcal{C}_{\mathbf{p},\mathbf{q}},\mathcal{F}_{\mathbf{p},\mathbf{q}} \subseteq \Pi_{n}(\mathbf{r}^\ast_{\mathbf{p},\mathbf{q}})$.\medskip The construction above allows us to convert any two distributions $\mathbf{p},\mathbf{q}$ with sufficiently many matching moments to families of distributions (whose elements are all permutations of a single reference one) hard to distinguish: \begin{claim} \label{claim:lower:bound:from:moment:matching} There exists some absolute constant $c>0$ such that, if $\mathbf{p},\mathbf{q}$ have matching first $r$-way moments, it is impossible to distinguish a uniformly random element of $\mathcal{C}_{\mathbf{p},\mathbf{q}}$ from a uniformly random element of $\mathcal{F}_{\mathbf{p},\mathbf{q}}$ given fewer than $c m^{1-\frac{1}{r+1}}$ samples. \end{claim} \begin{proof} By the assumption on $\mathbf{p},\mathbf{q}$ and our construction of $\mathbf{c},\mathbf{f}$ from them, for each of the $m$ contiguous blocks of $2k^2$ elements, the $r$-way moments of the corresponding conditional distributions exactly match. Given that uniformly random elements $\mathbf{p}'$ of $\mathcal{C}_{\mathbf{p},\mathbf{q}}$ and $\mathbf{q}'$ of $\mathcal{F}_{\mathbf{p},\mathbf{q}}$ correspond to independent permutations inside each block, any block in which fewer than $r+1$ samples fall brings exactly zero information about whether they come from $\mathbf{p}'$ or $\mathbf{q}'$ (specifically, one could simulate the distribution of those $s < r+1$ samples without getting any sample from the real distribution). Since each of these $m$ blocks has total probability $1/m$ under both $\mathbf{p}'$ and $\mathbf{q}'$, by a generalized birthday paradox (see, e.g.,~\cite{SuzukiTKT06}), with probability at least $9/10$ no block will receive more than $r$ samples unless the total number of samples is at least $c m^{1-\frac{1}{r+1}}$, for some absolute constant $c>0$. \end{proof} It remains to specify \emph{which} pair of distributions with ``sufficiently many matching moments'' we will use. While we could argue directly about the existence of such a pair of distributions with desirable properties, it is simpler to leverage a construction due to Valiant and Valiant~\cite{ValiantV11}, which exhibits the desired properties.
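To make the moment-matching phenomenon concrete before using it, here is a toy instance (ours, much weaker than the construction of~\cite{ValiantV11} invoked below): two distributions on four points with matching first two power moments, yet at total variation distance $1/3$.
\begin{verbatim}
from fractions import Fraction as F

# q uniform on {0,1,2,3}; p = q + (1,-3,3,-1)/12, the third finite
# difference, which is orthogonal to all polynomials of degree <= 2.
q = [F(3, 12)] * 4
p = [F(4, 12), F(0, 12), F(6, 12), F(2, 12)]

for moment in (1, 2):
    assert sum(pi * i**moment for i, pi in enumerate(p)) == \
           sum(qi * i**moment for i, qi in enumerate(q))

tv = sum(abs(pi - qi) for pi, qi in zip(p, q)) / 2
print(tv)   # 1/3
\end{verbatim}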
\begin{claim} \label{claim:moment:matching} There exists some $\dst_0>0$ such that the following holds. For every sufficiently large $r$, there exists a pair of distributions (without loss of generality, non-decreasing) $\mathbf{p}_{\rm VV},\mathbf{q}_{\rm VV}$ over $k=O(r2^r)$ elements with matching first $r$-way moments, but $\totalvardist{\mathbf{p}_{\rm VV}}{\mathbf{q}_{\rm VV}} \geq \dst_0$. \end{claim} \begin{proof} This follows from the lower bound construction of~\cite{ValiantV11}. \end{proof} We will rely on this pair of distributions $\mathbf{p}_{\rm VV},\mathbf{q}_{\rm VV}$, and hereafter write $\mathcal{C},\mathcal{F},$ and $\mathbf{r}^\ast$ for $\mathcal{C}_{\mathbf{p}_{\rm VV},\mathbf{q}_{\rm VV}},\mathcal{F}_{\mathbf{p}_{\rm VV},\mathbf{q}_{\rm VV}},$ and $\mathbf{r}^\ast_{\mathbf{p}_{\rm VV},\mathbf{q}_{\rm VV}}$, respectively. \begin{claim} For every $\mathbf{p}'\in\mathcal{C}$ and $\mathbf{q}'\in\mathcal{F}$, we have $\totalvardist{\mathbf{q}'}{\mathbf{r}^\ast} \geq \totalvardist{\mathbf{p}'}{\mathbf{r}^\ast} + \frac{\dst_0}{k}$. \end{claim} \begin{proof} Due to the definition of $\mathcal{C}$, $\mathcal{F}$, and $\mathbf{r}^\ast$ as $m$-fold concatenations, and since $\mathbf{r}$ is invariant under permutations from $\mathcal{B}_{k}$, it is sufficient to prove the claim for the distributions $\mathbf{c}$ and $\mathbf{f}$ built from $\mathbf{p}_{\rm VV},\mathbf{q}_{\rm VV}$, and for $\mathbf{r}$ (over $[2k^2]$). The claimed bound then immediately follows from~\cref{claim:distance:gap:c:f:r}. \end{proof} To finish the argument, it only remains to combine the various claims. We choose $k := \lfloor \dst_0/\delta \rfloor$, so that the distance gap $\dst_0/k$ from the previous claim is at least $\delta$, and $m = n/(2k^2) \geq 1$ (since $\delta = \bigOmega{1/\sqrt{n}}$). By~\cref{claim:moment:matching}, we can then set $r:= \bigTheta{\log k}$ and obtain, from~\cref{claim:lower:bound:from:moment:matching}, a sample complexity lower bound of \[ \bigOmega{ m^{1-\frac{1}{r+1}} } = \bigOmega{\delta^2 n^{1-\bigO{\frac{1}{\log(1/\delta)}}}} \] as desired. \end{proof} \ifnum1=1 The theorem immediately implies the following two corollaries. \begin{corollary} For every $c>0$, there exists some $\delta > 0$ such that the following holds. Any algorithm which, given a reference distribution $\mathbf{q}$ over $[n]$, $\dst\in(0,1)$, and sample access to an unknown distribution $\mathbf{p}\in \Pi_n(\mathbf{q})$, distinguishes with probability at least $2/3$ between (i)~$\totalvardist{\mathbf{p}}{\mathbf{q}} \leq \dst$ and (ii)~$\totalvardist{\mathbf{p}}{\mathbf{q}} > \dst+\delta$, must have sample complexity $\bigOmega{ n^{1-c} }$. \end{corollary} \begin{corollary} Any algorithm which, given a reference distribution $\mathbf{q}$ over $[n]$, $\dst\in(0,1)$, and sample access to an unknown distribution $\mathbf{p}\in \Pi_n(\mathbf{q})$, distinguishes with probability at least $2/3$ between (i)~$\totalvardist{\mathbf{p}}{\mathbf{q}} \leq \dst$ and (ii)~$\totalvardist{\mathbf{p}}{\mathbf{q}} > \dst+1/2^{\sqrt{\log n}}$, must have sample complexity $\frac{n}{2^{O(\sqrt{\log n})}}$. \end{corollary} \fi \ifnum1=0 \paragraph{Tolerant testing $C$-approximation} \renewcommand{\bucket}{B} We now turn to our second tolerant testing lower bound, which applies to algorithms providing a $C$-factor approximation of the distance to the reference distribution.
\begin{theorem} \label{thm:tol-mult-main} Any algorithm which, given a reference distribution $\mathbf{q}$ over $[n]$, $0< \dst \leq 1$, $C \geq 2$, and sample access to an unknown distribution $\mathbf{p}\in \Pi_n(\mathbf{q})$, distinguishes with probability at least $2/3$ between (i)~$\totalvardist{\mathbf{p}}{\mathbf{q}} \leq \dst$ and (ii)~$\totalvardist{\mathbf{p}}{\mathbf{q}} > C \dst$, must have sample complexity $\bigOmega{\sqrt{\frac{n}{8^C}}/\dst}$. \end{theorem} \begin{proof} We will prove the theorem via a sequence of lemmas. \ifnum1=0 In the interest of space, their proofs are deferred to the full version of the paper. \fi We will assume that $C \geq 2$ is an integer, and we define $m = 2^C-1$. Our proof will proceed similarly to the proof of Theorem~\ref{theo:toltesting:lb:1}. We will begin by working over $[m(2^{C+1}+2^{C-1}-3)]$. Throughout this section, we partition $[m(2^{C+1}+2^{C-1}-3)]$ into $C+1$ buckets, which we will denote $\bucket_0,\bucket_1,\ldots,\bucket_{C}$, such that each $\bucket_i$ is a set of consecutive integers, $|\bucket_0| = m 2^{C-1}$, $|\bucket_C| = m$, and $|\bucket_i| = m 2^{C+1-i}$ for $1 \leq i \leq C-1$. We define $\bucket_i = \{b_i,b_i + 1,\ldots,b'_i\}$. For convenience, we define the normalizing constant $s := m(4C-1)2^{C-1}$. \noindent We define a distribution $\mathbf{r}$ in the following way: \begin{itemize} \item For each $j \in \bucket_0$, $\mathbf{r}(j) = \frac1{s}$. \item For each $1 \leq i \leq C-1$ and $j \in \bucket_i$, $\mathbf{r}(j) = \frac{2^i}{s}$. \item For each $j \in \bucket_C$, $\mathbf{r}(j) = \frac{2^{C}}{s}$. \end{itemize} We define two distributions $\mathbf{p}$ and $\mathbf{q}$ such that $\mathbf{p}$ and $\mathbf{q}$ are hard to distinguish with few samples, and such that $\totalvardist{\mathbf{r}}{\mathbf{p}}$ and $\totalvardist{\mathbf{r}}{\mathbf{q}}$ are far apart. We define $\mathbf{q}$ in the following way: \begin{itemize} \item For each $j \in \bucket_0$, $\mathbf{q}(j) = \frac2{s}$. \item For each $1 \leq i \leq C-1$ and $j \in \bucket_i$: \begin{itemize} \item For $j$ in $\{b_i,\ldots,b_i+m 2^{C-i}-1\}$, $\mathbf{q}(j) = \frac{2^{i-1}}{s}$. \item For $j$ in $\{b_i+m 2^{C-i},\ldots,b_i+m(2^{C-i} + 2^{C-1-i}) - 1\}$, $\mathbf{q}(j) = \frac{2^i}{s}$. \item For $j$ in $\{b_i+m(2^{C-i} + 2^{C-1-i}),\ldots,b'_i\}$, $\mathbf{q}(j) = \frac{2^{i+1}}{s}$. \end{itemize} \item For each $j \in \bucket_C$, $\mathbf{q}(j) = \frac{2^{C-1}}{s}$. \end{itemize} \noindent We define $\mathbf{p}$ as follows: \begin{itemize} \item For each $j \in \bucket_0$, \begin{itemize} \item If $j \in \{b_0,\ldots,b_0+(m-1)2^{C-1}-1\}$, then $\mathbf{p}(j) = \frac1{s}$. \item If $j \in \{b_0+(m-1)2^{C-1},\ldots,b'_0\}$, then $\mathbf{p}(j) = \frac{2^C}{s}$. \end{itemize} \item For each $1 \leq i \leq C-1$ and $j \in \bucket_i$, $\mathbf{p}(j) = \mathbf{r}(j) = \frac{2^i}{s}$. \item For each $j \in \bucket_C$, \begin{itemize} \item If $j \in \{b_C,\ldots,b_C+2^{C-1}-1\}$, then $\mathbf{p}(j) = \frac1{s}$. \item If $j \in \{b_C+2^{C-1},\ldots,b'_C\}$, then $\mathbf{p}(j) = \frac{2^C}{s}$. \end{itemize} \end{itemize} \begin{lemma} \label{lem:tol-mult-equal-buckets} For $0 \leq i \leq C$, $\sum_{j \in \bucket_i} \mathbf{p}(j) = \sum_{j \in \bucket_i} \mathbf{q}(j)$. \end{lemma} \ifnum1=1 \begin{proof} The proof is direct calculation. Observe that in bucket $0$, \[ s \sum_{j \in \bucket_0} \mathbf{q}(j) = 2 \cdot m 2^{C-1} = 1 \cdot (m-1)2^{C-1} + 2^{C} \cdot 2^{C-1} = s \sum_{j \in \bucket_0} \mathbf{p}(j).
\] In bucket $C$, we have \[ s \sum_{j \in \bucket_C} \mathbf{q}(j) = m \cdot 2^{C-1} = (m - 1)2^{C-1} + 2^{C-1} = (2^{C} - 2)2^{C-1} + 2^{C-1} = 2^{C} \cdot (2^{C-1} - 1) + 1 \cdot 2^{C-1} = s \sum_{j \in \bucket_C} \mathbf{p}(j). \] For $1 \leq i \leq C-1$, we have \begin{align*} s \sum_{j \in \bucket_i} \mathbf{p}(j) &= 2^i \cdot m 2^{C+1-i} \\ &= m 2^{C-1-i} 2^{i+2} \\ &= m 2^{C-1-i} (2(2^{i-1}) + 2^i + 2^{i+1}) \\ &= \twocmone2^{C-i} \cdot 2^{i-1} + \twocmone2^{C-1-i} \cdot 2^i + \twocmone2^{C-1-i} \cdot 2^{i+1} \\ &= s \sum_{j \in \bucket_i} \mathbf{q}(j). \end{align*} The claim follows by dividing the equalities by $s$. \end{proof} \fi \begin{lemma} \label{lem:tv-distance-far} $\totalvardist{\mathbf{r}}{\mathbf{q}} = \frac{C}{4C-1}$ \end{lemma} \ifnum1=1 \begin{proof} By direct calculation, \begin{align*} 2s\totalvardist{\mathbf{r}}{\mathbf{q}} &= s \sum_{j} |\mathbf{r}(j) - \mathbf{q}(j)| \\ &= m 2^{C-1}(2-1) + m(2^C - 2^{C-1}) + \sum_{i=1}^{C-1} \left( \twocmone2^{C-i}(2^i - 2^{i-1}) + \twocmone2^{C-1-i}(2^{i+1}-2^i) \right) \\ &= m 2^{C} + \sum_{i=1}^{C-1} (2^{i-1}\twocmone2^{C-i} + \twocmone2^{C-1-i}2^i) \\ &= m 2^{C} + \sum_{i=1}^{C-1} (\twocmone2^{C-1} + \twocmone2^{C-1}) \\ &= C m 2^{C}. \end{align*} Dividing both sides by $2s$ yields the lemma. \end{proof} \fi \begin{lemma} \label{lem:tol-mult-buckets-almost-uniform} For every $0 \leq i \leq C$, $\frac{1}{4C-1} \leq \mathbf{p}(\bucket_i) \leq \frac{4}{4C-1}$ (and similarly for $\mathbf{q}(\bucket_i))$. \end{lemma} \ifnum1=1 \begin{proof} By Lemma~\ref{lem:tol-mult-equal-buckets}, it suffices to check either $\mathbf{p}(\bucket_i)$ or $\mathbf{q}(\bucket_i)$ for each $0 \leq i \leq C$. For bucket $0$, we get \[ \mathbf{q}(\bucket_0) = \frac{2}{s} \cdot m 2^{C-1} = \frac{2}{4C-1}. \] For bucket $C$, we get \[ \mathbf{q}(\bucket_{C}) = \frac{2^{C-1}}{s} \cdot m = \frac{1}{4C-1}. \] For $1 \leq i \leq C-1$, we get \[ \mathbf{p}(\bucket_i) = \frac{2^i}{s} \cdot m(2^{C+1-i}) = \frac{4}{4C-1}. \] \end{proof} \fi \begin{lemma} \label{lem:tv-distance-close} $\totalvardist{\mathbf{r}}{\mathbf{p}} = \frac{1}{4C-1}$ \end{lemma} \ifnum1=1 \begin{proof} By direct calculation, \[ 2s\totalvardist{\mathbf{r}}{\mathbf{p}} = 2^{C-1} \cdot (2^C - 1) + 2^{C-1} \cdot (2^C - 1) = 2(2^{C} - 1)2^{C-1} = m 2^{C}. \] Dividing both sides by $2s$ yields the lemma. \end{proof} \fi \newcommand{t}{t} Let $w := m(2^{C+1}+2^{C-1}-3)$ denote the size of the base domain. We assume that $n$ is a multiple of $w$, and define $t := \frac{n}{w}$. To define $\mathcal{C}$ and $\mathcal{F}$ over $[n]$, we will need one further piece of notation. We denote by $\mathcal{B}'_{w}\subseteq\mathcal{S}_{w}$ the set of all permutations of $[w]$ ``respecting the buckets,'' that is, \[ \mathcal{B}'_{w} = \{ \pi \in \mathcal{S}_{w} : \pi(\bucket_i) = \bucket_i \ \forall i \in \{0,1,\ldots,C\} \} \] We then let $ \mathbf{r}^* := \frac{1}{t} (\mathbf{r} \sqcup \mathbf{r} \sqcup \cdots \sqcup \mathbf{r}) $ as well as \begin{align*} \mathcal{C} &= \setOfSuchThat{ \frac{1}{t}(\mathbf{c}\circ\pi_1 \sqcup \mathbf{c}\circ\pi_2 \sqcup \dots \sqcup \mathbf{c}\circ\pi_t) }{ \pi_1,\dots, \pi_t \in \mathcal{B'}_{w} } \\ \mathcal{F} &= \setOfSuchThat{ \frac{1}{t}(\mathbf{f}\circ\pi_1 \sqcup \mathbf{f}\circ\pi_2 \sqcup \dots \sqcup \mathbf{f}\circ\pi_t) }{ \pi_1,\dots, \pi_t \in \mathcal{B'}_{w} } \end{align*} where $\mathbf{c} := \mathbf{p}$, $\mathbf{f} := \mathbf{q}$, and, as before, $\sqcup$ denotes vector concatenation.
Since $\totalvardist{\mathbf{r}}{\mathbf{c} \circ \pi} = \totalvardist{\mathbf{r}}{\mathbf{c}}$ and $\totalvardist{\mathbf{r}}{\mathbf{f} \circ \pi} = \totalvardist{\mathbf{r}}{\mathbf{f}}$ for all $\pi \in \mathcal{B}'_{w}$, we have that $\totalvardist{\mathbf{r}^*}{\mathbf{p}} = \frac{1}{4C-1}$ for every distribution $\mathbf{p} \in \mathcal{C}$, and $\totalvardist{\mathbf{r}^*}{\mathbf{q}} = \frac{C}{4C-1}$ for every distribution $\mathbf{q} \in \mathcal{F}$. Further, repeating the same partitioning of each interval of $w$ elements of $[n]$ into buckets $\bucket_0,\bucket_1,\ldots,\bucket_C$, we have $t (C+1)$ buckets, such that distinguishing a distribution in $\mathcal{C}$ from a distribution in $\mathcal{F}$ requires seeing at least $2$ samples in at least one of these buckets. Since the probability mass on each bucket is in the interval $[\frac{1}{t(4C-1)},\frac{4}{t(4C-1)}]$, at least $\Omega(\sqrt{t(C+1)}) = \Omega(\sqrt{n(C+1)/w})$ samples are needed to distinguish a distribution in $\mathcal{C}$ from a distribution in $\mathcal{F}$, completing the proof of Theorem~\ref{thm:tol-mult-main}. \end{proof} \fi \ifnum1=1 \subsubsection*{Tolerant testing $C$-approximation} \newcommand{m}{m} \renewcommand{\bucket}{B} \newcommand{C}{C} \newcommand{s}{s} We now turn to our second tolerant testing lower bound, which applies to algorithms providing a $C$-factor approximation of the distance to the reference distribution. \begin{theorem} \label{thm:tol-mult-main} Any algorithm which, given a reference distribution $\mathbf{q}$ over $[n]$, $C \geq 2$, and sample access to an unknown distribution $\mathbf{p}\in \Pi_n(\mathbf{q})$, distinguishes with probability at least $2/3$ between (i)~$\totalvardist{\mathbf{p}}{\mathbf{q}} \leq \frac{1}{4C-1}$ and (ii)~$\totalvardist{\mathbf{p}}{\mathbf{q}} \geq \frac{C}{4C-1}$, must have sample complexity $\bigOmega{\sqrt{\frac{n}{4^C}}}$. \end{theorem} \begin{remark} As discussed in~\cref{rk:small:tolerance},~\cref{thm:tol-mult-main} is essentially optimal, as it matches (up to polylogarithmic factors in the sample complexity) the upper bound from~\cref{theo:ub:testing} when $C=\Theta(\log\ab)$. \end{remark} \begin{proof} We will prove the theorem via a sequence of lemmas. \ifnum1=0 In the interest of space and exposition, their proofs are deferred to~\cref{sec:proofs-of-misc}. \fi We will assume that $C \geq 2$ is an integer, and we define $m = 2^C-1$. Our proof will proceed similarly to the proof of~\cref{theo:toltesting:lb:1}. We will begin by working over $[m(2^{C+1}+2^{C-1}-3)]$. Throughout this section, we partition $[m(2^{C+1}+2^{C-1}-3)]$ into $C+1$ buckets, which we will denote $\bucket_0,\bucket_1,\ldots,\bucket_{C}$, such that each $\bucket_i$ is a set of consecutive integers, $|\bucket_C| = \twocmone2^{C-1}$, $|\bucket_0| = m$, and $|\bucket_i| = \twocmone2^{i+1}$ for $1 \leq i \leq C-1$. \ignore{We define $\bucket_i = \{b_i,b_i + 1,\ldots,b'_i\}$.} For convenience, we define $s := m(4C-1)2^{C-1}$. \noindent We define a distribution $\mathbf{r}$ in the following way: \begin{itemize} \item For each $j \in \bucket_0$, $\mathbf{r}(j) = \frac{2^{C}}{s}$. \item For each $1 \leq i \leq C-1$ and $j \in \bucket_i$, $\mathbf{r}(j) = \frac{2^{C-i}}{s}$. \item For each $j \in \bucket_C$, $\mathbf{r}(j) = \frac{1}{s}$. \end{itemize} We define two distributions $\mathbf{p}$ and $\mathbf{q}$ that are hard to distinguish with few samples, yet such that $\totalvardist{\mathbf{r}}{\mathbf{p}}$ and $\totalvardist{\mathbf{r}}{\mathbf{q}}$ are far apart.
We define $\mathbf{q}$ in the following way: \begin{itemize} \item For each $j \in \bucket_0$, $\mathbf{q}(j) = \frac{2^{C-1}}{s}$. \item For each $1 \leq i \leq C-1$, \begin{itemize} \item For $j$ in the first $\twocmone2^{i}$ elements of $\bucket_i$, \ignore{$\{b_i,\ldots,b_i+\twocmone2^{i}-1\}$,} $\mathbf{q}(j) = \frac{2^{C-i-1}}{s}$. \item For $j$ in the next $\twocmone2^{i-1}$ elements of $\bucket_i$, \ignore{$\{b_i+\twocmone2^{i},\ldots,b_i+m(2^{i} + 2^{i-1}) - 1\}$,} $\mathbf{q}(j) = \frac{2^{C-i}}{s}$. \item For $j$ in the last $\twocmone2^{i-1}$ elements of $\bucket_i$, \ignore{$\{b_i+m(2^{i} + 2^{i-1}),\ldots,b'_i\}$,} $\mathbf{q}(j) = \frac{2^{C-i+1}}{s}$. \end{itemize} \item For each $j \in \bucket_C$, $\mathbf{q}(j) = \frac{2}{s}$. \end{itemize} \noindent We define $\mathbf{p}$ as follows: \begin{itemize} \item For each $j \in \bucket_0$, \begin{itemize} \item \ignore{If $j \in \{b_0,\ldots,b_0+2^{C-1}-1\}$,} If $j$ is in the first $2^{C-1}$ elements of $\bucket_0$, then $\mathbf{p}(j) = \frac1{s}$. \item \ignore{If $j \in \{b_0+2^{C-1},\ldots,b'_0\}$,} If $j$ is in the last $m-2^{C-1} = 2^{C-1}-1$ elements of $\bucket_0$, then $\mathbf{p}(j) = \frac{2^C}{s}$. \end{itemize} \item For each $1 \leq i \leq C-1$ and $j \in \bucket_i$, $\mathbf{p}(j) = \mathbf{r}(j) = \frac{2^{C-i}}{s}$. \item For each $j \in \bucket_C$, \begin{itemize} \item \ignore{If $j \in \{b_C,\ldots,b_C+\twocmone2^{C-1}-1\}$,} If $j$ is in the first $(m-1)2^{C-1}$ elements of $\bucket_C$, then $\mathbf{p}(j) = \frac1{s}$. \item \ignore{If $j \in \{b_C+\twocmone2^{C-1},\ldots,b'_C\}$,} If $j$ is in the last $2^{C-1}$ elements of $\bucket_C$, then $\mathbf{p}(j) = \frac{2^C}{s}$. \end{itemize} \end{itemize} \begin{lemma} \label{lem:tol-mult-equal-buckets} For $0 \leq i \leq C$, $\sum_{j \in \bucket_i} \mathbf{p}(j) = \sum_{j \in \bucket_i} \mathbf{q}(j)$. \end{lemma} \ifnum1=1 \begin{proof} The proof is simply direct calculation. Observe that in bucket $C$, \[ s \sum_{j \in \bucket_C} \mathbf{q}(j) = \twocmone2^{C-1} \cdot 2 = (m-1)2^{C-1} + (m+1)2^{C-1} = (m-1)2^{C-1} \cdot 1 + 2^{C} \cdot 2^{C-1} = s \sum_{j \in \bucket_C} \mathbf{p}(j). \] In bucket $0$, we have \[ s \sum_{j \in \bucket_0} \mathbf{q}(j) = m \cdot 2^{C-1} = (m - 1)2^{C-1} + 2^{C-1} = (2^{C} - 2)2^{C-1} + 2^{C-1} = (2^{C-1} - 1) \cdot 2^{C} + 2^{C-1} \cdot 1 = s \sum_{j \in \bucket_0} \mathbf{p}(j). \] For $1 \leq i \leq C-1$, we have \begin{align*} s \sum_{j \in \bucket_i} \mathbf{p}(j) &= \twocmone2^{i+1} \cdot 2^{C-i} \\ &= m (2^i + 2(2^{i-1}) + 2^{i+1}) 2^{C-1-i} \\ &= m 2^{i} \cdot 2^{C-i-1} + \twocmone2^{i-1} \cdot 2^{C-i} + \twocmone2^{i-1} \cdot 2^{C-i+1} \\ &= s \sum_{j \in \bucket_i} \mathbf{q}(j). \end{align*} The claim follows by dividing the equalities by $s$. \end{proof} \fi \begin{lemma} \label{lem:tv-distance-far} $\totalvardist{\mathbf{r}}{\mathbf{q}} = \frac{C}{4C-1}$ \end{lemma} \ifnum1=1 \begin{proof} By direct calculation, \begin{align*} 2s\totalvardist{\mathbf{r}}{\mathbf{q}} &= s \sum_{j} |\mathbf{r}(j) - \mathbf{q}(j)| \\ &= m 2^{C-1}(2-1) + m(2^C - 2^{C-1}) + \sum_{i=1}^{C-1} \left( \twocmone2^{i}(2^{C-i} - 2^{C-i-1}) + \twocmone2^{i-1}(2^{C-i+1}-2^{C-i}) \right) \\ &= m 2^{C} + \sum_{i=1}^{C-1} (\twocmone2^{i}\cdot 2^{C-i-1} + \twocmone2^{i-1}\cdot 2^{C-i}) \\ &= m 2^{C} + \sum_{i=1}^{C-1} (\twocmone2^{C-1} + \twocmone2^{C-1}) \\ &= C m 2^{C}. \end{align*} Dividing both sides by $2s$ yields the lemma.
\end{proof} \fi \begin{lemma} \label{lem:tol-mult-buckets-almost-uniform} For every $0 \leq i \leq C$, $\mathbf{p}(\bucket_i) \leq \frac{2}{C+1}$ (and similarly for $\mathbf{q}(\bucket_i))$. \end{lemma} \ifnum1=1 \begin{proof} We apply Lemma~\ref{lem:tol-mult-equal-buckets} and directly calculate. For bucket $C$, we get \[ \mathbf{p}(\bucket_C) = \mathbf{q}(\bucket_C) = \frac{2}{s} \cdot m 2^{C-1} = \frac{2}{4C-1}. \] For bucket $0$, we get \[ \mathbf{p}(\bucket_0) = \mathbf{q}(\bucket_0) = \frac{2^{C-1}}{s} \cdot m = \frac{1}{4C-1}. \] For $1 \leq i \leq C-1$, we get \[ \mathbf{q}(\bucket_i) = \mathbf{p}(\bucket_i) = \frac{2^{C-i}}{s} \cdot \twocmone2^{i+1} = \frac{4}{4C-1}. \] The claim follows by observing that $\frac{4}{4C-1} \leq \frac{2}{C+1}$ when $C \geq \frac32$. \end{proof} \fi \begin{lemma} \label{lem:tv-distance-close} $\totalvardist{\mathbf{r}}{\mathbf{p}} = \frac{1}{4C-1}$ \end{lemma} \ifnum1=1 \begin{proof} By direct calculation, \[ 2s\totalvardist{\mathbf{r}}{\mathbf{p}} = 2^{C-1} \cdot (2^C - 1) + 2^{C-1} \cdot (2^C - 1) = 2(2^{C} - 1)2^{C-1} = m 2^{C}. \] Dividing both sides by $2s$ yields the lemma. \end{proof} \fi \newcommand{t}{t} \newcommand{\distsz}{w} Let $\distsz = m(2^{C+1}+2^{C-1}-3)$. We assume that $n$ is a multiple of $\distsz$, and define $t := \frac{n}{\distsz}$. To define $\mathcal{C}$ and $\mathcal{F}$ over $[n]$, we will need one further piece of notation. We denote by $\mathcal{B}'_{\distsz}\subseteq\mathcal{S}_{\distsz}$ the set of all permutations of $[\distsz]$ ``respecting the buckets,'' that is, \[ \mathcal{B}'_{\distsz} = \{ \pi \in \mathcal{S}_{\distsz} : \pi(\bucket_i) = \bucket_i \ \forall i \in \{0,1,\ldots,C\} \} \] We then let $ \mathbf{r}^* := \frac{1}{t} (\mathbf{r} \sqcup \mathbf{r} \sqcup \cdots \sqcup \mathbf{r}) $ as well as \begin{align*} \mathcal{C} &= \setOfSuchThat{ \frac{1}{t}(\mathbf{c}\circ\pi_1 \sqcup \mathbf{c}\circ\pi_2 \sqcup \dots \sqcup \mathbf{c}\circ\pi_t) }{ \pi_1,\dots, \pi_t \in \mathcal{B'}_{\distsz} } \\ \mathcal{F} &= \setOfSuchThat{ \frac{1}{t}(\mathbf{f}\circ\pi_1 \sqcup \mathbf{f}\circ\pi_2 \sqcup \dots \sqcup \mathbf{f}\circ\pi_t) }{ \pi_1,\dots, \pi_t \in \mathcal{B'}_{\distsz} } \end{align*} where $\mathbf{c} := \mathbf{p}$, $\mathbf{f} := \mathbf{q}$, and, as before, $\sqcup$ denotes vector concatenation. Since $\totalvardist{\mathbf{r}}{\mathbf{c} \circ \pi} = \totalvardist{\mathbf{r}}{\mathbf{c}}$ and $\totalvardist{\mathbf{r}}{\mathbf{f} \circ \pi} = \totalvardist{\mathbf{r}}{\mathbf{f}}$ for all $\pi \in \mathcal{B}'_{\distsz}$, we have that $\totalvardist{\mathbf{r}^*}{\mathbf{p}} = \frac{1}{4C-1}$ for every distribution $\mathbf{p} \in \mathcal{C}$, and $\totalvardist{\mathbf{r}^*}{\mathbf{q}} = \frac{C}{4C-1}$ for every distribution $\mathbf{q} \in \mathcal{F}$. Further, repeating the same partitioning of each interval of $\distsz$ elements of $[n]$ into buckets $\bucket_0,\bucket_1,\ldots,\bucket_C$, we have $t (C+1)$ buckets, such that distinguishing a distribution in $\mathcal{C}$ from a distribution in $\mathcal{F}$ requires seeing at least $2$ samples in at least one of these buckets. Since the probability mass on each of the buckets is at most $\frac{2}{t(C+1)}$ by Lemma~\ref{lem:tol-mult-buckets-almost-uniform}, at least $\Omega(\sqrt{t(C+1)}) = \Omega(\sqrt{n(C+1)/\distsz})$ samples are needed to distinguish a distribution in $\mathcal{C}$ from a distribution in $\mathcal{F}$, completing the proof of Theorem~\ref{thm:tol-mult-main}.
\end{proof} \fi \iffalse \subsection{Lower bound (previous)} \begin{definition} Given a set $S$ and an integer $r$, we define the $r$-way moment vector $\momvec{S}{r}$ to be the vector indexed by integer partitions $\lambda = (\lambda_1,\lambda_2,\ldots,\lambda_\ell) \vdash r$ such that $\momvec{S}{r}_\lambda = \sum_{\{i_1,i_2,\ldots,i_\ell\} \subseteq S} \prod_{j=1}^\ell i_j^{\lambda_{j}}$. \end{definition} \begin{theorem} For every $r > 1$, there exists an integer $M = 5r^2 \log r \parti{r}$ such that there exist distinct subsets $S$ and $T$ of $[M]$ such that (i) $|S| = |T| = M/2$ and (ii) $\momvec{S}{r} = \momvec{T}{r}$. \end{theorem} \begin{proof} We will use the Pigeonhole Principle. Let $A$ be a subset of $[M]$ of cardinality $M/2$. The number of possible choices for $A$ is $\binom{M}{M/2} \geq 2^{M/2}$. For every $\lambda \vdash r$, $\momvec{A}{r}_{\lambda}$ is a positive integer bounded by $M^r \cdot \binom{M/2}{r} \leq M^{2r}$, so the number of possibilities for $\momvec{A}{r}$ is at most $M^{2r \parti{r}}$. Assuming that $M = 5r^2 \log r \parti{r}$, we observe that % \begin{align*} 2^{M/2} &> 2^{2r^2 \log r \parti{r}} \\ &= r^{2r^2 \parti{r}} \\ &= (r^r)^{2r \parti{r}} \\ &> M^{2r \parti{r}}, \end{align*} using the fact that $\parti{r}$ is clearly at most $r^{r}$. Thus, by the Pigeonhole Principle, there exist two distinct subsets of integers $S$ and $T$ of $[M]$ such that $|S| = |T| = M/2$ and $\momvec{S}{r} = \momvec{T}{r}$. \end{proof} We observe that if $\momvec{S}{r} = \momvec{T}{r}$, then $\momvec{S+a}{r} = \momvec{T+a}{r}$ for every integer $a$, where we interpret $S+a$ as the set $\{i+a \mid i \in S\}$. Also, for every set $A$, $\momvec{S\setminus A}{r} = \momvec{T\setminus A}{r}$. Thus we can assume that $S$ and $T$ are disjoint, $|S| = |T| \leq M/2$, and $1 \in S$. Thus, we restate the above theorem: \begin{theorem} \label{thm:moment-match-disjoint-and-one} For every $r > 1$, there exists an integer $M = 5r^2 \log r \parti{r}$ such that there exist disjoint nonempty subsets $S$ and $T$ of $[M]$ such that (i) $|S| = |T| \leq M/2$, (ii) $\momvec{S}{r} = \momvec{T}{r}$, and (iii) $1 \in S$. \end{theorem} Let $S$ and $T$ be two sets as promised by Theorem~\ref{thm:moment-match-disjoint-and-one}, and define $k = |S| \leq M/2$. We will set $S = \{s_1,s_2,\ldots,s_k\}$ and $T = \{t_1,t_2,\ldots,t_k\}$, where $s_1 < s_2 < \cdots < s_k$ and $t_1 < t_2 < \cdots < t_k$. We will define three distributions $\mathbf{r}$, $\mathbf{c}$ and $\mathbf{f}$ over $[2k^2]$ that will be hard to distinguish with few samples, such that $\mathbf{c}$ is noticeably closer to $\mathbf{r}$ than $\mathbf{f}$. We define $\bucket{k}{\ell}$ to be the set $[k] + (\ell-1) k$; observe that $[2k^2] = \bigcup_{i=1}^{2k} \bucket{k}{i}$. Then our unnormalized distribution $\mathbf{r}$ is defined in the following way: \begin{itemize} \item For $1 \leq i \leq k$ and $j \in \bucket{k}{i}$, $\mathbf{r}(j) = s_i$. \item For $k+1 \leq i \leq 2k$ and $j \in \bucket{k}{i}$, $\mathbf{r}(j) = t_{i-k}$. \end{itemize} We move on to the definition of $\mathbf{c}$: \begin{itemize} \item For $j \in \bucket{k}{1}$, $\mathbf{c}(j) = s_j$. \item For $j \in \bucket{k}{k+1}$, $\mathbf{c}(j) = t_{j-k^2}$. \item For $2 \leq i \leq k$ and $j \in \bucket{k}{i} \setminus \{ik\}$, $\mathbf{c}(j) = s_i$. \item For $2 \leq i \leq k$, $\mathbf{c}(ik) = s_1$. \item For $k+2 \leq i \leq 2k$ and $j \in \bucket{k}{i} \setminus \{ik\}$, $\mathbf{c}(j) = t_{i-k}$. \item For $k+2 \leq i \leq 2k$, $\mathbf{c}(ik) = t_1$.
\end{itemize} The definition of $\mathbf{f}$ is similar, only differing from $\mathbf{c}$ on the first two bullet points: \begin{itemize} \item For $j \in \bucket{k}{1}$, $\mathbf{f}(j) = t_j$. \item For $j \in \bucket{k}{k+1}$, $\mathbf{f}(j) = s_{j-k^2}$. \item For $2 \leq i \leq k$ and $j \in \bucket{k}{i} \setminus \{ik\}$, $\mathbf{f}(j) = s_i$. \item For $2 \leq i \leq k$, $\mathbf{f}(ik) = s_1$. \item For $k+2 \leq i \leq 2k$ and $j \in \bucket{k}{i} \setminus \{ik\}$, $\mathbf{f}(j) = t_{i-k}$. \item For $k+2 \leq i \leq 2k$, $\mathbf{f}(ik) = t_1$. \end{itemize} We define the family of distributions $\mathcal{C}$ (resp.\ $\mathcal{F}$) to be all distributions of the form $\mathbf{c} \circ \pi$ (resp.\ $\mathbf{f} \circ \pi$), where $\pi : [2k^2] \rightarrow [2k^2]$ is a permutation such that $\pi(\bucket{k}{i}) = \bucket{k}{i}$ for all $1 \leq i \leq 2k$. For each permutation of this form, $\mathbf{r} \circ \pi = \mathbf{r}$, so $\dist{\mathbf{r}}{\mathbf{c}'} = \dist{\mathbf{r}}{\mathbf{c}}$ and $\dist{\mathbf{r}}{\mathbf{f}'} = \dist{\mathbf{r}}{\mathbf{f}}$ for all $\mathbf{c}' \in \mathcal{C}$ and $\mathbf{f}' \in \mathcal{F}$. We analyze these quantities next. We write $\unndist{p}{q}$ for the unnormalized distance between ``distributions'' $p$ and $q$. \begin{theorem} For the distributions $\mathbf{r}$, $\mathbf{c}$, and $\mathbf{f}$ defined earlier, \[ \unndist{\mathbf{r}}{\mathbf{c}} = 2\sum_{i=2}^{k} (s_i - s_1 + t_i - t_1) \qquad {\; \mathrm{and} \; } \qquad \unndist{\mathbf{r}}{\mathbf{f}} \geq \unndist{\mathbf{r}}{\mathbf{c}} + 2(t_1 - s_1). \] \end{theorem} \begin{proof} We begin by analyzing $\unndist{\mathbf{r}}{\mathbf{c}}$. The contribution to $\unndist{\mathbf{r}}{\mathbf{c}}$ from elements in $\bucket{k}{1}$ is $\sum_{i=1}^k (s_i - s_1)$, and the contribution from elements in $\bucket{k}{k+1}$ is $\sum_{i=1}^k (t_i - t_1)$. We see these contributions again by analyzing the elements $\{i k \mid 2 \leq i \leq k\}$ and $\{i k \mid (k+2) \leq i \leq 2k\}$, respectively. For every other element $j \in [2k^2]$, $\mathbf{r}(j) = \mathbf{c}(j) = \mathbf{f}(j)$. The only difference in the analysis of $\unndist{\mathbf{r}}{\mathbf{f}}$ is what happens on $\bucket{k}{1}$ and $\bucket{k}{k+1}$. Recall that $s_1 < t_1 < t_i$ for $2 \leq i \leq k$. We observe that the contribution to $\unndist{\mathbf{r}}{\mathbf{f}}$ from these buckets is \[ \sum_{i=1}^{k} | t_i - s_1 | + | s_i - t_1 | \geq 2|t_1 - s_1| + \sum_{i=2}^{k} (s_i - s_1 + t_i - t_1) = 2(t_1 - s_1) + \sum_{i=2}^{k} (s_i - s_1 + t_i - t_1), \] establishing the claim. \end{proof} Since the elements of $S$ and $T$ are all at most $M/2$, this theorem implies $\dist{\mathbf{r}}{\mathbf{f}} - \dist{\mathbf{r}}{\mathbf{c}} \geq \dfrac{2}{k M/2} = \dfrac{4}{k M}$. By standard collision bounds, this implies the following theorem. \begin{theorem} For every integer $r \geq 1$, there exist two families of distributions over $[2k^2]$ with $k = 3r^2\log r \parti{r}$ such that any $(\dst_1,\dst_2)$-tolerant tester for testing identity under permutation promise with $\dst_2 - \dst_1 \leq 1/k^2$ requires $r+1$ queries in either $\bucket{k}{1}$ or $\bucket{k}{k+1}$. \end{theorem} \begin{proof} We show that it is impossible to distinguish a randomly chosen distribution from $\mathcal{C}$ from a randomly chosen distribution from $\mathcal{F}$ unless the conditions in the theorem are satisfied.
Indeed, the distributions of masses assigned to elements outside of $\bucket{k}{1} \cup \bucket{k}{k+1}$ are the same in $\mathcal{C}$ and $\mathcal{F}$, so samples outside of this set provide no information for distinguishing. Recall that the (unnormalized) probability masses in $\bucket{k}{1}$ and $\bucket{k}{k+1}$ are exactly the elements of $S$ and $T$ in $\mathbf{c}$, and $T$ and $S$ in $\mathbf{f}$, respectively. XXXOkay this isn't done but it's moment business.XXX \end{proof} \ignore{ Ignoring normalization, we will assume that the probability masses assigned by $p$ are integers. Under this assumption, if the maximum integer assigned by $p$ is $M$, then for every $\lambda$, there are at most $M^r$ choices for $V^{p,r}_\lambda$, and thus at most $M^{r \parti{r}}$ values for $V^{p,r}$. By a stars-and-bars counting argument, there are $\binom{M + k - 1}{k}$ such distributions. Setting $k = M$, this number is at least $\binom{2M - 1}{M} \geq 2^M$. Thus, if $M > r^2 \log r \parti{r}$, we have \begin{eqnarray*} 2^M & \geq & 2^{r^2 \log r \parti{r}} \\ & = & r^{r^2 \parti{r}} \\ & = & (r^r)^{r \parti{r}} \\ & > & M^{r \parti{r}}, \end{eqnarray*} using the fact that $\parti{r} = 2^{O(\sqrt{r} \log^2 r)}$. Thus, by the Pigeonhole Principle, there exist two distributions $p$ and $q$ with (unnormalized) integer probability masses between $1$ and $M$ inclusive that agree on all $r$-way moments. } \fi \section{Introduction} \label{sec:intro} \input{sec-introduction} \section{Preliminaries} \label{sec:prelim} \input{sec-preliminaries} \section{Testing} In this section, we establish our matching upper and lower bounds for testing under the promise of permutation,~\cref{theo:toltesting:ub,theo:testing:lb}. \subsection{Upper bound} \label{sec:testing-upper-bound} \input{sec-testing-upper-bound} \subsection{Lower bound} \label{sec:testing-lower-bound} \input{sec-testing-lower-bound} \section{Tolerant testing} \label{sec:toleranttesting} \input{sec-toleranttesting} \printbibliography \ifnum1=0
{ "timestamp": "2021-05-06T02:09:13", "yymm": "2105", "arxiv_id": "2105.01856", "language": "en", "url": "https://arxiv.org/abs/2105.01856" }
\section{Introduction}\label{intro} Here, we propose and investigate an inexact Levenberg--Marquardt method with feasible inexact projections for solving nonsmooth equations constrained to a convex set, i.e., for solving the following problem: find $x\in \mathbb{R}^n$ such that \begin{equation}\label{eq:prob} \begin{aligned} x\in C,\qquad f(x)=0, \end{aligned} \end{equation} where $C$ is a nonempty closed convex set contained in an open set $\Omega \subset \mathbb{R}^n$ and $f: \Omega \rightarrow \mathbb{R}^m$ is a locally Lipschitz continuous function. Throughout this paper, we will assume that the solution set of \eqref{eq:prob}, denoted by $C^*$, is nonempty. Problem \eqref{eq:prob} has aroused the interest of many researchers, since different problems can be written in the form \eqref{eq:prob}; examples include the inequality feasibility problem and the implicit complementarity problem, see \cite{PangQi1993,Pang1982}. Recently, a method for solving \eqref{eq:prob} was proposed and analyzed in \cite{deOliveiraOFerreira2020}. See also \cite{FacchiKanzon1997,KanzowPetra2004,KanPetra2007}. It is worth mentioning that, if $f$ is a continuously differentiable function, then \eqref{eq:prob} reduces to a constrained smooth equation, which has been addressed in several studies, and several methods have been proposed for solving it. See, for example, the exact/inexact Newton-like methods in \cite{mariniquasi2018,morini2016,GoncalvesGoncalves2Oliveira2021,GoncalvesOliveira2017}, projected Levenberg--Marquardt-type methods in \cite{BehlingHaeserRamosSchonefeld2017,BehlingFischerHerrichIusemYe2014,BehlingRFischer2012}, and trust-region methods in \cite{BellaviMaria2004,Bellavia2012}. Levenberg--Marquardt-type methods have been valuable tools for solving unconstrained and constrained equations, see \cite{Kenneth1944,Donald1963}. For solving a system of nonsmooth equations, the exact version of this method is formulated as follows: given a current iterate $x_{k} \in \mathbb{R}^n$ and a parameter $\mu_{k}>0$, the next iterate is computed as $x_{k+1} = x_{k} + d_{k}$, where the vector $d_k$ is the solution of the linear system \begin{equation}\label{uncon1} (V_k^TV_k + \mu_kI_n)d = -V_k^Tf(x_k), \qquad V_k \in \partial f(x_k), \end{equation} where $I_n$ is the identity matrix in $\mathbb{R}^{n\times n}$ and $V_k := V_{x_k}$ is an element of the Clarke generalized Jacobian of $f$ at $x_k$. For the definition of the Clarke generalized Jacobian, see \cite{Clarke1990}. It is worth mentioning that solving system \eqref{uncon1} is equivalent to solving the problem \begin{equation}\label{uncon2} \min_{d \in \mathbb{R}^n} \|f(x_k) + V_kd\|^2 + \mu_k\|d\|^2, \qquad V_k \in \partial f(x_k), \end{equation} since \eqref{uncon1} is precisely the stationarity condition of \eqref{uncon2}. Inexact versions of Levenberg--Marquardt-type methods are better suited to large-scale problems. In this case, at each iteration the system \eqref{uncon1} is not solved exactly, but only to within a certain tolerance, i.e., the vector $d_{k}$ is the solution of the system \begin{equation}\label{unconstrained} (V_k^TV_k + \mu_kI_n)d = -V_k^Tf(x_k) + r_k, \end{equation} where $r_k$ is a residual vector that measures how inexactly the system is solved. In this study, we propose a new scheme for solving \eqref{eq:prob}, which we refer to as ILMM-IP. Basically, the proposed method combines the inexact Levenberg--Marquardt method with a procedure to obtain a feasible inexact projection onto $C$, thus ensuring the feasibility of the iterates.
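To fix ideas, the following Python/NumPy sketch (ours, for illustration only; it is not part of the formal method, which is stated in Section~\ref{sec:condGmet3}) implements the inexact iteration \eqref{unconstrained} followed by a projection onto $C$. The inexactness is produced by truncating the conjugate gradient method on the damped normal equations, which leaves precisely a residual $r_k$; the oracles \texttt{f}, \texttt{jac\_element}, and \texttt{project} are placeholders to be supplied for each concrete problem.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import cg

def inexact_lm_step(fx, V, eta=1.0, sigma=0.5, cg_maxiter=20):
    # One inexact Levenberg--Marquardt step for f(x) = 0.
    # fx: residual f(x_k); V: an element of the Clarke generalized
    # Jacobian at x_k. Truncated CG on the damped normal equations
    # leaves a residual playing the role of r_k above.
    g = V.T @ fx
    mu = eta * np.linalg.norm(g) ** sigma   # mu_k = eta ||V^T f||^sigma
    A = V.T @ V + mu * np.eye(V.shape[1])   # positive definite matrix
    d, _ = cg(A, -g, maxiter=cg_maxiter)    # approximate solve
    return d

def ilmm_ip(f, jac_element, project, x0, tol=1e-8, max_iter=100):
    # Projected iteration x_{k+1} = P_C(x_k + d_k); project(y) must
    # return a feasible point of C (an exact projection is the
    # simplest admissible choice).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) <= tol:
            break
        d = inexact_lm_step(fx, jac_element(x))
        x = project(x + d)
    return x
\end{verbatim}
The regularization choice $\mu_k = \eta\|V_k^Tf(x_k)\|^{\sigma}$ in the sketch anticipates the one adopted in the formal statement of the method in Section~\ref{sec:condGmet3}.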
The concept of feasible inexact projection was introduced in \cite{deOliveiraFerreiraSilva2018}; this concept also accommodates the exact projection, which can be adopted whenever it is easily obtained. For instance, exact projections onto a box constraint or the Lorentz cone can be easily obtained; see \cite[p. 520]{NocedalWright2000} and \cite[Proposition 3.3]{FukushimaTseng2002}, respectively. It is noteworthy that a feasible inexact projection can be computed, by introducing a suitable error criterion, using any method that efficiently minimizes a quadratic function over a closed convex set. For instance, if the set $C$ is polyhedral, then some iterations of an interior point method or of an active set method can be performed to obtain a feasible inexact projection, see \cite{NicholasPhilippe2002,NocedalWright2000,Robert1996}. When $C$ is a simple convex compact set, a similar scheme is adopted in \cite{GoncalvesOliveira2017,MaxJefferson2017,lan2016}, which uses the conditional gradient method to obtain a feasible inexact projection. From the theoretical viewpoint, the local convergence of the proposed method, as well as results on its rate, is established under a semi-smoothness assumption and an error bound condition. Specifically, let $\{x_k\}$ be the sequence generated by the method and $dist_{C^*}(x)$ the distance from $x$ to the solution set $C^*$. We show that the sequence $\{dist_{C^*}(x_k)\}$ converges to zero at a faster-than-linear rate. Moreover, we also deduce the convergence rate of the sequence $\{x_k\}$. To assess the practical behavior of the new method, some computational results are reported. In particular, we compare the performance of the proposed method with that of the inexact Levenberg--Marquardt method using feasible exact projections. The outline of this paper is as follows. In Section~\ref{sec:NotDef}, we present the notations and some technical definitions used herein. In Section~\ref{sec:condGmet3}, we describe the ILMM-IP and present its local convergence analysis. Some preliminary numerical experiments for the proposed scheme are reported in Section~\ref{sec:CompResu}. Finally, some concluding remarks are given in Section~\ref{sec:Conclusions}. \section{Notations and definitions}\label{sec:NotDef} The inner product and its associated Euclidean norm in $\mathbb{R}^n$ are denoted by $\langle\cdot,\cdot\rangle$ and $\| \cdot \|$, respectively. The closed ball centered at $x$ with radius $\delta$ is denoted by $B_{\delta}(x) := \{ y \in \mathbb{R}^n \ : \ \| y - x\| \leq \delta\}$. We define the distance from $x$ to the solution set $C^*$ by \begin{equation}\label{dist} dist_{C^*}(x):= \inf_{y \in C^*} \| y - x \|. \end{equation} We also denote by $\bar{x}$ a point in $C^*$ that attains this distance, i.e., \begin{equation}\label{dist1} \|x - \bar{x} \| := dist_{C^*}(x). \end{equation} \begin{definition}\label{def:NormOper} The norm of a linear mapping $T: \mathbb{R}^n \to \mathbb{R}^m$ is defined by $$ \|T\|:= \sup \{\|Tx\|:~ x \in \mathbb{R}^n, \|x\| = 1\}. $$ \end{definition} In the following, we define the concept of a locally Lipschitz continuous function, which is crucial in our study. \begin{definition}\label{def:FunLip} A function $f:\Omega \subset \mathbb{R}^n \rightarrow \mathbb{R}^m$ is said to be locally Lipschitz continuous if for each $x\in\Omega$ there exist constants $L, \delta >0$ such that $$ \|f(y) - f(z)\|\leq L \|y - z\| \quad \forall ~y, z \in B_\delta(x).
$$ \end{definition} \begin{remark} According to the Rademacher theorem (see \cite[Theorem~2, p.~81]{Evans1992}), locally Lipschitz continuous functions are differentiable almost everywhere. \end{remark} Next, we define the Clarke generalized Jacobian of a function, which first appeared in \cite{Clarke1990}. This Jacobian requires only the local Lipschitz continuity of the function $f$, and its well-definedness is ensured by the Rademacher theorem. \begin{definition}\label{def:JacClarke} The Clarke generalized Jacobian of a locally Lipschitz continuous function $f$ at $x$ is the set-valued mapping $\partial f: \mathbb{R}^n \rightrightarrows \mathbb{R}^{m\times n}$ defined as $$ \partial f(x) := \mbox{co}\left\{H \in \mathbb{R}^{m\times n}:~ \exists \, \{x_k\} \subset \mathcal{D}_f, \lim_{k \to +\infty} x_k = x,\, H = \lim_{k \to + \infty}f'(x_k)\right\}, $$ where ``\mbox{co}'' represents the convex hull, $\mathbb{R}^{m\times n}$ the set comprising all $m\times n$ matrices, and $\mathcal{D}_f$ the set of points at which $f$ is differentiable. \end{definition} \begin{remark} It is noteworthy that if $f$ is continuously differentiable at $x$, then $\partial f(x) = \{f'(x)\}$. Otherwise, $\partial f(x)$ may contain elements other than $f'(x)$, even if $f$ is differentiable at $x$; see \cite[Example 2.2.3]{Clarke1990}. Furthermore, the Clarke generalized Jacobian is a nonempty, convex, and compact subset of $\mathbb{R}^{m\times n}$. In addition, the set-valued mapping $\partial f$ is closed and upper semi-continuous; see \cite[Proposition 2.6.2, p.~70]{Clarke1990}. \end{remark} We finish this section with another important result about the set-valued mapping $\partial f$. \begin{proposition}\label{Prop:boundV} The set-valued mapping $\partial f(\cdot): \mathbb{R}^n \rightrightarrows \mathbb{R}^{m\times n}$ is locally bounded; that is, for each $x$ there exist $\delta, L > 0$ such that $\|V\| \leq L$ for all $y \in B_{\delta}(x)$ and $V \in \partial f(y)$. \end{proposition} \begin{proof} For simplicity, we define the auxiliary set $\partial_B f(x)$ as follows: $$ \partial_B f(x) := \left\{H \in \mathbb{R}^{m\times n}:~ \exists \, \{x_k\} \subset \mathcal{D}_f, \lim_{k \to +\infty} x_k = x,\, H = \lim_{k \to + \infty}f'(x_k)\right\}. $$ Since $V \in \partial f(y)$, there exist $H_1, \ldots, H_q \in \partial_B f(y)$ and $a_1, \ldots, a_q\in [0, 1]$ such that $V = \sum_{\ell=1}^{q}a_\ell H_\ell$ and $\sum_{\ell=1}^{q} a_\ell = 1$. On the other hand, because $H_1, \ldots, H_q \in \partial_B f(y)$, there exist sequences $\{y_{k, \ell}\} \subset B_\delta(y)\cap {\cal D}_f$ with $\lim_{k \rightarrow +\infty} y_{k, \ell} = y$ such that $H_\ell = \lim_{k \rightarrow +\infty} f'(y_{k, \ell})$, and hence $V = \sum_{\ell=1}^{q}a_\ell\lim_{k\rightarrow +\infty}f'(y_{k, \ell})$. Since $\{y_{k, \ell}\} \subset {\cal D}_f$, i.e., $f$ is differentiable at $y_{k, \ell}$ for each $\ell = 1, \ldots, q$, we have $f'(y_{k,\ell}, v) = f'(y_{k,\ell})v$, where $f'(y_{k,\ell}, v)$ is the directional derivative of $f$ at the point $y_{k,\ell}$ in the direction $v$. Using that $f'(y_{k,\ell}, v) = f'(y_{k,\ell})v$ and that $f$ is locally Lipschitz continuous, we obtain that $$ \|f'(y_{k,\ell})v\| = \left\|\lim_{t \to 0} \dfrac{f(y_{k,\ell} + tv) - f(y_{k,\ell})}{t}\right\| \leq L \|v\|. $$ Hence, from Definition~\ref{def:NormOper}, we have $\|f'(y_{k,\ell})\| \leq L$.
Now, using properties of the norm and the fact that $\sum_{\ell=1}^{q} a_\ell = 1$, we conclude $$ \left\|V\right\| = \left\| \sum_{\ell=1}^{q} a_\ell \lim_{k\rightarrow +\infty}f'(y_{k, \ell}) \right\| \leq \sum_{\ell=1}^{q} a_\ell \lim_{k \rightarrow +\infty} \left\|f'(y_{k, \ell}) \right\| \leq L, $$ which is the desired inequality. \end{proof} \section{Inexact Levenberg-Marquardt method with feasible inexact projections}\label{sec:condGmet3} In this section, we propose and analyze a local inexact Levenberg-Marquardt method with feasible inexact projections (ILMM-IP) to solve problem \eqref{eq:prob}. \begin{definition}\label{def:IP} Let $x\in \mathbb{R}^n$ and $\epsilon \geq 0$ be given. We say that $ P_{C}(x,\epsilon)$ is an $\epsilon$--projection of $x$ onto $C$ when \begin{equation*}\label{eq:iproj} P_{C}(x,\epsilon) \in C \quad \text{and} \quad \langle x - P_{C}(x,\epsilon), y - P_{C}(x,\epsilon) \rangle \leq \epsilon, \quad \forall y \in C. \end{equation*} \end{definition} Next, we record some important facts about the $\epsilon$--projection. \begin{remark}\label{rem:projin} If $\epsilon = 0$, then $P_C(x, 0)$ corresponds to the orthogonal projection of $x$ onto $C$, which will be denoted simply by $P_C(x)$. On the other hand, $P_C(x,\epsilon)$ is an $\epsilon$--projection of $x$ onto $C$ in the sense of Definition~\ref{def:IP} for any $\epsilon > 0$. When the orthogonal projection onto $C$ neither has a closed form nor can be easily computed, an $\epsilon$--projection of $x$ onto $C$ can be obtained by means of an iterative method applied to the projection problem $\min_{y \in C}\frac{1}{2}\|y - x\|^2$. For example, if $C$ is bounded, one can use the Frank--Wolfe method \cite{fw1956} to obtain an inexact projection in the sense of Definition~\ref{def:IP}. In particular, given $z_t \in C$, the $t$-th step of the Frank--Wolfe method first finds $w_t$ as a minimizer of the linear function $w \mapsto \langle z_t - x, w - z_t\rangle$ over $C$ and then sets $z_{t+1} = (1 - \alpha_t)z_t + \alpha_t w_t$ for some $\alpha_t \in [0,1]$; the iterations can be stopped as soon as $\langle x - z_t, w_t - z_t\rangle \leq \epsilon$, since this certifies that $z_t$ is an $\epsilon$--projection of $x$ onto $C$. \end{remark} In what follows, we recall a useful property of the operator $P_{C}(\cdot,\cdot)$, whose proof can be found in \cite{GoncalvesOliveira2021}. \begin{proposition}\label{prop:errproj} For any $x,y \in \mathbb{R}^n$ and $\epsilon\geq 0$, we have \[ \|P_C(x,\epsilon) - P_C(y)\| \leq \|x-y\| + \sqrt{\epsilon}. \] \end{proposition} We are now ready to formally describe the inexact Levenberg-Marquardt method with feasible inexact projections.\\ \hrule \begin{algorithm} \label{Alg:NNM} {\vspace{0.2cm}\bf ILMM-IP \vspace{0.3cm}} \hrule \vspace{0.1cm} \begin{description} \vspace{.5 cm} \item[\bf Step 0.] Let $\eta \geq 1$, $\theta > 0$, and $\{\theta_k\}\subset[0,\theta)$ be given. Choose $x_0\in C$ and a parameter $\sigma \in (0,1)$. Set $k=0$. \item[\bf Step 1.] If $f(x_k)=0$, then {\bf stop}. \item[\bf Step 2.] Select an element $V_k \in \partial f(x_k)$ and set $\mu_k = \eta \|V_k^Tf(x_k)\|^\sigma$. Take a residual control parameter $\zeta_k > 0$ and compute an approximate solution $d_k\in \mathbb{R}^n$ of the system \begin{equation*}\label{A1:s1} (V_k^T V_k + \mu_k I_n)d = -V_k^T f(x_k) + r_k, \end{equation*} such that \begin{equation}\label{eq:10} \|r_k\| \leq \zeta_k. \end{equation} \item[\bf Step 3.] Define $\epsilon_k := \theta_k^2 \| d_k \|^2$. Compute $P_{C}(x_k + d_k, \epsilon_k)$, an $\epsilon_k$--projection of $x_k + d_k$ onto $C$, and set \begin{equation}\label{eq:lm3} x_{k+1} := P_{C}(x_k + d_k, \epsilon_k).
\end{equation} \item[\bf Step 4.] Set $k\gets k+1$, and go to \textbf{Step~1}. \vspace{.5 cm} \end{description} \hrule \end{algorithm} \vspace{0.3cm} \noindent \begin{remark} In ILMM-IP, we first verify whether the current iterate $x_k$ is a solution of \eqref{eq:prob}; otherwise, we select $V_k \in \partial f(x_k)$, $\mu_k > 0$, and $\zeta_k > 0$ so that the criterion~\eqref{eq:10} is satisfied. In our algorithm, we take $\mu_{k} = \eta \|V_k^Tf(x_k)\|^\sigma$ for every $k\geq 0$, with $\sigma \in (0,1)$, which makes the matrix $V_k^TV_k + \mu_k I_n$ positive definite; hence the linear system in \textbf{Step~2} possesses a unique exact solution. Different choices of the regularization parameter have been discussed; see, for example, \cite{Fan2013,Zhang2003}. Since the point $x_k + d_k$ can be infeasible for the set of constraints $C$, ILMM-IP uses a procedure to obtain a feasible inexact projection; consequently, the new iterate $x_{k+1}$ belongs to $C$. In particular, $x_{k+1}$ satisfies, for every $k\geq0$, the inequality $$ \langle x_k + d_k - x_{k+1}, y - x_{k+1}\rangle \leq \epsilon_k, \qquad \forall~y \in C. $$ See Remark~\ref{rem:projin} for some comments regarding our concept of feasible inexact projection and methods to compute it. Finally, the choice of the tolerances $\theta_k$ is important for obtaining the local convergence of the ILMM-IP. \end{remark} In order to analyze the local convergence of the ILMM-IP, the following assumptions are made throughout this section. \begin{itemize} \item [\textbf{(H0)}] $C^* \neq \emptyset$; let $x_* \in C^*$ be an arbitrary element of the solution set.\\ \item [\textbf{(H1)}] There exist constants $\delta_1, \tau > 0$ and $0 < p \leq 1$ such that $$ \|f(y) - f(x) - V(y - x)\| \leq \tau \|x - y\|^{1 + p},\quad \forall~ V \in \partial f(x), \quad \forall ~ x, y \in B_{\delta_1}(x_*). $$ \item [\textbf{(H2)}] There exist $\omega, \delta_2 > 0$ such that $\|f(x)\|$ provides a local error bound on $B_{\delta_2}(x_*)$, i.e., $$ \omega\, dist_{C^*}(x) \leq \|f(x)\|, \qquad \forall~ x\in B_{\delta_2}(x_*). $$ \end{itemize} \begin{lemma}\label{Lemma1} Suppose that assumptions (H0) to (H2) hold. Let $r_k$ be the residual vector in \eqref{unconstrained} and consider $\delta := \min\{\delta_1, \delta_2, 2(\omega^2/L\tau)^{1/p}\}$, where $L$ is a Lipschitz constant of $f$ on $B_{\delta_1}(x_*)$. If $x_k \in B_{\delta/2}(x_*)$, then there exist constants $c_1, c_2 > 0$ such that \begin{equation}\label{eq:8} \|d_k\| \leq c_1 \,dist_{C^*}(x_k) + \dfrac{\|r_k\|}{\mu_k}, \end{equation} \begin{equation}\label{equ:8} \|V_kd_k + f(x_k)\| \leq c_2 \,dist_{C^*}(x_k)^{1 + \frac{\sigma}{2}} + \|V_k\|\dfrac{\|r_k\|}{\mu_k}. \end{equation} \end{lemma} \begin{proof} Since $x_k \in B_{\delta/2}(x_*)$, from the triangle inequality, \eqref{dist} and \eqref{dist1} we obtain that $$ \|\bar{x}_k - x_*\| \leq \|\bar{x}_k - x_k\| + \|x_* - x_k\| \leq \|x_* - x_k\| + \|x_* - x_k\| \leq \delta, $$ i.e., $\bar{x}_k \in B_{\delta}(x_*)$ and, consequently, $\bar{x}_k \in B_{\delta_1}(x_*)$. On the other hand, because $f(\bar{x}_k) = 0$, we have \begin{align*} f(x_k)^TV_k(x_k - \bar{x}_k) &= f(x_k)^T[f(\bar{x}_k) + V_k(x_k - \bar{x}_k)] \\ &= f(x_k)^Tf(x_k) - f(x_k)^T[f(x_k) - f(\bar{x}_k) - V_k(x_k - \bar{x}_k)], \end{align*} where $V_k \in \partial f(x_k)$. Taking the norm on both sides of the last equality and using the Cauchy--Schwarz inequality, we have \begin{equation}\label{equ:4} \|f(x_k)^TV_k\|\|x_k - \bar{x}_k\| \geq \|f(x_k)\|^2 - \|f(x_k)\|\|f(x_k) - f(\bar{x}_k) - V_k(x_k - \bar{x}_k)\|.
\end{equation} Since $x_k \in B_{\delta/2}(x_*)$ and $f(\bar{x}_k) = 0$, we can use assumption (H2), Definition~\ref{def:FunLip} and \eqref{dist1} to conclude that \begin{equation}\label{equ:5} \omega \|x_k - \bar{x}_k\| \leq \|f(x_k)\| \leq L\|x_k - \bar{x}_k\| \leq L\|x_k - x_*\|. \end{equation} Using assumption (H1), \eqref{equ:5}, and the fact that $x_k \in B_{\delta/2}(x_*)$, we see that \eqref{equ:4} reduces to \begin{align*} \|f(x_k)^TV_k\|\|x_k - \bar{x}_k\| &\geq \left[\omega^2 - L\tau \|x_k - x_*\|^p\right]\|x_k - \bar{x}_k\|^2 \geq \left[\omega^2 - L\tau \left(\dfrac{\delta}{2}\right)^p\right]\|x_k - \bar{x}_k\|^2. \end{align*} Because $\delta \leq 2(\omega^2/L\tau)^{1/p}$, the last inequality reduces to $\|f(x_k)^TV_k\| \geq \hat{c}\|x_k - \bar{x}_k\|$, where $\hat{c} := \omega^2 - L\tau (\delta/2)^p$. Now, note that \begin{equation}\label{equ:6} \mu_k = \eta \|V_k^Tf(x_k)\|^\sigma \geq \eta \hat{c}^\sigma \|x_k - \bar{x}_k\|^\sigma. \end{equation} On the other hand, using the properties of the norm, Proposition~\ref{Prop:boundV}, the fact that $f(\bar{x}_k) = 0$, and Definition~\ref{def:FunLip}, we have \begin{equation}\label{equ1} \mu_k = \eta \|V_k^Tf(x_k)\|^\sigma \leq \eta L^\sigma \|f(x_k) - f(\bar{x}_k)\|^\sigma \leq \eta L^{2\sigma}\|x_k - \bar{x}_k\|^\sigma. \end{equation} Let $\bar{d}_k$ be the solution of problem~\eqref{uncon2}. Since $f(\bar{x}_k) = 0$, by assumption (H1) and \eqref{equ:6} we obtain that \begin{align*} \|\bar{d}_k\|^2 &\leq \dfrac{1}{\mu_k} \left[\|V_k \bar{d}_k + f(x_k)\|^2 + \mu_k\|\bar{d}_k\|^2\right] \leq \dfrac{1}{\mu_k}\left[\|V_k(\bar{x}_k - x_k) + f(x_k)\|^2 + \mu_k\|\bar{x}_k - x_k\|^2\right] \\ & \leq \dfrac{1}{\mu_k}\left[\tau^2 \|\bar{x}_k - x_k\|^{2 + 2p} + \mu_k\|\bar{x}_k - x_k\|^2\right] \leq \left(\dfrac{\tau^2}{\eta \hat{c}^\sigma} \|\bar{x}_k - x_k\|^{2p - \sigma} + 1 \right)\|\bar{x}_k - x_k\|^2. \end{align*} Since $x_k \in B_{\delta/2}(x_*)$ and $\|\bar{x}_k - x_k\| \leq \|x_* - x_k\|$, we have \begin{equation}\label{equ:7} \|\bar{d}_k\| \leq \sqrt{\dfrac{\tau^2}{\eta \hat{c}^\sigma} \left(\dfrac{\delta}{2}\right)^{2p - \sigma} + 1 }\;\|\bar{x}_k - x_k\|. \end{equation} Moreover, by \eqref{unconstrained}, it follows that \begin{align*} d_k & = -(V_k^TV_k + \mu_k I_n)^{-1}V_k^T f(x_k) + (V_k^TV_k + \mu_kI_n)^{-1}r_k = \bar{d}_k + (V_k^TV_k + \mu_kI_n)^{-1}r_k. \end{align*} Taking the norm on both sides of the last equality, and using the triangle inequality, \eqref{equ:7}, \eqref{dist1}, and the bound $\|(V_k^TV_k + \mu_kI_n)^{-1}\| \leq 1/\mu_k$, we conclude that \begin{align*} \|d_k\| &\leq \|\bar{d}_k\| + \|(V_k^TV_k + \mu_kI_n)^{-1}\|\|r_k\|\leq \sqrt{\dfrac{\tau^2}{\eta \hat{c}^\sigma} \left(\dfrac{\delta}{2}\right)^{2p - \sigma} + 1 }\;dist_{C^*}(x_k) + \dfrac{\|r_k\|}{\mu_k}. \end{align*} This implies that \eqref{eq:8} holds with $c_1 = \sqrt{(\tau^2/(\eta \hat{c}^\sigma))(\delta/2)^{2p - \sigma} + 1}$. We proceed to prove the inequality in \eqref{equ:8}. Considering its left-hand side, we have \begin{align}\label{equ4} \|V_k d_k + f(x_k)\| &= \|V_k \left[\bar{d}_k + (V_k^TV_k + \mu_k I_n)^{-1}r_k \right] + f(x_k)\|\nonumber\\ & \leq \|V_k \bar{d}_k + f(x_k)\| + \|V_k\|\|(V_k^TV_k + \mu_k I_n)^{-1}\|\|r_k\|\nonumber\\ & \leq \|V_k \bar{d}_k + f(x_k)\| + \|V_k\|\dfrac{\|r_k\|}{\mu_k}.
\end{align} Since $\bar{d}_k$ is the solution of problem~\eqref{uncon2} and $f(\bar{x}_k) = 0$, using assumption (H1) and \eqref{equ1}, we obtain that \begin{align}\label{equ:9} \|V_k\bar{d}_k + f(x_k)\|^2 & \leq \|V_k\bar{d}_k + f(x_k)\|^2 + \mu_k \|\bar{d}_k\|^2\nonumber\\ &\leq \|V_k(\bar{x}_k - x_k) + f(x_k)\|^2 + \mu_k \|\bar{x}_k - x_k\|^2 \nonumber\\ & \leq \tau^2 \|\bar{x}_k - x_k\|^{2 + 2p} + \eta L^{2\sigma}\|\bar{x}_k - x_k\|^{2 + \sigma}\nonumber\\ &\leq \left[\tau^2\left(\dfrac{\delta}{2}\right)^{2p-\sigma} +\eta L^{2\sigma}\right]\|\bar{x}_k - x_k\|^{2+\sigma}. \end{align} Therefore, extracting the square root on both sides of \eqref{equ:9} and combining the result with \eqref{equ4}, we obtain the desired inequality with $c_2 = \sqrt{\tau^2(\delta/2)^{2p-\sigma} +\eta L^{2\sigma}}$. \end{proof} In the following, we give an assumption on the residual vector $r_k$. \begin{itemize} \item [\textbf{(H3)}] Let $\sigma \in (0,1)$ and $\{\nu_k\} \subset \mathbb{R}_+$. The residual vectors $r_k$ satisfy $$ \dfrac{\|r_k\|}{\mu_k}\leq \nu_k \, dist_{C^*}(x_k), \qquad k = 0, 1, \ldots. $$ \end{itemize} In this paper, we choose $\nu_k \leq dist_{C^*}(x_k)^{\frac{\sigma}{2}}$ for all $k = 0, 1, \ldots$. With this choice, there exists $\delta_3 >0$ such that \begin{equation}\label{eq:6} dist_{C^*}(x_k) \leq \delta_3 \quad \Rightarrow \quad \dfrac{\|r_k\|}{\mu_k} \leq dist_{C^*}(x_k)^{1+ \frac{\sigma}{2}} \leq dist_{C^*}(x_k). \end{equation} \begin{lemma}\label{Lemm2} Suppose that assumptions (H0) to (H3) hold and let $\{x_k\}$ be a sequence generated by Algorithm~\ref{Alg:NNM}. Let $\delta \leq \min\{\delta_1, \delta_2, \delta_3, 2(\omega^2/L\tau)^{1/p}\}$, where $\delta_1$ and $\delta_2$ are as in assumptions (H1) and (H2), and $\delta_3$ is as in \eqref{eq:6}. Then there exists $c_3 > 0$ such that, for every $k$ with $x_k, x_k + d_k \in B_{\delta/2}(x_*)$, $$ dist_{C^*}(x_{k+1}) < c_3 \, dist_{C^*}(x_k)^{1+\frac{\sigma}{2}}. $$ \end{lemma} \begin{proof} It follows from \eqref{dist} and \eqref{eq:lm3} that \begin{equation}\label{eq:12} dist_{C^*}(x_{k+1}) = dist_{C^*}\big(P_C(x_k + d_k, \epsilon_k)\big) = \inf_{x\in C^*}\|P_C(x_k + d_k, \epsilon_k) - x\|. \end{equation} Since $P_C(x) = x$ for each $x\in C$, we obtain from Proposition~\ref{prop:errproj} and \eqref{dist} that \begin{align}\label{eq:2} \inf_{x \in C^*}\|P_C(x_k + d_k, \epsilon_k) - P_C(x)\| &\leq \inf_{x \in C^*}(\|x_k + d_k - x\| + \sqrt{\epsilon_k})\nonumber\\ & = \sqrt{\epsilon_k} + \inf_{x \in C^*}\|x_k + d_k - x\|\nonumber\\ & = \sqrt{\epsilon_k} + dist_{C^*}(x_k + d_k). \end{align} Combining \eqref{eq:12} and \eqref{eq:2}, and using assumption (H2) and the fact that $\epsilon_k = \theta_k^2\|d_k\|^2$, we have \begin{equation}\label{eq:3} dist_{C^*}(x_{k+1}) \leq \sqrt{\epsilon_k} + dist_{C^*}(x_k + d_k) \leq \theta_k\|d_k\| + \dfrac{1}{\omega}\|f(x_k + d_k)\|. \end{equation} On the other hand, by assumption (H1), we obtain that \begin{equation}\label{eq:4} \|f(x_k + d_k)\| - \|f(x_k) + V_kd_k\|\leq \|f(x_k + d_k) - f(x_k) - V_kd_k\| \leq \tau \|d_k\|^{1 + p}, \end{equation} where $V_k \in \partial f(x_k)$. Combining \eqref{eq:3} and \eqref{eq:4}, we have $$ dist_{C^*}(x_{k+1}) \leq \theta_k\|d_k\| + \dfrac{1}{\omega}(\tau \|d_k\|^{1 + p} + \|f(x_k) + V_kd_k\|).
$$ Now, we can use Proposition~\ref{Prop:boundV}, Lemma~\ref{Lemma1} and \eqref{eq:6} to conclude that \begin{multline*} dist_{C^*}(x_{k+1}) \leq \left[\theta_k (c_1 + 1)dist_{C^*}(x_k)^{- \frac{\sigma}{2}} + \dfrac{\tau (c_1 + 1)^{1 + p}}{\omega}dist_{C^*}(x_k)^{p - \frac{\sigma}{2}}\right.\\ \left.+\dfrac{c_2 + L}{\omega}\right]dist_{C^*}(x_k)^{1+\frac{\sigma}{2}}. \end{multline*} Finally, since $x_k \in B_{\delta/2}(x_*)$ and $\theta_k < \theta$, the last inequality reduces to $$ dist_{C^*}(x_{k+1}) < \left\{\left[\theta(c_1 + 1)\left(\dfrac{2}{\delta}\right)^p + \dfrac{\tau (c_1+1)^{1+p}}{\omega}\right]{\left(\dfrac{\delta}{2}\right)}^{p - \frac{\sigma}{2}}+ \dfrac{c_2 + L}{\omega} \right\} dist_{C^*}(x_k)^{1+\frac{\sigma}{2}}, $$ for all $k = 0, 1, \ldots$. Therefore, the proof of the lemma is complete with constant $c_3 = [\theta(c_1 + 1)(2/\delta)^p + \tau (c_1+1)^{1+p}/\omega]{(\delta/2)}^{p - \frac{\sigma}{2}} + (c_2 + L)/\omega $. \end{proof} In the next result, we show that $x_k, x_k + d_k \in B_{\delta/2}(x_*)$ if the starting point $x_0$ of the proposed algorithm is chosen sufficiently close to the solution set $C^*$. \begin{lemma}\label{Lemma3} Let $\hat{\delta} := \min\left\{\tfrac{1}{2}c_3^{-2/\sigma},\; \delta/\big(2(c_1 + 2)[1 + (1+\theta)(c_1 + 1)\varsigma]\big)\right\}$, where $\varsigma \geq \sum_{\ell = 0}^{\infty} (1/2)^{(1 + \sigma/2)^{\ell} - 1}$. If $x_0 \in B_{\hat{\delta}}(x_*)\cap C$, then $x_k, x_k + d_k \in B_{\delta/2}(x_*)$ for every $k \geq 0$. \end{lemma} \begin{proof} We will proceed by induction on $k$. We start with $k = 0$. By assumption, we have $x_0 \in B_{\hat{\delta}}(x_*)$. Since $\hat{\delta} \leq \delta/2$, we conclude that $x_0 \in B_{\delta/2}(x_*)$. Moreover, using the triangle inequality, \eqref{eq:8}, \eqref{eq:6} and \eqref{dist}, we obtain $$ \|x_0 + d_0 - x_*\| \leq \|x_0 - x_*\| + \|d_0\| \leq \hat{\delta} + (c_1 + 1)\,dist_{C^*}(x_0) \leq \hat{\delta} + (c_1 + 1)\|x_0 - x_*\| \leq (c_1 + 2)\hat{\delta}. $$ Since, in particular, $\hat{\delta} \leq \delta/(2(c_1 + 2))$, we conclude that $x_0 + d_0 \in B_{\delta/2}(x_*)$. Let $k \geq 0$ be arbitrarily given and assume that $x_{\ell}, x_{\ell} + d_{\ell} \in B_{\delta/2}(x_*)$ for all $\ell = 0, 1, \ldots, k$. Now, we proceed to prove that $x_{k+1}, x_{k+1} + d_{k+1} \in B_{\delta/2}(x_*)$. By the induction assumption, $x_{\ell}, x_{\ell} + d_{\ell} \in B_{\delta/2}(x_*)$ for all $\ell = 0, 1, \ldots, k$; thus, from Lemma~\ref{Lemm2} and \eqref{dist}, we have \begin{align}\label{equ:1} dist_{C^*}(x_{\ell}) & < c_3\cdot c_3^{1+\frac{\sigma}{2}} \cdot c_3^{(1+\frac{\sigma}{2})^2}\cdot\ldots\cdot c_3^{(1+\frac{\sigma}{2})^{\ell - 1}} \,dist_{C^*}(x_0)^{(1 + \frac{\sigma}{2})^{\ell}}\nonumber \\ & \leq c_3^{\frac{2}{\sigma}[(1+\frac{\sigma}{2})^{\ell} - 1]}\|x_0 - x_*\|^{(1 + \frac{\sigma}{2})^{\ell}} \leq c_3^{\frac{2}{\sigma}[(1+\frac{\sigma}{2})^{\ell} - 1]}\hat{\delta}^{(1 + \frac{\sigma}{2})^{\ell}}. \end{align} On the other hand, using \eqref{eq:lm3} and Proposition~\ref{prop:errproj}, we find that $$ \|x_{k + 1} - x_*\| = \|P_{C}(x_k + d_k, \epsilon_k) - P_C(x_*)\| \leq \|x_k + d_k - x_*\| + \sqrt{\epsilon_k}. $$ By the triangle inequality and the facts that $\epsilon_k = \theta_k^2\|d_k\|^2$ and $\theta_k < \theta$, we obtain that $$ \|x_{k+1} - x_*\| < \|x_k - x_*\| + (1 + \theta)\|d_k\|.
$$ Since $x_{\ell} \in B_{\delta/2}(x_*)$ for all $\ell = 0,1, \ldots, k$ and $x_0 \in B_{\hat{\delta}}(x_*)$, we can use the last inequality recursively, together with \eqref{eq:8} and \eqref{eq:6}, to conclude that \begin{align}\label{equ:2} \|x_{k+1} - x_*\| & < \|x_0 - x_*\| + (1 + \theta)\sum_{\ell = 0}^{k}\|d_{\ell}\| \nonumber \\ &\leq \hat{\delta} + (1 + \theta)(c_1 + 1)\sum_{\ell = 0}^k dist_{C^*}(x_{\ell})\nonumber \\ & \leq \hat{\delta} + (1 + \theta)(c_1 + 1)\sum_{\ell = 0}^{\infty} dist_{C^*}(x_{\ell}). \end{align} Now, combining \eqref{equ:1} and \eqref{equ:2}, we obtain that \begin{align}\label{equ3} \|x_{k+1} - x_*\| & < \hat{\delta} + (1 + \theta)(c_1 + 1)\sum_{\ell = 0}^{\infty} c_3^{\frac{2}{\sigma}[(1+\frac{\sigma}{2})^{\ell} - 1]}\hat{\delta}^{(1 + \frac{\sigma}{2})^{\ell}} \nonumber \\ &\leq \hat{\delta} + (1 + \theta)(c_1 + 1)\hat{\delta}\sum_{\ell = 0}^{\infty} \left(\dfrac{1}{2}\right)^{(1 + \frac{\sigma}{2})^{\ell} - 1}\nonumber \\ & \leq [1 + (1 + \theta)(c_1 + 1)\varsigma]\hat{\delta}, \end{align} where the second inequality follows from the definition of $\hat{\delta}$. Therefore, $x_{k+1} \in B_{\delta/2}(x_*)$. It remains to prove that $x_{k+1} + d_{k+1} \in B_{\delta/2}(x_*)$. Since $x_{k+1} \in B_{\delta/2}(x_*)$, it follows from the triangle inequality, \eqref{eq:8} and \eqref{eq:6} that $$ \|x_{k+1} + d_{k+1} - x_*\|\leq \|x_{k+1} - x_*\| + \|d_{k+1}\| \leq (c_1 + 2)\|x_{k+1} - x_*\|, $$ which, combined with \eqref{equ3} and the definition of $\hat{\delta}$, yields $$ \|x_{k+1} + d_{k+1} - x_*\| < (c_1 + 2)[1 + (1 + \theta)(c_1 + 1)\varsigma]\hat{\delta} \leq \dfrac{\delta}{2}. $$ This implies $x_{k+1} + d_{k+1} \in B_{\delta/2}(x_*)$, and the proof is complete. \end{proof} We are now ready to prove the convergence of the sequences $\{dist_{C^*}(x_k)\}$ and $\{x_k\}$. \begin{theorem} Suppose that assumptions (H0) to (H3) hold and let $\{x_k\}$ be a sequence generated by Algorithm~\ref{Alg:NNM} with $x_0 \in B_{\hat{\delta}}(x_*)\cap C$. Let $\delta$ and $\hat{\delta}$ be the constants given in Lemmas~\ref{Lemma1} and \ref{Lemma3}, respectively. Then the sequence $\{dist_{C^*}(x_k)\}$ converges to zero with order $1 + \sigma/2$, and the sequence $\{x_k\}$ converges to a point belonging to $C^*$. \end{theorem} \begin{proof} The first part follows immediately from Lemmas~\ref{Lemm2} and \ref{Lemma3}. Now, we proceed to prove the second part. Since $\{dist_{C^*}(x_k)\}$ converges to zero and, by Lemma~\ref{Lemma3}, $\{x_k\} \subset B_{\delta/2}(x_*)\cap C$, it suffices to show that $\{x_k\}$ converges. Let us prove that $\{x_k\}$ is a Cauchy sequence. To this end, take $p, q \in \mathbb{N}$ with $p \geq q$. It follows from Proposition~\ref{prop:errproj}, the triangle inequality, and the facts that $\epsilon_k = \theta_k^2 \|d_k\|^2$ and $\{x_k\} \subset C$ that \begin{align*} \|x_p - x_q\|& = \|P_C(x_{p-1} + d_{p-1}, \epsilon_{p-1}) - P_C(x_q)\| \nonumber\\ & \leq \|x_{p - 1} + d_{p-1} - x_q\| + \theta_{p-1}\|d_{p-1}\| \nonumber\\ & \leq \|x_{p - 1} - x_q\| + (1 + \theta_{p-1})\|d_{p-1}\|. \end{align*} Repeating the process above, we get $$ \|x_p - x_q\| \leq (1 + \theta_q)\|d_q\| + \cdots + (1 + \theta_{p - 2})\|d_{p - 2}\| + (1 + \theta_{p-1})\|d_{p-1}\|, $$ which, combined with the fact that $\theta_k < \theta$ for every $k \geq 0$, yields $$ \|x_p - x_q\| < (1 + \theta)\sum_{\ell = q}^{p-1}\|d_{\ell}\| \leq (1 + \theta)\sum_{\ell = q}^{\infty}\|d_{\ell}\|.
$$ On the other hand, by \eqref{eq:8}, \eqref{eq:6}, \eqref{equ:1} and the definition of $\hat{\delta}$, we have, for every $\ell$, $$ \|d_{\ell}\| \leq (c_1 + 1)\,dist_{C^*}(x_{\ell}) \leq (c_1 + 1)c_3^{\frac{2}{\sigma}[(1+\frac{\sigma}{2})^{\ell} - 1]}\hat{\delta}^{(1+\frac{\sigma}{2})^{\ell}} \leq (c_1 + 1)\hat{\delta} \left(\dfrac{1}{2}\right)^{(1+\frac{\sigma}{2})^{\ell} - 1}. $$ Combining the last two inequalities, we obtain that \begin{align*} \|x_p - x_q\| & < (1 + \theta)(c_1 + 1)\hat{\delta}\sum_{\ell = q}^{\infty} \left(\dfrac{1}{2}\right)^{(1+\frac{\sigma}{2})^{\ell} - 1} \\ &= (1 + \theta)(c_1 + 1)\hat{\delta}\left[\sum_{\ell = 0}^{\infty} \left(\dfrac{1}{2}\right)^{(1+\frac{\sigma}{2})^{\ell} - 1} - \sum_{\ell = 0}^{q-1} \left(\dfrac{1}{2}\right)^{(1+\frac{\sigma}{2})^{\ell} - 1}\right]. \end{align*} Since the right-hand side goes to $0$ as $q$ goes to $\infty$, $\{x_k\}$ is a Cauchy sequence, and hence it converges; say $\bar{x} = \lim_{k \to \infty}x_k$. Since $x_k \in C$ for all $k$ and $C$ is closed, we have $\bar{x} \in C$. Moreover, because $\omega \,dist_{C^*}(x_k) \leq \|f(x_k)\| \leq L \,dist_{C^*}(x_k)$, $f$ is continuous, and $\{dist_{C^*}(x_k)\}$ converges to zero, we conclude that $\bar{x} \in C^*$. \end{proof} The following theorem establishes the local convergence rate of the sequence $\{x_k\}$ generated by the ILMM-IP. \begin{theorem} Suppose that assumptions (H0) to (H3) hold and let $\{x_k\}$ be a sequence generated by Algorithm~\ref{Alg:NNM} with $x_0 \in B_{\hat{\delta}}(x_*)\cap C$. Let $\delta$ and $\hat{\delta}$ be the constants given in Lemmas~\ref{Lemma1} and \ref{Lemma3}. Then the sequence $\{x_k\}$ converges to $\bar{x}$ with order $1 + \sigma/2$. \end{theorem} \begin{proof} It follows from Proposition~\ref{prop:errproj} that $$ \|d_k\| = \|x_k + d_k - x_k\| \geq \|P_C(x_k + d_k, \epsilon_k) - P_C(x_k)\| - \sqrt{\epsilon_k}. $$ Using \eqref{eq:lm3} and the facts that $\epsilon_k = \theta_k^2 \|d_k\|^2$ and $\theta_k < \theta$, we conclude that $$ \|d_k\| \geq \|x_{k+1} - x_k\| - \theta \|d_k\|. $$ Now, let $\bar{x}_{k+1}$ satisfy $dist_{C^*}(x_{k+1}) = \|x_{k+1} - \bar{x}_{k+1}\|$. Thus, from the previous inequality, the triangle inequality, and \eqref{dist1}, we have $$ (1 + \theta)\|d_k\| \geq \|x_{k+1} - x_k\| \geq \|x_{k} - \bar{x}_{k+1}\| - \|\bar{x}_{k+1} - x_{k+1}\| \geq dist_{C^*}(x_k) - dist_{C^*}(x_{k+1}). $$ From Lemma~\ref{Lemm2}, we get that $dist_{C^*}(x_{k+1}) < dist_{C^*}(x_k)/2$ when $k$ is sufficiently large. Hence, $\|d_k\| > dist_{C^*}(x_k) / (2(1 + \theta))$, and from \eqref{eq:8}, \eqref{eq:6} and Lemma~\ref{Lemm2} we conclude that \begin{equation}\label{equ:dis} \|d_{k+1}\| \leq (c_1 + 1)\,dist_{C^*}(x_{k+1}) < c_3(c_1 + 1)\,dist_{C^*}(x_{k})^{1 + \frac{\sigma}{2}} < c_3(c_1 + 1)2^{1 + \frac{\sigma}{2}}(1 + \theta)^{1 + \frac{\sigma}{2}}\|d_k\|^{1 + \frac{\sigma}{2}}. \end{equation} For sufficiently large $k$, we may assume without loss of generality that the condition $c_3(c_1 + 1)2^{1 + \frac{\sigma}{2}}(1 + \theta)^{1 + \frac{\sigma}{2}}\|d_k\|^{\frac{\sigma}{2}} \leq 1/2$ holds. Therefore, $\|d_{k+1}\| < \frac{1}{2}\|d_k\|$ and, consequently, \begin{equation}\label{eq:7} \|d_{k+j}\| \leq \left(\dfrac{1}{2}\right)^{j}\|d_k\|, \qquad j = 0, 1, \ldots.
\end{equation} Using Proposition~\ref{prop:errproj} and the fact that $\epsilon_k = \theta_k^2\|d_k\|^2$, we obtain that \begin{align*} \|x_k - x_{k+l}\| & = \|P_C(x_k) - P_C(x_{k+l-1} + d_{k+l-1}, \epsilon_{k+l-1})\| \\ & \leq \|x_k - x_{k+l-1} - d_{k+l-1}\| + \sqrt{\epsilon_{k+l-1}} \\ & \leq \|x_k - x_{k+l-1}\| + (1 + \theta_{k+l-1})\|d_{k+l-1}\|. \end{align*} Repeating the process above, we get $$ \|x_k - x_{k+l}\| \leq (1 + \theta_k)\|d_k\| + \cdots + (1 + \theta_{k+l-2})\|d_{k+l-2}\| + (1 + \theta_{k+l-1})\|d_{k+l-1}\|, $$ which, combined with the fact that $\theta_k < \theta$ for every $k \geq 0$, and with \eqref{eq:7}, yields $$ \|x_k - x_{k+l}\| < (1 + \theta)\sum_{j = 0}^{l-1}\|d_{k+j}\| \leq (1 + \theta)\|d_k\|\sum_{j = 0}^{l-1}\left(\dfrac{1}{2}\right)^{j}. $$ Since $\bar{x}$ is the limit point of $\{x_k\}$, taking the limit in the last inequality as $l$ goes to $\infty$, we obtain that \begin{equation}\label{eq:9} \|x_k - \bar{x}\| = \lim_{l \to \infty}\|x_k - x_{k+l}\| \leq (1 + \theta)\|d_k\|\sum_{j = 0}^{\infty}\left(\dfrac{1}{2}\right)^{j}. \end{equation} Considering that $\sum_{j = 0}^{\infty} (1/2)^j = 2$, we conclude from \eqref{eq:9} that $\|x_k - \bar{x}\| \leq 2 (1 + \theta)\|d_k\|$. Therefore, from \eqref{eq:8}, \eqref{eq:6}, \eqref{equ:dis} and \eqref{dist1}, we have \begin{equation*} \|x_{k+1} - \bar{x}\| \leq 2 (1 + \theta)\|d_{k+1}\| < c_3(c_1 + 1)[2(1+ \theta)]^{2 + \frac{\sigma}{2}}\|d_k\|^{1 + \frac{\sigma}{2}} \leq c_3[2(c_1 + 1)(1+ \theta)]^{2 + \frac{\sigma}{2}}\|x_k - \bar{x}\|^{1 + \frac{\sigma}{2}}, \end{equation*} which implies that $\{x_k\}$ converges to $\bar{x}$ with order $1+\sigma/2$. \end{proof} \section{Computational results}\label{sec:CompResu} Here, we present some computational results to assess the practical behavior of the inexact Levenberg-Marquardt method with exact projections (ILMM-EP) and the inexact Levenberg-Marquardt method with feasible inexact projections (ILMM-IP). Specifically, we consider one class of constrained nonsmooth equations of the form \eqref{eq:prob}: a class of medium- and large-scale problems called CAVEs (constrained absolute value equations). It is worth mentioning that in \cite{DeOliveiraFerreira2020} a version of the inexact Newton method was used for solving CAVEs. The constrained absolute value equation (CAVE) is described as $$ \mbox{find} \quad x \in C \quad \mbox{such that}\quad Ax - |x| = b, $$ where $C := \{x \in \mathbb{R}^n:~ \sum_{i = 1}^{n}x_i \leq d, \, x_i \geq 0, \, i = 1,2, \ldots,n\}$, $A \in \mathbb{R}^{n\times n}$, $b \in \mathbb{R}^n \equiv \mathbb{R}^{n\times 1}$, and $|x|$ denotes the vector whose $i$-th component is equal to $|x_i|$. In our implementation, the CAVEs were generated randomly. We used the Matlab routine \textit{sprand} to construct the matrix $A$; in particular, this routine generates a sparse matrix with predefined dimension, density, and singular values. Initially, we defined the dimension $n$ and randomly generated the vector of singular values from a uniform distribution on $(0, 1)$. To ensure that $\|A^{-1}\|< 1/3$, i.e., so that the assumptions of \cite[Theorem 2]{BelloCruz2016} are fulfilled, we rescaled the vector of singular values, multiplying it by $3$ divided by the product of the minimum singular value and a random number in the interval $(0, 1)$; this makes the smallest singular value of $A$ larger than $3$.
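For concreteness, the following is a rough NumPy analogue (ours, dense rather than sparse, with illustrative function names) of this instance-generation procedure; the choices of $x_*$, $b$, and $d$ follow the description in the next paragraph, and the Clarke generalized Jacobian element $V = A - \mbox{diag}(\mbox{sgn}(x))$ is the one recalled below.
\begin{verbatim}
import numpy as np

def make_cave_instance(n, rng):
    # Build A from random orthogonal factors and a prescribed vector
    # of singular values, rescaled so that min(s) > 3 and therefore
    # ||A^{-1}|| = 1/min(s) < 1/3.
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    W, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = rng.uniform(0.0, 1.0, size=n)             # raw singular values
    s *= 3.0 / (s.min() * rng.uniform(0.0, 1.0))  # rescaling step
    A = U @ np.diag(s) @ W.T
    x_star = rng.uniform(0.1, 100.0, size=n)      # planted solution
    b = A @ x_star - np.abs(x_star)
    d = x_star.sum()
    return A, b, d, x_star

def cave_residual(x, A, b):
    return A @ x - np.abs(x) - b                  # f(x) = Ax - |x| - b

def cave_jacobian_element(x, A):
    return A - np.diag(np.sign(x))                # V in the Clarke Jacobian
\end{verbatim}
We emphasize that this is only a sketch of the data generation; the actual experiments used Matlab's sparse \textit{sprand}-based construction as described above.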
To generate the vector $b$ and the constant $d$, we chose a random solution $x_*$ from a uniform distribution on $(0.1, 100)$ and computed $b = Ax_* - |x_*|$ and $d = \sum_{i = 1}^{n}(x_*)_i$, where $(x_*)_i$ denotes the $i$-th component of the vector $x_*$. In both methods, the starting point was $x_0 = (d/2n, d/2n,\ldots, d/2n)$ and the initialization parameter $\theta$ was set equal to $10^{-2}$. We stopped the execution of Algorithm~\ref{Alg:NNM} at $x_k$, declaring convergence, if $\|Ax_k - |x_k| - b\| < 10^{-6}$. In case this stopping criterion was not met, the method stopped after a maximum of $100$ iterations. The procedure used in our implementation to obtain feasible inexact projections was the \textit{CondG Procedure}; see, for example, \cite{OliveiraFerreiraSilva2019, GoncalvesOliveira2017}. In particular, this procedure stopped either when its stopping criterion, i.e., the condition $\langle x_k - x_{k+1}, y - x_{k+1}\rangle \leq \epsilon$ for all $y \in C$, $k = 0, 1, \ldots$, was satisfied, or when a maximum of $100$ iterations was performed. For this class of problems, an element of the Clarke generalized Jacobian at $x$ (see \cite{BelloCruz2016,Mangasarian2009}) is given by $$ V = A - \mbox{diag}(\mbox{sgn}(x)), \qquad x \in \mathbb{R}^n, $$ where $\mbox{diag}(\alpha_i)$ denotes a diagonal matrix with diagonal elements $\alpha_1, \alpha_2, \ldots, \alpha_n$ and $\mbox{sgn}(x)$ denotes the vector whose components are equal to $-1$, $0$, or $1$ depending on whether the corresponding component of the vector $x$ is negative, zero, or positive. The ILMM-EP and the ILMM-IP require the linear system $(V_k^TV_k + \mu_k I_n)d + V_k^Tf(x_k) - r_k = 0$ to be solved approximately, in the sense that \eqref{eq:10} is satisfied. Matlab has several iterative methods for solving linear systems; for our class of problems, the routine \textit{bicgstab} (BiConjugate Gradients Stabilized Method) was the most efficient, so it was used in all tests to solve the linear systems approximately. The numerical results were obtained using Matlab version R2016a on a 2.5~GHz Intel\textregistered\ Core\texttrademark\ i5 2450M computer with 6~GB of RAM running Windows 7 Ultimate. Table~\ref{tab:res} displays the numerical results obtained for the proposed test set. The methods were compared with respect to the total number of iterations (It) and the CPU time in seconds (Time). Observe that, in terms of CPU time, the ILMM-IP performed better than the ILMM-EP on this problem set; this advantage becomes more evident for large-scale problems. Regarding the total number of iterations, both methods behaved quite similarly. In summary, the numerical experiments indicate that the ILMM-IP is reliable and competitive for solving medium- and large-scale constrained nonsmooth equations, mainly when the orthogonal projection onto the feasible set cannot be easily computed. \begin{table} \centering \caption{Comparison of ILMM-EP and ILMM-IP for solving CAVEs.}\label{tab:res} \begin{tabular}{rrcccc} \hline \ & \ & \multicolumn{2}{c}{ILMM-EP} & \multicolumn{2}{c}{ILMM-IP} \\ $n$ & $m$ & It & Time (s) & It & Time (s) \\ \hline 100 & 100 & 6 & 0.15 & 7 & 0.13 \\ \hline 500 & 500 & 6 & 0.56 & 6 & 0.49 \\ \hline 1000 & 1000 & 7 & 3.38 & 7 & 2.65 \\ \hline 5000 & 5000 & 8 & 162.90 & 8 & 152.13 \\ \hline \end{tabular} \end{table} \section{Conclusions}\label{sec:Conclusions} This paper proposed and analyzed an inexact Levenberg-Marquardt method with feasible inexact projections (ILMM-IP).
The method combines inexact Levenberg-Marquardt steps with feasible inexact projections onto the constraint set. The local convergence of the method, as well as results on its convergence rate, were established under the assumptions of semismoothness and an error bound condition, the latter being weaker than the standard full-rank condition on the Clarke generalized Jacobian of $f$. Finally, some numerical experiments were carried out in order to illustrate the numerical behavior of the proposed method. In particular, they indicate that the ILMM-IP represents a useful tool for solving medium- and large-scale constrained nonsmooth equations, mainly when the orthogonal projection onto the feasible set cannot be easily computed.
{ "timestamp": "2021-05-06T02:05:36", "yymm": "2105", "arxiv_id": "2105.01781", "language": "en", "url": "https://arxiv.org/abs/2105.01781" }
\section{Introduction and preliminaries} In the present paper, all vector spaces are supposed to be real, operators linear, vector topologies Hausdorff, and vector lattices Archimedean. For any vector lattice $X$, the Dedekind complete vector lattice of all order bounded linear functionals on $X$ is called the {\em order dual} of $X$ and is denoted by $X^\sim$. Recall that the order continuous part $X^\sim_n$ is a band of $X^\sim$. Some vector lattices may have trivial order duals; for example, $X^\sim=\{0\}$ whenever $X=L_p[0,1]$ with $0<p<1$ (cf. \cite[Thm.5.24]{AB1}). If $T$ is an operator from a vector space $X$ to a vector space $Y$, the {\em algebraic adjoint} $T^{\#}$ is an operator from the algebraic dual $Y^{\#}$ to $X^{\#}$, defined by $(T^{\#}f)(x):=f(Tx)$ for all $x\in X$ and all $f\in Y^{\#}$. In the case when $X$ and $Y$ are vector lattices and $T$ is order bounded, the restriction $T^\sim$ of $T^{\#}$ to the order dual $Y^\sim$ of $Y$ is called the {\em order adjoint} of $T$. The operator $T^\sim: Y^\sim \to X^\sim$ is not only order bounded, but even order continuous (cf. \cite[Thm.1.73]{AB2}). Clearly, $T^\sim: Y_n^\sim \to X_n^\sim$ when $T$ is order continuous. In the case when $(X,\varsigma)$ and $(Y,\tau)$ are topological vector spaces and $T$ is continuous, the restriction $T'$ of $T^{\#}$ to the {\em topological dual} $Y'$ (= the collection of all $\tau$-continuous linear functionals on $Y$) is called the {\em topological adjoint} of $T$. For every locally solid lattice $(X,\varsigma)$, we have $X'\subseteq X^\sim$: indeed, $\varsigma$-continuous functionals are bounded on $\varsigma$-bounded subsets, and order intervals are $\varsigma$-bounded, so such functionals are order bounded. The topological dual $X'$ of a locally convex-solid lattice $(X,\varsigma)$ is an ideal of $X^\sim$ (and hence $X'$ is Dedekind complete) (cf. \cite[Thm.3.49]{AB2}). Every Fr{\'e}chet lattice $(X,\varsigma)$ satisfies $X'=X^\sim$ (cf. \cite[Thm.5.23]{AB1}). A net $(x_\alpha)_{\alpha\in A}$ in a vector lattice $X$ is said to be: \begin{enumerate} \item[$a)$] \ {\em order convergent} ({\em $o$-convergent}) to $x\in X$, if there exists a net $(z_\beta)_{\beta\in B}$ in $X$ such that $z_\beta\downarrow 0$ and, for any $\beta\in B$, there exists $\alpha_\beta\in A$ with $|x_\alpha-x|\leq z_\beta$ for all $\alpha\geq\alpha_\beta$. In this case, we write $x_\alpha\convo x$; \item[$b)$] \ {\em $uo$-convergent} to $x\in X$, if $|x_\alpha-x|\wedge u\convo 0$ for every $u\in X_+$. \end{enumerate} A {\em locally solid lattice} $(X,\varsigma)$ is called: \begin{enumerate} \item[$c)$] \ {\em Lebesgue}/$\sigma$-{\em Lebesgue} if $x_\alpha\downarrow 0$ implies $x_\alpha\convvars 0$ for every net/sequence $x_\alpha$ in $X$; \item[$d)$] \ {\em pre-Lebesgue} if $0\le x_n\uparrow\le x$ in $X$ implies that $x_n$ is a $\varsigma$-Cauchy sequence in $X$; \item[$e)$] \ {\em Levi}/$\sigma$-{\em Levi} if every increasing $\varsigma$-bounded net/sequence in $X_+$ has a supremum in $X$; \item[$f)$] \ {\em Fatou}/$\sigma$-{\em Fatou} if the topology $\varsigma$ has a base at zero consisting of order/$\sigma$-order closed solid sets. \end{enumerate} The assumption $x_\alpha\downarrow 0$ on the net $x_\alpha$ in $c)$ can be replaced by $x_\alpha\convo 0$. A Lebesgue lattice $(X,\varsigma)$ is Dedekind complete iff the order intervals of $X$ are $\varsigma$-complete \cite[Prop.3.16]{Tay1}. Every Lebesgue lattice is pre-Lebesgue \cite[Thm.3.23]{AB1} and Fatou \cite[Lem.4.2]{AB1}.
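To illustrate these notions (a standard example, included only for the reader's convenience): in $c_0$, the sequence $(e_n)$ of unit vectors is $uo$-null, since for every $u\in (c_0)_+$ and $n\ge m$ we have $e_n\wedge u\le z_m$, where $z_m(k):=u(k)\wedge 1$ for $k\ge m$ and $z_m(k):=0$ otherwise, and $z_m\downarrow 0$ in $c_0$; yet $(e_n)$ is not $o$-null, because any $z\in c_0$ dominating a tail of $(e_n)$ would satisfy $z(k)\ge 1$ for all large $k$, which is impossible in $c_0$. Likewise, $(c_0,\|\cdot\|_\infty)$ is Lebesgue but not Levi, whereas $(\ell_\infty,\|\cdot\|_\infty)$ is Levi and Fatou but not Lebesgue, and $\ell_1$ is a $KB$-space.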
Furthermore, $(X,\varsigma)$ is pre-Lebesgue iff the topological completion $(\hat{X},\hat{\varsigma})$ of $(X,\varsigma)$ is Lebesgue \cite[Thm.3.26]{AB1} iff $(\hat{X},\hat{\varsigma})$ is pre-Lebesgue and Fatou \cite[Thm.4.8]{AB1}. By \cite[Thm.3.22]{AB1}, $d)$ is equivalent to each of the following two conditions: \begin{enumerate} \item[$d')$] \ if $0\le x_\alpha\uparrow\le x$ holds in $X$, then $x_\alpha$ is a $\varsigma$-Cauchy net; \item[$d'')$] \ every order bounded disjoint sequence in $X$ is $\varsigma$-convergent to zero. \end{enumerate} The next well-known fact follows directly from $d')$. \begin{prop}\label{tau compl pre Leb is Ded compl} Every $\varsigma$-complete pre-Lebesgue lattice $(X,\varsigma)$ is Dedekind complete. \end{prop} A normed lattice $(X,\|\cdot\|)$ is called a {\em Kantorovich-Banach space} (briefly, a {\em $KB$-space}) if every norm bounded upward directed set in $X_+$ converges in the norm. Each $KB$-space is a Levi lattice with an order continuous complete norm; each order continuous Levi normed lattice is Fatou; and each Levi normed lattice is Dedekind complete. Lattice-normed versions of $KB$-spaces were recently studied in \cite{AEEM1,AEEM2,AGG}. \begin{enumerate} \item[$g)$] \ We call a locally solid lattice $(X,\varsigma)$ a {\em $KB$/$\sigma$-$KB$ lattice} if every increasing $\varsigma$-bounded net/sequence in $X_+$ is $\varsigma$-convergent. \end{enumerate} Clearly, each $KB$/$\sigma$-$KB$ lattice is Levi/$\sigma$-Levi and each Levi/$\sigma$-Levi lattice is Dedekind complete/$\sigma$-complete. Recall that a continuous operator $T$: \begin{enumerate} \item[$h)$] \ between two Banach spaces is said to be {\em Dunford-Pettis} if $T$ takes weakly null sequences to norm null sequences. It is well known that every weakly compact operator on $L_1(\mu)$ is Dunford-Pettis and that an operator is Dunford-Pettis iff it takes weakly Cauchy sequences to norm convergent sequences \cite[Thm.5.79]{AB2}; \item[$i)$] \ from a Banach lattice $X$ to a Banach space $Y$ is called {\em $M$-weakly compact} if $\|Tx_n\|_Y\to 0$ holds for every norm bounded disjoint sequence $x_n$ in $X$; \item[$j)$] \ from a Banach space $Y$ to a Banach lattice $X$ is called {\em $L$-weakly compact} whenever $\|x_n\|_X\to 0$ holds for every disjoint sequence $x_n$ in the solid hull of $T(U_Y)$, where $U_Y$ is the closed unit ball of $Y$. \end{enumerate} An operator $T$ from a vector lattice $X$ to a topological vector space $(Y,\tau)$ is called \begin{enumerate} \item[$k)$] \ {\em $o\tau$-continuous}/{\em $\sigma{}o\tau$-continuous} if $Tx_\alpha\convtau 0$ for every net/sequence $x_\alpha$ such that $x_\alpha\convo 0$ \cite{JAM}. Replacement of $o$-null nets/sequences by $uo$-null ones above gives the definitions of {\em $uo\tau$-continuous}/{\em $\sigma{}uo\tau$-continuous} operators. \end{enumerate} $L$-/$M$-weakly compact operators are weakly compact (cf. \cite[Thm.5.61]{AB2}) and the norm limit of a sequence of $L$-/$M$-weakly compact operators is again $L$-/$M$-weakly compact (cf. \cite[Thm.5.65]{AB2}). For further unexplained terminology and notions, we refer to \cite{AB1,AB2,AAT1,AAT2,GTX,MN,Wick,Za}. Various versions of Banach lattice properties, such as the property of being a $KB$-space, have been investigated recently (see, e.g., \cite{AP,AEGp,AG,AM,AEEM1,AEEM2,BA,DEM1,DEM1a,EEG,EGK,EM0,EM1,EM2,EGZ,EGOU,GTX,JAM,Tay1,Tay2,TA}). In the present paper we continue the study of operator versions of several topological/order properties, focusing on locally solid lattices.
The main idea behind operator versions consists in redistributing the topological and order properties of a topological vector lattice between the domain and the range of the operator under investigation (as in the case of Dunford-Pettis and $L$-/$M$-weakly compact operators). As the order convergence is not topological in general \cite{DEM1,Go}, the most important operator versions emerge when both $o$- and $\varsigma$-convergences are involved simultaneously. \begin{definition}\label{order-to-topology} {\em Let $T$ be an operator from a vector lattice $X$ to a topological vector space $(Y,\tau)$. We say that: \begin{enumerate} \item[$(a)$] \ $T$ is {\em $\tau$-Lebesgue}/{\em $\sigma\tau$-Lebesgue} if $Tx_\alpha\convtau 0$ for every net/sequence $x_\alpha$ such that $x_\alpha\downarrow 0$; $T$ is {\em quasi $\tau$-Lebesgue}/{\em quasi $\sigma\tau$-Lebesgue} if $Tx_\alpha$ is $\tau$-Cauchy for every net/sequence $x_\alpha$ in $X_+$ satisfying $x_\alpha\uparrow\le x\in X$. If there is no confusion with the choice of the topology $\tau$ on $Y$, we simply call $\tau$-Lebesgue operators {\em Lebesgue}, etc. \item[$(b)$] \ $T$ is {\em $o\tau$-bounded}/{\em $o\tau$-compact} if $T[0,x]$ is a $\tau$-bounded/$\tau$-totally bounded subset of $Y$ for each $x\in X_+$. \end{enumerate} If additionally $X=(X,\varsigma)$ is a locally solid lattice, \begin{enumerate} \item[$(c)$] \ $T$ is {\em $KB$}/{\em $\sigma$-$KB$} if, for every $\varsigma$-bounded increasing net/sequence $x_\alpha$ in $X_+$, there exists a (not necessarily unique) $x\in X$ such that $Tx_\alpha\convtau Tx$. \item[$(d)$] \ $T$ is {\em quasi $KB$}/{\em quasi $\sigma$-$KB$} if $T$ takes $\varsigma$-bounded increasing nets/sequences in $X_+$ to $\tau$-Cauchy nets. \end{enumerate} If $X$ and $Y$ are vector lattices with $(X,\varsigma)$ locally solid, \begin{enumerate} \item[$(e)$] \ $T$ is {\em Levi}/{\em $\sigma$-Levi} if, for every $\varsigma$-bounded increasing net/sequence $x_\alpha$ in $X_+$, there exists a (not necessarily unique) $x\in X$ such that $Tx_\alpha\convo Tx$. \item[$(f)$] \ $T$ is {\em quasi Levi}/{\em quasi $\sigma$-Levi} if $T$ takes $\varsigma$-bounded increasing nets/sequences in $X_+$ to $o$-Cauchy nets. \end{enumerate} Replacement of decreasing $o$-null nets/sequences by $uo$-null ones in $(a)$, and of $o$-convergent ($o$-Cauchy) nets/sequences by $uo$-convergent ($uo$-Cauchy) ones in $(e)$ and $(f)$ above, gives the definitions of {\em $uo\tau$-continuous} and of ({\em quasi}) {\em $uo$-Levi} operators respectively. } \end{definition} In our approach, we focus on: \begin{enumerate} \item[$*$] \ modification of nets/sets in the domains of operators in $(a)$, $(b)$, $(d)$, $(e)$, and $(f)$; \item[$**$] \ the information which operators provide about convergences in their domains/ranges in $(c)$, $(e)$, and $(f)$. \end{enumerate} \begin{rem}\label{c_00} {\em \begin{enumerate} \item[$a)$] \ The identity operator in a locally solid lattice $(Y,\tau)$ is Lebesgue/$KB$/Levi iff $(Y,\tau)$ is Lebesgue/$KB$/Levi. This motivates the terminology. Clearly, every $o\tau$-continuous/$\sigma{}o\tau$-continuous operator is $\tau$-Lebesgue/$\sigma\tau$-Lebesgue. By Lemma \ref{PC1}, a positive operator to a locally solid lattice is Lebesgue/$\sigma$-Lebesgue iff it is $o\tau$-continuous/$\sigma{}o\tau$-continuous; and every positive Lebesgue operator to a locally solid lattice is quasi Lebesgue by Theorem \ref{Thm.3.23 from AB2}.
\item[$b)$] \ Each regular operator from a vector lattice $X$ to a locally solid lattice $(Y,\tau)$ is $o\tau$-bounded by Proposition \ref{regular are otau-bounded}. In the case of a normed range space $Y$, $o\tau$-bounded operators are also known as {\em interval-bounded} (cf. \cite[Def.3.4.1]{MN}). As in \cite[Lem.3.4.2]{MN}, each $o\tau$-bounded operator $T$ from a vector lattice $X$ to a topological vector space $(Y,\tau)$ possesses an {\em adjoint} $T^{\circ} : Y' \to X^\sim$ given by $T^{\circ}y' = y'\circ T$ for all $y' \in Y'$. \item[$c)$] \ In the case when $X$ is also a topological vector lattice, the $\tau$-continuity of the operator $T$ is not assumed in $(b)$ of Definition \ref{order-to-topology}. For example, the rank one discontinuous operator $Tx:=(\sum_{k=1}^{\infty}x_k)e_1$ in $(c_{00},\|\cdot\|)$ is $o\tau$-compact and $o\tau$-continuous yet not compact. Each continuous operator $T$ from a discrete Dedekind complete locally convex Lebesgue lattice to a topological vector space is $o\tau$-compact by the Kawai theorem (cf. \cite[Cor.6.57]{AB1}). Every Dunford-Pettis operator from a Banach lattice to a Banach space is $o$-weakly compact (cf. \cite[Thm.5.91]{AB2}). \item[$d)$] \ Each $KB$-operator is quasi $KB$; each quasi ($\sigma$-) $KB$-operator is quasi ($\sigma$-) Lebesgue; and each continuous operator from a $KB$-space to a topological vector space is $KB$. It is well known that the identity operator $I$ in a Banach lattice is $KB$ iff $I$ is $\sigma$-$KB$ iff $I$ is quasi $KB$. Proposition \ref{quasi-KB-vs-sigma-quasi-KB} shows that the notions of quasi $KB$ and quasi $\sigma$-$KB$ operator coincide. The most important reason for using the term {\em $KB$-operator} for $(c)$ of Definition \ref{order-to-topology} is the existence of limits of $\varsigma$-bounded increasing nets. Some authors (see, e.g., \cite{AM,BA,TA}) use the term ``$KB$-operator'' for $(d)$ of Definition \ref{order-to-topology}, which is slightly confusing because $(d)$ says nothing about the existence of limits of topologically bounded increasing nets, and all continuous finite-rank operators in every Banach lattice satisfy $(d)$. Each order bounded operator from a Banach lattice to a $KB$-space is quasi $KB$. However, if we take $X=(c_{00},\|\cdot\|_l)$ and $Y=(c_{00},\|\cdot\|_p)$ with $l,p\in [1,\infty]$, the identity operator $I$ is quasi $KB$ iff $l\le p<\infty$. \item[$e)$] \ It is clear that every compact operator $T$ from a Banach lattice $X$ to a Banach space $Y$ is $o\tau$-compact. However, a compact operator need not be Lebesgue (cf. Example \ref{c_w(R)}). In particular, $o\tau$-compact operators are not necessarily $o\tau$-continuous. \end{enumerate}} \end{rem} \begin{exam}\label{c_0(R)} Let $(c(\mathbb{R}),\|\cdot\|_\infty)$ be the Banach lattice of all $\mathbb{R}$-valued functions on $\mathbb{R}$ such that for every $f\in c(\mathbb{R})$ there exists $a_f\in\mathbb{R}$ for which the set $\{r\in\mathbb{R}: |f(r)-a_f|\ge\varepsilon\}$ is finite for each $\varepsilon>0$. Then the identity operator $I$ in $c(\mathbb{R})$ is $\sigma{}o\tau$-continuous and quasi $\sigma$-Lebesgue yet neither Lebesgue nor quasi Lebesgue. \end{exam} \begin{exam}\label{c_00(H)} Recall that, for a nonempty set $H$, the vector space $c_{00}(H)$ of all finitely supported $\mathbb{R}$-valued functions on $H$ is a Dedekind complete vector lattice. Furthermore, any vector space $X$ is linearly isomorphic to $c_{00}(H)$, where $H$ is a Hamel basis for $X$.
As each order interval of $c_{00}(H)$ lies in a finite-dimensional subspace of $c_{00}(H)$, every operator from $c_{00}(H)$ to any topological vector space $(Y,\tau)$ is $o\tau$-continuous, $o\tau$-bounded, and $o\tau$-compact. On the other hand, the identity operator in $(c_{00},\|\cdot\|_\infty)$ is neither quasi $\sigma$-$KB$ nor quasi $\sigma$-Levi. \end{exam} \begin{rem}\label{tau-Cauchy nets} {\em It is well known that, if a net $y_\alpha$ in a topological vector space $(Y,\tau)$ is not $\tau$-Cauchy, then there exist $U\in\tau(0)$ and an increasing sequence $\alpha_n$ such that $y_{\alpha_{n+1}}-y_{\alpha_n}\not\in U$ for each $n$ (see, e.g., \cite[Lem.2.5]{AB1}).} \end{rem} \begin{prop}\label{quasi-KB-vs-sigma-quasi-KB} An operator $T$ from a locally solid lattice $(X,\varsigma)$ to a topological vector space $(Y,\tau)$ is quasi $KB$ iff $T$ is quasi $\sigma$-$KB$. \end{prop} \begin{proof} The necessity is trivial. For the sufficiency, suppose $T$ is not quasi $KB$. Then there exists a $\varsigma$-bounded increasing net $x_\alpha$ in $X_+$ such that $Tx_\alpha$ is not $\tau$-Cauchy in $Y$. It follows from Remark \ref{tau-Cauchy nets} that for some increasing sequence $\alpha_n$ and some $U\in\tau(0)$ $$ Tx_{\alpha_{n+1}}-Tx_{\alpha_n}\not\in U \ \ \ (\forall n\in\mathbb{N}) \eqno(1) $$ Since the sequence $x_{\alpha_n}$ is increasing and $\varsigma$-bounded, condition $(1)$ implies that $T$ is not quasi $\sigma$-$KB$. \end{proof} \begin{cor}\label{s-complete is sigma-KB is KB} Every $\varsigma$-complete $\sigma$-$KB$ lattice $(X,\varsigma)$ is a $KB$ lattice. \end{cor} \begin{proof} The identity operator $I$ on $X$ is $\sigma$-$KB$ and hence quasi $\sigma$-$KB$. By Proposition \ref{quasi-KB-vs-sigma-quasi-KB}, $I$ is quasi $KB$. Then every $\varsigma$-bounded increasing net in $X_+$ is $\varsigma$-Cauchy, and hence is $\varsigma$-convergent. \end{proof} \begin{rem}\label{sequential}{\em The following fact (cf., e.g., \cite[Prop.1.1]{AEG}) is well known: \begin{enumerate} \item[$a)$] \ $\rho(y_\alpha, y)\to 0$ in a metric space $(Y,\rho)$ iff, for every subnet $y_{\alpha_\beta}$ of the net $y_\alpha$, there exists a (not necessarily increasing) sequence $\beta_k$ of indices with $\rho(y_{\alpha_{\beta_k}}, y)\to 0$. \end{enumerate} Application of $a)$ to Cauchy nets/sequences in $(Y,\rho)$ gives that: \begin{enumerate} \item[$b)$] \ a net/sequence $y_\alpha$ of a metric space $(Y,\rho)$ is Cauchy iff, for every subnet/subsequence $y_{\alpha_\beta}$ of $y_\alpha$, there exists a sequence $\beta_k$ of indices such that the sequence $y_{\alpha_{\beta_k}}$ is $\rho$-Cauchy. \end{enumerate} Application of $b)$ and Proposition \ref{quasi-KB-vs-sigma-quasi-KB} gives that, for an operator $T$ from a locally solid lattice $(X,\varsigma)$ to a metric vector space $(Y,\rho)$, the following are equivalent: \begin{enumerate} \item[$(i)$] \ $T$ is quasi $KB$; \item[$(ii)$] \ every $\varsigma$-bounded increasing sequence $x_n$ in $X_+$ has a subsequence $x_{n_k}$ such that $Tx_{n_k}$ is $\rho$-Cauchy in $Y$. \end{enumerate} Another application of $b)$ to an operator $T$ from a locally solid lattice $(X,\varsigma)$ to a metric vector space $(Y,\rho)$ gives the equivalence of the following conditions: \begin{enumerate} \item[$(i)'$] \ $T$ is $KB$; \item[$(ii)'$] \ for any $\varsigma$-bounded increasing net $x_\alpha$ in $X_+$, there exist an element $x\in X$ and a sequence $\beta_k$ of indices such that $\rho(Tx_{\alpha_{\beta_k}},Tx)\to 0$.
\end{enumerate} } \end{rem} In many cases, as for (quasi) $KB$ or Levi operators, the lattice structure in the domain/range of the operator can be relaxed to an ordered space structure \cite{AM,AGG,EG,EM2}, or combined with the lattice-norm structure \cite{AP,AEEM1,AEEM2,AGG}. Such generalizations are not included in the present paper. In Section 2 we investigate operators whose domains and/or ranges are locally solid lattices. Section 3 is devoted to operators between Banach lattices. \section{Operators between locally solid lattices} In this section, we study mostly operators whose domains and/or ranges are locally solid lattices. Observe first that the sets $L_{Leb}(X,Y)$, $L_{o\tau}(X,Y)$, $L_{o\tau{}b}(X,Y)$, and $L_{o\tau{}c}(X,Y)$ of Lebesgue, $o\tau$-continuous, $o\tau$-bounded, and $o\tau$-compact operators respectively from a vector lattice $X$ to a topological vector space $(Y,\tau)$ are vector spaces, and: \begin{enumerate} \item[$(*)$] \ $L_{o\tau}(X,Y)\subseteq L_{Leb}(X,Y)$; \item[$(**)$] \ $L_{o\tau c}(X,Y)\subseteq L_{o\tau b}(X,Y)$, since every totally bounded subset of $Y$ is bounded. \end{enumerate} \begin{theorem}\label{Thm.5.10 of AB2} Let $X$ be a vector lattice and $(Y,\tau)$ be a locally convex Lebesgue lattice which is either Dedekind complete or $\tau$-complete. Then $L_{o\tau c}(X,Y)\bigcap L_r(X,Y)$ is a band of the lattice $L_r(X,Y)$ of all regular operators from $X$ to $Y$. \end{theorem} \begin{proof} Since every Lebesgue lattice is pre-Lebesgue by \cite[Thm.3.23]{AB1}, and every topologically complete pre-Lebesgue lattice is Dedekind complete by Proposition \ref{tau compl pre Leb is Ded compl}, the lattice $(Y,\tau)$ is Dedekind complete in either case. By \cite[Thm.5.10]{AB2}, for each $x\in X_+$, the set $$ C(x)=\{T\in L_b(X,Y): T[0,x] \ \text{is}\ \tau\text{-totally}\ \text{bounded}\} $$ is a band of the Dedekind complete lattice $L_b(X,Y)$ of all order bounded operators from $X$ to $Y$. Since $L_r(X,Y)=L_b(X,Y)$, the set $L_{o\tau{}c}(X,Y)\bigcap L_r(X,Y)=\bigcap_{x\in X_+}C(x)$ is also a band of $L_r(X,Y)$, as desired. \end{proof} The following two propositions might be known. We include their elementary proofs, as we did not find appropriate references. \begin{prop}\label{regular are otau-bounded} Each regular operator $T$ from a vector lattice $X$ to a locally solid lattice $(Y,\tau)$ is $o\tau$-bounded. \end{prop} \begin{proof} Without loss of generality, assume $T\ge 0$. Let $x\in X_+$, $U\in\tau(0)$, and let $V$ be a solid $\tau$-neighborhood of zero in $Y$ with $V\subseteq U$. Then $Tx\in\lambda V$ for all $\lambda\ge\lambda_0$ for some $\lambda_0>0$, and hence $T[0,x]\subseteq [0,Tx]\subseteq\lambda V\subseteq \lambda U$ for all $\lambda\ge\lambda_0$. Since $U\in\tau(0)$ was arbitrary, $T$ is $o\tau$-bounded. \end{proof} \begin{prop}\label{Lebesgue vs weak Lebesgue} An order continuous positive operator $T$ from a vector lattice $X$ to a locally convex-solid lattice $(Y,\tau)$ is Lebesgue iff $T$ is weakly Lebesgue. \end{prop} \begin{proof} The necessity is trivial. For the sufficiency, assume that $T$ is weakly Lebesgue (i.e., $T:X\to Y$ is Lebesgue with respect to the weak topology $\sigma(Y,Y')$ on $Y$). Let $x_\alpha\downarrow 0$ in $X$. Then $Tx_\alpha\downarrow 0$ in $Y$. Since $T$ is $\sigma(Y,Y')$-Lebesgue, $Tx_\alpha\xrightarrow[]{\sigma(Y,Y')}0$. By using Dini-type arguments (cf. \cite[Thm.3.52]{AB2}), we conclude $Tx_\alpha\convtau 0$, as desired.
\end{proof} \noindent We do not know any example of an order continuous $o\tau$-bounded weakly Lebesgue operator from a vector lattice to a locally solid lattice that is not Lebesgue. \begin{prop}\label{Lebesgue vs weak weak cont} A weakly continuous positive operator $T$ from a Lebesgue $($$\sigma$-Lebesgue$)$ lattice $(X, \varsigma )$ to a locally solid lattice $(Y,\tau)$ is Lebesgue $($$\sigma$-Lebesgue$)$. \end{prop} \begin{proof} As the arguments are similar, we restrict ourselves to the Lebesgue case. Let $x_\alpha\downarrow 0$ in $X$. Since $(X, \varsigma )$ is Lebesgue, $x_\alpha\convvars 0$ and hence $x_\alpha\xrightarrow[]{\sigma(X,X')}0$. The weak continuity of $T$ implies $Tx_\alpha\xrightarrow[]{\sigma(Y,Y')}0$. Since $Tx_\alpha\downarrow 0$ in $Y$, it follows that $Tx_\alpha\convtau 0$, as in the proof of Proposition \ref{Lebesgue vs weak Lebesgue}. \end{proof} \begin{theorem}\label{Thm.3.23 from AB2} Each positive Lebesgue operator $T$ from a vector lattice $X$ to a locally solid lattice $(Y,\tau)$ is quasi Lebesgue. \end{theorem} \begin{proof} Let $x_\alpha\uparrow\le x\in X$, so that $u_\alpha:=x-x_\alpha\downarrow\ge 0$. Therefore $(u_\alpha-z)_{\alpha;z\in L}\downarrow 0$, where $L=\{z\in X: (\forall\alpha\in A)[z\le u_\alpha]\}$ is the upward directed set of lower bounds of the net $u_\alpha$ (cf. \cite[Thm.1.2]{Em}). Since $T$ is Lebesgue and $T\ge 0$, $$ (Tu_\alpha-Tz)_{\alpha;z\in L}\convtau 0 \ \ \ \& \ \ \ (Tu_\alpha-Tz)_{\alpha;z\in L}\downarrow\ge 0. \eqno(2) $$ It follows from $(2)$ that $(Tu_\alpha-Tz)_{\alpha;z\in L}\downarrow 0$. Then $Tu_\alpha$, and hence $Tx_\alpha$, is $\tau$-Cauchy, as desired. \end{proof} \noindent We do not know any example of an $o\tau$-bounded Lebesgue operator which is not quasi Lebesgue. In contrast to Theorem 3.24 of \cite{AB1}, the converse of Theorem \ref{Thm.3.23 from AB2} is false even if $Y=\mathbb{R}$, due to Example \ref{c_w(R)}. Recall that an Archimedean vector lattice $X$ is called {\em laterally} ({\em $\sigma$-}) {\em complete} whenever every (countable) subset of pairwise disjoint vectors of $X_+$ has a supremum. Every laterally complete vector lattice $X$ contains a weak order unit, and every band of such an $X$ is a principal band (cf. \cite[Thm.7.2]{AB1}). Furthermore, by the Veksler--Geiler theorem (cf. \cite[Thm.7.4]{AB1}), if a vector lattice $X$ is laterally ($\sigma$-) complete, then $X$ satisfies the projection property (respectively, the principal projection property). A vector lattice that is both laterally and Dedekind ($\sigma$-) complete is referred to as {\em universally} ($\sigma$-) {\em complete}. It follows from \cite[Thm.7.4]{AB1} that a vector lattice $X$ is universally complete iff $X$ is Dedekind $\sigma$-complete and laterally complete iff $X$ is uniformly complete and laterally complete (cf. \cite[Thm.7.5]{AB1}). Similarly, a laterally $\sigma$-complete vector lattice $X$ is Dedekind $\sigma$-complete iff $X$ is uniformly complete. A universal completion of a vector lattice $X$ is a laterally and Dedekind complete vector lattice $X^u$ which contains $X$ as an order dense sublattice. Every vector lattice has a unique universal completion (cf. \cite[Thm.7.21]{AB1}). A laterally complete vector lattice $X$ is discrete iff $X$ is lattice isomorphic to $\mathbb{R}^S$ for some nonempty set $S$ iff $X$ admits a Hausdorff locally convex-solid Lebesgue topology iff the space $X_n^\sim$ of order continuous functionals on $X$ separates $X$ (cf. \cite[Thm.7.48]{AB1}); in each of these cases $X = X^u$.
\begin{definition}\label{tau-laterally-complete} {\em A topological vector lattice $(Y,\tau)$ is called {\em $\tau$-laterally} ({\em $\sigma$-}) {\em complete} whenever every $\tau$-bounded (countable) subset of pairwise disjoint vectors of $Y_+$ has a supremum. } \end{definition} Every laterally ($\sigma$-) complete topological vector lattice $(Y,\tau)$ is $\tau$-laterally ($\sigma$-) complete. Every Dedekind complete $AM$-space $X$ with an order unit is $\tau$-laterally complete with respect to the norm. \begin{exam}\label{not sigma Lebesgue tau-laterally} {\em Let $X$ be the vector lattice of real functions on $\mathbb{R}$ such that each $f\in X$ may differ from a constant, say $a_f$, only on a countable subset of $\mathbb{R}$, with $f-a_f\mathbb{I}_\mathbb{R}\in\ell_1(\mathbb{R})$ for each $f\in X$. \begin{enumerate} \item[$(i)$] \ The vector lattice $X$ is not Dedekind $\sigma$-complete, as $f_n:=\mathbb{I}_{\mathbb{R}\setminus\{1,2,...,n\}}\downarrow \ge 0$ yet $\inf\limits_{n \in \mathbb{N}}f_n$ does not exist in $X$. \item[$(ii)$] \ The vector lattice $X$ is a Banach space with respect to the norm $\|f\|:=|a_f|+\|f-a_f\mathbb{I}_\mathbb{R}\|_1$. \item[$(iii)$] \ The vector lattice $X$ is not $\tau$-laterally $\sigma$-complete with respect to the norm topology on $X$. Indeed, the norm bounded countable set of pairwise disjoint vectors $e_n=\mathbb{I}_{\{n\}}$ of $X_+$ has no supremum in $X$. \item[$(iv)$] \ The identity operator $I$ on $X$ is not $\sigma$-Lebesgue, as\\ $\frac{1}{n}\mathbb{I}_{\mathbb{R}\setminus\{1,2,...,n\}}\downarrow 0$ in $X$ yet $\|\frac{1}{n}\mathbb{I}_{\mathbb{R}\setminus\{1,2,...,n\}}\|=\frac{n+1}{n}\not\to 0$. \end{enumerate} } \end{exam} \begin{prop}\label{straightforward} Any Levi $($$\sigma$-Levi$)$ lattice is $\tau$-laterally $($$\sigma$-$)$ complete and Dedekind $($$\sigma$-$)$ complete. \end{prop} \begin{proof} We consider the Levi case only, because the $\sigma$-Levi case is similar. Let $(Y,\tau)$ be a Levi lattice. The Dedekind completeness of $Y$ is immediate, since order bounded subsets of $Y$ are $\tau$-bounded. Let $D$ be a $\tau$-bounded subset of pairwise disjoint positive vectors of $Y$. Then the collection $D^\vee$ of suprema of finite subsets of $D$ forms an increasing $\tau$-bounded net indexed by the set ${\cal P}_{fin}(D)$ of all finite subsets of $D$ directed by inclusion. Since $(Y,\tau)$ is Levi, $D^\vee \uparrow d$ for some $d \in Y$. It follows from $\sup D = \sup D^\vee =d$ that $(Y,\tau)$ is $\tau$-laterally complete. \end{proof} \begin{exam}\label{s} The locally solid lattice $(\mathbb{R}^S,\tau)$, where $\tau$ is the product topology on the vector lattice $\mathbb{R}^S$ of real functions on a set $S$, is locally convex, Lebesgue, Levi, and universally complete. \end{exam} \begin{prop}\label{Thm.7.8. from AB1} Each regular operator $T$ from a laterally $\sigma$-complete vector lattice $X$ to a $\sigma$-Lebesgue lattice $(Y,\tau)$ is $\sigma$-Lebesgue. \end{prop} \begin{proof} Without loss of generality, we can suppose $T\ge 0$. Let $x_n\downarrow 0$ in $X$. By Theorem 7.8 of \cite{AB1}, $T$ is $\sigma$-order continuous, and hence $Tx_n\downarrow 0$ in $Y$. Since $(Y,\tau)$ is $\sigma$-Lebesgue, $Tx_n\convtau 0$. \end{proof} \noindent We do not know whether or not every positive operator $T$ from a laterally complete vector lattice $X$ to a Lebesgue lattice $(Y,\tau)$ is Lebesgue. Each laterally $\sigma$-complete locally solid lattice is $\sigma$-Lebesgue and pre-Lebesgue \cite[Thm.7.49]{AB1}.
\begin{prop}\label{varsigma1} Let $T$ be a continuous operator from a laterally $\sigma$-complete locally solid lattice $(X,\varsigma)$ to a topological vector space $(Y, \tau)$. Then $T$ is $\sigma$-Lebesgue in each of the following cases: \begin{enumerate} \item[$(i)$] \ $X$ is discrete. \item[$(ii)$] \ $(X, \varsigma)$ is metrizable. \end{enumerate} \end{prop} \begin{proof} Let $x_n\downarrow 0$ in $X$. In case $(i)$, $(X,\varsigma)$ is $\sigma$-Lebesgue by \cite[Thm.7.49]{AB1}, so $x_n\convvars 0$; the result follows because $T$ is $\varsigma\tau$-continuous. In case $(ii)$, the result follows from $(i)$, as every infinite dimensional laterally $\sigma$-complete metrizable locally solid lattice is lattice isomorphic to $\mathbb{R}^\mathbb{N}$. \end{proof} Since $(X,u\varsigma)$ is pre-Lebesgue iff $(X,\varsigma)$ is pre-Lebesgue \cite[Cor.4.3]{Tay2}, the statement of Proposition \ref{varsigma1} remains true when $(X,\varsigma)$ is replaced by $(X,u\varsigma)$. \begin{rem} {\em Let $T$ be a positive operator from a locally solid lattice $(X,\varsigma)$ to a laterally $\sigma$-complete locally solid lattice $(Y, \tau)$ and let $0\le x_\alpha\uparrow \le x$ in $X$. Since $(Y, \tau)$ is pre-Lebesgue by \cite[Thm.7.49]{AB1}, it follows from $0\le Tx_\alpha\uparrow \le Tx$ that $Tx_\alpha$ is $\tau$-Cauchy in $Y$, and hence $T$ is quasi Lebesgue. } \end{rem} \begin{rem} {\em Let $T$ be a continuous operator from a laterally $\sigma$-complete locally solid lattice $(X,\varsigma)$ to a locally solid lattice $(Y,\tau)$. \begin{enumerate} \item[$a)$] \ Suppose $T\ge 0$ and $0\le x_\alpha\uparrow\le x$ in $X$. Since $(X,\varsigma)$ is pre-Lebesgue by \cite[Thm.7.49]{AB1}, the net $x_\alpha$ is $\varsigma$-Cauchy in $X$, and hence $Tx_\alpha$ is $\tau$-Cauchy in $Y$, meaning that $T$ is quasi Lebesgue. \item[$b)$] \ Suppose $(X,\varsigma)$ is Fatou, and $x_\alpha\downarrow 0$ in $X$. Then, by \cite[Thm.7.50]{AB1}, $(X,\varsigma)$ is Lebesgue, so $x_\alpha\convvars 0$ and hence $Tx_\alpha\convtau 0$, meaning that $T$ is Lebesgue. \item[$c)$] \ Suppose $\varsigma$ is a metrizable locally solid topology and $x_\alpha\downarrow 0$ in $X$. Then $(X,\varsigma)$ is Lebesgue by \cite[Thm.7.55]{AB1}. Thus $x_\alpha\convvars 0$ and hence $Tx_\alpha\convtau 0$, meaning that $T$ is Lebesgue. \item[$d)$] \ If $(X,\varsigma)$ is a Fr{\'e}chet lattice, then $\varsigma$ is uniquely defined and every regular $T:(X, \varsigma)\to(Y, \tau)$ is continuous (cf. \cite[Thm.5.19 and Thm.5.21]{AB1}). In particular, every regular operator from a laterally $\sigma$-complete Fr{\'e}chet lattice $(X,\varsigma)$ to a locally solid lattice $(Y,\tau)$ is Lebesgue. \end{enumerate} } \end{rem} \begin{prop}\label{cont are otau-bounded} Any continuous operator $T$ from a locally solid lattice $(X,\varsigma)$ to a topological vector space $(Y,\tau)$ is $o\tau$-bounded. \end{prop} \begin{proof} Let $x\in X_+$, $U\in\tau(0)$, and let $V$ be a solid $\varsigma$-neighborhood of zero in $X$ with $V\subseteq T^{-1}U$. Then there exists $\lambda_0>0$ such that $x\in\lambda V$ for all $\lambda\ge\lambda_0$, and hence $[0,x]\subseteq\lambda V\subseteq\lambda T^{-1}(U)$ for all $\lambda\ge\lambda_0$. Thus $T[0,x]\subseteq\lambda U$ for all $\lambda\ge\lambda_0$, which shows that $T$ is $o\tau$-bounded. \end{proof} \noindent It is worth mentioning here that an $o\tau$-bounded operator $T$ from a Dedekind $\sigma$-complete vector lattice $X$ to a normed lattice $Y$ is $\sigma$-Lebesgue iff $T$ is order-weakly compact (cf. \cite[Cor.1]{JAM}).
The following lemma can be considered as an extension of \cite[Lem.1]{JAM} to locally solid lattices. \begin{lem}\label{PC1} A positive operator $T$ from a vector lattice $X$ to a locally solid lattice $(Y,\tau)$ is Lebesgue/$\sigma$-Lebesgue iff $T$ is $o\tau$-continuous/$\sigma{}o\tau$-continuous. \end{lem} \begin{proof} As the $\sigma$-Lebesgue case is similar, we consider only Lebesgue operators. The sufficiency is routine. For the necessity, assume $T$ is Lebesgue and $x_\alpha\convo 0$ in $X$. Take a net $z_\beta$ in $X$ with $z_\beta\downarrow 0$ such that, for each $\beta$, there exists $\alpha_\beta$ with $|x_\alpha|\le z_\beta$ for $\alpha\ge\alpha_\beta$. It follows from $T\ge 0$ that $|Tx_\alpha|\le T|x_\alpha|\le Tz_\beta$ for $\alpha\geq\alpha_\beta$. As $T$ is Lebesgue, $Tz_\beta\convtau 0$, which implies $Tx_\alpha\convtau 0$, because the topology $\tau$ is locally solid. \end{proof} \begin{cor}\label{PC2} An order bounded operator $T$ from a vector lattice to a Dedekind complete locally solid lattice is Lebesgue/$\sigma$-Lebesgue iff $T$ is $o\tau$-continuous/$\sigma{}o\tau$-continuous. \end{cor} Now we investigate some properties of adjoint operators. Recall that, for a locally solid lattice $(X,\varsigma)$, the {\em absolute weak topology} $|\sigma|(X^\sim,X)$ on $X^\sim$ is the locally convex-solid topology generated by the collection of Riesz seminorms $\{\rho_x(f)=|f|(|x|): x\in X\}$. The locally solid lattice $(X^\sim,|\sigma|(X^\sim,X))$ is Lebesgue, Levi, and Fatou \cite[Prop.81C]{Fre}. We begin with the following technical lemma. \begin{lem}\label{dual is cont} Let $T:(X,\varsigma)\to(Y,\tau)$ be an order bounded continuous operator from a locally solid lattice $(X,\varsigma)$ to a Dedekind complete locally solid lattice $(Y,\tau)$. Then the topological adjoint $T':(Y',|\sigma|(Y^\sim,Y))\to(X',|\sigma|(X^\sim,X))$ is also continuous. \end{lem} \begin{proof} Indeed, for $f_\alpha\xrightarrow{|\sigma|(Y^\sim,Y)}0$ in $Y^\sim$, it follows from $$ \rho_{x}(T'f_\alpha)=\langle |T'f_\alpha|, |x|\rangle \le \langle |T'||f_\alpha|, |x|\rangle\le\langle |T|' |f_\alpha|, |x|\rangle = $$ $$ \langle|f_\alpha|, |T||x|\rangle=|f_\alpha|(|T||x|)=\rho_{|T||x|}(f_\alpha)\to 0 \eqno(3) $$ that $T'f_\alpha\xrightarrow{|\sigma|(X^\sim,X)}0$. The second inequality in $(3)$ follows from the existence of the modulus of $T$ and from the observation that $\pm T\le|T|$ implies $\pm T'\le|T|'$ (cf. \cite[p.67]{AB2}), which gives $|T'|\le|T|'$. \end{proof} \begin{theorem} Let $T: X \to Y$ be an order bounded operator from a vector lattice $X$ to a Dedekind complete vector lattice $Y$, and let $T':(Y^\sim,|\sigma|(Y^\sim,Y))\to(X^\sim,|\sigma|(X^\sim,X))$ be the corresponding topological adjoint of $T$. Then $T'$ is $o\tau$-bounded, $o\tau$-continuous, Levi, and $KB$. \end{theorem} \begin{proof} The $o\tau$-boundedness of $T'$ follows from Lemma \ref{dual is cont} by Proposition \ref{cont are otau-bounded}. Since $T$ is regular, without loss of generality we can suppose $T\ge 0$ (and hence $T'\ge 0$). Let $f_\alpha\convo 0$ in $Y^\sim$. As $T'$ is order continuous, $T'f_\alpha\convo 0$ in $X^\sim$, and since $(X^\sim,|\sigma|(X^\sim,X))$ is a Lebesgue lattice, we obtain $T'f_\alpha\xrightarrow[]{|\sigma|(X^\sim,X)}0$; hence $T'$ is $o\tau$-continuous. Let $f_\alpha$ be a positive $|\sigma|(Y^\sim,Y)$-bounded increasing net in $Y^\sim$. Since $(Y^\sim,|\sigma|(Y^\sim,Y))$ is Levi, $f_\alpha\uparrow f$ for some $f\in Y^\sim$.
As $T'$ is order continuous, $T'f_\alpha\uparrow T'f$ in $X^\sim$; in particular, $T'$ is Levi. Since the locally solid lattice $(X^\sim,|\sigma|(X^\sim,X))$ is Lebesgue, $T'f_\alpha\xrightarrow[]{|\sigma|(X^\sim,X)}T'f$, and hence $T': (Y^\sim,|\sigma|(Y^\sim,Y)) \to (X^\sim,|\sigma|(X^\sim,X))$ is $KB$. \end{proof} The natural embedding $J$ of a vector lattice $X$ into $(X_n^\sim)_n^\sim$ is defined by $(Jx)(f) = f(x)$ for all $f\in X_n^\sim$, $x\in X$. The mapping $J$ is an order continuous lattice homomorphism (cf., e.g., \cite[Thm.1.70]{AB2}). By the Nakano theorem (cf. \cite[Thm.1.71]{AB2}), $J$ is one-to-one and onto (i.e., $X$ is {\em perfect}) iff $X_n^\sim$ separates $X$ and, whenever a net $x_\alpha$ in $X_+$ satisfies $x_\alpha\uparrow$ and $\sup_\alpha f(x_\alpha)<\infty$ for each $0\le f\in X_n^\sim$, there exists some $x\in X$ satisfying $x_\alpha\uparrow x$ in $X$. \begin{lem}\label{sveta5} Let $T$ be an order bounded operator from a vector lattice $X$ to a vector lattice $Y$. The second order adjoint $T^{\sim\sim}$, when restricted to $(X_n^\sim)^\sim$, $$ ((X_n^\sim)^\sim, |\sigma|((X_n^\sim)^\sim,X_n^\sim))\xrightarrow{T^{\sim\sim}}((Y_n^\sim)^\sim, |\sigma|((Y_n^\sim)^\sim,Y_n^\sim)) $$ is continuous. \end{lem} \begin{proof} Let $(X_n^\sim)^\sim\ni x_\alpha \xrightarrow{|\sigma|((X_n^\sim)^\sim,X_n^\sim)} 0$. By use of the fact that $|T^{\sim\sim}|\le|T^\sim|^\sim$, it follows that $$ \langle |T^{\sim\sim}x_\alpha|, |y|\rangle \le \langle|T^\sim|^\sim|x_\alpha|, |y|\rangle \le \langle |x_\alpha|,|T^\sim||y|\rangle \eqno(4) $$ for all $y\in Y_n^\sim$. Since $x_\alpha \xrightarrow{|\sigma|((X_n^\sim)^\sim, X_n^\sim)} 0$, it follows from $(4)$ that $T^{\sim\sim}x_\alpha \xrightarrow{|\sigma|((Y_n^\sim)^\sim,Y_n^\sim)}0$, and hence the operator $T^{\sim\sim}$ is $|\sigma|((X_n^\sim)^\sim,X_n^\sim)$ to $|\sigma|((Y_n^\sim)^\sim,Y_n^\sim)$ continuous. \end{proof} \begin{theorem}\label{sveta6} Let $T$ be an order bounded operator from a vector lattice $X$ to a vector lattice $Y$. Then $$ ((X_n^\sim)_n^\sim,|\sigma|((X_n^\sim)_n^\sim,X_n^\sim))\xrightarrow{T^{\sim\sim}}((Y_n^\sim)_n^\sim, |\sigma|((Y_n^\sim)_n^\sim,Y_n^\sim)) $$ is a continuous, Lebesgue, Levi, and $KB$ operator. \end{theorem} \begin{proof} The continuity follows from Lemma \ref{sveta5} by restricting the second order adjoint $T^{\sim\sim}$ to $(X_n^\sim)_n^\sim$. Let $x_\alpha\downarrow 0$ in $(X_n^\sim)_n^\sim$. As $T^{\sim\sim}$ is order continuous, $T^{\sim\sim}x_\alpha\convo 0$ in $(Y_n^\sim)_n^\sim$. Since $((Y_n^\sim)_n^\sim,|\sigma|((Y_n^\sim)_n^\sim,Y_n^\sim))$ is Lebesgue, $T^{\sim\sim}x_\alpha\xrightarrow{|\sigma|((Y_n^\sim)_n^\sim,Y_n^\sim)} 0$, showing that the operator is Lebesgue. Let $x_\alpha$ be a positive increasing $|\sigma|((X_n^\sim)_n^\sim, X_n^\sim)$-bounded net in $(X_n^\sim)_n^\sim$. Since $((X_n^\sim)_n^\sim,|\sigma|((X_n^\sim)_n^\sim,X_n^\sim))$ is Levi, the net $x_\alpha$ has a supremum in $(X_n^\sim)_n^\sim$, say $x$. Thus $x_\alpha\uparrow x$, and $T^{\sim\sim}x_\alpha\convo T^{\sim\sim}x$ by the order continuity of $T^{\sim\sim}$; thus the operator is Levi. Finally, as $((Y_n^\sim)_n^\sim,|\sigma|((Y_n^\sim)_n^\sim,Y_n^\sim))$ is a Lebesgue lattice, $T^{\sim\sim} x_\alpha\xrightarrow{|\sigma|((Y_n^\sim)_n^\sim,Y_n^\sim)}T^{\sim\sim}x$, and the operator is $KB$.
\end{proof} A vector sublattice $Z$ of a vector lattice $X$ is called {\em regular} if the embedding of $Z$ into $X$ preserves arbitrary suprema and infima. Order ideals and order dense vector sublattices of a vector lattice $X$ are regular \cite[Thm.1.23]{AB1}. \begin{cor}\label{sveta7} Let $X^{\sim}=X_n^{\sim}$, and let $Y$ be a vector lattice. Then each order bounded operator $T:X\to(Y, |\sigma|(Y,Y_n^\sim))$ is Lebesgue. \end{cor} \begin{proof} Let $x_\alpha\downarrow 0$ in $X$. Since the natural embedding $J:X\to X^{\sim\sim}$ is one-to-one and $J(X)$ is a regular sublattice of $X^{\sim\sim}$ (cf. \cite[Thm.1.67]{AB1}), we have $T=T^{\sim\sim}\circ J$ and $Jx_\alpha\downarrow 0$ in $X^{\sim\sim}$. Thus $Tx_\alpha=T^{\sim\sim}(Jx_\alpha)\xrightarrow{|\sigma|(Y,Y_n^\sim)}0$, since $T^{\sim\sim}$ is Lebesgue by Theorem \ref{sveta6}. \end{proof} \medskip We also mention the following question. Let $T$ be Lebesgue, $o\tau$-continuous, $o\tau$-bounded, $o\tau$-compact, $KB$, or Levi. Does $T^{\sim\sim}$ satisfy the same property? The answer is negative, when $T$ maps a Banach lattice $X$ to a Banach lattice $Y$, in the following cases: \begin{enumerate} \item[$(*)$] \ for Lebesgue and for $o\tau$-continuous operators, since the identity $I:c_0\to c_0$ is $o\tau$-continuous yet its second order adjoint $I:\ell_\infty\to\ell_\infty$ is not even Lebesgue; \item[$(**)$] \ for $o\tau$-compact operators, since the natural embedding $J:c_0\to\ell_\infty$ is $o\tau$-compact but $J^{\sim\sim}:\ell_\infty\to\ell^{**}_\infty$ is not $o\tau$-compact. \end{enumerate} Next we discuss the domination properties of positive operators. Let $T$ and $S$ be positive operators between vector lattices $X$ and $Y$ satisfying $0\le S\le T$. \begin{quest}\label{dominated Lebesgue} {\em When does the assumption that $T$ is Lebesgue, $o\tau$-continuous, $o\tau$-bounded, $o\tau$-compact, $KB$, or Levi imply that $S$ has the same property? } \end{quest} \noindent Question \ref{dominated Lebesgue} has positive answers in the following cases: \begin{enumerate} \item[$<a>$] \ trivially, for Lebesgue, $\sigma$-Lebesgue, and $o\tau$-bounded operators from a vector lattice to a locally solid lattice; \item[$<b>$] \ for $o\tau$-compact operators from a vector lattice $X$ to a locally convex Lebesgue lattice $(Y,\tau)$ which is either Dedekind complete or $\tau$-complete, by Theorem \ref{Thm.5.10 of AB2}; \item[$<c>$] \ for $o|\sigma|(Y,Y')$-compact operators from a Banach lattice $X$ to a Banach lattice $Y$ \cite[Thm.5.11]{AB2}; \item[$<d>$] \ for $KB$/$\sigma$-$KB$ lattice homomorphisms between locally solid lattices, by Corollary \ref{KB lattice homomorphism dominated property}; \item[$<e>$] \ for quasi $KB$ operators between locally solid lattices, by Theorem \ref{quasi $KB$ dominated property}; \item[$<f>$] \ for quasi Levi operators from a locally solid lattice to a vector lattice, by Theorem \ref{quasi Levi dominated property}. \end{enumerate} We recall that the modulus $|T|$ of an order bounded disjointness preserving operator $T$ between vector lattices $X$ and $Y$ exists and satisfies $|T||x|=|T|x||=|Tx|$ for all $x\in X$ \cite[Thm.2.40]{AB2}; moreover, there exist lattice homomorphisms $R_1,R_2:X\to Y$ with $T = R_1-R_2$ (cf. \cite[Exer.1, p.130]{AB2}). \begin{theorem}\label{KB disj pres dominated property} Let $T$ be an order bounded disjointness preserving $KB$/$\sigma$-$KB$ operator between locally solid lattices $(X,\varsigma)$ and $(Y,\tau)$. If $|S|\le|T|$, then $S$ is $KB$/$\sigma$-$KB$.
\end{theorem} \begin{proof} Take a $\varsigma$-bounded increasing net/sequence $x_\alpha$ in $X_+$. Since $T$ is $KB$/$\sigma$-$KB$, we have $T(x_\alpha-x)\convtau 0$ for some $x\in X$. It follows from $$ |S(x_\alpha-x)|\le|S||x_\alpha-x|\le|T||x_\alpha-x|=|Tx_\alpha-Tx|\convtau 0 $$ that $Sx_\alpha\convtau Sx$, as desired. \end{proof} Since $0\le S\le T$ with a lattice homomorphism $T$ implies that $S$ is also a lattice homomorphism, the next result is a direct consequence of Theorem \ref{KB disj pres dominated property}. \begin{cor}\label{KB lattice homomorphism dominated property} Let $T$ be a $KB$/$\sigma$-$KB$ lattice homomorphism between locally solid lattices $(X,\varsigma)$ and $(Y,\tau)$. Then each $S$ satisfying $0\le S\le T$ is also a $KB$/$\sigma$-$KB$ lattice homomorphism. \end{cor} \begin{theorem}\label{quasi $KB$ dominated property} Let $T$ be a positive quasi $KB$ operator between locally solid lattices $(X,\varsigma)$ and $(Y,\tau)$. Then each operator $S:X\to Y$ with $0\le S\le T$ is also quasi $KB$. \end{theorem} \begin{proof} Let $0\le S\le T$ and let $x_\alpha$ be an increasing $\varsigma$-bounded net in $X_+$. Since $T\ge 0$, we have $Tx_\alpha\uparrow$, and since $T$ is quasi $KB$, the net $Tx_\alpha$ is $\tau$-Cauchy. Take $U\in\tau(0)$ and a solid neighborhood $V\in\tau(0)$ with $V-V\subseteq U$. There exists $\alpha_0$ satisfying $T(x_\alpha-x_\beta)\in V$ for all $\alpha,\beta\ge\alpha_0$. In particular, $T(x_\alpha-x_{\alpha_0})\in V$ for all $\alpha\ge\alpha_0$. Since $0\le S\le T$ and $V$ is solid, $S(x_\alpha-x_{\alpha_0})\in V$ for all $\alpha\ge\alpha_0$. Thus, we obtain $$ Sx_\alpha-Sx_\beta=S(x_\alpha-x_{\alpha_0})-S(x_\beta-x_{\alpha_0})\in V-V\subseteq U \eqno(5) $$ for all $\alpha,\beta\ge\alpha_0$. Since $U\in\tau(0)$ was taken arbitrarily, $(5)$ implies that $Sx_\alpha$ is also $\tau$-Cauchy. \end{proof} \begin{theorem}\label{quasi Levi dominated property} Let $T$ be a positive quasi Levi operator from a locally solid lattice $(X,\varsigma)$ to a vector lattice $Y$. Then each operator $S:X\to Y$ with $0\le S\le T$ is also quasi Levi. \end{theorem} \begin{proof} Let $T:X\to Y$ be quasi Levi and let $x_\alpha$ be a positive $\varsigma$-bounded increasing net in $(X, \varsigma)$. Then $Tx_\alpha$ is $o$-Cauchy in $Y$. So, there exists a net $z_\beta\downarrow 0$ in $Y$ such that, for each $\beta$, there exists $\alpha_\beta$ such that $|Tx_{\alpha_1}-Tx_{\alpha_2}|\le z_\beta$ for all $\alpha_1, \alpha_2\ge \alpha_\beta$. Choosing $\alpha_1, \alpha_2\ge \alpha_\beta$ for a fixed $\alpha_\beta$, we have $$ Sx_{\alpha_1}-Sx_{\alpha_2}\le S(x_{\alpha_1}-x_{\alpha_\beta})\le T(x_{\alpha_1}-x_{\alpha_\beta}) \le z_\beta, $$ and similarly $$ Sx_{\alpha_2}-Sx_{\alpha_1}\le S(x_{\alpha_2}-x_{\alpha_\beta})\le T(x_{\alpha_2}-x_{\alpha_\beta}) \le z_\beta . $$ Thus $|Sx_{\alpha_1}-Sx_{\alpha_2}|\le z_\beta$ for all $\alpha_1, \alpha_2\ge \alpha_\beta$, showing that the operator $S$ is also quasi Levi. \end{proof} Any order dense sublattice is regular by \cite[Thm.1.27]{AB1}. Also, a locally solid lattice $(X,\varsigma)$ is a regular sublattice of its $\varsigma$-completion $\hat{X}$ iff every $\varsigma$-Cauchy net $x_\alpha$ in $X_+$ such that $x_\alpha\convo 0$ in $X$ satisfies $x_\alpha\convvars 0$ \cite[Thm.2.41]{AB1}. \begin{lem}\label{sveta1} Let $Z$ be a regular sublattice of a vector lattice $X$ and let $T$ be an $o\tau$-continuous operator from $X$ to a topological vector space $(Y,\tau)$. Then the restriction $T|_Z: Z\to Y$ is also $o\tau$-continuous.
\end{lem} \begin{proof} Under the assumptions of the lemma, for each net $z_\alpha$ in $Z$, $z_\alpha\convo 0$ in $X$ if $z_\alpha\convo 0$ in $Z$, by \cite[Lem.2.5]{GTX}. Therefore the result follows directly from the definition of an $o\tau$-continuous operator. \end{proof} \noindent Similarly for $uo$-null nets: if $Z$ is a regular sublattice of $X$, then $x_\alpha\convuo 0$ in $Z$ iff $x_\alpha\convuo 0$ in $X$ by \cite[Thm.3.2]{GTX}, and hence, for a $uo\tau$-continuous operator $T$ from a vector lattice $X$ to a topological vector space $(Y,\tau)$, the restriction $T|_Z: Z\to Y$ is also $uo\tau$-continuous. \begin{lem}\label{sveta2} Let $T$ be a positive Lebesgue operator from a vector lattice $X$ to a locally solid lattice $(Y,\tau)$. If $Z$ is a regular sublattice of $X$, then the restriction $T|_Z: Z\to Y$ is $o\tau$-continuous. \end{lem} \begin{proof} $T$ is $o\tau$-continuous by Lemma \ref{PC1}. It follows from Lemma \ref{sveta1} that $T|_Z: Z\to Y$ is $o\tau$-continuous. \end{proof} \noindent Since a vector lattice $X$ is order dense in $X^\delta$, and hence is regular in $X^\delta$, the next result follows immediately from Lemma \ref{sveta2}. \begin{prop}\label{sveta3} If an operator $T$ from the Dedekind completion $X^\delta$ of a vector lattice $X$ to a locally solid lattice $(Y,\tau)$ is Lebesgue, then the operator $T|_X$ is $o\tau$-continuous. \end{prop} Each order continuous lattice homomorphism $T$ from a Fatou lattice $(X,\varsigma)$ to a Lebesgue lattice $(Y,\tau)$ is $\varsigma\tau$-continuous on order intervals of $X$ by \cite[Thm.4.23]{AB1}. Since $T$ is also Lebesgue under the given conditions, the following can be considered as a generalization of \cite[Thm.4.23]{AB1}. \begin{theorem}\label{restricted to order bounded subsets} Let $(X,\varsigma)$ be a Fatou lattice, $(Y,\tau)$ a Lebesgue lattice, and $T:X\to Y$ a Lebesgue lattice homomorphism. Then $T$ is $\varsigma\tau$-continuous when restricted to order bounded subsets of $X$. \end{theorem} \begin{proof} Let ${\cal N}$ be a base of solid neighborhoods for $\tau(0)$. Since $T$ is a Lebesgue lattice homomorphism, the family $\{T^{-1}(V)\}_{V\in{\cal N}}$ is a base of solid neighborhoods of zero for a Lebesgue (not necessarily Hausdorff) topology $\zeta$ on $X$. Since $\varsigma$ is Fatou and $\zeta$ is Lebesgue, on any order bounded subset of $X$ the topology induced by $\varsigma$ is finer than the topology induced by $\zeta$, due to Theorem 4.21 of \cite{AB1}. Clearly, $T:X\to Y$ is $\zeta\tau$-continuous. Thus, $T$ is also $\varsigma\tau$-continuous on order bounded subsets of $X$. \end{proof} Recall that, for a locally solid lattice $(X,\varsigma)$, a vector $\hat{v}\in \hat{X}_+$ is called an {\em upper element} of $X$ whenever there exists an increasing net $u_\alpha$ in $X_+$ such that $u_\alpha\stackrel{\hat{\varsigma}}{\to} \hat{v}$ (cf. \cite[Def.5.1]{AB1}). In a metrizable locally solid lattice $(X,\varsigma)$, upper elements can be described by sequences: a vector $\hat{v}\in \hat{X}_+$ is an upper element of $X$ iff there exists an increasing sequence $u_n$ in $X_+$ such that $u_n\stackrel{\hat{\varsigma}}{\to} \hat{v}$ (cf. \cite[Thm.5.2]{AB1}). For a metrizable $\sigma$-Lebesgue lattice $(X,\varsigma)$, its topological completion $(\hat{X}, \hat{\varsigma})$ is also $\sigma$-Lebesgue (cf. \cite[Thm.5.36]{AB1}).
\begin{theorem}\label{sveta4} Let $(X,\varsigma)$ be a metrizable locally solid lattice, $(Y,\tau)$ a $\sigma$-Lebesgue lattice, and $T:(X,\varsigma)\to(Y,\tau)$ a positive $\varsigma\tau$-continuous $\sigma$-Lebesgue operator. Then the unique extension $\hat{T}:(\hat{X}, \hat{\varsigma}) \to (\hat{Y}, \hat{\tau})$ is positive, $\hat{\varsigma}\hat{\tau}$-continuous, and $\sigma$-Lebesgue. \end{theorem} \begin{proof} Clearly, $\hat{T}$ is positive and $\hat{\varsigma}\hat{\tau}$-continuous. To show that $\hat{T}$ is $\sigma$-Lebesgue, assume $\hat{x}_n\downarrow 0$ in $(\hat{X},\hat{\varsigma})$. Let $W\in\tau(0)$. We have to show $\hat{T}\hat{x}_n\in\overline{W}$ for large enough $n$. Take a solid neighborhood $W_1\in\tau(0)$ with $W_1 + W_1 \subseteq W$ and choose a neighborhood $V\in\varsigma(0)$ with $T(V) \subseteq W_1$. Let $\{V_n\}_{n=1}^{\infty}$ be a base for $\varsigma(0)$ satisfying $V_{n+1}+ V_{n+1} \subseteq V_n$ for all natural $n$ and also $V_1+V_1\subseteq V$. At this point we borrow the second paragraph of the proof of \cite[Thm.5.36]{AB1}. For each $n$, let $\hat{v}_n\in\hat{X}$ be an upper element of $X$ such that $\hat{x}_n\le\hat{v}_n$ and $\hat{v}_n-\hat{x}_n\in\overline{V}_n$; see \cite[Thm.5.4]{AB1}. Put $\hat{w}_n:=\wedge^n_{i=1}\hat{v}_i$ for all $n$ and note that each $\hat{w}_n$ is an upper element of $X$, $\hat{w}_n\downarrow$, and $\hat{w}_n-\hat{x}_n\in\overline{V}_n$. Since $\hat{x}_n\downarrow 0$, it follows from \cite[Thm.2.21(e)]{AB1} that $\hat{w}_n\downarrow 0$ also holds in $\hat{X}$. Thus, we can assume without loss of generality that $\hat{x}_n$ is a sequence of upper elements of $X$. Take $u_n\in X_+$ with $u_n \le \hat{x}_n$ and $\hat{x}_n - u_n \in \overline{V}_n$. Application of \cite[Thm.2.21(e)]{AB1} to the nets $\hat{x}_n\downarrow 0$ and $z_n:=\wedge^n_{i=1}u_i\downarrow$ with $\hat{x}_n-z_n\convvarsh 0$ gives $z_n\downarrow 0$ in $X$. As $T$ is $\sigma$-Lebesgue, $Tz_n\convtau 0$. So, there exists some $n_1$ such that $$ \hat{T}z_n=Tz_n\in W_1\subseteq\overline{W}_1\ \ \ \ \ (\forall n\ge n_1). \eqno(6) $$ Observe that $$ 0\le \hat{x}_n - z_n = \hat{x}_n - \wedge^n_{i=1} u_i = \vee^n_{i=1}(\hat{x}_n - u_i) \le \vee^n_{i=1}(\hat{x}_i - u_i)\le $$ $$ \sum^n_{i=1}(\hat{x}_i - u_i) \in\overline{V}_1 + \overline{V}_2 + \dotsc + \overline{V}_n \subseteq \overline{V} \ \ \ (\forall n\in\mathbb{N}). \eqno(7) $$ It follows from $(6)$, $(7)$, and $\hat{T}(\overline{V}) \subseteq \overline{W}_1$ that $$ \hat{T}(\hat{x}_n) = \hat{T}(z_n)+\hat{T}(\hat{x}_n - z_n ) \in \overline{W}_1 + \overline{W}_1 \subseteq \overline{W} \ \ \ (\forall \ n\ge n_1). $$ Since $W\in\tau(0)$ was arbitrary, $\hat{T}$ is $\sigma$-Lebesgue. \end{proof} \noindent Since Theorem 5.36 of \cite{AB1} follows from Theorem \ref{sveta4} when $(X,\varsigma)$ coincides with $(Y,\tau)$ and $T$ is the identity operator on $X$, Theorem \ref{sveta4} can also be considered as a generalization of \cite[Thm.5.36]{AB1}. One more question deserves to be mentioned. Let $T$ be Lebesgue, $o\tau$-continuous, $o\tau$-bounded, $o\tau$-compact, $KB$, or Levi. Does $|T|$ satisfy the same property?
The answer is positive, when $T$ maps a vector lattice $X$ to a Dedekind complete locally solid lattice $(Y,\tau)$, in the following cases: \begin{enumerate} \item[$(*)$] \ for a Lebesgue/$o\tau$-continuous order continuous operator $T$, when $(Y,\tau)$ is Lebesgue, since $|T|$ is order continuous by \cite[Thm.1.56]{AB2} and hence $x_\alpha\downarrow 0$/$x_\alpha\convo 0$ in $X$ imply $|T|x_\alpha\convo 0$ and (since $(Y,\tau)$ is Lebesgue) $|T|x_\alpha\convtau 0$; \item[$(**)$] \ for an $o\tau$-bounded operator $T$, by Proposition \ref{regular are otau-bounded}. \end{enumerate} \begin{rem} {\em Let $(X,\varsigma)$ and $(Y,\tau)$ be locally solid lattices. \begin{enumerate} \item[$a)$] \ Recall that $(Y,\tau)$ is said to be {\em boundedly order-bounded} (BOB) whenever increasing $\tau$-bounded nets in $Y_+$ are order bounded in $Y$ (cf. \cite[Def.3.19]{Tay1}). Assume $(Y,\tau)$ to be Dedekind complete and BOB. Then each positive quasi $KB$ operator from $(X,\varsigma)$ to $(Y,\tau)$ is quasi Levi. Indeed, let $x_\alpha$ be an increasing $\varsigma$-bounded net in $X_+$. Then $Tx_\alpha$ is a $\tau$-Cauchy increasing net in $Y_+$; in particular, $Tx_\alpha$ is $\tau$-bounded and hence order bounded, since $(Y,\tau)$ is BOB. The Dedekind completeness of $Y$ implies that $Tx_\alpha\uparrow y$ for some $y\in Y$. Therefore $Tx_\alpha$ is $o$-Cauchy, as desired. \item[$b)$] \ Recall that $(X, \varsigma)$ satisfies the {\em $B$-property} (cf. \cite[Def.3.1]{Tay1}) whenever the identity operator $I$ on $X$ is quasi $\sigma$-$KB$. Let $T:(X,\varsigma)\to(Y,\tau)$ be a continuous operator and assume the topology $\varsigma$ on $X$ to be minimal. Then $T$ is Lebesgue and quasi $KB$. The first part follows as $(X, \varsigma)$ is Lebesgue by \cite[Thm.7.67]{AB1}, and hence $T: (X, \varsigma) \to (Y, \tau )$ is Lebesgue. The second part follows as $(X, \varsigma)$ satisfies the $B$-property \cite[Prop.3.2]{Tay1}, and hence $T$ is quasi $\sigma$-$KB$; thus $T$ is quasi $KB$ by Proposition \ref{quasi-KB-vs-sigma-quasi-KB}. \end{enumerate} \noindent Similarly, each regular operator from a vector lattice $X$ to a locally solid lattice $(Y, \tau)$ with minimal topology $\tau$ is quasi Lebesgue by \cite[Cor.3.3]{Tay1}. } \end{rem} \section{Operators between Banach lattices} In this section we investigate operators whose domains and ranges are Banach lattices. It is clear that every continuous operator from an order continuous Banach lattice to a topological vector space is Lebesgue. The following proposition generalizes \cite[Prop.2]{AEEM1}. \begin{prop}\label{Prop.2 from 1AEEM} An operator $T$ from a $\sigma$-order continuous Banach lattice $X$ to a metrizable topological vector space $(Y,\tau)$ is continuous iff $T$ is $\sigma{}o\tau$-continuous. \end{prop} \begin{proof} The necessity is trivial. For the sufficiency, let $T$ be $\sigma{}o\tau$-continuous, $x_\alpha\convnX x$, and let $x_{\alpha_\beta}$ be a subnet of the net $x_\alpha$. By $a)$ of Remark \ref{sequential}, there exists a sequence $\beta_n$ of indices with $\|x_{\alpha_{\beta_n}}-x\|_X\to 0$. Since $X$ is a Banach lattice, there is a subsequence $x_{\alpha_{\beta_{n_k}}}$ of $x_{\alpha_{\beta_n}}$ satisfying $x_{\alpha_{\beta_{n_k}}}\convo x$ in $X$ (see \cite[Thm.VII.2.1]{V}). Since $T$ is $\sigma{}o\tau$-continuous, $Tx_{\alpha_{\beta_{n_k}}}\convtau Tx$. Remark \ref{sequential} implies $Tx_\alpha\convtau Tx$, as required. \end{proof} In the next proposition we transfer the $\sigma$-condition from the domain $X$ of $T$ to the operator $T$.
\begin{prop}\label{sigma-otau-cont to cont} Each $\sigma{}o\tau$-continuous operator $T$ from a Banach lattice $X$ to a Banach lattice $Y$ is continuous. \end{prop} \begin{proof} It is sufficient to show that $\|Tx_n\|\to 0$ for every norm-null sequence $x_n$ in $X$. So, let $\|x_n\|\to 0$ in $X$. For proving $\|Tx_n\|\to 0$, in view of $b)$ of Remark \ref{sequential}, it is enough to show that, for each subsequence $Tx_{n_k}$ of $Tx_n$, there exists a sequence $n_{k_i}$ of indices with $\|Tx_{n_{k_i}}\|\to 0$. Let $Tx_{n_k}$ be a subsequence of $Tx_n$. By \cite[Thm.VII.2.1]{V}, $x_{n_k}$ has a further subsequence $x_{n_{k_i}}$ with $x_{n_{k_i}}\convo 0$, and since $T$ is $\sigma{}o\tau$-continuous, $\|Tx_{n_{k_i}}\|\to 0$, as desired. \end{proof} \begin{theorem}\label{see Thm. 5.28 from AB2} Let $T:X\to Y$ be an order bounded operator between Banach lattices $X$ and $Y$, let the norms in $X'$ and $Y$ be order continuous, and let $Y$ be weakly sequentially complete. Then $T$ is a quasi $KB$-operator. \end{theorem} \begin{proof} Since $Y$ has order continuous norm, $Y$ is Dedekind complete, and hence the operator $T$ is regular. Therefore, we may assume $T\ge 0$. By Proposition \ref{quasi-KB-vs-sigma-quasi-KB}, it suffices to prove that $T$ is quasi $\sigma$-$KB$. Let $x_n$ be a $\|\cdot\|_X$-bounded increasing sequence in $X_+$. Then $Tx_n$ is $\|\cdot\|_Y$-bounded and $0\le Tx_n\uparrow$. It follows from \cite[Thm.5.28]{AB2} that $Tx_n$ has a weak Cauchy subsequence $Tx_{n_k}$. Since $Tx_n\uparrow$, the whole sequence $Tx_n$ is weak Cauchy, and hence, by the weak sequential completeness of $Y$, $Tx_n\convw y$ for some $y\in Y$. Since $Tx_n\uparrow$, it follows from \cite[Thm.3.52]{AB2} that $Tx_n\convnY y$, and hence $Tx_n$ is $\|\cdot\|_Y$-Cauchy, as required. \end{proof} As $c_0$ is not a $KB$-space, the identity operator $I:c_0\to c_0$ is not quasi $KB$ by Remark \ref{c_00} $a)$ and $d)$. Thus, the weak sequential completeness of $Y$ is essential in Theorem \ref{see Thm. 5.28 from AB2}. \begin{exam}\label{c_w(R)} {\em Let $X=(c_\omega(\mathbb{R}),\|\cdot\|_\infty)$ be the Banach lattice of all bounded $\mathbb{R}$-valued functions on $\mathbb{R}$ such that each $f\in X$ differs from a constant, say $a_f$, on a countable subset of $\mathbb{R}$. It is easy to see that $X$ is Dedekind $\sigma$-complete yet not Dedekind complete. We define a positive operator $T:X\to X$ as follows: $Tf$ is the constant function $a_f\cdot\mathbb{I}_{\mathbb{R}}\in X$, where $a_f$ is the constant for which the set $\{d\in\mathbb{R}: f(d)\ne a_f\}$ is countable. Clearly, $T$ is quasi $KB$ and hence quasi Lebesgue. \begin{enumerate} \item[$(i)$] \ For the sequence $f_n:=\mathbb{I}_{\{1,2,\ldots,n\}}\in X$ we have $f_n\convo\mathbb{I}_{\mathbb{N}}\in X$, yet $\|f_n-\mathbb{I}_{\mathbb{N}}\|_\infty=1$ for all $n$. Thus, the norm in $X$ is not $\sigma$-order continuous. In particular, Proposition \ref{Prop.2 from 1AEEM} is not applicable to $T$. \item[$(ii)$] \ $T$ is a rank-one continuous $($and therefore quasi $KB$, compact, and $o\tau$-compact$)$ operator on $X$. Indeed, let $f_n\convo 0$. Since for every $\varepsilon>0$ there exists $n_\varepsilon$ such that the set $\cup_{n\ge n_\varepsilon}\{d\in\mathbb{R}: |f_n(d)|\ge\varepsilon\}$ is countable, $\|Tf_n\|_\infty<\varepsilon$ for all $n\ge n_\varepsilon$. Thus, $T$ is $\sigma{}o\tau$-continuous and hence $\sigma$-Lebesgue with respect to the norm topology on $X$. \item[$(iii)$] \ The operator $T$ is not Lebesgue.
Indeed, for the net $f_\alpha:=\mathbb{I}_{\mathbb{R}\setminus\alpha}\in X$, indexed by the family $\Delta$ of all finite subsets of $\mathbb{R}$ ordered by inclusion, we have $f_\alpha\downarrow 0$, yet $\|Tf_\alpha\|_\infty=\|\mathbb{I}_{\mathbb{R}}\|_\infty=1$ for all $\alpha\in\Delta$. The same argument shows that $T$ is not weakly Lebesgue. Since, for each disjoint sequence $f_n$ in $X$, $Tf_k\ne 0$ for at most one $k$, the operator $T$ is $M$-weakly compact. It should be clear that $T$ is not $L$-weakly compact. \item[$(iv)$] \ $T$ is $KB$. Indeed, let $X\ni f_\alpha\uparrow$ and $\|f_\alpha\|\le M\in\mathbb{R}$. Then $a_{f_\alpha}\uparrow$ and $a_{f_\alpha}\le M$. Take $f:=a\cdot\mathbb{I}_{\mathbb{R}}\in X$ for $a=\sup_\alpha a_{f_\alpha}\in\mathbb{R}$. Clearly, $\|T(f-f_\alpha)\|\downarrow 0$, as desired. \item[$(v)$] \ Clearly, $T$ is a Dunford-Pettis lattice homomorphism. By Corollary \ref{KB lattice homomorphism dominated property}, the span of $[0,T]$ is a vector sublattice of $KB$-operators in the ordered space $L_r(X)$ of all regular operators on $X$. \item[$(vi)$] \ The vector lattice $X$ is $\tau$-laterally $\sigma$-complete yet not $\tau$-laterally complete with respect to the norm topology on $X$. \end{enumerate} } \end{exam} Observe that the $KB$-operator $T:\ell_1\to \ell_\infty$ defined by $[T(a)]_k:=\sum_{n=1}^{\infty}a_n$ for all $k\in\mathbb{N}$ is neither $L$- nor $M$-weakly compact (cf., e.g., \cite[p.322]{AB2}). In the present paper we do not investigate conditions under which $KB$-operators are (weakly) Lebesgue. The following lemma generalizes the observation that the identity operator on a Banach lattice $X$ is quasi $KB$ iff $X$ is a $KB$-space. \begin{lem}\label{Ghoussoub-Johnson 1} If $c_0$ does not embed in a Banach space $Y$, then every continuous operator $T$ from any Banach lattice $X$ to $Y$ is a quasi $KB$ operator. \end{lem} \begin{proof} It follows from \cite[Thm.4.63]{AB2} that $T=S\circ Q$, where $Q$ is a lattice homomorphism from $X$ to a $KB$-space $Z$ (and hence $Q$ is a quasi $KB$-operator by $d)$ of Remark \ref{c_00}) and $S$ is a continuous operator from $Z$ to $Y$. Now apply the fact that the class of quasi $KB$-operators is closed under left composition with continuous operators. \end{proof} \begin{theorem}\label{Ghoussoub-Johnson 2} Let $X$ be a Banach lattice and $Y$ a Banach space. TFAE. \begin{enumerate} \item[$(i)$] \ Every continuous operator $T:X\to Y$ is quasi $KB$. \item[$(ii)$] \ $c_0$ does not embed in $Y$. \end{enumerate} \end{theorem} \begin{proof} $(ii)\Rightarrow(i)$ is Lemma \ref{Ghoussoub-Johnson 1}.\ $(i)\Rightarrow(ii)$ It suffices to prove that any embedding $J:c_0\to Y$ is not a quasi $KB$-operator. Assume, by way of contradiction, that such a $J$ is quasi $KB$ and let $x_n=\sum_{k=1}^n e_k\in c_0$. The increasing sequence $x_n$ is norm bounded, yet the sequence $Jx_n$ is not $\|\cdot\|_Y$-Cauchy because $$ \|Jx_n-Jx_m\|_Y=\left\|J\sum_{k=m+1}^n e_k\right\|\ge \frac{1}{\|J^{-1}|_{J(c_0)}\|}>0 \ \ \ \ (n> m). $$ The obtained contradiction completes the proof. \end{proof} Among other things, Theorem \ref{Ghoussoub-Johnson 2} asserts that, in the Banach space setting, the concept of continuous quasi $KB$-opera\-tor does not depend on the domain $X$ and is completely determined by the property that the range space $Y$ does not contain an isomorphic copy of $c_0$.
In light of this observation, the last sentence in item $d)$ of Remark \ref{c_00} reduces to the condition $1\le p<\infty$, under which $\overline{c_{00}}^{\|\cdot\|_p}=\ell_p$ is a $KB$-space, and to the condition $1\le l\le p$, under which the identity operator $I$ from $(c_{00},\|\cdot\|_l)$ to $(c_{00},\|\cdot\|_p)$ is continuous. Now we pass to a discussion of $KB$ operators. \begin{theorem}\label{when KB} Let $T:X\to Y$ be a regular operator between Banach lattices. If $X'$ has order continuous norm and $c_0$ does not embed in $Y$, then $T$ is $KB$. \end{theorem} \begin{proof} Without loss of generality, we may suppose $T\ge 0$. By \cite[Thm.4.63]{AB2}, $T$ admits a factorization $T = S\circ Q$, where $Z$ is a $KB$-space and $Q:X\to Z$ is a lattice homomorphism. Let $x_\alpha$ be an increasing norm bounded net in $X_+$. Since $Q\ge 0$, $Qx_\alpha$ is an increasing norm bounded net in $Z_+$, and since $Z$ is a $KB$-space, there exists $z\in Z$ with $\|Qx_\alpha-z\|\to 0$, and hence $\|Tx_\alpha-Sz\|\to 0$, as desired. \end{proof} The following theorem gives conditions under which each positive weakly compact operator from a Banach lattice $X$ to an arbitrary $KB$-space $Y$ is $\sigma{}o\tau$-continuous. \begin{theorem}\label{AB2 Thm 5.42} For a Banach lattice $X$ TFAE. \begin{enumerate} \item[$(i)$] \ The image $i(X)$ under the natural embedding $i:X\to X''$ is a regular sublattice of $X''$. \item[$(ii)$] \ Each positive weakly compact operator $T$ from $X$ to a $KB$-space $Y$ is $\sigma{}o\tau$-continuous. \item[$(iii)$] \ Each positive compact operator $T$ from $X$ to a $KB$-space $Y$ is $\sigma{}o\tau$-continuous. \item[$(iv)$] \ $T^2$ is $\sigma{}o\tau$-continuous for every positive weakly compact operator $T$ in $X$. \item[$(v)$] \ $T^2$ is $\sigma{}o\tau$-continuous for every positive compact operator $T$ in $X$. \item[$(vi)$] \ $T^2$ is Lebesgue for every positive compact operator $T$ in $X$. \end{enumerate} \end{theorem} \begin{proof} $(i)\Rightarrow(ii)$ \ Let $T$ be a positive weakly compact operator from $X$ to a $KB$-space $Y$. By \cite[Thm.5.42]{AB2}, $T=R\circ S$, where both $S:X\to Z$ and $R:Z\to Y$ are positive and $Z$ is a reflexive Banach lattice. Let $x_\alpha\convo 0$ in $X$. Since $i(X)$ is regular in $X''$, $i(x_\alpha)\convo 0$ in $X''$ (e.g., by \cite[Thm.1.20]{AB1}). Since $S'':X''\to Z''$ is order continuous (e.g., by \cite[Thm.1.73]{AB2}), $S''(i(x_\alpha))\convo 0$ in $Z''$, and hence $Sx_\alpha=S''(i(x_\alpha))\convo 0$ in $Z$. The order continuity of the norm in $Z$ implies $\|Sx_\alpha\|_Z\to 0$ and hence $\|Tx_\alpha\|_Y=\|R\circ S(x_\alpha)\|_Y\to 0$, showing that $T$ is $\sigma{}o\tau$-continuous. $(ii)\Rightarrow(iii)$ and $(iv)\Rightarrow(v)\Rightarrow(vi)$ \ are trivial. $(iii)\Rightarrow(i)$ \ If $i(X)$ is not regular in $X''$, then (e.g., by \cite[Thm.3.12]{AB1}) there exists a positive functional $f\in X'$ which is not order continuous. Thus $f:X\to\mathbb{R}$ is positive and compact yet not $\sigma{}o\tau$-continuous, as $f(x_\alpha)$ does not converge to $0$ in the $KB$-space $Y=\mathbb{R}$ for some net with $x_\alpha\convo 0$ in $X$. The obtained contradiction proves that $i(X)$ must be regular in $X''$. $(i)\Rightarrow(iv)$ \ The proof is similar to the proof of $(i)\Rightarrow(ii)$ above, with the only difference that the factorization of $T^2$ by \cite[Cor.5.46]{AB2} must be used instead of the factorization of $T$ by \cite[Thm.5.42]{AB2}. $(vi)\Rightarrow(i)$ \ The proof is again similar to the proof of $(iii)\Rightarrow(i)$ above.
If $i(X)$ is not regular in $X''$, then there exists a positive functional $f\in X'$ which is not order continuous. Take any $z\in X_+$ with $f(z)=1$ and define a positive compact operator $T:=f\otimes z$ on $X$, that is, $Tx:=f(x)z$. As $f$ is not order continuous, there is a net $x_\alpha$ with $x_\alpha\downarrow 0$ and $f(x_\alpha)\ge 1$ for all $\alpha$. Therefore $$ T^2(x_\alpha)=T(f(x_\alpha)z)=f(x_\alpha)Tz=f(x_\alpha)z\ge z>0 \ \ \ (\forall \alpha), $$ violating the assumption that $T^2$ is Lebesgue. The obtained contradiction completes the proof. \end{proof} \begin{prop}\label{when otau bounded is KB} Let $T:X\to Y$ be an $o\tau$-bounded operator from a Banach lattice $X$ to a Banach space $Y$. If $c_0$ does not embed in $Y$, then $T$ is $\sigma$-$KB$. \end{prop} \begin{proof} By \cite[Thm.3.4.6]{MN}, $T$ factors over a $KB$-space $Z$ as $T=S\circ Q$, where $Q: X \to Z$ is a lattice homomorphism and $S:Z\to Y$ is continuous. Let $x_n$ be a norm bounded increasing sequence in $X_+$. Since $Q$ is positive, $Qx_n$ is a positive norm bounded increasing sequence in the $KB$-space $Z$. Thus, $Qx_n$ is norm convergent in $Z$. As $S$ is continuous, $Tx_n=SQx_n$ is norm convergent, and hence $T$ is $\sigma$-$KB$. \end{proof} \begin{prop} Let $T:X\to Y$ be a continuous operator from a Banach lattice $X$ to a Banach space $Y$. If either $T''$ is order-weakly compact, or the norm in $X$ is order continuous and $T$ does not preserve a sublattice isomorphic to $c_0$, then $T$ is $\sigma$-$KB$. \end{prop} \begin{proof} The operator $T:X\to Y$ factors over a $KB$-space $Z$ as $T = S\circ Q$, where $Q: X\to Z$ is a lattice homomorphism and $S : Z\to Y$ is continuous, by \cite[Thm.3.5.8]{MN}. Let $x_n$ be a norm bounded increasing sequence in $X_+$; then $Qx_n$ is also norm bounded and increasing in $Z_+$. As $Z$ is a $KB$-space, $Qx_n$ is norm convergent in $Z$. Hence $(S\circ Q)x_n$ is convergent in $Y$ and $T$ is a $\sigma$-$KB$ operator. \end{proof} Note that if $T:X\to Y$ is a continuous operator from a Banach lattice $X$ to a Banach space $Y$ and $T$ preserves no subspace (or no sublattice) isomorphic to $c_0$, then $Tx_n$ is norm convergent for each increasing norm bounded sequence $x_n$ in $X_+$ by \cite[Thm.3.4.11]{MN}, and therefore $T$ is $\sigma$-$KB$. It follows from \cite[Thm.1]{Wick} that every $uo\tau$-continu\-ous operator $T$ from an atomic Banach lattice $X$ to a Banach space $Y$ is a Dunford-Pettis operator. By Theorem 5.57 of \cite{AB2}, a continuous operator $T:X\to Y$ from a Banach lattice $X$ to a Banach space $Y$ is $o$-weakly compact iff $\|Tx_n\|\to 0$ for each order bounded disjoint sequence $x_n$ in $X$. \begin{prop}\label{pos w-comp is levi} Each positive weakly compact operator $T$ between Banach lattices $X$ and $Y$ is Levi. \end{prop} \begin{proof} Let $x_\alpha$ be a norm bounded increasing net in $X_+$. The weak compactness of $T$ ensures that $Tx_{\alpha_\beta}\convw y$ for some $y\in Y$ and some subnet $x_{\alpha_\beta}$ of $x_\alpha$. Since $Tx_{\alpha_\beta}\uparrow$, the convergence $Tx_{\alpha_\beta}\convw y$ implies $Tx_{\alpha_\beta}\uparrow y$, and hence $Tx_\alpha\uparrow y$, which means that $T$ is a Levi operator. \end{proof} \begin{rem} {\em \begin{enumerate} \item[$a)$] \ It follows directly from \cite[Cor.3.4.12]{MN} that every continuous operator from a Banach lattice $X$ to a Banach space $Y$ that does not contain $c_0$ is quasi $KB$. \item[$b)$] \ Let $X$ be a Banach lattice and $Y$ be a Banach space.
Then every continuous operator $T:X\to Y$ that does not preserve $c_0$ is quasi $KB$ by \cite[Thm.3.4.11]{MN}. Under the same settings, it was observed in \cite[3.4.E4, p.203]{MN} that if $T''({\cal B}_X) \subseteq Y$, where ${\cal B}_X$ is the band generated by $X$ in $X''$, then $T$ does not preserve a subspace isomorphic to $c_0$. Note that $b$-weakly compact operators satisfy $T''({\cal B}_X) \subseteq Y$ whenever $X$ has order continuous norm. It follows that, under these conditions, each $b$-weakly compact operator is quasi $KB$ (see \cite[Prop.2.11]{AAT1}). \end{enumerate} } \end{rem}
{ "timestamp": "2022-03-16T01:10:08", "yymm": "2105", "arxiv_id": "2105.01810", "language": "en", "url": "https://arxiv.org/abs/2105.01810" }
\section{Introduction} \IEEEPARstart{T}{he} development of Industry 4.0 has profoundly promoted the transformation of the manufacturing industry toward intelligence and automation, leading to the integration of information network technology and industry. Meanwhile, as a critical part of Industry 4.0, the manufacturing industry also shows a tendency toward intelligence and high integration. More importantly, Industry 4.0 allows manufacturing factories distributed in different regions to dispatch resources flexibly, which places higher demands on transportation\cite{9068495}. According to related research\cite{9363013}, Intelligent Transportation Systems (ITS) have an advantage in improving safety, efficiency and driving comfort. Connected and Automated Vehicles (CAVs)\cite{wang2019survey} are an indispensable enabler for ITS, derived from advanced communication technology and emerging computing technology. CAVs are expected to revolutionize future ITS to solve the issues of safety\cite{rios2016automated}, efficiency\cite{fernandes2014multiplatooning}, and sustainability\cite{altan2017glidepath}. With CAVs, traffic optimization can occur at different places, including automated intersection management\cite{DBLP:journals/tcps/KhayatianMADCLS20}. Compared to conventional traffic light control, automated intersection management aims to achieve higher throughput while ensuring the safety of vehicles. The process of deciding the vehicle passing sequence is called scheduling. The three main categories of existing scheduling policies are: 1) First-Come First-Served\cite{DBLP:journals/jair/DresnerS08}: the first vehicle to arrive is served first and enters the intersection. 2) Optimization-Based\cite{DBLP:journals/tits/BichiouR19}: these approaches try to minimize the average travel time of all vehicles passing the intersection, and the passing sequence may differ from the arrival order. 3) Heuristic\cite{DBLP:conf/itsc/StevanovicM18}: the optimal solution is not guaranteed, but the solution found is sufficient for the immediate goal. Because the actions taken by CAVs depend on real-time driving conditions, which constitutes a typical Markov Decision Process (MDP), Reinforcement Learning (RL) is well suited to this problem. Guan et al. proposed an RL-based method to centrally guide a fixed number of vehicles through the intersection \cite{guan2020centralized}. Wu et al. decoupled the relationship between the identity and the driving information of vehicles and proposed a cooperative RL method to improve traffic efficiency while ensuring safety\cite{wu2020cooperative}. Jiang et al. proposed a two-stage RL method incorporating an end-edge-cloud architecture to achieve global optimization among multiple homogeneous intersections \cite{jiang2020multi}. However, low sample efficiency and limited safety performance make practical application difficult. Although some progress has been achieved in learning-based vehicle control at intersections, three problems still need to be solved at different intersections: 1) Isolation: For better vehicle control performance, it is essential to set up a local center to assist vehicle control, which means that some privacy-sensitive data is kept in the local center. Due to the privacy requirements of transport service providers, this data will not be transmitted to cloud nodes or other peer nodes.
2) Heterogeneity: Due to the different traffic densities at intersections, the generated experience data drive the resulting RL models to show different vehicle control capabilities. Therefore, conventional model parameter averaging may not meet the performance requirements at different intersections. 3) Scalability: As the number of CAV-enabled unsignalized intersections grows, the data generated by the vehicles increases. Any learning-based algorithm with a centralized design may find it difficult to handle such data due to the high computation and communication budget incurred. \begin{figure*} \centering \includegraphics[width=\textwidth]{General.pdf} \caption{Federated Imitation Learning for Distributed Vehicle Control. Vehicles take the role of interactors, collect the surrounding information and map other vehicles from Physical-Lanes (PLs) to Cyber-Lanes (CLs). The reorganized information is uploaded to the experience pool as a piece of experience. The edge trainers sample the experience to train a local model via the proposed imitation learning algorithm. Then, several local models are collected at the cloud node and aggregated into a global model. Next, the global model is distributed to the edge trainers for subsequent updates. After several iterations, the final vehicle control model is obtained.} \label{fig:General} \end{figure*} To cope with the above-mentioned challenges, a federated deep learning framework, as shown in Fig.\ref{fig:General}, is proposed for vehicle control policy acquisition at various intersections. The main contributions of this article are summarized as follows. \begin{itemize} \item This paper proposes a density-aware federated learning (FL) framework for unsignalized intersection control, reducing communication costs and supporting various vehicle densities. This framework includes four main components: data collection, experience upload, model training and model aggregation. \item An Imitation Learning (IL) algorithm is proposed to obtain a safety-oriented vehicle control policy, which trains the model with the experience produced by collision avoidance rules. Compared with the pure rule strategy, the model training process makes up for the lack of driving comfort. \item In response to limited communication resources, a loss-aware experience selection strategy is designed, which reduces communication overhead at the cost of extra computation. Trainers output and deliver a reference loss as a threshold, and each interactor compares its newly-generated experience with the threshold to decide whether to upload it. \item Extensive experiments are conducted, and the results demonstrate that IL can significantly improve safety and reduce discomfort by 55.71\%, that FL combined with IL can further reduce discomfort by 41.37\%, and that the experience selection strategy can reduce the communication overhead by up to 12.80\%. \end{itemize} The rest of this article is organized as follows: the system architecture is presented in Section \ref{sec:system}. Section \ref{sec:FIL} elaborates the details of the proposed federated deep learning framework and the IL algorithm for vehicle control. In Section \ref{sec:loss-aware}, a loss-aware experience selection strategy is presented. The performance evaluation and analysis are provided in Section \ref{sec:experiment}. Section \ref{sec:conclusion} concludes this article.
\section{System Architecture} \label{sec:system} This section considers a hierarchical network in an urban scenario, which consists of one Cloud Aggregator (CA), a number of Edge Trainers (ETs) and numerous Vehicle Interactors (VIs), as shown in Fig.\ref{fig:General}. The CA is deployed in the remote cloud and connects to a set of ETs via a reliable backhaul link. These ETs are denoted by $\mathbb{E} = \{E_{1}, E_{2}, \dots, E_{n}, \dots, E_{N}\}$, where $N$ is the number of ETs. Each ET $n$ serves its wirelessly connected VIs, denoted by $ \mathbb{V}_{n} = \{V_{1}, V_{2}, \dots, V_{i}, \dots, V_{m(t)}\}$. Note that, under an ET, the number of VIs, $m(t)$, varies over time $t$. Both ETs and VIs are equipped with powerful GPU computing servers. The ETs are used to train local models with the information uploaded by their connected VIs. The CA is used to produce a global model by combining the local models of the different ETs. Regarding unsignalized intersection control, three assumptions support our work, similar to related work\cite{bian2019cooperation}. \begin{itemize} \item Longitudinal vehicle control is the focus of this paper. All vehicles keep their original directions and go straight within the intersection area. \item All vehicles can measure kinetic information, strictly obey the determined acceleration, and communicate with adjacent nodes, i.e., edge trainers and adjacent vehicles. \item Communication latency and packet loss are not taken into consideration, for simplification. \end{itemize} The longitudinal motion of vehicles is given by \begin{equation} \begin{aligned} x^{long}(t + 1) & = x^{long}(t) - v(t)T - \frac{1}{2}a(t)T^2 \\ v(t + 1) & = v(t) + a(t)T \end{aligned}, \label{eq:x_v_t} \end{equation} where $x^{long}$ is the displacement, $v$ and $a$ are the velocity and acceleration respectively, and $T$ is the discrete-time step. The change of the vehicle motion state depends on the input, i.e., the acceleration, at the previous time step. This paper adopts distributed decision-making for vehicle control. That is to say, each vehicle constructs its own cyberspace and maps adjacent vehicles to cyber objects. As a result, vehicle $i$ decides its action $a_i$ based on its surroundings: \begin{equation} {a_i} = P(\overrightarrow {s_i} |\theta ), \end{equation} where $P(\cdot|\theta)$ is the $\theta$-parameterized policy for decision-making, $\overrightarrow {s_{i}}$ is the state vector of vehicle $i$, and $\overrightarrow {s_{i}} = ({s_i},\overrightarrow {s_{ - i}})$. Here $s_i$ is the state of the ego-vehicle $i$, including position, velocity and acceleration. Moreover, $\overrightarrow {{s_{ - i}}}$ is the state set of the vehicles other than vehicle $i$, and $|\overrightarrow {{s_{ - i}}}|$, the size of this set, is defined by a selection scheme. To simulate traffic flow, the number of vehicles arriving at each intersection during a period $t$ is defined as $V_{q}(t)$. It follows a Poisson process with parameter $\lambda$: \begin{equation} P\left(V_{q}(t)=g\right)=\frac{(\lambda t)^{g}}{g !} e^{-\lambda t}, \end{equation} where $g$ equals the number of vehicles generated in a period $t$. The introduction of the Poisson process means that vehicles are created dynamically, which is similar to real traffic.
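For illustration, the motion update of Eq.\ref{eq:x_v_t} and the Poisson arrival model above can be sketched in Python as follows. This is a minimal sketch; the function names and the use of NumPy are our own choices rather than part of the proposed platform.
\begin{verbatim}
import numpy as np

T = 0.1  # discrete-time step (s), from the experimental parameters

def step_longitudinal(x, v, a):
    # One update of the longitudinal model, following the paper's sign
    # convention in which the displacement decreases as the vehicle
    # advances toward the intersection.
    x_next = x - v * T - 0.5 * a * T ** 2
    v_next = v + a * T
    return x_next, v_next

def num_arrivals(lam, period, rng=None):
    # Number of vehicles entering during `period`, drawn from a
    # Poisson distribution with mean lam * period.
    if rng is None:
        rng = np.random.default_rng()
    return rng.poisson(lam * period)
\end{verbatim}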
As shown in Fig.\ref{fig:General}, the system is a three-layered architecture. The bottom layer contains the VIs requesting model downloads. The middle layer includes several ETs equipped with GPU computing servers and experience databases. The top layer has a global model aggregator. The multiple connected VIs interact with the environment and individually upload experience data to the local database. Each ET first generates a local model and acquires the experience from its connected VIs. Then, based on the received data, each ET utilizes its local GPU computing ability to compute a local model. Next, the ET sends the local model to the VIs for vehicle control and to the CA for global model aggregation. Finally, the CA aggregates the models and sends the global model back to each ET. The above steps are repeated until a satisfying global model is achieved. The model trained in this work is specially developed to output the vehicles' accelerations in response to the contextual information around the vehicle. \section{Density-aware federated imitation learning for vehicle control} \label{sec:FIL} This section elaborates on the proposed vehicle control scheme. First, the density-aware federated deep learning framework is described, including model download, experience upload, FL model training, updated model upload and weighted aggregation. Then, we introduce a set of collision avoidance rules as a basis for further optimization. Finally, with these rules, an IL algorithm is proposed for vehicle control. \subsection{Density-aware Federated Deep Learning Framework} FL enables the collaborative training of a deep neural network model among ETs, under the orchestration of a server in the CA, by keeping the training data on each ET at the intersections. It not only significantly reduces the privacy risk for vehicles but also dramatically reduces the communication overhead caused by centralized machine learning \cite{DBLP:journals/corr/KonecnyMYRSB16}. FL proceeds in multiple communication rounds (computing iterations). $N$ intersections with known traffic densities and the corresponding ETs are selected to conduct model training; the $N$ ETs are indexed by $n$. Each ET retrieves the global model from the CA and trains it, starting from its local model, with the data collected from the vehicles in the intersection area. Following the local training at the ETs, the updated weights and gradients are sent back to the CA. Finally, the CA aggregates the collected models to construct an updated global model. The final trained model is then distributed to the vehicles to ensure that vehicles can pass through the intersection. The proposed FL iteration consists of the following steps: \subsubsection{Model Download} A set of intersections is chosen to take part in FL training. The ETs at these intersections download the global model $\omega_r$ from the CA and train this model over their local data. \subsubsection{Experience Upload} To achieve better vehicle control performance, each vehicle needs to consider other vehicles' states for inference. However, due to the insufficient number of collected samples and their non-uniform distribution, the efficiency of vehicular distributed training is very low. Under the above setting, it is essential to adopt centralized training and distributed execution\cite{DBLP:journals/corr/KonecnyMYRSB16}. Each vehicle therefore interacts with the environment, i.e., the other vehicles, and generates a large amount of experience data to upload. This data is used by the corresponding ET to train the local model. \subsubsection{FL Model Training} The third step in the proposed FL is to train the model by utilizing the local data uploaded by the vehicles.
Let $Exp = \{Exp_{1}, Exp_{2}, \dots, Exp_{n}, \dots, Exp_{N}\}$ represent the experience data stored in the selected ETs. $Exp_{n}$ denotes the local experience of the $n^{th}$ ET, with length $d_n=|Exp_{n}|$, and $d$ is the size of the whole data set among the selected ETs. The goal of FL is to minimize the loss function $L(\omega)$: \begin{equation} \begin{aligned} \min _{\omega} L(\omega) & =\sum_{n=1}^{N} \frac{d_{n}}{d} L_{n}(\omega) \quad \text { where } \\ L_{n}(\omega) & =\frac{1}{d_{n}} \sum_{j \in Exp_{n}} l_{j}(\omega), \end{aligned} \end{equation} where $l_{j}(\omega)$ is the vehicle control loss on the $j^{th}$ experience sample in $Exp_n$ under the model parameters $\omega$, $n$ denotes the index among the $N$ selected ETs, and $L_{n}(\omega)$ represents the local loss function of ET $n$. It is then obvious that minimizing the weighted average of the local loss functions $L_{n}(\omega)$ is equivalent to minimizing the loss function $L(\omega)$ of FL. \subsubsection{Upload Updated Model} The fourth step is to upload the local models ${\omega}^{n}_{r+1}$ from the ETs to the CA. The communication cost exceeds the computing cost\cite{DBLP:conf/aistats/McMahanMRHA17}; in order to reduce the communication overhead, the models can be compressed before being uploaded to the CA. \subsubsection{Weighted Aggregation} After the ETs upload their models, the fifth step is to produce the new global model $\omega_{r+1}$ by computing a weighted sum of all received models ${\omega}^{n}_{r}$. The newly generated global model is used for the next training iteration. Here $r$ denotes the communication round in FL. Compared with typical Federated Stochastic Gradient Descent (FedSGD), Federated Averaging (FedAVG), which is widely applied in FL, increases the proportion of local computing and decreases the mini-batch sizes. In FedSGD, each ET $n$ utilizes its local data to compute the average gradient $\nabla L_n(\omega)$ at the global model $\omega_r$. The CA then aggregates these computed gradients by taking a weighted average and applies the update: \begin{equation} w_{r+1} \leftarrow w_{r}-\eta \sum_{n=1}^{N} \frac{d_{n}}{d} \nabla L_{n}(w_{r}), \end{equation} where $\eta$ is the static learning rate. In FedAVG, by contrast, each ET adds more computation by iterating the local update $ w_{r}^{n} \leftarrow w_{r}^{n}-\eta \nabla L_{n}\left(w_{r}^{n}\right)$ multiple times before the aggregation step in the CA. The weighted averaging algorithm is implemented to aggregate the model. The weights for parameter aggregation depend on the traffic density of each intersection: $\gamma_{n} = \mathfrak{D}_n / \mathfrak{D}$, where $\mathfrak{D}_n$ and $\mathfrak{D}$ respectively denote the traffic density at the $n^{th}$ intersection and the density sum over all intersections. The aggregation method can then be re-written as \begin{equation} w_{r+1} \leftarrow w_{r}-\eta \sum_{n=1}^{N} \gamma_n \frac{d_{n}}{d} w_{r+1}^{n}. \end{equation} Selected intersections with a higher traffic density account for more contributions and are given greater weight in model aggregation.
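To make the aggregation step concrete, the following Python sketch implements one plausible reading of the density-aware weighted averaging, in which the combined weights $\gamma_n d_n/d$ are renormalized so that the global model is a convex combination of the local models. The function and variable names are ours, and the exact normalization is an assumption rather than something fixed by the equations above.
\begin{verbatim}
import numpy as np

def density_aware_aggregate(local_models, data_sizes, densities):
    # local_models: list of dicts {param_name: np.ndarray}, one per ET
    # data_sizes:   d_n, local experience counts
    # densities:    traffic densities D_n of the selected intersections
    gamma = np.asarray(densities, dtype=float)
    gamma = gamma / gamma.sum()            # gamma_n = D_n / D
    w = gamma * np.asarray(data_sizes, dtype=float)
    w = w / w.sum()                        # renormalized gamma_n * d_n / d
    return {name: sum(wi * m[name] for wi, m in zip(w, local_models))
            for name in local_models[0]}
\end{verbatim}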
\subsection{Imitation Learning for vehicle control} The model trained in the above FL framework is the IL model for vehicle control. It is used to obtain the vehicle control capability from the collision avoidance rules. Both IL and RL depend on interaction with the environment. Unlike RL, which obtains the desired behaviors according to hidden objectives, IL directly clones the desired behaviors. IL can thereby overcome the highly uncertain initial states and the sparse rewards that exist in RL and may lead to an exploration trap. Thus, this part explains how to imitate the end-to-end vehicle control policy from existing rules. As presented in Fig.\ref{fig:General}, two modules support vehicle control policy acquisition. The first module is a set of collision avoidance rules that output expert experience, described in Section \ref{sec:rule}. The second module is a continuously updated deep neural network serving as the final vehicle control policy carrier. The proposed IL consists of the following steps: \begin{enumerate} \item The collision avoidance rules guide vehicles to make actions at intersections. The state $s$ and action $a$ are recorded as expert experience $(s,a)$ and uploaded to the corresponding ET's experience buffer. \item The ET samples a batch of experience from the pool for the IL process. \item The experience batch is simultaneously forwarded into the two modules, the collision avoidance rules and the deep neural network. \item The loss is calculated from the outputs of the two modules. This paper utilizes the square of the difference between the two outputs as the loss. \item The deep neural network is updated by minimizing the loss mentioned above. \end{enumerate} The loss function of the IL model update is defined as \begin{equation} \label{eq:IL_loss} L(\overrightarrow {Exp}) = \frac{1}{B}\sum\limits_{i = 1}^B {{{\left| {{\pi _\theta }\left( {{{s}_i}} \right) - {a_i}} \right|}^2}} , \end{equation} where $\overrightarrow {Exp}$ is a batch of experience whose batch size $B$ is given in the experiment part. $\pi _\theta(\cdot)$ is the $\theta$-parameterized policy, and each iteration updates the parameter $\theta$. With the collision avoidance rules integrated, the loss function drives the deep neural network to produce safety-oriented strategies.
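As an illustration (not the authors' released code), one IL update of Eq.\ref{eq:IL_loss} could be written in PyTorch as follows, assuming \texttt{policy} is the dense network described in the experiment section and \texttt{states}, \texttt{rule\_actions} form a sampled batch:
\begin{verbatim}
import torch

def il_update(policy, optimizer, states, rule_actions):
    # states:       tensor of shape (B, state_dim)
    # rule_actions: tensor of shape (B, 1), produced by the rules
    pred = policy(states)                        # pi_theta(s_i)
    loss = torch.mean((pred - rule_actions) ** 2)
    optimizer.zero_grad()
    loss.backward()                              # behavior-cloning step
    optimizer.step()
    return loss.item()
\end{verbatim}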
\subsection{Collision Avoidance Rules} \label{sec:rule} In this part, the concept of a Cyber-Lane (CL) is introduced to reconstruct the vehicles' relationships. Vehicles on different trajectories travel in different Physical-Lanes (PLs), and these trajectories intersect at conflict points. All vehicles are projected from the PLs onto the CLs based on the conflict points. From the perspective of a CL, the position relationship of all vehicles is reorganized. After being projected, a vehicle $A1$ may appear between vehicles $B1$ and $B2$; the actions of vehicles $B1$ and $B2$ will then naturally take the action of vehicle $A1$ into account. The set of rules considers three factors: space, time and acceleration. Space is the first factor, which directly determines whether a collision has occurred. As a second-order factor, time considers whether the vehicle will collide in the near future. The acceleration indicates whether the collision can be avoided. In this paper, the above factors are quantified as safety values (SV). The first factor is space. The safety value for space, $SV_{j,s}$, is calculated as \begin{equation} SV_{j,s}=\log\left(\left(\frac{d_{j,nearest}}{\alpha_{s}}\right)^{\beta_{s}}\right), \label{eq:d_safe_value} \end{equation} where $d_{j,nearest}$ denotes the distance between vehicle $j$ and its nearest vehicle on the cyber lane. $\alpha_{s}$ normalizes $d_{j,nearest}$ and can be treated as the expected headway distance, while $\beta_{s}$ increases the offset to strengthen the effect of $\log(\cdot)$. There is a positive correlation between the safety value and the nearest inter-vehicle distance. The second factor is time. The SV for time, $SV_{j,t}$, is calculated as \begin{equation} {SV_{j,t}} = \left\{ {\begin{array}{*{20}{c}} { - {{\left[ {\frac{{\alpha_{t}}}{{\tanh ( - {t_{j,nearest}})}}} \right]}^{\beta_t}}} & {0 < {t_{j,nearest}} < 1} \\ 2 & {otherwise} \end{array}}, \right. \label{eq:t_safe_value} \end{equation} where $t_{j,nearest}$ denotes the Time To Collision (TTC) between vehicle $j$ and its nearest vehicle. In the sensitive range, where $t_{j,nearest}$ is no more than 1, the function $\tanh(\cdot)$ is used to mark the nearby collision risk. There is a negative correlation between the safety value and the TTC. The third factor is acceleration. The SV for acceleration, $SV_{j,acc}$, is calculated as \begin{equation} SV_{j,acc} = \lambda_{acc} \times acc_{j,front} \times \log \left( {\min {{\left( {\frac{{{d_{j,front}}}}{{{d_{threshold}}}},\alpha_{acc}} \right)}^{\beta_{acc}}}} \right), \label{eq:acc_safe_value} \end{equation} where $d_{j,front}$ is the distance from vehicle $j$ to the vehicle in front of it, $acc_{j,front}$ is the acceleration of the vehicle in front of vehicle $j$, and $d_{threshold}$ is the space distance safety threshold. $\min(\cdot)$ is used to restrict $\frac{{{d_{j,front}}}}{{{d_{threshold}}}}$ to the range $[0,\alpha_{acc} ]$. To limit the influence of acceleration in the calculation of the safety value, a discount factor $\lambda_{acc}$ is introduced. The combination of the SVs is calculated as \begin{equation} \begin{array}{*{5}{l}} S{V_j} & = Comb\left( {S{V_{j,s}},S{V_{j,t}},S{V_{j,acc}} | S{V_{max}},S{V_{min}}} \right) \\ & = clip\left( {\left( {S{V_{j,s}} + S{V_{j,t}} + S{V_{j,acc}}} \right),S{V_{max}},S{V_{min}}} \right), \end{array} \label{eq:safe_value_combination} \end{equation} where $SV_{j,s}$, $SV_{j,t}$, and $SV_{j,acc}$ are defined above. In order to obtain a proper acceleration value in Eq.\ref{eq:safe_to_acc}, $clip(\cdot)$ is used to limit the maximum and minimum. A larger $S{V_j}$ indicates that vehicle $j$ is driving in a safer condition. Based on the above SV, the ego vehicle's action is calculated as \begin{equation} a_{exe} = \left\{ {\begin{array}{*{20}{c}} {\left| {\frac{{SV}}{\eta}} \right|} & {d_f \leq d_b} \\ {\frac{{SV}}{\eta}} & {d_f > d_b} \end{array}} \right., \label{eq:safe_to_acc} \end{equation} where $d_f$ is the distance to the vehicle in front, $d_b$ is the distance to the vehicle behind, and $\eta$ is used to convert the safety value into an action, i.e., the ego vehicle's acceleration. The experiment results in Section \ref{sec:experiment} show that these rules achieve collision avoidance under different traffic densities.
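For concreteness, a direct Python transcription of Eqs.\ref{eq:d_safe_value}--\ref{eq:safe_to_acc} might look as follows. The parameter values are taken from Table \ref{tab:Experiment_Parameter}; $d_{threshold}$ is left as an argument since it is not listed there, and guards against degenerate inputs (e.g., a zero front distance) are omitted for brevity.
\begin{verbatim}
import math

ALPHA_S, BETA_S = 10.0, 10.0
ALPHA_T, BETA_T = 1.5, 2.0
ALPHA_ACC, BETA_ACC, LAMBDA_ACC = 1.5, 12.0, 0.2
SV_MAX, SV_MIN, ETA = 20.0, -20.0, 3.0

def rule_action(d_nearest, t_nearest, acc_front, d_front, d_back,
                d_threshold):
    sv_s = math.log((d_nearest / ALPHA_S) ** BETA_S)          # space
    if 0.0 < t_nearest < 1.0:                                  # time
        sv_t = -((ALPHA_T / math.tanh(-t_nearest)) ** BETA_T)
    else:
        sv_t = 2.0
    ratio = min(d_front / d_threshold, ALPHA_ACC)              # acceleration
    sv_acc = LAMBDA_ACC * acc_front * math.log(ratio ** BETA_ACC)
    sv = max(SV_MIN, min(sv_s + sv_t + sv_acc, SV_MAX))        # clip
    return abs(sv / ETA) if d_front <= d_back else sv / ETA
\end{verbatim}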
\section{Loss-aware experience selection strategy} \label{sec:loss-aware} In the setting of the proposed IL in Section \ref{sec:FIL}, the experience, i.e., the $(s,a)$ tuple, is generated by each vehicle and uploaded to the ETs for training at $10$ Hz. This consumes considerable communication resources when the number of vehicles near an intersection is large. Inspired by \cite{DBLP:journals/corr/SchaulQAS15}, not all experience, i.e., $(s_t,a_t,r_t,s_{t+1})$ tuples, is valuable for RL model training, and the same reasoning applies to IL. This section introduces computing for communication, where extra computation is performed to reduce the communication overhead. Here, the extra computation is placed on the vehicles and the edge nodes: the vehicles calculate the loss and compare it with the threshold given by the edge nodes, and the edge nodes produce the threshold for this loss comparison. Therefore, combined with the concept of computing for communication, a loss-aware experience selection strategy is proposed to discard experience that contributes little to model training. \begin{figure}[H] \centering \includegraphics[width=\linewidth]{Loss-aware-exp.pdf} \caption{Loss-aware experience selection strategy.} \label{fig:Loss-aware-exp} \end{figure} As displayed in Fig.\ref{fig:Loss-aware-exp}, the proposed strategy is applied between the vehicle interactors and the edge trainers. When a vehicle enters an intersection area, it requests the newest model and threshold from the edge trainers. Then, all vehicles interact with the environment using the vehicular collision avoidance rules and generate experience $(s, a)$. Next, the vehicle produces an action with the current model and the state $s$, and computes the action loss between the rule and the model. This loss is compared with the loss threshold $Th.$; if the action loss is larger than $Th.$, the corresponding experience $(s, a)$ is allowed to be uploaded to the edge trainers, and otherwise it is discarded. Finally, the received experiences support the edge trainers in performing the IL training and outputting a new threshold. The threshold is calculated on the ETs as follows: \begin{equation} \label{eq:th_cal} Th. = sort(\overrightarrow {Exp},"loss")[p \times B]["loss"], \end{equation} where $\overrightarrow {Exp}$ is a batch of experience for training, and its size is $B$. The function $sort(\cdot,"loss")$ sorts the experiences in ascending order of their loss values, and $p$ is the discard rate. Eq.\ref{eq:th_cal} means that the threshold is the $(p \times B)^{th}$ smallest loss in the experience batch. Because part of the experience is discarded, communication overhead is saved.
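A minimal sketch of this selection logic, with names of our own choosing, is given below; \texttt{policy} stands for the current model held by the vehicle:
\begin{verbatim}
import numpy as np

def compute_threshold(batch_losses, p):
    # The (p*B)-th smallest loss of the training batch (up to
    # rounding) becomes the upload threshold sent to the vehicles.
    losses = np.sort(np.asarray(batch_losses))
    return losses[int(p * len(losses))]

def should_upload(policy, state, rule_action, threshold):
    # A vehicle uploads (s, a) only when its action loss exceeds Th.
    diff = float(policy(state) - rule_action)
    return diff ** 2 > threshold
\end{verbatim}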
\section{Experiment} \label{sec:experiment} \subsection{General setups} The proposed DUCF algorithm is trained and evaluated on a self-designed intersection vehicle control platform, developed based on Python 3.5. Each intersection consists of four driving directions, and the vehicles, which are generated following the Poisson process with different densities, are allowed to go straight without steering. The related vehicle control parameters are listed in Table.\ref{tab:Experiment_Parameter}. \begin{table} \centering \caption{Experimental Parameters} \label{tab:Experiment_Parameter} \begin{tabular}[l]{@{}lc} \toprule \textbf{Parameter} & \textbf{Value} \\ \midrule \textbf{\emph{Simulator}} \\ Lane length ($m$) & $150$ \\ Vehicle size ($m$) & $2$ \\ Velocity ($m/s$) & $[6,13]$ \\ Initial velocity ($m/s$) & $10$ \\ Acceleration ($m/s^2$) & $[-3,3]$ \\ Discrete-time step $T$ ($s$) & $0.1$ \\ \midrule \textbf{\emph{Safety Value}} \\ $\alpha_{s}$ & $10$ \\ $\beta_{s}$ & $10$ \\ $\alpha_{t}$ & $1.5$ \\ $\beta_{t}$ & $2$ \\ $\alpha_{acc}$ & $1.5$ \\ $\beta_{acc}$ & $12$ \\ $\lambda_{acc}$ & $0.2$ \\ $SV_{max}$ & $20$ \\ $SV_{min}$ & $-20$ \\ Conversion factor $\eta$ & $3$ \\ Fusion factor $\omega$ & $0.2$ \\ Weighting factor $\varepsilon $ & $0.5$ \\ \midrule \textbf{\emph{Vehicle Selection}} \\ Number of the closest vehicles $n$ & $5$ \\ \bottomrule \end{tabular} \end{table} In the proposed schemes, the neural network (NN) is used to imitate the collision-free rules by minimizing the action loss between the NN and the rules. There is only one NN, containing three dense layers and two normalization layers; ReLU is chosen as the activation function in the hidden layers, and the output layer is activated by $\tanh(\cdot)$. To fit the range of acceleration, the output of the NN is multiplied by 3. The complete hyper-parameters are listed in Table.\ref{tab:HyperParameter}. However, due to the poor interpretability and limited safety performance of end-to-end NN inference, a weighted operation is added: \begin{equation} \label{eq:weighted_operation} a_{exe} = \varepsilon \times a_{NN} + (1-\varepsilon ) \times a_{rule}, \end{equation} where $\varepsilon$ is a factor for smoothing the NN output $a_{NN}$ with the rule output $a_{rule}$ to ensure driving safety. In the following experiment results, \emph{Model} denotes the action output using the NN only, and \emph{Model+Rule} represents the mixed output using the NN and the rules. \begin{table} \centering \caption{Parameters for Neural Networks} \label{tab:HyperParameter} \begin{tabular}[l]{@{}lccccccc} \toprule \textbf{Parameter} & \textbf{Value} \\ \midrule Discounted factor $\gamma$ & {0.8} \\ Batch Size $B$ & {48} \\ Soft update factor $\tau$ & {0.99} \\ Episode & {50} \\ Learning rate & {0.001 $\rightarrow$ 0} \\ Optimizer & {Adam} \\ \midrule \textbf{\emph{Network Architecture}} \\ Dense layer 1\# & 64 \\ Dense layer 2\# & 64 \\ Dense layer 3\# & 1 \\ \bottomrule \end{tabular} \end{table} \subsection{Indicator} To comprehensively evaluate the performance of the proposed vehicle control methods at intersections, three indicators are chosen: safety, efficiency and discomfort. Safety is the first indicator for evaluating vehicle control. This paper treats the collision ratio $r_{collision}$ as the safety indicator: \begin{equation} \label{eq:metric_safe} {r_{collision}} = \frac{n_{collision}}{N_{veh}}, \end{equation} where $n_{collision}$ is the number of collisions and $N_{veh}$ is the total number of vehicles. A large $r_{collision}$ means the vehicle control algorithm is not capable of achieving collision avoidance; the algorithm in this article is designed to reduce this indicator to 0. While ensuring the vehicles' safety, traffic efficiency is an indicator that needs to be improved. This paper chooses the average velocity $v_{avg}$ as the indicator: \begin{equation} \label{eq:metric_eff} {v_{avg}} = \frac{1}{{{N_{veh}}}}\sum\limits_{i = 1}^{{N_{veh}}} {\frac{{{l_{road}}}}{{{t_i}}}}, \end{equation} where $t_i$ is the $i^{th}$ vehicle's travel time and $l_{road}$ is the length of the road, given in Table.\ref{tab:Experiment_Parameter}. A large $v_{avg}$ indicates that vehicles under the proposed algorithm drive faster, which also means a high throughput for the transportation system. Driving discomfort is also an important indicator for vehicle control. As presented in \cite{DBLP:conf/cdc/ZhangMC17, katriniok2018distributed}, driving discomfort can be defined as \begin{equation} \label{eq:metric_comf} {J_{avg}} = \frac{1}{{{N_{veh}}}}\sum\limits_{i = 1}^{{N_{veh}}} {\sum\limits_t {j_{i,t}^2} } , \end{equation} where $j_{i,t}$ is the $i^{th}$ vehicle's jerk, defined by ${j_{i,t}} = {\dot a_{i,t}}$, with $a_{i,t}$ the $i^{th}$ vehicle's acceleration at time step $t$. A large $J_{avg}$ indicates more frequent or sharper acceleration and deceleration, resulting in more severe driving discomfort.
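The three indicators are straightforward to compute from logged trajectories; a small Python helper (our own illustration, with the lane length of Table \ref{tab:Experiment_Parameter} as the default road length) is:
\begin{verbatim}
import numpy as np

def indicators(n_collision, travel_times, jerk_sequences, l_road=150.0):
    # travel_times:   per-vehicle travel times t_i
    # jerk_sequences: one sequence of jerk values j_{i,t} per vehicle
    n_veh = len(travel_times)
    r_collision = n_collision / n_veh                           # safety
    v_avg = float(np.mean([l_road / t for t in travel_times]))  # efficiency
    j_avg = float(np.mean([np.sum(np.square(j))
                           for j in jerk_sequences]))           # discomfort
    return r_collision, v_avg, j_avg
\end{verbatim}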
\subsection{Results Analysis} The whole experiment contains three parts to evaluate the corresponding schemes. In the first part, the proposed IL scheme is compared with an RL algorithm and the pure rules described in Section \ref{sec:rule}, in terms of training and the three indicators mentioned above. Secondly, the proposed density-aware federated learning algorithm is compared with local training under different traffic densities. Finally, to verify the effectiveness of the proposed loss-aware experience selection strategy, the three indicators and the communication reduction are treated as metrics to compare the strategy under different values of $p$. \begin{figure*} \subfigure[Imitation Learning-Safety]{ \label{fig:IL_safety} \includegraphics[width=0.31\textwidth]{IL-Collision_Rate.pdf} } \subfigure[Imitation Learning-Discomfort]{ \label{fig:IL_discomf} \includegraphics[width=0.31\textwidth]{IL-Discomfort.pdf} } \subfigure[Imitation Learning-Efficiency]{ \label{fig:IL_velocity} \includegraphics[width=0.31\textwidth]{IL-Average_Velocity.pdf} } \caption{The performance of the proposed imitation learning.} \label{fig:IL_perf} \end{figure*} \begin{figure*} \subfigure[Reinforcement Learning-Safety]{ \label{fig:RL_safety} \includegraphics[width=0.31\textwidth]{RL-Collision_Rate.pdf} } \subfigure[Reinforcement Learning-Discomfort]{ \label{fig:RL_discomf} \includegraphics[width=0.31\textwidth]{RL-Discomfort.pdf} } \subfigure[Reinforcement Learning-Efficiency]{ \label{fig:RL_velocity} \includegraphics[width=0.31\textwidth]{RL-Average_Velocity.pdf} } \caption{The performance of the benchmark reinforcement learning.} \label{fig:RL_perf} \end{figure*} \begin{figure*} \subfigure[Federated Learning-Safety]{ \label{fig:FL_safety} \includegraphics[width=0.31\textwidth]{FL-Collision_Rate.pdf} } \subfigure[Federated Learning-Discomfort]{ \label{fig:FL_discomf} \includegraphics[width=0.31\textwidth]{FL-Discomfort.pdf} } \subfigure[Federated Learning-Efficiency]{ \label{fig:FL_velocity} \includegraphics[width=0.31\textwidth]{FL-Average_Velocity.pdf} } \caption{The performance of the proposed federated learning.} \label{fig:FL_perf} \end{figure*} Fig.\ref{fig:IL_perf} depicts the performance of the proposed IL scheme for traffic densities varying from 300 to 2100 veh/lane/hour. The IL scheme is trained at four traffic densities (i.e., 300/900/1500/2100 veh/lane/hour). The black lines in these sub-figures represent the set of collision avoidance rules described in Section \ref{sec:rule}, and the results show that the rules achieve good collision avoidance and traffic efficiency but poor driving comfort. This is because the safety value emphasized in this set of rules evaluates the surrounding danger degree according to space, time and acceleration so as to avoid collisions, but it does not suppress changes of acceleration. In addition, the results also demonstrate that the proposed IL algorithm can help the model learn the ability of collision avoidance and efficiency improvement from the rules. Furthermore, only the model trained at low density produces a small ratio of collisions in the high-density evaluation environment. This is because, in the low-density scene, the experience samples are concentrated at large inter-vehicle distances. Such a sample distribution leads to insufficient model training, which leaves the model unable to cope with high-density (i.e., small inter-vehicle distance) scenarios. Moreover, compared with the rules, the policy obtained from IL reduces discomfort by 55.71\%. This can be explained as follows: rule-making inevitably introduces a threshold trigger mode, and states that have not yet triggered the threshold cannot be fully mapped to actions. The model training helps complete the state-action mapping by gradually approaching the rule with a low learning rate.
Fig.\ref{fig:RL_perf} shows the performance of a benchmark RL algorithm for different traffic densities. Comparing Fig.\ref{fig:IL_perf} and Fig.\ref{fig:RL_perf}, one can see that IL outperforms RL in terms of safety and efficiency. RL is clearly more dependent on the samples, as can be seen from the low-density training and high-density evaluation in Fig.\ref{fig:RL_safety}. Using the weighted operation of Eq.\ref{eq:weighted_operation}, the rules help the IL model achieve collision-free driving in all cases, but they cannot effectively assist the RL model. Fig.\ref{fig:FL_perf} presents the impact of the model aggregation operation in DUCF. Two aggregation methods (same-proportion and density-aware) are evaluated. Our training scenario uses four traffic densities (300/900/1500/2100 veh/lane/hour). Same-proportion means the four models have the same weight, and density-aware means the weights are $1:3:5:7$. The density-aware method outperforms same-proportion on all three indicators. In addition, because the global model aggregates the model parameters under different densities, the discomfort is further reduced by 41.37\% compared with the policy trained under any single traffic density. Taking a comprehensive view of Figs.\ref{fig:IL_perf}-\ref{fig:FL_perf}, the average speed is basically unchanged, and there is a trade-off between collision rate and discomfort: models with a higher collision rate are more conservative but achieve better comfort, and vice versa. \begin{figure} \centering \includegraphics[width=\linewidth]{actor_loss.pdf} \caption{The loss value with different discard factor $p$.} \label{fig:save_loss} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{CommSave.pdf} \caption{Cumulative communication savings with different discard factor $p$.} \label{fig:save_ratio} \end{figure} Fig.\ref{fig:save_loss} depicts the loss curves under different discard factors $p$. When $p$ is no more than 10\%, the trained models converge and remain stable over the training steps. The results demonstrate that the IL model becomes more difficult to converge as the number of discarded experiences increases, but it can still converge when the number of discarded experiences is limited within a certain range. In the following analysis, only the convergent models (i.e., the models with $p = 1\%, 2\%, 5\%, 10\%$) are considered. Fig.\ref{fig:save_ratio} presents the cumulative communication savings. According to Fig.\ref{fig:save_loss}, the models converge within 6000 steps, so only the communication overhead before 6000 steps is counted. A higher $p$ brings lower communication cost. When the models converge, the communication overhead is reduced by 0.44\%, 1.65\%, 5.6\% and 12.80\%, respectively. From the performance shown in Table.\ref{tab:save_perf}, the models with larger discard factors are more conservative, that is, they show a higher collision rate together with better comfort.
\begin{table}[] \caption{IL Performance with discard factors} \label{tab:save_perf} \begin{tabular}{llcccc} \toprule \multicolumn{2}{l}{Discard factor} & 1\% & 2\% & 5\% & 10\% \\ \midrule \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Collision\\ Rate\end{tabular}} & Model & 36\% & 39\% & 44\% & 42\% \\ & Model+Rule & 0\% & 0\% & 0\% & 0\% \\ \midrule \multirow{2}{*}{Discomfort} & Model & 21.55 & 13.08 & 11.11 & 23.85 \\ & Model+Rule & 137.15 & 113.23 & 108.83 & 130.83 \\ \midrule \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Average\\ Velocity\end{tabular}} & Model & 12.22 & 12.06 & 12.06 & 12.24 \\ & Model+Rule & 12.21 & 12.21 & 12.19 & 12.26 \\ \bottomrule \end{tabular} \end{table} \section{Conclusion} \label{sec:conclusion} This paper has proposed an FL-based vehicle control framework to address the limitation on raw data interaction. The framework contains three parts: interactors, trainers and an aggregator. Within this framework, a density-aware model aggregation method is proposed for intersections with different traffic densities. Then, an IL algorithm is proposed for action cloning from a set of collision avoidance rules, to improve the safety capability of end-to-end learning. Furthermore, a loss-aware experience selection strategy is explored to reduce the communication overhead via additional computation on the interactors and trainers. The extensive experiments reveal that the proposed IL algorithm obtains the ability to avoid collisions and reduces discomfort by 55.71\%, that the density-aware model aggregation in the FL framework can further reduce discomfort by 41.37\%, and that the experience selection scheme can reduce the communication overhead by 12.80\% while ensuring convergence. Our main future work will focus on the modeling and theoretical analysis of the relationship between the interactors and trainers in terms of communication and model training. We believe such analysis will help speed up model training and significantly reduce the communication overhead. \section*{Acknowledgment} The authors would like to thank... \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
{ "timestamp": "2021-05-06T02:11:02", "yymm": "2105", "arxiv_id": "2105.01889", "language": "en", "url": "https://arxiv.org/abs/2105.01889" }
\section{Introduction} \input{introduction} \section{BF and BFCG Theory}\label{bftheory} \subsection{4d BF Theory} \paragraph{Action} We consider the Lie group ${\cal D}$ which has a pair of $d$-dimensional Lie subgroups $\mathcal{G}$ and $\mathcal{G}^*$, with $\mathcal{G}^*$ abelian, such that $${\cal D}\cong \mathcal{G}\ltimes \mathcal{G}^*\cong \mathcal{G}^*\rtimes \mathcal{G}.$$ We denote by $\mathfrak{g}$ and $\mathfrak{g}^*$ the Lie algebras of $\mathcal{G}$ and $\mathcal{G}^*$, with generators $e_i$ and $e^{*i}$ for $i \in \{1 \dots d\}$. The $2d$-dimensional Lie algebra of ${\cal D}$ is denoted $\mathfrak{d}\cong \mathfrak{g}\ltimes \mathfrak{g}^*\cong \mathfrak{g}^*\rtimes\mathfrak{g}$, with Lie brackets \begin{align} & [e_i,e_j] = f^k{}_{ij}e_k, \quad [e^{*i},e^{*j}]=0, \label{eq: lie alg}\\ & [e_i,e^{*j}] = f^j{}_{ki}e^{*k}, \label{eq: double} \end{align} where the index summation convention is used here and throughout the rest of this paper. The Lie bracket structure can be translated into an action of $\mathfrak{g}$ on $\mathfrak{g}^*$, \begin{align} \mathfrak{d}\cong \mathfrak{g}\ltimes \mathfrak{g}^*, \end{align} with the action given by \begin{align} e_i \triangleright e^{*j} = f^j{}_{ki}e^{*k}, \end{align} hence the Lie bracket can be written as \begin{align} [e_i,e^{*j}] = e_i \triangleright e^{*j}. \end{align} There is a natural pairing between $\mathfrak{g}$ and $\mathfrak{g}^*$, compatible with the Lie algebra brackets of $\mathfrak{g}$ and $\mathfrak{g}^*$, that extends to $\mathfrak{d}$: \begin{align} & \langle e_i,e_j\rangle = \langle e^{*i}, e^{*j }\rangle =0, \quad \langle e_i, e^{*j} \rangle = \delta^j_i, \label{duality} \\ & [e_i,e^{*j}] = \langle [e_k,e_i], e^{*j} \rangle e^{*k} .\label{eq: double 2} \end{align} This pairing satisfies the invariance relation \begin{align} \langle a,[b,c] \rangle = -\langle [b,a], c\rangle, \quad \forall a, b, c\in\mathfrak{d}. \label{eq: invariance} \end{align} $\mathfrak{d}$ is called the (Drinfeld) double. \medskip Let $\mathcal{B}$ be a $\mathfrak{g}^*$-valued $2$-form and let $\mathcal{A}$ be a $\mathfrak{g}$-valued 1-form. The curvature of $\mathcal{A}$ is denoted $\mathcal{F}$ and is defined as \begin{align} \mathcal{F} = d\mathcal{A} +\demi [\mathcal{A}\wedge\mathcal{A}]. \end{align} The BF theory on the four-dimensional manifold $\mathcal{M}$ is defined by the action \begin{align} S_\mathcal{G} \coloneqq \int_\mathcal{M} \langle \mathcal{B} \wedge \mathcal{F}\rangle = \int_\mathcal{M} \mathcal{B}_i \wedge \mathcal{F}^i. \label{eq:bf action} \end{align} \paragraph{Symmetries} The symmetries of the action (\ref{eq:bf action}) are given by the gauge transformations parametrized by a group element $G\in\mathcal{G}$, and the translation/shift symmetries parametrized by a $\mathfrak{g}^*$-valued $1$-form $\eta$: \begin{align} \textrm{Gauge transformations: } & \left\{\begin{array}{l} \mathcal{A} \mapsto G^{-1} \mathcal{A} G +G^{-1}dG \\ \mathcal{B} \mapsto G^{-1}\mathcal{B} G \end{array}\right. \label{eq: gauge}\\ \textrm{Shift transformations: } & \left\{\begin{array}{l} \mathcal{A} \mapsto \mathcal{A} \\ \mathcal{B} \mapsto \mathcal{B} +d_\mathcal{A} \eta \end{array}\right. \label{eq: shift} \end{align} where $d_\mathcal{A} \eta = d\eta +[\mathcal{A},\eta]$ (note that, due to \eqref{eq: double}, $[\mathcal{A},\eta]\in \mathfrak{g}^*$ and so $d_\mathcal{A}\eta \in \mathfrak{g}^*$). The action is invariant under the gauge transformation due to the invariance of $\langle \cdot, \cdot \rangle$ given in (\ref{eq: invariance}).
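Explicitly, the curvature transforms covariantly, $\mathcal{F}\mapsto G^{-1}\mathcal{F} G$, so that
\begin{align}
\langle \mathcal{B}\wedge \mathcal{F}\rangle \mapsto \langle G^{-1}\mathcal{B} G\wedge G^{-1}\mathcal{F} G\rangle = \langle \mathcal{B}\wedge \mathcal{F}\rangle,
\end{align}
the pairing being invariant under the adjoint action of $\mathcal{G}$, which is the exponentiated form of the infinitesimal invariance \eqref{eq: invariance}.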
Under the shift transformation, the action is invariant up to a boundary term, \begin{align} S_\mathcal{G} \mapsto S_\mathcal{G} +\int _\mathcal{M} d\langle \eta \wedge \mathcal{F} \rangle, \end{align} where we used the Bianchi identity, $d_\mathcal{A} \mathcal{F}=0$. As we will recall in the next section, the shift symmetry can be interpreted as a 2-gauge transformation. \paragraph{Equations of motion and potential} The equations of motion associated with $S_\mathcal{G}$ are obtained by varying the fields $\mathcal{A}$ and $\mathcal{B}$: \begin{align} \delta S_\mathcal{G} = \int_\mathcal{M} \langle \delta \mathcal{B} \wedge \mathcal{F} \rangle - \int _\mathcal{M} \langle d_\mathcal{A} \mathcal{B}\wedge \delta \mathcal{A} \rangle +\int_\mathcal{M} d\langle \mathcal{B} \wedge \delta \mathcal{A} \rangle\label{eq:variation} \end{align} The first two terms give the equations of motion: \begin{align} \mathcal{F} = 0 \qquad d_\mathcal{A} \mathcal{B}=0 \label{eq: eom} \end{align} The third term in (\ref{eq:variation}) is the symplectic potential and is responsible for the Poisson structure. Denoting the boundary of $\mathcal{M}$ by $\partial \mathcal{M} \coloneqq M$, the potential is \begin{align} \Theta \coloneqq \int_M \langle \mathcal{B} \wedge \delta \mathcal{A} \rangle. \end{align} The associated symplectic 2-form is \begin{align} \Omega \coloneqq \delta \Theta = \int_M \langle \delta \mathcal{B} \wedge \delta \mathcal{A} \rangle. \end{align} We note that the equations of motion imply two other sets of conditions, namely \begin{align} \mathcal{F} = 0 \implies d_\mathcal{A} \mathcal{F}= 0 \qquad d_\mathcal{A} \mathcal{B} =0 \implies d_\mathcal{A}\big(d_\mathcal{A} \mathcal{B}\big)= [{\cal F},\mathcal{B}]=0 \label{eq: eom topo} \end{align} The first relation is the standard Bianchi identity. The condition $[{\cal F},\mathcal{B}]=0$ is weaker than the flatness constraint. \paragraph{Charges/momentum maps. } We consider the infinitesimal versions of the gauge transformations and want to identify the charges generating such transformations. Let the infinitesimal version of (\ref{eq: gauge}) be parametrized by $\chi \in \mathfrak{g}$, and the infinitesimal version of the shift transformation (\ref{eq: shift}) be parametrized by $\beta \in \mathfrak{g}^*$, \begin{align} \textrm{Infinitesimal gauge transformations: } & \left\{\begin{array}{l} \delta _\chi \mathcal{A} = d_\mathcal{A} \chi \\ \delta _\chi \mathcal{B} = [\mathcal{B},\chi]\end{array}\right. \label{sym:BFg} \\ \textrm{Infinitesimal shift transformations: } & \left\{\begin{array}{l} \delta _\beta \mathcal{A} =0 \\\delta _\beta \mathcal{B} = d_\mathcal{A} \beta\end{array}\right. \label{sym:BFs} \end{align} We think of $\delta_\chi$ and $\delta_\beta$ as vector fields in field space. We use notation such as $\delta_\chi \lrcorner \delta \mathcal{A} = \delta_\chi \mathcal{A}$ to express the interior product of a vector and a 1-form in field space.
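Before defining the charges, we spell out for completeness the computation behind the second implication in \eqref{eq: eom topo}. Using the graded Leibniz rule and the graded Jacobi identity, \begin{align} d_\mathcal{A}\big(d_\mathcal{A} \mathcal{B}\big) = d[\mathcal{A}\wedge \mathcal{B}]+[\mathcal{A}\wedge d\mathcal{B}]+[\mathcal{A}\wedge[\mathcal{A}\wedge \mathcal{B}]] = [d\mathcal{A}\wedge \mathcal{B}]+\demi \big[[\mathcal{A}\wedge\mathcal{A}]\wedge \mathcal{B}\big] = [\mathcal{F}\wedge \mathcal{B}], \end{align} so that the equation of motion $d_\mathcal{A}\mathcal{B}=0$ only implies $[\mathcal{F},\mathcal{B}]=0$, which is indeed weaker than demanding $\mathcal{F}=0$.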
We define the charges as \begin{align} \delta \mathcal{J}_\chi &\coloneqq -\delta_\chi \lrcorner \Omega =\int _M \langle\delta \mathcal{B}\wedge \delta_\chi \mathcal{A} \rangle -\int_M \langle \delta_\chi\mathcal{B} \wedge \delta \mathcal{A} \rangle\\ \delta {\cal P}_\beta &\coloneqq -\delta _\beta \lrcorner \Omega = -\int_M \langle \delta_\beta \mathcal{B} \wedge \delta \mathcal{A} \rangle \end{align} Some manipulation reveals (provided we assume $\delta \chi =0$ and $\delta \beta =0$, i.e.\ the parameters are not field dependent) \begin{align} \mathcal{J}_\chi &= \int_M d\langle \mathcal{B} \wedge \chi \rangle-\int_M \langle d_\mathcal{A}\mathcal{B}\wedge \chi\rangle \approx \int_M d\langle \mathcal{B} \wedge \chi \rangle\\ {\cal P}_\beta &= -\int_M d\langle \beta \wedge \mathcal{A} \rangle -\int_M\langle \beta \wedge \mathcal{F}\rangle \approx -\int_M d\langle \beta \wedge \mathcal{A} \rangle, \end{align} where $\approx$ means we went on-shell. We note that the charges are essentially given by the corner charges specified by the variables $\mathcal{B}$ and $\mathcal{A}$, a 2-form and a 1-form respectively. When the parameters are constant, we will call the charges \textit{global}. We have used in each case the pull-back to $M$ of the equations of motion. These pull-backs are also interpreted as \textit{constraints}. A momentum map is a function on phase space generating the symmetry transformations. As such, the constraints are momentum maps. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|} \hline Charge& Momentum map & Symmetry\\\hline\hline ${\cal B}$&$d_\mathcal{A}\mathcal{B}$ & gauge transformation \\ ${\cal A}$&$\mathcal{F}$ & shift transformation\\ \hline \end{tabular} \caption{Summarizing the charges, momentum maps and associated symmetries. } \label{tab:0} \end{table} \medskip To anticipate our results a bit, we note that the global charges $\mathcal{J}$ and ${\cal P}$ are simply generated by the configuration/momentum variables. This implies that the choice of where the phase space variables are discretized will directly influence how the charges are discretized. This means that we might get different types of symmetry structure (1-group versus 2-group) according to the choice of discretization of the variables. \subsection{BFCG theory} Let's revisit the action of BF theory and see how specifying the group might change our perspective. We consider $\mathfrak{g}$ to be the Euclidean (or Poincar\'e) Lie algebra $\mathfrak{g}=\mathfrak{iso}(4)\cong \so(4)\ltimes \mathbb{R}^4 $ and $\mathfrak{g}^*$ is the dual abelian Lie algebra $\mathfrak{g}^*=\mathfrak{iso}^*(4)\cong \so^*(4)\times \mathbb{R}^{*4} $, with $\so^*(4)\cong \mathbb{R}^6$ and $\mathbb{R}^{*4}\cong \mathbb{R}^4$. \begin{align} \mathfrak{d} \sim \mathfrak{iso}(4) \ltimes \mathfrak{iso}^*(4). \end{align} The subalgebra $\mathbb{R}^4$ is generated by $P^{\mu}$ and the rotation algebra $\so(4)$ is generated by $J^{\mu\nu}$. Greek indices range from 0 to 3. The Lie brackets are \begin{align} [J^{\mu\nu},J^{\sigma\rho}] &= \eta^{\mu\rho}J^{\nu\sigma }+\eta^{\nu\sigma}J^{\mu\rho}-\eta^{\mu\sigma}J^{\nu\rho}-\eta^{\nu\rho}J^{\mu\sigma}\\ [P^\mu,P^{\nu}]=0, &\quad [J^{\mu\nu},P^\sigma]=\eta^{\mu\sigma}P^\nu-\eta^{\nu\sigma}P^\mu, \label{eq: algebra} \end{align} where $\eta$ is the flat metric. We will sometimes write $[J^{\mu\nu}, J^{\sigma\rho}] = f^{\mu\nu\sigma\rho}{}_{\alpha\beta}J^{\alpha\beta}$ where $f$ is the structure constant of $\so(4)$.
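For orientation, we spell out a couple of instances of \eqref{eq: algebra} in the Euclidean case, where $\eta^{\mu\nu}=\delta^{\mu\nu}$: \begin{align} [J^{01},J^{12}] = \eta^{11}J^{02} = J^{02}, \qquad [J^{12},P^2] = \eta^{12}P^2-\eta^{22}P^1 = -P^1, \qquad [J^{12},P^0] = 0. \end{align}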
The bilinear pairing is \begin{align} \langle P^*_\nu,P^\mu \rangle =& \delta_\nu^\mu, \quad \langle J^*_{\mu\nu}, J^{\sigma\rho}\rangle = \delta_\mu^\sigma\delta _\nu^\rho-\delta^\sigma_\nu\delta_\mu^\rho, \end{align} where the generators in the dual are distinguished by lowered indices and an asterisk. The second term in the second pairing accounts for the antisymmetry of the indices. We will identify the subspace generated by $P^*$ as $\mathbb{R}^{4*}$, the subspace generated by $J^*$ as $\so(4)^*$, and $\mathfrak{iso}^*(4)\cong \mathbb{R}^{10}$. \begin{align} [P^*_\mu,P^*_\nu] = [J_{\mu\nu}^*,J_{\sigma\rho}^*]=[J_{\mu\nu}^*,P_\sigma^*] = [J_{\mu\nu}^*,P^\rho]=0. \end{align} The Lie brackets of the double are constructed according to\footnote{Writing \eqref{eq: double 2} in terms of $J$'s and $P$'s gives, for example, $[P^\mu,P^*_\nu] = \langle [P^\sigma,P^\mu],P^*_\nu\rangle P^*_\sigma+\demi \langle [J^{\alpha\beta}, P^\mu],P^*_\nu \rangle J^*_{\alpha\beta}$. } (\ref{eq: double 2}) \begin{align} [P^\mu,P^*_\nu] =& \eta^{\mu\sigma}J^*_{\sigma\nu}\label{nontrivial}\\ [J^{\sigma\rho},P^*_\nu] =& (\eta^{\rho\alpha}\delta_\nu^\sigma- \eta^{\sigma\alpha}\delta_\nu^\rho)P^*_\alpha\\ [J^{\mu\nu},J^*_{\sigma\rho}]=& (\eta^{\alpha\nu}\delta_\rho^\mu-\eta^{\alpha\mu}\delta_\rho^\nu)J^*_{\alpha\sigma}+(\eta^{\alpha\mu}\delta_\sigma^\nu-\eta^{\alpha\nu}\delta^\mu_\sigma)J^*_{\alpha\rho} \end{align} The non-zero bracket $[P^\mu,P^*_\nu]$ in \eqref{nontrivial} will play an important role in the discretization procedure. We decompose the fields $\mathcal{A}$ and $\mathcal{B}$ into their rotation and translation components: \begin{align} \mathcal{B} &= B + \Sigma \qquad &B \in \so(4)^*\qquad \Sigma \in \mathbb{R}^{4*}\\ \mathcal{A} &= A + C \qquad &A \in \so(4)\qquad C \in \mathbb{R}^4 \end{align} The curvature is also rewritten in terms of its projections onto the subalgebras, \begin{align} \mathcal{F} = F +d_AC, \quad \textrm{ with } F = dA +\demi[A\wedge A] \in \so(4) && d_AC = dC +[A\wedge C] \in \mathbb{R}^4. \end{align} Hence the action is \begin{align} S_{ISO(4)} =& \int_\mathcal{M} \langle B \wedge F \rangle + \int_\mathcal{M} \langle \Sigma \wedge d_A C \rangle = \int_\mathcal{M} \langle B \wedge F \rangle - \int_\mathcal{M} \langle d_A\Sigma \wedge C \rangle + \int_\mathcal{M} d\langle \Sigma \wedge C \rangle. \end{align} The quantity $G \coloneqq d_A \Sigma$ is called the 2-curvature. The BFCG action is obtained from $S_{ISO(4)}$ up to a boundary term \cite{Mikovic:2011si, Mikovic:2015hza}, \begin{align} S_{BFCG} =& \int_\mathcal{M} \langle B\wedge F \rangle +\int_\mathcal{M} \langle C \wedge G \rangle = S_{ISO(4)} - \int_\mathcal{M} d\langle \Sigma \wedge C \rangle. \end{align} \paragraph{Equations of motion and potential} The equations of motion can be determined by varying $S_{BFCG}$ with respect to $A$, $B$, $C$, and $\Sigma$. Alternatively, since a boundary term in the action does not change the equations of motion, we can decompose the expression in (\ref{eq: eom}) into translation and rotation components. Either way, we get \begin{align} F &=0 \qquad d_A C = 0\nonumber\\ G &= 0 \qquad d_A B = -[C\wedge\Sigma]. \label{eq: bfcgeom} \end{align} We remind the reader that though $C$ and $\Sigma$ are both valued in Lie subalgebras with trivial internal brackets, the bracket between them is given by \eqref{nontrivial}, which is not zero but valued in $\so^*(4)$. Similarly the potential is decomposed, \begin{align} \Theta_{ISO(4)} = \int_M \langle B\wedge \delta A\rangle +\int_M \langle \Sigma \wedge \delta C \rangle.
\end{align} The potential from $S_{BFCG}$ is slightly different, \begin{align} \Theta_{BFCG} = \int_M \langle B \wedge \delta A \rangle - \int _M \langle C\wedge \delta \Sigma \rangle. \end{align} The two potentials differ by a total functional derivative and therefore give the same 2-form\footnote{$\delta C \wedge \delta \Sigma = -\delta \Sigma \wedge \delta C$ since they are 1-forms in field space}: \begin{align} \Omega = \int_M \langle \delta B \wedge \delta A \rangle +\int_M \langle \delta \Sigma \wedge \delta C \rangle. \end{align} We note that we also have the analogue of the Bianchi identity and its companion \eqref{eq: eom topo}, which are still valid. Breaking them into components we have \begin{align} \left. \begin{array}{r} d_\mathcal{A}\mathcal{F}=0\\ d_A(d_A C)=0 \end{array}\right\} &\rightarrow \left\{ \begin{array}{l} \, d_A F=0 \\ \, [F,C]=0 \end{array} \right. \\ d_\mathcal{A}(d_\mathcal{A}\mathcal{B})=[\mathcal{F}\wedge\mathcal{B}]=0 & \rightarrow \left\{\begin{array}{l} \, [F,B]+ [d_AC\wedge\Sigma]=0 \\ \, [F,\Sigma]=0 \end{array}\right.\label{continuous-edge-simp} \end{align} \paragraph{Symmetries} Now let's review the symmetries introduced in the previous section. The transformations now carry different names, as the fields are interpreted in the 2-gauge theory picture \cite{Girelli:2007tt}. \begin{align} \textrm{Gauge : } \left\{\begin{array}{l} \delta _\chi \mathcal{A} = d_\mathcal{A} \chi \\ \delta _\chi \mathcal{B} = [\mathcal{B},\chi] \end{array}\right. & {\,\rightarrow\,} \begin{array}{l} \chi= \alpha+ X \\ \alpha \in \so(4), \, X\in \mathbb{R}^4 \end{array} {\,\rightarrow\,} \left\{\begin{array}{l} \textrm{1-gauge transformation } \left\{\begin{array}{l} \delta _\alpha A = d_A\alpha \\ \delta _\alpha B = [B,\alpha]\\ \delta _\alpha C = [C,\alpha] \\ \delta_\alpha \Sigma = [\Sigma,\alpha] \end{array}\right. \\ \textrm{2-shift } \left\{\begin{array}{l} \delta _X A=0 \\ \delta _X C= d_A X\\ \delta _X B =[\Sigma , X] \\ \delta _X \Sigma =0 \end{array}\right. \end{array}\right. \\ \textrm{ Shift : } \left\{\begin{array}{l} \delta _\beta \mathcal{A} =0 \\ \delta _\beta \mathcal{B} = d_\mathcal{A} \beta \end{array}\right. & {\,\rightarrow\,} \begin{array}{l} \beta = \zeta +Y, \\ \zeta \in \mathbb{R}^{4*}, \, Y \in \so(4)^* \end{array} {\,\rightarrow\,} \left\{\begin{array}{l} \textrm{2-gauge transformation } \left\{\begin{array}{l} \delta_\zeta A =0\\ \delta_\zeta C =0 \\\delta _{\zeta} B = [C\wedge\zeta]\\ \delta _{\zeta} \Sigma = d_A \zeta \end{array}\right. \\ \textrm{1-shift } \left\{\begin{array}{l} \delta _Y A =0\\ \delta _Y C =0 \\\delta _{Y}B = d_AY \\ \delta _Y \Sigma = 0 \end{array}\right. \end{array}\right. \end{align} \paragraph{Charge and momentum maps. } As before, we define the charges\footnote{A similar analysis was done by M.
Geiller in some unpublished work.} \begin{align} \delta {\cal L}_\alpha \coloneqq& - \delta_\alpha \lrcorner \Omega = -\int_M \langle\delta_\alpha B\wedge \delta A \rangle +\int_M \langle\delta B \wedge \delta _\alpha A\rangle - \int_M \langle\delta _\alpha\Sigma \wedge \delta C\rangle +\int_M \langle\delta \Sigma \wedge \delta _\alpha C\rangle \\ \delta {\cal R}_{Y} \coloneqq& -\delta_{Y} \lrcorner \Omega = -\int_M \langle\delta _{Y} B\wedge \delta A \rangle\\ \delta \mathcal{K}_{X} \coloneqq& -\delta _{X} \lrcorner \Omega = -\int_M \langle\delta_X B \wedge \delta A\rangle+\int_M \langle\delta \Sigma \wedge \delta_X C\rangle\\ \delta \mathcal{Q}_{\zeta} \coloneqq& - \delta _{\zeta} \lrcorner \Omega = -\int_M \langle\delta_{\zeta}B \wedge \delta A \rangle-\int_M \langle\delta _{\zeta}\Sigma \wedge \delta C\rangle \end{align} After some algebra, we find that, still assuming that the parameters do not depend on the fields, \begin{align} {\cal L}_\alpha &= \int_M d\langle B\wedge \alpha \rangle - \int_M \langle (d_A B + [C\wedge\Sigma])\wedge \alpha\rangle \approx \int_M d\langle B\wedge \alpha \rangle\\ {\cal R}_{Y} &= -\int_M d\langle Y \wedge A\rangle - \int_M \langle Y \wedge F \rangle \approx -\int_M d\langle Y \wedge A \rangle\\ \mathcal{K}_X &= \int_M d\langle \Sigma \wedge X \rangle - \int_M \langle X\wedge d_A\Sigma \rangle \approx \int_M d\langle \Sigma \wedge X \rangle \\ \mathcal{Q}_{\zeta} &= -\int_M d\langle \zeta \wedge C \rangle -\int_M \langle \zeta\wedge d_A C \rangle \approx -\int_M d\langle \zeta \wedge C \rangle \end{align} If the coefficients $\zeta,X,Y,\alpha$ are constant on the boundary, we then deal with ``global'' charges. There is then no central extension in their Poisson algebra (see Appendix \ref{algebra}). As before we have a set of constraints, which are the pull-backs of the equations of motion to $M$. These constraints are momentum maps generating the symmetry transformations; we list them in table \ref{tab:mmas}. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|} \hline Charge& Momentum map & Symmetry\\\hline\hline $B$&$d_A B + [C\wedge\Sigma]$ & 1-gauge transformation \\ $\Sigma$&$d_A\Sigma$ & 2-shift \\ $A$ &$F$ & 1-shift \\ $C$ &$d_A C$ & 2-gauge transformation\\ \hline \end{tabular} \caption{Summarizing the charges, momentum maps and associated symmetries. } \label{tab:mmas} \end{table} \medskip Once again anticipating, we can see that the discretized symmetries will differ depending on whether we consider the pair $(A,C)$ (i.e.\ ${\cal A}$) discretized on the dual complex, as would be done for regular BF theory (hence obtaining a 1-gauge theory), or the pair $(A,\Sigma)$ discretized on the dual complex, as would be done for the BFCG theory (hence considering a 2-gauge theory). While the continuum theories are equivalent up to a boundary term, the discrete symmetries will be different. \section{Discretization of 4d ISO(4) BF theory}\label{sec:discBF} \subsection{Notation} Let us set up the notations for the cellular decomposition once and for all. \begin{figure} \centering \includegraphics[width=0.3\textwidth]{figures/twotetra.pdf} \caption{A small piece of the cellular decomposition, showing how we label structures. The centers of the tetrahedra are labelled by $c$ and $c'$ with the connecting link labelled $(cc')$. The triangle which the link passes through is labelled $(cc')^*$. The vertices shown are labelled by $\overline{v}$ and $\overline{v}'$ and the connecting edge by $[\overline{vv}']$.
There are arrows on the links and edges indicating the orientation.} \label{fig:twotetrahedra} \end{figure} We divide the spatial slice $M$ into subregions forming a cellular decomposition. The 3-dimensional cells will be tetrahedra for simplicity, but other cases can be considered as well. Within each tetrahedron we identify a center point $c$, which we refer to as a node. The tetrahedron dual to $c$ is denoted $c^*$. The oriented segment between two nodes $c$ and $c'$ is denoted by the ordered pair $(cc')$, which we will call a link. The ordering of the nodes determines the orientation of the link. We refer to the first node in the pair $(cc')$ as the source and the second node as the target. The vertices $\overline{v}$ of tetrahedra are denoted with an overline. The edge between two vertices $\overline{v}$ and $\overline{v}'$ is then denoted $[\overline{vv}']$. For each vertex $\overline{v}$, there is a set of tetrahedra containing $\overline{v}$. The centers of these tetrahedra generate a polyhedron in the dual cellular complex. This polyhedron will be denoted $\overline{v}^*$. The faces of the polyhedron, called dual faces, are labelled either by the edge they intersect, $[\overline{vv}']^*$, or by the set of nodes they contain. Similarly, the triangles making up the surface of the tetrahedra are labelled either by the three vertices they contain, such as $[\overline{v}_1\overline{v}_2\overline{v}_3]$, or by the link intersecting them, $(cc')^*$. Some of the structures are shown in figure \ref{fig:twotetrahedra}. \subsection{Recovering the standard discretization of 4d Euclidean BF theory}\label{sec:BF-disc} We use the discretization approach that has been used in several works \cite{Freidel:2011ue, Dupuis:2017otn, Dupuis:2020ndx}. {\textit{\textbf{Restricting the fields to subregions}} } First, let's consider the symplectic potential of BF theory \begin{align} \Theta_{BF} = \int_M \langle \mathcal{B} \wedge \delta\mathcal{A} \rangle = \sum_c \int_{c^*} \langle \mathcal{B}_c \wedge \delta\mathcal{A}_c \rangle \end{align} The second equality makes it explicit that we are dividing $M$ into 3-cells $c^*$. Furthermore, we label the restrictions of the fields $\mathcal{A}$ and $\mathcal{B}$ within each cell with a subscript. The fields in neighbouring 3-cells will be related through some continuity relations on the boundary of the cell. {\textit{\textbf{Truncation.}} } Until now we have only rewritten the symplectic potential to account for the fact that we broke up $M$ into subregions. The next step is to \textit{truncate} the theory, by going on-shell in the interior of each cell. Since we are dealing with a topological theory, we can argue that all curvature defects and ``torsion'' defects (i.e.\ regions where $d_\mathcal{A}\mathcal{B} \neq0$) can be pushed to the edges and the vertices of the triangulation, respectively. We should then regularize them properly and treat them accordingly. This would go beyond the scope of the present paper, so we simply assume that there are no such defects at all and leave their study for later investigations. The equations of motion (\ref{eq: eom}) imply that $\mathcal{A}$ is a flat connection and therefore pure gauge. For an $\mathrm{ISO}(4)$ group element $\mathcal{H}_c(x)$, interpreted as an ISO(4) holonomy connecting $c$ to a point $x$ in the cell, \begin{align} \mathcal{A}_c = \mathcal{H}_c^{-1}d\mathcal{H}_c \end{align} is a solution for $\mathcal{F} = 0$.
Then, for a $\mathfrak{iso}(4)^*$ valued 1-form $\chi$, \begin{align} \mathcal{B}_c = \mathcal{H}_c^{-1}d\chi_c \mathcal{H}_c \end{align} is a solution to $d_\mathcal{A}\mathcal{B} = 0$. Hence both $\mathcal{A}_c$ and $\mathcal{B}_c$ are pure gauge. {\textit{\textbf{Continuity equations.}} } As mentioned above, the fields inside each cell $c^*$ may be considered separately so long as there is continuity between cells. This puts conditions on the fields $\mathcal{H}$ and $\chi$. The continuity relations for $\mathcal{A}$ and $\mathcal{B}$ \textit{on the interior of} the triangle shared by $c^*$ and $c'^*$ are expressed as \begin{align} \mathcal{A}_c(x) = \mathcal{A}_{c'}(x) \, \implies \, \mathcal{H}_{c'}(x) = \mathcal{G}_{c'c}\mathcal{H}_{c}(x)\label{cont1}\\ \mathcal{B}_c(x) = \mathcal{B}_{c'} (x)\implies d\chi_{c'}(x) = \mathcal{G}_{c'c}d\chi_c(x) \mathcal{G}_{cc'} \label{eq: chi cont}, \end{align} where $\mathcal{G}_{cc'}\in \mathrm{ISO}(4)$ does not depend on $x$. We denote the inverse $\mathcal{G}_{cc'}^{-1} = \mathcal{G}_{c'c}$. The above continuity equations are valid on the interior of the triangle. If there is no curvature concentrated on the edges (we will assume this later on), the equations are valid there as well. \medskip We emphasize that we also have the induced equation $[\mathcal{F},\mathcal{B}]=0$ and the Bianchi identity $d_\mathcal{A} \mathcal{F}=0$. We should also assess how they can be realized in the truncated scheme. Since they are expressed in terms of the notion of curvature, they should be obtained by considering several continuity relations concatenated together to generate a loop. To this aim, let us consider the loop $\partial e^*$, which is the boundary of the dual face $e^*$ (dual to the edge $e$). This loop can be described by the links relating the nodes $(c_ic_{i+1})$. The version of the condition $[\mathcal{F},\mathcal{B}]=0$ in terms of continuity equations is then \begin{equation} d\chi_{c} = \big(\prod_{(c_ic_{i+1})\in \partial e^*}\mathcal{G}_{c_ic_{i+1}}\big)^{-1} d\chi_c \big(\prod_{(c_ic_{i+1})\in \partial e^*}\mathcal{G}_{c_ic_{i+1}}\big). \label{BF simp constraint} \end{equation} Since the curvature is given in terms of the holonomy around the loop $\partial e^*$, the Bianchi identity is naturally obtained by demanding that the (dual) polyhedron made of the loops $\partial e^*_i$ is closed, so that \begin{equation} \prod_{e_i^*\in\partial v^*} \mathcal{G}_{\partial e_i^*}=1. \end{equation} We will not use this constraint in the BF discretization. However, we do require the fields $\chi_{c}$ and $\mathcal{G}_{c_ic_{i+1}}$ to satisfy \eqref{BF simp constraint}. \medskip {\textit{\textbf{Evaluation of the symplectic potential.}} } In order to write the potential in terms of $\mathcal{H}$ and $\chi$, we should express the variation $\delta \mathcal{A}$ in terms of $\mathcal{H}$: \begin{align} \delta \mathcal{A}_c = \mathcal{H}_c^{-1}(d\Delta \mathcal{H}_c) \mathcal{H}_c \end{align} where $\Delta \mathcal{H}_c = \delta \mathcal{H}_c \mathcal{H}_c^{-1}$. The potential evaluated on-shell in the cells reads \begin{align} \Theta_{BF} \approx \sum_{c} \int_{c^*}\langle d\chi_c \wedge d\Delta \mathcal{H}_c \rangle, \label{eq: bf potential} \end{align} where $\approx$ means we went on-shell \textit{in} the cells. We can then use the continuity equations \eqref{cont1} and \eqref{eq: chi cont} to simplify its expression and recover the well-known results.
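For completeness, the expression for $\delta\mathcal{A}_c$ follows from a one-line computation: varying $\mathcal{A}_c = \mathcal{H}_c^{-1}d\mathcal{H}_c$ gives \begin{align} \delta \mathcal{A}_c = -\mathcal{H}_c^{-1}\delta \mathcal{H}_c\, \mathcal{H}_c^{-1}d\mathcal{H}_c + \mathcal{H}_c^{-1}d\delta \mathcal{H}_c = \mathcal{H}_c^{-1}\big(d(\delta \mathcal{H}_c \mathcal{H}_c^{-1})\big)\mathcal{H}_c = \mathcal{H}_c^{-1}(d\Delta \mathcal{H}_c)\mathcal{H}_c. \end{align}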
We note that the integrand in \eqref{eq: bf potential} is a total derivative, so we can use Stokes' theorem to recast it as an integral over the triangles bounding each tetrahedron. However, there is a choice to be made as to which variable keeps the derivative when dealing with the integral on the boundary. A similar choice arises when dealing with (3d) gravity, where we have the LQG or dual LQG picture \cite{Dupuis:2017otn}. For now we will deal with the case where the derivative is kept on the 1-form $\chi$. The standard discretization of the 4d BF theory is summarized in the following proposition. \begin{proposition} The symplectic potential is given as a sum of symplectic potentials associated to the phase space $T^*\mathrm{ISO}(4)$, \begin{align} \Theta_{BF} = \int_M \langle \mathcal{B} \wedge \delta\mathcal{A} \rangle \approx \sum_{(cc')} \langle \beta_{(cc')^*}, \Delta \mathcal{G}_c^{c'}\rangle,\label{eq: BF potential discrete} \end{align} which we construct from the solutions of the continuity equations \eqref{cont1} and \eqref{eq: chi cont}, \begin{align} \chi_{c'} &= \mathcal{G}_{c'c}(\chi_c +d{\cal Z}_c^{c'})\mathcal{G}_{cc'}, \quad \mathcal{H}_{c'}= \mathcal{G}_{c'c}\mathcal{H}_{c}, \text{ and } \beta_{(cc')^*}= \int_{(cc')^*}d \chi_c. \end{align} Table \ref{tab:1} lists the geometric structures to which the discretized fields are attached. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline Link $(cc')$ & Dual face $e^*$ & Edges $e$ & Triangles $(cc')^*$\\ \hline\hline $\mathcal{G}_{cc'}\in\mathrm{ISO}(4)$ &-- &--& $\beta_{(cc')^*}\in \mathfrak{iso}^*(4)$\\ \hline \end{tabular} \caption{Localization of the discrete variables. } \label{tab:1} \end{table} \\ These discrete variables satisfy by definition two kinds of constraints, the so-called Gauss and ``face simplicity'' constraints \begin{align} \sum_{(cc')^*\in\partial c^*}\beta_{(cc')^*} =0, && \beta_{(cc')^*}=\big(\prod_{(c_ic_{i+1})\in \partial e^*}\mathcal{G}_{c_ic_{i+1}}\big)^{-1} \,\beta_{(cc')^*}\, \big(\prod_{(c_ic_{i+1})\in \partial e^*}\mathcal{G}_{c_ic_{i+1}} \big), \end{align} where in the second constraint, the loop being considered begins and ends at the node $c$. Furthermore, if we assume there is no curvature, then we have the flatness constraint and the discretized Bianchi identity, \begin{align} \mathcal{G}_{e}=\prod_{(c_ic_{i+1})\in \partial e^*}\mathcal{G}_{c_ic_{i+1}}=1 , \quad \prod_{e^*\in\partial v^*}\mathcal{G}_{e}=1. \end{align} \end{proposition} We note that the flatness constraint implies face simplicity as well as the discretized Bianchi identity (as it should). \begin{proof} Let us evaluate the symplectic potential with the given choice of application of Stokes' theorem. \begin{align} \Theta_{BF} \approx -\sum_{c} \int_{\partial c^*}\langle d\chi_c \, ,\, \Delta \mathcal{H}_c \rangle \label{eq: bf potential 1choice} \end{align} The boundary of the tetrahedron $c^*$ is made up of four triangles.
Since each triangle is shared by two tetrahedra, the contribution to the potential from each triangle contains two terms with a relative minus sign to account for the opposite orientation: \begin{align} \Theta &= \sum_{(cc')} \int_{(cc')^*} \Theta_{(cc')}\\ \Theta_{(cc')} &\coloneqq \langle d\chi_c\, ,\, \Delta \mathcal{H}_c \rangle - \langle d\chi_{c'}\, ,\, \Delta \mathcal{H}_{c'} \rangle \label{eq: triangle potential}\\ &= \langle \Delta \mathcal{G}_{c}^{c'} \, ,\, d\chi_c\rangle \end{align} The last equality is obtained by using the continuity equations and defining $\Delta \mathcal{G}_c^{c'}=\delta \mathcal{G}_{cc'}\mathcal{G}_{c'c}$. We can identify the factors with structures of the cellular decomposition and its dual graph. We define $\mathcal{G}_{(cc')}=\mathcal{G}_{cc'}$ to be the discrete variable associated with the link $(cc')$ and $\beta_{(cc')^*} = \int_{(cc')^*}d\chi_c$ to be the discrete variable associated with the triangle $(cc')^*$. Thus, as a function of the discrete variables, the potential is \begin{align} \Theta_{BF} \approx \sum_{(cc')} \langle \beta_{(cc')^*}, \Delta \mathcal{G}_c^{c'}\rangle \end{align} From this potential, we can determine the Poisson brackets, which are the canonical ones associated with the cotangent bundle $T^*\mathrm{ISO}(4)$. We will review this in the following section. \textit{{\textbf{ Gauss constraint.}}} \textit{By construction} the phase space variables satisfy some constraints. For a given tetrahedron, performing the sum over its triangles gives \begin{align} \sum_{(cc_i)^*\in\partial c^*}\beta_{(cc_i)^*}=\sum_{(cc_i)^*\in\partial c^*} \int_{(cc_i)^*} d\chi_c=0 , \label{eq: bfconstraint} \end{align} by Stokes' theorem. This constraint is the discretization of the (pull-back of the) continuum constraint $d_{{\cal A}} {\cal B}=0$. In order to accommodate the different possible orientations of the links connecting $c^*$ to its neighbours, we point out that the base point of the variable $\beta$ can be changed according to \begin{align} \beta_{(c'c)^*} = \int_{(c'c)^*} d\chi_{c'} = -\int_{(cc')^*} \mathcal{G}_{c'c}d\chi_{c} \mathcal{G}_{cc'}= -\mathcal{G}_{c'c}\beta_{(cc')^*}\mathcal{G}_{cc'}. \end{align} \textit{{\textbf{Face simplicity.}}} Since we have by the continuity equations that \begin{equation} d\chi_{c} = \big(\prod_{(c_ic_{i+1})\in \partial e^*}\mathcal{G}_{c_ic_{i+1}}\big)^{-1} d\chi_c \big(\prod_{(c_ic_{i+1})\in \partial e^*}\mathcal{G}_{c_ic_{i+1}}\big), \end{equation} where the product of links begins and ends on the node $c$, we can just perform the integration over $(cc')^*$ and get the face simplicity constraint, \begin{eqnarray} \beta_{(cc')^*} = \int_{(cc')^*} d\chi_{c} &=\big(\prod_{(c_ic_{i+1})\in \partial e^*}\mathcal{G}_{c_ic_{i+1}}\big)^{-1} \,\big(\int_{(cc')^*} d\chi_{c}\big) \, \big(\prod_{(c_ic_{i+1})\in \partial e^*}\mathcal{G}_{c_ic_{i+1}} \big) . \end{eqnarray} {\textit{\textbf{Flatness constraint.}}} The definition of the discretized fields does not imply that the holonomies $\mathcal{G}_{c_ic_{i+1}}$ should be flat, so we implement flatness by hand, \begin{align} \prod_{(c_ic_{i+1})\in \partial e^*}\mathcal{G}_{c_ic_{i+1}} =1 \end{align} This constraint is the discretization of the (pull-back of the) continuum constraint $\mathcal{F} =0$. One can check that these constraints generate the discretized version of the $BF$ symmetries. We note that this is a non-abelian group valued momentum map \cite{Alekseev3}.
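To make this explicit, we note that, in the standard lattice-gauge-theory fashion, the discretized gauge transformations are parametrized by group elements $G_c \in \mathrm{ISO}(4)$ attached to the nodes and act as \begin{align} \mathcal{G}_{cc'} \mapsto G_c\, \mathcal{G}_{cc'}\, G_{c'}^{-1}, \qquad \beta_{(cc')^*} \mapsto G_c\, \beta_{(cc')^*}\, G_c^{-1}, \end{align} under which the Gauss constraint transforms covariantly, while the holonomy around a loop based at $c$ is conjugated by $G_c$, so that the flatness constraint is preserved.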
{\textit{\textbf{Bianchi identity.}}} This condition is naturally discretized by demanding that concatenating all the holonomies on the dual faces of a (dual) polyhedron $v^*$ (dual to a vertex $v$) gives the identity. This is automatically satisfied if each face is flat. The constraint then reads, for every dual polyhedron $v^*$ with faces $e^*$, \begin{equation} \prod_{e^*\in\partial v^*}\mathcal{G}_{e}=1. \end{equation} \end{proof} \subsection{Relativistic spinning top phase space}\label{sec: spinning top} We can decompose the discrete variables into subalgebra components. For simplicity we will omit the indices $(cc')$. The $\mathfrak{iso}(4)^*$ element is decomposed as $\beta = b+ V$ for $b \in \so(4)^*$ and $V \in \mathbb{R}^{4*}$. This is equivalent to decomposing the continuous variable as $d\chi_c = db_c+d\sigma_c$ for $b_c \in \so(4)^*$ and $\sigma_c \in \mathbb{R}^{4*}$. We write the $\mathrm{ISO}(4)$ holonomy as $\mathcal{G} = e^{x}h$ where $h\in \mathrm{SO}(4)$ and $x$ is a constant element of the Lie algebra of $\mathbb{R}^4$ (which also happens to be $\mathbb{R}^4$). Since $\mathbb{R}^4$ is abelian, in calculations we use the representation such that\footnote{We use the following representation, highlighting that the abelian group product of $\mathbb{R}^4$ is isomorphic to the Lie algebra $\mathbb{R}^4$ seen as an abelian group. The generators $P$ of the Lie algebra $\mathbb{R}^4$ are such that $P^\mu P^\nu=0$. As a consequence the group element is $e^{P^\mu}=1+P^\mu$. We recover in this way the addition as the product of the group $\mathbb{R}^4$, since $$ e^{p\cdot P}e^{q\cdot P}= (1+ p\cdot P)(1+q\cdot P)= 1+ p\cdot P+q\cdot P =1+ (p+q)\cdot P=e^{(p+q)\cdot P}. $$ } $e^{x}=1+x$. The variation then reads $\Delta \mathcal{G} = \delta x+\Delta h+[x,\Delta h]$. Associated to a link $(cc')$, the potential is \begin{align} \Theta = \langle b\, ,\, \Delta h\rangle + \langle V\, ,\, \delta x\rangle+\langle V\, ,\, [x,\Delta h]\rangle.\label{eq: potential bf} \end{align} This is the symplectic potential for the phase space $T^*\mathrm{ISO}(4)$. The Poisson brackets corresponding to this potential are then \begin{eqnarray} & \{ x_\sigma, V^\rho \} = \delta_\sigma^\rho, \quad \{x_\mu, b^{\sigma\rho}\} = 2x_\lambda\eta^{\lambda[\sigma}\delta_\mu^{\rho]}\\ &\{V^\alpha,b^{\sigma\rho}\} = 2\eta^{\alpha[\rho}V^{\sigma]}, \quad \{h^\alpha{}_\beta, b^{\sigma\rho}\} = (J^{\sigma\rho}h)^\alpha{}_\beta\\ &\{ b^{\alpha\beta}, b^{\mu\nu} \} = f^{\alpha\beta\mu\nu}{}_{\sigma\rho}(b^{\sigma\rho}+2[x,V]^{\sigma\rho}), \end{eqnarray} where $f$ is the structure constant of $\so(4)$: $f^{\alpha\beta\mu\nu}{}_{\sigma\rho} = \demi \langle J_{\sigma\rho}^*,[J^{\alpha\beta},J^{\mu\nu}]\rangle $. The discrete variables are summarized in table \ref{tab: discrete variables 0} and Fig. \ref{fig:doublewedge1}. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline Link $(cc')$ & Dual face $e^*$ & Edges $e$ & Triangles $(cc')^*$\\ \hline\hline $h_{(cc')}\in\mathrm{SO}(4), \, x_{(cc')}\in \mathbb{R}^4$ & --&--& $ b_{(cc')^*}\in \so^*(4), \, V_{(cc')^*}\in \mathbb{R}^{4*}$\\ \hline \end{tabular} \caption{Localization of the discrete variables for the BF discretization. } \label{tab: discrete variables 0} \end{table} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{figures/doublewedge1.pdf} \caption{A link in red is decorated by an ISO(4) holonomy, while the triangle in blue is decorated by a $\mathfrak{iso}^*(4)\sim \so^*(4)\times \mathbb{R}^{4*}$ element.
The building blocks to construct the discrete phase space are given in terms of $T^*\mathrm{ISO}(4)\sim (\mathrm{SO}(4)\ltimes\mathbb{R}^4)\ltimes (\so^*(4)\times \mathbb{R}^{4*})$, with $\so^*(4)\sim \mathbb{R}^6$ and $\mathbb{R}^{4*}\sim \mathbb{R}^4$. } \label{fig:doublewedge1} \end{figure} Note that the phase space we have recovered is isomorphic to the phase space of the relativistic spinning top \cite{Hanson:1974qy}, as one could expect. We define a new variable $S = b + [V,x]$. The potential (for a given link/face pair) now reads \begin{align} \Theta = \langle \Delta h \, ,\, S\rangle + \langle V\, ,\, \delta x\rangle \label{eq: top} \end{align} The Poisson brackets of these new variables are those of the relativistic spinning top: \begin{align} \{h{}^\alpha{}_\beta, S^{\mu\nu}\} = (J^{\mu\nu}h){}^\alpha{}_\beta,\quad \{S^{\alpha\beta}, S^{\mu\nu}\} = f^{\alpha\beta\mu\nu}{}_{\sigma\rho}S^{\sigma\rho},\quad \{x_\mu,V^\nu\} = \delta_\mu^\nu \end{align} \medskip \paragraph{Constraints/charges.} The total phase space we have obtained for the triangulation can therefore be interpreted as a set of relativistic spinning tops, which satisfy some constraints. This is consistent with Penrose's idea of spin networks \cite{Penrose71angularmomentum:}. We have the constraints encoding the conservation of (relativistic) angular momentum \begin{eqnarray} && \left.\begin{array}{c} d_A B +[C\wedge\Sigma]=0\\ d_A\Sigma=0 \end{array}\right\} \Leftrightarrow d_\mathcal{A}\mathcal{B}=0 \rightarrow {\cal J}_c=\sum_{(cc')^*\in\partial c^*}\beta_{(cc')^*}=0 \Leftrightarrow \left\{ \begin{array}{c} b_c = \sum_{(cc')^*\in \partial c^*} b_{(cc')^*} =0\\ V_c = \sum_{(cc')^*\in \partial c^*}V_{(cc')^*} = 0 \end{array}\right. \end{eqnarray} The curvature constraints are given by \begin{eqnarray} && \left.\begin{array}{c} F=0\\ d_AC=0 \end{array}\right\} \Leftrightarrow \mathcal{F}=0 \rightarrow \mathcal{G}_{e}=\prod_{(c_i c_{i+1})\in \partial e^*}\mathcal{G}_{c_ic_{i+1}} =1 \Leftrightarrow \left\{ \begin{array}{c} h_{e}= \prod_{(c_i c_{i+1})\in \partial e^*} h_{c_ic_{i+1}} = 1\\ x_{e} = \sum_i h_{c_1c_i}x_{(c_ic_{i+1})}h_{c_ic_1} =0 \end{array}\right. \end{eqnarray} To get the components of $\mathcal{G}_{e}$, we write $\mathcal{G}_{cc'} = e^{x_{(cc')}}h_{cc'}$. \begin{align} \mathcal{G}_{e}= \prod_{(c_i c_{i+1})\in \partial e^*} \mathcal{G}_{c_i c_{i+1}} =& \mathcal{G}_{c_1c_2}\dots \mathcal{G}_{c_nc_1}\\ =&e^{x_{(c_1c_2)}}h_{c_1c_2}e^{x_{(c_2c_3)}}h_{c_2c_3}\dots e^{x_{(c_nc_1)}}h_{c_nc_1} \\ =& e^{x_{(c_1c_2)}}h_{c_1c_2}e^{x_{(c_2c_3)}}h_{c_2c_1}h_{c_1c_2}h_{c_2c_3}\dots e^{x_{(c_nc_1)}}h_{c_nc_1}\\ =& e^{\sum_i h_{c_1c_i}x_{(c_ic_{i+1})}h_{c_ic_1}}\prod_{(c_i c_{i+1})\in \partial e^*} h_{c_ic_{i+1}} = e^{x_{e}} h_{e}, \end{align} where $n$ in the above is the total number of nodes around the edge $e$ and we define $c_{n+1} = c_1$. Finally we also have the Bianchi identity for $F$ and its companion. The Bianchi identity $d_\mathcal{A}\mathcal{F}=0$ implies the constraints \begin{eqnarray} && \left.\begin{array}{c} d_AF=0\\ d_A(d_AC)=[F\wedge C]=0 \end{array}\right\} \rightarrow \prod_{e^*\in\partial v^*}\mathcal{G}_{e}=1 \Leftrightarrow \left\{ \begin{array}{c} \prod_{e^*\in\partial v^*} h_{e} = 1\\ \sum_{e^*\in\partial v^*} x_{e}=0 \end{array}\right.
\end{eqnarray} The face simplicity constraints follow from the constraint $d_\mathcal{A}\mathcal{B} =0$: \begin{eqnarray} \left.\begin{array}{c} [F\wedge B]+[d_A C\wedge\Sigma]=0\\ d_A(d_A\Sigma)=[F\wedge\Sigma]=0 \end{array}\right\} \Leftrightarrow d_\mathcal{A} (d_\mathcal{A}\mathcal{B})=0 \rightarrow \beta_{(cc')^*}=\mathcal{G}_{e}^{-1} \,\beta_{(cc')^*}\mathcal{G}_{e} \Leftrightarrow \left\{ \begin{array}{l} b_{(cc')^*}=h_{e}^{-1} (b_{(cc')^*} + [V_{(cc')^*},x_e]) h_{e} \\ V_{(cc')^*}= h_{e}^{-1} \,V_{(cc')^*}h_{e} \end{array}\right.\nonumber \end{eqnarray} \section{Discretization of BFCG theory}\label{sec: discbfcg} In this section we will go through the same procedure starting from the BFCG potential. To illustrate that the theory is similar to the ISO(4) BF theory, we will first show how to recover the results of the previous subsection (the discrete potential (\ref{eq: potential bf})). We will then proceed to obtain the proper discretized BFCG potential, the main result of this section. As a consequence we will recover the classical picture behind the G-networks introduced in \cite{Asante:2019lki}. \paragraph{Restricting the fields to subregions. } We start by expressing the integral $\Theta_{BFCG}$ as a sum of integrals over each cell, \begin{align} \Theta_{BFCG} = \sum_{c} \int_{c^*}\langle B_c\wedge \delta A_c \rangle -\sum_c\int_{c^*} \langle C_c\wedge \delta \Sigma_c \rangle. \end{align} \paragraph{Truncation. } As before, we go on-shell inside the cells. Since BFCG and BF theory differ by a boundary term, they share the same equations of motion, so we can either decompose the solutions of (\ref{eq: eom}) or directly solve (\ref{eq: bfcgeom}). We will take the former approach: we solve the equations of motion of BF theory to get, for an $\mathrm{ISO}(4)$ holonomy $\mathcal{H}_c(x)$ connecting $c$ to $x$ in the cell and a $\mathfrak{iso}(4)^*$ valued 1-form $\chi$, \begin{align} \mathcal{A}_c = \mathcal{H}_c^{-1}d\mathcal{H}_c, \quad \mathcal{B}_c = \mathcal{H}_c^{-1}d\chi_c \mathcal{H}_c. \end{align} We decompose $\mathcal{H}=e^cg$, where $g$ is a rotation and $e^c$ is a translation. $\chi$ is decomposed into $\chi = b + \sigma$ for $b \in \so(4)^*$ and $\sigma \in \mathbb{R}^{4*}$. Thus (still using the convenient representation $e^c=1+c$) \begin{align} \mathcal{A}_c=A_c+C_c = g_c^{-1}dg_c +g_c^{-1}dc_c g_c && \mathcal{B}_c=B_c+\Sigma_c = g_c^{-1}(db_c+[d\sigma_c,c_c]+d\sigma_c)g_c \end{align} giving \begin{align} A_c =& g^{-1}_cdg_c & C_c =& g_c^{-1}dc_c g_c\\ \Sigma_c =& g_c^{-1}d\sigma_c g_c & B_c =& g_c^{-1}(db_c +[d\sigma_c , c_c])g_c, \end{align} where the different fields are defined in table \ref{tab: cont eq}. \paragraph{Continuity equations. } The continuity of the fields between neighbouring cells $c^*$ and $c'^*$ is expressed as \begin{align} B_c = B_{c'} && A_c = A_{c'} && C_c = C_{c'} && \Sigma_c = \Sigma_{c'} &&\text{on } c^*\cap c'^*\label{eq: cont1} \end{align} The solutions of these continuity equations are given in Table \ref{tab: cont eq}. If we apply the continuity equations consecutively around a loop $\partial e^*$, we also get the equations \begin{align} \label{cont eq loop} & dc_c = h_{e}^{-1} dc_c h_{e}, \\ & db_{c} =h_{e}^{-1} db_{c}h_{e}, \quad d\sigma_{c} =h_{e}^{-1} d\sigma_{c}h_{e}, \quad h_e\equiv \prod_{c_ic_{i+1}\in\partial e^*}h_{c_ic_{i+1}},\label{cont eq loop1} \end{align} where again, the product along the loop of links begins and ends at node $c$.
The condition \eqref{cont eq loop} can be seen as the discretization of $[F,C]=0$, while the relations \eqref{cont eq loop1} come from $d_\mathcal{A}\mathcal{B}=0$. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|} \hline Continuity eqs & Solutions to continuity eqs & Fields \\ \hline\hline $g_c^{-1}dg_c = g^{-1}_{c'}dg_{c'}$&$ g_c = h_{cc'}g_{c'}$ & \makecell{ $g_c$ function in $\mathrm{SO}(4)$,\\ $h_{cc'}$ a constant in $\mathrm{SO}(4)$ } \\ \hline $ dc_{c'} = h_{c'c}dc_c h_{cc'}$ & $ c_{c'} = h_{c'c}(c_c+x_c^{c'})h_{cc'}$ &\makecell{$c_c$ function in $\mathbb{R}^{4}$,\\ $x_c^{c'}$ constant in $\mathbb{R}^4$ } \\ \hline $d\sigma_{c'} = h_{c'c}d\sigma_c h_{cc'}$ & $\sigma_{c'} = h_{c'c}(\sigma_c + d\varsigma_c^{c'})h_{cc'}$ & \makecell{$\sigma_c$ 1-form in $\mathbb{R}^{4*}$,\\ $\varsigma_c^{c'}$ function in $\mathbb{R}^{4*}$}\\ \hline $db_{c'} =h_{c'c}(db_c -[d\sigma_c,x_c^{c'}])h_{cc'}$ & $b_{c'} =h_{c'c}(b_c -[\sigma_c , x_c^{c'}]+dy^{c'}_c)h_{cc'} $ & \makecell{$b_c$ 1-form in $\so(4)^*$,\\ $y_c^{c'}$ function in $\so(4)^*$}\\ \hline \end{tabular} \caption{Continuity equations and their solutions. } \label{tab: cont eq} \end{table} Anticipating a bit, we will see that in the BFCG discretization, if we allow for some curvature on the edges, we will still need to assume that on the edges of the triangulation \begin{align}\label{sigmasimple0} \sigma_c= h_e^{-1} \sigma_c h_e. \end{align} This will ensure that we can integrate the symplectic potential. While in the construction given in \cite{Asante:2019lki} particular emphasis was put on the \textit{edge simplicity}, that is, something of the type \begin{align} c_c= h_e^{-1} c_c h_e, \end{align} to recover the KBF amplitude, it seems that for the discretization the key property is rather \eqref{sigmasimple0}. \paragraph{Evaluating the symplectic potential: before making the choice.} In order to express the potential in terms of the fields $g, \sigma, b, $ and $c$, we need to express the variations of the fields $A$ and $\Sigma$ in these variables. We define \begin{align} \Delta g_c \coloneqq \delta g_c g_c^{-1} \end{align} then \begin{align} \delta A_c = g^{-1}_cd\Delta g_c g _c && \delta \Sigma_c = g_c^{-1}(\delta d\sigma_c + [d\sigma_c,\Delta g_c])g_c \end{align} Using these expressions, the potential in a cell is \begin{align} \Theta_c \approx \langle db_c\wedge d\Delta g_c \rangle + d\langle [d\sigma_c, c_c], \Delta g_c \rangle - \langle dc_c\wedge d\delta \sigma_c \rangle, \label{eq: potential c} \end{align} where $\approx$ means we went on-shell. We see that $\Theta_c$ is a total derivative and can be written as an integral over the boundary $\partial c^*$ by Stokes' theorem. As in the previous section, there is a choice to make regarding which variable keeps the derivative when we apply Stokes' theorem. \subsection{Choice 1: recovering the BF discretization} At this time, we make the following choice (we note that the first term is in the same polarization as the LQG case \cite{Dupuis:2017otn}): \begin{align} \Theta_{BFCG} \approx \sum_{c} \int_{\partial c^*} (\langle db_c \, ,\, \Delta g_c \rangle +\langle [d\sigma_c , c_c] \, ,\, \Delta g_c \rangle -\langle c_c\, ,\, d \delta \sigma_c \rangle) \end{align} The boundary $\partial c^*$ is made up of four triangles. Each triangle is shared by two tetrahedra.
The contribution from each triangle, $(cc')^*$, is $\Theta _{(cc')^*}$, \begin{align} \Theta_{BFCG} &\approx \sum_{(cc')^*} \int_{(cc')^*}\Theta _{(cc')^*}\\ \Theta_{(cc')^*} &=\langle db_c \, ,\, \Delta g_c \rangle - \langle db_{c'}\, ,\, \Delta g_{c'}\rangle + \langle [d\sigma_c,c_c]\, ,\, \Delta g_c\rangle - \langle [d\sigma_{c'},c_{c'}]\, ,\, \Delta g_{c'}\rangle \nonumber\\ &\qquad - \langle c_c\, ,\, d\delta \sigma_c\rangle + \langle c_{c'} \, ,\, d\delta \sigma_{c'} \rangle. \end{align} \begin{proposition} The symplectic potential is given as a sum of symplectic potentials associated to the phase space $T^*\mathrm{ISO}(4)$, \begin{align} \Theta_{BFCG} \approx \Theta'_{BF} = \sum_{(cc')^*} \left\langle \Delta h_c^{c'}\, ,\, \int_{(cc')^*} db_c \right\rangle + \left\langle [\Delta h_c^{c'} , x_c^{c'}] \, ,\, \int_{(cc')^*}d\sigma_c \right\rangle + \left\langle x_c^{c'}\, ,\, \delta \int_{(cc')^*} d\sigma_c \right\rangle , \end{align} where the discrete variables are obtained from the continuity equations of table \ref{tab: cont eq}; table \ref{tab:3} provides the geometric structures to which they are attached. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline Links & Dual faces& Edges & Triangles \\ \hline\hline $h_{(cc')}\in\mathrm{SO}(4)$, $x_{(cc')}\in\mathbb{R}^4$ &-- &-- & $b_{(cc')^*}=\int_{(cc')^*}db_c\in\so^*(4)$, $V_{(cc')^*}=\int_{(cc')^*}d\sigma_c$ \\ \hline \end{tabular} \caption{Localization of the discrete variables. } \label{tab:3} \end{table} \end{proposition} We note that we almost recover the same potential as in the standard BF discretization \eqref{eq: potential bf}. The difference comes from a minus sign in the $(x,V)$ sector. The reason is the following. By adding the boundary term to go to the BFCG action, we have swapped the polarization: we have exchanged the configuration and momentum variables. If we consider a symplectic form $\delta q \wedge \delta p$, this form is not invariant under the exchange $q\leftrightarrow p$, which leads to $\delta p \wedge \delta q=-\delta q \wedge \delta p$. This is what we have done by changing the polarization. The symplectic transformation is instead given by $q{\,\rightarrow\,} -p$ and $p{\,\rightarrow\,} q$. Hence the $V$ in the BF discretization and the $V$ in the BFCG discretization are related by a minus sign. \begin{proof} We can use the continuity relations from table \ref{tab: cont eq} to simplify $\Theta_{(cc')^*}$, \begin{align} \Theta_{(cc')^*} = \langle db_c \, ,\, \Delta h_c^{c'} \rangle +\langle x_c^{c'} \, ,\, [d\sigma_c , \Delta h_c^{c'}]\rangle +\langle x_c^{c'}\, ,\, \delta d\sigma_c \rangle, \end{align} where we have defined $\Delta h_c^{c'} \coloneqq \delta h_{cc'} h_{c'c}$. The total potential is now \begin{align} \Theta_{BFCG} &\approx \sum_{(cc')^*} \left\langle \Delta h_c^{c'}\, ,\, \int_{(cc')^*} db_c \right\rangle + \left\langle [\Delta h_c^{c'} , x_c^{c'}] \, ,\, \int_{(cc')^*}d\sigma_c \right\rangle + \left\langle x_c^{c'}\, ,\, \delta \int_{(cc')^*} d\sigma_c \right\rangle\\ &\approx \sum_{(cc')}\langle \Delta h_{(cc')}\, ,\, b_{(cc')^*}\rangle +\langle [\Delta h_{(cc')}, x_{(cc')}]\, ,\, V_{(cc')^*}\rangle +\langle x_{(cc')}\, ,\, \delta V_{(cc')^*}\rangle.\label{eq: potential bfbfcg} \end{align} The factors in $\Theta_{BFCG} $ can be associated to structures in the cellular decomposition. We already saw that $h_{cc'}$ is related to the links in the dual cellular decomposition. Similarly $x_c^{c'}$ is also associated to the links.
The factors involving integrals over triangles are associated to the triangles in the cellular decomposition, which are dual to the links. The discrete variables and where they live in the cellular decomposition are summarized in table \ref{tab:3}, which is the same as table \ref{tab:1}. \end{proof} \subsection{Choice 2: recovering the phase space of \cite{Asante:2019lki}} As we emphasized already, when using Stokes' theorem, there is a choice to be made as to which variable keeps the differential. We recall that we obtained the symplectic potential for a given tetrahedron $c^*$, \begin{align} \Theta_c = \langle db_c\wedge d\Delta g_c \rangle + d\langle [d\sigma_c, c_c]\, ,\, \Delta g_c \rangle - \langle dc_c\wedge d\delta \sigma_c \rangle . \end{align} In the previous section we used Stokes' theorem to write the potential on the triangles bounding $c^*$. In particular, the last term was expressed as \begin{align} \langle dc_c \wedge d\delta \sigma_c\rangle = d\langle c_c \, ,\, d\delta \sigma_c \rangle \end{align} and when we determined the discrete variables, $\int_{(cc')^*}d\sigma$ was assigned to a triangular face. We can alternatively write this term as \begin{align} \langle dc_c \wedge d\delta \sigma_c \rangle = -d\langle dc_c\wedge \delta \sigma_c \rangle \end{align} In performing Stokes' theorem in this way, we will have different discrete variables living on different structures in the cellular decomposition. Let us first identify the discretized variables and then the constraints associated to them. \medskip \subsubsection{Identifying the discrete variables.} After performing Stokes' theorem and applying the continuity equations, the potential on the triangles is now \begin{align} \int_{(cc')^*}\Theta_{(cc')^*} = \int_{(cc')^*}\langle d(b_c +[c_c,\sigma_c])\, ,\, \Delta h_c^{c'}\rangle -\int_{(cc')^*}d\langle dc_c \, ,\, [\Delta h_c^{c'},\varsigma_c^{c'}]\rangle +\int_{(cc')^*}d\langle dc_c \, ,\, \delta \varsigma_c^{c'}\rangle\label{eq: potential triangle} \end{align} The last term is the problematic one. In contrast to the previous section, neither $dc_c$ nor $\delta\varsigma_c^{c'}$ are constant, so we must do some work to perform this integration. We shall once again use Stokes' theorem on each triangle and deal with integrals over edges bounding triangles instead, \begin{align} \sum_{(cc')^*}\int_{(cc')^*}d\langle dc_c\, ,\, \delta \varsigma_c^{c'}\rangle=&\sum_{(cc')}\int_{\partial (cc')^*}\langle dc_c\, ,\, \delta \varsigma_c^{c'}\rangle \\ =&\sum_e \int_e \sum_{(cc')\in e^*}\epsilon^e_{(cc')}\langle d c_c\, ,\, \delta \varsigma_c^{c'}\rangle \end{align} The first sum and the integral are over edges $e$. The second sum is over the links $(cc')$ which make up the polygon $e^*$ dual to the edge $e$. The factor $\epsilon^e_{(cc')}$ is either 1 or $-1$, depending on whether the orientation of $(cc')^*$ is aligned with $e$ or not. \begin{figure} \centering \includegraphics[width=0.2\textwidth]{figures/edge2.pdf} \caption{An example of the type of edge used for illustrative calculations. The edge is shown in red, labelled by $e$, with the surrounding nodes forming a triangle. } \label{fig:edge} \end{figure} To illustrate, we take the example edge shown in figure \ref{fig:edge}.
The contribution of this edge to the potential is \begin{align} \int_e \sum_{(cc')^*\in e^*} \langle dc_c \, ,\, \delta \varsigma_c^{c'} \rangle =& \int_e \langle dc_1\, ,\, \delta \varsigma_1^2 \rangle +\int_e\langle dc_2 \, ,\, \delta\varsigma_2^3\rangle + \int_e \langle dc_3\, ,\, \delta \varsigma_3^1 \rangle \\ =& \int_e \langle dc_1 \, ,\, (\delta \varsigma_1^2 +h_{12}\delta \varsigma_2^3 h_{21} + h_{13}\delta\varsigma_3^1 h_{31}) \rangle\\ =& \int_e \langle dc_1 \, ,\, \delta (\varsigma_1^2 +h_{12}\varsigma_2^3 h_{21} +h_{13}\varsigma_3^1h_{31}) \rangle+\int_e \langle [dc_1 , h_{12}\varsigma_2^3 h_{21}]\, ,\, \Delta h_1^2\rangle \nonumber\\ &+\int_e \langle [dc_1,h_{13}\varsigma_3^1 h_{31}]\, ,\, \Delta h_1^3\rangle \label{eq: edge potential} \end{align} In the second line, we were able to use the continuity equation in the variable $c$ to base each term at the center 1 (an arbitrary choice). The second and third terms involve something proportional to $\Delta h_1^2$ and $\Delta h_1^3$ (the values of the superscript and subscript are a result of the arbitrary choice made to base everything at 1) and can therefore be absorbed into the first term of (\ref{eq: potential triangle}) (since the total potential involves summing over the links). This will be the source of a non-trivial closure constraint for the tetrahedron. The first term involves a combination of the continuity variables $\varsigma$. Here, as we alluded to earlier, we need to make a specific assumption about the behaviour of $\sigma_c$ under consecutive changes of frame, as in \eqref{sigmasimple0}. Consider the three continuity equations for $\sigma$ which are satisfied on $e = (12)^*\cap (23)^*\cap (31)^*$: \begin{align} \sigma_2 = h_{21}(\sigma_1 +d\varsigma_1^2)h_{12} && \sigma_3 = h_{32}(\sigma_2+d\varsigma_2^3)h_{23}&& \sigma_1 = h_{13}(\sigma_3+d\varsigma_3^1)h_{31}. \end{align} Putting these together we have \begin{align} \sigma_3 =& h_{32}h_{21}h_{13}\sigma_3 h_{31}h_{12}h_{23} + h_{32}h_{21}h_{13}d\varsigma_3^1 h_{31}h_{12}h_{23} \nonumber\\&+ h_{32}h_{21}d\varsigma_1^2 h_{12}h_{23} +h_{32}d\varsigma_2^3h_{23}.\label{eq: edge continuity} \end{align} \textit{Assuming}\footnote{If we enforce that there is \textit{no} curvature on the edges, this assumption obviously holds. If we do not impose flatness right away, we have to make this assumption to get to the relevant result. Hence we can obtain our result even without flatness, as long as the assumption is satisfied. } that \begin{align}\label{eq: edge continuity1} \sigma_3 =& h_{32}h_{21}h_{13}\sigma_3 h_{31}h_{12}h_{23}, \end{align} and putting together (\ref{eq: edge continuity}) and \eqref{eq: edge continuity1}, we get \begin{align} d\varsigma_1^2+h_{13}d\varsigma_3^1h_{31} + h_{12}d\varsigma_2^3 h_{21} = 0. \end{align} It follows that \begin{align} \varsigma_1^2+h_{13}\varsigma_3^1h_{31} + h_{12}\varsigma_2^3 h_{21} = V^{e^*}_1, \end{align} for some constant $V_1^{e^*}$, which then decorates the dual face $e^*$. This is exactly the expression which appears in the first term of (\ref{eq: edge potential}). Since $V^{e^*}$ is a constant, we are able to consider $\int_e dc_1$ as our discrete variable associated to $e$ and $V^{e^*}_1$ as the discrete variable associated to the polygon $e^*$.
The potential due to $e$ is then \begin{align} \int_e \sum_{(cc')^* \in e^*} \langle dc_c \, ,\, \delta \varsigma_c^{c'} \rangle =& \langle \delta V_1^{e^*} \, ,\, \int_e dc_1 \rangle +\int_e \langle [dc_1 , h_{12}\varsigma_2^3 h_{21}]\, ,\, \Delta h_1^2\rangle + \int_e \langle [dc_1,h_{13}\varsigma_3^1 h_{31}]\, ,\, \Delta h_1^3\rangle . \end{align} Summarizing, the symplectic potential now takes the form \begin{align} \Theta_{BFCG} \approx \sum_{(cc')} \left\langle \int_{(cc')^*} \tilde b_{c}^{c'}\, ,\, \Delta h_c^{c'} \right\rangle+\sum_e \left\langle \int_e dc_{c_e}\, ,\, \delta V^{e^*}_{c_e}\right\rangle\label{eq: potentialbfcg} \end{align} A lot has been concealed in writing equation (\ref{eq: potentialbfcg}). The label $c_e$ is the choice of base point in the polygon dual to $e$ (in the example edge we took $c_e$ to be the node 1). We also introduced $\tilde b_c^{c'}$. Simply put, this is shorthand notation for everything which appears in $\Theta$ next to $\Delta h_c^{c'}$. The explicit form of such a term depends on the choices of $c_e$, so we won't write it out in general. We will define $\tilde b_{c}^{c'}$ in an explicit example shortly. We can now determine the discrete variables. They are $b_{(cc')^*}=\int_{(cc')^*} \tilde b_c^{c'} \in \so(4)^*$ on triangles, $h_{cc'}\in \mathrm{SO}(4)$ on links dual to triangles, $\ell_e = \int_e dc_{c_e} \in \mathbb{R}^4$ on edges, and $V_{e^*} = V_{c_e}^{e^*}$ on polygons dual to an edge. The variables are summarized in table \ref{tab: bfcgl} and Fig. \ref{fig:doublewedge2}. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|} \hline Discrete variable & Definition in terms of & Home in cellular complex \\ &continuous variables & \\\hline\hline $V_{e^*}$ & linear combination of $\varsigma$'s & Polygon $e^*$, a face in \\ & around an edge & the dual complex\\ \hline $\ell_{e}$ & $\int_e dc_{c_e}$ & Edge $e$ of tetrahedron\\ \hline $h_{(cc')}$ & $h_{cc'}$ & Links\\\hline $b_{(cc')^*}$ & $\int_{(cc')^*} \tilde b_c^{c'}$ & Triangles \\\hline \end{tabular} \caption{Summary of the discretization of BFCG theory. The key result is that $b_{(cc')^*}$ depends on many variables, namely $c, \varsigma$ and $b$, as illustrated in \eqref{bs}. In particular, we have integrations both on the triangle and on some of the edges forming its boundary. } \label{tab: bfcgl} \end{table} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{figures/doublewedge2.pdf} \caption{A link in red is decorated by an SO(4) holonomy, and a dual face is decorated by an element in $\mathbb{R}^{4*}\cong \mathbb{R}^4$. An edge in blue is decorated by an element in $\mathbb{R}^4$, while the triangle in blue is decorated by a $\so^*(4)\cong \mathbb{R}^{6}$ element. The building blocks to construct the discrete phase space are still isomorphic to $T^*\mathrm{ISO}(4)$ since $(\mathrm{SO}(4)\ltimes \mathbb{R}^{4*})\ltimes (\so^*(4)\times \mathbb{R}^4)\cong (\mathrm{SO}(4)\ltimes \mathbb{R}^{4})\ltimes (\so^*(4)\times \mathbb{R}^{4*})\cong T^*\mathrm{ISO}(4)$. } \label{fig:doublewedge2} \end{figure} The discrete potential looks just like that of (\ref{eq: top}), but with the translation sector on the edges and dual faces instead of the links and triangles.
\begin{align} \{\ell_\alpha, V^\beta \} &= -\delta _\alpha^\beta\label{poi1}\\ \{h^\alpha{}_\beta ,b^{\sigma\rho}\} &= (J^{\sigma\rho}h)^\alpha{}_\beta\quad \{b^{\sigma\rho},b^{\alpha\beta}\} = \eta^{\sigma\alpha}b^{\rho\beta}+\eta^{\rho\beta}b^{\sigma\alpha}-\eta^{\sigma\beta}b^{\rho\alpha}-\eta^{\rho\alpha}b^{\sigma\beta}\label{poi2} \end{align} For each edge $e$ of the triangulation we have the decorations $\ell_e$ and $V_{e^*}$, which decorate respectively the edge and the dual face, with Poisson bracket \eqref{poi1}. For each link $l$, we have the decorations $h_l$ and $b_{l^*}$, which decorate respectively the link and the triangle $l^*$, with Poisson brackets \eqref{poi2}. These variables satisfy a set of constraints different from the ones in the BF discretization. In order to justify these constraints and see that they follow from the definitions of the discrete variables in terms of the continuous functions, we need to explicitly define $\tilde b_c^{c'}$ and make choices about where to base the edge variables. To simplify these expressions while being exhaustive, we will consider an explicit example. \subsubsection{Explicit case: example of the sphere} The triangulation we choose is the boundary of the 4-simplex. The space is divided into 5 tetrahedra. The centers of the 5 tetrahedra will be labelled by integers $\{1, 2,\dots, 5\}$. The vertices of the tetrahedra will be labelled by overlined integers $\{\overline{1},\overline{2},\dots,\overline{5}\}$. The tetrahedron $i^*$ will have vertices $\{\overline{1},\dots, \overline{5}\}\backslash \overline{i}$. A diagram indicating the orientation of the links and edges is shown in figure \ref{fig:4cel}. The calculation on the edges is just like in the example we did previously, since each edge is dual to a triangle. The resulting dual face variables are \begin{align} V_3^{[\overline{12}]^*} = \varsigma_3^4+h_{34}\varsigma_4^5+h_{35}\varsigma_5^3 && V_2^{[\overline{31}]^*} = -h_{24}\varsigma_4^2+h_{24}\varsigma_4^5-\varsigma_2^5\nonumber\\ V_2^{[\overline{14}]^*} = \varsigma_2^3-\varsigma_2^5-h_{25}\varsigma_5^3&& V_2^{[\overline{51}]^*} = h_{24}\varsigma_4^2+h_{23}\varsigma_3^4+\varsigma_2^3\nonumber\\ V_1^{[\overline{23}]^*} = h_{14}\varsigma_4^5+h_{15}\varsigma_5^1+\varsigma_1^4&& V_3^{[\overline{42}]^*} = -h_{35}\varsigma_5^3-\varsigma_3^1+h_{35}\varsigma_5^1\nonumber\\ V_1^{[\overline{25}]^*} = h_{13}\varsigma_3^4-h_{13}\varsigma_3^1-\varsigma_1^4&& V_1^{[\overline{34}]^*} = \varsigma_1^2+h_{15}\varsigma_5^1+h_{12}\varsigma_2^5\nonumber\\ V_1^{[\overline{53}]^*} = -h_{14}\varsigma_4^2-\varsigma_1^4+\varsigma_1^2&& V_1^{[\overline{45}]^*} = \varsigma_1^2+h_{12}\varsigma_2^3+h_{13}\varsigma_3^1\label{Vs} \end{align} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figures/4cell.pdf} \caption{The edges of the complex are shown with solid black lines and the dual complex is shown with dotted red lines. The arrows indicate the orientation chosen.} \label{fig:4cel} \end{figure} In the above, we have always based the variables at the lowest node in numerical order. This choice is arbitrary; any node which is a vertex of $[\overline{ij}]^*$ would be equally valid.
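As a consistency check, note that the entry $V_1^{[\overline{45}]^*}$ reproduces, in the one-sided shorthand of \eqref{Vs} where the right conjugations are left implicit, the combination obtained in the discussion around \eqref{eq: edge potential} for the edge shared by the tetrahedra $1^*$, $2^*$ and $3^*$: \begin{align} V_1^{[\overline{45}]^*} = \varsigma_1^2+h_{12}\varsigma_2^3 h_{21}+h_{13}\varsigma_3^1 h_{31}. \end{align}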
The resulting expressions for the $\so(4)^*$ variables are: \begin{align} b_{(12)^*} =& \int_{[\overline{345}]} (db_1-d[c_1,\sigma_1]-[dc_1,d\varsigma_1^2]) +\int_{[\overline{45}]}[dc_1,h_{12}\varsigma_2^3h_{21}]+\int_{[\overline{34}]} [dc_1,h_{12}\varsigma_2^5h_{21}]\nonumber\\ b_{(23)^*} =& \int_{[\overline{145}]} (db_2-d[c_2,\sigma_2]-[dc_2,d\varsigma_2^3]) +\int_{[\overline{51}]}[dc_2,h_{23}\varsigma_3^4h_{32}]\nonumber\\ b_{(34)^*} =& \int_{[\overline{125}]} (db_3-d[c_3,\sigma_3]-[dc_3,d\varsigma_3^4]) +\int_{[\overline{12}]} [dc_3,h_{34}\varsigma_4^5h_{43}]\nonumber\\ b_{(45)^*} =& \int_{[\overline{123}]} (db_4-d[c_4,\sigma_4]-[dc_4,d\varsigma_4^5]) \nonumber\\ b_{(51)^*} =& \int_{[\overline{234}]} (db_5-d[c_5,\sigma_5]-[dc_5,d\varsigma_5^1]) -\int_{[\overline{34}]}[dc_5,\varsigma_5^1]-\int_{[\overline{23}]}[dc_5,\varsigma_5^1]\nonumber\\ b_{(14)^*} =& \int_{[\overline{235}]} (db_1-d[c_1,\sigma_1]-[dc_1,d\varsigma_1^4])+\int_{[\overline{23}]}[dc_1,h_{14}\varsigma_4^5h_{41}]-\int_{[\overline{53}]} [dc_1,h_{14}\varsigma_4^2h_{41}]\nonumber\\ b_{(25)^*} =& \int_{[\overline{134}]} (db_2-d[c_2,\sigma_2]-[dc_2,d\varsigma_2^5])-\int_{[\overline{14}]}[dc_2,h_{25}\varsigma_5^3h_{52}]\nonumber\\ b_{(31)^*} =& \int_{[\overline{245}]} (db_3-d[c_3,\sigma_3]-[dc_3,d\varsigma_3^1])+\int_{[\overline{25}]}[dc_3,\varsigma_3^1]-\int_{[\overline{25}]}[dc_3,\varsigma_3^4]-\int_{[\overline{45}]}[dc_3,\varsigma_3^1]\nonumber\\ b_{(42)^*} =& \int_{[\overline{135}]}(db_4-d[c_4,\sigma_4]-[dc_4,d\varsigma_4^2])-\int_{[\overline{51}]}[dc_4,\varsigma_4^2]-\int_{[\overline{31}]} [dc_4,\varsigma_4^5] +\int_{[\overline{31}]}[dc_4,\varsigma_4^2]\nonumber\\ b_{(53)^*} =& \int_{[\overline{124}]}(db_5-d[c_5,\sigma_5]-[dc_5,d\varsigma_5^3])-\int_{[\overline{42}]}[dc_5,\varsigma_5^1]+\int_{[\overline{42}]}[dc_5,\varsigma_5^3]-\int_{[\overline{12}]} [dc_5,\varsigma_5^3] \label{bs} \end{align} Clearly there is a lack of symmetry, due to the orientation choices for the links. \subsubsection{Constraints} The discrete variables in this new polarisation give rise to a new set of constraints. We follow the terminology of \cite{Asante:2019lki} to name the constraints. Compared to the BF case, there are two sets of new constraints: the 2-Gauss constraints, encoding that the triangles are closed, and the 2-flatness constraints, encoding that the dual polyhedra close. We then have the more usual sets of constraints, the 1-Gauss constraints and the 1-flatness constraints. The latter encode that the holonomies along the links forming a closed loop should be trivial. The former encode that the triangle decorations should be equal to a specific quantity. Finally, there is also the edge simplicity constraint. This constraint is implied by flatness, but can hold without it. \medskip Let us review the explicit shape of the constraints. \medskip \paragraph{2-Gauss constraints.} For each triangle in the simplex, there is a constraint on the edge data. For example, for the triangle $(45)^*$, \begin{align} h_{43}\ell_3^{[\overline{12}]} + h_{41}\ell_1^{[\overline{23}]}+ h_{42}\ell_2^{[\overline{31}]} = \int_{\partial [\overline{123}]} dc_4 = 0. \end{align} This constraint is the discrete analogue of the constraint $d_A C =0$, since \begin{align} C=g^{-1} dc g \Leftrightarrow dc = g C g^{-1} \Leftrightarrow d^2c=0=g(d C + [g^{-1} d g,C ])g^{-1} = g(d C + [A,C ])g^{-1}. \end{align} In general, the edge variables $\ell$ corresponding to the edges of a triangle sum to zero.
The sum can only be performed after each variable is transported to the appropriate node. The full list of triangle constraints in the 4-simplex is given below: \begin{align} \mathcal{G}^{[\overline{123}]}_4 =& h_{ 43 } \ell_3 ^{ [ \overline{12} ] } + h_{41} \ell_1^{ [ \overline{ 23 }] } + h_{ 42 } \ell_2^{ [\overline{31}] }\nonumber\\ \mathcal{G}^{[\overline{124}]}_5 =& h_{ 53 } \ell_3 ^{ [ \overline{12} ] } + h_{51} \ell_1^{ [ \overline{ 25 }] } + h_{ 52 } \ell_2^{ [\overline{51}] }\nonumber\\ \mathcal{G}^{[\overline{125}]}_3 =& \ell_3 ^{ [ \overline{12} ] } - \ell_3^{ [ \overline{ 42 }] } - h_{ 32 } \ell_2^{ [\overline{14}] }\nonumber\\ \mathcal{G}^{[\overline{134}]}_2 =& - \ell_2 ^{ [ \overline{31} ] } - \ell_2^{ [ \overline{ 14 }] } + h_{ 21 } \ell_1^{ [\overline{34}] }\nonumber\\ \mathcal{G}^{[\overline{135}]}_4 =& -h_{ 42 } \ell_2 ^{ [ \overline{31} ] } - h_{41} \ell_1^{ [ \overline{ 53 }] } + h_{ 42 } \ell_2^{ [\overline{51}] }\nonumber\\ \mathcal{G}^{[\overline{145}]}_2 =&\ell_2 ^{ [ \overline{14} ] } + h_{21} \ell_1^{ [ \overline{ 45 }] } + \ell_2^{ [\overline{51}] }\nonumber\\ \mathcal{G}^{[\overline{234}]}_5 =& h_{ 51 } \ell_1 ^{ [ \overline{23} ] } + h_{53} \ell_3^{ [ \overline{ 42 }] } + h_{ 51 } \ell_1^{ [\overline{34}] }\nonumber\\ \mathcal{G}^{[\overline{235}]}_1 =& \ell_1 ^{ [ \overline{23} ] } - \ell_1^{ [ \overline{ 53 }] } + \ell_1^{ [\overline{25}] }\nonumber\\ \mathcal{G}^{[\overline{245}]}_3 =& -h_{ 31 } \ell_1 ^{ [ \overline{25} ] } - \ell_3^{ [ \overline{ 42 }] } + h_{ 31 } \ell_1^{ [\overline{45}] }\nonumber\\ \mathcal{G}^{[\overline{345}]}_1 =& \ell_1 ^{ [ \overline{34} ] } + \ell_1^{ [ \overline{ 45 }] } + \ell_1^{ [\overline{53}] }\label{Gs} \end{align} The relative sign differences between terms come from the $\epsilon^e_{(cc')}$ factor introduced above. \medskip \paragraph{2-flatness.} This set of constraints is interpreted as the closure of the dual polyhedra (tetrahedra in our illustrating example). By Minkowski's theorem, the outward normal vectors to the faces of a polyhedron, with magnitudes equal to the face areas, sum to zero. In this system, the analogous quantities are the variables on the dual faces, $V$. For example, the closure of the polyhedron dual to $\overline{1}$ is \begin{align} h_{23}V^{[\overline{12}]^*}_3h_{32} +V_2^{[\overline{14}]^*}+V_2^{[\overline{13}]^*}+V_2^{[\overline{15}]^*} =0 . \end{align} This follows directly from the definitions of the face variables (also using $V^{[\overline{ij}]}=-V^{[\overline{ji}]}$).
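As a quick aside, the Minkowski closure that these constraints mimic is easy to verify numerically. The following minimal sketch is our own illustration, not part of the construction; the tetrahedron is an arbitrary non-degenerate choice, and any other would do. It checks that the outward area vectors of a tetrahedron sum to zero:
\begin{verbatim}
import numpy as np

# An arbitrary non-degenerate tetrahedron in R^3.
v = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.3, 1.2, 0.0],
              [0.2, 0.4, 1.5]])

total = np.zeros(3)
for i in range(4):
    # The face opposite vertex i.
    a, b, c = v[[j for j in range(4) if j != i]]
    n = 0.5 * np.cross(b - a, c - a)             # area vector of that face
    if np.dot(n, (a + b + c) / 3.0 - v[i]) < 0:  # orient it outward
        n = -n
    total += n

print(total)  # ~[0, 0, 0]: the outward area vectors close
\end{verbatim}
The same closure, with the dual-face variables $V$ playing the role of the area vectors, is exactly what the constraints \eqref{Ps} below express.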
In our example, there are five such polyhedron constraints, shown below: \begin{align} {\cal P}^{\overline{1}} =& h_{23}V^{[\overline{12}]^*}_3h_{32} +V_2^{[\overline{14}]^*}-V_2^{[\overline{31}]^*}-V_2^{[\overline{51}]^*} \nonumber\\ {\cal P}^{\overline{2}} =& V^{[\overline{23}]^*}_1 +V_1^{[\overline{25}]^*}-h_{13}V_3^{[\overline{12}]^*}h_{31}-h_{13}V_3^{[\overline{24}]^*}h_{31}\nonumber \\ {\cal P}^{\overline{3}} =& h_{13}V^{[\overline{31}]^*}_3h_{31} +V_1^{[\overline{34}]^*}-V_2^{[\overline{53}]^*}-V_2^{[\overline{23}]^*}\nonumber\\ {\cal P}^{\overline{4}} =& V^{[\overline{45}]^*}_1 +V_1^{[\overline{42}]^*}-h_{12}V_2^{[\overline{14}]^*}h_{21}-V_1^{[\overline{34}]^*}\nonumber\\ {\cal P}^{\overline{5}} =& V_1^{[\overline{53}]^*} +h_{12}V_2^{[\overline{51}]^*}h_{21}-V_1^{[\overline{45}]^*}-V_1^{[\overline{25}]^*} \label{Ps} \end{align} \medskip \paragraph{1-Gauss.} Due to the less trivial continuity equations for the $\so(4)^*$ variables, the expression for the tetrahedron constraints appears more cumbersome: \begin{align} \sum_{c':c\to c'}b_{(cc')^*}-\sum_{e}[\ell_c^{e}, V_c^{e^*}] - \sum_{c': c'\to c} h_{cc'}b_{ (c'c)^* } h_{c'c} = 0 \end{align} The explicit form of the sum depends on where we choose to base our variables $\ell$ and $V$, as well as on the orientation of the links. The first sum is over the links which have their source node at $c$ and the third sum is over the links which have their target node at $c$. The second sum is over the edges such that $\ell^e$ and $V^{e^*}$ are based at $c$. If no $\ell$ is based at $c$, this is just the empty sum. With the variables defined above, we have the five constraints \begin{align} {\cal T}^{5} =& b_{ (51)^* } + b_{ (53)^* } -h_{54} b_{ (45)^* } h_{45} -h_{52}b_{ (25)^* } h_{25}\nonumber \\ {\cal T}^{4} =& b_{ (45)^* } + b_{ (42)^* } -h_{41} b_{ (14)^* } h_{14} -h_{43}b_{ (34)^* } h_{34}\nonumber\\ {\cal T}^{3} =& b_{ (34)^* } + b_{ (31)^* } -h_{32} b_{ (23)^* } h_{23} -h_{35}b_{ (53)^* } h_{53} - [\ell^{ [\overline{12}]}_3, V^{[\overline{12}]^*}_3 ] - [\ell_3^{[\overline{42}]}, V_3^{ [\overline{42}]^*} ]\nonumber\\ {\cal T}^2 =&b_{ (23)^* } + b_{ (25)^* } -h_{24} b_{ (42)^* } h_{42} -h_{21}b_{ (12)^* } h_{12} - [\ell^{ [\overline{14}]}_2, V^{[\overline{14}]^*}_2 ] - [\ell_2^{[\overline{31}]}, V_2^{ [\overline{31}]^*} ] - [\ell_2^{[\overline{51}]}, V_2^{ [\overline{51}]^*} ]\nonumber\\ {\cal T}^1 =& b_{(12)^*} + b_{(14)^*} -h_{13}b_{(31)^*}h_{31} -h_{15}b_{(51)^*}h_{51} -[\ell_1^{ [ \overline{45} ] } , V_1^{ [ \overline{45} ]^* }]-[\ell_1^{ [ \overline{23} ] } , V_1^{ [ \overline{23} ]^* }]\nonumber\\ &-[\ell_1^{ [ \overline{53} ] } , V_1^{ [ \overline{53} ]^* }]-[\ell_1^{ [ \overline{34} ] } , V_1^{ [ \overline{34} ]^* }] -[\ell_1^{ [ \overline{25} ] } , V_1^{ [ \overline{25} ]^* }]\label{Ts} \end{align} We emphasize that these constraints are realized by definition of the fields. We expect them to be the discretization of the constraint $d_AB+[C\wedge \Sigma]=0$. \medskip Since these constraints do not seem very natural, let us illustrate in one example how they come about. For concreteness, we take ${\cal T}^3$.
For convenience we recall the relevant triangle variables, \begin{align} b_{(23)^*} =& \int_{[\overline{145}]} (db_2-d[c_2,\sigma_2]-[dc_2,d\varsigma_2^3]) +\int_{[\overline{51}]}[dc_2,h_{23}\varsigma_3^4h_{32}]\\ b_{(34)^*} =& \int_{[\overline{125}]} (db_3-d[c_3,\sigma_3]-[dc_3,d\varsigma_3^4]) +\int_{[\overline{12}]} [dc_3,h_{34}\varsigma_4^5h_{43}]\\ b_{(31)^*} =& \int_{[\overline{245}]} (db_3-d[c_3,\sigma_3]-[dc_3,d\varsigma_3^1])+\int_{[\overline{25}]}[dc_3,\varsigma_3^1]-\int_{[\overline{25}]}[dc_3,\varsigma_3^4]-\int_{[\overline{45}]}[dc_3,\varsigma_3^1]\\ b_{(53)^*} =& \int_{[\overline{124}]}(db_5-d[c_5,\sigma_5]-[dc_5,d\varsigma_5^3])-\int_{[\overline{42}]}[dc_5,\varsigma_5^1]+\int_{[\overline{42}]}[dc_5,\varsigma_5^3]-\int_{[\overline{12}]} [dc_5,\varsigma_5^3] \end{align} Using the continuity equations, we can check that \begin{align} -h_{32} b_{(23)^*} h_{23} = \int_{[\overline{145}]}(-db_3+d[c_3,\sigma_3])-\int_{[\overline{51}]} [dc_3,\varsigma_3^4] \end{align} and \begin{align} -h_{35}b_{(53)^*}h_{53} = \int_{[\overline{124}]} (-db_3 +d[c_3,\sigma_3])+\int_{[\overline{42}]} [dc_3,h_{35}\varsigma_5^1h_{53}]-\int_{[\overline{42}]}[dc_3,h_{35}\varsigma_5^3]+\int_{[\overline{12}]} [dc_3,h_{35}\varsigma_5^3h_{53}]. \end{align} We now evaluate the sum involving the $b$ variables, \begin{align} b_{ (34)^* } + b_{ (31)^* }& -h_{32} b_{ (23)^* } h_{23} -h_{35}b_{ (53)^* } h_{53}= \int_{[\overline{125}]}d(b_3-[c_3,\sigma_3]) + \int_{[\overline{245}]}d(b_3-[c_3,\sigma_3])\nonumber\\ &-\int_{[\overline{145}]}d(b_3-[c_3,\sigma_3])-\int_{[\overline{124}]}d(b_3-[c_3,\sigma_3])-\int_{[\overline{125}]}[dc_3,d\varsigma_3^4] -\int_{[\overline{245}]}[dc_3,\varsigma_3^1]\nonumber\\ &+\int_{[\overline{12}]}[dc_3, h_{34}\varsigma_4^5 h_{43}] +\int_{[\overline{25}]}[dc_3,\varsigma_3^1]-\int_{[\overline{25}]}[dc_3,\varsigma_3^4]-\int_{[\overline{45}]}[dc_3,\varsigma_3^1]\nonumber\\ &+\int_{[\overline{42}]} [dc_3,h_{35}\varsigma_5^1h_{53}]-\int_{[\overline{42}]}[dc_3,h_{35}\varsigma_5^3]+\int_{[\overline{12}]} [dc_3,h_{35}\varsigma_5^3h_{53}]-\int_{[\overline{51}]} [dc_3,\varsigma_3^4]\\ &= \int_{[\overline{1245}]} d^2(b_3-[c_3,\sigma_3])+ \int_{\partial[\overline{125}]}[dc_3,\varsigma_3^4]+\int_{\partial[\overline{245}]}[dc_3,\varsigma_3^1]\nonumber\\ &+\int_{[\overline{12}]}[dc_3, h_{34}\varsigma_4^5 h_{43}] +\int_{[\overline{25}]}[dc_3,\varsigma_3^1]-\int_{[\overline{25}]}[dc_3,\varsigma_3^4]-\int_{[\overline{45}]}[dc_3,\varsigma_3^1]\nonumber\\ &+\int_{[\overline{42}]} [dc_3,h_{35}\varsigma_5^1h_{53}]-\int_{[\overline{42}]}[dc_3,h_{35}\varsigma_5^3]+\int_{[\overline{12}]} [dc_3,h_{35}\varsigma_5^3h_{53}]-\int_{[\overline{51}]} [dc_3,\varsigma_3^4]. \end{align} We have used the observation that the four surfaces we are integrating over in the first four terms after the first equality form the boundary of the tetrahedron $3^*=[\overline{1245}]$. We then use Stokes' theorem to write their sum as a derivative in the bulk of the tetrahedron. We can then use $d^2=0$ to conclude that the first integral after the second equality vanishes. Next, we write out the integrals $\int_{\partial[\overline{125}]}$ and $\int_{\partial[\overline{245}]}$ as integrals over the edges.
We then collect terms: \begin{align} b_{ (34)^* } + b_{ (31)^* } -h_{32} b_{ (23)^* } h_{23} -h_{35}b_{ (53)^* } h_{53} =& \int_{[\overline{12}]} [dc_3, \varsigma_3^4 +h_{34}\varsigma_4^5h_{43}+h_{35}\varsigma_5^3h_{53}]\nonumber\\ &+ \int_{[\overline{42}]}[dc_3, h_{35}\varsigma_5^1 h_{53}-h_{35}\varsigma_5^3-\varsigma_3^1]\\ =& [\ell_3^{[\overline{12}]},V_3^{[\overline{12}]^*}] + [\ell_3^{[\overline{42}]},V_3^{[\overline{42}]^*}], \end{align} which is precisely ${\cal T}^3=0$. \paragraph{Edge simplicity and 1-flatness.} Let us now consider the edge simplicity constraint. We consider this condition instead of the usual flatness constraint because it is the weaker condition that suffices to discretize the potential. The edge simplicity constraint for a given edge $e$ follows directly from the property of the fields we introduced in \eqref{cont eq loop}. By integrating this expression on the edge, we simply get \begin{align} {\cal E}_c^e = \ell_c^e -h_e^{-1} \ell_c^e h_e=0, \quad h_{e}= \prod_{(c_i c_{i+1})\in \partial e^*} h_{c_ic_{i+1}} .\label{edge} \end{align} This constraint can be viewed as the discretization of the constraint $[F\wedge C]=0$. Similar constraints hold when $V$ replaces $\ell$, which is inherited from \eqref{eq: edge continuity1}. This can be seen as a dual-face simplicity constraint. The 1-flatness constraint is obtained by demanding that the constants $h_{cc'}$ form a flat closed holonomy along the loop $\partial e^*$, \begin{align} h_{e}= \prod_{(c_i c_{i+1})\in \partial e^*} h_{c_ic_{i+1}} = 1.\end{align} This constraint implies the constraint \eqref{edge}. \section{2-group structures}\label{sec: 2gp} In this section, we place our findings in the context of higher gauge theory and its discretization. We first briefly recall the concept of higher gauge theory, relying mostly on \cite{Baez:2010ya}. \subsection{From 2-group to higher gauge theory in a nutshell} There is a natural 2-group interpretation for the discretized symmetries we have obtained. We recall that a (strict) 2-group can be seen as a crossed module, $(G,H, t, \rhd)$, which consists of a pair of (Lie) groups $(G,H)$, with a group homomorphism $t:H{\,\rightarrow\,} G$ called the target map and an action $\rhd$ of $G$ on $H$. The target map and the action must satisfy the compatibility relations \begin{eqnarray}\label{comp cond} && t(h)\rhd h'=hh'h^{-1}, \quad t(g\rhd h)= gt(h)g^{-1}. \end{eqnarray} The crossed module is equipped with \textit{two} product laws. The first one is inherited from considering $H >\!\!\! \lhd G $. \begin{equation} (h_1\ , \ g_1)\bullet (h_2\ , \ g_2)= (h_1 (g_1\rhd h_2)\ ,\ g_1g_2), \quad (h\ , \ g)^{-1_\bullet} = ( (g\rhd h)^{-1} \ , \ g^{-1} ). \end{equation} The other multiplication comes from the product of $H$. \begin{eqnarray} &&(h_1\ , \ g_1)\diamond (h_2\ , \ g_2)= (h_1h_2, g_1), \quad \textrm{if } g_2 = t(h_1)g_1. \\ && (h,g)^{-1_\diamond} = (h^{-1}, t(h)g), \textrm{ and unit } (1,g). \end{eqnarray} We note that when the $t$ map is trivial, the vertical composition implies that the holonomy $g_2g_1^{-1}$ must be flat, i.e.\ trivial. \medskip To discuss the \textit{BF} and \textit{BFCG} theories, the relevant notion is that of a Lie 2-algebra, which can be exponentiated to a Lie 2-group and is obtained from it by differentiation \cite{Baez:2003fs}.
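Before differentiating, we note that the compatibility relations \eqref{comp cond} can be checked directly in the example that appears below, namely the Euclidean 2-group, with $G=\mathrm{SO}(4)$ acting canonically on the abelian group $H=\mathbb{R}^4$ and trivial $t$ map. The following minimal numerical sketch is ours (the random sampling is just a test harness, not part of the structure):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_so4():
    # A random SO(4) element from a QR decomposition.
    q, r = np.linalg.qr(rng.normal(size=(4, 4)))
    q = q * np.sign(np.diag(r))   # fix the sign convention
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1             # enforce det = +1
    return q

# Euclidean 2-group data: G = SO(4), H = (R^4, +),
# trivial t map, canonical action g |> h = g h.
t   = lambda h: np.eye(4)         # t sends everything to the identity
act = lambda g, h: g @ h

g = random_so4()
h, hp = rng.normal(size=4), rng.normal(size=4)

# t(h) |> h' = h h' h^{-1}  (additively h + h' - h, since H is abelian)
assert np.allclose(act(t(h), hp), h + hp - h)

# t(g |> h) = g t(h) g^{-1}
assert np.allclose(t(act(g, h)), g @ t(h) @ g.T)

print("compatibility relations hold")
\end{verbatim}
Both relations reduce to trivial identities precisely because $H$ is abelian and $t\equiv 1$, which is the numerical counterpart of the remark on trivial target maps above.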
A Lie 2-algebra can be given by the differential crossed module $(Lie\, G, Lie\, H, \tau, \alpha)$, where $\tau$ and $\alpha$ are obtained by differentiating $t$ and $\rhd$ respectively. The compatibility relations are now \begin{align} \tau (\alpha(x)(y))= [x,\tau(y)], \quad \alpha(\tau(y))(y')=[y,y'], \quad x\in Lie\, G, \quad y,y'\in Lie\, H. \end{align} \medskip Equipped with the notions of Lie 2-group and Lie 2-algebra, we can define the notion of higher gauge theory \cite{baez:2004in}. It is specified in terms of a 1-connection, the usual gauge connection, denoted $A$, a 1-form with values in $Lie\, G$, and a 2-connection, often denoted $B$ or $\Sigma$, a 2-form with values in $Lie\, H$. Together with these connections, we get the associated curvatures. We have the 1-curvature, which is the usual curvature found in gauge theory, $F(A)=d A + \frac{1}{2}[A\wedge A]$. We also have the 2-curvature, which is given by $G(\Sigma,A)= d \Sigma + \alpha(A)(\Sigma)$, where we used the action of $Lie\, G$ on $Lie\, H$. We note that the 1-curvature is actually specified in terms of $\tau(\Sigma)$, \begin{align} F=\tau(\Sigma). \end{align} Finally, we have the 1-gauge and 2-gauge transformations, parameterized by a group element $g\in G$ and a 1-form $a\in Lie\, H$ respectively, \begin{align}\label{2-gaugetr} A'&= g^{-1} A g + g^{-1}d g + \tau(a), \\ \Sigma'&= \alpha(g)(\Sigma)+ d_Aa +a \wedge a. \end{align} \subsection{$BF$ case} The four dimensional $BF$ action can be related to higher gauge symmetries, more specifically to the (co-)tangent 2-group and, at the continuum level, to the associated (co-)tangent 2-Lie algebra \cite{Baez:2010ya}. \subsubsection{Continuum level and 2-Lie algebra} The ${\cal B}$-field, a 2-form with values in $\mathfrak{g}^*\sim \mathbb{R}^d$, is usually interpreted as a Lagrange multiplier implementing the fact that we are dealing with a flat connection. In the higher gauge theory picture, we interpret it instead as a 2-connection, while ${\cal A}$, with values in $\mathfrak{g}$, is the 1-connection. The relevant 2-Lie group is the (co-)tangent 2-group, with $H=Lie\, G^*\sim \mathbb{R}^d$, with $G$ acting by the coadjoint action on $Lie\, G^*$, and with constant $t$-map, $t=1$. The connections are valued in the associated Lie 2-algebras. The translational symmetry of the $BF$ action is then interpreted as the 2-gauge transformation. \subsubsection{Discrete level and 2-Lie group} While the relevant 2-group is the (co-)tangent 2-group, we have obtained at the discrete level that this 2-group can actually be seen as a pair of trivial 2-groups, i.e.\ a pair of 1-groups. Firstly, on the dual complex, we have solely a decoration of the links by group elements in $G$. There is no decoration on the dual faces. This means that the 2-group is actually trivial; we have\footnote{By $H=1$ we mean that the group consists solely of the identity element; by $t\equiv1$ we mean that the $t$ map associates to any element in $H$ the identity element in $G$. } \begin{align} G\equiv ISO(4), \quad H= 1, \quad t\equiv 1, \quad \rhd \textrm{ trivial}. \end{align} We note that the target map being trivial is also equivalent to the dynamical constraint that the $\mathrm{ISO}(4)$ holonomy on the links must be flat. \medskip Secondly, on the triangulation, we have no decoration on the edges (so we can equivalently set any such decoration to the identity), but the triangles are decorated by the abelian group $\mathbb{R}^{10}\sim \mathfrak{iso}(4)^*$.
\begin{align} G\equiv 1, \quad H= \mathbb{R}^{10}\sim \mathfrak{iso}(4)^*, \quad t\equiv 1, \quad \rhd \textrm{ trivial}. \end{align} \medskip Finally, the two (trivial) 2-groups can be seen as dual to each other, one being the configuration space, the other the momentum space; together they form the (co-)tangent 2-group. Put together, we can set up a phase space for each (link, trivial face decoration)/(trivial edge decoration, triangle) pair, with symplectic form given by \eqref{eq: potential bf}. Let us reformulate the previous statement. Locally, the cotangent bundle $T^*G\sim \mathbb{R}^d \rtimes G$ has a simple crossed module structure (with a trivial $t$-map), hence it can be interpreted as a 2-group. Since the cotangent bundle $T^*G$ is naturally equipped with a symplectic form, this 2-group is also equipped with a symplectic form. This points towards a possible generalization of the Heisenberg double \cite{SemenovTianShansky:1993ws} to the 2-group context. \subsection{$BFCG$ case} \subsubsection{Continuum level and 2-Lie algebra} While the \textit{BFCG} action we considered is equivalent to a \textit{BF} action, the 2-gauge structure is actually different. Now the 2-connection is not given by the full ${\cal B}=B+\Sigma$ field with values in $Lie\, G^*\sim \mathfrak{iso}(4)^*$, but by the translational sector $\Sigma$ only. The $B$-field component, with values in $\so(4)^*$, is now seen as a Lagrange multiplier. The 1-gauge connection is not the full connection ${\cal A}=A+ C$ but $A$, lying in the $\so(4)$ sector, while the $C$ component is seen as a Lagrange multiplier. As a consequence, the relevant gauge symmetry is the Euclidean 2-group $\mathrm{SO}(4)\ltimes \mathbb{R}^4$. While this might look like a cosmetic change with respect to the original Euclidean BF theory, it is actually deeper than that. Indeed, the addition of the boundary term modifying the Euclidean BF theory into the BFCG theory can be seen as a dualization, more exactly a semi-dualization, since we only dualize ``half'' of the Lie algebra $\mathfrak{iso}(4)$, namely the translational part. While the charge algebra is not modified, the place where these charges are discretized is modified. Said otherwise, in the \textit{BF} formulation the ${\cal B}$-field, the 2-connection, is seen solely as momentum, and the 1-connection solely as configuration variable. In contrast, in the \textit{BFCG} case, the momentum variables are a mix of the 1-connection $C$ (in $\mathbb{R}^4$) and the 2-form $B$ (in $\so(4)^*$), while the configuration variables are given in terms of the 1-connection $A$ (in $\so(4)$) and the 2-connection $\Sigma$ (in $\mathbb{R}^4$). \medskip Nevertheless, due to the equivalence with the Euclidean BF theory, the (co-)tangent 2-group structure is still present. To see how the pieces still connect together, we can move to the discrete picture. \subsubsection{Discrete level and 2-Lie groups} Firstly, on the dual complex, we have $SO(4)$ holonomies decorating the links and elements in $\mathbb{R}^4$ decorating the dual faces. We recognize this as the Euclidean 2-group, \begin{align} G\equiv SO(4), \quad H= \mathbb{R}^4, \quad t\equiv 1, \quad \rhd \textrm{ canonical action of } SO(4) \textrm{ on } \mathbb{R}^4. \end{align} Once again, the target map being trivial is also equivalent to the dynamical constraint stating that the $SO(4)$ holonomy on the links must be flat.
\medskip Secondly, on the triangulation, the edges are decorated by elements in $\mathbb{R}^{4*}\sim \mathbb{R}^{4}$, and the triangles by the abelian group $\mathbb{R}^{6}\sim \so(4)^*$. Hence we have the trivial crossed module \begin{align} G\equiv \mathbb{R}^{4}, \quad H= \mathbb{R}^{6}\sim \so(4)^*, \quad t\equiv 1, \quad \rhd \textrm{ trivial}. \end{align} \medskip Finally, once again, the two 2-groups can be seen as dual to each other. Put together, we can set up a phase space for each (link, face)/(edge, triangle) pair, with symplectic form given by \eqref{eq: potentialbfcg}. The total phase space is again based on $T^*\mathrm{ISO}(4)$. This is where the equivalence with the usual Euclidean $BF$ formulation appears. As we recalled in the previous section, $T^*\mathrm{ISO}(4)$ can be seen as a crossed module equipped with a symplectic structure. Unlike in the \textit{BF} case, the components of this 2-group are now themselves non-trivial 2-groups, in the sense that both carry a decoration on the face/triangle. This lends further support to the possible existence of a 2-Heisenberg double, defined as a 2-group seen as a phase space whose configuration and momentum spaces are both 2-groups. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figures/doublewedge3.pdf} \caption{Pictorial representation of the semi-dualization process relating the BF discretization to the BFCG one.} \label{fig:doublewedge3} \end{figure} \section{Outlook} We have revisited the discretization of 4d $BF$ theory and highlighted that this discretization (and in turn the quantum theory) is very sensitive to the choice of boundary terms. This is yet another example of the importance of such boundary data in probing the quantum regime of a gauge theory \cite{Freidel:2020xyx, Freidel:2020svx, Freidel:2020ayo}. Here the boundary term just implemented a partial change of polarization, which allowed us to rewrite the Euclidean $BF$ theory as the $BFCG$ theory. It was already known that a full change of polarization leads, at the discretized level, to the notion of the ``$BF$ vacuum'' \cite{Dittrich:2014wda}, so that the kinematical symmetries are not the Lorentz ones (through the Gauss constraint) but instead the translational ones (through the flatness constraint). Here we performed only a partial change of polarization, in the case where the gauge group is not the Lorentz group but the Euclidean group. At the level of the action, this implemented a change from a regular $BF$ action to the $BFCG$ action. At the discrete level, one gets the usual phase space built out of $T^*\mathrm{ISO}(4)$ for each link of the dual 1-complex, which is the classical phase space behind the standard notion of spin networks. With the partial change of polarization, we obtain the phase space of the so-called $G$-networks, which are based on 2-group structures. We were therefore able to recover the expression of the $G$-network discrete variables in terms of the continuum variables. \medskip This identification is actually very important for relating gravity to the \textit{BF} or \textit{BFCG} theory. Indeed, gravity can be obtained at the continuum level by constraining the $B$ field through the simplicity constraints \cite{Mikovic:2011si, Mikovic:2015hza, Belov:2018uko}. In the \textit{BFCG} case, what stands for the discretized \textit{B} field is actually a function of several of the continuous fields.
Hence demanding that such a discretized \textit{B} field satisfies the simplicity constraint is not equivalent to the continuum simplicity constraint. This might explain why the discrete simplicity constraint leads to such a drastic reduction of the model \cite{Mikovic:2011si, Mikovic:2015hza}. It would be interesting to identify the actually relevant discretization of the simplicity constraint in the \textit{BFCG} case; we leave this question for the future. \medskip The Euclidean \textit{BF} and \textit{BFCG} theories are equivalent up to a boundary term, so we expect them to have the same physical content. We note that each theory corresponds to different symmetries, both given in terms of 2-groups. In the BF case, we have a pair of trivial 2-groups, where one of the groups is the trivial group, so that the 2-group is really a 1-group (decorating the links or the triangles). In the BFCG case, we deal with the Euclidean 2-group and its dual. At the quantum level, amplitudes can be constructed in terms of the representations of each symmetry structure, be it the Euclidean 1-group or the Euclidean 2-group. Since the two descriptions describe the same theory, there must be some relation between the amplitudes expressed in terms of the different representation theories. Being able to identify this relation explicitly would be extremely interesting. \medskip In fact, some ideas on how to proceed could rely on the concept of semi-dualization \cite{Majid:1996kd, Majid:2008iz}. The algebraic structure underlying the discretized BF theory, the cotangent bundle $T^*G$, is the Drinfeld double $\mathfrak{d}\sim \mathfrak{iso}(4)\rhd\!\!\!< \mathfrak{iso}(4)^*\sim (\mathbb{R}^4>\!\!\!\lhd \so(4))\rhd\!\!\!< (\mathbb{R}^{4*}>\!\!\!\lhd \so(4)^*)$ as a Lie algebra. To get to the BFCG formulation, we swapped the sectors $\mathbb{R}^4$ and $\mathbb{R}^{4*}$, which amounts to slicing $\mathfrak{d}$ in a different manner. As a Lie algebra, the relevant Drinfeld double for the BFCG formulation is still $\mathfrak{d}$, but now given by \begin{align} \mathfrak{d}\sim (\mathbb{R}^{4*}>\!\!\!\lhd \so(4))\rhd\!\!\!< (\mathbb{R}^{4}>\!\!\!\lhd \so(4)^*). \end{align} Let us quickly recall what semi-dualization is. If we consider a pair of Lie algebras $\mathfrak{g}_i$ (equipped with some cocycle), acting on each other so that we can consider the big Lie algebra $\mathfrak{g}_1\bowtie \mathfrak{g}_2$, then, roughly, the semi-dualization is defined via the map \begin{align} \mathfrak{g}_1\bowtie \mathfrak{g}_2 \rightarrow \mathfrak{g}_1\rhd\!\!\!\blacktriangleleft \mathfrak{g}_2^*, \end{align} where $\blacktriangleleft$ encodes a co-action. However, since we are still dealing with an abelian group $\mathbb{R}^4$, there is no kick-back action and no co-action upon semi-dualization. Note, however, that the non-trivial 2-group appeared thanks to the semi-dualization! We expect that a deformation of $\mathbb{R}^4$ into a non-abelian group, such as $AN(3)$ for example, would lead to interesting non-trivial structures related to Majid's bicrossproduct construction \cite{Majid:1996kd}. \medskip This latter consideration brings us to a natural extension of the current work: the inclusion of the cosmological constant. Instead of working with $\mathrm{ISO}(4)$, we could work with an $\mathrm{SO}(4,1)$ \textit{BF} theory.
Under the semi-dualization we would expect to recover the Poincar\'e 2-group decorating the dual complex and a non-trivial 2-group on the triangulation given by the crossed module $AN\ltimes \mathbb{R}^6$, which has a trivial $t$ map. Interestingly, to be consistent with the Poisson structure, these 2-groups should carry some non-trivial Poisson structure (inherited from the co-action we just mentioned), so that they would really be quantum 2-groups upon quantization. Preliminary studies point to the appearance of the $\kappa$-Poincar\'e group and the $\kappa$-Poincar\'e algebra. This is work in progress \cite{wip}. \medskip During our discussion, we did not consider excitations of any kind. We could decorate the vertices with 2-curvature excitations, or the edges with 1-curvature excitations. While a 4d BF theory is naturally coupled with string-like excitations \cite{Baez:2006sa,Baez:2006un}, it would be interesting to see how these are expressed when using the non-trivial 2-gauge picture, i.e.\ the BFCG formulation. We leave this for later investigations. \medskip Finally, we have discussed that BF theory has two main discretizations: the standard one, and the dual one where we swap the full group with the dual abelian group. We argued that we can perform a partial dualization, where we swap only the translational sector, to recover the BFCG discretization. There is therefore a fourth case, where we perform a partial dualization on the Lorentz sector. This would amount to a ``dual'' BFCG discretization. We leave this case for later study. \section*{Acknowledgment} We would like to thank B. Dittrich and A. Riello for some discussions and suggestions.
{ "timestamp": "2021-05-06T02:07:37", "yymm": "2105", "arxiv_id": "2105.01817", "language": "en", "url": "https://arxiv.org/abs/2105.01817" }
\section{Acknowledgment} The HMC dataset with contours was collected at Haukeland University Hospital, Bergen, Norway, and was provided to us by responsible oncologist Svein Inge Helle and physicist Liv Bolstad Hysing. The EMC dataset with contours was collected at Erasmus University Medical Center, Rotterdam, The Netherlands, and was provided to us by radiation therapist Luca Incrocci and physicist Mischa Hoogeman. They are gratefully acknowledged. \section{Conclusion} \label{conclusion} In this paper, we proposed to formulate the registration and segmentation tasks as a multi-task learning problem. We presented various approaches to do so, both on the architectural level and via the loss function. We experimented with different network architectures in order to investigate the setting that best maximizes the information flow between these tasks. Moreover, we compared different loss weighting methods in order to optimally combine the losses from these tasks. We demonstrated that multi-task learning approaches outperform their single-task counterparts. Using an adaptive parameter sharing mechanism via Cross-stitch units gives the networks the freedom to share information between these two tasks, which resulted in the best performance. An equal loss weighting approach had similar performance to more sophisticated methods. The Cross-stitch network with equal loss weights achieved a median MSD of 0.99 mm, 0.82 mm, 1.13 mm and 1.47 mm on the validation set and 1.09 mm, 1.24 mm, 1.02 mm, and 2.10 mm on the independent test set for the prostate, bladder, seminal vesicles, and rectum, respectively. That is equal to or less than the slice thickness (2 mm). Due to its fast inference, the proposed method is highly promising for automatic re-contouring of follow-up scans for adaptive radiotherapy, potentially reducing treatment-related complications and therefore improving patient quality-of-life after treatment. \section{Discussion} \label{discussion} In this study, we proposed to merge image registration and segmentation at the architectural level as well as via the loss function, in a multi-task learning setting. We studied different network architectures and loss weighting methods in order to explore how these tasks interact, and thereby leverage the shared knowledge between them. Moreover, we carried out an extensive quantitative analysis in the context of adaptive radiotherapy, and compared the proposed multi-task methods to their single-task counterparts. In this paper, a substantial number of experiments were executed, in which we explored the following methodological choices: the bending energy weight, the inputs to the STL and MTL networks, and the loss weighting method. We also performed a thorough analysis of how the Cross-stitch units and loss weights evolve during training. Finally, we compared our proposed methods against state-of-the-art methods. In all the experiments we fixed the bending energy weight, so that the network would not set it too low in order to improve the DSC of the deformed contours at the expense of the smoothness of the predicted DVF. As shown in Figure \ref{fig:bending_energy}, low bending energy weights result in better contour quality at the expense of the smoothness of the predicted DVF. For the inputs to the STL networks, additionally feeding $\SMMath$\xspace to the segmentation network resulted in a statistically significant improvement, especially for the seminal vesicles.
Apparently the network considers $\SMMath$\xspace an initial estimate of $\SFMath$\xspace and subsequently uses it as guidance for its final prediction. When feeding $\IMMath$\xspace, the results deteriorated; this may confuse the network, as $\IFMath$\xspace and $\IMMath$\xspace contain the same anatomy but with different shapes and local positions. The addition of both $\IMMath$\xspace and $\SMMath$\xspace performed similarly to the addition of only $\SMMath$\xspace, which indicates that the networks learned to ignore $\IMMath$\xspace. For the registration network, the addition of $\SMMath$\xspace resulted in a sub-optimal result, since the $\SMMath$\xspace contours on their own do not represent the underlying deformation well. For the inputs to the MTL networks, in the JRS-reg network, feeding $\SMMath$\xspace alongside $\IFMath$\xspace and $\IMMath$\xspace resulted in a performance similar to not feeding it. This indicates that the incorporation of $\SMMath$\xspace via the DSC loss already enables the JRS-reg network to exploit this extra information, and that additionally adding $\SMMath$\xspace as a network input does not provide further benefits. In the Cross-stitch network, we found that adding $\SMMath$\xspace to the registration network results in a statistically significant improvement. Furthermore, feeding $\SMMath$\xspace to one of the networks is sufficient, showing that the segmentation and registration networks communicate their knowledge efficiently through the Cross-stitch units. As our baseline methods, we selected the STL networks with $\IFMath$\xspace (for segmentation) and $\IFMath$\xspace alongside $\IMMath$\xspace (for registration) as inputs. Between these two networks, the registration network performed better overall, since it leverages prior knowledge of the organs in the moving image. For the bladder, the segmentation network achieved better results; apparently the registration network had difficulty finding the correspondence between the bladder in the fixed and moving images, since it tends to deform considerably between visits. However, the segmentation network failed to segment the seminal vesicles in five cases. This is explained by the fact that the seminal vesicles are a difficult structure to segment, due to their relatively small size, undefined borders, and poor contrast with the surroundings. The registration network, on the other hand, is able to employ the surrounding anatomy as context to accurately warp the seminal vesicles. For the multi-task networks, we demonstrated that fusing the segmentation and registration tasks performs better than the single-task counterparts. Merging these tasks using the Cross-stitch network achieved the best results on both the validation and testing datasets. Different loss weighting methods achieved comparable results, as shown in Table \ref{table:HMC_MSD_weighting}. In Figure \ref{fig:loss-weights}, homoscedastic uncertainty tended to weigh all losses equally, using a nearly fixed weight of 0.9 during most of the training iterations. In contrast, DWA tended to fluctuate during training, as the weights are updated based on the ratio of the losses from previous iterations, which fluctuates due to the batch-based training. Since the fixed and moving images are affinely registered beforehand, DWA tended to down-weigh the registration loss and the associated DSC at the beginning of the training, while weighting the segmentation network loss more in order to improve its prediction.
Later during training, all the weights stabilized around 0.9, similar to homoscedastic uncertainty. Although both methods stabilized around the same value (0.9) by the end of the training, homoscedastic uncertainty achieved slightly better results than the DWA and equal weighting methods, except for the Cross-stitch network. Our reasoning is that homoscedastic uncertainty, unlike the other methods, is learnable during training and highly dependent on the underlying task uncertainty. By analyzing the performance of the Cross-stitch units, as demonstrated in Figure \ref{fig:CS-weights}, we found that the Cross-stitch units tended to average the feature maps in the down-sampling path, while preferring to be more task-specific in the upsampling path. This somewhat mimics the shared encoder double decoder (SEDD) network, but in contrast to that network, the Cross-stitch network does not completely split the decoder paths. This finding confirms that the segmentation and registration tasks are correlated and thereby encode similar features. We carried out an experiment to study the effect of the bladder filling protocol in the HMC and EMC datasets. As shown in Figure \ref{fig:HMC_bladder_filling}, the HMC dataset has a bladder filling protocol, so the bladder volume varies only slightly (around 100 mL) between sessions, which is not the case for the EMC dataset, as shown in Figure \ref{fig:EMC_bladder_filling}. Since the registration-based networks and joint networks were trained on small bladder deformations, they failed on large deformations; the segmentation network, however, was not affected, since it relies on the underlying texture rather than the deformation to segment the bladder. In terms of the smoothness of the predicted DVF, shown in Table \ref{table:dvf-table}, the MTL networks achieved lower values for the standard deviation of the Jacobian as well as for the folding fraction, compared to the STL network (Reg), on both the test and validation sets. Our reasoning is that joining the segmentation task to the registration task acts as an additional regularization of the registration network: the higher the quality of the predicted DVF, the higher the quality of the propagated contours, and subsequently the lower the DSC loss. The numbers on the test set are slightly higher than on the validation set, but this is due to the difference in deformations between the two sets and the fact that the network had not seen the test set before. This can be addressed using transfer learning, as suggested by Elmahdy \textit{et al.} \cite{elmahdy2020patient}, or by using synthetic deformations that mimic the ones present in the EMC dataset. In this paper, we compared our algorithm against different algorithms from various categories: non-learning (\texttt{elastix} \cite{elastix}, a popular conventional tool), hybrid \cite{MedPhys}, and GAN-based \cite{JrsGan}. The presented multi-task networks outperformed these approaches on the validation set and performed on par with these methods on the test set. However, the test time for the hybrid and \texttt{elastix} methods is on the order of minutes, while the presented methods have the advantage of fast prediction, in less than a second. This enables online automatic re-contouring of daily scans for adaptive radiotherapy. Moreover, in our hybrid study \cite{MedPhys} we carried out an extensive dosimetric evaluation alongside the geometric evaluation.
The predicted contours from that study met the dose coverage constraints in 86\%, 91\%, and 99\% of the cases for the prostate, seminal vesicles, and lymph nodes, respectively. Since our multi-task networks outperformed the geometrical results of that study, we expect that our contours would achieve a higher success rate in terms of dose coverage. This could potentially reduce treatment-related complications and therefore improve patient quality-of-life after treatment. A promising direction for future research is the addition of a third task, potentially radiotherapy dose plan estimation, so that we can generate contours that are consistent with an optimal dose plan. Further studies could also focus on sophisticated MTL network architectures similar to sluice networks \cite{ruder2017sluice} or routing networks \cite{rosenbaum2017routing}. Moreover, one could study how to fuse the contours from the segmentation and registration paths in a smarter way, rather than simply selecting one of them based on the validation set. \section{Datasets, Implementation, and Evaluation} \label{exp_results_section} \subsection{Datasets} This study involves two datasets, from two different institutes and scanners, of patients who underwent intensity-modulated radiotherapy for prostate cancer. The first dataset is from Haukeland Medical Center (HMC), Norway. The dataset has 18 patients with 8--11 daily CT scans, each corresponding to a treatment fraction. These scans were acquired using a GE scanner and have 90 to 180 slices with a voxel size of approximately 0.9 $\times$ 0.9 $\times$ 2.0 mm. The second dataset is from Erasmus Medical Center (EMC), The Netherlands. This dataset consists of 14 patients with 3 daily CT scans each. The scans were acquired using a Siemens scanner, and have 91 to 218 slices with a voxel size of approximately 0.9 $\times$ 0.9 $\times$ 1.5 mm. The target structures (prostate and seminal vesicles) as well as the organs-at-risk (bladder and rectum) were manually delineated by radiation oncologists. All datasets were resampled to an isotropic voxel size of 1 $\times$ 1 $\times$ 1 mm. All scans and corresponding contours were affinely registered beforehand using \texttt{elastix} \cite{elastix}, so that corresponding anatomical structures would fit in the network's field of view. The scan intensities were clipped to [-1000, 1000]. \subsection{Implementation and Training Details}\label{implement_details} All experiments were developed using Tensorflow (version 1.14) \cite{abadi2016tensorflow}. The convolutional layers were initialized with a random normal distribution ($\mu=0.0$, $\sigma=0.02$). All parameters of the Cross-stitch units were initialized using a truncated normal distribution ($\mu=0.5$, $\sigma=0.25$) in order to encourage the networks to share information at the beginning of the training. In order to ensure fairness regarding the number of parameters in all the networks, the number of filters for the Cross-stitch network was set to [16, 32, 64, 32, 16], while for the other networks these numbers were scaled by $\sqrt{2}$, resulting in [23, 45, 91, 45, 23] filter maps. This results in approximately $7.8 \times 10^5$ trainable parameters for each network. The networks were trained using the RAdam optimizer \cite{liu2019variance} with a fixed learning rate of $10^{-4}$. Patches were sampled equally from the target organs, the organs-at-risk, and the torso. All networks were trained for 200K iterations with an initial batch size of 2.
The batch size was then effectively doubled by switching the fixed and moving patches, so that the network warps the fixed patch to the moving patch and vice versa in the same training iteration. The networks were trained and optimized on the HMC dataset, while the EMC dataset was used as an independent test set. Training was performed on a subset of 111 image pairs from 12 patients, while validation and optimization were carried out on the remaining 50 image pairs from 6 patients. From each image, 1,000 patches of size 96 $\times$ 96 $\times$ 96 voxels were sampled. The patch size was chosen so that it would fit in the GPU memory, while still producing a patch of size $17^3$ at the lowest resolution, which is a reasonable size to encode the deformation of the surrounding region. Losses from the deeply supervised resolutions were weighted equally, $\frac{1}{3}$ each. Training was performed on a cluster equipped with NVIDIA RTX6000, Tesla V100, and GTX1080 Ti GPUs with 24, 16 and 11 GB of memory, respectively. \subsection{Evaluation Metrics} The automatically generated contours are evaluated geometrically by comparing them against the manual contours for the prostate, seminal vesicles, rectum, and bladder. The Dice similarity coefficient (DSC) measures the overlap between contours: \begin{equation}\label{eq:dsc} \mathrm{DSC}= \frac{2 \mid {S_f} \cap {S_g} \mid}{\mid {S_f} \mid + \mid {S_g} \mid}, \end{equation} where ${S_f}$ is the manual contour and ${S_g}$ is the generated contour from either the segmentation or the registration network. The distance between the contours is measured by the Mean Surface Distance (MSD) and the Hausdorff Distance (HD), defined as follows: \begin{align} \mathrm{MSD} &= \frac{1}{2} \left( \frac{1}{n} \sum_{i=1}^{n} d \left( a_i, {S_g} \right) + \frac{1}{m} \sum_{j=1}^{m} d \left( b_j, {S_f} \right) \right),\label{eq:msd}\\ \mathrm{HD} &= \max\! \left\lbrace\! \max_i \left\lbrace d \left( a_i, {S_g} \right) \right\rbrace , \max_j \left\lbrace d \left( b_j, {S_f} \right) \right\rbrace \!\right\rbrace,\label{eq:hd} \end{align} where $\{$$a_1$; $a_2$; \dots ; $a_n$$\}$ and $\{$$b_1$; $b_2$; \dots; $b_m$$\}$ are the surface mesh points of the manual and generated contours, respectively, and $d \left( a_i, {S_g} \right) = \min_j \, \|b_j - a_i\|$. For all the experiments, we apply a largest-connected-component operation to the network prediction. In order to evaluate the quality of the deformations, we calculate the determinant of the Jacobian matrix. A Jacobian of 1 indicates that no volume change has occurred; a Jacobian $>$ 1 indicates expansion, a Jacobian between 0 and 1 indicates shrinkage, and a Jacobian $\leq$ 0 indicates a singularity, i.e. a place where folding has occurred. We quantify the smoothness and quality of the DVF by reporting the fraction of foldings per image and the standard deviation of the Jacobian, alongside the MSD of the segmentation. A repeated one-way ANOVA test was performed using a significance level of $p = 0.05$. P-values are only stated for the comparisons between the best network and the other networks. \section{Experiments and Results} \begin{table*}[!t] \centering \setlength{\tabcolsep}{3pt} \caption[]{The effect of network input for the different architectures on the validation set (HMC) in terms of MSD (mm). Lower values are better.
Here, $\oplus$ is the concatenation operation, and $\cdot \| \cdot$ represents the inputs to the segmentation network (left of $\|$) and the inputs to the registration network (right of $\|$). Stars denote one-way ANOVA statistical significance with respect to the Cross-stitch network with ${I_f} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ as inputs.} \resizebox{\textwidth}{!}{ \begin{tabular}{lcllclclclc} &&&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Input & Output path&\multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{4}{*}{ Seg }&\multirow{1}{*}{ $ {I_f}$ }& &$1.49 \pm 0.3^{*}~$ & 1.49 & $2.50 \pm 2.6~$ & 2.09 & $3.39 \pm 2.2~$ & 2.73 & $1.60 \pm 1.1^{*}~$ & 1.13 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{S_m}$ }&& $1.31 \pm 0.4~$ & 1.23 & ${1.63} \pm {0.9}~$ & {1.26} & $2.88 \pm 3.4~$ & {2.06} & $1.12 \pm 0.5~$ & {0.97} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& $3.06 \pm 0.6^{*}~$ & 3.01 & $5.36 \pm 4.4~$ & 3.71 & $14.57 \pm 9.4^{*}~$ & 11.58 & $1.46 \pm 1.3~$ & 1.12 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& ${1.26} \pm {0.4}~$ & {1.20} & $2.08 \pm 2.2~$ & 1.27 & ${2.79} \pm {1.6}~$ & 2.45 & ${1.05} \pm {0.4}~$ & {0.97} \\ \hline \multirow{2}{*}{ Reg }&\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& ${1.43} \pm {0.8}^{*}~$ & {1.29} & ${1.71} \pm {1.4}^{*}~$ & {1.37} & ${2.44} \pm {1.1}^{*}~$ & {2.17} & ${3.40} \pm {2.3}^{*}~$ & {2.71} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& $1.91 \pm 1.3~$ & 1.59 & $1.92 \pm 1.5~$ & 1.44 & $2.58 \pm 1.1~$ & 2.33 & $3.88 \pm 2.5~$ & 3.16 \\ \hline \multirow{2}{*}{ JRS-reg }&\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& ${1.16} \pm {0.3}~$ & 1.16 & ${1.32} \pm {0.6}~$ & {1.11} & ${2.08} \pm {1.0}~$ & {1.82} & ${2.57} \pm {2.0}~$ & 2.04 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& $1.20 \pm 0.4~$ & {1.13} & $1.35 \pm 0.7~$ & 1.16 & ${2.08} \pm {1.0}~$ & {1.82} & $2.63 \pm 2.3~$ & {1.90} \\ \hline \multirow{8}{*}{ Cross-stitch }&\multirow{2}{*}{ $ {I_f} \: || \: {I_f}\oplus{I_m}$ }&Segmentation & \textcolor{light-gray}{$1.47 \pm 0.3^{*}$} & \textcolor{light-gray}{1.48} & \textcolor{light-gray}{$2.93 \pm 3.0^{*}$} & \textcolor{light-gray}{2.08} & \textcolor{light-gray}{$2.93 \pm 2.0^{*}$} & \textcolor{light-gray}{2.25} & $1.19 \pm 1.0$ & 0.89 \\ &&Registration & $1.10 \pm 0.3~$ & 1.07 & $1.38 \pm 0.7~$ & 1.17 & $2.12 \pm 1.0$ & 1.89 & \textcolor{light-gray}{$2.55 \pm 2.1$} & \textcolor{light-gray}{1.89} \\ [1.5mm] &\multirow{2}{*}{ $ {I_f} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & ${1.06} \pm {0.3}~$ & {0.99} & ${1.27} \pm {0.4}~$ & \textcolor{light-gray}{1.15} & ${1.76} \pm {0.8}~$ & {1.47} & ${0.91} \pm {0.4}~$ & {0.82} \\ &&Registration & \textcolor{light-gray}{$1.10 \pm 0.3~$} & \textcolor{light-gray}{1.06} & \textcolor{light-gray}{$1.30 \pm 0.6~$} & {1.13} & \textcolor{light-gray}{$2.00 \pm 1.0~$} & \textcolor{light-gray}{1.75} & \textcolor{light-gray}{$2.45 \pm 2.1~$} & \textcolor{light-gray}{1.81} \\ [1.5mm] &\multirow{2}{*}{ $ {I_f}\oplus{S_m} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & \textcolor{light-gray}{$2.05 \pm 0.7^{*}$} & \textcolor{light-gray}{2.00} & \textcolor{light-gray}{$3.66 \pm 4.4^{*}$} & \textcolor{light-gray}{2.19} & \textcolor{light-gray}{$2.44 \pm 1.0^{*}$} & \textcolor{light-gray}{2.35} & $1.09 \pm 0.5^{*}$
& 0.93 \\ &&Registration & $1.40 \pm 0.4$ & 1.35 & $1.31 \pm 0.6~$ & 1.17 & $2.27 \pm 1.0$ & 2.02 & \textcolor{light-gray}{$2.56 \pm 1.9$} & \textcolor{light-gray}{1.96} \\ [1.5mm &\multirow{2}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & $1.08 \pm 0.3~$ & 1.05 & \textcolor{light-gray}{$1.54 \pm 0.9^{*}$} & \textcolor{light-gray}{1.28} & $1.88 \pm 1.0~$ & 1.61 & $1.01 \pm 0.7~$ & {0.82} \\ &&Registration & \textcolor{light-gray}{$1.20 \pm 0.3$} & \textcolor{light-gray}{1.18} & $1.35 \pm 0.7~$ & 1.16 & \textcolor{light-gray}{$2.12 \pm 1.1$} & \textcolor{light-gray}{1.87} & \textcolor{light-gray}{$2.54 \pm 2.2$} & \textcolor{light-gray}{1.80} \\ \hline \end{tabular} } \label{table:network_input_MSD} \end{table*} In the paper we present two single-task networks dubbed \textit{Seg} and \textit{Reg} networks (see Sections \ref{seg_network} and \ref{reg_network} for more details). Moreover, we investigated multiple multi-task networks, namely JRS-reg, dense, SEDD, and Cross-stitch (see Sections \ref{jrs_network}, \ref{dense_network}, \ref{sedd_network}, and \ref{cs_network} for more details). We compared our proposed methods against three state-of-the-art methods that were developed for prostate CT contouring. These methods represent three approaches, namely an iterative conventional registration method, a deep learning-based registration method, and a hybrid method. For the iterative method, we used \texttt{elastix} software \cite{elastix} with the NCC similarity loss using the settings proposed by Qiao \emph{et. al.} \cite{qiao2017fast}. In the deep learning method proposed by Elmahdy \emph{et. al.} \cite{JrsGan}, a generative network is trained for contour propagation by registration, while a discrimination network evaluates the quality of the propagated contours. Finally, we compare our methods against the hybrid method proposed by Elmahdy \emph{et. al.} \cite{MedPhys}, where a CNN network segments the bladder and then feeds it to the iterative registration method as prior knowledge. Following, we optimize some of the network settings on the validation set (HMC), in order to investigate the influence of the bending energy weight, network inputs, weighting strategy and network architecture on the results. Then, on the independent test set, we present the final results comparing with methods from the literature. \subsection{Bending Energy Weight} \label{bending_Section} We compared the single-task registration, the JRS-reg method and the Cross-stitch network for a set of bending energy weights, see Equations (\ref{eq:RegistrationLoss}) and (\ref{eq:general_mtl}), while the weights of the other loss functions are set to 1. Figure \ref{fig:bending_energy} shows the performance of the aforementioned methods using different bending energy weights. The optimal performance of the registration network occurs at a bending weight of 0.5, while the optimal bending weight for both JRS-reg and Cross-stitch network is much lower but with higher standard deviation of the Jacobian. Therefore, for the remainder of the paper we set the weight of the bending energy to 0.5 since it achieves the best compromise between the contour performance in terms of MSD and the registration performance in terms of the std. of the Jacobian determinant. \begin{figure}[t!] 
\begin{center} \includegraphics[width=1\linewidth]{figures/bending_plot2.pdf} \caption{The performance of the registration, JRS-registration and Cross-stitch networks with different bending energy weights on the validation set (HMC), in terms of mean MSD averaged over the four organs. The annotation at each point represents the standard deviation of the determinant of the Jacobian.} \label{fig:bending_energy} \end{center} \end{figure} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[]{MSD (mm) values for the different networks and loss weighting methods for the HMC dataset. Lower values are better. Stars and daggers denote one-way ANOVA statistical significance for inter-network experiments with respect to Homoscedastic weights and intra-network experiments with respect to Cross-stitch with Equal weights, respectively. Grey numbers represent the values of the worst path between the segmentation and registration paths.} \resizebox{\textwidth}{!}{ \begin{tabular}{llllclclclc} &&&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Weight & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{3}{*}{ JRS-reg }&\multirow{1}{*}{Equal}&Registration & $1.20 \pm 0.4$ & 1.13 & {$1.35 \pm 0.7~$} &{1.16} & {$2.08 \pm 1.0$} & {1.82} & {$2.63 \pm 2.3^{*}$} & {1.90} \\ &\multirow{1}{*}{Homoscedastic}&Registration & $1.20 \pm 0.3$ & {1.20} & ${1.22} \pm {0.5}~$ & {1.07} & $2.05 \pm 1.0$ & 1.81 & $2.34 \pm 2.2$ & 1.60 \\ &\multirow{1}{*}{DWA}&Registration & {$1.22 \pm 0.3$} & 1.18 & {$1.37 \pm 0.7^{*}~$} & {1.20} & {$2.29 \pm 1.1^{*}$} & {2.04} & {$3.18 \pm 2.4^{*}$} & {2.43} \\ \hline \multirow{6}{*}{ Dense }&\multirow{2}{*}{Equal}&Segmentation & $1.14 \pm 0.4$ & 1.06 & \textcolor{light-gray}{$1.73 \pm 2.1~$} & \textcolor{light-gray}{1.12} & $1.91 \pm 0.9~$ & 1.64 & $1.04 \pm 0.7$ & 0.87 \\ &&Registration & \textcolor{light-gray}{$1.20 \pm 0.3$} & \textcolor{light-gray}{1.11} & $1.33 \pm 0.7^{*}~$ & 1.10 & \textcolor{light-gray}{$2.16 \pm 1.1$} & \textcolor{light-gray}{1.85} & \textcolor{light-gray}{$2.56 \pm 1.9$} & \textcolor{light-gray}{1.90} \\ [1mm] &\multirow{2}{*}{Homoscedastic}&Segmentation & $1.09 \pm 0.3$ & 1.04 & \textcolor{light-gray}{$1.51 \pm 1.2~$} & \textcolor{light-gray}{1.13} & $1.86 \pm 0.8~$ & 1.69 & $0.99 \pm 0.4$ & 0.91 \\ &&Registration & \textcolor{light-gray}{$1.17 \pm 0.3$} & \textcolor{light-gray}{1.15} & $1.31 \pm 0.6~$ & 1.13 & \textcolor{light-gray}{$2.17 \pm 1.0$} & \textcolor{light-gray}{1.96} & \textcolor{light-gray}{$2.63 \pm 2.0^{*}$} & \textcolor{light-gray}{1.95} \\ [1mm] &\multirow{2}{*}{DWA}&Segmentation & $1.12 \pm 0.3^{*\dagger}$ & 1.04 & \textcolor{light-gray}{$1.74 \pm 2.0~$} & \textcolor{light-gray}{1.13} & $1.99 \pm 0.9^{*}$ & 1.77 & $1.00 \pm 0.4~$ & 0.85 \\ &&Registration & \textcolor{light-gray}{$1.14 \pm 0.3$} & \textcolor{light-gray}{1.14} & $1.27 \pm 0.6~$ & {1.07} & \textcolor{light-gray}{$2.24 \pm 1.1^{*}$} & \textcolor{light-gray}{1.97} & \textcolor{light-gray}{$2.72 \pm 1.9$} & \textcolor{light-gray}{2.13} \\ \hline \multirow{6}{*}{ SEDD }&\multirow{2}{*}{Equal}&Segmentation & \textcolor{light-gray}{$1.47 \pm 0.6^{*\dagger}$} & \textcolor{light-gray}{1.31} & \textcolor{light-gray}{$2.81 \pm 4.6$} & \textcolor{light-gray}{1.34} & $1.97 \pm 1.0$ & 1.59 & $1.21 \pm 1.0$ & 0.94 \\
&&Registration & \textcolor{light-gray}{$1.28 \pm 0.4^{*}$} & \textcolor{light-gray}{1.19} & \textcolor{light-gray}{$1.50 \pm 0.9^{*}$} & \textcolor{light-gray}{1.26} & \textcolor{light-gray}{$2.26 \pm 1.1^{*}$} & \textcolor{light-gray}{1.94} & \textcolor{light-gray}{$2.61 \pm 2.1^{*}$} & \textcolor{light-gray}{1.83} \\ [1mm] &\multirow{2}{*}{Homoscedastic}&Segmentation & $1.15 \pm 0.3^{\dagger}$ & 1.14 & \textcolor{light-gray}{$1.47 \pm 1.0$} & \textcolor{light-gray}{1.22} & $2.12 \pm 1.1$ & 1.91 & $0.99 \pm 0.2$ & 0.94 \\ &&Registration & \textcolor{light-gray}{$1.19 \pm 0.3$} & \textcolor{light-gray}{1.21} & $1.23 \pm 0.5~$ & 1.13 & \textcolor{light-gray}{$2.15 \pm 1.0$} & \textcolor{light-gray}{1.92} & \textcolor{light-gray}{$2.31 \pm 2.0$} & \textcolor{light-gray}{1.64} \\ [1mm] &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$1.22 \pm 0.3^{*\dagger}$} & 1.18 & \textcolor{light-gray}{$1.44 \pm 0.8$} & \textcolor{light-gray}{1.21} & $2.12 \pm 1.4$ & 1.73 & $1.10 \pm 0.6$ & 0.93 \\ &&Registration & $1.22 \pm 0.3$ & \textcolor{light-gray}{1.22} & $1.32 \pm 0.6^{*}~$ & 1.10 & \textcolor{light-gray}{$2.30 \pm 1.1^{*}$} & \textcolor{light-gray}{2.01} & \textcolor{light-gray}{$2.86 \pm 1.9^{*}$} & \textcolor{light-gray}{2.41} \\ \hline \multirow{6}{*}{Cross-stitch} & \multirow{2}{*}{Equal}&Segmentation & ${1.06} \pm {0.3}^{*}~$ & {0.99} & $1.27 \pm 0.4~$ & \textcolor{light-gray}{1.15} & ${1.76} \pm {0.8}^{*}~$ & {1.47} & ${0.91} \pm {0.4}~$ & {0.82} \\ &&Registration & \textcolor{light-gray}{$1.10 \pm 0.3^{*}~$} & \textcolor{light-gray}{1.06} & \textcolor{light-gray}{$1.30 \pm 0.6~$} & 1.13 & \textcolor{light-gray}{$2.00 \pm 1.0^{*}~$} & \textcolor{light-gray}{1.75} & \textcolor{light-gray}{$2.45 \pm 2.1~$} & \textcolor{light-gray}{1.81} \\ [1mm] &\multirow{2}{*}{Homoscedastic}&Segmentation & \textcolor{light-gray}{$1.23 \pm 0.3^{\dagger}$} & \textcolor{light-gray}{1.16} & \textcolor{light-gray}{$1.51 \pm 1.2~$} & \textcolor{light-gray}{1.17} & \textcolor{light-gray}{$2.37 \pm 1.0$} & \textcolor{light-gray}{2.09} & $0.92 \pm 0.2~$ & 0.89 \\ &&Registration & \textcolor{light-gray}{$1.24 \pm 0.3$} & \textcolor{light-gray}{1.24} & $1.32 \pm 0.6~$ & 1.13 & $2.12 \pm 1.0$ & 1.89 & \textcolor{light-gray}{$2.45 \pm 1.9$} & \textcolor{light-gray}{1.97} \\ [1mm] &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$1.34 \pm 0.4^{*\dagger}$} & \textcolor{light-gray}{1.27} & \textcolor{light-gray}{$1.75 \pm 1.7$} & \textcolor{light-gray}{1.29} & \textcolor{light-gray}{$2.32 \pm 0.9^{\dagger}$} & \textcolor{light-gray}{2.11} & $1.17 \pm 0.8^{*}$ & 0.91 \\ &&Registration & $1.22 \pm 0.3$ & 1.19 & $1.27 \pm 0.6~$ & 1.09 & $2.21 \pm 1.0^{*}~$ & 2.00 & \textcolor{light-gray}{$2.93 \pm 2.3^{*}$} & \textcolor{light-gray}{2.27} \\ \hline \end{tabular} } \label{table:HMC_MSD_weighting} \end{table*} \begin{figure*}[t!] \begin{center} \includegraphics[width=1\textwidth]{figures/weight_analysis_mod.pdf} \caption{The evolution of the loss weights during training for different multi-task networks on the validation dataset (HMC).} \label{fig:loss-weights} \end{center} \end{figure*} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{figures/cross_stitch_mod.png} \caption{The evolution of the Cross-stitch unit weights during training using equal weights. CS\#1 and CS\#2 are placed in the down-sampling path, while CS\#3 and CS\#4 are placed in the upsampling path.
The solid lines represent the mean of the weights across the diagonal of the CS unit, while the dashed lines represent the mean of the off-diagonal weights. } \label{fig:CS-weights} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{figures/scatter_bladder_volume_diff_MSD_HMC_final.pdf} \caption{The effect of the bladder volume deviation from the planning volume on the performance of the Seg, Reg, and Cross-stitch networks for the validation set (HMC). } \label{fig:HMC_bladder_filling} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{figures/scatter_bladder_volume_diff_MSD_EMC_final.pdf} \caption{The effect of the bladder volume deviation from the planning volume on the performance of the STL and the Seg, Reg, and Cross-stitch networks for the independent test set (EMC). } \label{fig:EMC_bladder_filling} \end{center} \end{figure} \subsection{Optimization of the Network Inputs} During training, validation, and testing, we have access to the fixed image $\IFMath$\xspace, the moving image $\IMMath$\xspace, and the moving segmentation $\SMMath$\xspace. In Table \ref{table:network_input_MSD} we compared different sets of inputs on the validation dataset. This experiment helps to better understand how the networks interpret and utilize these inputs, and how this is reflected in the network outcome as measured by the MSD metric. For this experiment we used equal loss weights for the MTL networks. Feeding $\SMMath$\xspace to the segmentation network improves the results substantially compared to only feeding $\IFMath$\xspace, especially for the seminal vesicles, while feeding $\IMMath$\xspace deteriorates the results. For the registration and JRS-reg networks, feeding $\SMMath$\xspace alongside $\IFMath$\xspace and $\IMMath$\xspace resulted in a similar performance compared to not feeding it. Since the Cross-stitch network is composed of two networks, one for segmentation and the other for registration, we experimented with various combinations of inputs. The results are consistent with our previous findings on the single-task networks regarding the effect of using $\SMMath$\xspace as an input. For the remainder of this paper, we chose to use $\IFMath$\xspace as input for the segmentation network, and $\IFMath$\xspace and $\IMMath$\xspace as inputs for the registration network. Although adding $\SMMath$\xspace proved beneficial, especially for the segmentation network, we exclude it here, since these two methods act as baselines and this is the standard setting for single-task networks. For the dense, SEDD, and JRS-reg networks, we select a concatenation of $\IMMath$\xspace, $\IFMath$\xspace, and $\SMMath$\xspace for the final network. For the Cross-stitch network, we select $\IFMath$\xspace for the segmentation network and the concatenation of $\IMMath$\xspace, $\IFMath$\xspace, and $\SMMath$\xspace for the registration network. \subsection{Optimization of the Loss Weighting Strategy} In this experiment we investigate the performance of the various loss weighting strategies introduced in Section \ref{loss_weighting} in order to select the best weighting method for the underlying tasks. Table \ref{table:HMC_MSD_weighting} shows the results of the different weighting strategies for the MTL networks in terms of MSD. For the JRS-reg network architecture, weighting the losses with homoscedastic uncertainty achieved comparable results to using equal weights, while DWA scored somewhat worse.
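To make these weighting strategies concrete, homoscedastic uncertainty weighting can be implemented as a small module with one learnable log-variance per task. The following is a minimal, illustrative PyTorch sketch rather than our exact implementation; the loss values are placeholders and task-dependent constant factors are omitted:

\begin{verbatim}
import torch
import torch.nn as nn

class HomoscedasticWeighting(nn.Module):
    """Combine task losses as sum_i exp(-s_i) * L_i + s_i,
    where s_i = log(sigma_i^2) is a learnable log-variance,
    so the loss weights are learned during training."""

    def __init__(self, num_tasks):
        super().__init__()
        # Initialized to zero, i.e., every task starts with weight 1.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = 0.0
        for s, loss in zip(self.log_vars, losses):
            total = total + torch.exp(-s) * loss + s
        return total

# Usage sketch with placeholder segmentation/registration losses.
weighting = HomoscedasticWeighting(num_tasks=2)
total_loss = weighting([torch.tensor(0.7), torch.tensor(1.2)])
\end{verbatim}

Equal weighting corresponds to fixing all weights to 1, whereas DWA instead adapts the weights based on the rate of change of each task's loss over the preceding epochs.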
For the dense and SEDD architectures, homoscedastic weighting achieved a slightly better performance, while equal weights were best for the Cross-stitch network. For these architectures (dense, SEDD, and Cross-stitch), the segmentation output path showed an improvement over the registration output path. Figure \ref{fig:loss-weights} illustrates the evolution of the loss weights $w_i$ during training, for the different multi-task network architectures and weighting strategies. Based on these findings, for the remainder of this paper we chose the homoscedastic uncertainty weighting strategy for the JRS-reg, dense and SEDD networks, while using equal weights for the Cross-stitch network. \subsection{Analysis of Cross-stitch units} Analyzing the behavior of the Cross-stitch units during training facilitates the understanding of how the segmentation and registration networks interact in the MTL setting. Figure \ref{fig:CS-weights} shows the mean of the CS units across the diagonal and off-diagonal (see Equation (\ref{eq:cs})). Higher weights on the diagonal mean that the network tends to separate the task-specific feature maps, while higher off-diagonal weights mean that the network tends to share the corresponding feature maps. \subsection{Effect of the bladder filling} For the HMC dataset, which was used for training and validation, a bladder filling protocol was in place, meaning that the deformation of the bladder between the daily and planning scans is not large. However, this is not the case for the EMC dataset, the test set. Figures \ref{fig:HMC_bladder_filling} and \ref{fig:EMC_bladder_filling} illustrate the effect of the bladder volume variation with respect to the planning scan on the performance of the Seg, Reg, and Cross-stitch networks. The Cross-stitch network is resilient to bladder filling for both the HMC and EMC datasets. \subsection{Evaluation of the Quality of the DVF} The smoothness of the predicted DVF is an important criterion for evaluating the predicted deformation field. Table \ref{table:dvf-table} shows a detailed analysis of the DVF in terms of the standard deviation of the determinant of the Jacobian as well as the folding fraction for the registration path of the different networks. \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[]{MSD (mm) values for the different networks on the validation set (HMC).
Lower values are better.} \resizebox{\textwidth}{!}{ \begin{tabular}{lllclclclc} &&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{1}{*}{ Seg }&Segmentation & $1.49 \pm 0.3$ & 1.49 & $2.50 \pm 2.6$ & 2.09 & $3.39 \pm 2.2$ & 2.73 & $1.60 \pm 1.1$ & 1.13 \\ \hline \multirow{1}{*}{ Reg }&Registration & $1.43 \pm 0.8$ & 1.29 & $1.71 \pm 1.4$ & 1.37 & $2.44 \pm 1.1$ & 2.17 & $3.40 \pm 2.3$ & 2.71 \\ \hline \multirow{1}{*}{ JRS-reg }&Registration & $1.20 \pm 0.3$ & \textcolor{light-gray}{1.20} & ${1.22} \pm {0.5}~$ & {1.07} & $2.05 \pm 1.0$ & 1.81 & $2.34 \pm 2.2$ & 1.60 \\ \hline \multirow{2}{*}{ Dense }&Segmentation & $1.09 \pm 0.3$ & 1.04 & \textcolor{light-gray}{$1.51 \pm 1.2~$} & \textcolor{light-gray}{1.13} & $1.86 \pm 0.8~$ & 1.69 & $0.99 \pm 0.4$ & 0.91 \\ &Registration & \textcolor{light-gray}{$1.17 \pm 0.3$} & \textcolor{light-gray}{1.15} & $1.31 \pm 0.6~$ & 1.13 & \textcolor{light-gray}{$2.17 \pm 1.0$} & \textcolor{light-gray}{1.96} & \textcolor{light-gray}{$2.63 \pm 2.0$} & \textcolor{light-gray}{1.95} \\ \hline \multirow{2}{*}{ SEDD }&Segmentation & $1.15 \pm 0.3$ & 1.14 & \textcolor{light-gray}{$1.47 \pm 1.0$} & \textcolor{light-gray}{1.22} & $2.12 \pm 1.1$ & 1.91 & $0.99 \pm 0.2$ & 0.94 \\ &Registration & \textcolor{light-gray}{$1.19 \pm 0.3$} & \textcolor{light-gray}{1.21} & $1.23 \pm 0.5~$ & 1.13 & \textcolor{light-gray}{$2.15 \pm 1.0$} & \textcolor{light-gray}{1.92} & \textcolor{light-gray}{$2.31 \pm 2.0$} & \textcolor{light-gray}{1.64} \\ \hline \multirow{2}{*}{ Cross-stitch }&Segmentation & ${1.06} \pm {0.3}$ & {0.99} & $1.27 \pm 0.4~$ & \textcolor{light-gray}{1.15} & ${1.76} \pm {0.8}$ & {1.47} & ${0.91} \pm {0.4}~$ & {0.82} \\ &Registration & \textcolor{light-gray}{$1.10 \pm 0.3$} & \textcolor{light-gray}{1.06} & \textcolor{light-gray}{$1.30 \pm 0.6~$} & 1.13 & \textcolor{light-gray}{$2.00 \pm 1.0$} & \textcolor{light-gray}{1.75} & \textcolor{light-gray}{$2.45 \pm 2.1~$} & \textcolor{light-gray}{1.81} \\ \hline \multirow{1}{*}{ Elastix \cite{qiao2017fast}}&Registration & $1.73 \pm 0.7$ & 1.59 & $2.71 \pm 1.6$ & 2.45 & $3.69 \pm 1.2$ & 3.50 & $5.26 \pm 2.6$ & 4.72 \\ \hline \multirow{1}{*}{ Hybrid \cite{MedPhys}}&Registration & $1.27 \pm 0.3$ & 1.25 & $1.47 \pm 0.5$ & 1.32 & $2.03 \pm 0.6$ & 1.85 & $1.75 \pm 1.0$ & 1.26 \\ \hline \multirow{1}{*}{ JRS-GAN \cite{JrsGan}}&Registration & $1.14 \pm 0.3$ & 1.04 & $1.75 \pm 1.3$ & 1.44 & $2.17 \pm 1.1$ & 1.89 & $2.25 \pm 1.9$ & 1.54 \\ \hline \end{tabular} } \label{table:HMC_MSD} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[Table caption text]{MSD (mm) values for the different networks on the independent test set (EMC). Lower values are better. 
} \resizebox{\textwidth}{!}{ \begin{tabular}{lllclclclc} &&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{1}{*}{ Seg }&Segmentation & $3.18 \pm 1.8$ & 2.57 & $9.33 \pm 10.1$ & 5.82 & $5.79 \pm 3.4$ & 5.18 & $1.88 \pm 1.5~$ & 1.50 \\ \hline \multirow{1}{*}{ Reg }&Registration & $2.01 \pm 2.5$ & 1.18 & $2.86 \pm 5.2$ & 1.18 & $2.89 \pm 2.5$ & 2.23 & $5.98 \pm 4.7$ & 4.44 \\ \hline \multirow{1}{*}{ JRS-reg }&Registration & $1.94 \pm 2.6$ & 1.16 & $2.48 \pm 4.8~$ & 1.01 & \textcolor{light-gray}{$2.67 \pm 2.4$} & 2.05 & $4.80 \pm 4.6$ & 2.12 \\ \hline \multirow{2}{*}{ Dense }&Segmentation & \textcolor{light-gray}{$2.01 \pm 2.6$} & 1.15 & \textcolor{light-gray}{$4.08 \pm 7.2$} & \textcolor{light-gray}{1.23} & \textcolor{light-gray}{$3.70 \pm 5.4~$} & 2.03 & $2.75 \pm 3.1$ & 1.23 \\ &Registration & $1.93 \pm 2.5$ & \textcolor{light-gray}{1.15} & $2.53 \pm 4.7~$ & 1.01 & $2.67 \pm 2.3$ & \textcolor{light-gray}{2.13} & \textcolor{light-gray}{$5.08 \pm 4.4$} & \textcolor{light-gray}{3.01} \\ \hline \multirow{2}{*}{ SEDD }&Segmentation & \textcolor{light-gray}{$1.99 \pm 2.4$} & \textcolor{light-gray}{1.24} & \textcolor{light-gray}{$6.26 \pm 8.9$} & \textcolor{light-gray}{3.01} & \textcolor{light-gray}{$4.21 \pm 4.9~$} & 2.12 & $2.43 \pm 2.9$ & 1.04 \\ &Registration & $1.92 \pm 2.5$ & 1.19 & $2.43 \pm 4.5~$ & 1.07 & $2.72 \pm 2.4$ & \textcolor{light-gray}{2.17} & \textcolor{light-gray}{$4.86 \pm 4.4$} & \textcolor{light-gray}{2.22} \\ \hline \multirow{2}{*}{ Cross-stitch }&Segmentation & $1.88 \pm 1.9~$ & \textcolor{light-gray}{1.30} & \textcolor{light-gray}{$2.76 \pm 3.5~$} & \textcolor{light-gray}{1.28} & \textcolor{light-gray}{$4.87 \pm 6.8~$} & \textcolor{light-gray}{2.49} & $1.66 \pm 1.7$ & 0.85 \\ &Registration & \textcolor{light-gray}{$1.91 \pm 2.3$} & 1.23 & $2.41 \pm 4.5~$ & {0.95} & $2.78 \pm 2.4$ & 2.16 & \textcolor{light-gray}{$4.90 \pm 4.0$} & \textcolor{light-gray}{2.84} \\ \hline \multirow{1}{*}{ Elastix~\cite{qiao2017fast} }&Registration & ${1.42} \pm {0.7}~$ & 1.17 & $2.07 \pm 2.6$ & 1.24 & $3.20 \pm 1.6$ & 3.07 & $5.30 \pm 5.1$ & 3.27 \\ \hline \multirow{1}{*}{ Hybrid~\cite{MedPhys} }&Registration & $1.55 \pm 0.6$ & 1.36 & ${1.65} \pm {1.3}~$ & 1.22 & $2.65 \pm 1.6~$ & 2.36 & $3.81 \pm 3.6$ & 2.26 \\ \hline \end{tabular} } \label{table:EMC_MSD} \end{table*} \begin{table*}[h] \centering \caption{Analysis of the determinant of the Jacobian for the validation and the independent test sets. Lower values are better. } \label{table:dvf-table} \begin{tabular}{lcccccc} &&\multicolumn{2}{c}{Validation set (HMC) }&&\multicolumn{2}{c}{Independent test set (EMC) } \\ \hline Network && Std. Jacobian & Folding fraction && Std. 
Jacobian & Folding fraction \\ \hline Reg &&$0.2935\pm0.1022$ & $0.0049\pm0.0039$ && $0.4129\pm0.2258$ & $0.0112\pm0.0115$ \\ \hline JRS-reg && $0.2543\pm0.0505$ & $0.0030\pm0.0014$ && $0.3148\pm0.1106$ & $0.0066\pm0.0062$ \\ \hline Dense && $0.2062\pm0.0431$ & $0.0018\pm0.0012$ && $0.2558\pm0.0899$ & $0.0036\pm0.0027$ \\ \hline SEDD && $0.2626\pm0.1167$ & $0.0019\pm0.0016$ && $0.4287\pm0.3000$ & $0.0066\pm0.0074$ \\ \hline Cross-stitch && $0.2241\pm0.0784$ & $0.0024\pm0.0018$ && $0.3301\pm0.1869$ & $0.0071\pm0.0070$ \\ \hline \end{tabular} \end{table*} \subsection{Comparison against the state-of-the-art} Tables \ref{table:HMC_MSD} and \ref{table:EMC_MSD} show the results for the validation set (HMC) and the test set (EMC), respectively. The first two networks in each table are single-task networks. For both sets, the registration network outperformed the segmentation network for all organs except the bladder. The mean MSD values for the independent test set are higher than the corresponding values for the validation set for most organs; however, the median values are on par. For the MTL networks, the segmentation path achieved a better performance than the registration path on both datasets, except for the seminal vesicles. The Cross-stitch network achieved the best results compared to the other MTL networks. The proposed STL and MTL networks were compared against other state-of-the-art methods that were evaluated using the HMC dataset. For the validation set, the STL networks achieved comparable results, while the Cross-stitch network outperformed these methods for both output paths. On the test set, \textit{elastix} \cite{qiao2017fast} and the Hybrid method \cite{MedPhys} performed better except for the bladder, although the median values of the MTL networks were better. Regarding the quality of the predicted contours, Figures \ref{fig:HMC_examples} and \ref{fig:EMC_examples} show example contours from the HMC and EMC datasets for the Seg, Reg, and Cross-stitch networks. The examples show that the Cross-stitch network achieves better results compared to the Seg and Reg networks, especially for the seminal vesicles and for the rectum in cases with large gas pockets.
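As a reference for the DVF analysis in Table \ref{table:dvf-table}, both quality measures can be computed directly from a predicted displacement field. The following NumPy sketch is illustrative rather than our exact implementation, and it assumes that the displacements are expressed in the same units as the voxel spacing:

\begin{verbatim}
import numpy as np

def jacobian_stats(dvf, spacing=(1.0, 1.0, 1.0)):
    """Std. of the Jacobian determinant and folding fraction
    of a displacement field `dvf` of shape (3, D, H, W)."""
    # dudx[i][j]: derivative of component i along axis j.
    dudx = [np.gradient(dvf[i], *spacing) for i in range(3)]
    jac = np.empty(dvf.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            # Jacobian of the transformation: identity + du/dx.
            jac[..., i, j] = dudx[i][j] + (1.0 if i == j else 0.0)
    det = np.linalg.det(jac)  # determinant per voxel
    return det.std(), float((det <= 0).mean())

# Usage sketch with a random, physically meaningless field.
dvf = 0.1 * np.random.randn(3, 16, 16, 16)
std_jac, folding_fraction = jacobian_stats(dvf)
\end{verbatim}

Here, the folding fraction is the fraction of voxels with a non-positive Jacobian determinant, i.e., locations where the predicted transformation is not locally invertible.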
\begin{figure*}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c @{\quad} c @{\quad} || c @{\quad} c @{\quad} || c @{\quad} c} \large{Seg} & \large{Reg} & \large{Seg} & \large{Reg} & \large{Seg} & \large{Reg} \\ \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071211_seg_net.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071211_reg_net.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071213_seg_net_101.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071213_reg_net_101.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_22_visit_20071029_seg_net_124.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_22_visit_20071029_reg_net_124.png} \\ \large{Cross-stitch} & \large{Manual} & \large{Cross-stitch} & \large{Manual} & \large{Cross-stitch} & \large{Manual} \\ \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071211_cs_net.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071211_groundtruth.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071213_cs_net_101.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071213_groundtruth_101.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_22_visit_20071029_cs_net_124.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_22_visit_20071029_groundtruth_124.png} \end{tabular} } \caption{Example contours from the validation dataset (HMC) generated by the proposed STL and MTL networks. From left to right, the selected cases are the first, second, and third quartile in terms of the prostate MSD of the Cross-stitch network. The contours of the bladder, prostate, seminal vesicles, and rectum are colored in red, yellow, green, and blue, respectively.} \label{fig:HMC_examples} \end{figure*} \begin{figure*}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c @{\quad} c @{\quad} || c @{\quad} c @{\quad} || c @{\quad} c} \large{Seg} & \large{Reg} & \large{Seg} & \large{Reg} & \large{Seg} & \large{Reg} \\ \includegraphics[width=25mm,height=35mm]{figures/Patient_06_visit_19000101_seg_net_157.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_06_visit_19000101_reg_net_157.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_15_visit_19000101_seg_net_100.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_15_visit_19000101_reg_net_100.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_17_visit_18990305_seg_net_103.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_17_visit_18990305_reg_net_103.png} \\ \large{Cross-stitch} & \large{Manual} & \large{Cross-stitch} & \large{Manual} & \large{Cross-stitch} & \large{Manual}\\ \includegraphics[width=25mm,height=35mm]{figures/Patient_06_visit_19000101_cs_net_157.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_06_visit_19000101_groundtruth_157.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_15_visit_19000101_cs_net_100.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_15_visit_19000101_groundtruth_100.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_17_visit_18990305_cs_net_103.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_17_visit_18990305_groundtruth_103.png} \end{tabular} } \caption{Example contours from the independent test set (EMC) generated by the proposed STL and MTL networks. 
From left to right, the selected cases are the first, second, and third quartile in terms of the prostate MSD of the Cross-stitch network.} \label{fig:EMC_examples} \end{figure*} \section{Introduction} Medical image analysis aims to extract clinically useful information that aids the diagnosis, prognosis, monitoring, and treatment of diseases \cite{nilashi2020disease, shen2017deep}. Two of the most common tasks in such analyses are image registration and segmentation \cite{rueckert2014registration}. Image segmentation aims to identify and cluster objects that share similar characteristics into distinct labels, which can then be used for diagnosis or treatment planning. Image registration is the task of finding the geometrical correspondence between images that were acquired at different time steps or from different imaging modalities. These two tasks are complementary, as for example image atlases warped by image registration algorithms are often used for image segmentation \cite{huo20193d,wang2014multi}, while image contours can be used to guide the image registration method in addition to the intensity images \cite{MedPhys, JrsGan, mahapatra2015joint}. Contours are also used for evaluating the quality of the registration \cite{woerner2017evaluation, gu2013contour}. Therefore, coupling the image registration and segmentation tasks and modeling them in a single network could be beneficial. Adaptive image-guided radiotherapy is an exemplar application where the coupling of image registration and segmentation is vital. In radiotherapy, the radiation dose is delivered over a course of multiple treatment fractions. In an adaptive setting, re-imaging of the daily anatomy and automatic re-contouring are crucial to compensate for patient misalignment and for anatomical variations in organ shape and position, and they are an enabler for the reduction of treatment margins or robustness settings \cite{hansen2006repeat, brock2019adaptive}. These have an important influence on the accuracy of the dose delivery and improve the treatment quality, potentially reducing treatment-related side-effects and increasing quality-of-life after treatment \cite{sonke2019adaptive}. Automatic contouring can be done by direct segmentation of the daily scan, or by registration of the annotated planning scan with the daily scan followed by contour propagation. Image registration has the advantage of leveraging prior knowledge from the initial planning CT scan and the corresponding clinical-quality delineations, which may be especially helpful for challenging organs. On the other hand, image segmentation methods may better delineate organs that vary substantially in shape and volume between treatment fractions, which is often the case for the rectum and the bladder. In this study, we propose to fuse these tasks at the network architecture level as well as via the loss function. Our key contributions in this paper are as follows: \begin{enumerate} \item We formulate image registration and segmentation as a multi-task learning problem, which we explore in the context of adaptive image-guided radiotherapy. \item We explore different joint network architectures as well as loss weighting methods for merging these tasks. \item We adopt the cross-stitch network architecture for the segmentation and registration tasks and explore how these cross-stitch units facilitate information flow between the tasks. \item Furthermore, we compare MTL algorithms against single-task networks.
We demonstrate that MTL algorithms outperform STL networks for both the segmentation and registration tasks. To the best of our knowledge, this is the first study to investigate various MTL algorithms on an architectural level as well as on a loss weighting level for joint registration and segmentation tasks. \item We thoroughly investigate the internals of the STL and MTL networks and pinpoint the best strategy for merging this information to maximize the information flow between the two tasks. \end{enumerate} Initial results of this work were presented in \cite{beljaards2020cross}, focusing on the cross-stitch unit in a proposed joint architecture. In the current paper we extend this study to the architectural fusion of these tasks as well as different loss weighting mechanisms. Moreover, an extensive analysis of the different methodologies was performed, detailing the effect of architectural choices and the information flow between the two tasks. The remainder of this paper is organized as follows: Section \ref{method_Section} introduces the single-task networks, multi-task networks, and loss weighting approaches. In Section \ref{exp_results_section} we introduce the datasets and details about the implementation as well as the experiments. In Sections \ref{discussion} and \ref{conclusion}, we discuss our results, provide future research directions, and present our conclusions. \subsection{Related Work} In the last decade, researchers have been exploring the idea of fusing image segmentation and registration. Lu \emph{et al.} \cite{lu2011integrated} and Pohl \emph{et al.} \cite{pohl2006bayesian} proposed modeling these tasks using a Bayesian framework such that the tasks constrain each other. Yezzi \emph{et al.} \cite{yezzi2003variational} proposed to fuse these tasks using active contours, while Unal \emph{et al.} \cite{unal2005coupled} proposed to generalize the previous approach by using partial differential equations without shape priors. Mahapatra \emph{et al.} \cite{mahapatra2015joint} proposed a Joint Registration and Segmentation (JRS) framework for cardiac perfusion images, where the temporal intensity images are decomposed into sparse and low-rank components corresponding to the intensity change from the contrast agent and the motion, respectively. They proposed to use the sparse component for segmentation and the low-rank component for registration. However, most of the aforementioned methods require complex parameter tuning and yield long computation times. Recently, deep learning-based networks have shown unprecedented success in many fields, especially in the medical image analysis domain \cite{fu2020deep, yousefi2020esophageal, liu2019automatic, kiljunen2020deep, cao2018deep, leger2020cross, elmahdy2020patient, sokooti20193d}, where deep learning models perform on par with medical experts or even surpass them in some tasks \cite{tschandl2019comparison, ardila2019end, hu2019observational, maidens2018artificial, mak2019use}. Several deep learning-based approaches have been proposed for joint registration and segmentation. The joining mechanisms in the literature can be classified into two categories, namely joining via the loss function only, and joining via the architecture as well as the loss function. A selected exemplar method of the first approach is that of Hu \emph{et al.} \cite{hu2018label}, who proposed to join segmentation and registration via a multi-resolution Dice loss function.
Elmahdy \emph{et al.} \cite{MedPhys} proposed a framework that is a hybrid between learning-based and iterative approaches, where a CNN segments the bladder and feeds it to an iterative registration algorithm. The authors integrated domain-specific knowledge, such as air pocket inpainting and contrast clipping; moreover, they added an extra registration step in order to focus on the seminal vesicles and rectum. Elmahdy \emph{et al.} \cite{JrsGan} and Mahapatra \emph{et al.} \cite{mahapatra2018joint} proposed a GAN-based (Generative Adversarial Network) approach, where a generative network predicts the correspondence between a pair of images, while a discriminator network gives feedback on the quality of the deformed contours. Exemplar methods of the second category are Xu \emph{et al.} \cite{xu2019deepatlas}, who presented a framework that simultaneously trains a registration and a segmentation network. The authors proposed to jointly learn these tasks during training; however, the networks can be used independently at test time. This enables prediction of the registration output alone when the labels are not available at test time. Estienne \emph{et al.} \cite{estienne2019u} proposed to merge affine and deformable registration as well as segmentation in a 3D end-to-end CNN network. Recently, Liu \emph{et al.} \cite{liu2020jssr} proposed an end-to-end framework called JSSR that registers and segments multi-modal images. This framework is composed of three networks: a generator network that synthesizes the moving image to match the modality of the fixed image, a registration network that registers the synthesized image to the fixed image, and finally a segmentation network that segments the fixed, moving, and synthesized images. All the previous methods explored the idea of joining segmentation and registration; however, to the best of our knowledge, none of them explored how these tasks are best connected and how to optimize the information flow between them on both the loss and architectural levels. \section{Acknowledgment} The HMC dataset with contours was collected at Haukeland University Hospital, Bergen, Norway, and was provided to us by responsible oncologist Svein Inge Helle and physicist Liv Bolstad Hysing. The EMC dataset with contours was collected at Erasmus University Medical Center, Rotterdam, The Netherlands, and was provided to us by radiation therapist Luca Incrocci and physicist Mischa Hoogeman. They are gratefully acknowledged. \section{Appendix of the paper ``Joint Registration and Segmentation via Multi-Task Learning for Adaptive Radiotherapy of Prostate Cancer''} \vspace{0.5cm} In this appendix we provide detailed results for the proposed methods and the associated experiments in terms of DSC and 95\% HD. \vspace{0.5cm} \begin{table*}[h] \centering \setlength{\tabcolsep}{3pt} \caption[]{The effect of network input for the different architectures on the validation set (HMC) in terms of DSC. Higher values are better.
Here, $\oplus$ is the concatenation operation, and $\cdot \| \cdot$ represents the inputs to the segmentation network (left of $\|$) and the inputs to the registration network (right of $\|$).} \resizebox{\textwidth}{!}{ \begin{tabular}{lcllclclclc} &&&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Input & Output path&\multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{4}{*}{ Seg }&\multirow{1}{*}{ $ {I_f}$ }&& $0.84 \pm 0.03~$ & 0.84 & $0.60 \pm 0.14~$ & 0.62 & $0.75 \pm 0.10~$ & 0.77 & $0.90 \pm 0.07~$ & 0.93 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{S_m}$ }&& $0.85 \pm 0.05~$ & 0.86 & ${0.66} \pm {0.16}~$ & {0.72} & ${0.79} \pm {0.12}~$ & {0.82} & ${0.93} \pm {0.03}~$ & {0.94} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& $0.66 \pm 0.08~$ & 0.67 & $0.39 \pm 0.21~$ & 0.40 & $0.39 \pm 0.21~$ & 0.41 & $0.91 \pm 0.08~$ & 0.93 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& ${0.86} \pm {0.04}~$ & {0.87} & $0.64 \pm 0.16~$ & 0.70 & $0.78 \pm 0.08~$ & 0.78 & ${0.93} \pm {0.03}~$ & {0.94} \\ \hline \multirow{2}{*}{ Reg }&\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& ${0.85} \pm {0.06}~$ & {0.86} & ${0.62} \pm {0.18}~$ & {0.68} & ${0.79} \pm {0.08}~$ & {0.81} & ${0.82} \pm {0.10}~$ & {0.84} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& $0.82 \pm 0.08~$ & 0.83 & $0.60 \pm 0.17~$ & 0.65 & $0.77 \pm 0.08~$ & 0.80 & $0.79 \pm 0.13~$ & 0.83 \\ \hline \multirow{2}{*}{ JRS-reg }&\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& ${0.87} \pm {0.04}~$ & {0.87} & ${0.68} \pm {0.14}~$ & {0.72} & $0.82 \pm 0.06~$ & {0.84} & ${0.87} \pm {0.08}~$ & {0.91} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& ${0.87} \pm {0.04}~$ & {0.87} & $0.67 \pm 0.15~$ & {0.72} & ${0.83} \pm {0.06}~$ & {0.84} & ${0.87} \pm {0.08}~$ & {0.91} \\ \hline \multirow{8}{*}{ Cross-stitch }&\multirow{2}{*}{ $ {I_f} \: || \: {I_f}\oplus{I_m}$ }&Segmentation & \textcolor{light-gray}{$0.85 \pm 0.03$} & \textcolor{light-gray}{0.85} & \textcolor{light-gray}{$0.57 \pm 0.19$} & \textcolor{light-gray}{0.60} & \textcolor{light-gray}{$0.81 \pm 0.08$} & \textcolor{light-gray}{0.83} & $0.93 \pm 0.05$ & 0.94 \\ &&Registration & $0.87 \pm 0.03$ & 0.88 & $0.67 \pm 0.15$ & 0.70 & $0.82 \pm 0.06$ & 0.84 & \textcolor{light-gray}{$0.87 \pm 0.08$} & \textcolor{light-gray}{0.91} \\ [1mm] &\multirow{2}{*}{ $ {I_f} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & ${0.88} \pm {0.04}~$ & 0.88 & ${0.70} \pm {0.11}~$ & {0.74} & ${0.86} \pm {0.05}~$ & {0.88} & ${0.94} \pm {0.02}~$ & {0.95} \\ &&Registration & \textcolor{light-gray}{$0.87 \pm 0.03~$} & 0.88 & \textcolor{light-gray}{$0.68 \pm 0.15~$} & \textcolor{light-gray}{0.73} & \textcolor{light-gray}{$0.84 \pm 0.05~$} & \textcolor{light-gray}{0.85} & \textcolor{light-gray}{$0.88 \pm 0.08~$} & \textcolor{light-gray}{0.91} \\ [1mm] &\multirow{2}{*}{ $ {I_f}\oplus{S_m} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & \textcolor{light-gray}{$0.77 \pm 0.11$} & \textcolor{light-gray}{0.79} & \textcolor{light-gray}{$0.52 \pm 0.19$} & \textcolor{light-gray}{0.57} & $0.80 \pm 0.05$ & \textcolor{light-gray}{0.80} & $0.93 \pm 0.03$ & 0.94 \\ &&Registration & $0.85 \pm 0.04$ & 0.85 & $0.66 \pm 0.14$ & 0.72 & $0.80 \pm 0.06$ & 0.82 & \textcolor{light-gray}{$0.87 \pm 0.08$} & \textcolor{light-gray}{0.90} \\ [1mm] &\multirow{2}{*}{ $ 
{I_f}\oplus{I_m}\oplus{S_m} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & ${0.88} \pm {0.04}~$ & {0.89} & $0.67 \pm 0.15$ & 0.72 & $0.85 \pm 0.05~$ & 0.86 & ${0.94} \pm {0.03}~$ & {0.95} \\ &&Registration & \textcolor{light-gray}{$0.86 \pm 0.04$} & \textcolor{light-gray}{0.87} & $0.67 \pm 0.16$ & 0.72 & \textcolor{light-gray}{$0.83 \pm 0.06$} & \textcolor{light-gray}{0.84} & \textcolor{light-gray}{$0.88 \pm 0.08$} & \textcolor{light-gray}{0.91} \\ \hline \end{tabular} } \label{table:_DSC} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[Table caption text]{ The effect of network input for the different architectures on the validation set (HMC) in terms of \%95 HD (mm). Lower values are better. Here, $\oplus$ is the concatenation operation, and $\cdot \| \cdot$ represents the inputs to the segmentation network (left of $\|$) and the inputs to the registration network (right of $\|$).} \resizebox{\textwidth}{!}{ \begin{tabular}{lcllclclclc} &&&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Input & Output path&\multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{4}{*}{ Seg }&\multirow{1}{*}{ $ {I_f}$ }&& $4.4 \pm 1.0~$ & 4.4 & $8.6 \pm 8.6~$ & 7.3 & $16.5 \pm 11.0~$ & 13.3 & $6.9 \pm 6.6~$ & 4.0 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{S_m}$ }&& $3.9 \pm 1.4~$ & {3.6} & ${5.9} \pm {5.9}~$ & {4.1} & $12.1 \pm 9.7~$ & {8.9} & $4.3 \pm 3.2~$ & {3.0} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& $9.1 \pm 2.3~$ & 8.7 & $14.9 \pm 10.5~$ & 11.7 & $45.1 \pm 17.3~$ & 41.8 & $5.3 \pm 5.6~$ & 3.6 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& ${3.8} \pm {1.1}~$ & {3.6} & $7.3 \pm 9.2~$ & 4.2 & ${11.5} \pm {6.7}~$ & 9.6 & ${3.3} \pm {1.5}~$ & {3.0} \\ \hline \multirow{2}{*}{ Reg }&\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& ${5.5} \pm {4.5}~$ & {4.0} & ${5.6} \pm {4.1}~$ & {4.3} & ${11.0} \pm {6.4}~$ & 9.4 & ${15.7} \pm {9.6}~$ & {12.1} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& $7.7 \pm 6.3~$ & 5.5 & $6.2 \pm 4.2~$ & 4.8 & $11.6 \pm 6.8~$ & {9.2} & $17.0 \pm 9.5~$ & 14.7 \\ \hline \multirow{2}{*}{ JRS-reg }&\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& ${3.6} \pm {1.3}~$ & {3.0} & $4.5 \pm 3.0~$ & {3.3} & ${9.6} \pm {5.7}~$ & 8.2 & ${13.1} \pm {10.1}~$ & {9.4} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& ${3.6} \pm {1.9}~$ & 3.1 & ${4.4} \pm {2.8}~$ & 3.7 & $9.8 \pm 5.9~$ & {8.1} & $13.4 \pm 10.7~$ & 10.6 \\ \hline \multirow{8}{*}{ Cross-stitch }&\multirow{2}{*}{ $ {I_f} \: || \: {I_f}\oplus{I_m}$ }&Segmentation & \textcolor{light-gray}{$5.1 \pm 2.3$} & \textcolor{light-gray}{4.4} & \textcolor{light-gray}{$9.5 \pm 9.6$} & \textcolor{light-gray}{6.1} & \textcolor{light-gray}{$17.2 \pm 14.0$} & \textcolor{light-gray}{12.6} & $5.0 \pm 6.6$ & 3.0 \\ &&Registration & $3.3 \pm 0.9$ & {3.0} & $4.7 \pm 3.0~$ & 3.7 & $10.1 \pm 6.3~$ & 9.0 & \textcolor{light-gray}{$12.6 \pm 10.0$} & \textcolor{light-gray}{9.4} \\ [1mm] &\multirow{2}{*}{ $ {I_f} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & ${3.0} \pm {1.0}~$ & {3.0} & ${4.3} \pm {1.7}~$ & \textcolor{light-gray}{3.9} & ${9.5} \pm {6.2}~$ & {7.2} & ${3.3} \pm {2.9}~$ & {2.3} \\ &&Registration & \textcolor{light-gray}{$3.2 \pm 0.9~$} & {3.0} & \textcolor{light-gray}{$4.5 \pm 3.3~$} & 3.6 & \textcolor{light-gray}{$9.8 \pm 
6.3~$} & \textcolor{light-gray}{8.6} & \textcolor{light-gray}{$12.2 \pm 10.1~$} & \textcolor{light-gray}{9.7} \\ [1mm] &\multirow{2}{*}{ $ {I_f}\oplus{S_m} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & \textcolor{light-gray}{$5.8 \pm 2.0$} & \textcolor{light-gray}{5.9} & \textcolor{light-gray}{$11.0 \pm 13.4$} & \textcolor{light-gray}{5.8} & $10.2 \pm 4.9~$ & 8.5 & $4.5 \pm 4.3$ & 3.0 \\ &&Registration & $4.4 \pm 1.6$ & 4.1 & $4.5 \pm 3.3~$ & 3.6 & $10.2 \pm 5.7$ & \textcolor{light-gray}{9.3} & \textcolor{light-gray}{$12.9 \pm 9.3$} & \textcolor{light-gray}{11.1} \\ [1mm] &\multirow{2}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & $3.1 \pm 1.0~$ & {3.0} & \textcolor{light-gray}{$5.4 \pm 5.4$} & \textcolor{light-gray}{4.4} & $9.7 \pm 5.6~$ & 8.9 & $4.2 \pm 5.6$ & 2.6 \\ &&Registration & \textcolor{light-gray}{$3.5 \pm 1.2$} & \textcolor{light-gray}{3.2} & $4.4 \pm 3.1~$ & {3.4} & \textcolor{light-gray}{$10.2 \pm 6.3~$} & \textcolor{light-gray}{9.1} & \textcolor{light-gray}{$12.5 \pm 10.6$} & \textcolor{light-gray}{8.7} \\ \hline \end{tabular} } \label{table:_HD} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[]{DSC values for the different networks and loss weighting methods for the HMC dataset. Higher values are better. } \resizebox{\textwidth}{!}{ \begin{tabular}{llllclclclc} &&&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Weight & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{3}{*}{ JRS-reg }&\multirow{1}{*}{Equal}&Registration & ${0.84} \pm {0.16}~$ & {0.89} & {$0.67 \pm 0.25~$} & 0.79 & $0.76 \pm 0.14$ & {0.79} & {$0.79 \pm 0.17$} & {0.88} \\ &\multirow{1}{*}{Homoscedastic}&Registration & ${0.84} \pm {0.16}~$ & {0.89} & ${0.68} \pm {0.25}~$ & {0.77} & $0.76 \pm 0.15$ & 0.80 & $0.80 \pm 0.18$ & 0.89 \\ &\multirow{1}{*}{DWA}&Registration & {$0.83 \pm 0.16$} & {0.88} & {$0.66 \pm 0.25$} & 0.78 & {$0.74 \pm 0.15$} & {0.79} & {$0.76 \pm 0.18$} & {0.84} \\ \hline \multirow{6}{*}{ Dense }&\multirow{2}{*}{Equal}&Segmentation & $0.83 \pm 0.15$ & 0.88 & \textcolor{light-gray}{$0.55 \pm 0.29$} & \textcolor{light-gray}{0.65} & $0.78 \pm 0.16~$ & 0.81 & $0.88 \pm 0.11~$ & 0.93 \\ &&Registration & \textcolor{light-gray}{$0.83 \pm 0.16$} & \textcolor{light-gray}{0.88} & $0.66 \pm 0.25$ & 0.75 & \textcolor{light-gray}{$0.76 \pm 0.15$} & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.79 \pm 0.16$} & \textcolor{light-gray}{0.87} \\ &\multirow{2}{*}{Homoscedastic}&Segmentation & ${0.84} \pm {0.16}~$ & {0.89} & \textcolor{light-gray}{$0.63 \pm 0.27$} & \textcolor{light-gray}{0.75} & ${0.79} \pm {0.16}~$ & 0.82 & $0.87 \pm 0.13~$ & 0.93 \\ &&Registration & ${0.84} \pm {0.16}~$ & \textcolor{light-gray}{0.88} & ${0.68} \pm {0.25}~$ & 0.78 & \textcolor{light-gray}{$0.77 \pm 0.14$} & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.78 \pm 0.17$} & \textcolor{light-gray}{0.86} \\ &\multirow{2}{*}{DWA}&Segmentation & ${0.84} \pm {0.15}~$ & {0.89} & \textcolor{light-gray}{$0.58 \pm 0.28$} & \textcolor{light-gray}{0.67} & ${0.79} \pm {0.15}~$ & {0.83} & $0.88 \pm 0.12~$ & 0.93 \\ &&Registration & ${0.84} \pm {0.16}~$ & {0.89} & $0.67 \pm 0.24$ & 0.76 & \textcolor{light-gray}{$0.76 \pm 0.15$} & \textcolor{light-gray}{0.79} & 
\textcolor{light-gray}{$0.79 \pm 0.16$} & \textcolor{light-gray}{0.87} \\ \hline \multirow{6}{*}{ SEDD }&\multirow{2}{*}{Equal}&Segmentation & \textcolor{light-gray}{$0.79 \pm 0.16$} & \textcolor{light-gray}{0.85} & \textcolor{light-gray}{$0.46 \pm 0.28$} & \textcolor{light-gray}{0.53} & $0.77 \pm 0.14$ & 0.80 & $0.85 \pm 0.12$ & 0.91 \\ &&Registration & {$0.82 \pm 0.16$} & \textcolor{light-gray}{0.87} & $0.66 \pm 0.26$ & 0.78 & \textcolor{light-gray}{$0.75 \pm 0.15$} & \textcolor{light-gray}{0.79} & \textcolor{light-gray}{$0.78 \pm 0.16$} & \textcolor{light-gray}{0.86} \\ &\multirow{2}{*}{Homoscedastic}&Segmentation & ${0.84} \pm {0.15}~$ & {0.89} & \textcolor{light-gray}{$0.50 \pm 0.28$} & \textcolor{light-gray}{0.58} & $0.76 \pm 0.18~$ & 0.82 & $0.88 \pm 0.13~$ & 0.94 \\ &&Registration & ${0.84} \pm {0.16}~$ & \textcolor{light-gray}{0.88} & ${0.68} \pm {0.24}~$ & 0.78 & \textcolor{light-gray}{$0.76 \pm 0.15$} & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.79 \pm 0.17$} & \textcolor{light-gray}{0.88} \\ &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$0.83 \pm 0.14$} & 0.88 & \textcolor{light-gray}{$0.62 \pm 0.27$} & \textcolor{light-gray}{0.74} & $0.78 \pm 0.16~$ & {0.83} & $0.87 \pm 0.14~$ & 0.94 \\ &&Registration& ${0.84} \pm {0.15}$ & 0.88 & $0.67 \pm 0.24~$ & 0.78 & \textcolor{light-gray}{$0.75 \pm 0.15$} & \textcolor{light-gray}{0.79} & \textcolor{light-gray}{$0.78 \pm 0.18$} & \textcolor{light-gray}{0.86} \\ \hline \multirow{6}{*}{ Cross-stitch }&\multirow{2}{*}{Equal}&Segmentation & ${0.84} \pm {0.14}~$ & {0.89} & \textcolor{light-gray}{$0.61 \pm 0.27~$} & \textcolor{light-gray}{0.73} & $0.78 \pm 0.14~$ & 0.81 & $0.88 \pm 0.10~$ & 0.93 \\ &&Registration & ${0.84} \pm {0.15}~$ & {0.89} & ${0.68} \pm {0.24}~$ & {0.80} & \textcolor{light-gray}{$0.77 \pm 0.15~$} & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.80 \pm 0.16~$} & \textcolor{light-gray}{0.87} \\ &\multirow{2}{*}{Homoscedastic}&Segmentation & ${0.84} \pm {0.13}~$ & \textcolor{light-gray}{0.87} & \textcolor{light-gray}{$0.65 \pm 0.24~$} & \textcolor{light-gray}{0.76} & \textcolor{light-gray}{$0.74 \pm 0.18$} & 0.80 & ${0.92} \pm {0.08}$ & {0.95} \\ &&Registration & ${0.84} \pm {0.15}~$ & {0.89} & ${0.68} \pm {0.24}~$ & 0.79 & $0.75 \pm 0.15$ & \textcolor{light-gray}{0.79} & \textcolor{light-gray}{$0.80 \pm 0.17$} & \textcolor{light-gray}{0.87} \\ &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$0.82 \pm 0.14$} & \textcolor{light-gray}{0.86} & \textcolor{light-gray}{$0.66 \pm 0.24~$} & \textcolor{light-gray}{0.76} & $0.75 \pm 0.18$ & 0.79 & ${0.92} \pm {0.08}$ & {0.95} \\ &&Registration & ${0.84} \pm {0.15}~$ & {0.89} & ${0.68} \pm {0.23}~$ & 0.79 & $0.75 \pm 0.15$ & \textcolor{light-gray}{0.78} & \textcolor{light-gray}{$0.77 \pm 0.17$} & \textcolor{light-gray}{0.83} \\ \hline \end{tabular} } \label{table:_DSC} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[]{\%95 HD (mm) values for the different networks and loss weighting methods for the HMC dataset. Lower values are better. 
} \resizebox{\textwidth}{!}{ \begin{tabular}{llllclclclc} &&&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Weight & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{3}{*}{ JRS-reg }&\multirow{1}{*}{Equal}&Registration & $5.2 \pm 5.7~$ & 3.2 & {$6.5 \pm 7.1$} & {4.0} & ${12.6} \pm {6.7}~$ & {12.0} & {$20.3 \pm 14.0$} & {18.6} \\ &\multirow{1}{*}{Homoscedastic}&Registration & {$5.7 \pm 5.9$} & {3.7} & $6.2 \pm 7.1~$ & 3.6 & {$13.0 \pm 7.3~$} & {11.5} & $18.5 \pm 14.0$ & 13.0 \\ &\multirow{1}{*}{DWA}&Registration & $5.7 \pm 5.9$ & 3.5 & {$6.4 \pm 6.8$} & {3.7} & {$13.2 \pm 7.3$} & {12.2} & {$20.0 \pm 13.2$} & {17.6} \\ \hline \multirow{6}{*}{ Dense }&\multirow{2}{*}{Equal}&Segmentation & \textcolor{light-gray}{$5.7 \pm 5.4$} & \textcolor{light-gray}{4.1} & \textcolor{light-gray}{$14.4 \pm 17.2$} & \textcolor{light-gray}{6.8} & \textcolor{light-gray}{$16.8 \pm 12.6~$} & \textcolor{light-gray}{13.6} & $10.9 \pm 10.9$ & 5.5 \\ &&Registration & $5.6 \pm 5.6$ & \textcolor{light-gray}{4.0} & $6.6 \pm 7.8$ & 4.0 & $13.1 \pm 6.7$ & 13.0 & \textcolor{light-gray}{$19.6 \pm 12.0$} & \textcolor{light-gray}{17.4} \\ &\multirow{2}{*}{Homoscedastic}&Segmentation & \textcolor{light-gray}{$5.8 \pm 5.9$} & \textcolor{light-gray}{3.3} & \textcolor{light-gray}{$10.0 \pm 11.6$} & \textcolor{light-gray}{5.1} & \textcolor{light-gray}{$17.1 \pm 16.6~$} & \textcolor{light-gray}{13.8} & $11.4 \pm 11.3~$ & 5.9 \\ &&Registration & $5.3 \pm 5.7$ & {3.0} & $6.4 \pm 6.8$ & {3.2} & $13.0 \pm 6.5$ & 12.6 & \textcolor{light-gray}{$19.2 \pm 13.7$} & \textcolor{light-gray}{14.2} \\ &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$5.4 \pm 5.5$} & \textcolor{light-gray}{3.6} & \textcolor{light-gray}{$12.7 \pm 17.0$} & \textcolor{light-gray}{5.9} & \textcolor{light-gray}{$16.2 \pm 12.5$} & \textcolor{light-gray}{14.4} & $10.8 \pm 10.7$ & 6.2 \\ &&Registration & $5.3 \pm 5.6$ & 3.5 & ${6.0} \pm {6.6}~$ & 3.3 & $13.1 \pm 7.2$ & 13.0 & \textcolor{light-gray}{$19.4 \pm 11.9$} & \textcolor{light-gray}{17.4} \\ \hline \multirow{6}{*}{ SEDD }&\multirow{2}{*}{Equal}&Segmentation & \textcolor{light-gray}{$8.5 \pm 7.1$} & \textcolor{light-gray}{6.0} & \textcolor{light-gray}{$18.9 \pm 19.5$} & \textcolor{light-gray}{8.6} & \textcolor{light-gray}{$16.7 \pm 11.9$} & \textcolor{light-gray}{14.7} & $12.7 \pm 11.0$ & 8.5 \\ &&Registration & $5.6 \pm 5.8$ & 3.6 & $6.7 \pm 7.2$ & 4.1 & $13.3 \pm 7.0$ & 12.0 & \textcolor{light-gray}{$19.0 \pm 12.7$} & \textcolor{light-gray}{15.2} \\ &\multirow{2}{*}{Homoscedastic}&Segmentation & \textcolor{light-gray}{$5.7 \pm 5.5$} & \textcolor{light-gray}{3.9} & \textcolor{light-gray}{$16.0 \pm 16.3$} & \textcolor{light-gray}{10.6} & \textcolor{light-gray}{$18.8 \pm 16.5~$} & \textcolor{light-gray}{15.3} & $9.4 \pm 9.9~$ & 4.1 \\ &&Registration & $5.5 \pm 5.6$ & 3.3 & $6.3 \pm 6.7~$ & 3.6 & $13.3 \pm 7.3$ & 13.0 & \textcolor{light-gray}{$18.8 \pm 13.5$} & \textcolor{light-gray}{14.6} \\ &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$6.2 \pm 5.4$} & \textcolor{light-gray}{4.4} & \textcolor{light-gray}{$11.5 \pm 14.0$} & \textcolor{light-gray}{5.0} & \textcolor{light-gray}{$16.8 \pm 14.4~$} & \textcolor{light-gray}{13.0} & $9.5 \pm 10.8~$ & 4.4 \\ &&Registration & $5.8 \pm 5.7$ & 4.0 & $6.4 \pm 7.4~$ & 3.6 & $13.4 \pm 7.5$ & 
12.5 & \textcolor{light-gray}{$21.9 \pm 11.5$} & \textcolor{light-gray}{19.0} \\ \hline \multirow{6}{*}{ Cross-stitch }&\multirow{2}{*}{Equal}&Segmentation & \textcolor{light-gray}{$5.8 \pm 5.4~$} & \textcolor{light-gray}{4.0} & \textcolor{light-gray}{$12.2 \pm 15.8~$} & \textcolor{light-gray}{5.0} & \textcolor{light-gray}{$17.0 \pm 14.7~$} & \textcolor{light-gray}{14.0} & $10.8 \pm 11.3~$ & 4.4 \\ &&Registration & ${5.1} \pm {5.5}~$ & 3.2 & $6.2 \pm 8.6~$ & 3.3 & ${12.6} \pm {6.7}~$ & 12.0 & \textcolor{light-gray}{$19.1 \pm 12.5~$} & \textcolor{light-gray}{16.2} \\ &\multirow{2}{*}{Homoscedastic}&Segmentation & {$5.9 \pm 5.4~$} & \textcolor{light-gray}{4.1} & \textcolor{light-gray}{$7.8 \pm 7.4~$} & \textcolor{light-gray}{4.6} & \textcolor{light-gray}{$20.5 \pm 18.9~$} & \textcolor{light-gray}{14.7} & $7.8 \pm 8.7$ & {3.1} \\ &&Registration & \textcolor{light-gray}{$6.2 \pm 5.6$} & \textcolor{light-gray}{4.5} & $6.1 \pm 7.2~$ & {3.2} & $13.5 \pm 7.3$ & 13.5 & \textcolor{light-gray}{$19.4 \pm 12.3$} & \textcolor{light-gray}{16.3} \\ &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$6.7 \pm 5.8$} & \textcolor{light-gray}{4.2} & \textcolor{light-gray}{$7.6 \pm 9.1$} & \textcolor{light-gray}{4.1} & \textcolor{light-gray}{$20.7 \pm 18.6$} & \textcolor{light-gray}{14.9} & ${7.5} \pm {8.8}$ & 3.5 \\ &&Registration & $6.0 \pm 5.7$ & 4.1 & $6.1 \pm 6.8~$ & 3.4 & $13.5 \pm 7.5$ & 13.6 & \textcolor{light-gray}{$21.5 \pm 11.6$} & \textcolor{light-gray}{20.1} \\ \hline \end{tabular} } \label{table:_HD} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[]{DSC values for the different networks on the validation set (HMC). Higher values are better.} \resizebox{\textwidth}{!}{ \begin{tabular}{lllclclclc} &&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{1}{*}{ Seg }&Segmentation & $0.84 \pm 0.03~$ & 0.84 & $0.60 \pm 0.14~$ & 0.62 & $0.75 \pm 0.10~$ & 0.77 & $0.90 \pm 0.07~$ & 0.93 \\ \hline \multirow{1}{*}{ Reg }&Registration & $0.85 \pm 0.06~$ & 0.86 & $0.62 \pm 0.18~$ & 0.68 & $0.79 \pm 0.08~$ & 0.81 & $0.82 \pm 0.10~$ & 0.84 \\ \hline \multirow{1}{*}{ JRS-reg }&Registration & $0.86 \pm 0.03~$ & 0.87 & $0.69 \pm 0.13~$ & 0.73 & $0.83 \pm 0.06~$ & 0.84 & $0.88 \pm 0.08~$ & 0.92 \\ \hline \multirow{2}{*}{ Dense }&Segmentation & ${0.88} \pm {0.04}~$ & {0.89} & ${0.70} \pm {0.12}~$ & 0.73 & $0.85 \pm 0.04~$ & 0.86 & ${0.94} \pm {0.02}~$ & 0.94 \\ &Registration & \textcolor{light-gray}{$0.87 \pm 0.04~$} & \textcolor{light-gray}{0.88} & \textcolor{light-gray}{$0.68 \pm 0.15~$} & 0.73 & \textcolor{light-gray}{$0.82 \pm 0.06~$} & \textcolor{light-gray}{0.83} & \textcolor{light-gray}{$0.87 \pm 0.08~$} & \textcolor{light-gray}{0.90} \\ \hline \multirow{2}{*}{ SEDD }&Segmentation & $0.87 \pm 0.04~$ & 0.88 & $0.69 \pm 0.12~$ & \textcolor{light-gray}{0.72} & $0.83 \pm 0.07~$ & 0.84 & $0.93 \pm 0.02~$ & 0.94 \\ &Registration & \textcolor{light-gray}{$0.86 \pm 0.04~$} & \textcolor{light-gray}{0.87} & $0.69 \pm 0.13~$ & {0.74} & \textcolor{light-gray}{$0.82 \pm 0.06~$} & \textcolor{light-gray}{0.83} & \textcolor{light-gray}{$0.88 \pm 0.08~$} & \textcolor{light-gray}{0.92} \\ \hline \multirow{2}{*}{ Cross-stitch }&Segmentation & ${0.88} \pm {0.04}~$ & 0.88 & ${0.70} 
\pm {0.11}~$ & {0.74} & ${0.86} \pm {0.05}~$ & {0.88} & ${0.94} \pm {0.02}~$ & {0.95} \\ &Registration & \textcolor{light-gray}{$0.87 \pm 0.03~$} & 0.88 & \textcolor{light-gray}{$0.68 \pm 0.15~$} & \textcolor{light-gray}{0.73} & \textcolor{light-gray}{$0.84 \pm 0.05~$} & \textcolor{light-gray}{0.85} & \textcolor{light-gray}{$0.88 \pm 0.08~$} & \textcolor{light-gray}{0.91} \\ \hline \multirow{1}{*}{ Elastix~\cite{qiao2017fast} }&Registration & $0.84 \pm 0.07~$ & 0.86 & $0.50 \pm 0.25~$ & 0.53 & $0.74 \pm 0.06~$ & 0.74 & $0.75 \pm 0.10~$ & 0.76 \\ \hline \multirow{1}{*}{ Hybrid~\cite{MedPhys} }&Registration & ${0.88} \pm {0.04}~$ & {0.89} & ${0.70} \pm {0.14}~$ & 0.72 & $0.85 \pm 0.06~$ & 0.87 & $0.91 \pm 0.08~$ & {0.95} \\ \hline \multirow{1}{*}{ JRS-GAN~\cite{JrsGan} }&Registration & $0.86 \pm 0.04~$ & 0.87 & $0.61 \pm 0.20~$ & 0.67 & $0.82 \pm 0.06~$ & 0.83 & $0.88 \pm 0.08~$ & 0.92 \\ \hline \end{tabular} } \label{table:_DSC} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[]{\% 95 HD (mm) values for the different networks on the validation set (HMC). Lower values are better.} \resizebox{\textwidth}{!}{ \begin{tabular}{lllclclclc} &&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{1}{*}{ Seg }&Segmentation & $4.4 \pm 1.0~$ & 4.4 & $8.6 \pm 8.6~$ & 7.3 & $16.5 \pm 11.0~$ & 13.3 & $6.9 \pm 6.6~$ & 4.0 \\ \hline \multirow{1}{*}{ Reg }&Registration & $5.5 \pm 4.5~$ & 4.0 & $5.6 \pm 4.1~$ & 4.3 & $11.0 \pm 6.4~$ & 9.4 & $15.7 \pm 9.6~$ & 12.1 \\ \hline \multirow{1}{*}{ JRS-reg }&Registration & $3.8 \pm 1.3~$ & 3.2 & $4.1 \pm 2.8~$ & 3.2 & $9.9 \pm 6.2~$ & 8.4 & $11.7 \pm 10.3~$ & 9.2 \\ \hline \multirow{2}{*}{ Dense }&Segmentation & $3.2 \pm 1.0~$ & 3.0 & \textcolor{light-gray}{$5.8 \pm 7.6~$} & \textcolor{light-gray}{3.9} & $9.6 \pm 5.8~$ & 8.0 & $3.8 \pm 3.9~$ & 2.8 \\ &Registration & \textcolor{light-gray}{$3.4 \pm 1.1~$} & \textcolor{light-gray}{3.2} & $4.4 \pm 3.0~$ & 3.2 & \textcolor{light-gray}{$10.5 \pm 6.0~$} & \textcolor{light-gray}{9.0} & \textcolor{light-gray}{$12.6 \pm 9.2~$} & \textcolor{light-gray}{10.2} \\ \hline \multirow{2}{*}{ SEDD }&Segmentation & $3.5 \pm 1.1~$ & \textcolor{light-gray}{3.3} & \textcolor{light-gray}{$5.2 \pm 5.2~$} & \textcolor{light-gray}{4.0} & \textcolor{light-gray}{$10.5 \pm 5.5~$} & \textcolor{light-gray}{9.7} & ${3.3} \pm {1.3}~$ & 3.0 \\ &Registration & \textcolor{light-gray}{$3.6 \pm 1.2~$} & 3.2 & $4.1 \pm 2.6~$ & {3.1} & $10.4 \pm 6.3~$ & 9.5 & \textcolor{light-gray}{$11.7 \pm 9.9~$} & \textcolor{light-gray}{8.7} \\ \hline \multirow{2}{*}{ Cross-stitch }&Segmentation & $3.0 \pm 1.0~$ & 3.0 & $4.3 \pm 1.7~$ & \textcolor{light-gray}{3.9} & $9.5 \pm 6.2~$ & 7.2 & ${3.3} \pm {2.9}~$ & {2.3} \\ &Registration & \textcolor{light-gray}{$3.2 \pm 0.9~$} & 3.0 & \textcolor{light-gray}{$4.5 \pm 3.3~$} & 3.6 & \textcolor{light-gray}{$9.8 \pm 6.3~$} & \textcolor{light-gray}{8.6} & \textcolor{light-gray}{$12.2 \pm 10.1~$} & \textcolor{light-gray}{9.7} \\ \hline \multirow{1}{*}{ Elastix~\cite{qiao2017fast} }&Registration & $4.0 \pm 1.7~$ & 3.7 & $6.0 \pm 3.4~$ & 5.6 & $10.9 \pm 5.2~$ & 9.8 & $15.3 \pm 8.3~$ & 13.6 \\ \hline \multirow{1}{*}{ Hybrid~\cite{MedPhys} }&Registration & ${2.9} \pm {0.9}~$ & {2.8} & ${3.8} \pm 
{2.2}~$ & {3.1} & ${7.7} \pm {4.5}~$ & {6.1} & $5.7 \pm 4.6~$ & 3.3 \\ \hline \multirow{1}{*}{ JRS-GAN~\cite{JrsGan} }&Registration & $3.4 \pm 1.2~$ & 3.0 & $5.3 \pm 3.0~$ & 4.6 & $10.1 \pm 6.1~$ & 8.4 & $11.0 \pm 9.6~$ & 7.6 \\ \hline \end{tabular} } \label{table:_HD} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[Table caption text]{DSC values for the different networks on the independent test set (EMC). Higher values are better. } \resizebox{\textwidth}{!}{ \begin{tabular}{lllclclclc} &&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{1}{*}{ Seg }&Segmentation & $0.73 \pm 0.11~$ & 0.77 & $0.37 \pm 0.30~$ & 0.28 & $0.67 \pm 0.10~$ & 0.68 & ${0.91} \pm {0.07}~$ & 0.93 \\ \hline \multirow{1}{*}{ Reg }&Registration & $0.83 \pm 0.16~$ & 0.88 & $0.64 \pm 0.26~$ & 0.74 & $0.72 \pm 0.16~$ & 0.77 & $0.75 \pm 0.19~$ & 0.82 \\ \hline \multirow{1}{*}{ JRS-reg }&Registration & $0.84 \pm 0.16~$ & 0.89 & $0.68 \pm 0.25~$ & 0.77 & $0.76 \pm 0.15~$ & 0.80 & $0.80 \pm 0.18~$ & 0.89 \\ \hline \multirow{2}{*}{ Dense }&Segmentation & $0.84 \pm 0.16~$ & 0.89 & \textcolor{light-gray}{$0.63 \pm 0.27~$} & \textcolor{light-gray}{0.75} & $0.79 \pm 0.16~$ & {0.82} & $0.87 \pm 0.13~$ & 0.93 \\ &Registration & $0.84 \pm 0.16~$ & \textcolor{light-gray}{0.88} & $0.68 \pm 0.25~$ & 0.78 & \textcolor{light-gray}{$0.77 \pm 0.14~$} & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.78 \pm 0.17~$} & \textcolor{light-gray}{0.86} \\ \hline \multirow{2}{*}{ SEDD }&Segmentation & $0.84 \pm 0.15~$ & 0.89 & \textcolor{light-gray}{$0.50 \pm 0.28~$} & \textcolor{light-gray}{0.58} & $0.76 \pm 0.18~$ & {0.82} & $0.88 \pm 0.13~$ & {0.94} \\ &Registration & $0.84 \pm 0.16~$ & \textcolor{light-gray}{0.88} & $0.68 \pm 0.24~$ & 0.78 & $0.76 \pm 0.15~$ & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.79 \pm 0.17~$} & \textcolor{light-gray}{0.88} \\ \multirow{2}{*}{ Cross-stitch }&Segmentation & $0.84 \pm 0.14~$ & 0.89 & \textcolor{light-gray}{$0.61 \pm 0.27~$} & \textcolor{light-gray}{0.73} & $0.78 \pm 0.14~$ & 0.81 & $0.88 \pm 0.10~$ & 0.93 \\ &Registration & $0.84 \pm 0.15~$ & 0.89 & $0.68 \pm 0.24~$ & 0.80 & \textcolor{light-gray}{$0.77 \pm 0.15~$} & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.80 \pm 0.16~$} & \textcolor{light-gray}{0.87} \\ \hline \multirow{1}{*}{ Elastix~\cite{qiao2017fast} }&Registration & ${0.89} \pm {0.05}~$ & {0.91} & $0.72 \pm 0.24~$ & {0.82} & $0.75 \pm 0.12~$ & 0.76 & $0.79 \pm 0.18~$ & 0.87 \\ \hline \multirow{1}{*}{ Hybrid~\cite{MedPhys} }&Registration & $0.88 \pm 0.04~$ & 0.89 & ${0.77} \pm {0.15}~$ & 0.81 & ${0.80} \pm {0.10}~$ & {0.82} & $0.85 \pm 0.13~$ & 0.90 \\ \hline \end{tabular} } \label{table:_DSC} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[Table caption text]{\%95 HD (mm) values for the different networks on the independent test set (EMC). Lower values are better. 
} \resizebox{\textwidth}{!}{ \begin{tabular}{lllclclclc} &&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{1}{*}{ Seg }&Segmentation & $10.7 \pm 5.4~$ & 9.3 & $21.4 \pm 17.9~$ & 15.4 & $30.5 \pm 12.9~$ & 29.0 & $11.2 \pm 8.5~$ & 10.0 \\ \hline \multirow{1}{*}{ Reg }&Registration & $6.7 \pm 5.9~$ & 4.2 & $7.5 \pm 8.6~$ & 4.3 & $13.1 \pm 6.9~$ & 12.0 & $22.7 \pm 14.0~$ & 20.2 \\ \hline \multirow{1}{*}{ JRS-reg }&Registration & $5.7 \pm 5.9~$ & 3.7 & $6.2 \pm 7.1~$ & 3.6 & $13.0 \pm 7.3~$ & 11.5 & $18.5 \pm 14.0~$ & 13.0 \\ \hline \multirow{2}{*}{ Dense }&Segmentation & \textcolor{light-gray}{$5.8 \pm 5.9~$} & \textcolor{light-gray}{3.3} & \textcolor{light-gray}{$10.0 \pm 11.6~$} & \textcolor{light-gray}{5.1} & \textcolor{light-gray}{$17.1 \pm 16.6~$} & \textcolor{light-gray}{13.8} & $11.4 \pm 11.3~$ & 5.9 \\ &Registration & $5.3 \pm 5.7~$ & 3.0 & $6.4 \pm 6.8~$ & 3.2 & $13.0 \pm 6.5~$ & 12.6 & \textcolor{light-gray}{$19.2 \pm 13.7~$} & \textcolor{light-gray}{14.2} \\ \hline \multirow{2}{*}{ SEDD }&Segmentation & \textcolor{light-gray}{$5.7 \pm 5.5~$} & \textcolor{light-gray}{3.9} & \textcolor{light-gray}{$16.0 \pm 16.3~$} & \textcolor{light-gray}{10.6} & \textcolor{light-gray}{$18.8 \pm 16.5~$} & \textcolor{light-gray}{15.3} & ${9.4} \pm {9.9}~$ & {4.1} \\ &Registration & $5.5 \pm 5.6~$ & 3.3 & $6.3 \pm 6.7~$ & 3.6 & $13.3 \pm 7.3~$ & 13.0 & \textcolor{light-gray}{$18.8 \pm 13.5~$} & \textcolor{light-gray}{14.6} \\ \hline \multirow{2}{*}{ Cross-stitch }&Segmentation & \textcolor{light-gray}{$5.8 \pm 5.4~$} & \textcolor{light-gray}{4.0} & \textcolor{light-gray}{$12.2 \pm 15.8~$} & \textcolor{light-gray}{5.0} & \textcolor{light-gray}{$17.0 \pm 14.7~$} & \textcolor{light-gray}{14.0} & $10.8 \pm 11.3~$ & 4.4 \\ &Registration & $5.1 \pm 5.5~$ & 3.2 & $6.2 \pm 8.6~$ & 3.3 & $12.6 \pm 6.7~$ & 12.0 & \textcolor{light-gray}{$19.1 \pm 12.5~$} & \textcolor{light-gray}{16.2} \\ \hline \multirow{1}{*}{ Elastix~\cite{qiao2017fast} }&Registration & ${3.6} \pm {2.0}~$ & {2.9} & ${4.6} \pm {4.4}~$ & 3.2 & $11.3 \pm 6.0~$ & 11.3 & $16.1 \pm 14.8~$ & 10.4 \\ \hline \multirow{1}{*}{ Hybrid~\cite{MedPhys} }&Registration & $3.9 \pm 1.9~$ & 3.4 & $4.8 \pm 4.7~$ & {3.1} & ${10.3} \pm {6.8}~$ & {8.6} & $11.1 \pm 10.6~$ & 6.6 \\ \hline \end{tabular} } \label{table:EMC_HD} \end{table*} \section{Conclusion} \label{conclusion} In this paper, we proposed to formulate the registration and segmentation tasks as a multi-task learning problem. We presented various approaches to do so, both on the architectural level and via the loss function. We experimented with different network architectures to investigate which setting best maximizes the information flow between the two tasks, and we compared different loss weighting methods to optimally combine the losses of these tasks. We showed that multi-task learning approaches outperform their single-task counterparts. Adaptive parameter sharing via Cross-stitch units gives the networks the freedom to share information between the two tasks, and resulted in the best performance. An equal loss weighting approach performed similarly to more sophisticated weighting methods.
The Cross-stitch network with equal loss weights achieved a median MSD of 0.99 mm, 0.82 mm, 1.13 mm, and 1.47 mm on the validation set and 1.09 mm, 1.24 mm, 1.02 mm, and 2.10 mm on the independent test set for the prostate, bladder, seminal vesicles, and rectum, respectively. These values are on the order of the slice thickness (2 mm). Owing to its fast inference, the proposed method is highly promising for automatic re-contouring of follow-up scans for adaptive radiotherapy, potentially reducing treatment-related complications and therefore improving patient quality-of-life after treatment. \section{Discussion} \label{discussion} In this study, we proposed to merge image registration and segmentation, both at the architectural level and via the loss function, in a multi-task learning setting. We studied different network architectures and loss weighting methods to explore how these tasks interact and how to best leverage the knowledge shared between them. Moreover, we carried out an extensive quantitative analysis in the context of adaptive radiotherapy, and compared the proposed multi-task methods to their single-task counterparts. A substantial number of experiments were executed, in which we explored the following methodological choices: the bending energy weight, the inputs to the STL and MTL networks, and the loss weighting method. We also performed a thorough analysis of how the Cross-stitch units and loss weights evolve during training. Finally, we compared our proposed methods against state-of-the-art methods. In all experiments we fixed the weight of the bending energy penalty, so that the network could not set it too low in order to improve the DSC of the deformed contours at the expense of the smoothness of the predicted DVF. As shown in Figure \ref{fig:bending_energy}, low bending energy weights result in better contour quality at the expense of the smoothness of the predicted DVF. For the inputs to the STL networks, additionally feeding $\SMMath$\xspace to the segmentation network resulted in a statistically significant improvement, especially for the seminal vesicles. Apparently the network considers $\SMMath$\xspace an initial estimate of $\SFMath$\xspace and subsequently uses it as guidance for its final prediction. When feeding $\IMMath$\xspace, the results deteriorated; this may confuse the network, as $\IFMath$\xspace and $\IMMath$\xspace contain the same anatomy but with different shapes and local positions. Adding both $\IMMath$\xspace and $\SMMath$\xspace performed similarly to adding only $\SMMath$\xspace, which indicates that the networks learned to ignore $\IMMath$\xspace. For the registration network, the addition of $\SMMath$\xspace resulted in a sub-optimal result, since the $\SMMath$\xspace contours on their own do not represent the underlying deformation well. For the inputs to the MTL networks, in the JRS-reg network, feeding $\SMMath$\xspace alongside $\IFMath$\xspace and $\IMMath$\xspace resulted in a performance similar to not feeding it. This indicates that the incorporation of $\SMMath$\xspace via the DSC loss already enables the JRS-reg network to exploit this extra information, and that additionally adding $\SMMath$\xspace as a network input does not provide further benefits. In the Cross-stitch network, we found that adding $\SMMath$\xspace to the registration network results in a statistically significant improvement.
Furthermore, feeding $\SMMath$\xspace to one of the networks is sufficient, suggesting that the segmentation and registration networks communicate their knowledge efficiently through the Cross-stitch units. We selected the STL networks with $\IFMath$\xspace (for segmentation) and $\IFMath$\xspace alongside $\IMMath$\xspace (for registration) as our baseline methods. Of these two networks, the registration network performed better overall, since it leverages prior knowledge from the organs in the moving image. For the bladder, the segmentation network achieved better results; apparently the registration network had difficulty finding the correspondence between the bladder in the fixed and moving images, since the bladder tends to deform considerably between visits. However, the segmentation network failed to segment the seminal vesicles in five cases. This is explained by the fact that the seminal vesicles are a difficult structure to segment, due to their relatively small size, ill-defined borders, and poor contrast with the surroundings. The registration network, on the other hand, is able to employ the surrounding anatomy as context to accurately warp the seminal vesicles. For the multi-task networks, we demonstrated that fusing the segmentation and registration tasks performs better than the single-task counterparts. Merging these tasks using the Cross-stitch network achieved the best results on both the validation and test datasets. Different loss weighting methods achieved comparable results, as shown in Table \ref{table:HMC_MSD_weighting}. In Figure \ref{fig:loss-weights}, homoscedastic uncertainty tended to weight all losses equally, using an almost fixed weight of 0.9 during most of the training iterations. In contrast, DWA tended to fluctuate during training, as the weights are updated based on the ratio of the losses from previous iterations, which fluctuates due to the batch-based training. Since the fixed and moving images are affinely registered beforehand, DWA tended to down-weight the registration loss and the associated DSC at the beginning of training, while weighting the segmentation loss more in order to improve its prediction. Later during training, all weights stabilized around 0.9, similar to homoscedastic uncertainty. Although both methods stabilized around the same value (0.9) by the end of training, homoscedastic uncertainty achieved slightly better results than DWA and equal weighting, except for the Cross-stitch network. Our reasoning is that homoscedastic uncertainty, unlike the other methods, is learnable during training and highly dependent on the underlying task uncertainty. By analyzing the behavior of the Cross-stitch units, as demonstrated in Figure \ref{fig:CS-weights}, we found that they tended to average the feature maps in the down-sampling path, while preferring to be more task-specific in the upsampling path. This somewhat mimics the shared encoder double decoder (SEDD) network, but in contrast to that network, the Cross-stitch network does not completely split the decoder paths. This finding confirms that the segmentation and registration tasks are correlated and thereby encode similar features. We carried out an experiment to study the effect of the bladder filling protocol between the HMC and EMC datasets.
As shown in Figure \ref{fig:HMC_bladder_filling}, the HMC dataset follows a bladder filling protocol, so the bladder volume varies only slightly (around 100 mL) between sessions; this is not the case for the EMC dataset, as shown in Figure \ref{fig:EMC_bladder_filling}. Since the registration-based and joint networks were trained on small bladder deformations, they failed on large deformations; the segmentation network, however, was not affected, since it relies on the underlying texture rather than the deformation to segment the bladder. In terms of the smoothness of the predicted DVF, shown in Table \ref{table:dvf-table}, the MTL networks achieved lower values for the standard deviation of the Jacobian as well as for the folding fraction, compared to the STL network (Reg), on both the validation and test sets. Our reasoning is that joining the segmentation task to the registration task acts as an additional regularization for the registration network: the higher the quality of the predicted DVF, the higher the quality of the propagated contours, and subsequently the lower the DSC loss. The values on the test set are slightly higher than on the validation set, which is due to the difference in deformations between the two sets and the fact that the network has not seen the test set before. This could be addressed using transfer learning, as suggested by Elmahdy \textit{et al.} \cite{elmahdy2020patient}, or by using synthetic deformations that mimic the ones present in the EMC dataset. We compared our algorithm against algorithms from various categories: a non-learning method (\texttt{elastix} \cite{elastix}, a popular conventional tool), a hybrid method \cite{MedPhys}, and a GAN-based method \cite{JrsGan}. The presented multi-task networks outperformed these approaches on the validation set and performed on par with them on the test set. However, the test times for the hybrid and \texttt{elastix} methods are on the order of minutes, while the presented methods have the advantage of fast prediction, in less than a second. This enables online automatic re-contouring of daily scans for adaptive radiotherapy. Moreover, in our hybrid study \cite{MedPhys} we carried out an extensive dosimetric evaluation alongside the geometric evaluation. The predicted contours from that study met the dose coverage constraints in 86\%, 91\%, and 99\% of the cases for the prostate, seminal vesicles, and lymph nodes, respectively. Since our multi-task networks outperformed the geometric results of that study, we expect that our contours would achieve an even higher success rate in terms of dose coverage. This could potentially reduce treatment-related complications and therefore improve patient quality-of-life after treatment. A promising direction for future research is the addition of a third task, potentially radiotherapy dose plan estimation, so that we can generate contours that are consistent with an optimal dose plan. Further studies could also focus on more sophisticated MTL network architectures, similar to sluice networks \cite{ruder2017sluice} or routing networks \cite{rosenbaum2017routing}. Moreover, one could study how to fuse the contours from the segmentation and registration paths in a smarter way than simply selecting one of them based on the validation set.
\section{Datasets, Implementation, and Evaluation} \label{exp_results_section} \subsection{Datasets} This study involves two datasets from two different institutes and scanners, of patients who underwent intensity-modulated radiotherapy for prostate cancer. The first dataset is from Haukeland Medical Center (HMC), Norway. It contains 18 patients with 8--11 daily CT scans each, where each scan corresponds to a treatment fraction. These scans were acquired using a GE scanner and have 90 to 180 slices with a voxel size of approximately 0.9 $\times$ 0.9 $\times$ 2.0 mm. The second dataset is from Erasmus Medical Center (EMC), The Netherlands. This dataset consists of 14 patients with 3 daily CT scans each. The scans were acquired using a Siemens scanner and have 91 to 218 slices with a voxel size of approximately 0.9 $\times$ 0.9 $\times$ 1.5 mm. The target structures (prostate and seminal vesicles) as well as the organs-at-risk (bladder and rectum) were manually delineated by radiation oncologists. All datasets were resampled to an isotropic voxel size of 1 $\times$ 1 $\times$ 1 mm. All scans and corresponding contours were affinely registered beforehand using \texttt{elastix} \cite{elastix}, so that corresponding anatomical structures would fit in the network's field of view. The scan intensities were clipped to $[-1000, 1000]$. \subsection{Implementation and Training Details}\label{implement_details} All experiments were developed using TensorFlow (version 1.14) \cite{abadi2016tensorflow}. The convolutional layers were initialized with a random normal distribution ($\mu=0.0$, $\sigma=0.02$). All parameters of the Cross-stitch units were initialized using a truncated normal distribution ($\mu=0.5$, $\sigma=0.25$) in order to encourage the networks to share information at the beginning of training. To ensure fairness regarding the number of parameters in all networks, the number of filters for the Cross-stitch network was set to [16, 32, 64, 32, 16], while for the other networks these numbers were scaled by $\sqrt{2}$, resulting in [23, 45, 91, 45, 23] feature maps. This results in approximately $7.8 \times 10^5$ trainable parameters for each network. The networks were trained using the RAdam optimizer \cite{liu2019variance} with a fixed learning rate of $10^{-4}$. Patches were sampled equally from the target organs, the organs-at-risk, and the torso. All networks were trained for 200K iterations with an initial batch size of 2. The batch size was then effectively doubled by also swapping the fixed and moving patches, so that the network warps the fixed patch to the moving patch and vice versa in the same training iteration (see the sketch below). The networks were trained and optimized on the HMC dataset, while the EMC dataset was used as an independent test set. Training was performed on a subset of 111 image pairs from 12 patients, while validation and optimization were carried out on the remaining 50 image pairs from 6 patients. From each image, 1,000 patches of size 96 $\times$ 96 $\times$ 96 voxels were sampled. The patch size was chosen so that it would fit in the GPU memory, while still producing a patch of size $17^3$ at the lowest resolution, which is a reasonable size to encode the deformation of the surrounding region. Losses from the deeply supervised resolutions were weighted equally, $\frac{1}{3}$ each. Training was performed on a cluster equipped with NVIDIA RTX6000, Tesla V100, and GTX1080 Ti GPUs with 24, 16, and 11 GB of memory, respectively.
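For concreteness, the following minimal NumPy sketch illustrates this symmetric batch construction; the function and array names are illustrative rather than taken from the actual implementation, and the shapes assume the $96^3$ patches described above.
\begin{verbatim}
import numpy as np

def symmetric_batch(f_img, m_img, f_seg, m_seg):
    # f_img, m_img: fixed/moving patches, shape [B, 96, 96, 96, 1];
    # f_seg, m_seg: the corresponding segmentation patches.
    fixed      = np.concatenate([f_img, m_img], axis=0)
    moving     = np.concatenate([m_img, f_img], axis=0)
    fixed_seg  = np.concatenate([f_seg, m_seg], axis=0)
    moving_seg = np.concatenate([m_seg, f_seg], axis=0)
    # Each pair now appears twice, once in each direction, so an
    # initial batch of 2 yields an effective batch size of 4.
    return fixed, moving, fixed_seg, moving_seg
\end{verbatim}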
\subsection{Evaluation Metrics} The automatically generated contours are evaluated geometrically by comparing them against the manual contours of the prostate, seminal vesicles, rectum, and bladder. The Dice similarity coefficient (DSC) measures the overlap between contours: \begin{equation}\label{eq:dsc} \mathrm{DSC}= \frac{2 \mid {S_f} \cap {S_g} \mid}{\mid {S_f} \mid + \mid {S_g} \mid}, \end{equation} where ${S_f}$ is the manual contour and ${S_g}$ is the generated contour from either the segmentation or the registration network. The distance between the contours is measured by the Mean Surface Distance (MSD) and Hausdorff Distance (HD), defined as follows: \begin{align} \mathrm{MSD} &= \frac{1}{2} \left( \frac{1}{n} \sum_{i=1}^{n} d \left( a_i, {S_g} \right) + \frac{1}{m} \sum_{j=1}^{m} d \left( b_j, {S_f} \right) \right),\label{eq:msd}\\ \mathrm{HD} &= \max\! \left\lbrace\! \max_i \left\lbrace d \left( a_i, {S_g} \right) \right\rbrace , \max_j \left\lbrace d \left( b_j, {S_f} \right) \right\rbrace \!\right\rbrace,\label{eq:hd} \end{align} where $\{$$a_1$; $a_2$; \dots ; $a_n$$\}$ and $\{$$b_1$; $b_2$; \dots; $b_m$$\}$ are the surface mesh points of the manual and generated contours, respectively, and $d \left( a_i, {S_g} \right) = \min_j \, \|b_j - a_i\|$ (and analogously for $d \left( b_j, {S_f} \right)$). For all experiments, we apply a largest-connected-component operation to the network prediction. In order to evaluate the quality of the deformations, we calculate the determinant of the Jacobian matrix. A Jacobian of 1 indicates that no volume change has occurred; a Jacobian $>$ 1 indicates expansion, a Jacobian between 0 and 1 indicates shrinkage, and a Jacobian $\leq$ 0 indicates a singularity, i.e., a place where folding has occurred. We quantify the smoothness and quality of the DVF by reporting the folding fraction per image and the standard deviation of the Jacobian, alongside the MSD of the contours. A repeated one-way ANOVA test was performed using a significance level of $p = 0.05$. P-values are only stated for the comparisons between the best network and the other networks. \section{Experiments and Results} \begin{table*}[!t] \centering \setlength{\tabcolsep}{3pt} \caption[]{The effect of the network inputs for the different architectures on the validation set (HMC) in terms of MSD (mm). Lower values are better. Here, $\oplus$ is the concatenation operation, and $\cdot \| \cdot$ represents the inputs to the segmentation network (left of $\|$) and the inputs to the registration network (right of $\|$).
Stars denote one-way ANOVA statistical significance with respect to the Cross-stitch network with ${I_f} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ as inputs.} \resizebox{\textwidth}{!}{ \begin{tabular}{lcllclclclc} &&&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Input & Output path&\multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{4}{*}{ Seg }&\multirow{1}{*}{ $ {I_f}$ }& &$1.49 \pm 0.3^{*}~$ & 1.49 & $2.50 \pm 2.6~$ & 2.09 & $3.39 \pm 2.2~$ & 2.73 & $1.60 \pm 1.1^{*}~$ & 1.13 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{S_m}$ }&& $1.31 \pm 0.4~$ & 1.23 & ${1.63} \pm {0.9}~$ & {1.26} & $2.88 \pm 3.4~$ & {2.06} & $1.12 \pm 0.5~$ & {0.97} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& $3.06 \pm 0.6^{*}~$ & 3.01 & $5.36 \pm 4.4~$ & 3.71 & $14.57 \pm 9.4^{*}~$ & 11.58 & $1.46 \pm 1.3~$ & 1.12 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& ${1.26} \pm {0.4}~$ & {1.20} & $2.08 \pm 2.2~$ & 1.27 & ${2.79} \pm {1.6}~$ & 2.45 & ${1.05} \pm {0.4}~$ & {0.97} \\ \hline \multirow{2}{*}{ Reg }&\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& ${1.43} \pm {0.8}^{*}~$ & {1.29} & ${1.71} \pm {1.4}^{*}~$ & {1.37} & ${2.44} \pm {1.1}^{*}~$ & {2.17} & ${3.40} \pm {2.3}^{*}~$ & {2.71} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& $1.91 \pm 1.3~$ & 1.59 & $1.92 \pm 1.5~$ & 1.44 & $2.58 \pm 1.1~$ & 2.33 & $3.88 \pm 2.5~$ & 3.16 \\ \hline \multirow{2}{*}{ JRS-reg }&\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& ${1.16} \pm {0.3}~$ & 1.16 & ${1.32} \pm {0.6}~$ & {1.11} & ${2.08} \pm {1.0}~$ & {1.82} & ${2.57} \pm {2.0}~$ & 2.04 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& $1.20 \pm 0.4~$ & {1.13} & $1.35 \pm 0.7~$ & 1.16 & ${2.08} \pm {1.0}~$ & {1.82} & $2.63 \pm 2.3~$ & {1.90} \\ \hline \multirow{8}{*}{ Cross-stitch }&\multirow{2}{*}{ $ {I_f} \: || \: {I_f}\oplus{I_m}$ }&Segmentation & \textcolor{light-gray}{$1.47 \pm 0.3^{*}$} & \textcolor{light-gray}{1.48} & \textcolor{light-gray}{$2.93 \pm 3.0^{*}$} & \textcolor{light-gray}{2.08} & \textcolor{light-gray}{$2.93 \pm 2.0^{*}$} & \textcolor{light-gray}{2.25} & $1.19 \pm 1.0$ & 0.89 \\ &&Registration & $1.10 \pm 0.3~$ & 1.07 & $1.38 \pm 0.7~$ & 1.17 & $2.12 \pm 1.0$ & 1.89 & \textcolor{light-gray}{$2.55 \pm 2.1$} & \textcolor{light-gray}{1.89} \\ [1.5mm] &\multirow{2}{*}{ $ {I_f} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & ${1.06} \pm {0.3}~$ & {0.99} & ${1.27} \pm {0.4}~$ & \textcolor{light-gray}{1.15} & ${1.76} \pm {0.8}~$ & {1.47} & ${0.91} \pm {0.4}~$ & {0.82} \\ &&Registration & \textcolor{light-gray}{$1.10 \pm 0.3~$} & \textcolor{light-gray}{1.06} & \textcolor{light-gray}{$1.30 \pm 0.6~$} & {1.13} & \textcolor{light-gray}{$2.00 \pm 1.0~$} & \textcolor{light-gray}{1.75} & \textcolor{light-gray}{$2.45 \pm 2.1~$} & \textcolor{light-gray}{1.81} \\ [1.5mm] &\multirow{2}{*}{ $ {I_f}\oplus{S_m} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & \textcolor{light-gray}{$2.05 \pm 0.7^{*}$} & \textcolor{light-gray}{2.00} & \textcolor{light-gray}{$3.66 \pm 4.4^{*}$} & \textcolor{light-gray}{2.19} & \textcolor{light-gray}{$2.44 \pm 1.0^{*}$} & \textcolor{light-gray}{2.35} & $1.09 \pm 0.5^{*}$ & 0.93 \\ &&Registration & $1.40 \pm 0.4$ & 1.35 & $1.31 \pm 0.6~$ & 1.17 & $2.27 \pm 1.0$ & 2.02 & \textcolor{light-gray}{$2.56 \pm 1.9$} & \textcolor{light-gray}{1.96} \\ [1.5mm]
&\multirow{2}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & $1.08 \pm 0.3~$ & 1.05 & \textcolor{light-gray}{$1.54 \pm 0.9^{*}$} & \textcolor{light-gray}{1.28} & $1.88 \pm 1.0~$ & 1.61 & $1.01 \pm 0.7~$ & {0.82} \\ &&Registration & \textcolor{light-gray}{$1.20 \pm 0.3$} & \textcolor{light-gray}{1.18} & $1.35 \pm 0.7~$ & 1.16 & \textcolor{light-gray}{$2.12 \pm 1.1$} & \textcolor{light-gray}{1.87} & \textcolor{light-gray}{$2.54 \pm 2.2$} & \textcolor{light-gray}{1.80} \\ \hline \end{tabular} } \label{table:network_input_MSD} \end{table*} In this paper we present two single-task networks, dubbed \textit{Seg} and \textit{Reg} (see Sections \ref{seg_network} and \ref{reg_network} for more details). Moreover, we investigated multiple multi-task networks, namely JRS-reg, dense, SEDD, and Cross-stitch (see Sections \ref{jrs_network}, \ref{dense_network}, \ref{sedd_network}, and \ref{cs_network} for more details). We compared our proposed methods against three state-of-the-art methods that were developed for prostate CT contouring. These methods represent three approaches, namely an iterative conventional registration method, a deep learning-based registration method, and a hybrid method. For the iterative method, we used the \texttt{elastix} software \cite{elastix} with the NCC similarity loss, using the settings proposed by Qiao \emph{et al.} \cite{qiao2017fast}. In the deep learning method proposed by Elmahdy \emph{et al.} \cite{JrsGan}, a generative network is trained for contour propagation by registration, while a discriminator network evaluates the quality of the propagated contours. Finally, we compare our methods against the hybrid method proposed by Elmahdy \emph{et al.} \cite{MedPhys}, where a CNN segments the bladder and then feeds it to the iterative registration method as prior knowledge. In the following, we first optimize some of the network settings on the validation set (HMC), in order to investigate the influence of the bending energy weight, the network inputs, the loss weighting strategy, and the network architecture on the results. Then, we present the final results on the independent test set, comparing against methods from the literature. \subsection{Bending Energy Weight} \label{bending_Section} We compared the single-task registration network, the JRS-reg method, and the Cross-stitch network for a set of bending energy weights (see Equations (\ref{eq:RegistrationLoss}) and (\ref{eq:general_mtl})), while the weights of the other loss functions were set to 1. Figure \ref{fig:bending_energy} shows the performance of the aforementioned methods for different bending energy weights. The optimal performance of the registration network occurs at a bending weight of 0.5, while the optimal bending weight for both the JRS-reg and Cross-stitch networks is much lower, at the cost of a higher standard deviation of the Jacobian. Therefore, for the remainder of the paper we set the bending energy weight to 0.5, since it achieves the best compromise between contour quality in terms of MSD and registration quality in terms of the standard deviation of the Jacobian determinant. \begin{figure}[t!] \begin{center} \includegraphics[width=1\linewidth]{figures/bending_plot2.pdf} \caption{The performance of the registration, JRS-registration and Cross-stitch networks with different bending energy weights on the validation set (HMC), in terms of mean MSD averaged over the four organs.
The annotation at each point represents the standard deviation of the determinant of the Jacobian.} \label{fig:bending_energy} \end{center} \end{figure} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[]{MSD (mm) values for the different networks and loss weighting methods for the HMC dataset. Lower values are better. Stars and daggers denote one-way ANOVA statistical significance for inter-network experiments with respect to Homoscedastic weights and intra-network experiments with respect to Cross-stitch with Equal weights, respectively. Grey numbers represent the values of the worst path between the segmentation and registration paths.} \resizebox{\textwidth}{!}{ \begin{tabular}{llllclclclc} &&&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Weight & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{3}{*}{ JRS-reg }&\multirow{1}{*}{Equal}&Registration & $1.20 \pm 0.4$ & 1.13 & {$1.35 \pm 0.7~$} &{1.16} & {$2.08 \pm 1.0$} & {1.82} & {$2.63 \pm 2.3^{*}$} & {1.90} \\ &\multirow{1}{*}{Homoscedastic}&Registration & $1.20 \pm 0.3$ & {1.20} & ${1.22} \pm {0.5}~$ & {1.07} & $2.05 \pm 1.0$ & 1.81 & $2.34 \pm 2.2$ & 1.60 \\ &\multirow{1}{*}{DWA}&Registration & {$1.22 \pm 0.3$} & 1.18 & {$1.37 \pm 0.7^{*}~$} & {1.20} & {$2.29 \pm 1.1^{*}$} & {2.04} & {$3.18 \pm 2.4^{*}$} & {2.43} \\ \hline \multirow{6}{*}{ Dense }&\multirow{2}{*}{Equal}&Segmentation & $1.14 \pm 0.4$ & 1.06 & \textcolor{light-gray}{$1.73 \pm 2.1~$} & \textcolor{light-gray}{1.12} & $1.91 \pm 0.9~$ & 1.64 & $1.04 \pm 0.7$ & 0.87 \\ &&Registration & \textcolor{light-gray}{$1.20 \pm 0.3$} & \textcolor{light-gray}{1.11} & $1.33 \pm 0.7^{*}~$ & 1.10 & \textcolor{light-gray}{$2.16 \pm 1.1$} & \textcolor{light-gray}{1.85} & \textcolor{light-gray}{$2.56 \pm 1.9$} & \textcolor{light-gray}{1.90} \\ [1mm] &\multirow{2}{*}{Homoscedastic}&Segmentation & $1.09 \pm 0.3$ & 1.04 & \textcolor{light-gray}{$1.51 \pm 1.2~$} & \textcolor{light-gray}{1.13} & $1.86 \pm 0.8~$ & 1.69 & $0.99 \pm 0.4$ & 0.91 \\ &&Registration & \textcolor{light-gray}{$1.17 \pm 0.3$} & \textcolor{light-gray}{1.15} & $1.31 \pm 0.6~$ & 1.13 & \textcolor{light-gray}{$2.17 \pm 1.0$} & \textcolor{light-gray}{1.96} & \textcolor{light-gray}{$2.63 \pm 2.0^{*}$} & \textcolor{light-gray}{1.95} \\ [1mm] &\multirow{2}{*}{DWA}&Segmentation & $1.12 \pm 0.3^{*\dagger}$ & 1.04 & \textcolor{light-gray}{$1.74 \pm 2.0~$} & \textcolor{light-gray}{1.13} & $1.99 \pm 0.9^{*}$ & 1.77 & $1.00 \pm 0.4~$ & 0.85 \\ &&Registration & \textcolor{light-gray}{$1.14 \pm 0.3$} & \textcolor{light-gray}{1.14} & $1.27 \pm 0.6~$ & {1.07} & \textcolor{light-gray}{$2.24 \pm 1.1^{*}$} & \textcolor{light-gray}{1.97} & \textcolor{light-gray}{$2.72 \pm 1.9$} & \textcolor{light-gray}{2.13} \\ \hline \multirow{6}{*}{ SEDD }&\multirow{2}{*}{Equal}&Segmentation & \textcolor{light-gray}{$1.47 \pm 0.6^{*\dagger}$} & \textcolor{light-gray}{1.31} & \textcolor{light-gray}{$2.81 \pm 4.6$} & \textcolor{light-gray}{1.34} & $1.97 \pm 1.0$ & 1.59 & $1.21 \pm 1.0$ & 0.94 \\ &&Registration & \textcolor{light-gray}{$1.28 \pm 0.4^{*}$} & \textcolor{light-gray}{1.19} & \textcolor{light-gray}{$1.50 \pm 0.9^{*}$} & \textcolor{light-gray}{1.26} & \textcolor{light-gray}{$2.26 \pm 1.1^{*}$} & \textcolor{light-gray}{1.94} & \textcolor{light-gray}{$2.61 \pm 2.1^{*}$}
& \textcolor{light-gray}{1.83} \\ [1mm] &\multirow{2}{*}{Homoscedastic}&Segmentation & $1.15 \pm 0.3^{\dagger}$ & 1.14 & \textcolor{light-gray}{$1.47 \pm 1.0$} & \textcolor{light-gray}{1.22} & $2.12 \pm 1.1$ & 1.91 & $0.99 \pm 0.2$ & 0.94 \\ &&Registration & \textcolor{light-gray}{$1.19 \pm 0.3$} & \textcolor{light-gray}{1.21} & $1.23 \pm 0.5~$ & 1.13 & \textcolor{light-gray}{$2.15 \pm 1.0$} & \textcolor{light-gray}{1.92} & \textcolor{light-gray}{$2.31 \pm 2.0$} & \textcolor{light-gray}{1.64} \\ [1mm] &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$1.22 \pm 0.3^{*\dagger}$} & 1.18 & \textcolor{light-gray}{$1.44 \pm 0.8$} & \textcolor{light-gray}{1.21} & $2.12 \pm 1.4$ & 1.73 & $1.10 \pm 0.6$ & 0.93 \\ &&Registration & $1.22 \pm 0.3$ & \textcolor{light-gray}{1.22} & $1.32 \pm 0.6^{*}~$ & 1.10 & \textcolor{light-gray}{$2.30 \pm 1.1^{*}$} & \textcolor{light-gray}{2.01} & \textcolor{light-gray}{$2.86 \pm 1.9^{*}$} & \textcolor{light-gray}{2.41} \\ \hline \multirow{6}{*}{Cross-stitch} & \multirow{2}{*}{Equal}&Segmentation & ${1.06} \pm {0.3}^{*}~$ & {0.99} & $1.27 \pm 0.4~$ & \textcolor{light-gray}{1.15} & ${1.76} \pm {0.8}^{*}~$ & {1.47} & ${0.91} \pm {0.4}~$ & {0.82} \\ &&Registration & \textcolor{light-gray}{$1.10 \pm 0.3^{*}~$} & \textcolor{light-gray}{1.06} & \textcolor{light-gray}{$1.30 \pm 0.6~$} & 1.13 & \textcolor{light-gray}{$2.00 \pm 1.0^{*}~$} & \textcolor{light-gray}{1.75} & \textcolor{light-gray}{$2.45 \pm 2.1~$} & \textcolor{light-gray}{1.81} \\ [1mm] &\multirow{2}{*}{Homoscedastic}&Segmentation & \textcolor{light-gray}{$1.23 \pm 0.3^{\dagger}$} & \textcolor{light-gray}{1.16} & \textcolor{light-gray}{$1.51 \pm 1.2~$} & \textcolor{light-gray}{1.17} & \textcolor{light-gray}{$2.37 \pm 1.0$} & \textcolor{light-gray}{2.09} & $0.92 \pm 0.2~$ & 0.89 \\ &&Registration & \textcolor{light-gray}{$1.24 \pm 0.3$} & \textcolor{light-gray}{1.24} & $1.32 \pm 0.6~$ & 1.13 & $2.12 \pm 1.0$ & 1.89 & \textcolor{light-gray}{$2.45 \pm 1.9$} & \textcolor{light-gray}{1.97} \\ [1mm] &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$1.34 \pm 0.4^{*\dagger}$} & \textcolor{light-gray}{1.27} & \textcolor{light-gray}{$1.75 \pm 1.7$} & \textcolor{light-gray}{1.29} & \textcolor{light-gray}{$2.32 \pm 0.9^{\dagger}$} & \textcolor{light-gray}{2.11} & $1.17 \pm 0.8^{*}$ & 0.91 \\ &&Registration & $1.22 \pm 0.3$ & 1.19 & $1.27 \pm 0.6~$ & 1.09 & $2.21 \pm 1.0^{*}~$ & 2.00 & \textcolor{light-gray}{$2.93 \pm 2.3^{*}$} & \textcolor{light-gray}{2.27} \\ \hline \end{tabular} } \label{table:HMC_MSD_weighting} \end{table*} \begin{figure*}[t!] \begin{center} \includegraphics[width=1\textwidth]{figures/weight_analysis_mod.pdf} \caption{The evolution of the loss weights during training for different multi-task networks on the validation dataset (HMC).} \label{fig:loss-weights} \end{center} \end{figure*} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{figures/cross_stitch_mod.png} \caption{The evolution of the Cross-stitch unit weights during training using equal weights. CS\#1 and CS\#2 are placed in the down-sampling path, while CS\#3 and CS\#4 are placed in the upsampling path. The solid lines represent the mean of the weights across the diagonal of the CS unit, while the dashed lines represent the mean of the off-diagonal weights.
} \label{fig:CS-weights} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{figures/scatter_bladder_volume_diff_MSD_HMC_final.pdf} \caption{The effect of the bladder volume deviation from the planning volume on the performance of the Seg, Reg, and Cross-stitch networks for the validation set (HMC). } \label{fig:HMC_bladder_filling} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{figures/scatter_bladder_volume_diff_MSD_EMC_final.pdf} \caption{The effect of the bladder volume deviation from the planning volume on the performance of the Seg, Reg, and Cross-stitch networks for the independent test set (EMC). } \label{fig:EMC_bladder_filling} \end{center} \end{figure} \subsection{Optimization of the Network Inputs} During training, validation, and testing, we have access to the fixed image $\IFMath$\xspace, the moving image $\IMMath$\xspace, and the moving segmentation $\SMMath$\xspace. In Table \ref{table:network_input_MSD} we compare different sets of inputs on the validation dataset. This experiment helps to better understand how the networks interpret and utilize these inputs, and how this reflects on the network outcome as represented by the MSD metric. For this experiment we used equal loss weights for the MTL networks. Feeding $\SMMath$\xspace to the segmentation network improves the results substantially compared to only feeding $\IFMath$\xspace, especially for the seminal vesicles, while feeding $\IMMath$\xspace deteriorates the results. For the registration and JRS-reg networks, feeding $\SMMath$\xspace alongside $\IFMath$\xspace and $\IMMath$\xspace resulted in a performance similar to not feeding it. Since the Cross-stitch network is composed of two networks, one for segmentation and the other for registration, we experimented with various combinations of inputs. The results are very consistent with our previous findings on the single-task networks regarding the effect of using $\SMMath$\xspace as an input. For the remainder of this paper, we chose to use $\IFMath$\xspace as the input for the segmentation network, and $\IFMath$\xspace and $\IMMath$\xspace as the inputs for the registration network. Although adding $\SMMath$\xspace proved to be better, especially for the segmentation network, we exclude it here, since these two methods act as baselines and this is the standard setting for single-task networks. For the dense, SEDD, and JRS-reg networks, we select a concatenation of $\IMMath$\xspace, $\IFMath$\xspace, and $\SMMath$\xspace for the final network. For the Cross-stitch network, we select $\IFMath$\xspace for the segmentation network and the concatenation of $\IMMath$\xspace, $\IFMath$\xspace, and $\SMMath$\xspace for the registration network. \subsection{Optimization of the Loss Weighting Strategy} In this experiment we investigate the performance of the various loss weighting strategies introduced in Section \ref{loss_weighting}, in order to select the best weighting method for the underlying tasks. Table \ref{table:HMC_MSD_weighting} shows the results of the different weighting strategies for the MTL networks in terms of MSD. For the JRS-reg network architecture, weighting the losses with homoscedastic uncertainty achieved results comparable to using equal weights, while DWA scored somewhat worse. For the dense and SEDD architectures, homoscedastic weighting achieved a slightly better performance, while equal weights performed best for the Cross-stitch network.
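To make the weighting mechanism concrete, the following minimal TensorFlow~1.x sketch illustrates the idea behind homoscedastic uncertainty weighting, with one learnable log-variance per task; this is a simplified illustration only (variable names are ours), and the exact formulation used in this work is the one introduced in Section \ref{loss_weighting}.
\begin{verbatim}
import tensorflow as tf

def homoscedastic_total_loss(task_losses):
    # One learnable s_i = log(sigma_i^2) per task; a common form of
    # the combined loss is sum_i exp(-s_i) * L_i + s_i, so tasks with
    # high uncertainty are automatically down-weighted during training.
    total = 0.0
    for i, loss in enumerate(task_losses):
        s = tf.get_variable("log_var_%d" % i, shape=[],
                            initializer=tf.zeros_initializer())
        total += tf.exp(-s) * loss + s
    return total
\end{verbatim}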
For these architectures (dense, SEDD, and Cross-stitch), the segmentation output path showed an improvement over the registration output path. Figure \ref{fig:loss-weights} illustrates the evolution of the loss weights $w_i$ during training, for the different multi-task network architectures and weighting strategies. For the remainder of this paper, based on the previous findings, we chose the homoscedastic uncertainty weighting strategy for the JRS-reg, dense, and SEDD networks, while using equal weights for the Cross-stitch network. \subsection{Analysis of Cross-stitch units} Analyzing the behavior of the Cross-stitch units during training facilitates the understanding of how the segmentation and registration networks interact in the MTL setting. Figure \ref{fig:CS-weights} shows the mean of the CS unit weights across the diagonal and off-diagonal (see Equation (\ref{eq:cs})). Higher weights on the diagonal mean that the network tends to separate the task-specific feature maps, while higher weights off-diagonal mean that the network tends to share the corresponding feature maps. \subsection{Effect of the bladder filling} For the HMC dataset, which was used for training and validation, a bladder filling protocol was in place, meaning that the deformation of the bladder between the daily and planning scans is not large. However, this is not the case for the EMC dataset, the test set. Figures \ref{fig:HMC_bladder_filling} and \ref{fig:EMC_bladder_filling} illustrate the effect of the bladder volume variation with respect to the planning scan on the performance of the Seg, Reg, and Cross-stitch networks. The Cross-stitch network is resilient to bladder filling variations for both the HMC and EMC datasets. \subsection{Evaluation of the Quality of the DVF} The smoothness of the predicted DVF is an important measure of the quality of the deformation. Table \ref{table:dvf-table} shows a detailed analysis of the DVF in terms of the standard deviation of the determinant of the Jacobian as well as the folding fraction, for the registration path of the different networks. \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[]{MSD (mm) values for the different networks on the validation set (HMC).
Lower values are better.} \resizebox{\textwidth}{!}{ \begin{tabular}{lllclclclc} &&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{1}{*}{ Seg }&Segmentation & $1.49 \pm 0.3$ & 1.49 & $2.50 \pm 2.6$ & 2.09 & $3.39 \pm 2.2$ & 2.73 & $1.60 \pm 1.1$ & 1.13 \\ \hline \multirow{1}{*}{ Reg }&Registration & $1.43 \pm 0.8$ & 1.29 & $1.71 \pm 1.4$ & 1.37 & $2.44 \pm 1.1$ & 2.17 & $3.40 \pm 2.3$ & 2.71 \\ \hline \multirow{1}{*}{ JRS-reg }&Registration & $1.20 \pm 0.3$ & \textcolor{light-gray}{1.20} & ${1.22} \pm {0.5}~$ & {1.07} & $2.05 \pm 1.0$ & 1.81 & $2.34 \pm 2.2$ & 1.60 \\ \hline \multirow{2}{*}{ Dense }&Segmentation & $1.09 \pm 0.3$ & 1.04 & \textcolor{light-gray}{$1.51 \pm 1.2~$} & \textcolor{light-gray}{1.13} & $1.86 \pm 0.8~$ & 1.69 & $0.99 \pm 0.4$ & 0.91 \\ &Registration & \textcolor{light-gray}{$1.17 \pm 0.3$} & \textcolor{light-gray}{1.15} & $1.31 \pm 0.6~$ & 1.13 & \textcolor{light-gray}{$2.17 \pm 1.0$} & \textcolor{light-gray}{1.96} & \textcolor{light-gray}{$2.63 \pm 2.0$} & \textcolor{light-gray}{1.95} \\ \hline \multirow{2}{*}{ SEDD }&Segmentation & $1.15 \pm 0.3$ & 1.14 & \textcolor{light-gray}{$1.47 \pm 1.0$} & \textcolor{light-gray}{1.22} & $2.12 \pm 1.1$ & 1.91 & $0.99 \pm 0.2$ & 0.94 \\ &Registration & \textcolor{light-gray}{$1.19 \pm 0.3$} & \textcolor{light-gray}{1.21} & $1.23 \pm 0.5~$ & 1.13 & \textcolor{light-gray}{$2.15 \pm 1.0$} & \textcolor{light-gray}{1.92} & \textcolor{light-gray}{$2.31 \pm 2.0$} & \textcolor{light-gray}{1.64} \\ \hline \multirow{2}{*}{ Cross-stitch }&Segmentation & ${1.06} \pm {0.3}$ & {0.99} & $1.27 \pm 0.4~$ & \textcolor{light-gray}{1.15} & ${1.76} \pm {0.8}$ & {1.47} & ${0.91} \pm {0.4}~$ & {0.82} \\ &Registration & \textcolor{light-gray}{$1.10 \pm 0.3$} & \textcolor{light-gray}{1.06} & \textcolor{light-gray}{$1.30 \pm 0.6~$} & 1.13 & \textcolor{light-gray}{$2.00 \pm 1.0$} & \textcolor{light-gray}{1.75} & \textcolor{light-gray}{$2.45 \pm 2.1~$} & \textcolor{light-gray}{1.81} \\ \hline \multirow{1}{*}{ Elastix \cite{qiao2017fast}}&Registration & $1.73 \pm 0.7$ & 1.59 & $2.71 \pm 1.6$ & 2.45 & $3.69 \pm 1.2$ & 3.50 & $5.26 \pm 2.6$ & 4.72 \\ \hline \multirow{1}{*}{ Hybrid \cite{MedPhys}}&Registration & $1.27 \pm 0.3$ & 1.25 & $1.47 \pm 0.5$ & 1.32 & $2.03 \pm 0.6$ & 1.85 & $1.75 \pm 1.0$ & 1.26 \\ \hline \multirow{1}{*}{ JRS-GAN \cite{JrsGan}}&Registration & $1.14 \pm 0.3$ & 1.04 & $1.75 \pm 1.3$ & 1.44 & $2.17 \pm 1.1$ & 1.89 & $2.25 \pm 1.9$ & 1.54 \\ \hline \end{tabular} } \label{table:HMC_MSD} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[Table caption text]{MSD (mm) values for the different networks on the independent test set (EMC). Lower values are better. 
} \resizebox{\textwidth}{!}{ \begin{tabular}{lllclclclc} &&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{1}{*}{ Seg }&Segmentation & $3.18 \pm 1.8$ & 2.57 & $9.33 \pm 10.1$ & 5.82 & $5.79 \pm 3.4$ & 5.18 & $1.88 \pm 1.5~$ & 1.50 \\ \hline \multirow{1}{*}{ Reg }&Registration & $2.01 \pm 2.5$ & 1.18 & $2.86 \pm 5.2$ & 1.18 & $2.89 \pm 2.5$ & 2.23 & $5.98 \pm 4.7$ & 4.44 \\ \hline \multirow{1}{*}{ JRS-reg }&Registration & $1.94 \pm 2.6$ & 1.16 & $2.48 \pm 4.8~$ & 1.01 & \textcolor{light-gray}{$2.67 \pm 2.4$} & 2.05 & $4.80 \pm 4.6$ & 2.12 \\ \hline \multirow{2}{*}{ Dense }&Segmentation & \textcolor{light-gray}{$2.01 \pm 2.6$} & 1.15 & \textcolor{light-gray}{$4.08 \pm 7.2$} & \textcolor{light-gray}{1.23} & \textcolor{light-gray}{$3.70 \pm 5.4~$} & 2.03 & $2.75 \pm 3.1$ & 1.23 \\ &Registration & $1.93 \pm 2.5$ & \textcolor{light-gray}{1.15} & $2.53 \pm 4.7~$ & 1.01 & $2.67 \pm 2.3$ & \textcolor{light-gray}{2.13} & \textcolor{light-gray}{$5.08 \pm 4.4$} & \textcolor{light-gray}{3.01} \\ \hline \multirow{2}{*}{ SEDD }&Segmentation & \textcolor{light-gray}{$1.99 \pm 2.4$} & \textcolor{light-gray}{1.24} & \textcolor{light-gray}{$6.26 \pm 8.9$} & \textcolor{light-gray}{3.01} & \textcolor{light-gray}{$4.21 \pm 4.9~$} & 2.12 & $2.43 \pm 2.9$ & 1.04 \\ &Registration & $1.92 \pm 2.5$ & 1.19 & $2.43 \pm 4.5~$ & 1.07 & $2.72 \pm 2.4$ & \textcolor{light-gray}{2.17} & \textcolor{light-gray}{$4.86 \pm 4.4$} & \textcolor{light-gray}{2.22} \\ \hline \multirow{2}{*}{ Cross-stitch }&Segmentation & $1.88 \pm 1.9~$ & \textcolor{light-gray}{1.30} & \textcolor{light-gray}{$2.76 \pm 3.5~$} & \textcolor{light-gray}{1.28} & \textcolor{light-gray}{$4.87 \pm 6.8~$} & \textcolor{light-gray}{2.49} & $1.66 \pm 1.7$ & 0.85 \\ &Registration & \textcolor{light-gray}{$1.91 \pm 2.3$} & 1.23 & $2.41 \pm 4.5~$ & {0.95} & $2.78 \pm 2.4$ & 2.16 & \textcolor{light-gray}{$4.90 \pm 4.0$} & \textcolor{light-gray}{2.84} \\ \hline \multirow{1}{*}{ Elastix~\cite{qiao2017fast} }&Registration & ${1.42} \pm {0.7}~$ & 1.17 & $2.07 \pm 2.6$ & 1.24 & $3.20 \pm 1.6$ & 3.07 & $5.30 \pm 5.1$ & 3.27 \\ \hline \multirow{1}{*}{ Hybrid~\cite{MedPhys} }&Registration & $1.55 \pm 0.6$ & 1.36 & ${1.65} \pm {1.3}~$ & 1.22 & $2.65 \pm 1.6~$ & 2.36 & $3.81 \pm 3.6$ & 2.26 \\ \hline \end{tabular} } \label{table:EMC_MSD} \end{table*} \begin{table*}[h] \centering \caption{Analysis of the determinant of the Jacobian for the validation and the independent test sets. Lower values are better. } \label{table:dvf-table} \begin{tabular}{lcccccc} &&\multicolumn{2}{c}{Validation set (HMC) }&&\multicolumn{2}{c}{Independent test set (EMC) } \\ \hline Network && Std. Jacobian & Folding fraction && Std. 
Jacobian & Folding fraction \\ \hline Reg &&$0.2935\pm0.1022$ & $0.0049\pm0.0039$ && $0.4129\pm0.2258$ & $0.0112\pm0.0115$ \\ \hline JRS-reg && $0.2543\pm0.0505$ & $0.0030\pm0.0014$ && $0.3148\pm0.1106$ & $0.0066\pm0.0062$ \\ \hline Dense && $0.2062\pm0.0431$ & $0.0018\pm0.0012$ && $0.2558\pm0.0899$ & $0.0036\pm0.0027$ \\ \hline SEDD && $0.2626\pm0.1167$ & $0.0019\pm0.0016$ && $0.4287\pm0.3000$ & $0.0066\pm0.0074$ \\ \hline Cross-stitch && $0.2241\pm0.0784$ & $0.0024\pm0.0018$ && $0.3301\pm0.1869$ & $0.0071\pm0.0070$ \\ \hline \end{tabular} \end{table*} \subsection{Comparison against the state-of-the-art} Tables \ref{table:HMC_MSD} and \ref{table:EMC_MSD} show the results for the validation set (HMC) and the test set (EMC), respectively. The first two networks in each table are the single-task networks. For both sets, the registration network outperformed the segmentation network for all organs except the bladder. The mean MSD on the independent test set is higher than the corresponding value on the validation set for most organs; however, the median values are on par. For the MTL networks, the segmentation path achieved a better performance than the registration path on both datasets, except for the seminal vesicles. The Cross-stitch network achieved the best results compared to the other MTL networks. The proposed STL and MTL networks were also compared against other state-of-the-art methods that were evaluated on the HMC dataset. On the validation set, the STL networks achieved comparable results, while the Cross-stitch network outperformed these methods for both output paths. On the test set, \texttt{elastix} \cite{qiao2017fast} and the hybrid method \cite{MedPhys} performed better except for the bladder, although the median values of the MTL networks were better. Regarding the quality of the predicted contours, Figures \ref{fig:HMC_examples} and \ref{fig:EMC_examples} show example contours from the HMC and EMC datasets for the Seg, Reg, and Cross-stitch networks. The examples show that the Cross-stitch network achieves better results than the Seg and Reg networks, especially for the seminal vesicles and for rectums with large gas pockets.
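For readers who wish to reproduce the geometric evaluation, the following minimal NumPy/SciPy sketch approximates the DSC, MSD, and 95\% HD metrics of Equations (\ref{eq:dsc})--(\ref{eq:hd}) on the voxel grid. It is an illustration only: it assumes binary masks, and it approximates the surface mesh points by mask border voxels rather than using an actual surface mesh.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def _surface_dists(a, b, spacing=(1.0, 1.0, 1.0)):
    # Distances from the border voxels of mask `a` to the border
    # of mask `b`: a voxel-grid stand-in for d(a_i, S_g) above.
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b,
                                               sampling=spacing)
    return dist_to_b[surf_a]

def dsc(a, b):
    return (2.0 * np.count_nonzero(a & b)
            / (np.count_nonzero(a) + np.count_nonzero(b)))

def msd(a, b, spacing=(1.0, 1.0, 1.0)):
    d_ab = _surface_dists(a, b, spacing)
    d_ba = _surface_dists(b, a, spacing)
    return 0.5 * (d_ab.mean() + d_ba.mean())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    # 95th-percentile variant of the HD, as reported in the tables.
    d_ab = _surface_dists(a, b, spacing)
    d_ba = _surface_dists(b, a, spacing)
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
\end{verbatim}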
\begin{figure*}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c @{\quad} c @{\quad} || c @{\quad} c @{\quad} || c @{\quad} c} \large{Seg} & \large{Reg} & \large{Seg} & \large{Reg} & \large{Seg} & \large{Reg} \\ \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071211_seg_net.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071211_reg_net.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071213_seg_net_101.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071213_reg_net_101.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_22_visit_20071029_seg_net_124.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_22_visit_20071029_reg_net_124.png} \\ \large{Cross-stitch} & \large{Manual} & \large{Cross-stitch} & \large{Manual} & \large{Cross-stitch} & \large{Manual} \\ \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071211_cs_net.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071211_groundtruth.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071213_cs_net_101.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_24_visit_20071213_groundtruth_101.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_22_visit_20071029_cs_net_124.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_22_visit_20071029_groundtruth_124.png} \end{tabular} } \caption{Example contours from the validation dataset (HMC) generated by the proposed STL and MTL networks. From left to right, the selected cases are the first, second, and third quartile in terms of the prostate MSD of the Cross-stitch network. The contours of the bladder, prostate, seminal vesicles, and rectum are colored in red, yellow, green, and blue, respectively.} \label{fig:HMC_examples} \end{figure*} \begin{figure*}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c @{\quad} c @{\quad} || c @{\quad} c @{\quad} || c @{\quad} c} \large{Seg} & \large{Reg} & \large{Seg} & \large{Reg} & \large{Seg} & \large{Reg} \\ \includegraphics[width=25mm,height=35mm]{figures/Patient_06_visit_19000101_seg_net_157.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_06_visit_19000101_reg_net_157.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_15_visit_19000101_seg_net_100.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_15_visit_19000101_reg_net_100.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_17_visit_18990305_seg_net_103.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_17_visit_18990305_reg_net_103.png} \\ \large{Cross-stitch} & \large{Manual} & \large{Cross-stitch} & \large{Manual} & \large{Cross-stitch} & \large{Manual}\\ \includegraphics[width=25mm,height=35mm]{figures/Patient_06_visit_19000101_cs_net_157.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_06_visit_19000101_groundtruth_157.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_15_visit_19000101_cs_net_100.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_15_visit_19000101_groundtruth_100.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_17_visit_18990305_cs_net_103.png} & \includegraphics[width=25mm,height=35mm]{figures/Patient_17_visit_18990305_groundtruth_103.png} \end{tabular} } \caption{Example contours from the independent test set (EMC) generated by the proposed STL and MTL networks. 
From left to right, the selected cases are the first, second, and third quartile in terms of the prostate MSD of the Cross-stitch network.} \label{fig:EMC_examples} \end{figure*} \section{Introduction} Medical image analysis aims to extract clinically useful information that aids the diagnosis, prognosis, monitoring, and treatment of diseases \cite{nilashi2020disease, shen2017deep}. Two of the most common tasks in such analyses are image registration and segmentation \cite{rueckert2014registration}. Image segmentation aims to identify and cluster objects that share similar characteristics into distinct labels, which can then be used for diagnosis or treatment planning. Image registration is the task of finding the geometrical correspondence between images that were acquired at different time points or from different imaging modalities. These two tasks are complementary: for example, image atlases warped by image registration algorithms are often used for image segmentation \cite{huo20193d,wang2014multi}, while image contours can be used alongside the intensity images to guide the registration method \cite{MedPhys, JrsGan, mahapatra2015joint}. Contours are also used for evaluating the quality of the registration \cite{woerner2017evaluation, gu2013contour}. Therefore, coupling the image registration and segmentation tasks and modeling them in a single network could be beneficial. Adaptive image-guided radiotherapy is an exemplar application where the coupling of image registration and segmentation is vital. In radiotherapy, the treatment radiation dose is delivered over a course of multiple fractions. In an adaptive setting, re-imaging of the daily anatomy and automatic re-contouring are crucial to compensate for patient misalignment and for anatomical variations in organ shape and position, and are an enabler for the reduction of treatment margins or robustness settings \cite{hansen2006repeat, brock2019adaptive}. These have an important influence on the accuracy of the dose delivery and improve the treatment quality, potentially reducing treatment-related side-effects and increasing quality-of-life after treatment \cite{sonke2019adaptive}. Automatic contouring can be done by direct segmentation of the daily scan, or by registration of the annotated planning scan to the daily scan followed by contour propagation. Image registration has the advantage of leveraging prior knowledge from the initial planning CT scan and the corresponding clinical-quality delineations, which may be especially helpful for challenging organs. On the other hand, image segmentation methods may better delineate organs that vary substantially in shape and volume between treatment fractions, which is often the case for the rectum and the bladder. In this study, we propose to fuse these tasks at the network architecture level as well as via the loss function. Our key contributions in this paper are as follows: \begin{enumerate} \item We formulate image registration and segmentation as a multi-task learning problem, which we explore in the context of adaptive image-guided radiotherapy. \item We explore different joint network architectures as well as loss weighting methods for merging these tasks. \item We adopt the Cross-stitch network architecture for the segmentation and registration tasks and explore how the Cross-stitch units facilitate the information flow between these tasks. \item Furthermore, we compare MTL algorithms against single-task networks.
We demonstrate that MTL algorithms outperform STL networks for both the segmentation and registration tasks. To the best of our knowledge, this is the first study to investigate various MTL algorithms on the architectural level as well as on the loss weighting level for joint registration and segmentation. \item We thoroughly investigate the internals of the STL and MTL networks and pinpoint the best strategy to merge the tasks so as to maximize the information flow between them. \end{enumerate} Initial results of this work were presented in \cite{beljaards2020cross}, focusing on the Cross-stitch unit in a proposed joint architecture. In the current paper we extend this study to the architectural fusion of these tasks as well as different loss weighting mechanisms. Moreover, an extensive analysis of the different methodologies was performed, detailing the effect of the architectural choices, the information flow between the two tasks, etc. The remainder of this paper is organized as follows: Section \ref{method_Section} introduces the single-task networks, the multi-task networks, and the loss weighting approaches. In Section \ref{exp_results_section} we introduce the datasets and details about the implementation as well as the experiments. In Sections \ref{discussion} and \ref{conclusion}, we discuss our results, provide future research directions, and present our conclusions. \subsection{Related Work} In the last decade, researchers have been exploring the idea of fusing image segmentation and registration. Lu \emph{et al.} \cite{lu2011integrated} and Pohl \emph{et al.} \cite{pohl2006bayesian} proposed modeling these tasks using a Bayesian framework, such that the tasks constrain each other. Yezzi \emph{et al.} \cite{yezzi2003variational} proposed to fuse these tasks using active contours, while Unal \emph{et al.} \cite{unal2005coupled} proposed to generalize the previous approach by using partial differential equations without shape priors. Mahapatra \emph{et al.} \cite{mahapatra2015joint} proposed a Joint Registration and Segmentation (JRS) framework for cardiac perfusion images, where the temporal intensity images are decomposed into sparse and low-rank components corresponding to the intensity change from the contrast agent and the motion, respectively. They proposed to use the sparse component for segmentation and the low-rank component for registration. However, most of the aforementioned methods require complex parameter tuning and yield long computation times. Recently, deep learning-based networks have shown unprecedented success in many fields, especially in the medical image analysis domain \cite{fu2020deep, yousefi2020esophageal, liu2019automatic, kiljunen2020deep, cao2018deep, leger2020cross, elmahdy2020patient, sokooti20193d}, where deep learning models perform on par with medical experts or even surpass them in some tasks \cite{tschandl2019comparison, ardila2019end, hu2019observational, maidens2018artificial, mak2019use}. Several deep learning-based approaches have been proposed for joint registration and segmentation. The joining mechanisms in the literature can be classified into two categories, namely joining via the loss function only, and joining via the architecture as well as the loss function. An exemplar method of the first category is that of Hu \emph{et al.} \cite{hu2018label}, who proposed to join segmentation and registration via a multi-resolution Dice loss function.
Elmahdy \emph{et al.} \cite{MedPhys} proposed a framework that is a hybrid between learning and iterative approaches, where a CNN segments the bladder and feeds it to an iterative registration algorithm. The authors integrated domain-specific knowledge such as air pocket inpainting as well as contrast clipping; moreover, they added an extra registration step to focus on the seminal vesicles and rectum. Elmahdy \emph{et al.} \cite{JrsGan} and Mahapatra \emph{et al.} \cite{mahapatra2018joint} proposed a GAN-based (Generative Adversarial Network) approach, where a generator network predicts the correspondence between a pair of images and a discriminator network gives feedback on the quality of the deformed contours. Exemplar methods of the second category include that of Xu \emph{et al.} \cite{xu2019deepatlas}, who presented a framework that simultaneously trains a registration and a segmentation network. The authors proposed to jointly learn these tasks during training; however, the networks can be used independently at test time. This enables prediction of only the registration output when labels are not available at test time. Estienne \emph{et al.} \cite{estienne2019u} proposed to merge affine and deformable registration as well as segmentation in a 3D end-to-end CNN network. Recently, Liu \emph{et al.} \cite{liu2020jssr} proposed an end-to-end framework called JSSR that registers and segments multi-modal images. This framework is composed of three networks: a generator network that synthesizes the moving image to match the modality of the fixed image, a registration network that registers the synthesized image to the fixed image, and finally a segmentation network that segments the fixed, moving, and synthesized images. All the previous methods explored the idea of joining segmentation and registration, but to the best of our knowledge none have explored how these tasks are best connected and how to optimize the information flow between them on both the loss and architectural levels. \section{Methods} \label{method_Section} \subsection{Base Network Architecture} \label{base_network} The base architecture for the networks in this paper is a 3D CNN inspired by the U-Net and BIRNet architectures \cite{ronneberger2015u, fan2019birnet}. Figure~\ref{fig:base_network}a shows the architecture of the base network. The network encodes the input through $3 \times 3 \times 3$ convolution layers with no padding. LeakyReLU \cite{nair2010rectified} and batch normalization \cite{pmlr-v37-ioffe15} are applied after each convolutional layer. We used strided convolutions in the down-sampling path and trilinear upsampling layers in the upsampling path. Through the down-sampling path, the number of feature maps increases while the size of the feature maps decreases, and vice versa for the upsampling path. The network has three output resolutions and is deeply supervised at each resolution. Each resolution is preceded by a $1 \times 1 \times 1$ fully convolutional layer (Fconv) so that at coarse resolution the network can focus on large organs as well as large deformations, and vice versa at fine resolution. In order to extract the ground truth for different resolutions, we perform cropping of different sizes as well as strided sampling so that for every input patch of size $n^3$, the sizes of the coarse, mid, and fine resolution are $(\frac{n}{4}-7)^3$, $(\frac{n}{2}-18)^3$, and $(n-40)^3$, respectively.
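To make these sizes concrete, the following minimal Python sketch (ours, for illustration only; \texttt{output\_sizes} is not part of our implementation) evaluates them for a given patch size $n$:
\begin{verbatim}
# Illustrative sketch (ours): deeply supervised output sizes for an
# input patch of size n^3, following the formulas above.
def output_sizes(n):
    coarse = n // 4 - 7
    mid = n // 2 - 18
    fine = n - 40
    return coarse, mid, fine

print(output_sizes(96))  # (17, 30, 56), i.e. 17^3, 30^3 and 56^3 voxels
\end{verbatim}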
\begin{figure*}[t!] \begin{center} \includegraphics[width=1\textwidth]{figures/BaseNetworkArchitecture6.pdf} \caption{The proposed network architectures introduced in the paper. (a) is the base STL network architecture for either segmentation or registration, but also represents the dense parameter sharing MTL network architecture; (b) is the architecture with a shared encoder, while (c) is the Cross-stitch network architecture. Details about the number of feature maps are presented in Section \ref{implement_details}.} \label{fig:base_network} \end{center} \end{figure*} \subsection{Single Task Learning} Single-task networks are designed to solve one task and therefore require a large number of labeled training samples, which are scarce in the medical domain since it takes time and trained medical personnel to contour these images. The segmentation and registration networks have the same architecture as the base network depicted in Figure~\ref{fig:base_network}a, but differ in the input and output layers. Here, single-task networks are considered baseline networks for comparing with the performance of the proposed multi-task networks. \subsubsection{Segmentation Network} \label{seg_network} The input to the segmentation network is the daily CT scan, referred to as the fixed image $\IFMath$\xspace, where the network predicts the corresponding segmentation $\SPredMath$\xspace. $\SPredMath$\xspace represents the probability maps for the background, target organs, and organs-at-risk. The network was trained using the Dice Similarity Coefficient (DSC) loss, which quantifies the overlap between the network prediction $\SPredMath$\xspace and the ground truth $\SFMath$\xspace as follows: \begin{equation} \mathcal{L}_{\mathrm{DSC}} = 1 - \frac{1}{K}\sum^{K}_{k=1} \frac{2\sum_{x}S^{\mathrm{pred}}_k(x) \cdot S_k(x)} {\sum_{x}S^{\mathrm{pred}}_k(x)+\sum_{x}S_k(x)}, \label{eq:DiceLoss} \end{equation} where $K$ is the number of structures to be segmented, $x$ is the voxel coordinate, $S_k$ is the ground truth segmentation, and $S^{\mathrm{pred}}_k$ the predicted probabilities. The network has 779,436 trainable parameters.
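For illustration, the following is a minimal NumPy sketch of this loss (ours, not our actual implementation), assuming softmax probability maps and one-hot ground truth; the constant \texttt{eps} is our addition for numerical stability:
\begin{verbatim}
import numpy as np

# Minimal sketch (ours) of the Dice loss defined above.
# probs:  (K, D, H, W) predicted probability maps per structure
# labels: (K, D, H, W) one-hot ground truth
def dice_loss(probs, labels, eps=1e-7):
    k = probs.shape[0]
    p = probs.reshape(k, -1)
    s = labels.reshape(k, -1)
    dsc = 2.0 * (p * s).sum(axis=1) / (p.sum(axis=1) + s.sum(axis=1) + eps)
    return 1.0 - dsc.mean()
\end{verbatim}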
\subsubsection{Registration Network}\label{reg_network} The input to the registration network is the concatenation of the planning scan, referred to as the moving image $\IMMath$\xspace, and the daily scan $\IFMath$\xspace. The network predicts the geometrical correspondence between the input images. This correspondence is represented by the displacement vector field (DVF), referred to as $\DVFMath$\xspace. This DVF is then used to warp $\IMMath$\xspace. In an ideal scenario, the warped moving image $\IWarpMath$\xspace would be identical to $\IFMath$\xspace. The network is trained using Normalized Cross Correlation (NCC) in order to quantify the dissimilarity between $\IWarpMath$\xspace and $\IFMath$\xspace. Since the images are from a single imaging modality (CT) with a similar intensity distribution, NCC is an obvious choice abundantly used in the registration literature. Moreover, the implementation is straightforward and efficient when using plain convolution operations. NCC is defined by the following equation: \begin{equation} \resizebox{0.999\hsize}{!}{ $\mathcal{L}_{\mathrm{NCC}} = 1 - \frac{\sum_x [({I_f}(x) - \overline{{I_f}}) \cdot ({I_m^{\mathrm{warped}}}(x) - \overline{{I_m^{\mathrm{warped}}}})]} {\sigma_{I_f}\sigma_{I_m^{\mathrm{warped}}}}$}, \label{eq:NCCLoss} \end{equation} where $x$ is the voxel coordinate, and $\sigma_{I_f}$ and $\sigma_{I_m^{\mathrm{warped}}}$ are the standard deviations of the fixed and warped images, respectively. In order to encourage the network to predict a smooth DVF, a bending energy penalty term is added for regularization: \begin{equation} \mathcal{L}_{\mathrm{BE}} = \frac{1}{N}\sum_{x}\|H(\phi^{\mathrm{pred}}(x))\|^2_2, \label{eq:BendingEnergyLoss} \end{equation} where $H$ is the Hessian matrix and $N$ is the number of voxels. The total registration loss then becomes: \begin{equation} \mathcal{L}_{\mathrm{Registration}} = \mathcal{L}_{\mathrm{NCC}} + w \cdot \mathcal{L}_{\mathrm{BE}}, \label{eq:RegistrationLoss} \end{equation} where $w$ is the bending energy weight. For more details on the selection of $w$, see Section \ref{bending_Section}. The network has 779,733 trainable parameters.
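As an illustration, a minimal NumPy sketch (ours) of the NCC dissimilarity above, written as a plain per-patch correlation rather than the convolution-based implementation mentioned earlier; \texttt{eps} is our addition:
\begin{verbatim}
import numpy as np

# Minimal sketch (ours) of the NCC-based dissimilarity defined above.
def ncc_loss(fixed, warped, eps=1e-7):
    f = fixed - fixed.mean()
    w = warped - warped.mean()
    return 1.0 - (f * w).mean() / (f.std() * w.std() + eps)
\end{verbatim}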
\subsection{Multi Task Learning} In Multi-Task Learning (MTL), related tasks regularize each other by introducing an inductive bias, thus making the model less prone to overfitting compared to its STL counterparts \cite{baxter2000model}. MTL can also be considered as an implicit data augmentation strategy, since it effectively increases the training sample size while encouraging the model to ignore data-dependent noise. Because different tasks have different noise patterns, modeling these tasks simultaneously enables the model to generalize well \cite{meyerson2018pseudo}. Moreover, in MTL models, some features can be more easily learned by one task than another, thus encouraging information cross-talk between tasks \cite{abu1990learning}. Also, in real-world scenarios, physicians usually incorporate knowledge from different imaging modalities or previous tasks in order to come up with a diagnosis or better understanding of the underlying problem. This illustrates that the knowledge embedded in one task can be leveraged by other tasks and hence it is beneficial to jointly learn related tasks. Choosing the architecture of an MTL network is based on the following two factors \cite{zhang2017survey}: \textit{what to share} and \textit{how to share}. \textit{What to share} defines the form in which knowledge is shared between tasks. This knowledge sharing can be done through hand-crafted features, input images, and model parameters. \textit{How to share} determines the optimal manner in which this knowledge is shared. In this paper, we focus on parameter-based sharing. In the following sections, we investigate different MTL network architectures in order to best understand how segmentation and registration tasks share information on the architectural level. The investigated networks predict two sets of contours, one set resulting from the segmentation task and one from the registration task. In this paper, we select the best set of contours as the final output, based on the validation results. More sophisticated strategies are discussed in Section \ref{discussion}. \subsubsection{Joint Registration and Segmentation via the Registration network}\label{jrs_network} The network in this method, dubbed JRS-reg, has the same architecture as the STL registration network from Section \ref{reg_network}, except that this network is optimized using a joint loss as presented in Eq.~\ref{eq:general_mtl}. \subsubsection{Dense Parameter Sharing}\label{dense_network} In this architecture both segmentation and registration tasks are modeled using a single network, where both tasks share all parameters except for the task-specific parameters in the output layer, see Figure \ref{fig:base_network}a. The network architecture is the same as the base network (see Section \ref{base_network}) except for the input and output layers. This dense sharing mitigates overfitting since it forces the parameters to model all the tasks at once; however, it does not guarantee the best representation for individual tasks \cite{zhang2017survey}. The input to the network is the concatenation of $\IMMath$\xspace, $\IFMath$\xspace, and $\SMMath$\xspace. The network predicts the $\DVFMath$\xspace between the input images as well as $\SPredMath$\xspace. The network has 781,164 trainable parameters. \subsubsection{Encoder Parameter Sharing} \label{sedd_network} Since the inputs to the segmentation and registration tasks are both CT scans, both tasks encode similar features in the down-sampling path of the network. Therefore, in this network both tasks share the encoding path, which then splits into two task-specific upsampling decoder paths. We call this network the Shared Encoder Double Decoder (SEDD) network. Figure \ref{fig:base_network}b shows the architecture of the network. The input to the network is the concatenation of $\IMMath$\xspace, $\IFMath$\xspace, and $\SMMath$\xspace. The network predicts $\DVFMath$\xspace between the input images from the registration path while predicting $\SPredMath$\xspace from the segmentation path. The network has 722,936 trainable parameters. \subsubsection{Cross-stitch network}\label{cs_network} A flexible approach to share parameters is via a Cross-Stitch (CS) network \cite{misra2016cross}. In contrast to the heuristic approach of manually choosing which layers are shared and which are task-specific, the CS network introduces a learning-based unit to determine the amount of feature sharing between tasks. The CS units learn to linearly combine feature maps from the two networks, one for segmentation and one for registration, as shown in Figure \ref{fig:base_network}c. The unit itself is defined as: \begin{equation} \left[ \begin{array}{c} \bar{X}^{\ell, k}_S \vspace{1mm}\\ \bar{X}^{\ell, k}_R \\ \end{array} \right] = \left[ \begin{array}{cc} \alpha^{\ell,k}_{SS} & \alpha^{\ell,k}_{SR} \vspace{1mm}\\ \alpha^{\ell,k}_{RS} & \alpha^{\ell,k}_{RR} \\ \end{array} \right] \left[ \begin{array}{c} X^{\ell, k}_S \vspace{1mm}\\ X^{\ell, k}_R \\ \end{array} \right], \label{eq:cs} \end{equation} where $X^{\ell, k}_S$ and $X^{\ell, k}_R$ represent the feature maps $k$ at layer $\ell$ for the segmentation and registration networks, respectively. $\alpha^{\ell,k}_{SS}$, $\alpha^{\ell,k}_{SR}$, $\alpha^{\ell,k}_{RS}$, and $\alpha^{\ell,k}_{RR}$ represent the learnable parameters of the CS unit. $\bar{X}^{\ell, k}_S$ and $\bar{X}^{\ell, k}_R$ are the output feature maps for the segmentation and registration networks, respectively. The advantage of CS units is that the network can dynamically learn to share the feature maps in case this is beneficial in terms of the final loss value. In case there is no benefit, an identity matrix can be learned, so that the feature maps become task-specific. This allows the network to learn a smooth sharing between the tasks at a negligible increase in the number of parameters.
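To make Eq.~\ref{eq:cs} concrete, the following minimal sketch (ours, for illustration; the near-identity initialization is our assumption, and the full network learns one such matrix per feature map and unit location) applies one CS unit to a single pair of feature maps:
\begin{verbatim}
import numpy as np

# Minimal sketch (ours) of one cross-stitch unit: a learnable 2x2 matrix
# alpha linearly mixes corresponding feature maps of the segmentation
# stream (xs) and the registration stream (xr).
class CrossStitchUnit:
    def __init__(self):
        # Near-identity initialization (our choice): each stream starts
        # out mostly task-specific.
        self.alpha = np.array([[0.9, 0.1],
                               [0.1, 0.9]])

    def __call__(self, xs, xr):
        xs_new = self.alpha[0, 0] * xs + self.alpha[0, 1] * xr
        xr_new = self.alpha[1, 0] * xs + self.alpha[1, 1] * xr
        return xs_new, xr_new
\end{verbatim}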
As suggested by the original paper, we placed the CS units after the downsampling and upsampling layers, resulting in a total of 4 CS units. The CS network has 779,000 trainable parameters. \subsection{Loss Weighting} \label{loss_weighting} The loss function for the MTL networks is defined by: \begin{equation}\label{eq:general_mtl} \centering \mathcal{L} = w_0 \cdot \mathcal{L}_{\mathrm{NCC}} + w_1 \cdot \mathcal{L}_{\mathrm{DSC-R}} + w_2 \cdot \mathcal{L}_{\mathrm{DSC-S}} + w_3 \cdot \mathcal{L}_{\mathrm{BE}}, \end{equation} where $w_i$ are the loss weights. They are chosen based on the relative contribution of their corresponding tasks, so that different tasks learn at the same pace. These weights can be chosen manually based on empirical knowledge, or automatically. A simple choice would be to weigh the losses equally with a fixed weight of 1. The following are some exemplar algorithms for choosing the loss weights automatically. Chen \emph{et al.} proposed GradNorm \cite{chen2018gradnorm} to weigh different tasks by dynamic tuning of the gradient magnitudes of the tasks. This tuning is achieved by dynamically changing the learning rate for each task so that all tasks learn at the same speed. The drawback of this approach is that it requires access to the internal gradients of the shared layers, which could be cumbersome. Moreover, one needs to choose which shared layer to back-propagate to in case of multiple shared layers. Kendall \emph{et al.} \cite{kendall2018multi} proposed to weigh each task by considering the homoscedastic uncertainty of that task, so that tasks with high output variance are weighted less than tasks with low variance. This approach adds only a few trainable parameters, equal in number to the loss functions. Inspired by GradNorm, Liu \emph{et al.} proposed Dynamic Weight Averaging (DWA) \cite{liu2019end}, where each task is weighted over time by considering the rate of change of the relative loss weights. Contrary to GradNorm, DWA only requires the numerical values of the loss functions rather than their derivatives. In this paper, we compared equal weights versus homoscedastic uncertainty and DWA. For all the experiments, we set the weight of the bending energy to a fixed value of 0.5 (for more details see Section \ref{bending_Section}) instead of a trainable one. This is to prevent the network from setting it too low in order to improve the DSC of the deformed contours at the expense of the smoothness of the predicted DVF. \subsubsection{Homoscedastic Uncertainty} Homoscedastic uncertainty was proposed as a loss weighting method by Kendall \emph{et al.} \cite{kendall2018multi}. This is a task-dependent uncertainty which does not depend on the input data but rather varies between tasks. The authors derived their finding by maximizing the Gaussian likelihood while considering the observational noise scalar $\sigma$ that represents the homoscedastic uncertainty term related to each task. The following equation describes the weighted loss using homoscedastic uncertainty, where $\sigma_i$ is a trainable parameter: \begin{equation}\label{eq:homoscedastic} \centering \mathcal{L}_{\mathrm{homoscedastic}} = \sum\limits_{i=1}^T \left( \frac{1}{\sigma_i^2} \: \mathcal{L}_i + \log \: \sigma_i \right), \end{equation} where $T$ is the number of tasks. The higher the uncertainty of task $i$, the lower the contribution of its associated loss $\mathcal{L}_i$ to the overall loss.
The $\log$ term can be viewed as a regularization term, so that the network does not learn a trivial solution by setting the uncertainty of all tasks to extreme values. \subsubsection{Dynamic Weight Averaging} Dynamic Weight Averaging (DWA) was proposed by Liu \emph{et al.} \cite{liu2019end}. Similar to GradNorm \cite{chen2018gradnorm}, DWA weights the losses via the rate of change of the loss of each task over the training iterations $t$. In contrast to GradNorm, DWA does not require access to the internal gradients of the network, but only requires the numerical loss values. According to DWA, the weight $w$ of the loss $\mathcal{L}$ associated with the task $k$ is defined as: \begin{equation} \label{eq:dwa} \centering w_k(t) = \frac{K \: \exp(r_{k}(t-1)/tmp)}{\sum_i \exp(r_{i}(t-1)/tmp)}, \: r_k(t-1) = \frac{\mathcal{L}_k(t-1)}{\mathcal{L}_k(t-2)}, \end{equation} where $r_k$ is the relative loss ratio and $tmp$ is the temperature that controls the smoothness of the task weighting. Here, we set $tmp=1$ as suggested by the original paper. For the initial two iterations, $r_k(t)$ is set to 1.
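For illustration, a minimal sketch (ours) of this weight computation, following Eq.~\ref{eq:dwa} directly:
\begin{verbatim}
import numpy as np

# Minimal sketch (ours) of the DWA weights above: each task weight
# follows the rate of change of its loss over the two previous iterations.
def dwa_weights(losses_t1, losses_t2, tmp=1.0):
    # losses_t1: task losses at iteration t-1; losses_t2: at iteration t-2
    r = np.asarray(losses_t1) / np.asarray(losses_t2)
    e = np.exp(r / tmp)
    return len(r) * e / e.sum()

# A task whose loss has stopped decreasing receives a higher weight:
print(dwa_weights([1.0, 0.5], [1.0, 1.0]))  # approx. [1.24, 0.76]
\end{verbatim}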
\section{Appendix of the paper ``Joint Registration and Segmentation via Multi-Task Learning for Adaptive Radiotherapy of Prostate Cancer''} \vspace{0.5cm} In this appendix we provide detailed results for the proposed methods and associated experiments in terms of DSC and 95\% HD. \vspace{0.5cm} \begin{table*}[h] \centering \setlength{\tabcolsep}{3pt} \caption[]{The effect of network input for the different architectures on the validation set (HMC) in terms of DSC. Higher values are better. Here, $\oplus$ is the concatenation operation, and $\cdot \| \cdot$ represents the inputs to the segmentation network (left of $\|$) and the inputs to the registration network (right of $\|$).} \resizebox{\textwidth}{!}{ \begin{tabular}{lcllclclclc} &&&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Input & Output path&\multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{4}{*}{ Seg }&\multirow{1}{*}{ $ {I_f}$ }&& $0.84 \pm 0.03~$ & 0.84 & $0.60 \pm 0.14~$ & 0.62 & $0.75 \pm 0.10~$ & 0.77 & $0.90 \pm 0.07~$ & 0.93 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{S_m}$ }&& $0.85 \pm 0.05~$ & 0.86 & ${0.66} \pm {0.16}~$ & {0.72} & ${0.79} \pm {0.12}~$ & {0.82} & ${0.93} \pm {0.03}~$ & {0.94} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& $0.66 \pm 0.08~$ & 0.67 & $0.39 \pm 0.21~$ & 0.40 & $0.39 \pm 0.21~$ & 0.41 & $0.91 \pm 0.08~$ & 0.93 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& ${0.86} \pm {0.04}~$ & {0.87} & $0.64 \pm 0.16~$ & 0.70 & $0.78 \pm 0.08~$ & 0.78 & ${0.93} \pm {0.03}~$ & {0.94} \\ \hline \multirow{2}{*}{ Reg }&\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& ${0.85} \pm {0.06}~$ & {0.86} & ${0.62} \pm {0.18}~$ & {0.68} & ${0.79} \pm {0.08}~$ & {0.81} & ${0.82} \pm {0.10}~$ & {0.84} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& $0.82 \pm 0.08~$ & 0.83 & $0.60 \pm 0.17~$ & 0.65 & $0.77 \pm 0.08~$ & 0.80 & $0.79 \pm 0.13~$ & 0.83 \\ \hline \multirow{2}{*}{ JRS-reg }&\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& ${0.87} \pm {0.04}~$ & {0.87} & ${0.68} \pm {0.14}~$ & {0.72} & $0.82 \pm 0.06~$ & {0.84} & ${0.87} \pm {0.08}~$ & {0.91} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& ${0.87} \pm {0.04}~$ & {0.87} & $0.67 \pm 0.15~$ & {0.72} & ${0.83} \pm {0.06}~$ & {0.84} & ${0.87} \pm {0.08}~$ & {0.91} \\ \hline \multirow{8}{*}{ Cross-stitch }&\multirow{2}{*}{ $ {I_f} \: || \: {I_f}\oplus{I_m}$ }&Segmentation & \textcolor{light-gray}{$0.85 \pm 0.03$} & \textcolor{light-gray}{0.85} & \textcolor{light-gray}{$0.57 \pm 0.19$} & \textcolor{light-gray}{0.60} & \textcolor{light-gray}{$0.81 \pm 0.08$} & \textcolor{light-gray}{0.83} & $0.93 \pm 0.05$ & 0.94 \\ &&Registration & $0.87 \pm 0.03$ & 0.88 & $0.67 \pm 0.15$ & 0.70 & $0.82 \pm 0.06$ & 0.84 & \textcolor{light-gray}{$0.87 \pm 0.08$} & \textcolor{light-gray}{0.91} \\ [1mm] &\multirow{2}{*}{ $ {I_f} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & ${0.88} \pm {0.04}~$ & 0.88 & ${0.70} \pm {0.11}~$ & {0.74} & ${0.86} \pm {0.05}~$ & {0.88} & ${0.94} \pm {0.02}~$ & {0.95} \\ &&Registration & \textcolor{light-gray}{$0.87 \pm 0.03~$} & 0.88 & \textcolor{light-gray}{$0.68 \pm 0.15~$} & \textcolor{light-gray}{0.73} & \textcolor{light-gray}{$0.84 \pm 0.05~$} & \textcolor{light-gray}{0.85} & \textcolor{light-gray}{$0.88 \pm 0.08~$} & \textcolor{light-gray}{0.91} \\ [1mm]
&\multirow{2}{*}{ $ {I_f}\oplus{S_m} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & \textcolor{light-gray}{$0.77 \pm 0.11$} & \textcolor{light-gray}{0.79} & \textcolor{light-gray}{$0.52 \pm 0.19$} & \textcolor{light-gray}{0.57} & $0.80 \pm 0.05$ & \textcolor{light-gray}{0.80} & $0.93 \pm 0.03$ & 0.94 \\ &&Registration & $0.85 \pm 0.04$ & 0.85 & $0.66 \pm 0.14$ & 0.72 & $0.80 \pm 0.06$ & 0.82 & \textcolor{light-gray}{$0.87 \pm 0.08$} & \textcolor{light-gray}{0.90} \\ [1mm] &\multirow{2}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & ${0.88} \pm {0.04}~$ & {0.89} & $0.67 \pm 0.15$ & 0.72 & $0.85 \pm 0.05~$ & 0.86 & ${0.94} \pm {0.03}~$ & {0.95} \\ &&Registration & \textcolor{light-gray}{$0.86 \pm 0.04$} & \textcolor{light-gray}{0.87} & $0.67 \pm 0.16$ & 0.72 & \textcolor{light-gray}{$0.83 \pm 0.06$} & \textcolor{light-gray}{0.84} & \textcolor{light-gray}{$0.88 \pm 0.08$} & \textcolor{light-gray}{0.91} \\ \hline \end{tabular} } \label{table:_DSC} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[Table caption text]{ The effect of network input for the different architectures on the validation set (HMC) in terms of \%95 HD (mm). Lower values are better. Here, $\oplus$ is the concatenation operation, and $\cdot \| \cdot$ represents the inputs to the segmentation network (left of $\|$) and the inputs to the registration network (right of $\|$).} \resizebox{\textwidth}{!}{ \begin{tabular}{lcllclclclc} &&&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Input & Output path&\multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{4}{*}{ Seg }&\multirow{1}{*}{ $ {I_f}$ }&& $4.4 \pm 1.0~$ & 4.4 & $8.6 \pm 8.6~$ & 7.3 & $16.5 \pm 11.0~$ & 13.3 & $6.9 \pm 6.6~$ & 4.0 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{S_m}$ }&& $3.9 \pm 1.4~$ & {3.6} & ${5.9} \pm {5.9}~$ & {4.1} & $12.1 \pm 9.7~$ & {8.9} & $4.3 \pm 3.2~$ & {3.0} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& $9.1 \pm 2.3~$ & 8.7 & $14.9 \pm 10.5~$ & 11.7 & $45.1 \pm 17.3~$ & 41.8 & $5.3 \pm 5.6~$ & 3.6 \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& ${3.8} \pm {1.1}~$ & {3.6} & $7.3 \pm 9.2~$ & 4.2 & ${11.5} \pm {6.7}~$ & 9.6 & ${3.3} \pm {1.5}~$ & {3.0} \\ \hline \multirow{2}{*}{ Reg }&\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& ${5.5} \pm {4.5}~$ & {4.0} & ${5.6} \pm {4.1}~$ & {4.3} & ${11.0} \pm {6.4}~$ & 9.4 & ${15.7} \pm {9.6}~$ & {12.1} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& $7.7 \pm 6.3~$ & 5.5 & $6.2 \pm 4.2~$ & 4.8 & $11.6 \pm 6.8~$ & {9.2} & $17.0 \pm 9.5~$ & 14.7 \\ \hline \multirow{2}{*}{ JRS-reg }&\multirow{1}{*}{ $ {I_f}\oplus{I_m}$ }&& ${3.6} \pm {1.3}~$ & {3.0} & $4.5 \pm 3.0~$ & {3.3} & ${9.6} \pm {5.7}~$ & 8.2 & ${13.1} \pm {10.1}~$ & {9.4} \\ [1mm] &\multirow{1}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m}$ }&& ${3.6} \pm {1.9}~$ & 3.1 & ${4.4} \pm {2.8}~$ & 3.7 & $9.8 \pm 5.9~$ & {8.1} & $13.4 \pm 10.7~$ & 10.6 \\ \hline \multirow{8}{*}{ Cross-stitch }&\multirow{2}{*}{ $ {I_f} \: || \: {I_f}\oplus{I_m}$ }&Segmentation & \textcolor{light-gray}{$5.1 \pm 2.3$} & \textcolor{light-gray}{4.4} & \textcolor{light-gray}{$9.5 \pm 9.6$} & \textcolor{light-gray}{6.1} & \textcolor{light-gray}{$17.2 \pm 14.0$} & \textcolor{light-gray}{12.6} & $5.0 \pm 6.6$ & 3.0 \\ &&Registration & $3.3 \pm 
0.9$ & {3.0} & $4.7 \pm 3.0~$ & 3.7 & $10.1 \pm 6.3~$ & 9.0 & \textcolor{light-gray}{$12.6 \pm 10.0$} & \textcolor{light-gray}{9.4} \\ [1mm] &\multirow{2}{*}{ $ {I_f} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & ${3.0} \pm {1.0}~$ & {3.0} & ${4.3} \pm {1.7}~$ & \textcolor{light-gray}{3.9} & ${9.5} \pm {6.2}~$ & {7.2} & ${3.3} \pm {2.9}~$ & {2.3} \\ &&Registration & \textcolor{light-gray}{$3.2 \pm 0.9~$} & {3.0} & \textcolor{light-gray}{$4.5 \pm 3.3~$} & 3.6 & \textcolor{light-gray}{$9.8 \pm 6.3~$} & \textcolor{light-gray}{8.6} & \textcolor{light-gray}{$12.2 \pm 10.1~$} & \textcolor{light-gray}{9.7} \\ [1mm] &\multirow{2}{*}{ $ {I_f}\oplus{S_m} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & \textcolor{light-gray}{$5.8 \pm 2.0$} & \textcolor{light-gray}{5.9} & \textcolor{light-gray}{$11.0 \pm 13.4$} & \textcolor{light-gray}{5.8} & $10.2 \pm 4.9~$ & 8.5 & $4.5 \pm 4.3$ & 3.0 \\ &&Registration & $4.4 \pm 1.6$ & 4.1 & $4.5 \pm 3.3~$ & 3.6 & $10.2 \pm 5.7$ & \textcolor{light-gray}{9.3} & \textcolor{light-gray}{$12.9 \pm 9.3$} & \textcolor{light-gray}{11.1} \\ [1mm] &\multirow{2}{*}{ $ {I_f}\oplus{I_m}\oplus{S_m} \: || \: {I_f}\oplus{I_m}\oplus{S_m}$ }&Segmentation & $3.1 \pm 1.0~$ & {3.0} & \textcolor{light-gray}{$5.4 \pm 5.4$} & \textcolor{light-gray}{4.4} & $9.7 \pm 5.6~$ & 8.9 & $4.2 \pm 5.6$ & 2.6 \\ &&Registration & \textcolor{light-gray}{$3.5 \pm 1.2$} & \textcolor{light-gray}{3.2} & $4.4 \pm 3.1~$ & {3.4} & \textcolor{light-gray}{$10.2 \pm 6.3~$} & \textcolor{light-gray}{9.1} & \textcolor{light-gray}{$12.5 \pm 10.6$} & \textcolor{light-gray}{8.7} \\ \hline \end{tabular} } \label{table:_HD} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[]{DSC values for the different networks and loss weighting methods for the HMC dataset. Higher values are better. 
} \resizebox{\textwidth}{!}{ \begin{tabular}{llllclclclc} &&&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Weight & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{3}{*}{ JRS-reg }&\multirow{1}{*}{Equal}&Registration & ${0.84} \pm {0.16}~$ & {0.89} & {$0.67 \pm 0.25~$} & 0.79 & $0.76 \pm 0.14$ & {0.79} & {$0.79 \pm 0.17$} & {0.88} \\ &\multirow{1}{*}{Homoscedastic}&Registration & ${0.84} \pm {0.16}~$ & {0.89} & ${0.68} \pm {0.25}~$ & {0.77} & $0.76 \pm 0.15$ & 0.80 & $0.80 \pm 0.18$ & 0.89 \\ &\multirow{1}{*}{DWA}&Registration & {$0.83 \pm 0.16$} & {0.88} & {$0.66 \pm 0.25$} & 0.78 & {$0.74 \pm 0.15$} & {0.79} & {$0.76 \pm 0.18$} & {0.84} \\ \hline \multirow{6}{*}{ Dense }&\multirow{2}{*}{Equal}&Segmentation & $0.83 \pm 0.15$ & 0.88 & \textcolor{light-gray}{$0.55 \pm 0.29$} & \textcolor{light-gray}{0.65} & $0.78 \pm 0.16~$ & 0.81 & $0.88 \pm 0.11~$ & 0.93 \\ &&Registration & \textcolor{light-gray}{$0.83 \pm 0.16$} & \textcolor{light-gray}{0.88} & $0.66 \pm 0.25$ & 0.75 & \textcolor{light-gray}{$0.76 \pm 0.15$} & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.79 \pm 0.16$} & \textcolor{light-gray}{0.87} \\ &\multirow{2}{*}{Homoscedastic}&Segmentation & ${0.84} \pm {0.16}~$ & {0.89} & \textcolor{light-gray}{$0.63 \pm 0.27$} & \textcolor{light-gray}{0.75} & ${0.79} \pm {0.16}~$ & 0.82 & $0.87 \pm 0.13~$ & 0.93 \\ &&Registration & ${0.84} \pm {0.16}~$ & \textcolor{light-gray}{0.88} & ${0.68} \pm {0.25}~$ & 0.78 & \textcolor{light-gray}{$0.77 \pm 0.14$} & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.78 \pm 0.17$} & \textcolor{light-gray}{0.86} \\ &\multirow{2}{*}{DWA}&Segmentation & ${0.84} \pm {0.15}~$ & {0.89} & \textcolor{light-gray}{$0.58 \pm 0.28$} & \textcolor{light-gray}{0.67} & ${0.79} \pm {0.15}~$ & {0.83} & $0.88 \pm 0.12~$ & 0.93 \\ &&Registration & ${0.84} \pm {0.16}~$ & {0.89} & $0.67 \pm 0.24$ & 0.76 & \textcolor{light-gray}{$0.76 \pm 0.15$} & \textcolor{light-gray}{0.79} & \textcolor{light-gray}{$0.79 \pm 0.16$} & \textcolor{light-gray}{0.87} \\ \hline \multirow{6}{*}{ SEDD }&\multirow{2}{*}{Equal}&Segmentation & \textcolor{light-gray}{$0.79 \pm 0.16$} & \textcolor{light-gray}{0.85} & \textcolor{light-gray}{$0.46 \pm 0.28$} & \textcolor{light-gray}{0.53} & $0.77 \pm 0.14$ & 0.80 & $0.85 \pm 0.12$ & 0.91 \\ &&Registration & {$0.82 \pm 0.16$} & \textcolor{light-gray}{0.87} & $0.66 \pm 0.26$ & 0.78 & \textcolor{light-gray}{$0.75 \pm 0.15$} & \textcolor{light-gray}{0.79} & \textcolor{light-gray}{$0.78 \pm 0.16$} & \textcolor{light-gray}{0.86} \\ &\multirow{2}{*}{Homoscedastic}&Segmentation & ${0.84} \pm {0.15}~$ & {0.89} & \textcolor{light-gray}{$0.50 \pm 0.28$} & \textcolor{light-gray}{0.58} & $0.76 \pm 0.18~$ & 0.82 & $0.88 \pm 0.13~$ & 0.94 \\ &&Registration & ${0.84} \pm {0.16}~$ & \textcolor{light-gray}{0.88} & ${0.68} \pm {0.24}~$ & 0.78 & \textcolor{light-gray}{$0.76 \pm 0.15$} & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.79 \pm 0.17$} & \textcolor{light-gray}{0.88} \\ &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$0.83 \pm 0.14$} & 0.88 & \textcolor{light-gray}{$0.62 \pm 0.27$} & \textcolor{light-gray}{0.74} & $0.78 \pm 0.16~$ & {0.83} & $0.87 \pm 0.14~$ & 0.94 \\ &&Registration& ${0.84} \pm {0.15}$ & 0.88 & $0.67 \pm 0.24~$ & 0.78 & 
\textcolor{light-gray}{$0.75 \pm 0.15$} & \textcolor{light-gray}{0.79} & \textcolor{light-gray}{$0.78 \pm 0.18$} & \textcolor{light-gray}{0.86} \\ \hline \multirow{6}{*}{ Cross-stitch }&\multirow{2}{*}{Equal}&Segmentation & ${0.84} \pm {0.14}~$ & {0.89} & \textcolor{light-gray}{$0.61 \pm 0.27~$} & \textcolor{light-gray}{0.73} & $0.78 \pm 0.14~$ & 0.81 & $0.88 \pm 0.10~$ & 0.93 \\ &&Registration & ${0.84} \pm {0.15}~$ & {0.89} & ${0.68} \pm {0.24}~$ & {0.80} & \textcolor{light-gray}{$0.77 \pm 0.15~$} & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.80 \pm 0.16~$} & \textcolor{light-gray}{0.87} \\ &\multirow{2}{*}{Homoscedastic}&Segmentation & ${0.84} \pm {0.13}~$ & \textcolor{light-gray}{0.87} & \textcolor{light-gray}{$0.65 \pm 0.24~$} & \textcolor{light-gray}{0.76} & \textcolor{light-gray}{$0.74 \pm 0.18$} & 0.80 & ${0.92} \pm {0.08}$ & {0.95} \\ &&Registration & ${0.84} \pm {0.15}~$ & {0.89} & ${0.68} \pm {0.24}~$ & 0.79 & $0.75 \pm 0.15$ & \textcolor{light-gray}{0.79} & \textcolor{light-gray}{$0.80 \pm 0.17$} & \textcolor{light-gray}{0.87} \\ &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$0.82 \pm 0.14$} & \textcolor{light-gray}{0.86} & \textcolor{light-gray}{$0.66 \pm 0.24~$} & \textcolor{light-gray}{0.76} & $0.75 \pm 0.18$ & 0.79 & ${0.92} \pm {0.08}$ & {0.95} \\ &&Registration & ${0.84} \pm {0.15}~$ & {0.89} & ${0.68} \pm {0.23}~$ & 0.79 & $0.75 \pm 0.15$ & \textcolor{light-gray}{0.78} & \textcolor{light-gray}{$0.77 \pm 0.17$} & \textcolor{light-gray}{0.83} \\ \hline \end{tabular} } \label{table:_DSC} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[]{\%95 HD (mm) values for the different networks and loss weighting methods for the HMC dataset. Lower values are better. } \resizebox{\textwidth}{!}{ \begin{tabular}{llllclclclc} &&&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Weight & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{3}{*}{ JRS-reg }&\multirow{1}{*}{Equal}&Registration & $5.2 \pm 5.7~$ & 3.2 & {$6.5 \pm 7.1$} & {4.0} & ${12.6} \pm {6.7}~$ & {12.0} & {$20.3 \pm 14.0$} & {18.6} \\ &\multirow{1}{*}{Homoscedastic}&Registration & {$5.7 \pm 5.9$} & {3.7} & $6.2 \pm 7.1~$ & 3.6 & {$13.0 \pm 7.3~$} & {11.5} & $18.5 \pm 14.0$ & 13.0 \\ &\multirow{1}{*}{DWA}&Registration & $5.7 \pm 5.9$ & 3.5 & {$6.4 \pm 6.8$} & {3.7} & {$13.2 \pm 7.3$} & {12.2} & {$20.0 \pm 13.2$} & {17.6} \\ \hline \multirow{6}{*}{ Dense }&\multirow{2}{*}{Equal}&Segmentation & \textcolor{light-gray}{$5.7 \pm 5.4$} & \textcolor{light-gray}{4.1} & \textcolor{light-gray}{$14.4 \pm 17.2$} & \textcolor{light-gray}{6.8} & \textcolor{light-gray}{$16.8 \pm 12.6~$} & \textcolor{light-gray}{13.6} & $10.9 \pm 10.9$ & 5.5 \\ &&Registration & $5.6 \pm 5.6$ & \textcolor{light-gray}{4.0} & $6.6 \pm 7.8$ & 4.0 & $13.1 \pm 6.7$ & 13.0 & \textcolor{light-gray}{$19.6 \pm 12.0$} & \textcolor{light-gray}{17.4} \\ &\multirow{2}{*}{Homoscedastic}&Segmentation & \textcolor{light-gray}{$5.8 \pm 5.9$} & \textcolor{light-gray}{3.3} & \textcolor{light-gray}{$10.0 \pm 11.6$} & \textcolor{light-gray}{5.1} & \textcolor{light-gray}{$17.1 \pm 16.6~$} & \textcolor{light-gray}{13.8} & $11.4 \pm 11.3~$ & 5.9 \\ &&Registration & $5.3 \pm 5.7$ & {3.0} & $6.4 \pm 6.8$ & {3.2} & $13.0 \pm 6.5$ & 12.6 & 
\textcolor{light-gray}{$19.2 \pm 13.7$} & \textcolor{light-gray}{14.2} \\ &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$5.4 \pm 5.5$} & \textcolor{light-gray}{3.6} & \textcolor{light-gray}{$12.7 \pm 17.0$} & \textcolor{light-gray}{5.9} & \textcolor{light-gray}{$16.2 \pm 12.5$} & \textcolor{light-gray}{14.4} & $10.8 \pm 10.7$ & 6.2 \\ &&Registration & $5.3 \pm 5.6$ & 3.5 & ${6.0} \pm {6.6}~$ & 3.3 & $13.1 \pm 7.2$ & 13.0 & \textcolor{light-gray}{$19.4 \pm 11.9$} & \textcolor{light-gray}{17.4} \\ \hline \multirow{6}{*}{ SEDD }&\multirow{2}{*}{Equal}&Segmentation & \textcolor{light-gray}{$8.5 \pm 7.1$} & \textcolor{light-gray}{6.0} & \textcolor{light-gray}{$18.9 \pm 19.5$} & \textcolor{light-gray}{8.6} & \textcolor{light-gray}{$16.7 \pm 11.9$} & \textcolor{light-gray}{14.7} & $12.7 \pm 11.0$ & 8.5 \\ &&Registration & $5.6 \pm 5.8$ & 3.6 & $6.7 \pm 7.2$ & 4.1 & $13.3 \pm 7.0$ & 12.0 & \textcolor{light-gray}{$19.0 \pm 12.7$} & \textcolor{light-gray}{15.2} \\ &\multirow{2}{*}{Homoscedastic}&Segmentation & \textcolor{light-gray}{$5.7 \pm 5.5$} & \textcolor{light-gray}{3.9} & \textcolor{light-gray}{$16.0 \pm 16.3$} & \textcolor{light-gray}{10.6} & \textcolor{light-gray}{$18.8 \pm 16.5~$} & \textcolor{light-gray}{15.3} & $9.4 \pm 9.9~$ & 4.1 \\ &&Registration & $5.5 \pm 5.6$ & 3.3 & $6.3 \pm 6.7~$ & 3.6 & $13.3 \pm 7.3$ & 13.0 & \textcolor{light-gray}{$18.8 \pm 13.5$} & \textcolor{light-gray}{14.6} \\ &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$6.2 \pm 5.4$} & \textcolor{light-gray}{4.4} & \textcolor{light-gray}{$11.5 \pm 14.0$} & \textcolor{light-gray}{5.0} & \textcolor{light-gray}{$16.8 \pm 14.4~$} & \textcolor{light-gray}{13.0} & $9.5 \pm 10.8~$ & 4.4 \\ &&Registration & $5.8 \pm 5.7$ & 4.0 & $6.4 \pm 7.4~$ & 3.6 & $13.4 \pm 7.5$ & 12.5 & \textcolor{light-gray}{$21.9 \pm 11.5$} & \textcolor{light-gray}{19.0} \\ \hline \multirow{6}{*}{ Cross-stitch }&\multirow{2}{*}{Equal}&Segmentation & \textcolor{light-gray}{$5.8 \pm 5.4~$} & \textcolor{light-gray}{4.0} & \textcolor{light-gray}{$12.2 \pm 15.8~$} & \textcolor{light-gray}{5.0} & \textcolor{light-gray}{$17.0 \pm 14.7~$} & \textcolor{light-gray}{14.0} & $10.8 \pm 11.3~$ & 4.4 \\ &&Registration & ${5.1} \pm {5.5}~$ & 3.2 & $6.2 \pm 8.6~$ & 3.3 & ${12.6} \pm {6.7}~$ & 12.0 & \textcolor{light-gray}{$19.1 \pm 12.5~$} & \textcolor{light-gray}{16.2} \\ &\multirow{2}{*}{Homoscedastic}&Segmentation & {$5.9 \pm 5.4~$} & \textcolor{light-gray}{4.1} & \textcolor{light-gray}{$7.8 \pm 7.4~$} & \textcolor{light-gray}{4.6} & \textcolor{light-gray}{$20.5 \pm 18.9~$} & \textcolor{light-gray}{14.7} & $7.8 \pm 8.7$ & {3.1} \\ &&Registration & \textcolor{light-gray}{$6.2 \pm 5.6$} & \textcolor{light-gray}{4.5} & $6.1 \pm 7.2~$ & {3.2} & $13.5 \pm 7.3$ & 13.5 & \textcolor{light-gray}{$19.4 \pm 12.3$} & \textcolor{light-gray}{16.3} \\ &\multirow{2}{*}{DWA}&Segmentation & \textcolor{light-gray}{$6.7 \pm 5.8$} & \textcolor{light-gray}{4.2} & \textcolor{light-gray}{$7.6 \pm 9.1$} & \textcolor{light-gray}{4.1} & \textcolor{light-gray}{$20.7 \pm 18.6$} & \textcolor{light-gray}{14.9} & ${7.5} \pm {8.8}$ & 3.5 \\ &&Registration & $6.0 \pm 5.7$ & 4.1 & $6.1 \pm 6.8~$ & 3.4 & $13.5 \pm 7.5$ & 13.6 & \textcolor{light-gray}{$21.5 \pm 11.6$} & \textcolor{light-gray}{20.1} \\ \hline \end{tabular} } \label{table:_HD} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[]{DSC values for the different networks on the validation set (HMC). 
Higher values are better.} \resizebox{\textwidth}{!}{ \begin{tabular}{lllclclclc} &&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{1}{*}{ Seg }&Segmentation & $0.84 \pm 0.03~$ & 0.84 & $0.60 \pm 0.14~$ & 0.62 & $0.75 \pm 0.10~$ & 0.77 & $0.90 \pm 0.07~$ & 0.93 \\ \hline \multirow{1}{*}{ Reg }&Registration & $0.85 \pm 0.06~$ & 0.86 & $0.62 \pm 0.18~$ & 0.68 & $0.79 \pm 0.08~$ & 0.81 & $0.82 \pm 0.10~$ & 0.84 \\ \hline \multirow{1}{*}{ JRS-reg }&Registration & $0.86 \pm 0.03~$ & 0.87 & $0.69 \pm 0.13~$ & 0.73 & $0.83 \pm 0.06~$ & 0.84 & $0.88 \pm 0.08~$ & 0.92 \\ \hline \multirow{2}{*}{ Dense }&Segmentation & ${0.88} \pm {0.04}~$ & {0.89} & ${0.70} \pm {0.12}~$ & 0.73 & $0.85 \pm 0.04~$ & 0.86 & ${0.94} \pm {0.02}~$ & 0.94 \\ &Registration & \textcolor{light-gray}{$0.87 \pm 0.04~$} & \textcolor{light-gray}{0.88} & \textcolor{light-gray}{$0.68 \pm 0.15~$} & 0.73 & \textcolor{light-gray}{$0.82 \pm 0.06~$} & \textcolor{light-gray}{0.83} & \textcolor{light-gray}{$0.87 \pm 0.08~$} & \textcolor{light-gray}{0.90} \\ \hline \multirow{2}{*}{ SEDD }&Segmentation & $0.87 \pm 0.04~$ & 0.88 & $0.69 \pm 0.12~$ & \textcolor{light-gray}{0.72} & $0.83 \pm 0.07~$ & 0.84 & $0.93 \pm 0.02~$ & 0.94 \\ &Registration & \textcolor{light-gray}{$0.86 \pm 0.04~$} & \textcolor{light-gray}{0.87} & $0.69 \pm 0.13~$ & {0.74} & \textcolor{light-gray}{$0.82 \pm 0.06~$} & \textcolor{light-gray}{0.83} & \textcolor{light-gray}{$0.88 \pm 0.08~$} & \textcolor{light-gray}{0.92} \\ \hline \multirow{2}{*}{ Cross-stitch }&Segmentation & ${0.88} \pm {0.04}~$ & 0.88 & ${0.70} \pm {0.11}~$ & {0.74} & ${0.86} \pm {0.05}~$ & {0.88} & ${0.94} \pm {0.02}~$ & {0.95} \\ &Registration & \textcolor{light-gray}{$0.87 \pm 0.03~$} & 0.88 & \textcolor{light-gray}{$0.68 \pm 0.15~$} & \textcolor{light-gray}{0.73} & \textcolor{light-gray}{$0.84 \pm 0.05~$} & \textcolor{light-gray}{0.85} & \textcolor{light-gray}{$0.88 \pm 0.08~$} & \textcolor{light-gray}{0.91} \\ \hline \multirow{1}{*}{ Elastix~\cite{qiao2017fast} }&Registration & $0.84 \pm 0.07~$ & 0.86 & $0.50 \pm 0.25~$ & 0.53 & $0.74 \pm 0.06~$ & 0.74 & $0.75 \pm 0.10~$ & 0.76 \\ \hline \multirow{1}{*}{ Hybrid~\cite{MedPhys} }&Registration & ${0.88} \pm {0.04}~$ & {0.89} & ${0.70} \pm {0.14}~$ & 0.72 & $0.85 \pm 0.06~$ & 0.87 & $0.91 \pm 0.08~$ & {0.95} \\ \hline \multirow{1}{*}{ JRS-GAN~\cite{JrsGan} }&Registration & $0.86 \pm 0.04~$ & 0.87 & $0.61 \pm 0.20~$ & 0.67 & $0.82 \pm 0.06~$ & 0.83 & $0.88 \pm 0.08~$ & 0.92 \\ \hline \end{tabular} } \label{table:_DSC} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[]{\% 95 HD (mm) values for the different networks on the validation set (HMC). 
Lower values are better.} \resizebox{\textwidth}{!}{ \begin{tabular}{lllclclclc} &&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{1}{*}{ Seg }&Segmentation & $4.4 \pm 1.0~$ & 4.4 & $8.6 \pm 8.6~$ & 7.3 & $16.5 \pm 11.0~$ & 13.3 & $6.9 \pm 6.6~$ & 4.0 \\ \hline \multirow{1}{*}{ Reg }&Registration & $5.5 \pm 4.5~$ & 4.0 & $5.6 \pm 4.1~$ & 4.3 & $11.0 \pm 6.4~$ & 9.4 & $15.7 \pm 9.6~$ & 12.1 \\ \hline \multirow{1}{*}{ JRS-reg }&Registration & $3.8 \pm 1.3~$ & 3.2 & $4.1 \pm 2.8~$ & 3.2 & $9.9 \pm 6.2~$ & 8.4 & $11.7 \pm 10.3~$ & 9.2 \\ \hline \multirow{2}{*}{ Dense }&Segmentation & $3.2 \pm 1.0~$ & 3.0 & \textcolor{light-gray}{$5.8 \pm 7.6~$} & \textcolor{light-gray}{3.9} & $9.6 \pm 5.8~$ & 8.0 & $3.8 \pm 3.9~$ & 2.8 \\ &Registration & \textcolor{light-gray}{$3.4 \pm 1.1~$} & \textcolor{light-gray}{3.2} & $4.4 \pm 3.0~$ & 3.2 & \textcolor{light-gray}{$10.5 \pm 6.0~$} & \textcolor{light-gray}{9.0} & \textcolor{light-gray}{$12.6 \pm 9.2~$} & \textcolor{light-gray}{10.2} \\ \hline \multirow{2}{*}{ SEDD }&Segmentation & $3.5 \pm 1.1~$ & \textcolor{light-gray}{3.3} & \textcolor{light-gray}{$5.2 \pm 5.2~$} & \textcolor{light-gray}{4.0} & \textcolor{light-gray}{$10.5 \pm 5.5~$} & \textcolor{light-gray}{9.7} & ${3.3} \pm {1.3}~$ & 3.0 \\ &Registration & \textcolor{light-gray}{$3.6 \pm 1.2~$} & 3.2 & $4.1 \pm 2.6~$ & {3.1} & $10.4 \pm 6.3~$ & 9.5 & \textcolor{light-gray}{$11.7 \pm 9.9~$} & \textcolor{light-gray}{8.7} \\ \hline \multirow{2}{*}{ Cross-stitch }&Segmentation & $3.0 \pm 1.0~$ & 3.0 & $4.3 \pm 1.7~$ & \textcolor{light-gray}{3.9} & $9.5 \pm 6.2~$ & 7.2 & ${3.3} \pm {2.9}~$ & {2.3} \\ &Registration & \textcolor{light-gray}{$3.2 \pm 0.9~$} & 3.0 & \textcolor{light-gray}{$4.5 \pm 3.3~$} & 3.6 & \textcolor{light-gray}{$9.8 \pm 6.3~$} & \textcolor{light-gray}{8.6} & \textcolor{light-gray}{$12.2 \pm 10.1~$} & \textcolor{light-gray}{9.7} \\ \hline \multirow{1}{*}{ Elastix~\cite{qiao2017fast} }&Registration & $4.0 \pm 1.7~$ & 3.7 & $6.0 \pm 3.4~$ & 5.6 & $10.9 \pm 5.2~$ & 9.8 & $15.3 \pm 8.3~$ & 13.6 \\ \hline \multirow{1}{*}{ Hybrid~\cite{MedPhys} }&Registration & ${2.9} \pm {0.9}~$ & {2.8} & ${3.8} \pm {2.2}~$ & {3.1} & ${7.7} \pm {4.5}~$ & {6.1} & $5.7 \pm 4.6~$ & 3.3 \\ \hline \multirow{1}{*}{ JRS-GAN~\cite{JrsGan} }&Registration & $3.4 \pm 1.2~$ & 3.0 & $5.3 \pm 3.0~$ & 4.6 & $10.1 \pm 6.1~$ & 8.4 & $11.0 \pm 9.6~$ & 7.6 \\ \hline \end{tabular} } \label{table:_HD} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[Table caption text]{DSC values for the different networks on the independent test set (EMC). Higher values are better. 
} \resizebox{\textwidth}{!}{ \begin{tabular}{lllclclclc} &&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{1}{*}{ Seg }&Segmentation & $0.73 \pm 0.11~$ & 0.77 & $0.37 \pm 0.30~$ & 0.28 & $0.67 \pm 0.10~$ & 0.68 & ${0.91} \pm {0.07}~$ & 0.93 \\ \hline \multirow{1}{*}{ Reg }&Registration & $0.83 \pm 0.16~$ & 0.88 & $0.64 \pm 0.26~$ & 0.74 & $0.72 \pm 0.16~$ & 0.77 & $0.75 \pm 0.19~$ & 0.82 \\ \hline \multirow{1}{*}{ JRS-reg }&Registration & $0.84 \pm 0.16~$ & 0.89 & $0.68 \pm 0.25~$ & 0.77 & $0.76 \pm 0.15~$ & 0.80 & $0.80 \pm 0.18~$ & 0.89 \\ \hline \multirow{2}{*}{ Dense }&Segmentation & $0.84 \pm 0.16~$ & 0.89 & \textcolor{light-gray}{$0.63 \pm 0.27~$} & \textcolor{light-gray}{0.75} & $0.79 \pm 0.16~$ & {0.82} & $0.87 \pm 0.13~$ & 0.93 \\ &Registration & $0.84 \pm 0.16~$ & \textcolor{light-gray}{0.88} & $0.68 \pm 0.25~$ & 0.78 & \textcolor{light-gray}{$0.77 \pm 0.14~$} & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.78 \pm 0.17~$} & \textcolor{light-gray}{0.86} \\ \hline \multirow{2}{*}{ SEDD }&Segmentation & $0.84 \pm 0.15~$ & 0.89 & \textcolor{light-gray}{$0.50 \pm 0.28~$} & \textcolor{light-gray}{0.58} & $0.76 \pm 0.18~$ & {0.82} & $0.88 \pm 0.13~$ & {0.94} \\ &Registration & $0.84 \pm 0.16~$ & \textcolor{light-gray}{0.88} & $0.68 \pm 0.24~$ & 0.78 & $0.76 \pm 0.15~$ & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.79 \pm 0.17~$} & \textcolor{light-gray}{0.88} \\ \hline \multirow{2}{*}{ Cross-stitch }&Segmentation & $0.84 \pm 0.14~$ & 0.89 & \textcolor{light-gray}{$0.61 \pm 0.27~$} & \textcolor{light-gray}{0.73} & $0.78 \pm 0.14~$ & 0.81 & $0.88 \pm 0.10~$ & 0.93 \\ &Registration & $0.84 \pm 0.15~$ & 0.89 & $0.68 \pm 0.24~$ & 0.80 & \textcolor{light-gray}{$0.77 \pm 0.15~$} & \textcolor{light-gray}{0.80} & \textcolor{light-gray}{$0.80 \pm 0.16~$} & \textcolor{light-gray}{0.87} \\ \hline \multirow{1}{*}{ Elastix~\cite{qiao2017fast} }&Registration & ${0.89} \pm {0.05}~$ & {0.91} & $0.72 \pm 0.24~$ & {0.82} & $0.75 \pm 0.12~$ & 0.76 & $0.79 \pm 0.18~$ & 0.87 \\ \hline \multirow{1}{*}{ Hybrid~\cite{MedPhys} }&Registration & $0.88 \pm 0.04~$ & 0.89 & ${0.77} \pm {0.15}~$ & 0.81 & ${0.80} \pm {0.10}~$ & {0.82} & $0.85 \pm 0.13~$ & 0.90 \\ \hline \end{tabular} } \label{table:_DSC_EMC} \end{table*} \begin{table*}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption[Table caption text]{95\% HD (mm) values for the different networks on the independent test set (EMC). Lower values are better.
} \resizebox{\textwidth}{!}{ \begin{tabular}{lllclclclc} &&\multicolumn{2}{c}{Prostate}&\multicolumn{2}{c}{Seminal vesicles}&\multicolumn{2}{c}{Rectum}& \multicolumn{2}{c}{Bladder} \\ \hline Network & Output path & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median & \multicolumn{1}{c}{$\mu \pm \sigma$} & median \\ \hline \multirow{1}{*}{ Seg }&Segmentation & $10.7 \pm 5.4~$ & 9.3 & $21.4 \pm 17.9~$ & 15.4 & $30.5 \pm 12.9~$ & 29.0 & $11.2 \pm 8.5~$ & 10.0 \\ \hline \multirow{1}{*}{ Reg }&Registration & $6.7 \pm 5.9~$ & 4.2 & $7.5 \pm 8.6~$ & 4.3 & $13.1 \pm 6.9~$ & 12.0 & $22.7 \pm 14.0~$ & 20.2 \\ \hline \multirow{1}{*}{ JRS-reg }&Registration & $5.7 \pm 5.9~$ & 3.7 & $6.2 \pm 7.1~$ & 3.6 & $13.0 \pm 7.3~$ & 11.5 & $18.5 \pm 14.0~$ & 13.0 \\ \hline \multirow{2}{*}{ Dense }&Segmentation & \textcolor{light-gray}{$5.8 \pm 5.9~$} & \textcolor{light-gray}{3.3} & \textcolor{light-gray}{$10.0 \pm 11.6~$} & \textcolor{light-gray}{5.1} & \textcolor{light-gray}{$17.1 \pm 16.6~$} & \textcolor{light-gray}{13.8} & $11.4 \pm 11.3~$ & 5.9 \\ &Registration & $5.3 \pm 5.7~$ & 3.0 & $6.4 \pm 6.8~$ & 3.2 & $13.0 \pm 6.5~$ & 12.6 & \textcolor{light-gray}{$19.2 \pm 13.7~$} & \textcolor{light-gray}{14.2} \\ \hline \multirow{2}{*}{ SEDD }&Segmentation & \textcolor{light-gray}{$5.7 \pm 5.5~$} & \textcolor{light-gray}{3.9} & \textcolor{light-gray}{$16.0 \pm 16.3~$} & \textcolor{light-gray}{10.6} & \textcolor{light-gray}{$18.8 \pm 16.5~$} & \textcolor{light-gray}{15.3} & ${9.4} \pm {9.9}~$ & {4.1} \\ &Registration & $5.5 \pm 5.6~$ & 3.3 & $6.3 \pm 6.7~$ & 3.6 & $13.3 \pm 7.3~$ & 13.0 & \textcolor{light-gray}{$18.8 \pm 13.5~$} & \textcolor{light-gray}{14.6} \\ \hline \multirow{2}{*}{ Cross-stitch }&Segmentation & \textcolor{light-gray}{$5.8 \pm 5.4~$} & \textcolor{light-gray}{4.0} & \textcolor{light-gray}{$12.2 \pm 15.8~$} & \textcolor{light-gray}{5.0} & \textcolor{light-gray}{$17.0 \pm 14.7~$} & \textcolor{light-gray}{14.0} & $10.8 \pm 11.3~$ & 4.4 \\ &Registration & $5.1 \pm 5.5~$ & 3.2 & $6.2 \pm 8.6~$ & 3.3 & $12.6 \pm 6.7~$ & 12.0 & \textcolor{light-gray}{$19.1 \pm 12.5~$} & \textcolor{light-gray}{16.2} \\ \hline \multirow{1}{*}{ Elastix~\cite{qiao2017fast} }&Registration & ${3.6} \pm {2.0}~$ & {2.9} & ${4.6} \pm {4.4}~$ & 3.2 & $11.3 \pm 6.0~$ & 11.3 & $16.1 \pm 14.8~$ & 10.4 \\ \hline \multirow{1}{*}{ Hybrid~\cite{MedPhys} }&Registration & $3.9 \pm 1.9~$ & 3.4 & $4.8 \pm 4.7~$ & {3.1} & ${10.3} \pm {6.8}~$ & {8.6} & $11.1 \pm 10.6~$ & 6.6 \\ \hline \end{tabular} } \label{table:_HD_EMC} \end{table*}
{ "timestamp": "2021-05-06T02:08:41", "yymm": "2105", "arxiv_id": "2105.01844", "language": "en", "url": "https://arxiv.org/abs/2105.01844" }
\section{Introduction} In the context of robotics research, participatory design attempts to empower non-roboticists such that they can shape the direction of robotics research and actively collaborate in robot design~\citep{leeStepsParticipatoryDesign2017}. Typically, participatory design is achieved by researchers running workshops or focus groups with end-users and/or domain experts. Output may include potential use case scenarios~\citep{jenkinsCareMonitoringCompanionship2015}, design guidelines/recommendations~\citep{winkleSocialRobotsEngagement2018} and/or prototype robot behaviours~\citep{azenkotEnablingBuildingService2016}. {\v S}abanovi{\'c} identified such methods as appropriate for the pursuit of a mutual shaping approach in robot design, that is, one which recognises the dynamic interactions between social robots and their context of use~\citep{sabanovicRobotsSocietySociety2010}, an approach that we find compelling for designing effective and acceptable social robots efficiently. However, the automation of social robot behaviour, which requires a significant technical understanding of robotics and artificial intelligence, is not typically considered during such activities. Instead, common methods for the automation of social robot behaviour include utilising models based on human psychology (e.g. Theory of Mind~\citep{lemaignanArtificialCognitionSocial2017}) or animal behaviour~\citep{arkinEthologicalModelingArchitecture2001}, or attempting to observe and replicate human-human interaction behaviours (e.g.~\citep{sussenbachRobotFitnessCompanion2014}). This limits the potential for direct input from domain experts (teachers, therapists etc.) who are skilled in the use of social interaction in complex scenarios. Previous work with such experts has demonstrated that much of the related expertise is intuitive and intangible, making it difficult to access in a way that can easily inform robot automation~\citep{winkleSocialRobotsEngagement2018}. This is somewhat addressed by methods that capture domain expert operation of a robot directly, for example end-user programming tools (e.g.~\citet{leonardi2019trigger}) or learning from expert teleoperation of robots (e.g.~\citet{sequeira2016discovering}). However, these methods tend to focus on offline learning/programming. As such, there is no opportunity for experts to create an adequate, situated mental model of the robot's capabilities, limiting the guarantee of appropriate behaviour when the robot is eventually deployed to interact with users autonomously. Instead, we argue that robots should be automated by domain experts themselves, in real-time, and while being situated in the interaction context; and that this automation should be done through a direct, bi-directional interaction between the expert and the robot. We refer to this as the \emph{teaching phase}, where the robot is taught what to do by the domain expert, regardless of whether it is, e.g., a machine learning algorithm or an authoring tool that underpins this interaction. This live, in-situ and interactive approach allows \emph{mutual shaping} to occur during robot automation, as the expert defines the robot's program in response to the evolving dynamics of the social context into which the robot has been deployed.
\smallskip \subsection{Supporting A Mutual Shaping Approach to Robot Design} {\v S}abanovi{\'c} proposed a \emph{mutual shaping} approach to social robot design, that is, one which recognises the dynamic interactions between social robots and their context of use, in response to their finding that most roboticists were taking a technologically deterministic view of the interaction between robots and society~\citep{sabanovicRobotsSocietySociety2010}. Studies of real-world HRI motivate such an approach, as they demonstrate how mutual shaping effects impact robot effectiveness upon deployment in the real world. For example, use and acceptance of robots in older adult health settings have been shown to be affected by situational and context of use factors such as user age and gender, household type, and the prompting of its use by others~\citep{degraafSharingLifeHarvey2015,changInteractionExpandsFunction2015}, i.e. factors unrelated to the robot's functionality. Pursuit of a mutual shaping approach, primarily through use of participatory design and in-the-wild robot evaluation methods, gives the best possible chance of identifying and accounting for such factors during the design and development process, such that the robot has maximum positive impact on its eventual long-term deployment. To this end, {\v S}abanovi{\'c} describes four key practices that underpin a mutual shaping approach to support a \textit{``socially robust understanding of technological development that enables the participation of multiple stakeholders and disciplines''}: \begin{enumerate} \item Evaluating robots in society: human-robot interaction studies and robot evaluations should be conducted `in the wild', i.e. in the environments and context of use for which they are ultimately intended to be deployed~\citep{ros2011child}. \item Studying socio-technological ecologies: robot design should be informed by systematic study of the context of use, and evaluation of robots should consider impact on the socio-technological ecology into which they have been deployed. \item Outside-in design: design constraints should be defined by empirical social research and the social context of use, rather than technical capabilities, and evaluation should be based on user experiences rather than internal measures of technical capability. \item Designing with users: stakeholders (those who will be directly affected by the robot's deployment and use) should be included in identifying robot applications and thinking about how robots will be used as well as in designing the robot and its behaviour(s). \end{enumerate} However, as we explain in Section~\ref{sec:related}, robot development at present typically represents a discontinuous process, particularly broken up by the automation of social robot behaviour. It still tends to rely heavily on technical expertise, executed in research/development environments rather than the real world, with little active inclusion of domain experts or other expert stakeholders. This discontinuity also represents a key hurdle to truly multi-disciplinary working: a disconnect between team members of different academic backgrounds that can result in a number of practical challenges and frustrations. \subsection{The \emph{Led-by-Experts Automation and Design Of Robots\xspace} (LEADOR\xspace) Method} The generalisable method we provide in this work derives from two (independently undertaken) foundational works.
The first is~\cite{senft2019teaching}'s educational robot for school children, in which a psychologist taught a robot to support children in an educational activity. After a teaching phase with 25 children, the robot was evaluated in further autonomous interactions with children, demonstrating the potential of online teaching as a way to define autonomous robot behaviours. The second is~\cite{winkle2020insitu}'s robot fitness coach. This work built on~\cite{senft2019teaching} by integrating the online teaching method into an end-to-end participatory design process whereby the same professional fitness instructor was involved in the co-design, automation and evaluation of a robot fitness coach. This work also demonstrated the value of online teaching when compared to expert-designed heuristics as a next-best alternative for defining autonomous robot behaviours with domain expert involvement. Both studies used a teaching phase where a domain expert interacted with the robot to create an interactive behaviour, and in both, the resulting autonomous robot behaviour was successfully evaluated. \begin{figure} \centering \includegraphics[width=\linewidth]{figs/png/PD_comparison.png} \caption{Comparison between a classic Participatory Design (PD) approach and LEADOR\xspace, our proposed end-to-end participatory design approach. Green activities represent joint work between domain experts, multidisciplinary researchers and/or engineers; yellow activities are domain expert-led; blue activities are engineer-led. Compared to typical PD, the two key differences in our approach are the focus on developing a \emph{teaching system} instead of a \emph{final autonomous behaviour} in step 4, and the combining of autonomous action policy definition and deployment in the real world into a single step 5 + 6. In addition, our method reduces the amount of work that is carried out independently by engineers (i.e. with no domain expert or non-roboticist input).} \label{fig:methodcomp} \end{figure} From these works, we have derived a five-step, generalisable method for end-to-end participatory design (PD) of autonomous social robots (\emph{Led-by-Experts Automation and Design Of Robots\xspace} or LEADOR\xspace), depicted alongside typical PD in Figure~\ref{fig:methodcomp}. The key stages of our approach, as referenced in the figure, can be summarised as follows: \begin{itemize} \item[(i)] Problem Definition: \textit{Initial brainstorming, studies of context of use, studies with stakeholders.} \item[(ii)] Interaction Design: \textit{Detailed refinement of robot application and interaction scenario, choice/design of robot platform.} \item[(iii)] System Specification: \textit{Co-design of the robot's action space, input space, and teaching interface.} \item[(iv)] Technical Implementation: \textit{Realisation of (iii) through technical implementation of underlying architecture and all sub-components and tools required for the teaching phase.} \item[(v)] Real World Deployment: \textit{Robot is deployed in the real-world, where a teaching phase is undertaken, led by the domain expert(s), to create autonomous robot behaviour.} \end{itemize} \begin{figure} \centering \includegraphics[width=.5\linewidth]{figs/png/threeway.png} \caption{Three-way interaction between the domain expert, the robot, and the target user through which the expert teaches the robot during a teaching phase upon real world deployment.
Robot automation is therefore happening in the real world, while the robot is fully embedded in its long-term application context. The expert is teaching the robot through bi-directional communication, as the robot interacts with the target user. The extent of interaction(s) between the domain expert and target user should be consistent with what is envisaged for long-term deployment of the robot, and is domain-dependent.} \label{fig:three-way} \end{figure} The cornerstone of our method is to facilitate robot automation through direct interaction between the expert and the robot, during a `teaching phase' whereby the domain expert teaches the robot what to do during real interaction(s) with the target user. The resultant interaction is depicted in Figure~\ref{fig:three-way}. Regardless of the specifics of the final interaction, the output of this phase is a robot that \textit{can} operate autonomously, but could also allow for continued expert-in-the-loop supervision and/or behaviour correction/additional training. Through our foundational works, we demonstrate the flexibility of our method for developing autonomous robots for different long-term interaction settings. \citet{senft2019teaching}'s educational robot was intended for dyadic, unsupervised robot-user interactions, whereas \citet{winkle2020insitu}'s robot fitness coach was intended primarily for dyadic robot-user interactions but to be complemented with additional expert-user interactions/supervision and with additional expert-robot-user teaching interactions if necessary. LEADOR\xspace could also be used to design robots with other interaction requirements, e.g. an autonomous robot to be used in fully triadic expert-robot-user interactions or to facilitate permanent expert supervision and validation of autonomous behaviour. In this paper, we have combined our experiences from these foundational works to propose an end-to-end participatory design process, centered around an in-situ teaching phase, that uniquely delivers on the promises of mutual shaping and participatory design. We suggest this approach is as \emph{practical} as it is \emph{responsible}, as our foundational studies demonstrate we were able to create appropriate, intelligent, autonomous social robot behaviour for complex application environments in a timely manner. As detailed in~\citet{senft2015sparc,senft2019teaching}, this teaching phase is achieved by deploying the robot in the proposed use case, with the robot initially controlled completely by a human `teacher'. The teacher can progressively improve the robot behaviour in-situ and generate a mental model of the robot's policy. This teaching can continue until the domain expert is confident that the robot can satisfactorily operate autonomously. This approach therefore allows non-roboticist domain experts to actively participate in creating autonomous robot behaviour. It also allows for the continual shaping of robot behaviour, as teaching can be seamlessly resumed at any time to address any changes in the interaction dynamics, therefore better supporting a mutual shaping approach. We suggest our methodology is particularly appropriate for use cases in which difficult-to-automate and/or difficult-to-explain `intuitive' human domain expertise and experience are needed to inform personalised interaction and engagement (e.g. socially assistive robotics).
The result, then, is an autonomous robot which has been designed, developed and evaluated (by a multi-disciplinary research team) directly in conjunction with domain experts, within its real-world context of use, and that can intelligently respond to complex social dynamics in ways that would otherwise have been very difficult to automate. For clarity, hereafter we use the term \emph{domain expert} (or \emph{teacher}) to refer to experts in an application domain. For example, these domain experts could be therapists, shop owners, or school teachers. These experts interact with the robot and specify its behaviour in a \emph{teaching interaction} (even if no actual machine learning is involved). On the other hand, \emph{engineers} or developers refer to people with technical expertise in robotics or programming. They are the ones typically programming a robot behaviour or developing tools to be used by domain experts. Finally, the \emph{target user} is the person a robot would interact with in the \emph{application interaction}. For example, such target users could be children during a therapy session or store customers in a shopping interaction. This population is expected to interact with the robot at the point of use, rather than be the ones directly defining the autonomous robot behaviour. \section{Related Work} \label{sec:related} \subsection{Participatory Design and Social Robotics} \label{sec:relatedPD} To properly situate our method in the context of participatory design (PD), we first define PD and how it relates to other methodologies typically seen in social robotics. Most works relevant to PD in social human-robot interaction showcase one (or more) of the following methods: \begin{enumerate} \item \textit{Ethnographic/`In-the-Wild' Studies} typically focus on understanding situated use and/or emergent behaviour(s) on deployment of a robot into the real world. Concerning robot design, such studies are inherently limited to the testing of prototypes, WoZ systems or finished products (e.g.~\citet{forlizziHowRoboticProducts2007, changInteractionExpandsFunction2015}). However, they might be used to inform initial design requirements (and their iteration) through observation of the use case environment and user behaviour. \item \textit{User-Centered Design (UCD)} aims to understand and incorporate user perspective and needs into robot design. Typically, researchers set the research agenda based on prior assumptions regarding the context of use and proposed robot application (e.g. \citet{louieFocusGroupStudy2014,wuDesigningRobotsElderly2012,beerDomesticatedRobotDesign2012}). \item \textit{Participatory Design (PD)} encourages participants (users, stakeholders etc.) to actively join in decision making processes which shape robot design and/or the direction of research. This typically involves participants having authority equal to that of the researchers and designers, with both engaging in a two-way exchange of knowledge and ideas (e.g. \citet{azenkotEnablingBuildingService2016, bjorlingParticipatoryResearchPrinciples2019}). \end{enumerate} Lee et al. give a good overview of the above practices as employed in social robot and HRI design/research, with a particular focus on how the shortcomings of methods 1 and 2 can be addressed using PD \citep{leeStepsParticipatoryDesign2017}. The authors use a case study of social robot PD from their own work to highlight a number of PD design principles for informing social robot design and further development of PD methodologies.
They particularly highlight the empowering of PD participants to become active `robot co-designers' through \textit{mutual learning}, whereby there is a two-way exchange of knowledge and experience between researchers/designers and expert stakeholders. Through this process, users learn about e.g. robot capabilities, such that they are better informed to contribute to discussions on potential applications, whilst the researchers/designers come to learn more about the realities of the proposed context of use from the users' perspective. Since publication of Lee et al.'s work, PD methods have been gaining visibility for the design of social robots, with other roboticists further refining PD methods and best practice for their use in social robotics and HRI. As such, PD works relating to ours can be grouped into two categories: \begin{itemize} \item[(i)] results-focused publications which utilised PD methods; and \item[(ii)] methodology-focused publications in which the authors share or reflect on PD methods for use in social robot/HRI design. \end{itemize} Works in category (i) have typically taken the form of researchers working closely with prospective users and/or other stakeholders via focus groups, interviews, workshops etc., with the researchers then synthesising their results to produce potential use case scenarios~\citep{jenkinsCareMonitoringCompanionship2015}, design guidelines/recommendations~\citep{winkleSocialRobotsEngagement2018} and/or prototype robot behaviours~\citep{azenkotEnablingBuildingService2016}. For example, \citet{azenkotEnablingBuildingService2016} used participatory design to generate specifications for a socially assistive robot for guiding blind people through a building. The authors' study consisted of multiple sessions including interviews, a group workshop and individual user-robot prototyping sessions. The initial interviews were used, in part, to brief participants about robot capabilities. The group session was used to develop a conceptual storyboard of robot use, identifying interactions between the robot guide and the user. \citet{winkleSocialRobotsEngagement2018} conducted a study with therapists, utilising a novel focus group methodology combined with follow-up individual interviews in order to generate an expert-informed set of design guidelines for the design and use of socially assistive robots in rehabilitative therapies. The topic guides for each part of the study were designed to help the researchers understand typical therapy practice and therapist best practices for improving patient engagement, and to explore therapists' ideas and opinions on the potential role(s) social robots might play in rehabilitation. A key finding of this work was the extent to which therapists' intuitive, instantaneous behaviour is driven by situational factors specific to each individual client, making it difficult, for example, to extract any clear-cut heuristics that might inform generalisable, autonomous social robot behaviour directly. The resultant design guidelines therefore suggested that socially assistive robots require `high-level' personalisation to each user as well as the ability to adapt, in real time, to e.g. the user's performance and other situational factors. This is one of the key works that motivates our effort to facilitate expert-led, in-situ robot teaching, in order to capture this sort of tacit social intelligence. A follow-up publication by the same authors then comes under category (ii).
Specifically, the authors provide more detail on their focus group methodology, and how it reflects a mutual shaping approach to social robot design, alongside a general guide on how it might be applied to other domains~\citep{winkleMutualShapingDesign2019}. The method combines elements of PD and UCD, and utilises a demonstration of robot capabilities to support mutual learning between the researchers and participants. To evidence how this method supported mutual shaping in their work, and why this was beneficial, the authors identify specific project-related considerations as well as new research directions that could only be identified in conjunction with their domain expert participants, and also note that taking part in a focus group significantly and positively impacted participants' acceptance of social robots. Further on (ii), \citet{bjorlingParticipatoryResearchPrinciples2019} shared PD methods they used in the context of taking an overall \textit{human-centered design} approach to co-designing robots for improved mental health with teens. They present three method cases which cover novel and creative participatory techniques such as research design, script-writing and prototyping, concluding with a set of participatory design principles for guiding design work with vulnerable participants in a human-centred domain. One of their methods revolved around inviting teens to act as WoZ robot operators. Specifically, their setup had one teen teleoperating a robot whilst another teen recounted a (pre-scripted) stressful experience. In a second experiment, they utilised virtual reality (VR) such that one teen interacted, in an immersive VR environment, with a robot avatar teleoperated by a teen outside of that VR environment. From this study, the authors gathered data about the way teens collaborate and their perceptions of robot roles and behaviours. To this end, they demonstrated the value in expert (user) teleoperation of a proposed robot, both for better understanding the use case requirements and user needs, and as a way to generate exemplars of desirable autonomous robot behaviour. \citet{alves-oliveiraYOLORobotCreativity2017} also demonstrated a similar use of puppeteering and roleplay methods as part of a co-design study with children. In summary, work to date has demonstrated how PD methods can be used to study a proposed application domain in an attempt to ensure researchers thoroughly understand the context of use, and to elicit some expert knowledge for informing robot design and automation. This goes some way to supporting a mutual shaping and responsible robotics approach to social robot development. However, there remain two key disconnects in delivering truly end-to-end PD and mutual shaping in the development of an autonomous social robot. Firstly, robot automation is informed but not controlled or developed by domain experts. Secondly, there is a disconnect between this program definition and the real-world interaction requirements and situational specificities that will likely be crucial to overall robot success when the robot is deployed in real-world interaction. \smallskip \subsection{Alternative Methods to Capture Domain-Expert Knowledge} One of the key assumptions of PD in the context of robotics research is that the knowledge of the desired robot behaviour is held by domain experts and needs to be translated into programs. Typically, this translation is performed by engineers, who obtain a number of heuristics from the domain experts and then automate the robot accordingly.
Although widely applied even in PD research, this method only partially delivers on the promises of PD, as domain experts are used to inform robot behaviours but still rely on external actors (the engineers) to transform their intuition, knowledge, and expertise into actual code. Furthermore, this process can lead to a number of communication issues, as members from different communities have different ways of expressing needs and desires. Nevertheless, there exist a number of alternative solutions for capturing domain-expert knowledge that could support a PD approach to robot automation. \smallskip \subsubsection{End-user programming tools} A first solution is to create tools to allow domain experts to create robot behaviours themselves. Research in end-user development, or end-user programming, explores tools to allow domain experts or end-users to create programs without requiring coding knowledge. Typical applications are home automation, application synchronisation (e.g., IFTTT or Microsoft Flow), or video game development. Additionally, end-user programming has attracted considerable interest in robotics, for example to create autonomous robot behaviours for both industrial robots \citep{paxton2017costar,gao2019pati} and social robots \citep{leonardi2019trigger,louie2020social}. These \emph{authoring} tools are often developed by engineers and then provided to users to create their own applications without relying on text-based coding, for example by using visual or block programming \citep{huang2017code3}, tangible interfaces \citep{porfirio2021figaro}, or augmented reality \citep{cao2019ghostar}. However, while friendlier for users, such methods still suffer from two main drawbacks. First, the interface is often developed by engineers without necessarily following principles of participatory design. Second, these methods often see the programming process as a discrete step leading to a static autonomous behaviour, with limited opportunity to update the robot behaviour and little focus on testing and evaluating the created behaviour in real interactions. More precisely, users of these tools might not be the actual target of the application interaction and would program robots outside of the real context of use, forcing the aspiring developers to rely on their internal simulation of how the interaction should happen. For example, a shop owner could use an authoring tool to create a welcoming behaviour for a social robot, test it on themselves while developing the behaviour, and then deploy it on real clients with limited safeguards. In such a process, developers have to use their best guess as to how people might interact with the robot, and often struggle to infuse the robot with tacit knowledge, such as action timing or proxemics. This disconnect can lead to suboptimal robot behaviour, as the robot will face edge cases in the real world that the designer might not have anticipated.
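To make this concrete, the following minimal sketch (in Python) illustrates the kind of trigger-action representation that such authoring tools typically expose to end-users. The rule names, state features and thresholds are invented for illustration and are not taken from any of the cited tools; the point is that the rules are authored offline and then remain fixed at deployment time.
\begin{verbatim}
from dataclasses import dataclass
from typing import Callable, Dict, List

# The sensed state is a flat dictionary of named features, e.g.
# {"person_distance": 1.2, "time_since_greeting": 30.0}.
State = Dict[str, float]

@dataclass
class Rule:
    name: str
    trigger: Callable[[State], bool]  # condition over the sensed state
    action: str                       # high-level robot action to execute

# Rules are authored once, offline, by the end-user.
rules: List[Rule] = [
    Rule("welcome",
         lambda s: s["person_distance"] < 1.5
                   and s["time_since_greeting"] > 20.0,
         "say_hello"),
    Rule("re-engage",
         lambda s: s["time_since_last_interaction"] > 60.0,
         "wave"),
]

def select_actions(state: State, rules: List[Rule]) -> List[str]:
    """Return every action whose trigger fires for the current state."""
    return [r.action for r in rules if r.trigger(state)]

state = {"person_distance": 1.0, "time_since_greeting": 45.0,
         "time_since_last_interaction": 10.0}
print(select_actions(state, rules))  # -> ['say_hello']
\end{verbatim}
A designer testing such rules on themselves can easily miss states that real users will produce, which is precisely the edge-case problem described above.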
\smallskip \subsubsection{Learning from Real-World Interaction(s)} A method to address this gap between an offline design phase and the real world is to mimic the expert while they perform the interaction. Using machine learning, systems can learn from the experts how robots should behave. For example, \citet{liu2016data} asked participants to role play an interaction between a shopkeeper and a client and recorded data about this interaction (e.g., participant location or speech). From these recordings, Liu et al. learned a model of the shopkeeper, transferred it to the robot, and evaluated it in human-robot interactions. Similarly, \citet{porfirio2019bodystorming} recorded interaction traces between human actors and formalised them into finite state machines to create a robot behaviour. While relying on simulated interactions, these methods give developers more opportunities to explore situations beyond their initial imagination. One assumption of these methods is that robots should replicate human behaviour. Consequently, such methods allow the capture of implicit behaviours such as the timing and idiosyncrasies of human interactions. However, real-world interactions with robots might follow social norms different from those between humans only. As such, learning directly from human-human interaction also presents limitations. Wizard of Oz (WoZ) is an interaction paradigm widely used in robotics \citep{riek2012wizard} whereby a robot is controlled by an expert deciding what actions the robot should execute and when. The main advantage of this paradigm is to ensure that the robot behaviour is at all times appropriate to the current interaction. For this reason, WoZ has been used extensively in Robot-Assisted Therapy and in exploratory studies of how humans react to robots. Recent research has explored how this interaction can be used to collect data from real human-robot interaction and learn an appropriate robot behaviour. \citet{knox2014learning} proposed the ``Learning from the Wizard'' paradigm, whereby a robot would first be controlled in a WoZ experiment used to acquire the demonstrations, and then machine learning would be applied offline to define a policy. \citet{sequeira2016discovering} extended and applied this Learning from Demonstration (LfD) approach, with an emphasis on the concept of ``Restricted-perception WoZ'', in which the wizard only has access to the same input space as the learning algorithm, thus reducing the problem of correspondence between the state and action spaces used by the wizard and the ones available to the robot controller. Both of these works could support a PD approach to robot automation, as they could be used to generate an autonomous robot action policy based on data from (non-roboticist) domain expert WoZ interactions in real-world environments. Nevertheless, the typical WoZ puppeteering setup results in an absence of interaction between the design/development team and the robot, which prevents designers from having a realistic mental model of the robot behaviour and does not allow for any mutual shaping between the wizard, the robot and the contextual environment. When collecting data through LfD, it is not possible to know, during the teleoperated data collection process, at what point `enough' training data has actually been collected, as the system can only be evaluated once the learning process is complete. Similarly, when using end-user programming methods there is little opportunity to know how the system would actually behave when deployed in the real world. This lack of knowledge about the actual robot behaviour implies that robots have to be deployed to interact in the real world with limited guarantees or safeguards ensuring their behaviours are actually effective in the desired interaction.
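As an illustration of this offline pipeline, the minimal sketch below derives a policy from logged wizard demonstrations using a deliberately simple nearest-neighbour rule, restricted to the features the learner will also receive at run time (the `restricted-perception' idea). The feature names and logged values are invented for the example; the cited works use more sophisticated learning algorithms.
\begin{verbatim}
import math
from typing import List, Tuple

# One demonstration: (state features the wizard could see, action chosen).
Demo = Tuple[List[float], str]

# Features: [task_progress, seconds_since_last_robot_action]; values invented.
wizard_log: List[Demo] = [
    ([0.9, 12.0], "encourage"),
    ([0.2, 40.0], "hint"),
    ([1.0, 2.0], "congratulate"),
]

def nearest_action(state: List[float], demos: List[Demo]) -> str:
    """Offline-learned policy: act as the wizard did in the closest state."""
    return min(demos, key=lambda d: math.dist(d[0], state))[1]

# Once data collection ends the policy is fixed; nothing during collection
# indicated whether 'enough' demonstrations had been gathered.
print(nearest_action([0.85, 10.0], wizard_log))  # -> 'encourage'
\end{verbatim}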
\smallskip \subsubsection{Interactive Machine Learning} Interactive Machine Learning (IML) refers to a system learning online while it is being used \citep{fails2003interactive,amershi2014power}. The premise of IML is to empower end-users while reducing the iteration time between subsequent improvements of a learning system. Using IML to create robot behaviours through an interaction between a designer and an autonomous agent allows for full utilisation of the expert's teaching skills. It has been shown that humans are skilled teachers who can react to a learner's current performance and provide information specifically relevant to them \citep{bloom19842}. Similarly, previous research has shown that this effect also exists, to a certain extent, when teaching robots. Using Socially Guided Machine Learning \citep{thomaz2008teachable}, a human teacher adapts their teaching strategy to the robot's behaviour and thus helps it to learn better. By observing (and correcting) the robot's autonomous behaviour as it progresses, the expert can create a model of the robot's knowledge, capabilities and limitations. This understanding of the robot reduces the risk of over-trusting (both during training and/or autonomous operation) and introduces the potential for expert evaluation to become part of the verification and validation process. \section{A Blueprint for End-to-End Participatory Design} \label{sec:blueprint} We identify the following requirements to extend participatory design into an \emph{end-to-end} methodology, one that includes the co-design of the robot's automated behaviour. Such a method needs to allow for: \begin{enumerate} \item Systematic observation and study of the use case environment in which the robot is to ultimately be deployed; \item Inclusion and empowerment of relevant stakeholders (users, domain experts) from the initial design phases, such that design and application of the robot/interaction scenario is co-produced by researchers and stakeholders together; \item (Safe and responsible) evaluation of prototypes in the real-world environment(s) into which the robot is eventually intended to be deployed; \item Inclusion of relevant stakeholders in creation of autonomous robot behaviours, which should utilise interaction data collected in the real world; \item Two-way interaction between the expert `teacher' or designer and robot `learner' such that the teacher can better understand the state of the robot/to what extent learning `progress' is being made and hence adapt their teaching appropriately/flag any significant design issues; \item Inclusion of relevant stakeholders in (safe) evaluation of autonomous robot behaviours, as they perform in the real world. \end{enumerate} Requirements 1 and 2 can be addressed by the typical PD methods discussed in Section \ref{sec:relatedPD}, and requirements 3 and 6 can be addressed by carefully designed `in-the-wild' studies. In our work, we therefore look to specifically tackle requirements 4 and 5 by demonstrating how robot automation can be approached as an in-situ, triadic interaction between domain expert teacher(s), robot learner and target end user(s). With LEADOR\xspace, we showcase how this approach can be integrated into one continuous, end-to-end PD process that satisfies all of the above requirements. Table~\ref{tab:blueprint} summarises the key outcomes of, and some potential tools for, each stage of LEADOR\xspace. Figure~\ref{fig:methodcomp} shows how these steps compare to typical PD, as well as who (domain experts and/or engineers) are involved at each stage. Each stage is detailed in full below.
Table~\ref{tab:studies} shows how these steps have been derived from/were represented in our two foundational studies. \begin{table}[] \caption{Key outcomes of, and appropriate tools for, each stage of LEADOR\xspace.} \begin{centering} \tymin=.3\linewidth \begin{tabulary}{\linewidth}{LLL} \toprule & Outcomes & Tools \\ \midrule 1. Problem Definition & Domain understanding & Ethnographic studies, focus groups, brainstorming \\ \vspace{.2cm}\\ 2. Interaction Design & Interaction scenario, robot selection/design & Workshop, role-playing, low-tech prototyping \\ \vspace{.2cm}\\ 3. System Specification & State-action space for the robot, teaching tools & Brainstorming, behaviour prototyping \\ \vspace{.2cm}\\ 4. Technical Implementation & Robot system (sensors and actions), teaching system (authoring tools or learning algorithm) & Software development, lab studies, testing workshops \\ \vspace{.2cm}\\ 5. Real-World Deployment & Delivering on the application target, autonomous robot & In-situ teaching by expert \\ \bottomrule \end{tabulary} \end{centering} \label{tab:blueprint} \end{table} \smallskip \subsection{Step 1: Problem Definition} As noted in Figure~\ref{fig:methodcomp}, Step 1 of our method aligns with best-practice use of PD as previously demonstrated in social robotics. The purpose of this stage is to generate a thorough understanding of the use context in which the robot is to be deployed, and to invite stakeholders to influence and shape the proposed application. It would likely include observations, focus groups and/or interviews with a variety of stakeholders. The focus group methodology presented in~\citep{winkleMutualShapingDesign2019} is one appropriate method that could be used for engaging with stakeholders at this stage, as it facilitates establishing non-roboticist participants as experts, broad discussion of the application context (without presentation of a pre-defined research agenda), participant reflection on the context of use `as is' and researcher-led sharing of technical expertise; followed by detailed consideration and refinement of the research agenda based on researchers and participants now being equal co-designers. \smallskip \subsection{Step 2: Interaction Design} Similarly to Step 1, Step 2 of our method also aligns with best-practice use of PD as previously demonstrated in social robotics. The purpose of this stage is to define and refine the interaction scenario(s) the proposed robot will engage in, and hence the functionalities/capabilities it might require. The robot platform should also be chosen at this stage. For simplicity here we have equated robot platform \textit{choice} with robot platform \textit{design}. Much current social robotics research utilises off-the-shelf robot platforms (e.g. Pepper and NAO from Softbank Robotics) but other works focus on the design of new and/or application-specific platforms. Either can be appropriate for LEADOR\xspace as long as the choice/design is participatory with stakeholders (for a good example of PD in design of a novel robot, see \citealt{alves-oliveiraYOLORobotCreativity2017}'s work on designing the YOLO robot).
Focusing then on the more specific application of the robot and the interaction(s) it should engage in, methods for PD might include focus groups and similar as per Step 1, but could also include more novel and/or creative PD activities such as script writing~\citep{bjorlingParticipatoryResearchPrinciples2019}, roleplaying (including also stakeholder teleoperation of the robot)~\citep{bjorlingParticipatoryResearchPrinciples2019,alves-oliveiraYOLORobotCreativity2017} and accessible, `low-tech' prototyping~\citep{valenciaCodesigningSociallyAssistive2021}. Note that there is an important interaction design decision to be made here regarding what final deployment of the robot `looks like' in terms of long-term oversight by/presence of domain expert(s) (those involved in its co-design or otherwise) and the role those experts play with regards to the target user. This can be reflected in the teaching interaction setup, specifically with regards to the amount of interaction between the domain expert(s) and target users (see Figure~\ref{fig:three-way}). For example, it was decided early on in the design of~\cite{winkle2020insitu}'s fitness coach robot that there was no intention to ever fully remove the expert presence from the interaction environment. By contrast, in \cite{senft2019teaching} the intention from the outset was to create a fully autonomous and independent robot that interacted alone with the target users. Such decisions regarding the role of domain experts would ultimately emerge (explicitly or implicitly) in conjunction with deciding the robot's functionalities and the further system specification undertaken in Step 3. However, this long-term desired role of the domain expert(s) should be made clear, explicitly, at this stage, such that it can be reflected in the approach to program definition. \smallskip \subsection{Step 3: System Specification} As shown in Figure~\ref{fig:methodcomp}, it is at this stage that our method begins to diverge from the typical PD process, although we continue to utilise PD methods. This step is concerned with co-design of the system specificities required to (i) deliver the interaction design resulting from Step 2 and (ii) facilitate the expert-led teaching phase upon real-world deployment that is fundamental to our method (see Step 5). In summary, the aim of this step is to co-design the robot's action space, input space, and the tool(s) required to facilitate the bi-directional teaching interaction between the domain expert and the robot. There is also some similarity here to the design process for a WoZ or teleoperated system, which would also require design of the robot's action space and an interface for (non-roboticist) teleoperation of the robot. The key difference here is the additional requirement to specify the robot's input space and the choice of teaching tools for the move towards autonomy during Step 5. \smallskip \subsection{Step 4: Technical Implementation} The main development effort for our method lies in producing the full architecture and tools to allow domain experts to specify autonomous robot behaviour. We note here that the technical implementation required is likely to be greater than that required for a typical WoZ setup and might not be simpler than a heuristics-based robot controller. Four main components need to be developed during this phase: \begin{enumerate} \item A set of high-level actions for the robot. \item A set of sensory inputs that will be used to drive the future robot behaviour.
\item A representation of the program which will encode autonomous behaviour. \item Expert tools to specify the mapping between the sensory state and the actions. \end{enumerate} With our method, the program representation could take the shape of a machine learning algorithm that takes inputs from the expert via the interface and learns a mapping between the state of the world when an action was selected and the action itself (the approach taken in our foundational works). Alternatively, the representation could allow the expert to encode a program explicitly, for example through state machines or trigger-action programming, while still allowing the expert to update the program in real time and control the robot's actions to ensure that they remain appropriate. A typical automation system would replace the expert tools with an actual definition of the behaviour, making use of the program representation to map sensors to actions and fully define an autonomous behaviour. On the other end of the spectrum, a WoZ setup might not need a representation of the program, but would instead rely on the interface to display relevant sensory inputs to the wizard (if any) and allow them to select what action to execute. \subsection{Step 5: Real World Deployment and Teaching Phase} Undertaking robot automation (and evaluation) in-the-wild is a key part of LEADOR\xspace. To support a mutual shaping approach to robot design and ensure appropriate robot behaviour, the teaching phase should adhere to the following requirements: \begin{enumerate} \item it must be undertaken in situ, i.e., in the final context of use, and with the real target population. \item it must utilise a domain expert teaching the robot as it delivers on the application interaction. \item the expert-robot interaction should be bi-directional, i.e., the expert should be able to define and/or refine the robot's autonomous behaviour policy, while the robot informs the expert about its status. \end{enumerate} Requirement 1 ensures that the approach is ecologically valid, and that the information used by the expert for the automation is suited to the real challenges and idiosyncrasies of the desired context of use. Requirement 2 ensures that people with domain knowledge can encode that knowledge in the robot. Furthermore, the presence of the expert should be used to ensure that the robot is expressing an appropriate behaviour at all times. As the teaching happens in the real world, with the real users, there is limited space for trial and error. The expert can act as a safeguard to ensure appropriate robot behaviour even in the initial phases of teaching. Requirement 3 ensures that the expert can create a mental model of the robot behaviour. This point is a key difference from non-interactive teaching methods such as those based on offline learning (e.g., \citealt{sequeira2016discovering}). With the robot's feedback on its policy (through suggestions or a visual behaviour representation), the expert can assess the robot's (evolving) capabilities and decide what inputs would improve the robot's policy further. Finally, during this real-world deployment, if the robot is ultimately expected to interact autonomously/unsupervised, the expert can use their mental model of the robot behaviour to decide when enough teaching has been done and when the robot is ready to interact autonomously. By relying on online teaching, this decision does not have to be final, as the expert can seamlessly step back into the teacher position when the robot interacts with sensitive populations or if the robot requires additional refinement of its policy.
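To summarise how these components can fit together at run time, the sketch below implements a toy version of the suggest-validate-execute loop that underpins the teaching phase. It is an illustration under stated assumptions: the learner is a trivial nearest-neighbour model and all names are invented, whereas our foundational works realised this loop with SPARC \cite{senft2015sparc} and richer state representations and learning algorithms.
\begin{verbatim}
import math
from typing import List, Optional, Tuple

class OnlinePolicy:
    """Toy online learner: remembers expert-validated state-action pairs."""

    def __init__(self) -> None:
        self.memory: List[Tuple[List[float], str]] = []

    def suggest(self, state: List[float]) -> Optional[str]:
        """Propose the action taken in the most similar past state, if any."""
        if not self.memory:
            return None
        return min(self.memory, key=lambda m: math.dist(m[0], state))[1]

    def update(self, state: List[float], action: str) -> None:
        """Online update from one expert-validated decision."""
        self.memory.append((state, action))

def teaching_step(policy: OnlinePolicy, state: List[float],
                  expert_choice: Optional[str]) -> Optional[str]:
    """One loop iteration: robot suggestion, expert validation or
    correction, then execution and online update."""
    suggestion = policy.suggest(state)
    action = expert_choice if expert_choice is not None else suggestion
    if action is not None:  # only expert-approved actions are executed
        policy.update(state, action)
    return action  # in a real system this would trigger robot execution

policy = OnlinePolicy()
teaching_step(policy, [0.2, 40.0], "hint")        # expert corrects the robot
print(teaching_step(policy, [0.25, 38.0], None))  # accepted suggestion: 'hint'
\end{verbatim}
Because every executed action passes through the expert, the policy only ever stores expert-validated state-action pairs, and seeing each suggestion before it runs is what allows the expert to build the mental model discussed above.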
\section{Foundational Studies} The LEADOR\xspace method is primarily derived from two foundational studies conducted by the authors, which were themselves informed by the authors' previous experiences working with domain experts in the design of social robots. The first, presented in \citet{senft2019teaching}, explores, through a study with 75 children, how the teaching interaction could be used to create an autonomous robot behaviour. As shown in Table \ref{tab:studies}, this study did not employ PD; the authors, researchers in HRI, carried out the early steps themselves based on their previous related experience. The second, presented in~\citet{winkle2020insitu}, built on the first study by utilising the same teaching approach to robot automation, but incorporating it into an end-to-end participatory design process to support mutual shaping. The end goal of each study was also slightly different, as \citet{senft2019teaching} aimed to produce a robot that would ultimately interact with users with little to no further expert involvement. \citet{winkle2020insitu} also aimed to produce an autonomous robot that would primarily interact 1:1 with users, but with no desire to remove the expert, who would have their own interactions with the users, and/or provide additional teaching to the robot should they deem it necessary. \begin{table}[] \begin{center} \tymin=1.5cm \begin{tabulary}{\linewidth}{LLL} \toprule & School Based Educational Robot & Gym Based Robot Fitness Coach \\ \midrule Step 1 & Decision by researchers, based on experience, to focus on learning about food chains through an educational game for children aged 8-10. & Researchers identified the NHS C25K exercise programme based on research goals (longitudinal, real-world HRI) but worked with a fitness instructor to observe the typical environment and refine the problem definition.\\ \vspace{.1cm}\\ Step 2 & Decision by researchers to focus on the robot-user interaction, with the expert only providing robot commands and oversight of the robot behaviour to ensure that each action is validated by them. Goal is to evaluate the creation of an autonomous robot. & Decision in conjunction with the fitness instructor that the robot would lead exercise sessions (in which he would minimise interaction with exercisers) but that he would provide additional support (e.g. health advice, stretching) outside of these. Goal is to create and demonstrate an effective, real-world SAR-based intervention via PD (as responsible robotics). \\ \vspace{.1cm}\\ Step 3 & Using SPARC \cite{senft2015sparc} as the interaction framework, robot state and action spaces defined by researchers. Teaching through a GUI on a tablet. & Also using SPARC~\cite{senft2015sparc}, the robot state and action spaces as well as the teaching GUI were all co-designed with the fitness instructor.\\ \vspace{.1cm}\\ Step 4 & Implementation of all the actions and learning algorithm. Prototype evaluation in the lab. Initial pilot study in schools for evaluating the game with the target population, also used as teacher training. & Implementation of all the actions and learning algorithm. Fitness instructor provided all dialogue for robot actions.
Prototype evaluation was undertaken in the lab and in the final study gym environment; final robot placement and system installation details were also decided in conjunction with the fitness instructor.\\ \vspace{.1cm}\\ Step 5 & Deployment in two local schools with more than 100 children over multiple months. Between-subject evaluation with three conditions: a passive robot, a supervised robot (during the teaching interaction), and an autonomous unsupervised robot. & Deployed into the university gym for teaching and autonomous evaluation through delivery of the C25K programme (27 sessions over 9-12 weeks) to 10 participants. Ran a total of 232 exercise sessions, of which 151 were used for teaching the IML system, 32 were used for evaluating the IML system when allowed to run autonomously and 49 were used to test a heuristic-based `control' condition (all testing was within-subject).\\ \bottomrule \end{tabulary} \caption{Overview of activities undertaken in the two case studies as exemplars for applying our generalised methodology. See Table~\ref{tab:storyboard} for a pictorial `storyboard' of this process and the co-design activities undertaken for development of the Robot Fitness Coach.} \label{tab:studies} \end{center} \end{table} \smallskip \subsection{Study 1: Evaluating the Teaching Interaction} The goal of this first study was to evaluate if the teaching interaction could be used to create autonomous social behaviours \cite{senft2019teaching}. This study was designed by the authors, who had experience designing robots for the application domain, but did not involve external stakeholders such as teachers. During the problem definition phase, the researchers decided to contextualise the work in robot tutoring for children, and to explore questions such as how robots can provide appropriate comments to children (both in terms of context and timing) to stimulate learning. This work was based on the researchers' experience and knowledge of educational robotics. During the interaction design phase, the researchers decided to focus the application interaction around an educational game in which children could move animals on a screen and learn about food webs. This part included an initial prototype of the game. As the goal was to explore how autonomous behaviours could be created, the teacher was not involved in the game activity; only the robot interacted with the child. The robot used was a NAO robot from Softbank Robotics. In the system specification, the state and action spaces of the interaction were selected. Examples of state features include game-related components (e.g. the distance between animals) and social dynamics elements (e.g. the time since each agent's last action). The robot's actions covered categories such as encouragements, congratulations, hints and rule reminders. The teacher-robot interaction used SPARC \cite{senft2015sparc}. In the technical implementation phase, the learning algorithm was developed, tested and interfaced with the other elements of the system. The teaching interface was also created in such a way as to allow the teacher to select actions for the robot to execute and receive suggestions from the robot. At this stage, initial prototypes were tested in lab studies and schools.
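As a concrete, purely illustrative example of what such a state-action specification can look like, the sketch below defines a toy state vector and action set in the spirit of this study; the feature names, values and dialogue strings are invented rather than taken from the actual system.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class GameState:
    # Game-related components (invented examples)
    dist_prey_predator: float    # distance between two animals on screen
    child_score: int
    # Social dynamics components (invented examples)
    secs_since_child_action: float
    secs_since_robot_action: float

# Illustrative action categories with example utterances (invented).
ACTIONS = {
    "encouragement": "You can do it!",
    "congratulation": "Well done!",
    "hint": "Look at what the frog could eat.",
    "rule_reminder": "Remember, animals need to eat to survive.",
}

state = GameState(dist_prey_predator=0.3, child_score=4,
                  secs_since_child_action=2.0, secs_since_robot_action=15.0)
print(state, list(ACTIONS))
\end{verbatim}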
The study adopted a between-participant design and explored three conditions: a passive robot, a supervised robot (referring to the teaching interaction), and an autonomous robot (where the teacher was removed from the interaction and the learning algorithm disabled). Results from the study showed that the teaching interaction allowed the teacher to provide demonstrations to the robot to support learning in the real world. The teacher used the teaching interaction to create a mental model of the robot behaviour. When deployed to interact autonomously, the robot enacted a policy similar to the one used by the teacher in the teaching phase: the frequency of actions was similar and the robot captured relations and timing between specific events and actions (e.g. that a congratulation action should normally be executed around two seconds after an eating event triggered by the child). Overall, this study demonstrated that humans can teach robots social policies through in-situ guidance. \subsection{Study 2: Teaching Interaction as Participatory Design} The goal of this study was to use the teaching interaction approach to facilitate the creation of a fully expert-informed/expert-in-the-loop autonomous socially assistive robot-based intervention for the real world. The fundamental activity to be delivered by the robot, the NHS C25K programme, was selected by the researchers based on this research goal; but all study implementation details were decided and designed in conjunction with a domain expert (fitness instructor) throughout. Given the end-to-end, constant expert involvement in this study, there was seamless progression and some overlap between the problem definition, interaction design and system specification phases as we present them for LEADOR\xspace. A number of co-design activities were undertaken (over a total of 6 sessions totalling approx. 12.5 hours) which ultimately covered all of these key phases, sometimes in parallel, allowing for iteration of the overall study design. Problem definition was achieved by the researchers working with the fitness instructor to (i) understand how a programme like C25K would be delivered by a (human) fitness instructor and (ii) explore the potential role a social robot might take in supporting such an intervention. This involved the researchers visiting the university gym and undertaking mock exercise sessions with the instructor, and the instructor visiting the robotics lab to see demonstrations of the proposed robot platform and a presentation by the researchers on their previous work and project goals. The robot used was a Pepper robot from Softbank Robotics. For the interaction design, the researchers and fitness instructor agreed that exercise sessions would be led by the robot and primarily represent robot-user interactions, with the fitness instructor supervising from a distance and only interacting to ensure safety (e.g. in the case of over-exertion). As this study also aimed to test (within-subject) the appropriateness of resultant autonomous behaviours, it was decided to purposefully leave the details of the fitness instructor's role somewhat ambiguous to exercising participants. The instructor was not hidden away during the interaction and it was clear he was supervising the overall study, but exercisers were not aware of the extent to which he was or wasn't engaging in teaching interactions with the robot during sessions.
As noted in Section~\ref{sec:blueprint}, deciding on what long-term deployment should `look like' in terms of robot-user-expert interactions is a key design requirement at this stage. For the robot fitness coach, we imagined a `far future' scenario, where one of our robot fitness coaches would be installed next to every treadmill on a gym floor, supervised by one human fitness instructor. That instructor would ensure exercisers' physical safety and still play a role in their motivation and engagement, as human-human interaction is known to do. This type of interaction, with one expert, multiple robots and multiple target users, is a common goal in many assistive robot applications where some tasks could be automated, but there is a desire to keep an expert presence to e.g. maintain important human-human interactions and ensure user safety. The system specification represented somewhat of a `negotiation' between the researchers and the fitness instructor, as he identified the kind of high-level actions and inputs he felt the robot ought to have, and the researchers identified how feasible that might be for technical implementation. The state space consisted of static and dynamic features that were designed to capture exerciser engagement, task performance and motivation/personality; all identified by the fitness instructor as being relevant to his decisions in undertaking fitness instruction himself and hence to teaching the robot how best to interact with a particular participant. The action space was divided into two categories: task actions and social supporting actions. The task actions were fundamentally set by the C25K programme (i.e. when to run or walk and for how long at a time). The social supporting actions were then broken down into sub-categories covering time reminders, social interaction, praise, checking on the user, robot animation and proxemics (leaning towards/away from the user). Importantly, system specification for this study also included co-designing the GUI that would facilitate the bi-directional teaching interaction (also utilising SPARC \cite{senft2015sparc}) between the robot and the fitness instructor, with the fitness instructor himself. The technical implementation phase essentially mirrored that of Study 1: the learning algorithm was developed, tested and interfaced with the other elements of the system. The teaching interface was also finalised based on the co-design activities described previously, and similarly allowed the fitness instructor to select actions for the robot and to respond to its suggestions (an illustrative sketch of such a teaching loop is given below). Initial prototypes of both the robot and the GUI were tested in lab studies and the final gym environment. In the real-world deployment, the researchers evaluated the system in a university gym with 10 participants recruited to undertake the 27-session C25K programme over a maximum of 12 weeks. The study adopted a within-subject design and explored three conditions: a supervised robot (referring to the teaching interaction), an autonomous robot (where the fitness instructor was still in position but allowed all learner-suggested actions to auto-execute) and a heuristic-based autonomous robot: a `control' condition against which to compare the `teaching interaction as PD' approach, representing a `next-best' alternative for generating expert-informed autonomous behaviour.
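To make the flow of such a teaching interaction concrete, the following is a minimal, hypothetical Python sketch of a SPARC-style teaching loop. All names (\texttt{ToyPolicy}, \texttt{expert\_decision}, the toy states and actions) are illustrative assumptions for exposition, not the interface of the systems used in our studies.
\begin{verbatim}
# Minimal, hypothetical sketch of a SPARC-style teaching loop.
# The toy policy and all names are illustrative assumptions only.
import random
from collections import Counter, defaultdict

class ToyPolicy:
    """Counts which action the teacher chose in each (discretised) state."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def suggest(self, state):
        if self.counts[state]:
            return self.counts[state].most_common(1)[0][0]
        return "wait"  # default before any teaching has occurred

    def update(self, state, action):
        self.counts[state][action] += 1

def expert_decision(state, suggestion):
    """Stand-in for the teaching GUI: the expert accepts or overrides."""
    preferred = "congratulate" if state == "goal_reached" else "encourage"
    return suggestion if suggestion == preferred else preferred

policy = ToyPolicy()
for step in range(20):                           # teaching phase
    state = random.choice(["goal_reached", "struggling"])
    suggestion = policy.suggest(state)           # robot proposes an action
    action = expert_decision(state, suggestion)  # expert validates/corrects
    policy.update(state, action)                 # each execution is a demonstration
    # execute(action) would run on the robot here

print(policy.suggest("goal_reached"))            # autonomous phase: "congratulate"
\end{verbatim}
The essential property illustrated is that teaching and execution are the same activity: every validated action both drives the live interaction and serves as a training example for the learner.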
Results from the study again demonstrated the feasibility of SPARC and IML for generating autonomous socially assistive robot behaviour, suggested that the expert-robot teaching interaction approach can have a positive impact on robot acceptability (by the domain expert and target users) and that the teaching approach yields better autonomous behaviour than expert-informed heuristics as a `next-best' alternative for expert-informed autonomous behaviour creation. \subsection{Evidence of Mutual Shaping} Typical PD facilitates mutual shaping as it allows non-roboticist domain experts to shape research goals and design guidelines, to evaluate robot prototypes, etc. Here, we reflect on observations of mutual shaping effects in our foundational works, specifically resulting from our teaching approach to robot automation. During our first study, we observed evidence of mutual shaping and of the teacher creating a mental model of the robot. For example, our teacher realised with experience that children tended to have issues with some aspects of the game (e.g., what food a dragonfly eats). Consequently, she changed her strategy to provide additional examples and support for this aspect of the game. Similarly, the teacher also found that the robot was not initiating some actions often enough and consequently used these actions more frequently toward the end of the teaching phase to ensure that the robot would exhibit enough of them. This exactly evidences the notion that human teachers can tailor their teaching to a (robot) learner's progress \citep{bloom19842,thomaz2008teachable}. In the second study, we were able to demonstrate mutual shaping in the way the fitness instructor used the robot differently for different participants and/or at different stages of the C25K programme. The longitudinal nature of this study, combined with our approach of supplementing the dyadic robot-user interactions with expert-user interactions, meant the fitness instructor got to know each user's exercise style/needs and could tailor the robot's behaviour accordingly. This resulted in the autonomous robot similarly producing behaviour that varied across participants. Similarly, as the programme progressed, the fitness instructor could tailor the robot's behaviour to reflect the changing exercise demands (e.g. using fewer actions when the periods of running were longer). The flexibility of our approach was also demonstrated when, in response to this increase in intensity, the fitness instructor requested we add a robot-led cool-down period to the end of each exercise session. This was relatively simple to implement from a technical perspective (an additional `walk' instruction at the end of each session plan) but represented a new part of the session for which there existed no previous training data. As we made this change within the teaching phase (before the switch to autonomous operation) the instructor was able to address this, such that the robot was able to successfully and appropriately support this new cool-down phase when running autonomously. We also saw an interesting, emergent synergy in the way that the fitness instructor utilised and worked alongside the robot coach. Towards the end of the study, as exercise sessions became more demanding, the fitness instructor took more time at the end of each session to undertake stretching exercises with each participant.
This led to small amounts of overlap between participants, at which point the fitness instructor would start the next participant warming up with the robot, whilst he finished stretching with the previous participant. We find this to be compelling evidence of the way domain experts will change their practice and/or the way they utilise technological tools deployed into their workplace, particularly when they can be confident in their expectations of how that technology will perform, as is particularly fostered by our approach. \smallskip \subsection{IML for the Teaching Interaction: Opportunities and Limitations} As noted previously, both of our foundational studies utilised IML via the SPARC paradigm to facilitate the teaching interaction. From a technical perspective, our foundational studies demonstrate the feasibility and relative effectiveness (in terms of teaching time) of this approach. Fundamentally, LEADOR\xspace is agnostic with regards to the specific computational approach to facilitating the teaching interaction, but we find IML to be a particularly compelling solution, in line with the overall aims of the method, as it makes for an intuitive bi-directional teaching interaction for the domain expert. Specifically, through one single interface, they can see what the robot intends to do (and potentially why) before that action is executed, improving their understanding of the robot's learning progress, and instantiate teaching exemplars in real-time, informed by that understanding as well as the instantaneous requirements of the application task. However, we draw attention to one key limitation here regarding expert-robot interactions and assessment when using IML. An important element of mutual shaping not considered here is if/how/to what extent the suggestions made by the learning robots may have influenced the domain experts. For example, had the learning robots not been making suggestions, such that the robot was entirely controlled/teleoperated by the experts, would the distribution and timing of actions have remained the same? Further, if the experts did not have the ability to actively reject suggestions (indicating that the learner was not producing appropriate robot behaviour) would they still have post-hoc identified those actions as being inappropriate? This is particularly interesting given the high number of suggested actions still being rejected at the end of the training phase, in both of our foundational studies, immediately followed by seemingly appropriate robot behaviour that was positively evaluated by the experts themselves during autonomous operation. Success of our approach inherently assumes that the domain expert/system `teacher' would provide a `correct' and fairly consistent response; i.e. that they (i) can correctly assess the quality of each action suggested by the robot and make an informed decision about whether this action should be executed and (ii) are always able to ensure that required robot actions are executed in a timely fashion. With SPARC, these robot suggestions are the main means of helping the expert create a mental model of the robot behaviour. Consequently, whilst our results demonstrate that IML does fundamentally `work' for automating robot behaviour, and that our domain experts did construct a mental model of the robots' behaviour, there remains an open question regarding how the robot could improve the transparency of its behaviour to actively support mental model creation for the teacher.
\section{Discussion} \subsection{A Flexible and Effective Method for Automating Social Robots} We suggest that LEADOR\xspace can be used to design robots for a variety of interaction settings, in terms of the required autonomy and the nature of expert-robot-user interactions long-term. We propose two axes to describe the different types of interaction that might be desired, based on the application (Figure~\ref{fig:autonomy-exper}). A first axis describes the extent to which the domain expert(s) and user(s) are expected to interact long term, as a supplement to the robot-user interaction(s). The second axis reflects the autonomy of the robot, from full supervision (teleoperation) to full autonomy. These two axes are independent as, for example, cases exist where the expert might be continuously interacting with the target users, while continuing (or not) to supervise and/or improve the robot's autonomous behaviour long term. In addition, these axes do not represent a discrete space, as the teaching interaction element of LEADOR\xspace specifically makes it possible to move along either axis at any point during real-world deployment. The robots developed in our foundational studies demonstrate this flexibility, and exist in slightly different spaces on these axes. \cite{senft2019teaching}'s educational robot is an example of an autonomous robot operating without the expert, and the teaching interaction represented a typical Wizard-of-Oz setup, i.e. there was no direct expert-user interaction. \cite{winkle2020insitu}'s robot fitness coach is closer to an autonomous robot operating side-by-side with the domain expert, and the teaching interaction utilised some interaction between the expert and the users (although this was undertaken \emph{outside} of direct teleoperation). \begin{figure} \centering \includegraphics[width=0.8\linewidth]{figs/png/autonomy_vs_expert_user.png} \caption{Two dimensional representation for visualising the different types of long-term expert-robot-user interactions that a social robot might be designed for, all of which LEADOR\xspace can facilitate. Note this is not a discrete space, and LEADOR\xspace specifically makes it possible to move along these axes upon real-world deployment.} \label{fig:autonomy-exper} \end{figure} The two foundational studies also demonstrate different, complementary elements of the effectiveness of LEADOR\xspace for designing social robots. \cite{senft2019teaching} fundamentally demonstrated the practical feasibility of the teaching interaction for creating appropriate autonomous behaviour. After a teaching phase with 25 children, the robot was deployed autonomously and without expert supervision. It displayed a similar policy to when it was supervised, for example, capturing connections between some events and actions with appropriate timing. Whilst \cite{winkle2020insitu} again demonstrated similarity between supervised and autonomous behaviour, we also specifically demonstrated that the teaching interaction resulted in a better autonomous robot than an expert-informed, heuristic-based alternative. In addition, we specifically explored to what extent the overall LEADOR\xspace method could support mutual shaping and influence robot acceptability. To this end, the significant co-design work undertaken by the domain expert seems likely to have contributed to the high level of ownership he seemed to feel toward the system, and the way in which he conceptualised the robot, throughout, as an independent agentic colleague he was training.
When asked whether he perceived Pepper as more of a tool or a colleague, the fitness instructor commented \textit{``It was definitely more of a colleague than a tool...I like to think her maybe early bugs or quirks definitely gave her a bit more of a personality that maybe I held on to''}. In addition, when evaluating the robot's performance, the instructor also reflected on the difference between how the robot might behave in comparison to himself: \textit{``Pepper's suggestions might not be what *I* would say in that exact same situation, however it doesn't mean that what was said or suggested was wrong''}. This gives credibility to the suggestion that LEADOR\xspace can be used to create robots that do not simply attempt to imitate or replicate the domain expert directly, but instead play a distinct but complementary role alongside that domain expert in delivering an assistive intervention. The fitness instructor's feedback also suggested use of the robot did not prevent him from still developing a working relationship with the exercisers, nor from having a positive impact on their motivation, as he \textit{``did care about their progress and their health''}. This appears to be true on the exercisers' side too, as their evaluations suggested they perceived the fitness instructor and the robot as playing distinct but complementary roles in their undertaking of and engagement with the prescribed exercise programme: \textit{``Pepper was a good instructor and positively motivated my runs. The role of Don [the fitness instructor] assisted this in that having him there meant I could follow the robot’s instructions safe in the knowledge that there was some support there should anything go wrong!''} To this end, the fitness coach robot example demonstrates how LEADOR\xspace seemingly contributes to robot acceptability, by both domain experts and target users, and can successfully facilitate meaningful triadic (domain expert - robot - user) interactions in human-centered domains where there might be a desire to reduce domain expert workload without ever removing them from the interaction completely. \subsection{Supporting `Responsible by Design' Robotics} The Foundation for Responsible Robotics (FRR) defines responsible robotics as `the responsible design, development, use, implementation, and regulation of robotics in society'\footnote{https://responsiblerobotics.org/}. Concerning research and development, the FRR's vision demonstrates significant overlap with the goals of mutual shaping, and hence our goals in proposing LEADOR\xspace: \textit{``Responsible robotics starts before the robot has been constructed. Ethical decision-making begins in the R\&D phase. This includes the kind of research practices that are employed; ensuring that a diverse set of viewpoints are represented in the development of the technology; using methods of development and production that are sustainable; and taking into consideration the impact that the technology will have on all stakeholders to mitigate harm preemptively rather than after the fact.''} A significant number of attempts to more formally define ethical design and development have taken the form of published principles for AI and robotics\footnote{http://alanwinfield.blogspot.com/2019/04/an-updated-round-up-of-ethical.html}, many of which similarly identify the importance of engaging (non-roboticist) users and domain experts in robot design and evaluation processes.
Arguably one of the more practical resources is the British standard BS8611-2016 \textit{Guide to the ethical design and application of robots and robotic systems}~\citep{bsiBS861120162016}, which explicitly identifies ethical risks posed by robots, mitigating strategies and suggested methods for verification and validation. Notably, the standard suggests that a number of the identified ethical hazards might be verified and validated through \textit{expert guidance} and \textit{user validation}. Through LEADOR\xspace, such guidance and validation is inherently `built-in' to the design and development process. Based on this, we posit that, in supporting a mutual shaping approach to robot development, and specifically by `opening up' robot automation to non-roboticists (such that they can contribute to robot design and automation, but also better understand robot capabilities and limitations), LEADOR\xspace also represents a concrete implementation of a \emph{responsible robotics} approach, and offers a practical way to create social robots with expert guidance and user validation being inherent to the development process. \subsection{Future Development} \subsubsection{Inclusion of Application Targets in Design, Automation and Evaluation} A key limitation in both of our foundational works was the lack of target user involvement during the design process. This is partly because both of these works concerned the development of robots that would be assisting the domain expert practitioners (i.e. a teaching assistant and a fitness instructor), and so it made sense to focus on working with such experts as co-designers of the system. However, as discussed in the introduction, inclusion of all stakeholders is a key aim of mutual shaping approaches to robot design/development. A desire to include target users in the robot’s design and evaluation would raise the interesting question of how target users, who are expected to be beneficiaries of the interaction, could design the robot. In a number of situations where the robot is expected to provide support or additional knowledge, including target users in the co-design of the action space, for example, could either be complex or negate the \emph{illusion} of the robot as an agent. However, target users could certainly be included in preliminary testing of those actions designed with a domain expert. \smallskip \subsubsection{Alternative Teaching/Learning Interactions} The method presented in this paper focused on a teaching phase where an expert teaches the robot how to interact with a target user, with the target user unaware of the extent to which the expert is involved in the robot behaviour. However, our method is also suited to other interaction designs not explored in our foundational studies. While situations such as therapy or education require the expert and target user to be different persons, a large number of other domains relax this constraint. For example, an elderly person living at home could have a robot carer, and teach the robot how to support them in their daily activities. In this case, the target user is the person who knows their own needs best and as such would be the perfect expert. LEADOR\xspace would be highly applicable to this situation as the target users could be involved early in the design process, help specify the state and action spaces and the tools they would need, and finally teach their robot in situ how to interact while benefiting from the interaction themselves.
Alternatively, building on the previously noted limitation regarding target user inclusion, applications where the robot is to play more of a \emph{peer} role, rather than an expert authority, might be best achieved by having one target user teach the robot how to interact with another target user. This might be particularly appropriate for e.g. allowing teenagers to automate companion robots that support teenagers' mental health~\citep{bjorlingParticipatoryResearchPrinciples2019}. This raises a number of interesting research questions regarding how the teaching interaction might impact on the teacher's (self-)understanding of the application domain, representing another aspect of mutual shaping that could be considered in more detail in future works. An alternative, exciting teaching interaction is to have the teaching phase be open and transparent to the target user. Teaching robots could then be similar to how adults teach children to interact: by providing explicit feedback and guidance openly in the social environment. This situation raises a number of open questions, such as to what extent having the expert provide feedback to the robot could impact the ascribed agency of the robot, or how the target user could be included in telling the robot how best to help them. We have good evidence from our work~\citep{winkle2020insitu} that such open interaction would not `break the illusion' of the robot being an independent (credible) social agent. Further, previous work suggests that robot users value the human developers `behind' the robot, as it is their `genuine intentions' that underlie the robot's social and assistive behaviours~\citep{winkleEffectivePersuasionStrategies2019}. In sensitive application environments such as the previously mentioned teenage mental health support, such openness may indeed be crucial to robot effectiveness and acceptability~\citep{bjorlingExploringTeensRobot2020}. However, these alternative teacher/learner configurations need to account for the existing practical constraints of using reinforcement learning (RL) in human-robot interaction. Indeed, in the context of human-robot interaction, RL faces two main issues: (1) the large number of datapoints required to improve the policy (which have to come from real-world interaction) and (2) the risks posed by RL `exploration' in the real human-robot interaction, where the RL algorithm might suggest actions that are inappropriate in a given context. In our two studies, the domain expert also acted as a `gatekeeper' for the robot's suggestions, and as a general safety net, able to intervene if the autonomous robot behaviour was inappropriate. Likewise, when applying LEADOR\xspace in other scenarios, adequate safeguarding needs to be in place, until further research on reinforcement learning can provide adequate safety guarantees. Alternatively, the expert could serve early on to help create an initial safe and effective policy by providing a high amount of guidance. Then, in a second phase, the expert could revert to only the `gatekeeper' role, working as a safeguard to ensure that the robot's policy has a minimum efficacy, while letting the robot self-improve. Finally, when the robot reaches sufficient expertise in the interaction, it could be left to fine-tune its policy with less supervision; a sketch of this staged relaxation of supervision is given below.
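This staged protocol can be made concrete with a short, hypothetical sketch. The function names, phases and the confidence threshold below are illustrative assumptions, not a prescription; the point is only that the expert's role can be narrowed progressively without ever being removed.
\begin{verbatim}
# Hypothetical sketch of staged supervision:
# full guidance -> gatekeeping only -> lightly supervised self-improvement.
# `suggest`, `approve` and `teach` stand in for the learner and the
# expert interface; the 0.5 threshold is an illustrative assumption.

def step(state, phase, suggest, approve, teach):
    """Run one robot decision under the given supervision phase."""
    action, confidence = suggest(state)
    if phase == "guidance":            # expert selects/corrects every action
        action = teach(state, action)
    elif phase == "gatekeeping":       # expert only vetoes suggestions
        if not approve(state, action):
            return None                # suppressed, nothing executed
    elif phase == "autonomous":        # expert consulted only at low confidence
        if confidence < 0.5 and not approve(state, action):
            return None
    return action                      # executed (and usable as a demonstration)

# usage with trivial stand-ins:
print(step("warm_up", "gatekeeping",
           suggest=lambda s: ("encourage", 0.9),
           approve=lambda s, a: True,
           teach=lambda s, a: a))      # -> encourage
\end{verbatim}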
\section{Conclusion} In this article we present \emph{Led-by-Experts Automation and Design Of Robots\xspace} (LEADOR\xspace), a method for end-to-end participatory design (PD) of autonomous social robots that supports a mutual shaping approach to social robotics. This general method is derived from two independent foundational studies and represents a culmination of the authors' experiences of working with domain experts in the development of autonomous socially assistive robots. We describe the activities undertaken in those studies to demonstrate how the method has been derived and give tangible examples of how it might be applied. Together, we suggest these foundational studies also demonstrate both the feasibility and the value of the approach, as both resulted in acceptable, autonomous and effective socially assistive robots successfully utilised in complex real-world environments. The first key contribution of LEADOR\xspace is to make robot \emph{automation} participatory, such that non-roboticist domain experts can contribute directly to generating autonomous robot behaviours. This particularity complements more typical uses of PD for e.g. generating initial robot design guidelines or evaluating robot prototypes. We achieve this expert-led automation by utilising a \emph{teaching interaction}, whereby the domain expert(s) can directly define and refine the robot's autonomous behaviour through a teaching interface. Both of our foundational studies utilised interactive machine learning and the SPARC paradigm~\citep{senft2015sparc}, which we suggest is particularly well suited to the overall method goals; we therefore reflect on this approach and its benefits, challenges and limitations. However, whilst we refer to this as a teaching interaction, as the domain expert is `teaching' the robot how to behave, our method is agnostic as to the specific technical approach taken (e.g. machine learning, authoring) to facilitate it. The second key contribution of LEADOR\xspace is to facilitate a mutual shaping approach throughout robot development. This is achieved, firstly, by the increased domain expert participation in robot automation as described above. In addition, however, our integration of the teaching interaction into real-world robot deployment means that this automation of robot behaviour can actually be informed by and reflect the complex and nuanced realities of the real-world context, capturing the expert's tacit and intuitive responses to real-world social dynamics. Given that teaching can be re-convened at any time, the method also facilitates the updating of robot behaviours in response to these dynamics evolving or new dynamics emerging, i.e. in response to observed mutual shaping effects. More generally, the in-situ robot deployment and expert teaching role maximises the opportunity to identify and understand such mutual shaping effects to better evaluate the robot's overall impact and efficacy for the proposed application. In facilitating end-to-end PD and mutual shaping, we also suggest our method inherently supports responsible robotics, by design. Specifically, it allows for a diverse set of viewpoints to be represented in the development of the technology, and for preemptive consideration of the impact that technology will have on stakeholders.
Finally, on a practical level, we also suggest our method can better facilitate multi-disciplinary working as it systematically combines participatory design and technical development, such that non-roboticist researchers and stakeholders are no longer excluded from any stage of the development process. In summary, we suggest LEADOR\xspace is an all-around effective approach for creating socially intelligent robots, as \emph{practical} as it is \emph{responsible} in facilitating the creation of expert-informed, intuitive social behaviours. We identify a number of areas for potential future development, which we hope will be of interest to other roboticists in refining the method further and working towards the democratisation of robot design and development. \section*{Conflict of Interest Statement} The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. \section*{Author Contributions} KW and ES led foundational studies 1 and 2 from which this work is derived, both of which were (independently) conducted in close collaboration with SL. KW and ES led on derivation of the generalisable method based on their shared experiences, with all authors contributing to reflections on the foundational studies, the resultant implications for the generalisable methodology and producing the final manuscript. \section*{Funding} This work was partially supported by the EPSRC via the Centre for Doctoral Training in Future Autonomous and Robotic Systems (FARSCOPE) at the Bristol Robotics Laboratory, University of the West of England and University of Bristol (grant number EP/L015293/1), partially supported by the KTH Digital Futures Research Centre, and partially funded by the EU FP7 DREAM project (grant no. 611391). \section*{Acknowledgments} We wish to acknowledge our PhD supervisors, Paul Bremner, Praminda Caleb-Solly, Ute Leonards, Ailie Turton, Tony Belpaeme, and Paul Baxter, with whom we collaborated on the foundational studies and previous experiences that informed this work. In addition, we wish to acknowledge the two domain experts, Madeleine Bartlett and Donald Knight, who took part in our foundational works and whose reflections contributed to our refinement of the method. \bibliographystyle{plainnat}
{ "timestamp": "2021-05-17T02:18:19", "yymm": "2105", "arxiv_id": "2105.01910", "language": "en", "url": "https://arxiv.org/abs/2105.01910" }
\section{Introduction}\label{sec:intro} Modern high performance speech recognition systems require thousands of hours of transcribed speech to train an acoustic model. This stands in sharp contrast to how infants acquire a language. They show an innate ability to infer linguistic structure from the speech signal alone, long before they learn to read and write \cite{Dupoux2018}. The research field of unsupervised speech learning is concerned with the task of endowing machines with a similar ability \cite{Ondel2021}. Learning acoustic and language models from spoken input has other applications besides providing a computational model of infant language acquisition. It can be an important tool for linguists in their endeavor to document endangered languages, many of which have no written form. Furthermore, it can help to widen the range of languages for which automatic language processing tools, such as \gls{ASR} systems, can be built, because the constraint of requiring many hours of transcribed speech for their training can be relaxed. In this work we are concerned with unsupervised \gls{AUD}. With \glspl{AU} we denote the basic building blocks of speech. These are recurring segments of high similarity which we wish to cluster into syntactic classes. We call them \glspl{AU} rather than phones, since they are defined by acoustic similarity and may not necessarily correspond to linguistically defined units. We wish those automatically learnt units to capture content information and be independent of speaker characteristics. However, in the considered unsupervised scenario this is known to be a tough goal~\cite{jansen2013clsp, Jansen2013, Walter2013}. This work proposes a novel approach to speaker normalization: Recent advances in generative and predictive modeling, like auxiliary adversarial classifiers and \gls{CPC}~\cite{vandenoord2018cpc}, have been utilized to build \gls{VC} systems with high performance. The core idea of this approach is to disentangle the speech input into content and style embeddings. Here, the content embedding covers short-term variations in the utterance which are strongly influenced by the phonetic content, while the style embedding covers factors which show only slight variation during the utterance. \gls{VC} can be achieved by keeping the content embedding while exchanging the style embedding with one extracted from an utterance spoken by the target speaker, and reconstructing an audio signal from these embeddings through a generative model. Expanding this approach, speaker normalization is possible by converting all utterances in the database to the same target style, since the style embedding is mostly influenced by the speaker characteristics. We design the proposed speaker normalization as a front end of an \gls{AUD} system, such that it can be combined with any approach to \gls{AUD}. In the experiments we employ our earlier proposed \gls{HMMVAE}~\cite{ebbers2017hmmvae}, which has a \gls{VAE} structure with a neural encoder and decoder. Unlike in conventional \glspl{VAE}, \glspl{HMM} are employed to model the latent space variables, one per \gls{AU}, to capture the temporal structure of \glspl{AU}. The \gls{HMMVAE} has been shown in prior work to outperform HMM-GMM based \gls{AUD} systems~\cite{ebbers2017hmmvae, glarner2018fbhmmvae}. The remainder of this work is organized as follows. Section~\ref{sec:rel_work} discusses related work.
In Section~\ref{sec:prop_sys}, the proposed system is explained in detail, with the adversarial CPC based \gls{VC} system covered in Section~\ref{ssec:ACPC_VC}, and the \gls{HMMVAE} discussed in Section~\ref{ssec:HMMVAE}. In Section~\ref{sec:ex_setup}, the experiment setup is explained, covering feature extraction in Section~\ref{ssec:features} and the training details in Section~\ref{ssec:train}. Section~\ref{ssec:measures} introduces the performance metrics used to evaluate the system. The results are presented and discussed in Section~\ref{sec:results}. Finally, conclusions are drawn in Section~\ref{sec:conclusions}. \begin{figure*}[t!] \vspace{-1.5em} \begin{minipage}{.5\textwidth} \centering \includegraphics[width=.85\linewidth]{images/system} \captionof{figure}{\gls{AUD} with Speaker Normalization} \label{fig:aud_system} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=.85\linewidth]{images/train_vc} \captionof{figure}{\gls{VC} system training} \label{fig:train_vc} \end{minipage}% \vspace{-1.5em} \end{figure*} \vspace{-0.5em} \section{Related Work}\label{sec:rel_work} A classic approach to tackling the problem of speaker variability in automatic speech recognition is \gls{VTLN}~\cite{acero1990acousticaland,eide1996vtln,lee1996snwarping}, which has also been applied in low-resource settings, e.g., for spoken term detection~\cite{madhavi2019vtlnforsd}. Furthermore, \gls{fMLLR} is often used, even in low-resource setups~\cite{heck2017dpgmm}. However, a significant amount of training data is needed per speaker and the speaker labels need to be known. In the Zero Resource Speech challenge 2017~\cite{dunbar2017zerospeech}, Track 1 was geared towards improving \gls{AUD} results with speaker adaptation techniques and thus provided speaker labels. In contrast to this, our approach to speaker normalization is completely unsupervised and does not require speaker labels. The Zero Resource Speech challenge 2019~\cite{dunbar2019zerospeech} introduced a setup where \gls{AUD} systems are used to provide the content input of a voice synthesis system. Leading systems of the challenge~\cite{tjandra2019zerospeech_vqvae, feng2019disentangle_zerospeech} are based on the \gls{VQVAE} framework~\cite{vandenoord2017vqvae, chorowski2019vqvae_rep} or the \gls{FHVAE}~\cite{hsu2017fhvae}. They also employ the idea of disentangling content from speaker characteristics, to extract the content information from the input. Our approach is reversed, as we aim at extracting and removing the speaker information. Furthermore, we more actively pursue disentanglement by placing an adversarial CPC loss on the content representation. In~\cite{yusuf2020hierarchical}, a multilingual \gls{AUD} system is constructed which defines a subspace of \glspl{AU}, learned in a supervised way from multilingual data, in an attempt to capture the commonalities in what an \gls{AU} is across different languages. They aim at providing a better prior for the \gls{AU} learning, while we are concerned with removing speaker dependence. We thus conjecture that the two approaches are complementary and could possibly be combined. \vspace{-0.5em} \section{System Setup}\label{sec:prop_sys} The proposed system is shown in Figure~\ref{fig:aud_system}. It consists of an adversarial \gls{CPC} based \gls{VC} system~\cite{ebbers2020contrastive} for speaker normalization and a subsequent \gls{HMMVAE}~\cite{ebbers2017hmmvae} to perform the \gls{AUD}.
\subsection{Adversarial CPC Based Voice Conversion}\label{ssec:ACPC_VC} Unsupervised disentanglement of speaker- and content-induced variations in speech signals has attracted increasing attention in recent years. One reason is that it shows a way to exploit unlabeled data to, e.g., obtain a content representation which is invariant to the speaker and thus may enable improved performance in a downstream task. Since most approaches employ some kind of auto-encoding, they further allow voice conversion to be performed by decoding the content representation of an utterance with the speaker representation of some other utterance. As we believe that the speaker-induced variations in the speech signals impair the performance of \gls{AUD} systems, we propose to perform a speaker normalization beforehand. Here, one could either apply an \gls{AUD} system to the speaker-invariant latent content representation, or one could convert each utterance to the same speaker before inputting it to the \gls{AUD} system. In this paper, we opt for the latter approach, as \gls{AUD} systems can then be applied to conversions in just the same way as they could be applied to non-converted signals. For the voice conversion we here employ a \gls{FVAE} along with adversarial \gls{CPC} as proposed in \cite{ebbers2020contrastive}, which has been shown to yield a well-balanced trade-off between linguistic content preservation and speaker invariance. The \gls{FVAE} employs two convolutional encoders, namely, a content encoder outputting a series of content embeddings ${\mathbf{C}=(\mathbf{c}_{1}, \dots, \mathbf{c}_{M})}$ and a style encoder with subsequent \gls{GAP} yielding an utterance-level style embedding $\mathbf{s}$, see Fig.~\ref{fig:train_vc}. Note that the encoders use temporal pooling, which is why $M<T$. The encoders are trained jointly with a decoder $\hat{\mathbf{Y}}=f_\text{dec}(\mathbf{C}, \mathbf{s})$ to allow reconstruction of the speech signal $\mathbf{Y}$ by employing a reconstruction loss ${L_\text{rec}=\frac{1}{T}||\hat{\mathbf{Y}}-\mathbf{Y}||^2_2}$. Note that the content encoder also follows the \gls{VAE} framework and provides the statistics of a variational posterior $q(\mathbf{c}_m)$ rather than $\mathbf{c}_m$ directly, which, however, is neglected in our notation for the sake of clarity. Nonetheless, it contributes a \gls{KL} regularization loss ${L_\text{kld} = \frac{1}{M}\mathrm{D}_\text{KL}(q(\mathbf{c}_m; \bm\phi)\|p(\mathbf{c}_m))}$ with $p(\mathbf{c}_m)$ being a standard normal prior. To foster disentanglement, adversarial \gls{CPC} is employed to prevent the content encoder from encoding style information. \gls{CPC} is an increasingly popular self-supervised learning approach, allowing mutual information between different parts of the same signal to be extracted by employing a contrastive loss. Here, however, it is used to identify static information in the content embeddings and repel it.
For that purpose, a CPC encoder $\mathbf{h}_t = f_\text{CPC}(\mathbf{C}'_t)$, with $\mathbf{C}'_t$ denoting the receptive field of $f_\text{CPC}$ at frame $t$, is trained to extract mutual information from $\mathbf{C}'_{t-\tau}$ and $\mathbf{C}'_{t}$ by employing a contrastive loss \begin{align*} \label{eq:cpc} L_\text{cpc} = -\frac{1}{T-\tau}\sum_{t=\tau + 1}^T \log \frac{\exp(\mathbf{h}_t^\mathrm{T}\mathbf{h}_{t-\tau})}{\sum\limits_{\widetilde{\mathbf{h}}_t\in\mathcal{B}_t} \exp(\widetilde{\mathbf{h}}_t^\mathrm{T}\mathbf{h}_{t-\tau})}\,, \end{align*} where $\tau$ denotes a lookahead shift and $\mathcal{B}_t$ denotes a set of candidate embeddings $\widetilde{\mathbf{h}}_t$. By choosing the lookahead shift to correspond to one second, the mutual information that may be extracted represents static attributes such as style information. While the CPC encoder is trained to minimize $L_\text{cpc}$, the content encoder is trained to maximize it, i.e. content and \gls{CPC} encoder operate adversarially here, which encourages the \gls{FVAE} to not encode style information into the content embeddings but only into the style embedding. Note that computation of $L_\text{cpc}$ does not require any labels but solely relies on self-supervision. The overall \gls{FVAE} loss is given as $$L_\text{fvae}=L_\text{rec}+\beta L_\text{kld}-\lambda L_\text{cpc},$$ where we choose $\beta=0.01$ and $\lambda = 1$ following \cite{ebbers2020contrastive}. After training, the \gls{FVAE} is used for speaker normalization by converting all utterances of a given database to the same target style. Ideally, the target style would be an average style which is representative of the respective database. However, to avoid introducing unnecessary artifacts due to a nonexistent target speaker, we here use the style medoid instead. Firstly, the style embedding vector of each utterance in the respective database is extracted. Then, the style embedding vector which has the minimal mean Euclidean distance to all the style embeddings in the database is chosen as the target style, to which all utterances are converted.
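As a concrete illustration of this selection step, a minimal NumPy sketch is given below. It assumes the style embeddings have already been extracted into an array of shape (utterances, dimensions); variable and function names are our illustrative choices.
\begin{verbatim}
# Minimal sketch of the style-medoid selection described above,
# assuming `styles` holds one style embedding per utterance.
import numpy as np

def style_medoid(styles):
    """Return the embedding with minimal mean Euclidean
    distance to all style embeddings in the database."""
    diffs = styles[:, None, :] - styles[None, :, :]   # (N, N, D)
    dists = np.linalg.norm(diffs, axis=-1)            # pairwise distances
    return styles[np.argmin(dists.mean(axis=1))]

# toy usage: 5 utterances with 16-dimensional style embeddings
rng = np.random.default_rng(0)
target_style = style_medoid(rng.normal(size=(5, 16)))
\end{verbatim}
All utterances are then decoded with this single target style in place of their own style embedding.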
The converted utterances are then forwarded to the \gls{AUD} system, which here is implemented by an \gls{HMMVAE} as explained in the following. \subsection{HMMVAE}\label{ssec:HMMVAE} The \gls{HMMVAE} is based on a \gls{VAE}~\cite{kingma2013auto}, which is a very well-known generative model. Here, we limit ourselves to normal distributions, with $p(\mathbf{y}|\mathbf{x}) = \mathcal{N}(\mathbf{y}; f(\mathbf{x}; \bm\theta), \sigma^2 \cdot \mathbf{I})$ as the conditional distribution of the observation $\mathbf{y}$ given the latent code $\mathbf{x}$, and a decoder \gls{NN} $f(\cdot; \bm{\theta})$. The variational~\cite{blei2017variational} posterior is given by $q(\mathbf{x}_n; \bm\phi) = \mathcal{N}(\mathbf{x}_n; \bm{\mu}_\mathbf{x}(\mathbf{y}_n; \bm\phi), \mathrm{diag}\{\bm{\sigma}_{\mathbf{x}}^2(\mathbf{y}_n; \bm\phi)\})$, with an encoder \gls{NN} extracting the mean vector $\bm{\mu}_\mathbf{x}(\mathbf{y}_n; \bm\phi)$ and the log-variances $\log\bm{\sigma}_{\mathbf{x}}^2(\mathbf{y}_n; \bm\phi)$. Parameters are learned by minimizing the negative \gls{ELBO} \vspace{-0.5em} \begin{align*} &- \log p(\mathbf{y}_n) \le -\mathrm{ELBO}^\text{(VAE)} \\ &= \frac{\mathbb{E}_{q(\mathbf{x}_n; \bm\phi)}\| \mathbf{y}_n - f(\mathbf{x}_n; \bm\theta)\|^2}{2 \sigma^2} + \mathrm{D}_\text{KL}(q(\mathbf{x}_n; \bm\phi)\|p(\mathbf{x}_n)) + \mathrm{C} \end{align*} over a dataset $\mathcal{Y} = \{\mathbf{y}_1, \dots, \mathbf{y}_N\}$ of $N$ observations. The intractability of the first term, called the reconstruction loss, is side-stepped by sampling from the posterior. Note that with a fixed observation variance $\sigma^2$, the model can be interpreted as a $\beta$-VAE~\cite{higgins2016betaVAE}. In the vanilla \gls{VAE}, a standard normal distribution is assumed for the latent prior $p(\mathbf{x}_n)$. For the \gls{HMMVAE}, which is an instance of the \gls{SVAE}~\cite{johnson2016composing}, this is replaced by an \gls{HMM}. Here, $\mathbf{Y} {=} (\mathbf{y}_{1}, \dots, \mathbf{y}_{T})$ denotes the observations from a single utterance of length $T$. The corresponding sequence of latent codes, which are emitted by the \gls{HMM}, is $\mathbf{X} {=} (\mathbf{x}_{1}, \dots, \mathbf{x}_{T})$. The corresponding latent state sequence is $\mathbf{Z} {=} (\mathbf{z}_{1}, \dots, \mathbf{z}_{T})$, which consists of a one-hot vector $\mathbf{z}_t = [z_{t,1}, \dots, z_{t,N_\text{S}}]^\mathrm{T}$ for each time step $t$, whose length is the number of states $N_\text{S}$. The state-dependent emission densities are $\mathcal{N}(\mathbf{x}_t; \bm{\mu}_k, \bm\Sigma_k)$, and the state transition matrix is given by $\mathbf{A}$. The full \gls{HMM} parameter set to be learned is thus $\bm\Omega {=} (\mathbf{A}, \bm\mu_1, \bm{\Sigma}_1, \dots, \bm{\mu}_{N_\text{S}}, \bm{\Sigma}_{N_\text{S}})$, and the latent prior becomes $p(\mathbf{X}, \mathbf{Z}; \bm\Omega)$. Constraints on the parameters are fulfilled by proper parametrization as in~\cite{ebbers2017hmmvae}. The variational posterior needs to be expanded to $q(\mathbf{X}, \mathbf{Z}) {=} q(\mathbf{Z}) \prod_{t=1}^{T}q(\mathbf{x}_t; \bm\phi)$ to include the state sequence. For the \gls{HMMVAE}, the \gls{ELBO} thus becomes \vspace{-0.5em} \begin{align*} \mathrm{ELBO}^{\text{(HMMVAE)}} = \mathbb{E}_{q(\mathbf{X},\mathbf{Z})} \left[\log \frac{p(\mathbf{Y}| \mathbf{X}) p(\mathbf{X}, \mathbf{Z}; \bm\theta, \bm\Omega)}{q(\mathbf{X}, \mathbf{Z}; \bm{\phi}, \bm\Omega)} \right]. \end{align*} Note that this expression can be further decomposed due to the mean-field approximation and the fact that the observation $\mathbf{y}_t$ depends only on $\mathbf{x}_t$. Consequently, the same reconstruction loss term appears as in the vanilla \gls{VAE}. The state posterior $q(\mathbf{z}_t)$ and joint posterior $q(\mathbf{z}_{t{-}1}, \mathbf{z}_t)$, which are needed for inference, can be efficiently approximated by Viterbi~\cite{forney1973viterbi} estimates $\hat{\mathbf{z}}^{\mathcal{V}}_{t}$, i.e. $q(\mathbf{z}_t) \rightarrow \delta(\mathbf{z}_t {=} \hat{\mathbf{z}}^{\mathcal{V}}_{t})$ and $q(\mathbf{z}_{t{-}1}, \mathbf{z}_t) \rightarrow \delta(\mathbf{z}_{t-1} {=} \hat{\mathbf{z}}^{\mathcal{V}}_{t-1}) {\cdot} \delta(\mathbf{z}_t {=} \hat{\mathbf{z}}^{\mathcal{V}}_{t})$, respectively. In the case of \gls{AUD}, each \gls{AU} is modeled by three states, giving $N_\text{S} = 3\cdot N_\text{U}$, with $N_\text{U}$ being the number of \glspl{AU}.
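Since the Viterbi recursion is central to this approximation, a generic log-domain implementation is sketched below for reference (plain NumPy; a textbook version under our notational assumptions, not the actual implementation used for the \gls{HMMVAE}).
\begin{verbatim}
# Generic log-domain Viterbi sketch for obtaining hard state estimates.
import numpy as np

def viterbi(log_emission, log_A, log_pi):
    """log_emission: (T, S) per-frame state log-likelihoods,
    log_A: (S, S) log transition matrix, log_pi: (S,) log initial
    probabilities. Returns the most likely state sequence, shape (T,)."""
    T, S = log_emission.shape
    delta = log_pi + log_emission[0]       # best score ending in each state
    backptr = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A    # (previous state, current state)
        backptr[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emission[t]
    states = np.zeros(T, dtype=int)
    states[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):          # backtrack
        states[t - 1] = backptr[t, states[t]]
    return states
\end{verbatim}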
\vspace{-0.5em} \section{Experiment Setup}\label{sec:ex_setup} To evaluate the performance impact of our speaker normalization approach on the \gls{AUD} task, experiments on three languages were conducted. \subsection{Speech Databases}\label{ssec:speech_db} The \gls{VC} system is trained on all available data at once, i.e., across all available languages. This is plausible since the \gls{VC} system attempts to capture general properties of the human vocal tract, which are mostly the same for all speakers regardless of the target language. Furthermore, the voice conversion system needs a certain amount of training data to avoid overfitting the adversarial classifier, as well as a certain amount of speaker diversity to avoid having utterances of the same speaker in the same batch too often, as this harms contrastive learning. The first language, English, serves two purposes: as a control language and as a resource for large amounts of data during training of the \gls{VC} system. Therefore, we use two databases: The first is TIMIT~\cite{garofolo1993timit}, where the \texttt{sa} utterances are left out, leading to \SI{3.6}{\hour} of recordings with 4288 utterances by 536 speakers. Second, LibriSpeech~\cite{panayotov2015librispeech} is used solely to train the \gls{VC} system, with the subsets \texttt{train-clean-100} and \texttt{train-clean-360}, adding up to \SI{464.2}{\hour} of speech with 132553 utterances by 1172 speakers. Furthermore, the West African language Yoruba~\cite{gutkin2020yoruba}, including \SI{4}{\hour} of recordings with 3583 utterances by 36 speakers, and the Central African language Mboshi~\cite{godard2017mbochi}, containing \SI{4.4}{\hour} of recordings with 5130 utterances by 3 speakers, are used to demonstrate the performance on low-resource languages. Available transcriptions are used for evaluation only. In the case of TIMIT, the proper phone-level transcriptions are used. For Yoruba and Mboshi, forced alignments were provided by the authors of~\cite{yusuf2020hierarchical}. The databases for \gls{AUD} are deliberately chosen as in~\cite{yusuf2020hierarchical} and \cite{Ondel2021} to provide comparability. \subsection{Feature Extraction}\label{ssec:features} In this work, a log-mel feature representation is chosen, which consists of taking the short-time Fourier transform of the input audio, converting it to the power spectrum domain, applying a mel filterbank and taking the logarithm of the resulting mel power spectrum. All data is processed with a sampling rate of \SI{16}{\kilo\hertz}. For the \gls{VC} system, the parameters are: a Blackman window with a window length of 400 samples, a window shift of 160 samples, an FFT size of 512 samples and 80 filters in the mel filterbank. These parameters are taken from~\cite{ebbers2020contrastive}. Each of the mel bands is normalized to zero mean and unit variance, where normalization statistics are computed using the complete training set. For the \gls{HMMVAE}, we use 40 filters in the mel filterbank, and deltas and delta-deltas of the log-mel features are added. Furthermore, each of the feature maps is normalized to zero mean and unit variance for each input signal individually.
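For reference, the feature pipeline with the \gls{VC}-system parameters just listed can be sketched in a few lines of Python; the snippet below uses librosa for illustration and is a minimal sketch under these assumptions, not our production feature extractor.
\begin{verbatim}
# Minimal log-mel sketch with the VC-system parameters stated above.
import numpy as np
import librosa

def logmel(audio, sr=16000):
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr, n_fft=512, win_length=400, hop_length=160,
        window="blackman", n_mels=80, power=2.0)
    return np.log(mel + 1e-10)  # log of the mel power spectrum

# per-band mean/variance normalization would follow, with statistics
# computed over the complete training set as described above.
\end{verbatim}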
\subsection{Training}\label{ssec:train} Both models are trained using Adam~\cite{kingma2014adam}. The \gls{VC} system is trained in a multilingual manner on all datasets of all language databases. Since the \gls{CPC} part uses other examples from the same batch as counter-examples, it is ensured that each batch only contains utterances from a single language, to avoid easy predictability for the adversarial classifier due to language characteristics. The batch size is chosen to be 16. While the \gls{VC} system is trained on multilingual data, the \gls{HMMVAE} has to be trained for each language separately. Its input audio is either taken directly from the database in the control experiments or from the output of the \gls{VC} system. For all languages, the number of \glspl{AU} $N_\text{U}$ is set to 80. Optimization is done with the Adam optimizer~\cite{kingma2014adam} with a learning rate of $0.001$. Similar to~\cite{ebbers2017hmmvae}, a pre-training scheme is used to initialize the parameters, where the \gls{HMMVAE} is trained in a pseudo-supervised manner on random alignments for 2000 iterations. The actual \gls{HMMVAE} training is carried out for 20000 iterations, since this has been shown to be enough for the system to converge. \subsection{Performance Metrics}\label{ssec:measures} The performance is evaluated using three different metrics. For each metric, higher values are better. Firstly, the (symmetric) \gls{NMI} is used as defined in~\cite{yusuf2020hierarchical}: $\mathrm{NMI} = \SI{200}{\percent} \frac{\mathrm{I}(U;P)}{\mathrm{H}(U) + \mathrm{H}(P)}$, where $U$ are the extracted \glspl{AU}, $P$ are the reference phones, $\mathrm{I}(\cdot;\cdot)$ is the mutual information between the label sets and $\mathrm{H}(\cdot)$ is the entropy. The calculation is based on a frame-wise comparison of all proposed and reference transcriptions, calculating a confusion matrix and estimating the joint probability distribution between the two label sets. In comparison to the older (asymmetric) \gls{NMI} definition, using too many \glspl{AU} induces a penalty, which leads to better comparability across different choices for the number of \glspl{AU}. Secondly, the \gls{CP} of each \gls{AU} cluster is calculated from the confusion matrix. These first two metrics assess the clustering performance. As the third measure, we calculate the phone boundary \gls{BFS}, which assesses the agreement of the discovered \gls{AU} boundaries with the phone boundaries provided by the database (TIMIT) or by forced alignment (Yoruba, Mboshi), using a collar of $\pm \SI{20}{\milli\second}$ as in~\cite{yusuf2020hierarchical}. This metric measures the segmentation performance. \section{Results}\label{sec:results} \begin{table}[t!]
\caption{Comparison of \gls{AUD} results} \label{tbl:results} \setlength\tabcolsep{5pt} \centering \resizebox{\linewidth}{!}{ \begin{tabular}{lllccc} \toprule language & model & input & \acrshort{NMI} & \acrshort{CP} & \acrshort{BFS} \\ \midrule English & HMM~\cite{yusuf2020hierarchical} & clean & 35.91 {\scriptsize${\pm}$0.27} & --- & 63.86 {\scriptsize${\pm}$0.34} \\ & H-SHMM~\cite{yusuf2020hierarchical} & clean & 40.04 {\scriptsize${\pm}$0.51} & --- & 76.60 {\scriptsize${\pm}$0.54} \\ & HMMVAE & clean & 40.44 {\scriptsize${\pm}$0.18} & 39.00 {\scriptsize${\pm}$0.57} & 75.18 {\scriptsize${\pm}$0.20} \\ & HMMVAE & rec & 41.14 {\scriptsize${\pm}$0.34} & 41.01 {\scriptsize${\pm}$0.36} & 76.03 {\scriptsize${\pm}$0.64} \\ & HMMVAE & vc & 42.85 {\scriptsize${\pm}$0.28} & 42.09 {\scriptsize${\pm}$0.72} & 73.88 {\scriptsize${\pm}$0.62} \\ \midrule Mboshi & HMM~\cite{yusuf2020hierarchical} & clean & 35.85 {\scriptsize${\pm}$0.62} & --- & 47.92 {\scriptsize${\pm}$1.56} \\ & H-SHMM~\cite{yusuf2020hierarchical} & clean & 41.07 {\scriptsize${\pm}$1.09} & --- & 59.15 {\scriptsize${\pm}$1.51} \\ & HMMVAE & clean & 35.87 {\scriptsize${\pm}$0.59} & 58.39 {\scriptsize${\pm}$0.39} & 53.54 {\scriptsize${\pm}$2.34} \\ & HMMVAE & rec & 36.33 {\scriptsize${\pm}$0.43} & 58.84 {\scriptsize${\pm}$0.22} & 53.82 {\scriptsize${\pm}$1.33} \\ & HMMVAE & vc & 38.13 {\scriptsize${\pm}$0.48} & 58.58 {\scriptsize${\pm}$0.22} & 53.33 {\scriptsize${\pm}$1.03} \\ \midrule Yoruba & HMM~\cite{yusuf2020hierarchical} & clean & 36.38 {\scriptsize${\pm}$0.22} & --- & 54.47 {\scriptsize${\pm}$0.64} \\ & H-SHMM~\cite{yusuf2020hierarchical} & clean & 40.06 {\scriptsize${\pm}$0.11} & --- & 66.95 {\scriptsize${\pm}$0.36} \\ & HMMVAE & clean & 36.64 {\scriptsize${\pm}$0.36} & 48.02 {\scriptsize${\pm}$0.28} & 55.26 {\scriptsize${\pm}$0.93} \\ & HMMVAE & rec & 38.14 {\scriptsize${\pm}$0.29} & 50.16 {\scriptsize${\pm}$0.32} & 55.97 {\scriptsize${\pm}$0.35} \\ & HMMVAE & vc & 38.72 {\scriptsize${\pm}$0.32} & 50.40 {\scriptsize${\pm}$0.35} & 54.41 {\scriptsize${\pm}$0.81} \\ \bottomrule \end{tabular} } \vspace{-.5em} \end{table} We conducted three experiments per language: \gls{AUD} without voice conversion, labeled \texttt{clean}; \gls{AUD} after reconstruction, labeled \texttt{rec}, where source style and target style were the same; and \gls{AUD} after speaker normalization by \gls{VC} to the style medoid of the respective database, labeled \texttt{vc}. Each system is trained 5 times with different random weight initialization and we report mean and variance of the metrics across the different runs. The results are presented in Table~\ref{tbl:results}. It can be seen that for English a significant gain of \SI{2.41}{\%} and \SI{3.09}{\%} can be achieved for \gls{NMI} and \gls{CP}, respectively, by using voice conversion based speaker normalization, even allowing the H-SHMM model to be outperformed. While the \gls{NMI} and \gls{CP} show that the cluster quality can be improved, the \gls{BFS} shows that the segmentation performance is reduced with voice conversion based speaker normalization. Note, however, that the proposed speaker normalization is intended to better find similar units rather than to find more precise boundaries. Indeed, it makes intuitive sense that with voice conversion the segment boundaries may deviate from the original boundaries more frequently, given that the conversion may also slightly shift phoneme boundaries.
That this might be the case is also supported by the fact that the \gls{BFS} does not decrease when using reconstruction instead of voice conversion. For Yoruba and Mboshi, speaker normalization also brings a gain in cluster quality; however, the gain is more modest for these languages, and the H-SHMM model from~\cite{yusuf2020hierarchical} shows, on average, a better clustering performance than the \gls{HMMVAE}. The lower gain for these languages probably results from the fact that the \gls{VC} system has primarily been trained on English data. Nonetheless, the results show that improvements can indeed be achieved through normalization by multilingual voice conversion. Furthermore, with the availability of larger speech corpora for these languages -- even completely untranscribed and unannotated ones -- which would allow these languages to have a larger impact on \gls{VC} training, higher gains may be achieved. Finally, since our proposed speaker normalization is rather generic, it may also be combined with the H-SHMM in future experiments. Generally, it can be observed that the experiments for Mboshi and Yoruba both show a higher \gls{CP} than the experiments on English while simultaneously giving a lower \gls{NMI}. Note that, while the \gls{NMI} and the \gls{CP} both measure the clustering performance, the \gls{NMI} puts a higher weight on infrequently appearing \glspl{AU}, which might explain this discrepancy. Additionally, both African languages have a larger phone inventory and, consequently, a higher phone entropy $\mathrm{H}(P)$ than English. Interestingly, results with reconstructed signals show on average better performance in all metrics for all languages compared to the experiments with clean inputs. A possible explanation might be that the voice conversion removes some irrelevant factors from the audio input and thus helps the \gls{AUD} results to be more robust. \section{Conclusions}\label{sec:conclusions} In this work, we have proposed an approach to improve \gls{AUD} with a voice conversion based speaker normalization system which can be trained in a completely unsupervised manner, without any transcriptions or annotations. We have shown that significant performance improvements are achieved if enough training data in the respective language is available for the \gls{VC} system. Furthermore, some improvements in the clustering performance can be achieved even when the training data stems mostly from another language. Finally, the proposed speaker normalization system is generic and can be combined with any \gls{AUD} system working on audio input. \section{Acknowledgements} This work was in part funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project No. 282835863. \balance \bibliographystyle{IEEEtran}
{ "timestamp": "2021-05-06T02:05:57", "yymm": "2105", "arxiv_id": "2105.01786", "language": "en", "url": "https://arxiv.org/abs/2105.01786" }
\section{Introduction}\label{S-Intro} Thanks to their relation to rank-metric codes, $q$-matroids and $q$-polymatroids have recently garnered a lot of attention \cite{BCIJ21,BCIJS20,BCJ21,GhJo20,GLJ21,GJLR19,JuPe18,Shi19}. Indeed, ${\mathbb F}_{q^m}$-linear rank-metric codes in ${\mathbb F}_{q^m}^n$ give rise to $q$-matroids, whereas ${\mathbb F}_q$-linear rank-metric codes induce $q$-polymatroids. This leads to an abundance of examples of $q$-(poly)matroids. In either case, the $q$-(poly)matroid induced by a rank-metric code arises via a rank function which captures the dimension of certain characteristic subspaces of the code in question. As a consequence, the $q$-(poly)matroid reflects many of the algebraic and combinatorial properties of the code, such as the generalized weights~\cite{GLJ21,GJLR19} and the rank-weight enumerator~\cite{BCIJ21,Shi19}. For $q$-matroids a variety of cryptomorphic definitions are known~\cite{JuPe18,BCJ21}. They are based on independent spaces, bases, circuits, spanning spaces, flats and many more; see the comprehensive account in \cite{BCJ21}. For $q$-polymatroids, most of these notions have yet to be defined. To our knowledge, the only existing such notion is that of flats, which have been introduced in \cite{GLJ21}. In this paper we introduce the notions of independent spaces and bases for $q$-polymatroids. As it turns out, the `standard notion' of independence, namely the equality of rank value and dimension, is too restrictive for $q$-polymatroids. For this reason we introduce a more general notion of independence, which is inspired by the analogue for classical polymatroids in \cite[Sec.~11]{Ox11}. In order to derive properties of the independent spaces, we introduce an auxiliary $q$-matroid on the same ground space (akin to a construction in \cite{Ox11}). Since the independent spaces of the auxiliary $q$-matroid coincide with those of the $q$-polymatroid, the latter inherits all properties known for independent spaces of $q$-matroids --- as long as these properties do not involve the rank function. The independent spaces naturally give rise to a notion of basis, namely the maximal-dimensional independent subspaces. Despite the lack of rigidity of the rank function in a $q$-polymatroid, it turns out that all bases of a subspace have the same rank value and that this value agrees with the rank value of the subspace. In other words, the rank function restricted to the independent spaces fully determines the $q$-polymatroid. This result allows us to provide a cryptomorphism for $q$-polymatroids based on independent spaces: we characterize the collections of spaces, endowed with a rank function on these spaces, that give rise to a $q$-polymatroid with exactly these spaces as independent spaces and whose rank function restricts to the given one. Examples show that no such cryptomorphism is possible using only bases, dependent spaces, or circuits. We finally turn to spanning spaces. These are the spaces that share the same rank value as the ground space. It turns out that in a $q$-polymatroid every minimal spanning space is contained in a basis, but is, in general, not a basis itself. Thus the notions `minimal spanning' and `maximally independent' do not agree. On the plus side, `minimal spanning' is the dual notion to `maximally strongly independent'. This simple fact may be regarded as the generalization of the duality result for bases in $q$-matroids.
The latter states that in a $q$-matroid a space is a basis if and only if its orthogonal is a basis of the dual $q$-matroid. It turns out that this equivalence is never true in a proper $q$-polymatroid and therefore characterizes $q$-matroids. \textbf{Notation:} We fix a finite field ${\mathbb F}={\mathbb F}_q$ with~$q$ elements and a finite-dimensional ${\mathbb F}$-vector space~$E$. We write $V\leq E$ if~$V$ is a subspace of~$E$ and denote by ${\mathcal V}(E)$ the collection of all subspaces of~$E$. The standard basis vectors in ${\mathbb F}^n$ are denoted by $e_1,\ldots,e_n$. We write $[n]$ for the set $\{1,\ldots,n\}$. \section{Basic Notions of $q$-Polymatroids}\label{S-Prelims} In this section we define $q$-polymatroids and present some basic properties. We also introduce the main class of examples, namely $q$-polymatroids induced by rank-metric codes. The section is based on the material in \cite{GLJ21}. \begin{defi}\label{D-PMatroid} Set ${\mathcal V}={\mathcal V}(E)$. A \emph{$q$-rank function} on~$E$ is a map $\rho: {\mathcal V}\longrightarrow{\mathbb Q}_{\geq0}$ satisfying: \begin{mylist2} \item[(R1)\hfill] Dimension-Boundedness: $0\leq\rho(V)\leq \dim V$ for all $V\in{\mathcal V}$; \item[(R2)\hfill] Monotonicity: $V\leq W\Longrightarrow \rho(V)\leq \rho(W)$ for all $V,W\in{\mathcal V}$; \item[(R3)\hfill] Submodularity: $\rho(V+W)+\rho(V\cap W)\leq \rho(V)+\rho(W)$ for all $V,W\in{\mathcal V}$. \end{mylist2} A \emph{$q$-polymatroid ($q$-PM) on~$E$} is a pair ${\mathcal M}=(E,\rho)$, where $\rho: {\mathcal V}\longrightarrow{\mathbb Q}_{\geq0}$ is a $q$-rank function. The value $\rho(E)$ is called the \emph{rank} of the $q$-PM. A number $\mu\in{\mathbb Q}_{>0}$ is called a \emph{denominator} of~$\rho$ (and ${\mathcal M}$) if $\mu\rho(V)\in{\mathbb N}_0$ for all $V\in{\mathcal V}$. In that case we call the map $\tau_{\mu}:=\mu\rho$ the \emph{induced integer $\rho$-function} w.r.t.~$\mu$. The smallest denominator is called the \emph{principal denominator}. A $q$-PM{} with principal denominator~$1$ (i.e., $\rho(V)\in{\mathbb N}_0$ for all~$V$) is called a \emph{$q$-matroid}. If~$(E,\rho)$ is the \emph{trivial} $q$-PM, i.e.,~$\rho$ is the zero map, we declare~$1$ to be its principal denominator. \end{defi} We will often make use of the induced integer $\rho$-function $\tau_{\mu}$ for a given denominator~$\mu$. Clearly~$\tau_\mu$ is also monotonic and submodular, and instead of (R1) it satisfies $0\leq\tau_\mu(V)\leq\mu\dim V$ for all $V\in{\mathcal V}$. The above definition appears in various forms in the literature. In \cite[Def.~4.1]{GJLR19} by Gorla et al.\ the same definition occurs, with the only difference that the rank function may assume arbitrary real numbers. Next, a $q$-matroid in the sense of Jurrius/Pellikaan~\cite{JuPe18} is exactly a $q$-matroid as defined above. Finally, for any $r\in{\mathbb N}$ a $(q,r)$-polymatroid as in \cite[Def.~2]{Shi19} by Shiromoto, \cite[Def.~1]{GhJo20} by Ghorpade/Johnson, and \cite[Def.~1]{BCIJ21} by Byrne et al.\ can be turned into a $q$-PM{} with denominator~$r$ by dividing the rank function by~$r$. Conversely, if $(E,\rho)$ is a $q$-PM{} with denominator~$\mu$, then $(E,\mu\rho)$ is a $(q,\lceil\mu\rceil)$-polymatroid in the sense of these papers. It is possible for a $q$-PM{} that no nonzero subspace attains the upper bound in~(R1). In \cite{GLJ21} we call such $q$-PM{}s non-exact and discuss some basic facts about non-exact $q$-PM{}s.
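For readers who wish to experiment with \cref{D-PMatroid}, the axioms (R1)--(R3) can be verified by brute force on a small ground space. The following Python sketch is our own illustration and not part of the formal development (all names are ad hoc): it enumerates the 16 subspaces of ${\mathbb F}_2^3$ and checks (R1)--(R3) for the rank function $\rho(V)=\min\{1,\dim V\}$, i.e., for the uniform $q$-matroid of rank~$1$ (cf.\ \cref{E-DualUnif} below).
\begin{verbatim}
# Brute-force check of (R1)-(R3) on E = F_2^3 for rho(V) = min(k, dim V).
from itertools import combinations, product
from math import log2

n, k = 3, 1  # ground space F_2^3 and uniform rank parameter

def f2_span(gens):
    # F_2-span of a collection of vectors: the set of all subset sums
    space = set()
    gens = list(gens)
    for r in range(len(gens) + 1):
        for sub in combinations(gens, r):
            space.add(tuple(sum(c) % 2 for c in zip((0,) * n, *sub)))
    return frozenset(space)

vectors = list(product((0, 1), repeat=n))
subspaces = {f2_span(s) for r in range(n + 1)
             for s in combinations(vectors, r)}

dim = lambda V: round(log2(len(V)))  # |V| = 2^dim(V) over F_2
rho = lambda V: min(k, dim(V))       # rank function of U_k(F_2^3)

for V in subspaces:
    assert 0 <= rho(V) <= dim(V)                                # (R1)
for V, W in product(subspaces, repeat=2):
    if V <= W:
        assert rho(V) <= rho(W)                                 # (R2)
    assert rho(f2_span(V | W)) + rho(V & W) <= rho(V) + rho(W)  # (R3)
print(len(subspaces), "subspaces checked")  # -> 16 subspaces checked
\end{verbatim}
Replacing \texttt{rho} by any candidate map turns the same script into a quick sanity check for the $q$-rank function axioms.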
\begin{rem}[\mbox{\cite[Rem.~2.3(3)]{GLJ21}}]\label{R-PrDenominator} Let $(E,\rho)$ be a non-trivial $q$-PM{} with principal denominator~$\mu$. Then the set of all denominators of $(E,\rho)$ is $\mu{\mathbb N}$. \end{rem} The following basic properties are presented for $q$-matroids in \cite[Prop.~6 and~7]{JuPe18}. They hold true for $q$-PM{}s as well, and the proofs are identical to those in \cite{JuPe18}. \begin{prop}\label{P-RankVx} Let $(E,\rho)$ be a $q$-PM. \begin{alphalist} \item Let $V,W\in{\mathcal V}(E)$. Suppose $\rho(V+\subspace{x})=\rho(V)$ for all $x\in W$. Then $\rho(V+W)=\rho(V)$. \item Let $V\in{\mathcal V}(E)$ and let $X,Y\in{\mathcal V}(E)$ be $1$-dimensional subspaces such that $\rho(V)=\rho(V+X)=\rho(V+Y)$. Then $\rho(V+X+Y)=\rho(V)$. \end{alphalist} \end{prop} In order to discuss duality as well as some details on independent spaces, we need the following notions of equivalence. They extend \cite[Def.~4.4]{GJLR19}. Two $q$-PM{}s are scaling-equivalent if they differ only by a scalar factor between the rank functions and an isomorphism between the ground spaces; they are equivalent if the scalar factor is~$1$. \begin{defi}\label{D-EquivMatroid} Let $E_i,\,i=1,2,$ be ${\mathbb F}$-vector spaces of the same finite dimension and let ${\mathcal M}_i=(E_i,\rho_i)$ be $q$-PM{}s. \begin{alphalist} \item ${\mathcal M}_1$ and ${\mathcal M}_2$ are called \emph{scaling-equivalent} if there exist an ${\mathbb F}$-isomorphism $\alpha\in\mbox{\rm Hom}_{{\mathbb F}}(E_1,E_2)$ and $a\in{\mathbb Q}_{>0}$ such that $\rho_2(\alpha(V))=a\rho_1(V)$ for all $V\in{\mathcal V}(E_1)$. \item We call ${\mathcal M}_1$ and ${\mathcal M}_2$ \emph{equivalent}, denoted by ${\mathcal M}_1\approx{\mathcal M}_2$, if there exists an ${\mathbb F}$-isomorphism $\alpha\in\mbox{\rm Hom}_{{\mathbb F}}(E_1,E_2)$ such that $\rho_2(\alpha(V))=\rho_1(V)$ for all $V\in{\mathcal V}(E_1)$. \end{alphalist} \end{defi} Clearly, both types of equivalences are indeed equivalence relations. \begin{theo}[\mbox{\cite[Thm.~2.8]{GLJ21} and \cite[4.5--4.7]{GJLR19}, and \cite[Thm.~42]{JuPe18} for $q$-matroids}]\label{T-DualqPM} Let $\inner{\cdot}{\cdot}$ be a non-degenerate symmetric bilinear form on~$E$. For $V\in{\mathcal V}(E)$ define $V^\perp=\{w\in E\mid \inner{v}{w}=0\text{ for all }v\in V\}$. Let ${\mathcal M}=(E,\rho)$ be a $q$-PM{} and set \[ \rho^*(V)=\dim V+\rho(V^\perp)-\rho(E). \] Then $\rho^*$ is a $q$-rank function on~$E$ and ${\mathcal M}^*=(E,\rho^*)$ is a $q$-PM. It is called the \emph{dual} of~${\mathcal M}$ with respect to the form $\inner{\cdot}{\cdot}$. Furthermore, the bidual ${\mathcal M}^{**}=({\mathcal M}^*)^*$ satisfies ${\mathcal M}^{**}={\mathcal M}$, and ${\mathcal M}$ and ${\mathcal M}^*$ have the same set of denominators. Finally, the equivalence class of~${\mathcal M}^*$ does not depend on the choice of the bilinear form. More precisely, if $\langle\!\inner{\cdot}{\cdot}\!\rangle$ is another non-degenerate symmetric bilinear form on~$E$ and ${\mathcal M}^{\hat{*}}=(E,\rho^{\hat{*}})$ is the resulting dual $q$-PM{}, then ${\mathcal M}^{\hat{*}}\approx{\mathcal M}^*$. \end{theo} The next result has been proven in~\cite{GJLR19} for $q$-PM{}s on ${\mathbb F}^n$, endowed with the standard dot product. Thanks to the invariance of the dual, it generalizes without the need to specify bilinear forms. \begin{prop}[\mbox{\cite[Prop.~4.7]{GJLR19}}]\label{P-EquivDual} Let ${\mathcal M}=(E,\rho)$ and $\hat{{\mathcal M}}=(\hat{E},\hat{\rho})$ be $q$-PM{}s.
Then ${\mathcal M}\approx\hat{{\mathcal M}}$ implies ${\mathcal M}^*\approx\hat{{\mathcal M}}^*$. \end{prop} \begin{exa}[\mbox{\cite[Ex.~4 and Ex.~47]{JuPe18}}]\label{E-DualUnif} Let ${\mathcal U}_{k}(E)=(E,\rho)$ be the \emph{uniform $q$-matroid of rank $k$}, that is, $\rho(V)=\min\{k,\dim V\}$ for all $V\in{\mathcal V}(E)$. Then ${\mathcal U}_{k}(E)^*={\mathcal U}_{\dim E-k}(E)$. \end{exa} The rest of this section is devoted to $q$-PM{}s induced by rank-metric codes. This will provide us with a large class of $q$-PM{}s. We start by collecting some basic properties of codes in $\mbox{$\F^{n\times m}$}$. As usual, we endow $\mbox{$\F^{n\times m}$}$ with the rank metric, defined as $\mbox{${\rm d}$}(A,B)={\rm rk}(A-B)$. We only consider \emph{linear rank-metric codes}, that is, subspaces of the metric space $(\mbox{$\F^{n\times m}$},\mbox{${\rm d}$})$. The following is standard knowledge in the theory of rank-metric codes. For $V\leq{\mathbb F}^n$ we denote by $V^\perp\leq{\mathbb F}^n$ the orthogonal space with respect to the standard dot product. \begin{rem}\label{D-RMCBasics} Let ${\mathcal C}\leq \mbox{$\F^{n\times m}$}$ be a rank-metric code. \begin{alphalist} \item The \emph{rank distance} of~${\mathcal C}$ is defined as $\mbox{${\rm d}_{\rm rk}$}({\mathcal C})=\min\{{\rm rk}(M)\mid M\in{\mathcal C}\setminus0\}$. If $d=\mbox{${\rm d}_{\rm rk}$}({\mathcal C})$, then $\dim({\mathcal C})\leq \max\{m,n\}(\min\{m,n\}-d+1)$, which is known as the \emph{Singleton bound}. If $\dim({\mathcal C})=\max\{m,n\}(\min\{m,n\}-d+1)$, then ${\mathcal C}$ is called an \emph{MRD code}. \item The \emph{dual code} of~${\mathcal C}$ is defined as ${\mathcal C}^\perp=\{M\in\mbox{$\F^{n\times m}$}\mid {\rm tr}(MN\mbox{$^{\sf T}$})=0\text{ for all }N\in{\mathcal C}\}$, where ${\rm tr}(\cdot)$ denotes the trace of the given matrix. \item For $V\in{\mathcal V}({\mathbb F}^n)$ we set ${\mathcal C}(V,{\rm c})=\{M\in{\mathcal C}\mid \mbox{\rm colsp}(M)\leq V\}$, where $\mbox{\rm colsp}(M)$ denotes the column space of~$M$. Then $ \mbox{$\F^{n\times m}$}(V,{\rm c})^\perp=\mbox{$\F^{n\times m}$}(V^\perp,{\rm c})$ and~\cite[Lem.~28]{Ra16a} \begin{align*} \dim{\mathcal C}(V^\perp,{\rm c})&=\dim{\mathcal C}-m\dim V+\dim{\mathcal C}^\perp(V,{\rm c}). \end{align*} \end{alphalist} \end{rem} The following $q$-PM{}s induced by rank-metric codes appeared first in~\cite{GJLR19}. The statement in \eqref{e-rhoV} is immediate from Remark~\ref{D-RMCBasics}(c). \begin{defiprop}[\mbox{\cite[Thm.~5.3]{GJLR19}}]\label{P-RMCMatroid} For a nonzero rank-metric code ${\mathcal C}\leq{\mathbb F}^{n\times m}$ define \[ \rho_{\rm c}:{\mathcal V}({\mathbb F}^n)\longrightarrow {\mathbb Q}_{\geq0},\quad V\longmapsto \frac{\dim{\mathcal C}-\dim{\mathcal C}(V^\perp,{\rm c})}{m}. \] Then $\rho_{\rm c}$ is a $q$-rank function with denominator~$m$. The $q$-polymatroid ${\mathcal M}_{\rm c}({\mathcal C}):=({\mathbb F}^n,\rho_{\rm c})$ is called the \emph{(column) polymatroid} of~${\mathcal C}$. Its rank is $\dim{\mathcal C}/m$. The rank function satisfies \begin{equation}\label{e-rhoV} \rho_{\rm c}(V)=\dim V-\frac{1}{m}\dim{\mathcal C}^\perp(V,{\rm c}). \end{equation} \end{defiprop} Analogously, we can define the row polymatroid of the rank-metric code, which then has denominator~$n$. It is of course the same as the column polymatroid of the transposed code, and thus it suffices to consider column polymatroids. The denominator~$m$ is in general not principal; see for instance~(a) of \cref{P-MRDCodes} below.
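To make \cref{P-RMCMatroid} concrete, note that $\mbox{\rm colsp}(M)\leq V^\perp$ holds if and only if $v^{\top}M=0$ for every~$v$ in a basis of~$V$. Hence $\dim{\mathcal C}(V^\perp,{\rm c})=\dim{\mathcal C}-{\rm rk}(L_V)$, where $L_V$ is the linear map sending a coefficient vector~$x$ (with respect to a basis $A_1,\ldots,A_k$ of~${\mathcal C}$) to the stacked row vectors $v^{\top}\big(\sum_i x_iA_i\big)$, and therefore $\rho_{\rm c}(V)={\rm rk}(L_V)/m$. The following Python sketch over ${\mathbb F}_2$ implements this; it is our own illustration (the helper names are ad hoc), not part of the formal development.
\begin{verbatim}
# rho_c(V) for a rank-metric code C <= F_2^{n x m} with basis A_1,...,A_k.
from fractions import Fraction

def rank_f2(rows):
    # rank of a list of 0/1 row vectors via Gaussian elimination over F_2
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def rho_c(code_basis, V_basis, m):
    # matrix of L_V: one row per pair (v, j), one column per basis matrix A;
    # the entry is the j-th coordinate of the row vector v^T A
    nrows = len(code_basis[0])  # number of rows n of the matrices in C
    L = [[sum(v[r] * A[r][j] for r in range(nrows)) % 2 for A in code_basis]
         for v in V_basis for j in range(m)]
    return Fraction(rank_f2(L), m)  # rho_c(V) = rk(L_V)/m

# toy example: C = <identity, antidiagonal matrix> inside F_2^{2 x 2}
A1, A2 = [[1, 0], [0, 1]], [[0, 1], [1, 0]]
print(rho_c([A1, A2], [(1, 0)], m=2))          # rho_c(<e_1>)  -> 1
print(rho_c([A1, A2], [(1, 0), (0, 1)], m=2))  # rho_c(F_2^2)  -> 1
\end{verbatim}
In the toy example the value $\rho_{\rm c}({\mathbb F}^2)=1=\dim{\mathcal C}/m$ confirms that the rank of the polymatroid equals $\dim{\mathcal C}/m$, as stated in \cref{P-RMCMatroid}.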
The rank distances of a code and its dual are closely related to the $q$-PM. The following is immediate from Proposition~\ref{P-RMCMatroid} and has also appeared in \cite[Prop.~6.2]{GJLR19}, \cite[Lem.~30]{JuPe18}, and \cite[Rem.~3.8]{GLJ21}. \begin{rem}\label{R-ddperp} Let ${\mathcal C}\leq{\mathbb F}^{n\times m}$ be a nonzero rank-metric code with rank-distance~$d$ and let $d^\perp$ be the rank distance of ${\mathcal C}^\perp$. Then for any $V\in{\mathcal V}({\mathbb F}^n)$ we have \[ \rho_{\rm c}(V)=\begin{cases}\dim V,&\text{ if }\dim V< d^\perp,\\[1ex] \frac{\dim {\mathcal C}}{m},&\text{ if }\dim V>n-d.\end{cases} \] \end{rem} For MRD codes we can give more detailed information about the associated $q$-PM{}s. \begin{prop}[\mbox{\cite[Cor.~6.6]{GJLR19}, \cite[Thm.~3.12]{GLJ21}}]\label{P-MRDCodes} Let ${\mathcal C}\leq\mbox{$\F^{n\times m}$}$ be an MRD code with $\mbox{${\rm d}_{\rm rk}$}({\mathcal C})=d$. \begin{alphalist} \item Let $n\leq m$. Then ${\mathcal M}_{\rm c}({\mathcal C})={\mathcal U}_{n-d+1}({\mathbb F}^n)$, that is, ${\mathcal M}_{\rm c}({\mathcal C})$ is the uniform $q$-matroid of rank $n-d+1$. Thus ${\mathcal M}_{\rm c}({\mathcal C})$ depends only on $(n,d,|{\mathbb F}|)$. \item Let $n\geq m$. Then ${\mathcal M}_{\rm c}({\mathcal C})$ satisfies \[ \rho_{\rm c}(V)= \left\{\begin{array}{cl} \dim V,&\text{if }\dim V\leq m-d+1,\\[.5ex] \frac{n(m-d+1)}{m},&\text{if }\dim V\geq n-d+1, \end{array}\right. \] and $\rho_{\rm c}(V)\geq \max\{1,\,(\dim V)/m\}(m-d+1)$ if $\dim V\in[m-d+2,\,n-d]$. Thus, if $m=n-1$, then ${\mathcal M}_{\rm c}({\mathcal C})$ depends only on $(n,d,|{\mathbb F}|)$. \end{alphalist} \end{prop} Equivalence of codes, in the usual sense, translates into equivalence of the associated $q$-PM{}s. \begin{prop}[\mbox{\cite[Prop.~3.5]{GLJ21} and \cite[Prop.~6.7]{GJLR19}}]\label{P-EquivCodeMatroid} Let ${\mathcal C},\,{\mathcal C}'\leq\mbox{$\F^{n\times m}$}$ be rank-metric codes. \begin{alphalist} \item Suppose ${\mathcal C},\,{\mathcal C}'$ are \emph{equivalent}, i.e., ${\mathcal C}'=X{\mathcal C} Y:=\{XMY\mid M\in{\mathcal C}\}$ for some $X\in{\rm GL}_n({\mathbb F})$ and $Y\in{\rm GL}_m({\mathbb F})$. Then ${\mathcal M}_{\rm c}({\mathcal C})$ and ${\mathcal M}_{\rm c}({\mathcal C}')$ are equivalent via $\beta\in\mbox{\rm Hom}_{{\mathbb F}}({\mathbb F}^n,{\mathbb F}^n)$ given by $x\mapsto (X\mbox{$^{\sf T}$})^{-1}x$. \item Let $n=m$ and suppose ${\mathcal C},\,{\mathcal C}'$ are \emph{transposition-equivalent}, that is, ${\mathcal C}'=X{\mathcal C}\mbox{$^{\sf T}$} Y$ for some $X,Y\in{\rm GL}_n({\mathbb F})$, where ${\mathcal C}\mbox{$^{\sf T}$}=\{M\mbox{$^{\sf T}$}\mid M\in{\mathcal C}\}$. Then ${\mathcal M}_{\rm c}({\mathcal C}\mbox{$^{\sf T}$})$ and ${\mathcal M}_{\rm c}({\mathcal C}')$ are equivalent via~$\beta$, where~$\beta$ is as in~(a). \end{alphalist} \end{prop} Occasionally, we will consider ${\mathbb F}_{q^m}$-linear rank-metric codes, which we introduce as follows. Recall that ${\mathbb F}={\mathbb F}_q$. Let~$\omega$ be a primitive element of the field extension~${\mathbb F}_{q^m}$ and $\psi:{\mathbb F}_{q^m}\longrightarrow{\mathbb F}_q^m$ be the coordinate map with respect to the basis $(1,\omega,\ldots,\omega^{m-1})$. Extending~$\psi$ entry-wise, we obtain, for any~$n$, an isomorphism $\Psi:{\mathbb F}_{q^m}^n\longrightarrow{\mathbb F}_q^{n\times m}$ that maps $(c_1,\ldots,c_n)$ to the matrix with rows $\psi(c_1),\ldots,\psi(c_n)$.
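As a small concrete instance of the map~$\Psi$ (our own illustration): take $q=2$, $m=2$ and let~$\omega$ be a root of $x^2+x+1$, so that $(1,\omega)$ is a basis of ${\mathbb F}_4$ over ${\mathbb F}_2$ and $\omega^2=\omega+1$. Then \[ \psi(0)=(0,0),\quad\psi(1)=(1,0),\quad\psi(\omega)=(0,1),\quad\psi(\omega^2)=(1,1), \] and, for instance, for $n=3$, \[ \Psi(\omega,\,\omega^2,\,1)=\begin{pmatrix}0&1\\1&1\\1&0\end{pmatrix}\in{\mathbb F}_2^{3\times2}. \]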
Let $f=x^m-\sum_{i=0}^{m-1}f_ix^i\in{\mathbb F}[x]$ be the minimal polynomial of~$\omega$ over~${\mathbb F}$ and \begin{equation}\label{e-Deltaf} \Delta_f= \begin{pmatrix} &1& & \\ & &\ddots & \\ & & & 1\\ f_0&f_1&\cdots&f_{m-1}\end{pmatrix}\in{\rm GL}_m({\mathbb F}_q) \end{equation} be the companion matrix of~$f$. Given a code ${\mathcal C}\leq\mbox{$\F^{n\times m}$}$, we can now describe linearity of the code $\Psi^{-1}({\mathcal C})\subseteq{\mathbb F}_{q^m}^n$ over ${\mathbb F}_{q^m}$ or a subfield thereof as follows; see also~\cite[Sec.~3]{GLJ21}. The matrix~$X$ below simply allows for arbitrary bases of ${\mathbb F}_{q^m}$ over~${\mathbb F}_q$. \begin{defi}\label{D-Fqn-linear} Let ${\mathcal C}\leq{\mathbb F}_q^{n\times m}$ be a rank-metric code (hence an ${\mathbb F}_q$-linear subspace). Let $s$ be a divisor of~$m$ and set $M=(q^m-1)/(q^s-1)$. Then ${\mathcal C}$ is \emph{right ${\mathbb F}_{q^s}$-linear} if there exists an $X\in{\rm GL}_m({\mathbb F}_q)$ such that the code ${\mathcal C} X$ is invariant under right multiplication by~$\Delta_f^M$. \emph{Left linearity} over ${\mathbb F}_{q^n}$ and its subfields is defined analogously. \end{defi} Obviously, the qualifiers left/right are needed only in the case where ${\mathbb F}_{q^s}$ is a subfield of both ${\mathbb F}_{q^n}$ and ${\mathbb F}_{q^m}$. The following is now easy to verify; see also~\cite[Sec.~3]{GLJ21}. \begin{rem}\label{R-rhoFqnlinear} Let~${\mathbb F}_{q^s}$ be a subfield of ${\mathbb F}_{q^m}$ and ${\mathcal C}\leq{\mathbb F}_q^{n\times m}$ be a right ${\mathbb F}_{q^s}$-linear rank-metric code. Then $\mu=m/s$ is a denominator of the column polymatroid ${\mathcal M}_{\rm c}({\mathcal C})$. In particular, for $s=m$ the poly\-matroid ${\mathcal M}_{\rm c}({\mathcal C})$ is a $q$-matroid. These are exactly the $q$-matroids studied in \cite{JuPe18}. \end{rem} It should be noted that ${\mathcal M}_{\rm c}({\mathcal C})$ may be a $q$-matroid even if~${\mathcal C}$ is not right ${\mathbb F}_{q^m}$-linear. Indeed, in \cref{P-MRDCodes}(a) we saw already that the polymatroid associated to an MRD code in $\mbox{$\F^{n\times m}$}$ is a $q$-matroid if $n\leq m$. Furthermore, \cite[Thm.~5.5]{GLJ21} shows that if ${\mathcal C}$ induces a $q$-matroid, then so does every shortening and puncturing of~${\mathcal C}$. Duality of $q$-PM{}s (see \cref{T-DualqPM}) corresponds to duality of codes. \begin{theo}[\mbox{\cite[Thm.~8.1]{GJLR19}}]\label{T-TraceDual} Let ${\mathcal C}\leq{\mathbb F}^{n\times m}$ be a rank-metric code and ${\mathcal C}^\perp\leq{\mathbb F}^{n\times m}$ be its dual. Then ${\mathcal M}_{\rm c}({\mathcal C})^*={\mathcal M}_{\rm c}({\mathcal C}^\perp)$, where ${\mathcal M}_{\rm c}({\mathcal C})^*$ is the dual of ${\mathcal M}_{\rm c}({\mathcal C})$ w.r.t.\ the standard dot product on~${\mathbb F}^n$. \end{theo} Another instance of the interplay between polymatroid duality and rank-metric codes has been presented in \cite[Thms.~5.3 and 5.5]{GLJ21}. Therein, it is shown that contraction and deletion of $q$-PM{}s are mutually dual and correspond to shortening and puncturing of rank-metric codes. We close this section with the following example of two MRD codes whose associated column polymatroids are not equivalent. The parameters are as in \cref{P-MRDCodes}(b), where the interval $[m-d+2,\,n-d]$ is not empty. We will return to this example later when discussing independent spaces. 
\begin{exa}[\mbox{\cite[Ex.~3.15]{GLJ21}}]\label{E-RowMatMRD} In ${\mathbb F}_2^{5\times 2}$ consider the codes ${\mathcal C}_1=\subspace{A_1,\dots,A_5}$ and ${\mathcal C}_2=\subspace{B_1,\ldots,B_5}$, where \begin{align*} &A_1=\begin{pmatrix}1&1\\1&0\\0&0\\1&0\\0&0\end{pmatrix},\ A_2=\begin{pmatrix}1&1\\1&1\\1&0\\0&1\\0&0\end{pmatrix},\ A_3=\begin{pmatrix}0&0\\0&0\\1&1\\0&0\\0&1\end{pmatrix},\ A_4=\begin{pmatrix}0&0\\0&1\\0&0\\0&0\\1&1\end{pmatrix},\ A_5=\begin{pmatrix}1&0\\0&1\\1&1\\0&0\\0&1\end{pmatrix},\\[2ex] &B_1=\begin{pmatrix}1&0\\0&1\\0&0\\0&0\\0&0\end{pmatrix},\ B_2=\begin{pmatrix}0&0\\1&0\\0&1\\0&0\\0&0\end{pmatrix},\ B_3=\begin{pmatrix}0&0\\0&0\\1&0\\0&1\\0&0\end{pmatrix},\ B_4=\begin{pmatrix}0&0\\0&0\\0&0\\1&0\\0&1\end{pmatrix},\ B_5=\begin{pmatrix}0&1\\0&0\\0&1\\0&0\\1&0\end{pmatrix}. \end{align*} Both codes are MRD with rank distance~$d=2$, and~${\mathcal C}_2$ is actually a (${\mathbb F}_{2^5}$-linear) Gabidulin code. Consider the $q$-PM{}s ${\mathcal M}_{\rm c}({\mathcal C}_1)=({\mathbb F}^5,\rho_{\rm c}^1)$ and ${\mathcal M}_{\rm c}({\mathcal C}_2)=({\mathbb F}^5,\rho_{\rm c}^2)$. From \cref{P-MRDCodes}(b) we know that $\rho_{\rm c}^1(V)=\rho_{\rm c}^2(V)=\dim V$ for $\dim V\leq 1$ and $\rho_{\rm c}^1(V)=\rho_{\rm c}^2(V)=5/2$ if $\dim V\geq 4$. As for the $2$-dimensional subspaces of~${\mathbb F}_2^5$, it turns out that the map $\rho_{\rm c}^1$ assumes the value~$1$ exactly once and the values $3/2$ and $2$ exactly 28 and 126 times, respectively, whereas~$\rho_{\rm c}^2$ assumes the values~$3/2$ and~$2$ exactly 31 and 124 times, respectively, and never takes the value~$1$. Similar differences occur for the $3$-dimensional subspaces. Thus ${\mathcal M}_{\rm c}({\mathcal C}_1)$ and ${\mathcal M}_{\rm c}({\mathcal C}_2)$ are not equivalent. \end{exa} \section{Independent Spaces} \label{S-Indep} We now return to general $q$-PM{}s and introduce independent spaces. We show that the collection of independent spaces satisfies properties analogous to those of independent spaces in $q$-matroids. However, different from the latter, they do not fully determine the $q$-PM. Only if we also take the rank values of the independent spaces into account, can we fully recover the $q$-PM. This will be dealt with in the next section. Considering the theory of classical matroids and $q$-matroids, one may be inclined to declare a space~$V$ in a $q$-PM{} $(E,\rho)$ independent if $\rho(V)=\dim V$. While this is indeed the right notion for $q$-matroids, it turns out to be too restrictive for $q$-PM{}s: in many $q$-PM{}s the only subspace satisfying $\rho(V)=\dim V$ is the zero space. Nonetheless, the property $\rho(V)=\dim V$ turns out to play a conceptual role (see also~\cite{BCIJ21}), and we will return to it in Section~\ref{S-SpSp}, where we will call such spaces strongly independent. The following definition of independence is inspired by \cite[Cor.~11.1.2]{Ox11}, which deals with classical polymatroids. \begin{defi}\label{D-Indep} Let ${\mathcal M}=(E,\rho)$ be a $q$-PM{} with denominator~$\mu$ (which need not be principal). A space $I\in{\mathcal V}(E)$ is called \emph{$\mu$-independent} if \[ \rho(J)\geq \frac{\dim J}{\mu} \text{ for all subspaces }J\leq I. \] $I$ is called \emph{$\mu$-dependent} if it is not $\mu$-independent. A \emph{$\mu$-circuit} is a $\mu$-dependent space for which all proper subspaces are $\mu$-independent. A $1$-dimensional $\mu$-dependent space is called a \emph{$\mu$-loop}. 
We define ${\mathcal I}_\mu={\mathcal I}_\mu({\mathcal M})=\{I\in{\mathcal V}(E)\mid I\text{ is $\mu$-independent}\}$. If~$\mu$ is the principal denominator of~${\mathcal M}$, we may skip the quantifier~$\mu$ and simply use independent, dependent, loop, circuit, and~${\mathcal I}$. \end{defi} Clearly, if $\hat{\mu}$ is the principal denominator of~${\mathcal M}$, then $\hat{\mu}$-independence implies $\mu$-independence for any denominator~$\mu$ of~${\mathcal M}$; see \cref{R-PrDenominator}. \begin{rem}\label{R-IndepMatroid} Let $(E,\rho)$ be a $q$-PM{}. Then for all $V\in{\mathcal V}(E)$ \begin{equation}\label{e-rhoVdimV} \rho(V)=\dim V\Longrightarrow \rho(W)=\dim W\text{ for all }W\leq V. \end{equation} Indeed, writing $V=W\oplus Z$ for some complement~$Z$ of~$W$, we obtain from submodularity $\dim V=\rho(V)\leq\rho(W)+\rho(Z)\leq\dim W+\dim Z=\dim V$, and thus we have equality everywhere. As a consequence, the condition $\rho(V)=\dim V$ implies $\mu$-independence for every denominator~$\mu$, and for $q$-matroids our notion of independence (which is $1$-independence) coincides with independence as defined in \cite[Def.~2]{JuPe18}, that is: $V \text{ is $1$-independent }\Longleftrightarrow \rho(V)=\dim V$. \end{rem} We continue by discussing basic properties of independent spaces. A crucial difference from $q$-matroids is the following: While for $q$-matroids a space is independent iff its rank value assumes the maximal possible value, this is not the case for $q$-PM{}s. More generally, for $q$-PM{}s independence is not characterized by the rank value of the given space; see the examples below. In this context we would also like to point out that the inequality $\rho(I)\geq\dim I/\mu$ is not preserved under taking subspaces, which is why the condition for subspaces is built into our definition. \begin{exa}\label{E-IndSpacesRanks} \begin{alphalist} \item Let ${\mathbb F}={\mathbb F}_2$ and consider the code ${\mathcal C}\leq{\mathbb F}^{3\times 3}$ generated by \[ \begin{pmatrix}0&1&0\\0&0&1\\0&0&1\end{pmatrix},\ \begin{pmatrix}0&1&1\\0&0&0\\0&0&1\end{pmatrix},\ \begin{pmatrix}0&1&1\\1&0&0\\0&1&0\end{pmatrix}. \] Let ${\mathcal M}={\mathcal M}_{\rm c}({\mathcal C})=({\mathbb F}^3,\rho_{\rm c})$ be the associated column polymatroid. Then for all $V\in{\mathcal V}({\mathbb F}^3)\setminus0$ \[ \rho_{\rm c}(V)=\left\{\begin{array}{cl} 2/3 &\text{ if $\dim V=1$ or $V=\subspace{e_1+e_2,\,e_3}$,}\\[.4ex] 1 &\text{ otherwise.}\end{array}\right. \] This shows that $3$ is the principal denominator and ${\mathcal I}({\mathcal M})={\mathcal V}({\mathbb F}^3)$, that is, all spaces are independent. In particular, all $2$-dimensional spaces are independent, even though they do not all assume the same rank value. \item Dependent spaces may have a larger rank value than independent spaces of the same dimension. For instance, let ${\mathbb F}={\mathbb F}_2$ and ${\mathcal C}\leq{\mathbb F}^{5\times 3}$ be the code generated by the standard basis matrices $E_{11},\,E_{12},\,E_{23},\,E_{32},\,E_{41},\,E_{42}$. In the column polymatroid ${\mathcal M}_{\rm c}({\mathcal C})=({\mathbb F}^5,\rho_{\rm c})$ the subspace $I=\subspace{e_2,e_3}$ is independent with $\rho_{\rm c}(I)=2/3$, and the subspace $V=\subspace{e_1+e_2,e_5}$ is dependent with $\rho_{\rm c}(V)=1$. \end{alphalist} \end{exa} Let us consider the independent spaces of some $q$-PM{}s from the previous section. \begin{exa}\label{E-IndSpaces} \begin{alphalist} \item Consider the uniform $q$-matroid ${\mathcal U}={\mathcal U}_{k}(E)$ from \cref{E-DualUnif}.
Then ${\mathcal I}_1({\mathcal U})=\{V\in{\mathcal V}(E)\mid \dim(V)\leq k\}$. \item Let $m\geq n$ and ${\mathcal C}\leq\mbox{$\F^{n\times m}$}$ be an MRD code with rank distance~$d$. \cref{P-MRDCodes}(a) yields ${\mathcal M}_{\rm c}({\mathcal C})={\mathcal U}_{n-d+1}({\mathbb F}^n)$ and thus ${\mathcal I}_1({\mathcal M}_{\rm c}({\mathcal C}))=\{V\in{\mathcal V}({\mathbb F}^n)\mid \dim V\leq n-d+1\}$ by~(a). \item Let $m\leq n$ and ${\mathcal C}\leq\mbox{$\F^{n\times m}$}$ be an MRD code with rank distance~$d$. With the aid of \cref{P-MRDCodes}(b) one verifies that ${\mathcal I}_m({\mathcal M}_{\rm c}({\mathcal C}))={\mathcal V}({\mathbb F}^n)$, that is, every subspace is $m$-independent. In particular, the two non-equivalent $q$-PM{}s ${\mathcal M}_{\rm c}({\mathcal C}_1)$ and ${\mathcal M}_{\rm c}({\mathcal C}_2)$ in \cref{E-RowMatMRD} trivially have the same collection of $2$-independent spaces (and~$2$ is the principal denominator). This shows that the collection of independent spaces does not determine the $q$-PM. \item Part~(c) shows another striking difference to $q$-matroids. If all spaces of a $q$-matroid ${\mathcal M}=(E,\rho)$ are independent (i.e., $1$-independent), then~${\mathcal M}$ is the uniform matroid ${\mathcal U}_{n}(E)$, where $n=\dim E$. If furthermore ${\mathcal M}={\mathcal M}_{\rm c}({\mathcal C})$ for some right ${\mathbb F}_{q^m}$-linear code ${\mathcal C}\leq\mbox{$\F^{n\times m}$}$, then ${\mathcal C}=\mbox{$\F^{n\times m}$}$. This follows immediately from \cref{R-IndepMatroid} together with~\eqref{e-rhoV}. \end{alphalist} \end{exa} We continue with some basic facts. \begin{rem}\label{R-IndJuPe} Let ${\mathcal M}=(E,\rho)$ be a $q$-PM{} with denominator~$\mu$. \begin{alphalist} \item The zero subspace of~$E$ is $\mu$-independent. \item Every dependent space~$V$ contains a circuit: take any subspace $W$ of~$V$ of smallest dimension satisfying $\rho(W)<\dim W/\mu$ (which clearly exists). \item Let $V\in{\mathcal V}(E)$ be a $\mu$-circuit. Then $\mu\rho(V)=\dim V-1=\mu\rho(W)$ for all hyperplanes~$W$ in~$V$. Indeed, independence of~$W$ along with (R2) tells us that \[ \dim V-1=\dim W \leq\mu\rho(W)\leq\mu\rho(V)<\dim V. \] Thus we have equality since $\mu\rho$ takes integer values. \cref{E-FqslinearCircuits} below shows that not every subspace~$V$ satisfying $\mu\rho(V)=\dim V-1=\mu\rho(W)$ for all its hyperplanes~$W$ is a circuit. \item Part~(c) implies that if~$\dim V=1$, then $V$ is a $\mu$-loop iff $\rho(V)=0$. As a consequence, loops do not depend on the choice of denominator. On the other hand, if $\mu_1<\mu_2$ are distinct denominators of~${\mathcal M}$, then the $\mu_2$-circuits with positive rank value are distinct from the $\mu_1$-circuits. But every $\mu_2$-circuit is also $\mu_1$-dependent and thus contains a $\mu_1$-circuit by (b). \item Let $x_1,\ldots,x_t\in E$ be linearly independent vectors such that $\subspace{x_i}$ is a $\mu$-loop for all $i\in[t]$. Then $\rho(\subspace{x_1}+\ldots+\subspace{x_t})=0$. For $t=2$ this is a consequence of the submodularity~(R3) because $\rho(\subspace{x_1}+\subspace{x_2})=\rho(\subspace{x_1}+\subspace{x_2})+\rho(\subspace{x_1}\cap\subspace{x_2}) \leq\rho(\subspace{x_1})+\rho(\subspace{x_2})$ (see also \cite[Lem.~11]{JuPe18} for $q$-matroids), and the general case follows similarly via induction. \end{alphalist} \end{rem} \begin{exa}\label{E-FqslinearCircuits} Let ${\mathbb F}={\mathbb F}_2$. 
Consider the primitive polynomial $f=x^4+x+1\in{\mathbb F}[x]$ and let $U=\Delta_f^5$, where $\Delta_f$ is defined as in~\eqref{e-Deltaf}. Thus $U\in{\rm GL}_4({\mathbb F})$. Let \[ A_1=\begin{pmatrix}0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1\end{pmatrix},\quad A_2=\begin{pmatrix}1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1\end{pmatrix},\quad A_3=\begin{pmatrix}1 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0\end{pmatrix} \] and define ${\mathcal C}=\subspace{A_1,\,A_2,\,A_3,\,A_1U,\,A_2U,\,A_3U}$. Then~${\mathcal C}$ is a right ${\mathbb F}_{2^2}$-linear rank-metric code of dimension~$6$ and rank distance~$3$ (see \cref{D-Fqn-linear} for right ${\mathbb F}_{2^2}$-linearity). The principal denominator of~${\mathcal M}_{\rm c}({\mathcal C})$ is $\mu=2$. There exist 497 $\mu$-circuits, one of which has dimension~$1$ and all others have dimension~$4$. An additional 169 spaces~$V$ satisfy $\mu\rho_{\rm c}(V)=\dim V-1$ (all of them have dimension~$2$, $3$, or~$4$), and 97 of them also satisfy $\mu\rho_{\rm c}(W)=\dim V-1$ for all hyperplanes~$W$ of~$V$. \end{exa} Independence behaves well under scaling-equivalence if the denominator is taken into account. \begin{rem}\label{R-Rescaling} Let $\dim E_1=\dim E_2$ and let ${\mathcal M}_i=(E_i,\rho_i),\,i=1,2$, be $q$-PM{}s with principal denominators~$\mu_i$. Suppose ${\mathcal M}_1$ and~${\mathcal M}_2$ are scaling-equivalent, say $\rho_2(\alpha(V))=a\rho_1(V)$ for all $V\in{\mathcal V}(E_1)$, where $a\in{\mathbb Q}_{>0}$ and $\alpha:E_1\longrightarrow E_2$ is an isomorphism. Then $a^{-1}\mu_1\rho_2(\alpha(V))=\mu_1\rho_1(V)\in{\mathbb N}_0$ and thus $a^{-1}\mu_1$ is a denominator of ${\mathcal M}_2$. Hence $a^{-1}\mu_1=k\mu_2$ for some $k\in{\mathbb N}$; see Remark~\ref{R-PrDenominator}. Similarly, $a\mu_2=\hat{k}\mu_1$ for some $\hat{k}\in{\mathbb N}$. Thus $k=\hat{k}=1$, and hence $a\mu_2=\mu_1$. Now we have $\mu_2\rho_2(\alpha(V))=\mu_1\rho_1(V)$ for all $V\in{\mathcal V}(E_1)$ and therefore \begin{align*} V\text{ is $\mu_1$-independent in ${\mathcal M}_1$}&\Longleftrightarrow \mu_1\rho_1(W)\geq\dim W\text{ for all subspaces }W\leq V\\ &\Longleftrightarrow\mu_2\rho_2(\alpha(W))\geq\dim\alpha(W) \text{ for all subspaces }\alpha(W)\leq\alpha(V)\\ &\Longleftrightarrow \alpha(V)\text{ is $\mu_2$-independent in ${\mathcal M}_2$}. \end{align*} \end{rem} Before we continue with our study of independent spaces, we briefly focus on $q$-PM{}s induced by rank-metric codes and discuss the relation between (in-)dependent spaces and code properties. Obviously, the relation depends on the chosen denominator. \begin{rem}\label{R-RMCMatroidIndSp} Let ${\mathcal C}\leq\mbox{$\F^{n\times m}$}$ be a rank-metric code with column polymatroid ${\mathcal M}=({\mathbb F}^n,\rho_{\rm c})$. \begin{alphalist} \item The most obvious information about the code is contained in \cref{R-ddperp}: the rank distance~$d^\perp$ of the dual code is the smallest dimension of a subspace~$V$ with $\rho_{\rm c}(V)<\dim V$; equivalently, $d^\perp-1$ is the largest integer~$\ell$ for which all $\ell$-dimensional subspaces~$V$ satisfy $\rho_{\rm c}(V)=\dim V$. (Indeed, by~\eqref{e-rhoV} the column space of a minimum-rank-weight codeword of~${\mathcal C}^\perp$ is a $d^\perp$-dimensional subspace~$V$ with $\rho_{\rm c}(V)<\dim V$.) From \cref{R-IndepMatroid} we know that all subspaces~$V$ with $\rho_{\rm c}(V)=\dim V$ are $\mu$-independent for every denominator~$\mu$ of~${\mathcal M}$. \item \cref{D-Indep} shows that the condition for independence relaxes with increasing denominator. For this reason there may be few dependent spaces if the principal denominator of~${\mathcal M}$ is~$m$.
To make this more precise, let us consider the circuits for a given denominator~$\mu$. From \cref{R-IndJuPe}(c) we know that if $V$ is a $\mu$-circuit of dimension~$v$, then $\mu\rho_{\rm c}(V)=v-1$, i.e., $\rho_{\rm c}(V)=(v-1)/\mu$. The definition of $\rho_{\rm c}$ thus implies \[ V\text{ is a $\mu$-circuit}\Longrightarrow \dim{\mathcal C}(V^\perp,{\rm c})=\dim{\mathcal C}-\frac{m(v-1)}{\mu} \] (recall from \cref{R-PrDenominator} that $\mu$ is a divisor of~$m$, hence the right hand side is an integer). The right hand side means that~${\mathcal C}$ is equivalent to a code \[ \tilde{{\mathcal C}}=\bigg\langle \begin{pmatrix}A_{1,1}\\0\end{pmatrix},\ldots,\begin{pmatrix}A_{1,k-\gamma+1}\\0\end{pmatrix}, \begin{pmatrix}A_{1,k-\gamma+2}\\ A_{2,k-\gamma+2}\end{pmatrix},\ldots,\begin{pmatrix}A_{1,k}\\ A_{2,k}\end{pmatrix}\bigg\rangle, \] where $k=\dim{\mathcal C}$, $\gamma=\frac{m(v-1)}{\mu}+1$, and $A_{1,j}\in{\mathbb F}^{(n-v)\times m},\,A_{2,j}\in{\mathbb F}^{v\times m}$. In particular, if~${\mathcal M}$ contains an $m$-loop, then $\gamma=1$ and~${\mathcal C}$ is equivalent to a row-degenerate code: the last row of all matrices in~$\tilde{{\mathcal C}}$ is zero. More generally, for any fixed dimension~$v$, the existence of a $v$-dimensional $\mu$-circuit becomes more restrictive with increasing~$\mu$. \end{alphalist} \end{rem} We now return to general $q$-PM{}s. In order to derive the main result about the collection of $\mu$-independent spaces, we will make use of an auxiliary $q$-matroid. The following construction mimics the corresponding one in \cite[Prop.~11.1.7]{Ox11} for classical polymatroids. \begin{theo}\label{T-AuxMatroid} Let~${\mathcal M}=(E,\rho)$ be a $q$-PM{} with denominator~$\mu$. Define the map \[ r_{\rho,\mu}:{\mathcal V}(E)\longrightarrow{\mathbb N}_0,\quad V\longmapsto \min\{\mu\rho(W)+\dim V-\dim W\mid W\leq V\}. \] Then ${\mathcal Z}:={\mathcal Z}_{{\mathcal M},\mu}:=(E, r_{\rho,\mu})$ is a $q$-matroid, and the independent spaces of ${\mathcal Z}$ coincide with the $\mu$-independent spaces of~${\mathcal M}$, i.e., \[ {\mathcal I}_\mu({\mathcal M})={\mathcal I}({\mathcal Z})=\{I\in{\mathcal V}(E)\mid r_{\rho,\mu}(I)=\dim I\}. \] \end{theo} \begin{proof} We make use of the induced integer $\rho$-function $\tau=\mu\rho$. Thus $\tau(V)=\mu\rho(V)\leq\mu\dim V$ for all $V\in{\mathcal V}(E)$. Clearly the map $r:=r_{\rho,\mu}$ takes integer values. We now verify (R1) -- (R3) of \cref{D-PMatroid} for $r$. \\[.5ex] (R1) Clearly $r(V)\geq0$ for all~$V$. Furthermore, $r(V)\leq \tau(0)+\dim(V)-\dim(0)=\dim V$. \\[.5ex] (R2) Let $V\leq V'$. Without loss of generality we may assume $\dim V'=\dim V+1$ and thus $V'=V\oplus\subspace{x}$ for some $x\in E$. Assume by contradiction that $r(V)>r(V')$. Then there exists $W'\leq V'$ such that \begin{equation}\label{e-rineq} \tau(W')+\dim V'-\dim W'<\tau(W)+\dim V-\dim W\ \text{ for all }W\leq V. \end{equation} Clearly $W'\not\leq V$ and thus we may write $W'=X\oplus\subspace{y}$ for some $X\leq V$ and $y\not\in V$. Then $\dim X=\dim V-\dim V'+\dim W'$ and~\eqref{e-rineq} leads to \[ \tau(W)\!-\!\dim W>\tau(W')\!+\!\dim V'\!-\!\dim W'\!-\!\dim V=\tau(W')\!-\!\dim X\text{ for all }W\leq V. \] Choosing $W=X$, we arrive at $\tau(X)>\tau(W')$ and thus $\rho(X)>\rho(W')$. Since $X\leq W'$ this contradicts that~$\rho$ is a rank function. All of this establishes (R2) for the map~$r$. \\[.5ex] (R3) Let $V,V'\in{\mathcal V}(E)$. Choose $W\leq V,\,W'\leq V'$ such that \[ r(V)=\tau(W)+\dim V-\dim W\ \text{ and }\ r(V')=\tau(W')+\dim V'-\dim W'.
\] Then $W+W'\leq V+V'$ and $W\cap W'\leq V\cap V'$ and therefore \begin{align*} r(V\!+\!V')+r(V\cap V')&\leq\tau(W+W')+\dim(V+V')-\dim(W+W')\\ &\quad\ +\tau(W\cap W')+\dim(V\cap V')-\dim(W\cap W')\\ &=\tau(W\!+\!W')+\tau(W\!\cap\! W')+\dim V-\dim W+\dim V'-\dim W'\\ &\leq \tau(W)+\tau(W')+\dim V-\dim W+\dim V'-\dim W'\\ &=r(V)+r(V'), \end{align*} where the second inequality follows from~(R3) for~$\rho$. This establishes~(R3) for the map~$r$. \\[.5ex] It remains to investigate the $\mu$-independent spaces. From \cref{D-Indep},~(R1), and \cref{R-IndepMatroid} we obtain immediately \begin{align*} \text{$V$ is $\mu$-independent }&\Longleftrightarrow \tau(W)\geq\dim W\text{ for all }W\leq V\\ &\Longleftrightarrow \tau(W)+\dim V-\dim W\geq\dim V\text{ for all }W\leq V\\ &\Longleftrightarrow r(V)\geq\dim V\\ &\Longleftrightarrow r(V)=\dim V, \end{align*} and this establishes the stated result. \end{proof} As the next example shows, if~${\mathcal M}$ is a $q$-matroid, then it coincides with its auxiliary $q$-matroid if we choose the principal denominator $\mu=1$. \begin{exa}\label{E-Auxmu1} Let~${\mathcal M}=(E,\rho)$ be a $q$-matroid, thus $\rho$ takes only integer values. \begin{alphalist} \item Fix $\mu=1$. Then $r_{\rho,1}(V)=\min\{\rho(W)+\dim V-\dim W\mid W\leq V\}$ for $V\in{\mathcal V}(E)$. We show now that $r_{\rho,1}=\rho$. Fix $V\in{\mathcal V}(E)$. Choosing $W=V$ we obtain $r_{\rho,1}(V)\leq \rho(V)$. On the other hand, for every $W\leq V$ there exists $Z\leq V$ such that $W\oplus Z=V$. Thus submodularity (R3) applied to~$\rho$ yields \[ \rho(V)=\rho(W+Z)\leq\rho(W)+\rho(Z)\leq\rho(W)+\dim Z =\rho(W)+\dim V-\dim W. \] This shows $\rho(V)\leq r_{\rho,1}(V)$. \item If we choose $\mu>1$, then there is in general no obvious relation between~$\rho$ and~$r_{\rho,\mu}$. Consider for example the following $q$-matroid. Let $n\geq3$ and fix a $2$-dimensional subspace $X\in{\mathcal V}({\mathbb F}^n)$. Set $\rho(X)=1$ and $\rho(V)=\min\{\dim V,\,2\}$ for $V\neq X$. One can check straightforwardly that ${\mathcal M}=({\mathbb F}^n,\rho)$ is a $q$-matroid (this also follows from \cite[Prop.~4.7]{GLJ21}). Choosing $\mu=2$, one obtains $r_{\rho,2}(V)=\min\{\dim V,\,4\}$ for all $V\in{\mathcal V}({\mathbb F}^n)$, and thus the $q$-matroids ${\mathcal M}$ and ${\mathcal Z}_{{\mathcal M},2}$ are not equivalent. Furthermore, ${\mathcal Z}_{{\mathcal M},2}={\mathcal Z}_{{\mathcal U},2}$, where ${\mathcal U}$ is the uniform $q$-matroid ${\mathcal U}_2({\mathbb F}^n)$, and therefore the auxiliary $q$-matroid ${\mathcal Z}_{{\mathcal M},\mu}$ of a $q$-PM~${\mathcal M}$ does not uniquely determine~${\mathcal M}$ (even if one specifies the denominator). \end{alphalist} \end{exa} \cref{T-AuxMatroid} shows that the $\mu$-independent spaces of the $q$-PM{} ${\mathcal M}$ coincide with the independent spaces of the auxiliary $q$-matroid ${\mathcal Z}_{{\mathcal M},\mu}$. Therefore, all properties of independent spaces of $q$-matroids that do not involve the value of the rank function hold true for $q$-PM{}s as well. Such properties have been derived in \cite[Thm.~8]{JuPe18}. Before formulating our result, we introduce the following important notions. \begin{defi}\label{D-MaxInd} Let~${\mathcal M}=(E,\rho)$ be a $q$-PM{} with denominator~$\mu$. For $V\in{\mathcal V}(E)$ we define \[ {\mathcal I}_\mu(V)=\{I\in{\mathcal I}_\mu({\mathcal M})\mid I\leq V\}. \] A subspace $\hat{I}\in{\mathcal I}_\mu(V)$ is said to be a \emph{$\mu$-basis} of~$V$ if $\dim\hat{I}=\max\{\dim I\mid I\in{\mathcal I}_\mu(V)\}$.
We denote by ${\mathcal B}_\mu(V)$ the set of all $\mu$-bases of~$V$. The $\mu$-bases of~$E$ are called the \emph{$\mu$-bases} of~${\mathcal M}$. \end{defi} A $\mu$-basis of~$V$ is thus a maximal-dimensional $\mu$-independent subspace of~$V$. The rank values of $\mu$-bases will be discussed in the next section. Note that the sets ${\mathcal I}_\mu(V)$ and ${\mathcal B}_\mu(V)$ are non-empty for every $V\in{\mathcal V}(E)$ since clearly $\{0\}$ is $\mu$-independent. We are now ready to present the following properties of the collection of $\mu$-independent spaces of a $q$-PM. The result is an immediate consequence of \cref{T-AuxMatroid} together with \cite[Thm.~8]{JuPe18}. \begin{cor}\label{C-Indep} Let~${\mathcal M}=(E,\rho)$ be a $q$-PM{} with denominator~$\mu$ and set ${\mathcal I}_\mu:={\mathcal I}_\mu({\mathcal M})$. Then \begin{mylist} \item[(I1)\hfill] ${\mathcal I}_\mu\neq\emptyset$, in fact $\{0\}\in{\mathcal I}_{\mu}$. \item[(I2)\hfill] If $I\in{\mathcal I}_\mu$ and $J\leq I$, then $J\in{\mathcal I}_\mu$. \item[(I3)\hfill] If $I,\,J\in{\mathcal I}_\mu$ and $\dim I<\dim J$, then there exists $x\in J\setminus I$ such that $I\oplus\subspace{x}\in{\mathcal I}_\mu$. \item[(I4)\hfill] Let $V,\,W\in{\mathcal V}(E)$ and $I\leq V,\,J\leq W$ be $\mu$-bases of $V$ and~$W$, respectively. Then there exists a $\mu$-basis of $V+W$ that is contained in $I+J$. \end{mylist} \end{cor} Note that~(I3) implies that for any $V\in{\mathcal V}(E)$ and any $\hat{I}\in{\mathcal I}_\mu(V)$ we have \[ \hat{I}\text{ is dimension-maximal in }{\mathcal I}_\mu(V)\Longleftrightarrow\hat{I}\text{ is inclusion-maximal in }{\mathcal I}_\mu(V). \] In other words,~$\hat{I}$ is a $\mu$-basis of~$V$ if and only if there exists no $J\in{\mathcal I}_\mu(V)$ such that $\hat{I}\lneq J$. Thus, the $\mu$-bases of~$V$ are exactly the maximal elements of the poset $({\mathcal I}_\mu(V),\leq)$. Since the independent spaces of the auxiliary $q$-matroid ${\mathcal Z}_{{\mathcal M},\mu}$ coincide with those of the $q$-PM{}~${\mathcal M}$, the same is true for the dependent spaces, circuits, and bases. As a consequence, any property about the collection of these spaces in $q$-matroids holds true for $q$-PM{}s as well. Let us illustrate this for the dependent spaces and bases. The following properties for $q$-matroids have been established in \cite[Thm.~63 and Lem.~66]{BCJ21} and therefore apply to $q$-PM{}s as well. \begin{cor}\label{C-DepSpacesCircuits} Let~${\mathcal M}=(E,\rho)$ be a $q$-PM{} with denominator~$\mu$. Let ${\mathcal D}_\mu$ and ${\mathcal B}_\mu$ be the collection of $\mu$-dependent spaces and $\mu$-bases of~${\mathcal M}$, respectively. Then~${\mathcal D}_\mu$ and~${\mathcal B}_\mu$ satisfy \begin{mylist2} \item[(D1)\hfill] $\{0\}\not\in{\mathcal D}_\mu$. \item[(D2)\hfill] If $D_1\in{\mathcal D}_\mu$ and $D_2\in{\mathcal V}(E)$ such that $D_1\subseteq D_2$, then $D_2\in{\mathcal D}_\mu$. \item[(D3)\hfill] Let $D_1,\,D_2\in{\mathcal D}_\mu$ be such that $D_1\cap D_2\not\in{\mathcal D}_\mu$. Then every subspace of $D_1+D_2$ of codimension~$1$ is in ${\mathcal D}_\mu$. \item[(B1)\hfill] ${\mathcal B}_\mu\neq\emptyset$. \item[(B2)\hfill] Let $B_1,\,B_2\in {\mathcal B}_\mu$ be such that $B_1\leq B_2$. Then $B_1=B_2$. \item[(B3)\hfill] Let $B_1,\,B_2\in{\mathcal B}_\mu$ and $A$ be a subspace of~$B_1$ of codimension~$1$ such that $B_1\cap B_2\leq A$. Then there exists a $1$-dimensional subspace~$Y$ of~$B_2$ such that $A+Y\in{\mathcal B}_\mu$.
\item[(B4)\hfill] Let $A_1,\,A_2\in{\mathcal V}(E)$ and let $I_1,\,I_2$ be maximal-dimensional intersections of some members of~${\mathcal B}_{\mu}$ with~$A_1$ and $A_2$, respectively. Then there exists a maximal-dimensional intersection of a member of~${\mathcal B}_{\mu}$ with $A_1+A_2$ that is contained in $I_1+I_2$. \end{mylist2} \end{cor} In \cite[Cor.~65]{BCJ21} and \cite[Thm.~37]{JuPe18} it has been shown that any collection of subspaces satisfying (D1)--(D3) (resp.~(B1)--(B4)) is the collection of dependent spaces (resp.\ bases) of a unique $q$-matroid. Similar statements hold true for circuits in $q$-matroids (see \cite[Cor.~68]{BCJ21}). The following examples illustrate that none of these characterizations extends to $q$-PM{}s -- even if we take the rank values into account. \begin{exa}\label{E-MaxRho} \begin{alphalist} \item Consider the rank-metric codes~${\mathcal C}_1,\,{\mathcal C}_2$ in \cref{E-RowMatMRD} and the associated column polymatroids ${\mathcal M}_{\rm c}({\mathcal C}_1)$ and ${\mathcal M}_{\rm c}({\mathcal C}_2)$. Both have principal denominator~$2$, and in both cases every subspace of ${\mathbb F}^5$ is $2$-independent. Thus the only $2$-basis of ${\mathcal M}_{\rm c}({\mathcal C}_i)$ is ${\mathbb F}^5$ for $i=1,2$, and in both $q$-PM{}s it has rank value $5/2$. Yet, the two $q$-PM{}s are not equivalent. This shows that the bases of a $q$-PM{}, along with their rank values, do not uniquely determine the $q$-PM. Trivially, this example also shows that the circuits and dependent spaces along with their rank values do not determine the $q$-PM{}. For later purposes we also note that in both ${\mathcal M}_{\rm c}({\mathcal C}_1)$ and ${\mathcal M}_{\rm c}({\mathcal C}_2)$ all 4-dimensional subspaces and plenty of 3-dimensional subspaces have rank value~$5/2$ as well. \item Let ${\mathbb F}={\mathbb F}_2$ and consider the codes ${\mathcal C}=\subspace{A_1,A_2,A_3},\,{\mathcal C}'=\subspace{A_1,A_2,A_3'}\leq{\mathbb F}^{4\times 3}$, where \[ A_1=\begin{pmatrix}0&0&0\\1&0&0\\0&1&1\\0&1&0\end{pmatrix},\quad A_2=\begin{pmatrix}1&0&1\\1&0&0\\0&0&0\\1&1&1\end{pmatrix},\quad A_3=\begin{pmatrix}0&1&0\\0&1&1\\0&1&0\\0&1&1\end{pmatrix},\quad A_3'=\begin{pmatrix}1&0&0\\0&0&0\\1&0&1\\1&0&1\end{pmatrix}. \] Consider the associated polymatroids ${\mathcal M}={\mathcal M}_{\rm c}({\mathcal C})=({\mathbb F}^4,\rho_{\rm c})$ and ${\mathcal M}'={\mathcal M}_{\rm c}({\mathcal C}')=({\mathbb F}^4,\rho_{\rm c}')$. Both have principal denominator~$3$, and in both $q$-PM{}s the space~${\mathbb F}^4$ is the only dependent space. Hence~${\mathcal M}$ and~${\mathcal M}'$ share the same bases, namely all $3$-dimensional spaces. Moreover, $\rho_{\rm c}(V)=1=\rho_{\rm c}'(V)$ for all bases~$V$. Yet,~${\mathcal M}$ and~${\mathcal M}'$ are not equivalent: in~${\mathcal M}$ the rank value~$1$ is assumed by 33 subspaces of dimension~$2$, whereas in ${\mathcal M}'$ it is assumed by 32 subspaces of dimension~$2$ (in both $q$-PM{}s, 4 subspaces of dimension~$1$ have rank value~$1$ as well). \end{alphalist} \end{exa} On the positive side, in the next section we will show that we can fully recover a $q$-PM{} from its independent spaces and their rank values. Recall from \cref{E-IndSpaces}(c) that the independent spaces alone (without their rank values) do not uniquely determine the $q$-PM. \section{The Rank Function on Independent Spaces}\label{S-IndRank} We begin by showing that for a $q$-PM{} the rank function is fully determined by its values on the independent spaces.
We then go on to prove that all bases of a given subspace have the same rank value, and that this value coincides with the rank value of the subspace. This result allows us to investigate whether a collection of spaces satisfying (I1)--(I4) from \cref{C-Indep} gives rise to a $q$-PM{} whose collection of independent spaces is exactly the initial collection. Since the rank value of independent spaces in a $q$-PM{} is not as rigid as in a $q$-matroid, we also need to specify a meaningful rank function on the collection of spaces. All of this results in Theorems~\ref{T-ExtIndSpaces} and \ref{T-ClosureIndep}. \begin{theo}\label{T-rhomax} Let~${\mathcal M}=(E,\rho)$ be a $q$-PM{} with denominator~$\mu$. Then \[ \rho(V)=\max\{\rho(I)\mid I\in{\mathcal I}_{\mu}(V)\}\text{ for all }V\in{\mathcal V}(E). \] \end{theo} \begin{proof} Let $V\in{\mathcal V}(E)$. Set $\rho'(V)=\max\{\rho(I)\mid I\in{\mathcal I}_{\mu}(V)\}$. Thanks to~(R2), $\rho'(V)\leq\rho(V)$, and it remains to establish $\rho(V)\leq \rho'(V)$. Let $\hat{I}\in{\mathcal I}_\mu(V)$ be of maximal possible dimension such that $\rho(\hat{I})=\rho'(V)$. If~$V$ is $\mu$-independent, then $\hat{I}=V$ and we are done. Thus let~$V$ be $\mu$-dependent. \\[.6ex] \underline{Case 1:} $\dim\hat{I}=\dim V-1$.\\ Then $V=\hat{I}\oplus\subspace{x}$ for any $x\in V\setminus\hat{I}$ and submodularity of~$\rho$ implies $\rho(V)\leq \rho(\hat{I})+\rho(\subspace{x})$. As before, we use the integer $\rho$-function $\tau=\mu\rho$. Let~$s$ be minimal such that there exists an $s$-dimensional $\mu$-dependent subspace of~$V$, say~$W$. Such a space exists by the $\mu$-dependence of~$V$. Then \cref{D-Indep} implies that $\tau(W)<\dim W$. By~(I2), $W$ is not contained in~$\hat{I}$, and since~$\hat{I}$ is a hyperplane of~$V$, we may write $W=(W\cap\hat{I})\oplus\subspace{z}$ for some $z\in V\setminus0$. Then $W\cap\hat{I}$ is $\mu$-independent and \[ \dim W-1=\dim(W\cap\hat{I})\leq\tau(W\cap\hat{I})\leq\tau(W)<\dim W, \] and hence $\tau(W\cap\hat{I})=\tau(W)$ because~$\tau$ takes integer values. Using that $V=W+\hat{I}$, we obtain by submodularity of~$\tau$ \[ \tau(V)\leq\tau(W)+\tau(\hat{I})-\tau(W\cap\hat{I})=\tau(\hat{I})=\mu\rho'(V). \] All of this shows that $\rho(V)=\rho'(V)$, as desired. \\[.6ex] \underline{Case 2:} $\dim \hat{I}<\dim V-1$.\\ Let $x\in V\setminus\hat{I}$. Using that $\rho'(W)\leq\rho'(Z)$ for any subspaces $W,Z$ such that $W\leq Z$, we obtain \[ \rho(\hat{I})=\rho'(\hat{I})\leq\rho'(\hat{I}\oplus\subspace{x})\leq\rho'(V)=\rho(\hat{I}), \] and hence $\rho(\hat{I})=\rho'(W)$, where $W:=\hat{I}\oplus\subspace{x}$. Note that~$W$ is $\mu$-dependent thanks to the maximality of~$\hat{I}$. Furthermore, $\dim\hat{I}=\dim W-1$. Therefore Case~1 yields $\rho'(W)=\rho(W)$. Now we have arrived at $\rho(\hat{I})=\rho(\hat{I}+\subspace{x})$ for all $x\in V$, and \cref{P-RankVx}(a) tells us that $\rho(\hat{I})=\rho(V)$. Since $\rho(\hat{I})=\rho'(V)$, this concludes the proof. \end{proof} \cref{C-Indep} and \cref{T-rhomax} generalize one direction of \cite[Thm.~8]{JuPe18}, where the same properties are proven for the independent spaces of $q$-matroids. Our next goal is to generalize the other direction of \cite[Thm.~8]{JuPe18}, namely to characterize the collections of spaces along with given rank values that give rise to a $q$-PM{} having those spaces as independent spaces. The following result will be crucial. It shows that the rank value of any $\mu$-basis of a subspace~$V$ equals the rank value of~$V$.
Recall from \cref{E-MaxRho} that the converse is not true: not every $I\in{\mathcal I}_\mu(V)$ such that $\rho(I)=\rho(V)$ is a $\mu$-basis of~$V$. \begin{theo}\label{T-Basisrho} Let~${\mathcal M}=(E,\rho)$ be a $q$-PM{} with denominator~$\mu$. Let $V\in{\mathcal V}(E)$. Then \[ \rho(I)=\rho(V)\text{ for all }I\in{\mathcal B}_\mu(V). \] In particular, all $\mu$-bases of~$V$ have the same rank value. \end{theo} \begin{proof} Throughout the proof we will omit the subscript~$\mu$. The result is clearly true if $V$ is independent. Thus, let $V$ be dependent. Set $t=\dim V$. In order to avoid denominators we again use the integer $\rho$-function $\tau:=\mu\rho$. First of all, there exists \begin{equation}\label{e-FixJ} J\in{\mathcal B}(V) \text{ such that }\tau(J)=\tau(V). \end{equation} Indeed, by \cref{T-rhomax} there exists $J\in{\mathcal I}(V)$ such that $\tau(J)=\tau(V)$, and by Property~(I3) along with the monotonicity of~$\tau$ we may assume that $J\in{\mathcal B}(V)$. Note that by \cref{D-MaxInd} all spaces in ${\mathcal B}(V)$ have the same dimension, which we denote by~$s$. \\[.6ex] \underline{Case 1:} $s= t-1$. Let $I\in{\mathcal B}(V)$. We want to show that $\tau(I)=\tau(V)$. Choose a circuit, say~$C$, in~$V$. Then $\tau(C)=\dim C-1$ (see \cref{R-IndJuPe}(c)). Clearly, $C\not\subseteq I$ by Property~(I2) and thus $C+I=V$ thanks to $\dim I=\dim V-1$. Furthermore, $C\cap I$ is independent, being a subspace of~$I$, and thus $\tau(C\cap I)\geq\dim(C\cap I)$. Using submodularity, we obtain \begin{align*} \tau(V)=\tau(C+I)&\leq \tau(C)+\tau(I)-\tau(C\cap I)\\ &\leq \dim C-1+\tau(I)-\dim(C\cap I)\\ &=\tau(I)+\dim(C+I)-(\dim I+1)\\ &=\tau(I), \end{align*} where the last step follows from $C+I=V$ and $\dim I+1=\dim V$. All of this implies $\tau(I)=\tau(V)$, and thus all bases of~$V$ have the same rank value. \\[.6ex] \underline{Case 2:} $s<t-1$. We will show that \begin{equation}\label{e-tauIJ} \tau(I)=\tau(J)\text{ for all }I\in{\mathcal B}(V), \end{equation} where~$J$ is as in~\eqref{e-FixJ}. We induct on the codimension of $I\cap J$ in~$I$. Let $\dim(I\cap J)=s-r$, thus $0\leq r\leq s$. The case $r=0$ is trivial. \\ i) Let $r=1$. Then $I=(I\cap J)\oplus\subspace{x}$ for some $x\in I\setminus J$. Set $W=J\oplus\subspace{x}$. Then $W\leq V$ and $\dim W=\dim J +1$. Thus $W$ is dependent by maximality of $J$. Hence~$I$ and~$J$ are elements of ${\mathcal B}(W)$, and Case~1 implies $\tau(I)=\tau(J)$. \\ ii) Assume now $\tau(I)=\tau(J)$ for all $I\in{\mathcal B}(V)$ such that $\dim(I\cap J)\geq s-(r-1)$ for some $r\geq2$. Let $I\in{\mathcal B}(V)$ be such that $\dim(I\cap J)=s-r$. Choose $K\leq I$ and $x\in I\setminus J$ such that $I=(I\cap J)\oplus K\oplus\subspace{x}$ and set $I_1=(I\cap J)\oplus K$. Then~$I_1$ is independent and $\dim I_1=\dim I-1=\dim J-1$. Thanks to Property~(I3) there exists $y\in J\setminus I_1$ such that \[ I':=I_1\oplus\subspace{y}\in{\mathcal B}(V). \] Now we have three bases, $I',\,I,\,J$, of~$V$. We show first $\tau(I)=\tau(I')$. Note that $y\not\in I$: otherwise $y\in I\cap J\leq I_1$, contradicting the choice of~$y$. Hence we have the subspace $W:=I\oplus\subspace{y}$ of~$V$, which must be dependent due to the maximality of~$I$. Furthermore, $I,\,I'\leq W$ and $\dim I'=\dim I=\dim W-1$, and therefore $\tau(I)=\tau(I')$ thanks to Case~1. Next, we show $\tau(I')=\tau(J)$. In order to do so, note that $I'=(I\cap J)\oplus K\oplus\subspace{y}$, where $y\in J$. Thus $\dim(I'\cap J)\geq s-(r-1)$ and the induction hypothesis yields $\tau(I')=\tau(J)$. All of this establishes~\eqref{e-tauIJ} and concludes the proof.
\end{proof}
\begin{rem}\label{R-BasesqMatroid}
In a $q$-matroid~${\mathcal M}=(E,\rho)$ a subspace $V\in{\mathcal V}(E)$ satisfies
\[ \text{$V$ is independent and } \rho(V)=\rho(E)\Longleftrightarrow V\text{ is a basis of }{\mathcal M}. \]
The forward direction is in fact the definition of independence in \cite[Def.~2]{JuPe18}. Thanks to \cref{T-Basisrho} the implication ``$\Longleftarrow$'' holds true for $q$-PM{}s as well. However, ``$\Longrightarrow$'' is not true, as the $q$-PM{}s in \cref{E-MaxRho} show.
\end{rem}
We are now ready to provide a characterization of the pairs $({\mathcal I},\tilde{\rho})$ of collections~${\mathcal I}$ of subspaces and rank functions~$\tilde{\rho}$ on~${\mathcal I}$ that give rise to a $q$-PM{} whose collection of independent spaces is~${\mathcal I}$ and whose rank function restricts to~$\tilde{\rho}$. Clearly,~${\mathcal I}$ has to satisfy (I1)--(I4) from \cref{C-Indep}, and~$\tilde{\rho}$ must satisfy (R1)--(R3). However, for independence we also need the rank condition from \cref{D-Indep}. This leads to (R1$'$) in \cref{T-ExtIndSpaces} below. Furthermore, since the sum of independent spaces need not be independent, we have to adjust~(R3) and replace $\tilde{\rho}(I+J)$ by $\max\{\tilde{\rho}(K)\mid K\in{\mathcal I},\,K\leq I+J\}$, thereby accounting for \cref{T-rhomax}. This results in the submodularity condition (R3$'$) below. Since one can easily find examples showing that (R1$'$)--(R3$'$) are not sufficient to guarantee submodularity of the extended rank function (defined in~\eqref{e-Extrho}), we also have to enforce \cref{T-Basisrho}. This leads to condition~(R4$'$), which states that for any space~$V$ all maximal subspaces of~$V$ that belong to~${\mathcal I}$ have the same rank value. As we will see, all these conditions together guarantee submodularity of the extended rank function, and the spaces in~${\mathcal I}$ are independent in the resulting $q$-PM{}. However, the $q$-PM{} may have additional independent subspaces; see \cref{E-MoreIndSpaces} below. In order to prevent this, we need a natural closure property. This will be spelled out in \cref{T-ClosureIndep}.
\begin{theo}\label{T-ExtIndSpaces}
Let ${\mathcal I}$ be a subset of~${\mathcal V}(E)$. For $V\in{\mathcal V}(E)$ set ${\mathcal I}(V)=\{I\in{\mathcal I}\mid I\leq V\}$ and denote by $\mbox{$\cI_{\text{max}}$}(V)$ the set of subspaces in~${\mathcal I}(V)$ of maximal dimension. Suppose~${\mathcal I}$ satisfies the following.
\begin{mylist}
\item[(I1)\hfill] $\{0\}\in{\mathcal I}$.
\item[(I2)\hfill] If $I\in{\mathcal I}$ and $J\leq I$, then $J\in{\mathcal I}$.
\item[(I3)\hfill] If $I,\,J\in{\mathcal I}$ and $\dim I<\dim J$, then there exists $x\in J\setminus I$ such that $I\oplus\subspace{x}\in{\mathcal I}$.
\item[(I4)\hfill] Let $V,\,W\in{\mathcal V}(E)$ and $I\in\mbox{$\cI_{\text{max}}$}(V),\,J\in\mbox{$\cI_{\text{max}}$}(W)$. Then there exists a space $K\in\mbox{$\cI_{\text{max}}$}(V+W)$ that is contained in $I+J$.
\end{mylist}
Furthermore, let $\tilde{\rho}:{\mathcal I}\longrightarrow{\mathbb Q}$ and $\mu\in{\mathbb Q}_{>0}$ such that $\mu\tilde{\rho}(I)\in{\mathbb Z}$ for all $I\in{\mathcal I}$. Suppose~$\tilde{\rho}$ satisfies the following.
\begin{mylist3}
\item[(R1$'$)\hfill] $0\leq \mu^{-1}\dim I\leq\tilde{\rho}(I)\leq\dim I$ for all $I\in{\mathcal I}$.
\item[(R2$'$)\hfill] If $I,J\in{\mathcal I}$ such that $I\leq J$, then $\tilde{\rho}(I)\leq\tilde{\rho}(J)$.
\item[(R3$'$)\hfill] For all $I,J\in{\mathcal I}$ we have $\max\{\tilde{\rho}(K)\mid K\in{\mathcal I}(I+J)\}+\tilde{\rho}(I\cap J)\leq \tilde{\rho}(I)+\tilde{\rho}(J)$. \item[(R4$'$)\hfill] For all $V\in{\mathcal V}(E)$ and $I,J\in\mbox{$\cI_{\text{max}}$}(V)$ we have $\tilde{\rho}(I)=\tilde{\rho}(J)$. \end{mylist3} Define the map \begin{equation}\label{e-Extrho} \rho:{\mathcal V}(E)\longrightarrow{\mathbb Q},\quad V\longmapsto \max\{\tilde{\rho}(I)\mid I\in{\mathcal I}(V)\}. \end{equation} Then ${\mathcal M}=(E,\rho)$ is a $q$-PM{} with denominator~$\mu$, and ${\mathcal I}\subseteq{\mathcal I}_\mu({\mathcal M})$. \end{theo} Note that thanks to~(I3) the set $\mbox{$\cI_{\text{max}}$}(V)$ is the set of maximal elements in the poset $({\mathcal I}(V),\leq)$. Furthermore, by (R2$'$) and (R4$'$) we have for all $V\in{\mathcal V}(E)$ \[ \rho(V)=\tilde{\rho}(I)\ \text{ for any }\ I\in\mbox{$\cI_{\text{max}}$}(V). \] \begin{proof} It is clear that~$\mu$ is a denominator of~$\rho$. We have to show that~$\rho$ satisfies (R1)--(R3) from \cref{D-PMatroid}. \\ (R1) Let $V\in{\mathcal V}(E)$ and $I\in{\mathcal I}$ such that $I\leq V$ and $\tilde{\rho}(I)=\rho(V)$. Then $0\leq\tilde{\rho}(I)\leq\dim I\leq\dim V$, which establishes~(R1). \\ (R2) Let $V,W\in{\mathcal V}(E)$ be such that $V\leq W$. Let $I\in{\mathcal I}$ be such that $I\leq V$ and $\tilde{\rho}(I)=\rho(V)$. Then $I\leq W$ and the definition of~$\rho$ implies $\rho(W)\geq\tilde{\rho}(I)=\rho(V)$, as desired. \\ (R3) Let $V,W\in{\mathcal V}(E)$. Choose $K\in\mbox{$\cI_{\text{max}}$}(V\cap W)$. Applying~(I3) repeatedly, we can find $I\in\mbox{$\cI_{\text{max}}$}(V)$ and $J\in\mbox{$\cI_{\text{max}}$}(W)$ such that $K\leq I$ and $K\leq J$. By~(I4) there exists $H\in\mbox{$\cI_{\text{max}}$}(V+W)$ such that $H\leq I+J$. Now (R4$'$) implies \[ \tilde{\rho}(I)=\rho(V),\quad \tilde{\rho}(J)=\rho(W),\quad \tilde{\rho}(H)=\rho(I+J)=\rho(V+W),\quad \tilde{\rho}(K)=\tilde{\rho}(I\cap J)=\rho(V\cap W). \] From (R3$'$) we obtain $\rho(I+J)+\tilde{\rho}(I\cap J)\leq \tilde{\rho}(I)+\tilde{\rho}(J)$, and we finally arrive at \[ \rho(V+W)+\rho(V\cap W)= \tilde{\rho}(H)+ \tilde{\rho}(K)= \rho(I+J)+\tilde{\rho}(I\cap J)\leq \tilde{\rho}(I)+\tilde{\rho}(J) =\rho(V)+\rho(W), \] as desired. Finally, (R1$'$) shows that the spaces in ${\mathcal I}$ are $\mu$-independent, thus ${\mathcal I}\subseteq{\mathcal I}_\mu({\mathcal M})$. \end{proof} The following example shows that in general the $q$-PM{} ${\mathcal M}$ from \cref{T-ExtIndSpaces} has more independent spaces than~${\mathcal I}$. \begin{exa}\label{E-MoreIndSpaces} Consider the $q$-PM{} ${\mathcal M}=({\mathbb F}^3,\rho_{\rm c})$ from \cref{E-IndSpacesRanks}(a). We have seen already that ${\mathcal I}_3({\mathcal M})={\mathcal V}({\mathbb F}^3)$. Define the set ${\mathcal I}=\{V\in{\mathcal V}({\mathbb F}^3)\mid V\neq \subspace{e_1+e_2,e_3}\text{ and }V\neq{\mathbb F}^3\}$ and let $\tilde{\rho}=\rho_{\rm c}|_{{\mathcal I}}$. One easily verifies that $({\mathcal I},\tilde{\rho})$ satisfies (I1)--(I4) and (R1$'$)--(R4$'$). Furthermore, the extension~$\rho$ defined in \cref{T-ExtIndSpaces} equals $\rho_{\rm c}$ and thus the induced $q$-PM{} $({\mathbb F}^3,\rho)$ equals~${\mathcal M}$. Now we have ${\mathcal I}\subsetneq{\mathcal I}_3({\mathcal M})$. \end{exa} We can easily force equality ${\mathcal I}={\mathcal I}_\mu({\mathcal M})$ by adding the following natural closure property. \begin{theo}\label{T-ClosureIndep} Let the pair $({\mathcal I},\tilde{\rho})$ be as in \cref{T-ExtIndSpaces}. 
Suppose $({\mathcal I},\tilde{\rho})$ satisfies (I1)--(I4) and (R1$'$)--(R4$'$) as well as the following closure property: \begin{mylist} \item[(C)\hfill] If $V\in{\mathcal V}(E)$ is such that \begin{romanlist} \item all proper subspaces of~$V$ are in~${\mathcal I}$, \item $\max\{\tilde{\rho}(I)\mid I\in{\mathcal I}(V)\}\geq\mu^{-1}\dim V$, \end{romanlist} then $V$ is in~${\mathcal I}$. \end{mylist} Then ${\mathcal I}={\mathcal I}_\mu({\mathcal M})$ for the $q$-PM{} ${\mathcal M}$ from \cref{T-ExtIndSpaces}. \end{theo} Note that by (I2) and (R1$'$), any subspace $V\in{\mathcal I}$ satisfies the properties in~(i) and~(ii). \begin{proof} Thanks to \cref{T-ExtIndSpaces} it remains to show that any $V\in{\mathcal I}_\mu({\mathcal M})$ is in~${\mathcal I}$. Recall that $\rho(V)=\max\{\tilde{\rho}(I)\mid I\in{\mathcal I}(V)\}$. We induct on $\dim V$. \\ 1) Let $\dim V=1$. Then $\rho(V)\geq\mu^{-1}\dim V$ holds true by the definition of $\mu$-independence, hence~(ii) is satisfied. Property~(i) is trivially satisfied by~(I1). Now~(C) implies $V\in{\mathcal I}$. \\ 2) Let $\dim V=r$ and assume that all subspaces $V\in{\mathcal I}_\mu({\mathcal M})$ of dimension at most $r-1$ are in~${\mathcal I}$. Since~$V\in{\mathcal I}_\mu({\mathcal M})$, the same is true for all its subspaces. Hence all proper subspaces are in~${\mathcal I}$ by induction hypothesis. Again, $\rho(V)\geq\mu^{-1}\dim V$ is true by $\mu$-independence and thus Property~(C) implies that $V\in{\mathcal I}$. \end{proof} \section{Minimal Spanning Spaces and Maximal Independent Spaces}\label{S-SpSp} By definition, the maximal independent spaces in a $q$-PM{} are the bases. In this section, we introduce spanning spaces and show that -- differently from $q$-matroids -- bases are not the same as minimal spanning spaces. However, spanning spaces turn out to be the dual notion to strongly independent spaces, which we also define in this section. This result may be regarded as the generalization of the duality result for bases in $q$-matroids. The latter states that for a $q$-matroid~${\mathcal M}$ a space $B$ is a basis of~${\mathcal M}$ if and only if $B^\perp$ is a basis of~${\mathcal M}^*$. We show that in fact this equivalence characterizes $q$-matroids within the class of $q$-PM{}s. \begin{defi}\label{D-SpSp} Let ${\mathcal M}=(E,\rho)$ be a $q$-PM{}. A subspace $V\in{\mathcal V}(E)$ is called a \emph{spanning space} if $\rho(V)=\rho(E)$. Furthermore,~$V$ is a \emph{minimal spanning space} if it is a spanning space and no proper subspace is a spanning space. \end{defi} For $q$-matroids the minimal spanning spaces are exactly the bases. \begin{prop}\label{R-SpSp} Suppose ${\mathcal M}=(E,\rho)$ is a $q$-matroid. A subspace is a basis if and only if it is a minimal spanning space. As a consequence, all minimal spanning spaces of~${\mathcal M}$ have the same dimension. \end{prop} \begin{proof} ``$\Rightarrow$'' Let $V$ be a basis of~${\mathcal M}$. Then $\dim V=\rho(V)=\rho(E)$. For every proper subspace $W\lneq V$ we have $\rho(W)\leq\dim W<\dim V=\rho(V)=\rho(E)$, hence~$W$ is not a spanning space. This proves minimality of~$V$. ``$\Leftarrow$'' Let now~$V$ be a minimal spanning space. By definition $\rho(V)=\rho(E)$, and thus it remains to show that~$V$ is independent (see \cref{R-BasesqMatroid}). Suppose~$V$ is dependent. Then there exists a maximal independent subspace~$W$ of~$V$, and thanks to \cref{T-Basisrho} we have $\rho(W)=\rho(V)=\rho(E)$. This contradicts minimality of~$V$. \end{proof} The last result is not true for $q$-PM{}s. 
\begin{exa}\label{E-MinSpSp} \begin{alphalist} \item The $q$-PM{} ${\mathcal M}_{\rm c}({\mathcal C}_1)$ from \cref{E-MaxRho}(a) has 126 minimal spanning spaces and they all have dimension~$3$, whereas the bases are the $4$-dimensional spaces. \item The $q$-PM{} ${\mathcal M}$ from \cref{E-MaxRho}(b) has~$4$ (resp.~$10$) minimal spanning spaces of dimension~$1$ (resp.~$2$), whereas the bases are the $3$-dimensional subspaces. \end{alphalist} \end{exa} There exist $q$-PM{}s that are not $q$-matroids and yet the bases coincide with the minimal spanning spaces (for instance the $q$-PM{} ${\mathcal M}_{\rm c}({\mathcal C})$ in \cite[Ex.~4.2]{GLJ21}). Thus the equivalence in \cref{R-SpSp} does not characterize $q$-matroids. The following describes the relation between bases and minimal spanning spaces in a $q$-PM. \begin{prop}\label{P-mSpSpBasis} Let ${\mathcal M}=(E,\rho)$ be a $q$-PM{} with denominator~$\mu$. \begin{alphalist} \item A minimal spanning space is $\mu$-independent. \item Every $\mu$-basis of~${\mathcal M}$ contains a minimal spanning space and every minimal spanning space is contained in a $\mu$-basis. \end{alphalist} \end{prop} \begin{proof} (a) Let~$V$ be a minimal spanning space. If~$V$ is $\mu$-dependent, then~$V$ contains a maximal $\mu$-independent subspace~$W$, and \cref{T-Basisrho} implies $\rho(W)=\rho(V)=\rho(E)$. This contradicts minimality of~$V$. \\ (b) is clear. \end{proof} Recall the dual $q$-PM{} from \cref{T-DualqPM}. Our next result shows that bases are compatible with duality in ``the expected way'' if and only if the $q$-PM{} is a $q$-matroid. Part~(a) has been established in \cite{JuPe18}. \begin{prop}\label{P-BasesDuality} Let ${\mathcal M}=(E,\rho)$ be a $q$-PM{}. Fix a non-degenerate symmetric bilinear form $\inner{\cdot}{\cdot}$ on~$E$ and let ${\mathcal M}^*=(E,\rho^*)$ be the dual of~${\mathcal M}$ w.r.t.\ $\inner{\cdot}{\cdot}$. \begin{alphalist} \item If ${\mathcal M}$ is a $q$-matroid, then for every basis~$B$ of~${\mathcal M}$ the orthogonal space $B^\perp$ is a basis of~${\mathcal M}^*$. \item Let~$\mu$ be a denominator of~${\mathcal M}$. Suppose there exists a $\mu$-basis $B$ of~${\mathcal M}$ such that the orthogonal space $B^\perp$ is a $\mu$-basis of~${\mathcal M}^*$. Then ${\mathcal M}$ is a $q$-matroid. \end{alphalist} \end{prop} \begin{proof} (a) has been proven in \cite[Thm.~45]{JuPe18}. \\ (b) Let $B$ be a $\mu$-basis of~${\mathcal M}$ and $B^\perp$ be a $\mu$-basis of~${\mathcal M}^*$. Then $\rho(B)=\rho(E)$ and thus $\rho^*(B^\perp)=\dim B^\perp+\rho(B)-\rho(E)=\dim B^\perp$. \cref{T-Basisrho} implies that every basis $\hat{B}$ of ${\mathcal M}^*$ satisfies $\rho^*(\hat{B})=\rho^*(B^\perp)=\dim B^\perp=\dim\hat{B}$. Now~\eqref{e-rhoVdimV} yields $\rho^*(I)=\dim I$ for all $\mu$-independent spaces~$I$ of~${\mathcal M}^*$. Hence the dual rank function $\rho^*$ is integer-valued on the $\mu$-independent spaces. But then the entire rank function~$\rho^*$ is integer-valued thanks to \cref{T-rhomax}. Now $\rho=\rho^{**}$ is also integer-valued, which means that ${\mathcal M}$ is a $q$-matroid. \end{proof} The above result has an interesting consequence. Recall from \cref{T-AuxMatroid} the auxiliary $q$-matroid ${\mathcal Z}_{{\mathcal M},\mu}$ of a $q$-PM{} ${\mathcal M}$ with denominator~$\mu$. Part~(b) above implies that if~${\mathcal M}$ is a $q$-PM{} that is not a $q$-matroid, then ${\mathcal Z}_{{\mathcal M}^*,\mu}\not\approx{\mathcal Z}_{{\mathcal M},\mu}^*$. 
Indeed, from \cref{T-AuxMatroid} we know that a subspace $B\in{\mathcal V}(E)$ is a $\mu$-basis in~${\mathcal M}$ if and only if it is a basis in ${\mathcal Z}_{{\mathcal M},\mu}$. Thanks to \cref{P-BasesDuality}(a) the latter is equivalent to $B^\perp$ being a basis of ${\mathcal Z}_{{\mathcal M},\mu}^*$. But by \cref{P-BasesDuality}(b) $B^\perp$ is not a basis of~${\mathcal M}^*$, and thus not of ${\mathcal Z}_{{\mathcal M}^*,\mu}$. As we will show next, Part~(a) can be generalized to $q$-PM{}s if one replaces bases by minimal spanning spaces in~${\mathcal M}$ and by maximally strongly independent spaces in~${\mathcal M}^*$, where the latter are defined as follows.
\begin{defi}\label{D-StrInd}
Let ${\mathcal M}=(E,\rho)$ be a $q$-PM{}. A subspace $V\in{\mathcal V}(E)$ is \emph{strongly independent} if $\rho(V)=\dim V$. A subspace $V\in{\mathcal V}(E)$ is \emph{maximally strongly independent} if it is strongly independent and not properly contained in a strongly independent subspace.
\end{defi}
From \cref{R-IndepMatroid} we know that strongly independent subspaces are $\mu$-independent for every denominator~$\mu$ of~${\mathcal M}$. Furthermore, in $q$-matroids strong independence coincides with independence. We remark that strongly independent subspaces play a crucial role in \cite{BCIJ21} for the construction of subspace designs. Now we have the following simple result. It shows that spanning spaces and strongly independent spaces are mutually dual. This may be regarded as a generalization of \cite[Prop.~83]{BCJ21} and \cite[Thm.~45]{JuPe18}, where the same results have been established for $q$-matroids.
\begin{prop}\label{P-StrIndepDuality}
Let ${\mathcal M}$ and~${\mathcal M}^*$ be as in \cref{P-BasesDuality} and let $V\in{\mathcal V}(E)$. Then $V$ is a (minimal) spanning space in ${\mathcal M}$ if and only if $V^\perp$ is (maximally) strongly independent in ${\mathcal M}^*$.
\end{prop}
\begin{proof}
This follows immediately from $\rho^*(V^\perp)=\dim V^\perp+\rho(V)-\rho(E)$.\end{proof}
In summary, for $q$-matroids the notions `minimal spanning space', `maximally strongly independent space', and `basis' coincide, whereas these are distinct concepts for $q$-PM{}s. We close the paper with a few remarks on the properties -- or rather lack thereof -- of strongly independent spaces and spanning spaces in $q$-PM{}s. In particular, neither the maximally strongly independent spaces nor the minimal spanning spaces behave as well as bases. This is not surprising since neither collection consists of subspaces of constant dimension (see \cref{E-MinSpSp}(b) along with duality).
\begin{rem}
\begin{alphalist}
\item Let ${\mathcal M}$ be a $q$-PM{} and $\tilde{{\mathcal I}}$ be its collection of strongly independent subspaces. The duality in \cref{P-StrIndepDuality} implies that $\tilde{{\mathcal I}}$ is invariant under taking subspaces, i.e., it satisfies~(I2) of \cref{C-Indep} (see also \cref{R-IndepMatroid} and \cite[Lem.~6]{BCIJ21}). It is not hard to find examples showing that $\tilde{{\mathcal I}}$ does not satisfy~(I3) and~(I4).
\item In \cref{C-DepSpacesCircuits} we listed conditions (B1)--(B4) that are satisfied by the bases of a $q$-PM{}. As discussed earlier, they give rise to a cryptomorphic definition of $q$-matroids, but not of $q$-PM{}s (see \cref{E-MaxRho}). For $q$-PM{}s neither the maximally strongly independent subspaces nor the minimal spanning spaces satisfy (B3) or (B4).
\end{alphalist}
\end{rem}
\bibliographystyle{abbrv}
{ "timestamp": "2021-05-06T02:06:58", "yymm": "2105", "arxiv_id": "2105.01802", "language": "en", "url": "https://arxiv.org/abs/2105.01802" }
\section{Introduction} Unsupervised clustering is a fundamental task that aims to partition data into distinct groups of similar ones without explicit human labels. Deep clustering methods~\citep{xie2016unsupervised,wu2019deep} exploit the representations learned by neural networks and have recently made large progress on high-dimensional data. Often, such methods learn the representations for clustering by reconstructing data in a deterministic~\citep{ghasedi2017deep} or probabilistic manner~\citep{jiang2016variational}, or by maximizing certain mutual information~\citep{hu2017learning,ji2019invariant} (see Sec.~\ref{sec:related_work} for the related work). Despite the recent advances, the representations learned by existing methods may not be discriminative enough to capture the semantic similarity between images. The instance discrimination task~\citep{wu2018unsupervised,he2019momentum} in contrastive learning has shown promise in pre-training representations transferable to downstream tasks through fine-tuning. Given that the literature~\citep{shiran2019multi,niu2020gatcluster} shows improved representations can lead to better clustering results, we hypothesize that instance discrimination can improve clustering performance as well. A straightforward approach is to learn a classical clustering model, e.g., spherical $k$-means~\citep{dhillon2001concept}, directly on the representations pre-trained by the task. Such a two-stage baseline can achieve excellent clustering results (please refer to Tab.~\ref{tab:fullexp}). However, because of the independence of the two stages, the baseline may not fully explore the semantic structures of the data when learning the representations and may thus lead to a sub-optimal solution for clustering. To this end, we propose Mixture of Contrastive Experts (MiCE), a unified probabilistic clustering method that utilizes the instance discrimination task as a stepping stone to improve clustering. In particular, to capture the semantic structure explicitly, we formulate a mixture of conditional models by introducing latent variables to represent the cluster labels of the images, inspired by the mixture of experts (MoE) formulation. In MiCE, each of the conditional models, also called an {\it expert}, learns to discriminate a subset of instances, while an input-dependent {\it gating function} partitions the dataset into subsets according to the latent semantics by allocating weights among experts. Further, we develop a scalable variant of the Expectation-Maximization (EM) algorithm~\citep{dempster1977maximum} for the nontrivial inference and learning problems. In the E-step, we perform approximate inference of the posterior distribution of the latent variables given the observed data. In the M-step, we maximize the evidence lower bound (ELBO) of the log conditional likelihood with respect to all parameters. Theoretically, we show that the ELBO is bounded and that the proposed EM algorithm leads to the convergence of the ELBO. Moreover, we carefully discuss the algorithmic relation between MiCE and the two-stage baseline and show that the latter is a special instance of the former in a certain extreme case. Compared with existing clustering methods, MiCE has the following advantages. (i) \textbf{Methodologically unified}: MiCE conjoins the benefits of both the discriminative representations learned by contrastive learning and the semantic structures captured by a latent mixture model within a unified probabilistic framework.
(ii) \textbf{Free from regularization}: MiCE trained by EM optimizes a single objective function, which does not require auxiliary loss or regularization terms. (iii) \textbf{Empirically effective}: Evaluated on four widely adopted natural image datasets, MiCE achieves significantly better results than a strong contrastive baseline and extensive prior clustering methods on several benchmarks without any form of pre-training. \section{Related Work}\label{sec:related_work} {\bf Deep clustering.} Inspired by the success of deep learning, many researchers propose to learn the representations and cluster assignments simultaneously~\citep{xie2016unsupervised,yang2016joint,yang2017towards} based on data reconstruction~\citep{xie2016unsupervised,yang2017towards}, pairwise relationships among instances~\citep{chang2017deep,haeusser2018associative,wu2019deep}, multi-task learning~\citep{shiran2019multi,niu2020gatcluster}, etc. The joint training framework often ends up optimizing a weighted average of multiple loss functions. However, given that a validation dataset is rarely available, tuning the weights between the losses may be impractical~\citep{ghasedi2017deep}. Recently, several methods also explore probabilistic modeling, introducing latent variables to represent the underlying classes. On one hand, deep generative approaches~\citep{jiang2016variational,dilokthanakul2016deep,chongxuan2018graphical,mukherjee2019clustergan,yang2019deep} attempt to capture the data generation process with a mixture of Gaussians prior on latent representations. However, the imposed assumptions can be violated in many cases, and capturing the true data distribution is challenging but may not be helpful to the clustering~\citep{krause2010discriminative}. On the other hand, discriminative approaches~\citep{hu2017learning,ji2019invariant,darlow2020dhog} directly model the mapping from the inputs to the cluster labels and maximize a form of mutual information, which often yields superior cluster accuracy. Despite their simplicity, the discriminative approaches discard the instance-specific details that can benefit clustering via improving the representations. Besides, MIXAE~\citep{zhang2017deep}, DAMIC~\citep{chazan2019deep}, and MoE-Sim-VAE~\citep{kopf2019mixture-of-experts} combine the mixture of experts (MoE) formulation~\citep{jacobs1991adaptive} with the data reconstruction task. However, either pre-training, regularization, or an extra clustering loss is required. {\bf Contrastive learning.} To learn discriminative representations, contrastive learning~\citep{wu2018unsupervised,oord2018representation,he2019momentum,tian2019contrastive,chen2020simple} incorporates various contrastive loss functions with different pretext tasks such as colorization~\citep{zhang2016colorful}, context auto-encoding~\citep{pathak2016context}, and instance discrimination~\citep{dosovitskiy2015discriminative,wu2018unsupervised}. The pre-trained representations often achieve promising results on downstream tasks, \textit{e.g.}, depth prediction, object detection~\citep{ren2015faster,he2017mask}, and image classification~\citep{kolesnikov2019revisiting}, after fine-tuning with human labels. In particular, InstDisc~\citep{wu2018unsupervised} learns from instance-level discrimination using NCE~\citep{gutmann2010noise} and maintains a memory bank to compute the loss function efficiently.
MoCo~\citep{he2019momentum} replaces the memory bank with a queue and maintains an EMA of the student network as the teacher network to encourage consistent representations. A concurrent work called PCL~\citep{li2020prototypical} also explores the semantic structures in contrastive learning. They add an auxiliary cluster-style objective function on top of MoCo's original objective, which differs from our method significantly. PCL requires an auxiliary $k$-means~\citep{lloyd1982least} algorithm to obtain the posterior estimates and the prototypes. Moreover, their aim of clustering is to induce transferable embeddings instead of discovering groups of data that correspond to underlying semantic classes. \section{Preliminary} We introduce the contrastive learning methods based on the instance discrimination task~\citep{wu2018unsupervised,ye2019unsupervised,he2019momentum,chen2020simple}, with a particular focus on the recent state-of-the-art method, MoCo~\citep{he2019momentum}. Let $\mathbf{X}= \{\mathbf{x}_n\}_{n=1}^N$ be a set of images without ground-truth labels, where each datapoint $\mathbf{x}_n$ is assigned a unique surrogate label $y_n \in \{1, 2, ..., N\}$ such that $y_n \ne y_j, \forall j \ne n$\footnote{The value of the surrogate label can be regarded as the index of the image.}. To learn representations in an unsupervised manner, instance discrimination considers a discriminative classifier that maps a given image to its surrogate label. Suppose that we have two encoder networks $f_{\boldsymbol{\theta}}$ and $f_{\boldsymbol{\theta^{\prime}}}$ that generate $\ell_2$-normalized embeddings $\mathbf{v}_{y_n} \in \mathbb{R}^d$ and $\mathbf{f}_n \in \mathbb{R}^d$, respectively, given the image $\mathbf{x}_n$ with the surrogate label $y_n$. We denote the parameters of the networks in the subscripts, and images are transformed by a stochastic data augmentation module before being passed to the networks (please see Appendix~\ref{sec:detail_exp}). We can model the probabilistic classifier as: \begin{align} p(\mathbf{Y} | \mathbf{X} ) = \prod_{n=1}^N p(y_n | \mathbf{x}_n) = \prod_{n = 1}^N \frac{\exp(\mathbf{v}_{y_n}^\top \mathbf{f}_n /\tau)} {\sum_{i=1}^N \exp(\mathbf{v}_{i}^\top \mathbf{f}_n /\tau)}, \label{eqn:contrasive_p_y_x} \end{align} where $\tau$ is the temperature hyper-parameter controlling the concentration level~\citep{hinton2015distilling}\footnote{Due to the summation over the entire dataset in the denominator, obtaining the maximum likelihood estimate (MLE) of the parameters can be computationally prohibitive~\citep{ma2018noise}.}. The recent contrastive learning methods mainly differ in: (1) the contrastive loss used to learn the network parameters, including NCE~\citep{wu2018unsupervised}, InfoNCE~\citep{oord2018representation}, and the margin loss~\citep{schroff2015facenet}; (2) the choice of the two encoder networks based on deep neural networks (DNNs), in which $\boldsymbol{\theta^{\prime}}$ can be an identical~\citep{ye2019unsupervised,chen2020simple}, a distinct~\citep{tian2019contrastive}, or an exponential moving average (EMA)~\citep{he2019momentum} version of $\boldsymbol{\theta}$.
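To make Eq.~(\ref{eqn:contrasive_p_y_x}) concrete, the following sketch (hypothetical; random vectors stand in for the encoder outputs, and all variable names are ours) computes the classifier probabilities for a toy dataset. Note that the denominator is an exhaustive sum over all $N$ instances, which is exactly the quantity that the queue-based approximation below avoids computing directly.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, d, tau = 8, 4, 0.5             # toy dataset size, embedding dim, temperature

def l2n(x):                       # l2-normalize along the last axis
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

v = l2n(rng.normal(size=(N, d)))  # stand-in for teacher embeddings v_{y_n}
f = l2n(rng.normal(size=(N, d)))  # stand-in for student embeddings f_n

logits = f @ v.T / tau            # logits[n, i] = v_i^T f_n / tau
p = np.exp(logits)
p /= p.sum(axis=1, keepdims=True) # p[n, i] = p(y_n = i | x_n)
print(np.diag(p))                 # p(y_n | x_n); the surrogate label is the index
\end{verbatim}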
In particular, MoCo~\citep{he2019momentum} learns by minimizing the InfoNCE loss: \begin{align} -\log \frac{\exp \left( \mathbf{v}_{y_n}^{\top} \mathbf{f}_n / \tau\right) }{\exp \left( \mathbf{v}_{y_n}^{\top} \mathbf{f}_n / \tau\right) + \sum_{i =1}^{\nu} \exp \left(\mathbf{q}_i^{\top} \mathbf{f}_n / \tau\right)},\label{eqn:infoNCE} \end{align} where $\mathbf{q} \in \mathbb{R}^{\nu \times d}$ is a queue of size $\nu \le N$ storing previous embeddings from $f_{\boldsymbol{\theta^{\prime}}}$. MoCo adopts the EMA approach to avoid rapidly changing embeddings in the queue, which would adversely impact the performance~\citep{he2019momentum}. For convenience, we refer to $f_{\boldsymbol{\theta}}$ and $f_{\boldsymbol{\theta^{\prime}}}$ as the student and teacher networks, respectively~\citep{tarvainen2017mean,tsai2019d}. In the following, we propose a unified latent mixture model based on contrastive learning to tackle the clustering task. \section{Mixture of Contrastive Experts} Unsupervised clustering aims to partition a dataset $\mathbf{X}$ with $N$ observations into $K$ clusters. We introduce the latent variable $z_n \in \{1,2,...,K\}$ to be the cluster label of the image $\mathbf{x}_n$ and naturally extend Eq.~(\ref{eqn:contrasive_p_y_x}) to Mixture of Contrastive Experts (MiCE): \begin{align} \label{eqn:mice} p(\mathbf{Y}, \mathbf{Z} | \mathbf{X}) &= \prod_{n=1}^N \prod_{k=1}^K p(y_n, z_n = k | \mathbf{x}_n)^{\mathds{1}(z_n = k)} \notag \\ &= \prod_{n=1}^N \prod_{k=1}^K p(z_n = k | \mathbf{x}_n )^{\mathds{1}(z_n = k)} p(y_n | \mathbf{x}_n, z_n = k)^{\mathds{1}(z_n = k)}, \end{align} where $\mathds{1}(\cdot)$ is an indicator function. The formulation explicitly introduces a mixture model to capture the latent semantic structures, inspired by the mixture of experts (MoE) framework~\citep{jacobs1991adaptive}. In Eq.~(\ref{eqn:mice}), $p(y_n | \mathbf{x}_n, z_n)$ is one of the \textit{experts} that learn to discriminate a subset of instances and $p(z_n | \mathbf{x}_n )$ is a \textit{gating function} that partitions the dataset into subsets according to the latent semantics by routing the given input to one or a few experts. With a divide-and-conquer principle, the experts often become highly specialized in particular images that share similar semantics, which improves the learning efficiency. Notably, MiCE is generic with respect to the choice of the underlying contrastive method~\citep{wu2018unsupervised,he2019momentum,chen2020simple}; in this paper, we focus on an instance based on MoCo. Also, please see Fig.~\ref{fig:MiCE} for an illustration of MiCE with three experts. In contrast to the original MoE used in supervised settings~\citep{jacobs1991adaptive}, our experts learn from instance-wise discrimination instead of human labels. In addition, both the gating and expert parts of MiCE are based on DNNs to fit the high-dimensional data. In the following, we elaborate on how we parameterize the gating function and the experts to fit the clustering task. For simplicity, we omit the parameters in all probability distributions in this section. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{MiCE_flow.pdf} \caption{An illustration of MiCE with three experts. Best viewed in color.} \label{fig:MiCE} \end{figure} {\bf Gating function.} The gating function organizes the instance discrimination task into $K$ simpler subtasks by weighting the experts based on the semantics of the input image.
We define $g_{\boldsymbol{\psi}}$ as an encoder network that outputs an embedding for each input image. We denote the output vector for image $\mathbf{x}_n$ as $\mathbf{g}_{n} \in \mathbb{R}^{d}$. The gating function is then parameterized as: \begin{align} p(z_n | \mathbf{x}_n) &= \frac{\exp (\boldsymbol{\omega}_{z_n}^{\top} \mathbf{g}_{n} / \kappa)} { \sum_{k=1}^K \exp (\boldsymbol{\omega}_{k}^{\top} \mathbf{g}_{n} / \kappa) } ,\label{eqn:gating} \end{align} where $\kappa$ is the temperature and $\boldsymbol{\omega}=\{\boldsymbol{\omega}_k\}_{k=1}^K$ are the gating prototypes. All prototypes and image embeddings are $\ell_2$-normalized in $\mathbb{R}^d$. Hence, the gating function performs a soft partitioning of the dataset based on the cosine similarity between the image embeddings and the gating prototypes. We can view it as a prototype-based discriminative clustering module, although we obtain the final cluster labels via posterior inference so as to incorporate the additional information in the experts. {\bf Experts.} In MiCE, every expert learns to solve the instance discrimination subtask arranged by the gating function. We define the experts in terms of the unnormalized model $\Phi(\cdot)$ following~\citet{wu2018unsupervised,he2019momentum}. Therefore, the probability of the image $\mathbf{x}_n$ being recognized as the $y_n$-th one by the $z_n$-th expert is formulated as follows: \begin{align} p(y_n | \mathbf{x}_n, z_n) &= \frac{\Phi(\mathbf{x}_n, y_n, z_n)}{Z(\mathbf{x}_n, z_n)}, \label{eqn:experts} \end{align} where $Z(\mathbf{x}_n, z_n)= \sum_{i=1}^N \Phi(\mathbf{x}_n, y_i, z_n)$ is a normalization constant that is often computationally intractable. Similar to MoCo, we have the student network $f_{\boldsymbol{\theta}}$ that maps the image $\mathbf{x}_n$ into $K$ continuous embeddings $\mathbf{f}_n = \{\mathbf{f}_{n,k}\}_{k=1}^K \in \mathbb{R}^{ K \times d}$. Likewise, the teacher network $f_{\boldsymbol{\theta}^{\prime}}$ outputs $\mathbf{v}_{y_n} = \{\mathbf{ v}_{y_n,k}\}_{k=1}^K \in \mathbb{R}^{ K \times d}$ given $\mathbf{x}_n$. To be specific, $\mathbf{f}_{n,z_n} \in \mathbb{R}^d$ and $\mathbf{v}_{ y_n,z_n} \in \mathbb{R}^d$ are the student embedding and the teacher embedding of image $\mathbf{x}_n$ under the $z_n$-th expert, respectively. We then parameterize the unnormalized model as: \begin{align} \Phi(\mathbf{x}_n, y_n, z_n) &= \exp\left( \mathbf{v}^{\top}_{y_n, z_n} \left( \mathbf{f}_{n, z_n} + \boldsymbol{\mu}_{z_n} \right) / \tau\right), \label{unnormalized} \end{align} where $\tau$ is the temperature and $\boldsymbol{\mu}=\{\boldsymbol{\mu}_k\}_{k=1}^K$ are the cluster prototypes of the experts. In Eq.~(\ref{unnormalized}), the first, instance-wise dot product exploits the \textit{instance-level} information to induce discriminative representations within each expert. The second, instance-prototype dot product incorporates the \textit{class-level} information into representation learning, encouraging a clear cluster structure around the prototypes. Overall, the learned embeddings therefore encode semantic structure while remaining discriminative enough to represent the instances. Eq.~(\ref{unnormalized}) is built upon MoCo with the EMA approach; in principle, many other ways to define the experts exist, which are left for future studies. Besides, the parameters $\boldsymbol{\theta}$ and $\boldsymbol{\psi}$ are partially shared; please refer to Appendix~\ref{sec:detail_exp} for more details on the architecture.
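The two components can be summarized in a short sketch. The code below (illustrative only; random tensors replace the encoder outputs $\mathbf{g}_n$, $\mathbf{f}_{n,k}$, and $\mathbf{v}_{y_n,k}$, and the variable names are ours) computes the gating distribution of Eq.~(\ref{eqn:gating}) and the unnormalized expert scores of Eq.~(\ref{unnormalized}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, K, d = 8, 3, 4
tau = kappa = 1.0

def l2n(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

g = l2n(rng.normal(size=(N, d)))      # gating embeddings g_n
omega = l2n(rng.normal(size=(K, d)))  # gating prototypes omega_k
f = l2n(rng.normal(size=(N, K, d)))   # student embeddings f_{n,k}
v = l2n(rng.normal(size=(N, K, d)))   # teacher embeddings v_{y_n,k}
mu = l2n(rng.normal(size=(K, d)))     # expert prototypes mu_k

# gating function: softmax over gating prototypes
gate = np.exp(g @ omega.T / kappa)
gate /= gate.sum(axis=1, keepdims=True)

# unnormalized expert model, split into the instance-level
# term v^T f and the class-level term v^T mu
inst = np.einsum('nkd,nkd->nk', v, f)
cls = np.einsum('nkd,kd->nk', v, mu)
phi = np.exp((inst + cls) / tau)      # Phi(x_n, y_n, k) for all n, k
\end{verbatim}
The decomposition $\Phi=\exp((\mathbf{v}^{\top}\mathbf{f}+\mathbf{v}^{\top}\boldsymbol{\mu})/\tau)$ used in the sketch makes the separation between the instance-level and class-level terms explicit.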
\section{Inference and Learning} We first discuss the evidence lower bound (ELBO), the single objective used in MiCE, in Sec.~\ref{sec.elbo}. Then, we present a scalable variant of the Expectation-Maximization (EM) algorithm~\citep{dempster1977maximum} to deal with the non-trivial inference and learning of MiCE in Sec.~\ref{sec.em}. Lastly, in Sec.~\ref{sec.two_stage}, we show that a naïve two-stage baseline, in which we run a spherical $k$-means algorithm on the embeddings learned by MoCo, is a special case of MiCE. \subsection{Evidence lower bound (ELBO)}\label{sec.elbo} The parameters to update include the parameters $\boldsymbol{\theta}, \boldsymbol{\psi}$ of the student and gating networks, respectively, and the expert prototypes $\boldsymbol{\mu}=\{\boldsymbol{\mu}_k\}_{k=1}^K$. The learning objective of MiCE is to maximize the evidence lower bound (ELBO) of the log conditional likelihood of the entire dataset. The ELBO for datapoint $n$ is given by: \begin{align} \log p(y_n | \mathbf{x}_n) &\ge \mathcal{L}(\boldsymbol{\theta}, \boldsymbol{\psi}, \boldsymbol{\mu}; \mathbf{x}_n, y_n) \nonumber \\ &:= \mathbb{E}_{q(z_n | \mathbf{x}_n, y_n)} [ \log p(y_n | \mathbf{x}_n, z_n; \boldsymbol{\theta}, \boldsymbol{\mu}) ] - D_\text{KL} ( q(z_n | \mathbf{x}_n, y_n) \| p(z_n | \mathbf{x}_n; \boldsymbol{\psi}) ) , \label{eqn:elbo} \end{align} where $q(z_n | \mathbf{x}_n, y_n)$ is a variational distribution used to infer the latent cluster label given the observed data. The first term in Eq.~(\ref{eqn:elbo}) encourages $q(z_n | \mathbf{x}_n, y_n)$ to assign high probability to the experts that are good at discriminating the input images. Intuitively, this can relieve the potential {\it degeneracy} issue~\citep{caron2018deep,ji2019invariant}, where all images are assigned to the same cluster. This is because a degenerate posterior puts the pressure of discriminating all images on a single expert, which may result in a looser ELBO. The second term in Eq.~(\ref{eqn:elbo}) is the Kullback–Leibler divergence between the variational distribution and the distribution defined by the gating function. With this term, the gating function is refined during training and takes the capabilities of the experts into account when partitioning data. Notably, MiCE does not rely on auxiliary loss or regularization terms as many prior methods~\citep{haeusser2018associative,shiran2019multi,wu2019deep,niu2020gatcluster} do. \subsection{EM algorithm}\label{sec.em} {\bf E-step.}~\label{Sec_estep} Inferring the posterior distribution of the latent variables given the observations is an essential step in applying MiCE to clustering. According to Bayes' theorem, the posterior distribution given the current estimate of the model parameters is: \begin{align} p(z_n| \mathbf{x}_n, y_n; \boldsymbol{\theta}, \boldsymbol{\psi}, \boldsymbol{\mu}) &= \frac{p(z_n | \mathbf{x}_n; \boldsymbol{\psi}) p(y_n| \mathbf{x}_n, z_n ; \boldsymbol{\theta}, \boldsymbol{\mu})} { \sum_{k=1}^K p(k| \mathbf{x}_n ;\boldsymbol{\psi}) p(y_n | \mathbf{x}_n, k; \boldsymbol{\theta}, \boldsymbol{\mu})}. \end{align} Compared with the gating function $p(z_n | \mathbf{x}_n; \boldsymbol{\psi})$, the posterior provides better estimates of the latent variables by incorporating the supplementary information of the experts. However, we cannot tractably compute the posterior distribution because of the normalization constants $Z(\mathbf{x}_n, z_n; \boldsymbol{\theta}, \boldsymbol{\mu})$.
In fact, given the image $\mathbf{x}_n$ and the cluster label $z_n$, $Z(\mathbf{x}_n, z_n; \boldsymbol{\theta}, \boldsymbol{\mu})$ sums over the entire dataset, which is prohibitive for large-scale image datasets. We present a simple and analytically tractable estimator to approximate them. Specifically, we maintain a queue $\mathbf{q} \in \mathbb{R}^{\nu \times K \times d}$ that stores $\nu$ previous outputs of the teacher network, following MoCo closely. Formally, the estimator $\hat{Z}(\cdot)$ is: \begin{align} &\hat{Z}(\mathbf{x}_n, z_n; \boldsymbol{\theta}, \boldsymbol{\mu}) = \exp\left( \mathbf{v}^{\top}_{y_n, z_n} \left( \mathbf{f}_{n, z_n} + \boldsymbol{\mu}_{z_n} \right) / \tau\right) + \sum_{i = 1}^{\nu} \exp\left( \mathbf{q}_{i, z_n}^{\top} \left( \mathbf{f}_{n, z_n} + \boldsymbol{\mu}_{z_n} \right) / \tau\right). \label{approx_z} \end{align} The estimator is biased, but its bias decreases as $\nu$ increases, and a sufficient number of embeddings can be obtained from the queue $\mathbf{q}$ efficiently\footnote{Even though the bias does not vanish due to the use of the queue, we find that the approximation works well empirically.}. With Eq.~(\ref{approx_z}), we approximate the posterior as:\begin{align} q(z_n | \mathbf{x}_n, y_n; \boldsymbol{\theta}, \boldsymbol{\psi}, \boldsymbol{\mu}) &= \frac{ p(z_n | \mathbf{x}_n; \boldsymbol{\psi}) {\Phi\left( \mathbf{x}_{n}, y_n, z_n; \boldsymbol{\theta}, \boldsymbol{\mu} \right)} / { \hat{Z}(\mathbf{x}_n, z_n; \boldsymbol{\theta}, \boldsymbol{\mu}) } } { \sum_{k=1}^K p(k | \mathbf{x}_n; \boldsymbol{\psi}) { \Phi\left( \mathbf{x}_{n}, y_n, k; \boldsymbol{\theta}, \boldsymbol{\mu}\right) } / { \hat{Z}(\mathbf{x}_n, k; \boldsymbol{\theta}, \boldsymbol{\mu})} } . \end{align} {\bf M-step.} We leverage stochastic gradient ascent to optimize the ELBO with respect to the network parameters $\boldsymbol{\theta}, \boldsymbol{\psi}$ and the expert prototypes $\boldsymbol{\mu}$. We approximate the normalization constants that appear in the ELBO analogously to the E-step, formulated as follows: \begin{align} \widetilde{\mathcal{L}}(\boldsymbol{\theta}, \boldsymbol{\psi}, \boldsymbol{\mu}; \mathbf{x}_n, y_n) &= \mathbb{E}_{q(z_n | \mathbf{x}_n, y_n; \boldsymbol{\theta}, \boldsymbol{\psi}, \boldsymbol{\mu}) } \left[ \log \frac{\Phi\left( \mathbf{x}_{n}, y_n, z_n; \boldsymbol{\theta}, \boldsymbol{\mu} \right)} { \hat{Z}(\mathbf{x}_n, z_n; \boldsymbol{\theta}, \boldsymbol{\mu}) } \right] \tag*{ } \\ &- D_\text{KL} ( q(z_n | \mathbf{x}_n, y_n; \boldsymbol{\theta}, \boldsymbol{\psi}, \boldsymbol{\mu}) \| p(z_n | \mathbf{x}_n; \boldsymbol{\psi}) ).\label{eqn:approximate_elbo} \end{align} Sampling a mini-batch $B$ of datapoints, we can construct an efficient stochastic estimator of the ELBO over the full dataset to learn $\boldsymbol{\theta}, \boldsymbol{\psi}$ and $\boldsymbol{\mu}$: \begin{align} {\mathcal{L}}(\boldsymbol{\theta}, \boldsymbol{\psi}, \boldsymbol{\mu}; \mathbf{X}, \mathbf{Y}) &\approx \frac{N}{|B|} \sum_{n \in B} \widetilde{\mathcal{L}}(\boldsymbol{\theta}, \boldsymbol{\psi}, \boldsymbol{\mu}; \mathbf{x}_n, y_n) . \end{align} The update of the prototypes requires additional care, as discussed in many clustering methods~\citep{sculley2010web-scale,xie2016unsupervised,yang2017towards,shiran2019multi}. Some of them carefully adjust the learning rate of each prototype separately~\citep{sculley2010web-scale,yang2017towards}, which can be very different from the one used for the network parameters.
Since evaluating different learning rate schemes on a validation dataset is often infeasible in unsupervised clustering, we employ alternative strategies in MiCE that are free from per-prototype learning rates. As for the expert prototypes, we observe that using only the stochastic update can lead to bad local optima. Therefore, at the end of each training epoch, we apply an additional analytical update derived from the ELBO as follows: \begin{align} \hat{\boldsymbol{\mu}}_k &= \sum_{n:\hat{z}_n=k} \mathbf{v}_{y_n, k}, \quad \boldsymbol{\mu}_k = \frac{\hat{\boldsymbol{\mu}}_k} { \|\hat{\boldsymbol{\mu}}_k\|_2} , \quad \forall k, \label{expert_update} \end{align} where $\forall n,$ $\hat{z}_n = \arg \max_k \text{ } q (k | \mathbf{x}_n, y_n; \boldsymbol{\theta}, \boldsymbol{\psi}, \boldsymbol{\mu})$ is the hard assignment of the cluster label. Please refer to Appendix~\ref{sec:derive_expert_update} for the detailed derivation. Intuitively, the analytical update in Eq.~(\ref{expert_update}) considers all the teacher embeddings assigned to the $k$-th cluster, instead of only the ones in a mini-batch, to avoid bad local optima. Besides, we fix the gating prototypes $\boldsymbol{\omega}$ to a set of pre-defined embeddings to stabilize the training process: randomly initialized prototypes may cause unnecessary difficulties in partitioning the dataset if some of them are crowded together. We address this potential issue by using the means of a Max-Mahalanobis distribution (MMD)~\citep{pang2018max}, which is a special case of the mixture of Gaussian distribution. The untrainable means of the MMD provide the optimal inter-cluster dispersion~\citep{pang2019rethinking}, which stabilizes the gating outputs. We provide the algorithm of the MMD in Appendix~\ref{appendix:mmd_algo} and a systematic ablation study in Tab.~\ref{tab.ablation} to investigate the effect of the updates on $\boldsymbol{\omega}$ and $\boldsymbol{\mu}$. Lastly, we provide a formal proof of the convergence of the EM algorithm in Appendix~\ref{appendix:em_convergence}. \subsection{Relations to a two-stage baseline}\label{sec.two_stage} The combination of a contrastive learning method and a clustering method is a natural baseline for MiCE. Our analysis reveals that MiCE is a general form of the two-stage baseline in which we learn the image embeddings with MoCo~\citep{he2019momentum} and subsequently run a spherical $k$-means algorithm~\citep{dhillon2001concept} to obtain the cluster labels. On one hand, in the extreme case where $\kappa \rightarrow \infty$ (Assumption A3), the student embeddings $\mathbf{f}_{n,k}$ and teacher embeddings $\mathbf{v}_{y_n,k}$ are identical for different $k$ (Assumption A4), and the class-level information in Eq.~(\ref{unnormalized}) is omitted (Assumption A5), we arrive at the same softmax classifier (Eq.~(\ref{eqn:contrasive_p_y_x})) and the InfoNCE loss (Eq.~(\ref{eqn:infoNCE})) used by MoCo as a special case of our method. On the other hand, of particular relevance to the analytical update of the expert prototypes (Eq.~(\ref{expert_update})) is the spherical $k$-means algorithm~\citep{dhillon2001concept}, which leverages the cosine similarity to cluster $\ell_2$-normalized data~\citep{hornik2012spherical}.
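For reference, one iteration of spherical $k$-means on $\ell_2$-normalized data can be sketched as follows (a minimal illustration with random inputs; the variable names are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, K, d = 100, 5, 16

def l2n(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

x = l2n(rng.normal(size=(N, d)))   # l2-normalized embeddings
c = l2n(rng.normal(size=(K, d)))   # l2-normalized centroids

# (1) assign each point to the centroid of maximal cosine similarity
assign = (x @ c.T).argmax(axis=1)
# (2) recompute each centroid as the l2-normalized sum of its members
for j in range(K):
    members = x[assign == j]
    if len(members) > 0:
        c[j] = l2n(members.sum(axis=0))
\end{verbatim}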
In addition to Assumptions A3 and A4, if we assume the unnormalized model is perfectly self-normalized (Assumption A2), using the hard assignment to obtain the cluster labels together with the analytical update turns out to be a single-iteration spherical $k$-means algorithm on the teacher embeddings. Please refer to Appendix~\ref{appendix_assumption} for a detailed derivation. The performance of the baseline is limited by the independence of the representation learning stage and the clustering stage. In contrast, MiCE provides a unified framework to align the representation learning and clustering objectives in a principled manner. See a comprehensive comparison in Tab.~\ref{tab:fullexp}. \begin{table}[] \centering \caption{Unsupervised clustering performance of different methods on four datasets. The first sector presents results from the literature; the later sectors display the results of the baseline and the proposed MiCE. In the last two sectors, bold indicates the highest value. Methods marked with\textdagger{} require post-processing by $k$-means to obtain the clusters since they do not learn the clustering function directly (we use spherical $k$-means for MoCo). We calculate the mean and standard deviation (Std.) of MiCE and MoCo based on five runs.} \label{tab:fullexp} \resizebox{1.0\columnwidth}!{ \setlength{\tabcolsep}{1.0mm} { \begin{tabular}{@{}lccc|ccc|ccc|ccc@{}} \toprule Datasets & \multicolumn{3}{c}{CIFAR-10} & \multicolumn{3}{c}{CIFAR-100} & \multicolumn{3}{c}{STL-10} & \multicolumn{3}{c}{ImageNet-Dog} \\ \midrule Methods/Metrics (\%) & NMI & ACC & ARI & NMI & ACC & ARI & NMI & ACC & ARI & NMI & ACC & ARI \\ \hline $k$-means~\citep{lloyd1982least} & 8.7 & 22.9 & 4.9 & 8.4 & 13.0 & 2.8 & 12.5 & 19.2 & 6.1 & 5.5 & 10.5 & 2.0 \\ SC~\citep{zelnikmanor2004self-tuning} & 10.3 & 24.7 & 8.5 & 9.0 & 13.6 & 2.2 & 9.8 & 15.9 & 4.8 & 3.8 & 11.1 & 1.3 \\ AE\textdagger~\citep{bengio2006greedy} & 23.9 & 31.4 & 16.9 & 10.0 & 16.5 & 4.8 & 25.0 & 30.3 & 16.1 & 10.4 & 18.5 & 7.3 \\ DAE\textdagger~\citep{vincent2010stacked} & 25.1 & 29.7 & 16.3 & 11.1 & 15.1 & 4.6 & 22.4 & 30.2 & 15.2 & 10.4 & 19.0 & 7.8 \\ SWWAE\textdagger~\citep{zhao2015stacked} & 23.3 & 28.4 & 16.4 & 10.3 & 14.7 & 3.9 & 19.6 & 27.0 & 13.6 & 9.4 & 15.9 & 7.6 \\ GAN\textdagger~\citep{radford2015unsupervised} & 26.5 & 31.5 & 17.6 & 12.0 & 15.3 & 4.5 & 21.0 & 29.8 & 13.9 & 12.1 & 17.4 & 7.8 \\ VAE\textdagger~\citep{kingma2013auto} & 24.5 & 29.1 & 16.7 & 10.8 & 15.2 & 4.0 & 20.0 & 28.2 & 14.6 & 10.7 & 17.9 & 7.9 \\ JULE~\citep{yang2016joint} & 19.2 & 27.2 & 13.8 & 10.3 & 13.7 & 3.3 & 18.2 & 27.7 & 16.4 & 5.4 & 13.8 & 2.8 \\ DEC~\citep{xie2016unsupervised} & 25.7 & 30.1 & 16.1 & 13.6 & 18.5 & 5.0 & 27.6 & 35.9 & 18.6 & 12.2 & 19.5 & 7.9 \\ DAC~\citep{chang2017deep} & 39.6 & 52.2 & 30.6 & 18.5 & 23.8 & 8.8 & 36.6 & 47.0 & 25.7 & 21.9 & 27.5 & 11.1 \\ DCCM~\citep{wu2019deep} & 49.6 & 62.3 & 40.8 & 28.5 & 32.7 & 17.3 & 37.6 & 48.2 & 26.2 & 32.1 & 38.3 & 18.2 \\ IIC~\citep{ji2019invariant} & - & 61.7 & - & - & 25.7 & - & - & 49.9 & - & - & - & - \\ DHOG~\citep{darlow2020dhog} & 58.5 & 66.6 & 49.2 & 25.8 & 26.1 & 11.8 & 41.3 & 48.3 & 27.2 & - & - & - \\ AttentionCluster~\citep{niu2020gatcluster} & 47.5 & 61.0 & 40.2 & 21.5 & 28.1 & 11.6 & 44.6 & 58.3 & 36.3 & 28.1 & 32.2 & 16.3 \\ MMDC~\citep{shiran2019multi} & 57.2 & 70.0 & - & 25.9 & 31.2 & - & 49.8 & 61.1 & - & - & - & - \\ PICA~\citep{huang2020deep} & 59.1 & 69.6 & 51.2 & 31.0 & 33.7 & 17.1 & 61.1 & 71.3 & 53.1 &
35.2 & 35.2 & 20.1 \\ \hline MoCo (Mean)\textdagger~\citep{he2019momentum} & 66.0 & 74.7 & 59.3 & 38.8 & 39.5 & 24.0 & 60.5 & 70.7 & 53.0 & 34.2 & 30.8 & 18.4 \\ MoCo (Std.)\textdagger~\citep{he2019momentum} & 0.6 & 1.7 & 0.9 & 0.2 & 0.1 & 0.4 & 0.9 & 2.0 & 0.8 & 0.3 & 1.7 & 0.9 \\ MiCE (Mean, \textbf{Ours}) & \textbf{73.5} & \textbf{83.4} & \textbf{69.5} & \textbf{43.0} & \textbf{42.2} & \textbf{27.7} & \textbf{61.3} & \textbf{72.0} & \textbf{53.2} & \textbf{39.4} & \textbf{39.0} & \textbf{24.7} \\ MiCE (Std., \textbf{Ours}) & 0.2 & 0.2 & 0.3 & 0.5 & 1.4 & 0.4 & 1.2 & 1.8 & 2.4 & 1.8 & 3.0 & 2.4 \\ \hline MoCo (Best)\textdagger~\citep{he2019momentum} & 66.9 & 77.6 & 60.8 & 39.0 & 39.7 & 24.2 & 61.5 & 72.8 & 52.4 & 34.7 & 33.8 & 19.7 \\ MiCE (Best, \textbf{Ours}) & \textbf{73.7} & \textbf{83.5} & \textbf{69.8} & \textbf{43.6} & \textbf{44.0} & \textbf{28.0} & \textbf{63.5} & \textbf{75.2} & \textbf{57.5} & \textbf{42.3} & \textbf{43.9} & \textbf{28.6} \\ \bottomrule \end{tabular}}} \end{table} \section{Experiments} \label{sec:experiments_start} \begin{wraptable}{r}{8.0cm} \caption{Statistics of the datasets. } \begin{tabular}{lccc} \toprule Dataset & Images & Clusters & Image Size \\ \hline CIFAR-10 & 60,000 & 10 & $32 \times 32 \times 3$ \\ CIFAR-100 & 60,000 & 20 & $32 \times 32 \times 3$ \\ STL-10 & 13,000 & 10 & $96 \times 96 \times 3$ \\ ImageNet-Dog & 19,500 & 15 & $96 \times 96 \times 3$ \\\bottomrule \end{tabular}\label{data_stat} \end{wraptable} In this section, we present experimental results to demonstrate the effectiveness of MiCE. We compare MiCE with extensive prior clustering methods and the contrastive learning based two-stage baseline on four widely adopted benchmarking datasets for clustering, including STL-10~\citep{coates2011an}, CIFAR-10~\citep{krizhevsky2009learning}, CIFAR-100~\citep{krizhevsky2009learning}, and ImageNet-Dog~\citep{chang2017deep}. The experiment settings follow the literature closely~\citep{chang2017deep,wu2019deep,ji2019invariant,shiran2019multi,darlow2020dhog} and the numbers of the clusters are known in advance. The statistics of the datasets are summarized in Tab.~\ref{data_stat}. We adopt three common metrics to evaluate the clustering performance, namely normalized mutual information (NMI), cluster accuracy (ACC), and adjusted rand index (ARI). All the metrics are presented in percentage (\%). We use a 34-layer ResNet (ResNet-34)~\citep{he2016deep} as the backbone for MiCE and MoCo following the recent methods~\citep{ji2019invariant,shiran2019multi} for fair comparisons. We set both temperatures $\tau$ and $\kappa$ as 1.0, and the batch size as 256. The datasets, network, hyper-parameters, and training settings are discussed detailedly in Appendix~\ref{sec:detail_exp}. \subsection{Main clustering results} {\bf Comparison with existing deep clustering methods.} As shown in Tab.~\ref{tab:fullexp}, MiCE outperforms the previous clustering approaches by a significant margin on all datasets. The comparison highlights the importance of exploring the discriminative representations and the semantic structures of the dataset. {\bf Comparison with the two-stage baseline.} Compared to the straightforward combination of MoCo and spherical $k$-means, MiCE explores the semantic structures of the dataset that improve the clustering performance. From Tab.~\ref{tab:fullexp}, we can see that MiCE consistently outperforms the baseline in terms of the mean performance, which agrees with the analysis in Sec.~\ref{sec.two_stage}. 
Specifically, regarding ACC, we improve upon the strong baseline by 8.7\%, 2.7\%, and 8.2\% on CIFAR-10, CIFAR-100, and ImageNet-Dog, respectively. Taking the measurement variance into consideration, our performance overlaps with MoCo's only on STL-10. We conjecture that the small data size may limit the performance there, as each expert learns from a subset of the data. Nevertheless, the comparison demonstrates the significance of aligning the representation learning and clustering objectives in a unified framework, and we believe that MiCE points out a promising direction for future studies in clustering. \begin{figure}[] \begin{center} \subfloat[Epoch 1 ($12.4\%$)]{ \includegraphics[width=0.32\columnwidth]{tsne_comi_gating_epoch0_200.png} } \subfloat[Epoch 500 ($70.2\%$)]{ \includegraphics[width=0.32\columnwidth]{tsne_comi_gating_epoch500_200.png} } \subfloat[Epoch 1000 ($83.5\%$)]{ \includegraphics[width=0.32\columnwidth]{tsne_comi_gating_epoch1000_200.png} } \end{center} \caption{Visualization of the image embeddings of MiCE on CIFAR-10 with t-SNE. Different colors denote the different ground-truth class labels (unknown to the model). The cluster ACC is shown in parentheses. MiCE learns a clear cluster structure that matches the latent semantics well. Best viewed in color. } \label{fig:tSNE_mice_true_label} \end{figure} {\bf Visualization of the learned embeddings.} We visualize the image embeddings produced by the gating network using t-SNE~\citep{maaten2008visualizing} in Fig.~\ref{fig:tSNE_mice_true_label}. Different colors denote different ground-truth class labels. At the beginning, the embeddings from distinct classes are indistinguishable. MiCE progressively refines its estimates and ends up with embeddings that show a clear cluster structure. The learned clusters align well with the ground-truth semantics, which verifies the effectiveness of our method. Additional visualizations and comparisons with MoCo are in Appendix~\ref{appendix:additional_exp}. \subsection{Ablation Studies} {\bf Simplified model (Tab.~\ref{tab.ablation} (left)).} We investigate the gating function and the unnormalized model to understand the contributions of the different components. Using a simpler latent variable model often deteriorates the performance. (1) With a uniform prior, the experts would take extra effort to become specialized in a set of images with shared semantics. (2 \& 3) The teacher embedding $\mathbf{v}_{y_n}$ is pushed to be close to all expert prototypes at the same time. It may be difficult for the simplified expert to encode the latent semantics while being discriminative. (4) The performance drop shows that the class-level information is essential for the image embeddings to capture the semantic structures of the dataset, even though the learned representations are still discriminative between instances. Without this term, the learned embeddings are mixed up and scattered over the embedding space without a clear cluster structure. \begin{table}[t] \centering \caption{Ablations of MiCE on the probabilistic model (left) and different ways of learning the gating and expert prototypes (right). Each row shows the ACC (\%) on CIFAR-100 when applying the single change to MiCE. The assumptions are detailed in Appendix~\ref{appendix_assumption}.} { \begin{subtable}[h]{0.47\textwidth} \centering \resizebox{1.0\columnwidth}!
{ \begin{tabular}{@{}lc@{}} \toprule \multicolumn{1}{c}{} & CIFAR-100 \\ \midrule (1) A3 ( $p( z_n | \mathbf{x}_n) = 1/K$) & 40.7 \\ (2) A4 (Single output layer) & 39.3 \\ (3) A3 + A4 & 33.6 \\ (4) A5 (Discard class-level information) & 17.0 \\ \midrule \multicolumn{1}{c}{MiCE (Ours)} & 42.2 \\ \bottomrule \end{tabular} } \end{subtable}}% \quad \begin{subtable}[h]{0.49\textwidth} \centering \resizebox{1.0\columnwidth}! { \begin{tabular}{@{}lc@{}} \toprule \multicolumn{1}{c}{} & CIFAR-100 \\ \midrule (a) No analytical update on $\boldsymbol{\mu}$ in Eq.~(\ref{expert_update}) & 21.3 \\ (b) No gradient update on $\boldsymbol{\mu}$ & 41.0 \\ (c) Initialize $\boldsymbol{\omega}$ with a uniform distribution & 41.0 \\ (d) Optimize $\boldsymbol{\omega}$ with gradient & 42.0 \\ \midrule \multicolumn{1}{c}{MiCE (Ours)} & 42.2 \\ \bottomrule \end{tabular} } \end{subtable} \label{tab.ablation} \end{table} {\bf Prototype update rules (Tab.~\ref{tab.ablation} (right)).} We also conduct ablation studies to gain insights into the different ways of handling the gating and expert prototypes. We see that (a) without Eq.~(\ref{expert_update}), we may get stuck in bad local optima. As mentioned in Sec.~\ref{sec.em}, a possible reason is that we are using the same learning rate for all network parameters and prototypes~\citep{sculley2010web-scale,yang2017towards}, but tuning separate learning rates for each prototype is impractical for unsupervised clustering. Hence, we derive the analytical update to tackle this issue. As for (b), it shows that the current gradient update rule avoids a potential inconsistency between the expert prototypes and the teacher embeddings during mini-batch training. Lastly, as discussed in Sec.~\ref{sec.em}, compared to using (c) uniformly initialized gating prototypes projected onto the unit sphere, utilizing the means of the MMD slightly improves performance. This also bypasses the potential learning rate issue that may appear in (d). \section{Conclusion} We present a principled probabilistic clustering method that conjoins the benefits of the discriminative representations learned by contrastive learning and the semantic structures introduced by the latent mixture model in a unified framework. Following a divide-and-conquer principle, MiCE comprises an input-dependent gating function that distributes subtasks to one or a few specialized experts, and $K$ experts that discriminate their respective subsets of images based on instance-level and class-level information. To address the challenging inference and learning problems, we present a scalable variant of the Expectation-Maximization (EM) algorithm, which maximizes the ELBO and is free from any other loss or regularization terms. Moreover, we show that MoCo with spherical $k$-means, one of the two-stage baselines, is a special case of MiCE under certain assumptions. Empirically, MiCE outperforms extensive prior methods and the strong two-stage baseline by a significant margin on several benchmark datasets. For future work, one may explore other pretext tasks that potentially fit the clustering task besides instance discrimination. It would also be interesting and important future work to include datasets with a larger number of clusters, such as ImageNet. Besides, being able to obtain semantically meaningful clusters could be beneficial in weakly supervised settings~\citep{zhou2018a} where quality labels are scarce.
\section{Acknowledgements} The authors would like to thank Tianyu Pang and Zihao Wang for the discussions and the reviewers for the valuable suggestions. This work was supported by NSFC Projects (Nos. 62061136001, 61620106010, 62076145), Beijing NSF Project (No. JQ19016), Beijing Academy of Artificial Intelligence (BAAI), the Tsinghua-Huawei Joint Research Program, a grant from the Tsinghua Institute for Guo Qiang, the Tiangong Institute for Intelligent Computing, and the NVIDIA NVAIL Program with GPU/DGX Acceleration. C. Li was supported by a fellowship of the China Postdoctoral Science Foundation (2020M680572), a fellowship of the China National Postdoctoral Program for Innovative Talents (BX20190172), and the Shuimu Tsinghua Scholar Program.
{ "timestamp": "2021-05-06T02:11:38", "yymm": "2105", "arxiv_id": "2105.01899", "language": "en", "url": "https://arxiv.org/abs/2105.01899" }
\section{Introduction} High-speed autonomous racing presents unique challenges that have gained recent attention after significant progress in urban autonomous driving. To the best of our knowledge, no prior work has investigated a deep-learning-based end-to-end solution to autonomous racing from a data-centric perspective. Since this approach is new, we have to identify and address the challenges associated with data collection, Deep Neural Network (DNN) architectures, and a suitable training methodology for this specific problem. With deep learning, it is typically the case that once we have a good dataset for a problem, multiple DNNs can achieve good performance. We call the approach of exploring different architectures to solve a problem a model-centric approach. For example, an image recognition problem with the ILSVRC dataset \cite{ILSVRC15} can be solved with DNNs such as AlexNet \cite{krizhevsky2012imagenet}, ResNet \cite{he2016deep}, and GoogLeNet \cite{szegedy2015going}. In addition, these architectures can be successfully applied to other similar learning problems \cite{lin2014microsoft}. Since we can adopt DNNs from similar learning problems when solving a new one, a \textbf{data-centric approach that investigates the properties of a successful data collection strategy is necessary in the initial stages of tackling the new problem}. The urban autonomous driving problem is similar to autonomous racing, with one major difference related to the speed of operation; therefore, we can adopt DNNs from urban driving and explore them further once we address the dataset challenges for autonomous racing, with particular emphasis on the maximum speed encountered during both training and inference. First, with a data-centric approach, we explore the data with a fixed DNN similar to Nvidia's self-driving-car neural network architecture \cite{bojarski2016end}. There are two reasons for selecting Nvidia's architecture: (1) end-to-end learning with this architecture has been demonstrated to be successful, and (2) its simplified nature offers the potential for fast response times, which is necessary for high-speed driving. \textbf{We investigate successful data collection strategies for autonomous racing with a particular emphasis on the relationship between the amount of data collected and the maximum speed encountered during training for a steering prediction learning task}. Results are validated on two different tracks using the Unity simulation platform \cite{juliani2018unity}. On Track 1, we show results at constant speeds of 80 mph, 50 mph, and 30 mph, and on Track 2, we show results at constant speeds of 60 mph, 50 mph, and 30 mph. For speeds beyond 80 mph on Track 1 and 60 mph on Track 2, the throttle has to be varied, requiring a throttle-dependent steering value. To investigate optimal data collection strategies in operating regimes with dependent steering and throttle values, we first need to solve the problem of joint steering and throttle prediction. Our second contribution solves this problem as outlined below. Second, \textbf{unlike previous literature, we show in this work that throttle prediction can be carried out: (1) via convolutional layer weights learned from steering training, without retraining them during throttle learning, and (2)
without using any feedback links \cite{Pan-RSS-18, 8460487} or recurrence (LSTM) \cite{8546189, 8099859, hecker2018end} in the deep neural network architecture}. This is achieved with a carefully designed training methodology explained in detail in Section V. We show that in order to jointly learn steering and throttle, we can use the same convolutional features learned during steering training and only train subsequent fully connected layers dedicated to throttle inference. To validate the results, we demonstrate this on two different tracks. Video links are given with the results in Section V. \section{Experimental Setup} \subsection{Simulator} A custom simulator was built by importing the Udacity self-driving-car-sim\footnote{https://github.com/udacity/self-driving-car-sim} into the Unity platform \cite{juliani2018unity} and modifying it to have a variable maximum speed as well as the ability to build new tracks. It uses the same car model provided by the Udacity self-driving-car-sim. We built two new tracks: (1) Track 1, which is similar to the Indy Motor Speedway (IMS), and (2) Track 2, which is custom-designed to have sharp right and left turns. Track 1 and Track 2 are $\approx1.6$ miles and $\approx2.3$ miles long, respectively. \begin{figure}[ht] \centering \begin{minipage}[b]{0.45\linewidth} \includegraphics[scale=0.16]{track1_tilt_color.PNG} \caption{Track 1: Akin to IMS} \label{fig:minipage1} \end{minipage} \quad \begin{minipage}[b]{0.45\linewidth} \includegraphics[scale=0.17]{track2_color.PNG} \caption{Track 2: Custom track with sharp right and left turns} \label{fig:minipage2} \end{minipage} \end{figure} \begin{figure}[ht] \centering \begin{minipage}[b]{0.45\linewidth} \includegraphics[scale=0.145]{track1_simulator.PNG} \caption{Track 1: Simulator and car} \label{fig:minipage3} \end{minipage} \quad \begin{minipage}[b]{0.45\linewidth} \includegraphics[scale=0.14]{track2_simulator.PNG} \caption{Track 2: Simulator and car} \label{fig:minipage4} \end{minipage} \end{figure} The car has three cameras placed on/near the dashboard, looking left, right, and center/front. In training mode, the simulator can record images from all three cameras, along with steering, throttle, and speed data, at a frequency of 10 Hz. \subsection{Learning Methodology} Direct Policy Learning \cite{ross2011reduction} \cite{ross2010efficient}, a type of Imitation Learning, is used to train all the neural network models in this work. Direct Policy Learning is a combination of Behavioral Cloning and Data Aggregation. Behavioral Cloning is supervised learning with labels provided by an expert on the problem. Data Aggregation \cite{ross2011reduction} is done by having an expert driver in the feedback loop to improve the model's performance iteratively. Figure 5 illustrates the training methodology with an expert in the loop. The expert helps to remove undesirable driving patterns by correcting/modifying old training data and also provides new training data to improve driving patterns based on significant observations during testing. \begin{figure}[h] \centering \includegraphics[scale=0.6]{training_methodology.PNG} \caption{Direct Policy Learning} \label{fig:my_label} \end{figure} \section{Data Collection Guidelines: A Maximum Speed Perspective} The key element of racing is driving at high speed on an optimal path around the racing track.
From our experiments, we observe that merely providing driving data on the optimal path is not sufficient for good autonomous driving performance, as the car would typically crash when slightly deviating from the training path. Hence, we break the racing data collection problem into two stages: first, learning the ability to drive at high speeds in different situations on the track, and second, learning to drive on an optimal path. We address the first problem in this work; the second problem of learning to drive on an optimal path will be explored in future work. \subsection{Driving Data Collection Strategy} The car on a racing track can run into a wide range of situations; therefore, for it to generalize well on the track, along with the amount of driving data, we observe that \textbf{diversity in data is key to learning}. More specifically, we followed the guidelines below to collect diverse training data: \begin{itemize} \item Driving on both right and left lanes. \item Changing lanes at different points of the track. \item Driving closer to the edges and coming back towards the center of the track. We observed that this is especially needed for tracks with sharp turns. \end{itemize} Table I lists the number of laps collected on each track at different constant speeds. We note that the number of laps provided here can be varied by roughly $\pm3$ laps to produce similar results. \begin{table}[h] \centering \caption{Data in Number of laps} \begin{tabular}{|l|ccc|} \hline \diagbox{Track}{Speed} & 80 mph / 60 mph & 50 mph & 30 mph \\ \hline Track 1 & 65 (80 mph) & 20 & 15\\ Track 2 & 50 (60 mph) & 25 & 15\\ \hline \end{tabular} \label{tab:lapsdata} \end{table} During the data collection process, the simulator records images with a width and height of 320 x 160 from all three cameras, steering values ranging from -1 to +1, throttle values between 0 and 1, and speed data. During training and testing, the input image is cropped and resized to a width and height of 160 x 80. We note that the height is cropped to remove the upper part of the image corresponding to the background sky. Data augmentation is used during training by introducing vertical and horizontal translations of the images, as well as horizontal flipping with the sign of the corresponding steering value inverted (see the training-step sketch in Section III-C). \subsection{Deep Neural Network} The deep neural network architecture from Nvidia \cite{bojarski2016end} was used for learning the steering angle. A schematic of the architecture is shown in Figure 6. \begin{figure*}[h] \centering \includegraphics[width=14cm, height=3.5cm]{steering_architecture_3d_fulldetails_fullwords.png} \caption{Deep neural network used for learning steering and investigating successful data collection properties. The same network is used for the steering branch when jointly learning steering and throttle.} \label{fig:my_label} \end{figure*} \subsection{Training and Testing} Racing is about optimizing performance on a given specific track \cite{milliken1995race}. Our goal is to learn optimal control on a given specific track using a neural network model that generalizes to different and challenging situations during high-speed driving. To achieve this, we train the model with large and diverse data while testing on the same track. Generalizability of high-speed driving skills to different tracks will be explored in future work. Training in this problem is an iterative process, as described above in Section II-B.
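As a concrete illustration of the supervised training stage inside each iteration, the following is a minimal PyTorch-style sketch written by us for exposition, not taken from our implementation. It uses the MSE loss and Adam optimizer with a learning rate of 0.0001 detailed next, together with the flip-based augmentation from Section III-A; the model object, tensor shapes, and all names are illustrative assumptions:
\begin{verbatim}
import torch
import torch.nn as nn

def train_steering(model, images, steering, steps=1000, batch=100):
    # images: (N, 3, 80, 160) float tensor; steering: (N,) in [-1, 1].
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        idx = torch.randperm(len(images))[:batch]
        x, y = images[idx], steering[idx]        # advanced indexing copies
        flip = torch.rand(len(x)) < 0.5          # augment half the batch
        x[flip] = torch.flip(x[flip], dims=[3])  # mirror left-right
        y[flip] = -y[flip]                       # invert steering sign
        opt.zero_grad()
        loss = loss_fn(model(x).squeeze(1), y)   # model outputs (batch, 1)
        loss.backward()
        opt.step()
    return model
\end{verbatim}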
We first collect data corresponding to a pre-specified number of laps, then train the model using the Mean Squared Error (MSE) loss and the Adaptive Moment Estimation (Adam) optimizer \cite{kingma2014adam} with a learning rate of 0.0001. We then evaluate the model's performance on the same track, testing specifically for driving stability. If unstable driving is observed (e.g., the car crashes), we collect more training laps or correct the old data by removing undesired driving patterns, and retrain the network. This process continues until the desired maximum speed is maintained in a stable manner. The appropriate number of laps to use for data collection, shown in Table I, is found using this iterative process. The amount of data/number of laps required for learning depends on the track and the car speed at which training data is collected. Figure 6 shows the deep neural network architecture used for learning steering control. \textbf{We observe that as the number of laps used for training increases, the number of training epochs has to be increased}. For fewer than 20 laps, the model was trained for 1000 epochs. For more than 20 laps, the model was trained for 2000-3000 epochs. A batch size of 100 was used. The number of epochs during training was found empirically and typically depends on the quality of the driving data. We also observe that an important factor affecting the quality of the collected data is the availability of smooth turns. \subsection{Stability and Quality Criteria} A vision-based driving model's prediction accuracy/error is only weakly correlated with actual driving performance \cite{codevilla2018offline}. Hence, in order to evaluate a model's ability to drive at high speeds in different and challenging situations on a track, we test the model by running it for five consecutive laps. During different lap runs, the model has a high chance of running into different and difficult situations that have not been seen before during training. If the model can drive the full five laps with zero collisions, then we conclude that it demonstrates generalizability of driving in different situations on the given track. Having successfully completed five laps, we measure the average lap time over the five laps and further record whether the car goes beyond the track edge. These two criteria serve as our measures of driving quality. \begin{table}[h] \begin{center} \begin{tabular}{| l | l |} \hline Criterion 1 & Successful completion of 5 laps \\ \hline Criterion 2 & Average Lap Time (ALT) \\ \hline Criterion 3 & Going beyond the track edge line \\ \hline \end{tabular} \caption{Considered Stability and Quality Criteria} \label{tab:criteria} \end{center} \end{table} \subsection{Results} Track 1 and Track 2 results are shown in Figs. 7 and 8, respectively. Empirically, the results demonstrate that the amount of training data governs the maximum speed at which the model is able to drive in a stable manner. \begin{figure}[H] \centering \includegraphics[scale=0.6]{Track1_data_speed.png} \caption{Track 1 Results} \label{fig:my_label} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.6]{Track2_data_speed.png} \caption{Track 2 Results} \label{fig:my_label} \end{figure} Insight 1: There is a \textit{direct relationship between the amount of data and the maximum speed learned by the model}. As we add training data, the model is able to learn higher and higher speeds until it approaches the constant training speed.
This relationship helps to address the data-centric question of machine learning in this problem, which is: \textbf{When will adding more data help improve the model's performance, and when should we switch to a model-centric view and explore different architectures to improve performance?} In Tables III and IV, \xmark indicates that the model goes beyond the edge line and \checkmark indicates otherwise. We note that the maximum speed indicated in the tables can vary by $\pm1.5$ mph during a single lap run. The average lap times provided in the tables can also vary by $\pm0.5$ seconds. Tables III and IV show that as we increase the number of training laps, the maximum stable speed increases and, consequently, the lap time of the model improves. Further, we observe that the lap time can improve slightly even in cases where the added training data does not result in a higher maximum speed. The lap time can thus be used as one measure of driving quality when two models are both able to successfully complete five laps. Another measure is our \emph{Edge Check}. If the quality of driving is poor, the car oscillates on the track, touching the track edge. This can be seen with 15 laps of data on Track 1, where the model is able to achieve a speed of 50 mph but has poor driving quality. The driving quality is improved through the iterative learning methodology described in Section II-B. We note that increasing the size of the training set by collecting more laps does not always lead to an improvement in performance, as the quality of the added training data has a significant impact. Due to undesired driving patterns in newly collected training data, we observed on several occasions that adding more data degrades the performance of the model. Having an expert in the feedback loop is hence needed to remove/correct the undesired driving patterns before retraining the model. \begin{table}[h] \centering \caption{Track 1, 80 mph training results} \begin{tabular}{|p{0.9in}|p{0.4in}|p{0.3in}|p{0.4in}|p{0.3in}|} \hline \diagbox{Laps}{Metric} & Speed (mph) & 5 laps & ALT (sec) & Edge \\ \hline 10 & 30 & \checkmark & 195.8 & \checkmark \\ \hline 15 & 50 & \checkmark & 117.7 & \xmark \\ \hline 20 & 50 & \checkmark & 116.5 & \checkmark \\ \hline 35 & 50 & \checkmark & 116.1 & \checkmark \\ \hline 45 & 60 & \checkmark & 99.4 & \xmark \\ \hline 55 & 60 & \checkmark & 98.37 & \checkmark \\ \hline 65 & 70 & \checkmark & 84.19 & \checkmark \\ \hline \end{tabular} \label{tab:track1results} \end{table} \begin{table}[h] \centering \caption{Track 2, 60 mph training results} \begin{tabular}{|p{0.9in}|p{0.4in}|p{0.3in}|p{0.4in}|p{0.3in}|} \hline \diagbox{Laps}{Metric} & Speed (mph) & 5 laps & ALT (sec) & Edge \\ \hline 5 & 10 & \checkmark & 839.5 & \checkmark \\ \hline 10 & 30 & \checkmark & 278.2 & \checkmark \\ \hline 15 & 40 & \checkmark & 211.4 & \xmark \\ \hline 20 & 40 & \checkmark & 209.9 & \checkmark \\ \hline 25 & 50 & \checkmark & 158.1 & \xmark \\ \hline 40 & 50 & \checkmark & 158.8 & \checkmark \\ \hline 50 & 60 & \checkmark & 142.4 & \checkmark \\ \hline \end{tabular} \label{tab:track2results} \end{table} Insight 2: \textbf{Training data with higher maximum speed enables stable driving at lower speeds with less data}. We note that the frequency of data collection is fixed at 10 Hz, so if 10 laps of data are collected at 30 mph, 50 mph, and 80 mph, then the number of data points collected at each of these speeds is different, as more images will be available at the lower speeds.
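To make this concrete, at a fixed recording rate of $r = 10$ Hz with three cameras, the number of frames gathered over $n$ laps of a track of length $L$ driven at a constant speed $v$ is approximately
\begin{displaymath}
N_{\text{frames}} \approx 3\, r\, n\, \frac{L}{v},
\end{displaymath}
where the factor $3$ accounts for the three cameras; hence, for a fixed number of laps, the amount of data scales inversely with the training speed.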
For example, the amount of data corresponding to 10 laps at 50 mph is the same as that corresponding to 16 laps at 80 mph. In Figs. 7 and 8, we observe a dominant pattern, which fails to hold in only a few exceptions, of being able to reach the lower speeds with less data when training at higher speeds. This pattern can be extreme in some cases. For example, in Fig. 7, we see that 30 mph stable driving is achieved with 10 laps of training data at 80 mph, whereas the same speed is only achieved when training at 30 mph with 15 laps of training data; note that the amount of data corresponding to 15 laps at 30 mph is equivalent to that of 40 laps at 80 mph. We are investigating these observations further with more experiments at various speeds and on different tracks for deeper insights and understanding. \section{Joint Steering and Throttle Prediction} The ability to make correct steering and throttle predictions without feedback links decreases the response time of the system, which is crucial for autonomous racing. The architecture presented here is an attempt towards achieving that goal. Here, \textbf{we demonstrate that throttle can be learned using the convolutional layers from the trained steering model without retraining them}. Only the subsequent fully connected layers dedicated to throttle prediction have to be trained. Moreover, unlike the work in \cite{8546189}, we show that the throttle can be learned without any LSTM layers, and unlike the work in \cite{Pan-RSS-18}, without any speed feedback as well as without any other feedback links (e.g., as in \cite{8460487}). \subsection{Training Methodology} We divide the training methodology for learning steering and throttle into two parts: (1) training for steering prediction, and then (2) training for throttle prediction. An overview of the procedure is shown in Algorithm 1. \begin{figure}[h] \centering \includegraphics[scale=0.35]{steering_throttle_separate_finalabcaption.PNG} \caption{Two separate models for training steering and throttle} \label{fig:my_label} \end{figure} \subsubsection{Training for Steering Prediction} The model shown in Fig. 9(a) is used to learn steering. This network is the same as the one shown in Fig. 6. We train this model and improve its performance iteratively using the Direct Policy Learning methodology described in Section II-B. \subsubsection{Training for Throttle Prediction} For throttle training, the trained convolutional layers from the steering model are reused. We freeze the learning of these convolutional layers, which means that we use the same convolutional features as in the steering model to learn throttle. Different fully connected layers are used for learning throttle; they are the only layers trained during throttle learning. Fig. 9(b) represents the model used for throttle learning.
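A minimal PyTorch-style sketch of this weight transfer and freezing step is given below. This is our illustration, under the assumption that both models expose their convolutional backbone and fully connected head as submodules named as in Fig. 9; the module names are placeholders:
\begin{verbatim}
import torch

def prepare_throttle_training(steering_model, throttle_model):
    # Copy the trained convolutional weights from the steering model.
    throttle_model.conv_layers.load_state_dict(
        steering_model.conv_layers.state_dict())
    # Freeze the convolutional features: no gradients, no updates.
    for p in throttle_model.conv_layers.parameters():
        p.requires_grad = False
    # The optimizer only sees the throttle fully connected layers.
    return torch.optim.Adam(throttle_model.throttle_fc.parameters(),
                            lr=1e-4)
\end{verbatim}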
\begin{algorithm}[h] \SetAlgoLined \textbf{Result: Steering and Throttle Predictive Model} \While{Criteria 1, 2, and 3 are not satisfied}{ Get training data OR add more data SteeringModel $\leftarrow$ NewModel() \For{\texttt{epoch $\leq$ TotalEpochs}}{ trainSteering() } saveTrainedModel() ThrottleModel $\leftarrow$ NewModel() \textbf{ThrottleConvLayers $\leftarrow$ SteeringConvLayers} \For{\texttt{epoch $\leq$ TotalEpochs}}{ trainThrottle() } saveTrainedModel() MergeBothModels() EvaluatePerformanceOnTrack() } \caption{Training procedure for learning steering and throttle} \label{alg3:train} \end{algorithm} \subsubsection{Equivalent Full Model} We note that when testing the model's performance, the equivalent final architecture conceptually combines the two networks shown in Figs. 9(a) and 9(b) into one, as shown in Fig. 10. Since the convolutional layer weights are the same, these convolutional layers become the common backbone of the full network. However, we note that training this architecture takes place in two stages, as outlined above. \begin{figure}[h] \centering \includegraphics[scale=0.4]{FullNNArchitecture_verticle_final.PNG} \caption{Merged deep neural network architecture for joint prediction of steering and throttle} \label{fig:my_label} \end{figure} \subsection{Results} We show results for Track 2, which has sharp turns, to demonstrate the model's ability to control steering and throttle in challenging situations. An additional Lake Track, which is the default track provided in the Udacity simulator, is also used to demonstrate that this neural network architecture and training methodology can be successfully applied to different tracks. At the time of writing this draft, the training and testing process was still in progress for the IMS-like Track 1, and those results will be included in future work. \subsubsection{Track 2} We provide a video demonstration on Track 2\footnote{Available at https://youtu.be/On0RhWkMLW4}. The video shows the ability of the deep neural network shown in Fig. 10 to successfully control steering and throttle. In particular, the model learns to reduce the throttle in turns to achieve the needed stability and to reach full throttle on straight stretches, achieving a maximum speed of 90 mph. \subsubsection{Lake Track} We also provide a video demonstration on the Lake Track\footnote{Available at https://youtu.be/ChaoakkGMgs}. \section{Conclusion} In this work, we explored significant challenges related to applying deep-learning-based end-to-end learning solutions to autonomous racing. First, from a data-centric perspective, we explored properties of successful data collection strategies with an emphasis on the maximum speed reachable via stable driving. We presented the details of the data collection process while highlighting the importance of the diversity and amount of data. Important insights were drawn, specifically capturing the relationship between the amount of driving data, in terms of the number of training laps, and the model's performance in terms of the maximum speed achievable via stable driving. We used two different tracks for this first study, one similar to the Indy Motor Speedway (IMS) and another with sharp turns. Our second contribution outlined how to jointly learn steering and throttle without feedback links or recurrence (LSTM) in the deep neural network architecture.
We demonstrated the success of the proposed deep learning algorithm for joint steering and throttle prediction on the track with sharp turns as well as on the Udacity simulator's default Lake Track. The relationship between the maximum training speed and the model's steering and throttle prediction performance will be analyzed further in future work. \section{Acknowledgment} We would like to extend our appreciation to Aref F. Malek, Alec F. Pannunzio, Mikail H. Khan, Abhimanyu Agarwal, and Tommy Wygal at Purdue University for their contributions towards building the custom simulator and tracks. We would also like to thank all members of the Purdue-USMA Indy Autonomous Challenge (IAC) Black \& Gold Autonomous Racing team and the Autonomous Motorsports Purdue (AMP) student club for providing the environment and support that enabled this work, with special thanks to Michael Saxon at the Storm King Group, J. Eric Dietz and Joseph Pekny at Purdue University, Christopher Korpela at the USMA, and Robert Megennis at Andretti Autosport. \bibliographystyle{plain}
{ "timestamp": "2021-05-06T02:06:39", "yymm": "2105", "arxiv_id": "2105.01799", "language": "en", "url": "https://arxiv.org/abs/2105.01799" }
\section{Conclusions} \label{sec_conclusions} We proposed a discontinuous least squares finite element method for the Helmholtz equation. We designed an $L^2$ norm least squares functional with a weak imposition of the continuity across interior faces, and minimized it over the discontinuous approximation space $\bmr{V}_h^m \times \bmr{\Sigma}_h^m$. We established $k$-explicit error estimates for our method. The convergence rates were shown to be optimal under the energy norm and suboptimal under the $L^2$ norm for a fixed wavenumber $k$. In particular, it was proved that our method is stable without any constraint on the mesh size. Numerical results in both two and three dimensions verified the accuracy of our method. \section*{Acknowledgements} This research was supported by the National Natural Science Foundation of China (No. 11971041). \section{Introduction} \label{sec_introduction} The Helmholtz equation arises in many physical applications involving time-harmonic wave propagation phenomena, such as linear acoustics, elastodynamics and electrodynamics \cite{Thompson1995Galerkin, Hu2020novel, Farhat2003discontinuous, Nguyen2015hybridizable}. These important applications motivate the construction of numerical methods for the Helmholtz equation \cite{Nguyen2015hybridizable}. The Helmholtz operator is indefinite for large wavenumbers, which brings difficulties in developing efficient numerical schemes and establishing stability estimates \cite{Feng2009discontinuous}. It is well known that the quality of discrete numerical solutions to the Helmholtz equation depends dramatically on the wavenumber $k$, a phenomenon known as the pollution effect \cite{Babuska2000pollution}. In spite of such difficulties, there has been plenty of research on numerical methods for this problem, such as finite element methods, spectral methods and discontinuous Galerkin methods. Finite element methods are widely used for solving the Helmholtz equation. A common choice is to use standard conforming elements to approximate the solution. We refer to \cite{Ihlenburg1995finite, Ihlenburg1997finite, Melenk2011wavenumber} for more details of these conforming methods. Compared with conforming finite element methods, discontinuous Galerkin methods (DGMs) have several attractive features regarding the mesh structure \cite{Chen2013hybrid}. Without the continuity condition across interelement boundaries, DGMs can be easily applied on general meshes, which may include elements of different shapes and hanging nodes, and they allow the polynomial degree to differ from element to element. Thus, DGMs have been applied in the numerical simulation of the Helmholtz equation. We refer to \cite{Feng2009discontinuous, Feng2011discontinuous, Congreve2019robust, Hoppe2013convergence, Feng2013absolutely, Chen2013hybrid, Hiptmair2011plane} and the references therein for some typical DGMs. The least squares finite element method (LSFEM) is a general numerical method based on the minimization of a quadratic functional; we refer to \cite{Bochev1998review} for an overview of this method. The systems arising from most of the above Galerkin finite element methods are indefinite for large wavenumbers, while the LSFEM always provides a positive definite linear system \cite{Chang1990least, Chen2017first}.
Considering this attractive property, the LSFEM has been applied to numerically solve the Helmholtz equation \cite{Lee2000first, Chen2017first, Thompson1995Galerkin, Chang1990least, Hu2020novel, Monk1999least}. In this paper, we propose a discontinuous least squares finite element method. We introduce an $L^2$ norm least squares functional involving proper penalty terms which weakly enforce the continuity across interior faces as well as the boundary conditions. The numerical solution is sought by minimizing the functional over piecewise polynomial spaces. Similar ideas have been applied to many problems; see \cite{Bensow2005div, Bensow2005discontinuous, Bochev2012locally, li2019least}. With discontinuous elements, the proposed method is easily implemented and has great flexibility regarding the mesh structure. The discretized system is still shown to be symmetric positive definite. In general, the advantages of the DGM and the LSFEM are combined in this method. In finite element methods, the pollution effect manifests itself in the growth of the constant $C$ in the error estimate as the wavenumber increases \cite{Babuska2000pollution, Melenk2011wavenumber}. For the proposed method, we establish a wavenumber-explicit error estimate. Our method is shown to be stable without any assumption on the mesh size. We prove that, for a fixed wavenumber, our method has an optimal convergence rate in the energy norm and a sub-optimal convergence rate in the $L^2$ norm. We observe that the constants in the energy error and the $L^2$ error are of order $O(k^2)$ and $O(k)$, respectively. Our theoretical estimates are verified by numerical experiments in two and three dimensions. We also include an example that numerically explores the pollution effect as the wavenumber $k$ increases. We note that the least squares functional naturally provides an {\it a posteriori} error indicator, and based on this we present an $h$-adaptive algorithm and test its accuracy by solving a low-regularity problem. The rest of this paper is organized as follows. In Section \ref{sec_preliminaries}, we introduce the notation and define the first-order system for the Helmholtz equation. The $k$-explicit stability result for the Helmholtz equation is also recalled in this section. In Section \ref{sec_method}, we define the least squares functional and propose our least squares method. The error analysis is also given in this section. In Section \ref{sec_numericalresults}, we conduct a series of 2D and 3D numerical tests to demonstrate the accuracy of the proposed method.
\section{Discontinuous Least Squares Method for Helmholtz Equation} \label{sec_method} Aiming to construct a discontinuous least squares finite element method for the system \eqref{eq_firstHelmholtz}, we first define a least squares functional based on \eqref{eq_firstHelmholtz}, which reads \begin{equation} \begin{aligned} J_h(u, \bm{p}) &:= \sum_{K \in \mc{T}_h} \left( \| \nabla \cdot \bm{p} + ku + \wt{f} \|^2_{L^2(K)} + \| \nabla u - k\bm{p} \|^2_{L^2(K)} \right) \\ &+ \sum_{e \in \mc{E}_h^i} \frac{1}{h_e} \left( \| \jump{u} \|^2_{L^2(e)} + \| \jump{\bm{\mr{n}} \cdot \bm{p}} \|^2_{L^2(e)} \right) \\ &+ \sum_{e \in \mc{E}_h^D} \frac{1}{h_e} \| u - g_0 \|^2_{L^2(e)} + \sum_{e \in \mc{E}_h^R} \frac{1}{h_e} \| \bm{\mr{n}} \cdot \bm{p} + \bm{\mr i} u - \wt{g} \|^2_{L^2(e)}. \end{aligned} \label{eq_functional} \end{equation} The terms in \eqref{eq_functional} defined on $\mc{E}_h^i$ weakly impose the continuity conditions across interior faces, while the terms defined on $\mc{E}_h^D$ and $\mc{E}_h^R$ weakly impose the boundary conditions. Then we introduce two approximation spaces $\bmr{V}_h^m$ and $\bmr{\Sigma}_h^m$ for the variables $u$ and $\bm{p}$, respectively: \begin{displaymath} \bmr{V}_h^m := V_h^m, \qquad \bmr{\Sigma}_h^m := (V_h^m)^d, \end{displaymath} where $V_h^m$ is the {\it complex-valued} piecewise polynomial space, \begin{displaymath} V_h^m := \left\{ v_h \in L^2(\Omega) \ | \ v_h|_K \in \mb{P}_m(K), \ \forall K \in \mc{T}_h \right\}. \end{displaymath} One can write any function $v_h \in \bmr{V}_h^m$ and any function $\bm{q}_h \in \bmr{\Sigma}_h^m$ as \begin{equation*} v_h = \sum_l v_l \varphi_l, \quad \bm{q}_h = \sum_l q_l \bm{\psi}_l, \end{equation*} where $\{ \varphi_l \}$ is a basis of the standard real-valued scalar piecewise polynomial space, $\{ \bm{\psi}_l \}$ is a basis of the standard real-valued vector piecewise polynomial space, and $\{v_l\}$ and $\{ q_l \}$ are complex coefficients. Clearly, the functions in both spaces $\bmr{V}_h^m$ and $\bmr{\Sigma}_h^m$ may be discontinuous across interior faces. In this paper, we seek the numerical solution $(u_h, \bm{p}_h) \in \bmr{V}_h^m \times \bmr{\Sigma}_h^m$ by minimizing the functional \eqref{eq_functional} over the space $\bmr{V}_h^m \times \bmr{\Sigma}_h^m$, which takes the form: \begin{equation} (u_h, \bm{p}_h) = \mathop{\arg \min}_{(v_h, \bm{q}_h) \in \bmr{V}_h^m \times \bmr{\Sigma}_h^m} J_h(v_h, \bm{q}_h).
\label{eq_minJ} \end{equation} To solve the minimization problem \eqref{eq_minJ}, we write the corresponding Euler-Lagrange equation, which reads: {\it find $(u_h, \bm{p}_h) \in \bmr{V}_h^m \times \bmr{\Sigma}_h^m$ such that} \begin{equation} a_h(u_h, \bm{p}_h; v_h, \bm{q}_h) = l_h(v_h, \bm{q}_h), \qquad \forall (v_h, \bm{q}_h) \in \bmr{V}_h^m \times \bmr{\Sigma}_h^m, \label{eq_bilinear} \end{equation} where the bilinear form $a_h(\cdot; \cdot)$ and the linear form $l_h(\cdot)$ are defined as \begin{equation} \begin{aligned} a_h(u_h, \bm{p}_h; v_h, \bm{q}_h) &:= \sum_{K \in \mc{T}_h} \int_K (\nabla \cdot \bm{p}_h + ku_h) \ \overline{(\nabla \cdot \bm{q}_h + kv_h)} \d{x} \\ &+ \sum_{K \in \mc{T}_h} \int_K (\nabla u_h - k\bm{p}_h) \cdot \overline{(\nabla v_h - k\bm{q}_h)} \d{x} \\ &+ \sum_{e \in \mc{E}_h^i} \frac{1}{h_e} \left( \int_e \jump{u_h} \cdot \overline{\jump{v_h}} \d{s} + \int_e \jump{\bm{\mr{n}} \cdot \bm{p}_h} \ \overline{\jump{\bm{\mr{n}} \cdot \bm{q}_h}} \d{s} \right) \\ &+ \sum_{e \in \mc{E}_h^D} \frac{1}{h_e} \int_e u_h \ \overline{v_h} \d{s} + \sum_{e \in \mc{E}_h^R} \frac{1}{h_e} \int_e (\bm{\mr{n}} \cdot \bm{p}_h + \bm{\mr i} u_h) \ \overline{(\bm{\mr{n}} \cdot \bm{q}_h + \bm{\mr i} v_h)} \d{s}, \end{aligned} \label{eq_bilinearform} \end{equation} and \begin{displaymath} \begin{aligned} l_h(v_h, \bm{q}_h) &:= \sum_{K \in \mc{T}_h} \int_K \wt{f} \ \overline{\nabla \cdot \bm{q}_h} \d{x} + \sum_{K \in \mc{T}_h} \int_K f \ \overline{v_h} \d{x} \\ &+ \sum_{e \in \mc{E}_h^D} \frac{1}{h_e} \int_e g_0 \ \overline{v_h} \d{s} \\ &+ \sum_{e \in \mc{E}_h^R} \frac{1}{h_e} \int_e \wt{g} \ \overline{(\bm{\mr{n}} \cdot \bm{q}_h + \bm{\mr i} v_h)} \d{s}. \end{aligned} \end{displaymath} Next, we derive the error estimates for the problem \eqref{eq_bilinear} and focus on how the error bounds depend on the wavenumber $k$. To do so, we first define two spaces $\bmr{V}_h$ and $\bmr{\Sigma}_h$ for the variables $u$ and $\bm{p}$, respectively, as \begin{displaymath} \bmr{V}_h := \bmr{V}_h^m + H_D^1(\Omega), \qquad \bmr{\Sigma}_h := \bmr{\Sigma}_h^m + H(\ifmmode \mathrm{div} \else \text{div}\fi, \Omega), \end{displaymath} which are equipped with the following energy norms: \begin{displaymath} \unorm{u}^2 := \sum_{K \in \mc{T}_h} \left( k^2 \| u \|^2_{L^2(K)} + \| \nabla u \|^2_{L^2(K)} \right) + \sum_{e \in \mc{E}_h^i \cup \mc{E}_h^D} \frac{1}{h_e} \| \jump{u} \|_{L^2(e)}^2, \qquad \forall u \in \bmr{V}_h, \end{displaymath} and \begin{displaymath} \pnorm{\bm{p}}^2 := \sum_{K \in \mc{T}_h} \left( k^2 \| \bm{p} \|^2_{L^2(K)} + \| \nabla \cdot \bm{p} \|^2_{L^2(K)} \right) + \sum_{e \in \mc{E}_h^i} \frac{1}{h_e} \| \jump{\bm{\mr{n}} \cdot \bm{p}} \|_{L^2(e)}^2, \qquad \forall \bm{p} \in \bmr{\Sigma}_h, \end{displaymath} and we define the energy norm $\enorm{\cdot}$ as \begin{displaymath} \enorm{(u, \bm{p})}^2 := \unorm{u}^2 + \pnorm{\bm{p}}^2 + \sum_{e \in \mc{E}_h^R} \frac{1}{h_e} \| \bm{\mr{n}} \cdot \bm{p} + \bm{\mr i} u \|^2_{L^2(e)}, \qquad \forall (u, \bm{p}) \in \bmr{V}_h \times \bmr{\Sigma}_h. \end{displaymath} It is easy to see that $\unorm{\cdot}$, $\pnorm{\cdot}$ and $\enorm{\cdot}$ are well-defined norms on their corresponding spaces. We will derive the error estimates for the numerical solution to the problem \eqref{eq_bilinear} within the Lax-Milgram framework, which requires us to establish the continuity and coercivity of the bilinear form \eqref{eq_bilinearform}. We first state the continuity result of the bilinear form $a_h(\cdot; \cdot)$ under the norm $\enorm{\cdot}$.
\begin{lemma} Let the bilinear form $a_h(\cdot; \cdot)$ be defined as in \eqref{eq_bilinearform}. Then there exists a constant $C$ such that \begin{equation} | a_h(u, \bm{p}; v, \bm{q}) | \leq C \enorm{(u, \bm{p})} \enorm{(v, \bm{q})}, \label{eq_continuity} \end{equation} for any $(u, \bm{p}), (v, \bm{q}) \in \bmr{V}_h \times \bmr{\Sigma}_h$. \label{le_continuity} \end{lemma} \begin{proof} Using the Cauchy-Schwarz inequality, we have that \begin{displaymath} \sum_{K \in \mc{T}_h} \int_K \nabla \cdot \bm{p} \ \overline{\nabla \cdot \bm{q}} \d{x} \leq \left( \sum_{K \in \mc{T}_h} \| \nabla \cdot \bm{p} \|^2_{L^2(K)} \right)^{\frac{1}{2}} \left( \sum_{K \in \mc{T}_h} \| \nabla \cdot \bm{q} \|^2_{L^2(K)} \right)^{\frac{1}{2}}. \end{displaymath} The other terms that appear in the bilinear form \eqref{eq_bilinearform} can be bounded similarly, which gives us the inequality \eqref{eq_continuity} and completes the proof. \end{proof} Next we focus on the coercivity of the bilinear form $a_h(\cdot; \cdot)$. We first prove a stability property at the continuous level by making use of the stability result \eqref{eq_stability}. In this step, the wavenumber $k$ is extracted from the constant appearing in the inequality, which allows us to obtain $k$-explicit error estimates. \begin{lemma} Let $k_0$ be an arbitrary strictly positive number. For $k \geq k_0$, there exists a constant $C$ such that \begin{equation} \unorm{u} + \pnorm{\bm{p}} \leq C k \left( \| \nabla u - k\bm{p} \|_{L^2(\Omega)} + \| \nabla \cdot \bm{p} + ku \|_{L^2(\Omega)} + \| \bm{\mr{n}} \cdot \bm{p} + \bm{\mr i} u \|_{L^2(\Gamma_R)} \right), \label{eq_HelmholtzInequality} \end{equation} for all $u \in H_{D}^1(\Omega)$ and $\bm{p} \in H(\ifmmode \mathrm{div} \else \text{div}\fi, \Omega)$. \label{le_HelmholtzInequality} \end{lemma} \begin{proof} For any $u \in H_D^1(\Omega)$ and $\bm{p} \in H(\ifmmode \mathrm{div} \else \text{div}\fi, \Omega)$, we define \begin{displaymath} \begin{aligned} f_1 := &-\nabla \cdot \bm{p} - ku, \quad \bm{f}_2 := \nabla u - k\bm{p}, \quad \text{ in } \Omega, \\ &g := \bm{\mr{n}} \cdot \bm{p} + \bm{\mr i} u, \quad \text{ on } \Gamma_R, \end{aligned} \end{displaymath} and let \begin{displaymath} a(u,v) := (\nabla u, \nabla v)_{L^2(\Omega)} - k^2(u,v)_{L^2(\Omega)} + \bm{\mr i} k(u,v)_{L^2(\Gamma_R)}, \quad \forall v \in H^1_D(\Omega). \end{displaymath} Using integration by parts, we obtain that \begin{equation*} a(u,v) = k(f_1, v)_{L^2(\Omega)} + (\bm{f}_2, \nabla v)_{L^2(\Omega)} + k(g,v)_{L^2(\Gamma_R)}, \quad \forall v \in H^1_D(\Omega). \end{equation*} We take $v = u + \xi$, where $\xi \in H_D^1(\Omega)$ is the unique solution of the adjoint problem: \begin{equation} a(\xi, \phi) = 2k^2(u, \phi)_{L^2(\Omega)}, \quad \forall \phi \in H_D^1(\Omega). \label{eq_adjoint} \end{equation} Then, by the stability result \eqref{eq_stability}, we have that \begin{equation} \| \nabla \xi \|_{L^2(\Omega)} + k \| \xi \|_{L^2(\Omega)} \leq C k^2 \| u \|_{L^2(\Omega)}.
\label{eq_xistability} \end{equation} From \eqref{eq_adjoint} and \eqref{eq_xistability}, we get that \begin{displaymath} \begin{aligned} \text{Re}(a(u,&u+\xi)) = \| \nabla u \|_{L^2(\Omega)}^2 + k^2 \| u \|_{L^2(\Omega)}^2 \vspace{1ex}\\ &\leq k \| f_1 \|_{L^2(\Omega)} (\| u \|_{L^2(\Omega)} + \| \xi \|_{L^2(\Omega)}) + \| \bm{f}_2 \|_{L^2(\Omega)} ( \| \nabla u \|_{L^2(\Omega)} + \| \nabla \xi \|_{L^2(\Omega)}) \vspace{1ex}\\ &\,\, + k \| g \|_{L^2(\Gamma_R)}(\| u \|_{L^2(\Gamma_R)} + \| \xi \|_{L^2(\Gamma_R)}) \vspace{1ex} \\ & \leq C k (\| f_1 \|_{L^2(\Omega)} + \| \bm{f}_2 \|_{L^2(\Omega)}) ( \| \nabla u \|_{L^2(\Omega)} + k \| u \|_{L^2(\Omega)}) \vspace{1ex}\\ & \,\, + k \| g \|_{L^2(\Gamma_R)} ( \| u \|_{L^2(\Gamma_R)} + \| \xi \|_{L^2(\Gamma_R)}). \end{aligned} \end{displaymath} It remains to bound the boundary terms $\| u \|_{L^2(\Gamma_R)}$ and $\| \xi \|_{L^2(\Gamma_R)}$. Taking $\phi = \xi$ in \eqref{eq_adjoint} gives us that \begin{displaymath} \begin{aligned} \text{Im}(a(\xi, &\xi)) = k \| \xi \|_{L^2(\Gamma_R)}^2 \leq 2k^2 \| \xi \|_{L^2(\Omega)} \| u \|_{L^2(\Omega)} \vspace{1ex}\\ &\leq k(\| \xi \|_{L^2(\Omega)}^2 + k^2 \| u \|_{L^2(\Omega)}^2), \end{aligned} \end{displaymath} which implies \begin{equation} \| \xi \|_{L^2(\Gamma_R)} \leq C(\| \xi \|_{L^2(\Omega)} + k \| u \|_{L^2(\Omega)}). \label{eq_xitrace} \end{equation} The term $\| u \|_{L^2(\Gamma_R)}$ is bounded by the trace theorem: \begin{displaymath} \begin{aligned} \| u \|_{L^2(\Gamma_R)}^2 &\leq C \| u \|_{L^2(\Omega)} \| u \|_{H^1(\Omega)} \vspace{1ex} \\ &\leq C(\frac{k^2}{2} \| u \|_{L^2(\Omega)}^2 + \frac{1}{2k^2} \| u \|_{H^1(\Omega)}^2), \end{aligned} \end{displaymath} which implies \begin{equation} \| u \|_{L^2(\Gamma_R)} \leq C(k\| u \|_{L^2(\Omega)} + \| \nabla u \|_{L^2(\Omega)}). \label{eq_utrace} \end{equation} Combining \eqref{eq_xitrace} and \eqref{eq_utrace}, we get \begin{displaymath} k \| u \|_{L^2(\Omega)} + \| \nabla u \|_{L^2(\Omega)} \leq C k (\| f_1 \|_{L^2(\Omega)} + \| \bm{f}_2 \|_{L^2(\Omega)} + \| g \|_{L^2(\Gamma_R)}). \end{displaymath} Further, \begin{displaymath} \begin{aligned} \pnorm{\bm{p}} &\leq C( k \| \bm{p} \|_{L^2(\Omega)} + \| \nabla \cdot \bm{p} \|_{L^2(\Omega)}) \vspace{1ex}\\ &\leq C( \| \nabla u \|_{L^2(\Omega)} + \| \bm{f}_2 \|_{L^2(\Omega)} + k \| u \|_{L^2(\Omega)} + \| f_1 \|_{L^2(\Omega)} ) \vspace{1ex}\\ & \leq Ck (\| f_1 \|_{L^2(\Omega)} + \| \bm{f}_2 \|_{L^2(\Omega)} + \| g \|_{L^2(\Gamma_R)}), \end{aligned} \end{displaymath} which gives the estimate \eqref{eq_HelmholtzInequality} and completes the proof. \end{proof} The following lemmas, together with Lemma \ref{le_HelmholtzInequality}, allow us to prove the coercivity of the bilinear form $a_h(\cdot; \cdot)$. \begin{lemma} For any ${u}_h \in \bmr{V}_h^m$, there exists a piecewise polynomial function $v_h \in H^1_D(\Omega)$ such that \begin{equation} \begin{aligned} \sum_{K \in \mc{T}_h} \left( h_K^{-2} \|u_h - v_h \|_{L^2(K)}^2 + \|\nabla (u_h - v_h) \|_{L^2(K)}^2 \right) \leq C \sum_{e \in \mc{E}_h^i \cup \mc{E}_h^D} h_e^{-1} \| \jump{u_h}\|_{L^2(e)}^2. \end{aligned} \label{eq_projection1} \end{equation} \label{le_projection1} \end{lemma} \begin{proof} The proof follows the techniques in \cite{Karakashian2003post}.
For each $K \in \mc{T}_h$, let $\mc{N}_K = \left\{ \bm{x}_K^{(i)}, \,\, i = 1, \cdots, M \right\}$ be the Lagrange points of $K$ and $\left\{ \varphi_K^{(i)}, \,\, i = 1, \cdots, M \right\}$ be the corresponding Lagrange basis, where $M$ is the number of degrees of freedom of the Lagrange element of order $m$. We set $\mc{N} := \cup_{K \in \mc{T}_h} \mc{N}_K$ and \begin{equation*} \begin{aligned} \mc{N}_i &:= \left\{ \nu \in \mc{N} : \exists K \in \mc{T}_h, \,\, \nu \text{ is interior to } K \right\}, \\ \mc{N}_b & := \left\{ \nu \in \mc{N} : \nu \text{ lies on } \Gamma_D \right\}, \\ \mc{N}_{e} & := \mc{N} \backslash (\mc{N}_i \cup \mc{N}_b). \end{aligned} \end{equation*} Let $\omega_{\nu} = \left\{ K \in \mc{T}_h \ | \ \nu \in K \right\}$ and denote its cardinality by $| \omega_{\nu} |$. Since the mesh is shape-regular, $|\omega_{\nu}|$ is bounded by a constant. For any given $u_h \in \bmr{V}_h^m$, there exists a set of coefficients $\{\alpha_K^{(j)}\}$ such that \begin{equation*} u_h = \sum_{K \in \mc{T}_h} \sum_{1 \leq j \leq M} \alpha_K^{(j)} \varphi_K^{(j)}. \end{equation*} To each node $\nu \in \mc{N}$, we associate the basis function $\varphi^{(\nu)}$ given by \begin{displaymath} \varphi^{(\nu)} |_{K} := \left\{ \begin{aligned} &\varphi_K^{(j)}, && \text{if } \bm{x}_K^{(j)} = \nu, \\ &0, && \text{otherwise}. \\ \end{aligned} \right. \end{displaymath} We define $v_h \in \bmr{V}_h^m \cap H^1_D(\Omega)$ by \begin{equation*} v_h = \sum_{\nu \in \mc{N}} \beta^{(\nu)} \varphi^{(\nu)}, \end{equation*} where \begin{displaymath} \beta^{(\nu)} := \left\{\begin{aligned} &0, && \text{if } \nu \in \mc{N}_b, \\ &\frac{1}{|\omega_{\nu}|} \sum_{\bm{x}_K^{(j)} = \nu} \alpha_K^{(j)}, && \text{if } \nu \in \mc{N} \backslash \mc{N}_b. \\ \end{aligned} \right. \end{displaymath} Let $\beta_K^{(j)} = \beta^{(\nu)}$ whenever $\bm{x}_K^{(j)} = \nu$. By a scaling argument, we have that \begin{equation*} \| \nabla \varphi_K^{(j)} \|_{L^2(K)}^2 \leq C h_K^{d-2}, \qquad \|\varphi_K^{(j)} \|^2_{L^2(K)} \leq C h_K^d. \end{equation*} Hence, \begin{equation*} \begin{aligned} \sum_{K \in \mc{T}_h} \| \nabla(u_h &- v_h) \|_{L^2(K)}^2 \leq C \sum_{K \in \mc{T}_h} h_K^{d-2} \sum_{j = 1}^{M} | \alpha_K^{(j)} - \beta_K^{(j)} |^2 \\ &\leq C \sum_{\nu \in \mc{N}_{e}} h_{\nu}^{d-2} \sum_{\bm{x}_K^{(j)} = \nu} |\alpha_K^{(j)} - \beta^{(\nu)}|^2 + C \sum_{\nu \in \mc{N}_b} h_{\nu}^{d-2} \sum_{\bm{x}_K^{(j)} = \nu} | \alpha_K^{(j)} |^2 \\ &\leq C \sum_{e \in \mc{E}_h^i} h_e^{ d-2 } \sum_{\nu \in e} | \alpha_{K^+}^{(j_{\nu}^{+})} - \alpha_{K^-}^{(j_{\nu}^{-})} |^2 + C \sum_{e \in \mc{E}_h^D} h_e^{ d-2 } \sum_{\nu \in e} | \alpha_K^{(j_{\nu})} |^2, \end{aligned} \end{equation*} with $h_{\nu} = \max\limits_{K \in \omega_{\nu}} h_K$ and $\bm{x}_{K^+}^{(j_{\nu}^{+})} = \bm{x}_{K^-}^{(j_{\nu}^{-})} = \nu$. Noting that $ |\alpha_{K^+}^{(j_{\nu}^{+})} - \alpha_{K^-}^{(j_{\nu}^{-}) }| \leq C\|\jump{u_h} \|_{L^\infty(e)} $ and using the inverse inequality, we have \begin{equation*} \sum_{K \in \mc{T}_h} \| \nabla (u_h - v_h) \|_{L^2(K)}^2 \leq C \sum_{e \in \mc{E}_h^D \cup \mc{E}_h^i} h_e^{d-2} \|\jump{u_h} \|_{L^{\infty}(e)}^2 \leq C \sum_{e \in \mc{E}_h^D \cup \mc{E}_h^i} h_e^{-1} \| \jump{u_h} \|_{L^2(e)}^2. \end{equation*} Similarly, \begin{equation*} \sum_{K \in \mc{T}_h} h_K^{-2} \| u_h - v_h \|_{L^2(K)}^2 \leq C \sum_{e \in \mc{E}_h^i \cup \mc{E}_h^D} h_e^{-1} \| \jump{u_h} \|_{L^2(e)}^2, \end{equation*} which yields \eqref{eq_projection1} and completes the proof.
\end{proof} \begin{lemma} For any $\bm{p}_h \in \bmr{\Sigma}_h^m$, there exists a piecewise polynomial function $\bm{w}_h \in H(\ifmmode \mathrm{div} \else \text{div}\fi, \Omega)$ such that \begin{equation} \begin{aligned} \sum_{K \in \mc{T}_h} \left( h_K^{-2} \|\bm{p}_h- \bm{w}_h \|_{L^2(K)}^2 + \|\nabla \cdot ( \bm{p}_h- \bm{w}_h ) \|_{L^2(K)}^2 \right) \leq C \sum_{e \in \mc{E}_h^i} h_e^{-1} \| \jump{\bm{\mr{n}} \cdot \bm{p}_h }\|_{L^2(e)}^2. \end{aligned} \label{eq_Hdivprojection} \end{equation} \label{le_Hdivprojection} \end{lemma} \begin{proof} We prove the result by using projection techniques as in \cite{Karakashian2003post, Li2020discontinuous}. We will construct a new piecewise polynomial function in the \ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi space that satisfies the estimate \eqref{eq_Hdivprojection}. We first present some details about the Raviart-Thomas (\ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi\!) element, which is the well-known $H(\ifmmode \mathrm{div} \else \text{div}\fi,\Omega)$-conforming element proposed in \cite{Raviart1977mixed}. For a bounded domain $D$, we denote by $\wt{\mb{P}}_k(D)$ the set of homogeneous polynomials of degree $k$ on $D$. For an element $K \in \mc{T}_h$, the \ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi element $\ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi_k(K)$ of degree $k$ is given as \begin{displaymath} \ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi_k(K) := \mb{P}_k(K)^d + \bm{x}\wt{\mb{P}}_k(K). \end{displaymath} For a face $e$, we denote by $\left\{ \bm{q}_e^i \right\}_{i =1}^{N_e}$ a basis of the polynomial space $\mb{P}_k(e)$, and for an element $K$, we denote by $\left\{ \bm{q}_K^i \right\}_{i=1}^{N_b}$ a basis of the polynomial space $\mb{P}_{k - 1}(K)$. For a vector field $\bm{v} \in \ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi_k(K)$, the moments associated with the faces of $K$ and with $K$ itself are defined as \begin{equation} \begin{aligned} M_K^e(\bm{v}) & := \left\{ \int_e (\bm{\mr{n}}_e \cdot \bm{v}) \bm{q}_e^i \d{s} \right\}, \quad \text{for any face } e \in \mc{E}(K), \\ M_K^b(\bm{v}) & := \left\{ \int_K \bm{v} \cdot \bm{q}_K^i \d{x} \right\},\\ \end{aligned} \label{eq_RTmoments} \end{equation} where $\mc{E}(K)$ denotes the set of faces of the element $K$. The polynomials in $\ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi_k(K)$ are uniquely determined by the moments given in \eqref{eq_RTmoments} \cite{Raviart1977mixed}. For any $\bm{q} \in \ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi_k(K)$, we denote by $q_{K, e}^i \in M_K^e(\bm{q})$ $(1 \leq i \leq N_e)$ and $q_{K,b}^i \in M_K^b(\bm{q})$ $(1 \leq i \leq N_b)$ its corresponding moments. We denote by $\left\{ \bm{\phi}_{K, e}^i \right\}$ $(1 \leq i \leq N_e)$ and $\left\{ \bm{\phi}_{K,b}^i \right\}$ $(1 \leq i \leq N_b)$ the basis functions of $\ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi_k(K)$ with respect to the moments $M_K^e(\cdot)$ and $M_K^b(\cdot)$, respectively. From $\left\{ \bm{\phi}_{K, e}^i \right\}$ and $\left\{ \bm{\phi}_{K,b}^i \right\}$ and the moments in \eqref{eq_RTmoments}, any polynomial $\bm{q} \in \ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi_k(K)$ can be expressed as \begin{displaymath} \bm{q} = \sum_{e \in \mc{E}(K)} \sum_{i = 1}^{N_e} q_{K, e}^i \bm{\phi}_{K, e}^i + \sum_{i = 1}^{N_b} q_{K, b}^i \bm{\phi}_{K, b}^i. \end{displaymath} Then we take advantage of the affine equivalence of elements.
Let $\wh{K}$ be the reference simplex in $d$ dimensions, and employ the Piola transformation that maps a vector field $\wh{\bm{v}}: \wh{K} \rightarrow \mb{R}^d$ to a vector field $\bm{v}: K \rightarrow \mb{R}^d$. The Piola transformation preserves the moments, and we refer to \cite{Raviart1977mixed, Brezzi1991mixed} for its detailed properties. Then we have that \begin{equation} h_K^{-2}\|{\bm{q}} \|_{L^2({K})}^2 + \|\nabla \cdot {\bm{q}} \|_{L^2({K})}^2 \leq C h_K^{-d} \left( \sum_{e \in \mc{E}(K)} \sum_{i = 1}^{N_e} (q_{K, e}^i)^2 + \sum_{i = 1}^{N_b} (q_{K, b}^i)^2 \right). \label{eq_Kdivmoments} \end{equation} It is clear that \eqref{eq_Kdivmoments} holds on the reference element. On a general element $K$, we obtain the estimate \eqref{eq_Kdivmoments} from the properties of the Piola transformation, namely $\|\bm{q}\|_{L^2(K)}^2 \leq C h_K^{-d + 2} \|\wh{\bm{q}} \|_{L^2(\wh{K})}^2$ and $\| \nabla \cdot \bm{q}\|_{L^2(K)}^2 \leq Ch_K^{-d} \|\wh{\nabla} \cdot \wh{\bm{q}} \|_{L^2(\wh{K})}^2$. Let $e \in \mc{E}_h^i$ be an interior face shared by two adjacent elements $K_1$ and $K_2$. For two polynomials $\bm{q}_1 \in \ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi_k(K_1)$ and $\bm{q}_2 \in \ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi_k(K_2)$, we claim that there exists a constant $C$ such that \begin{equation} \sum_{i = 1}^{N_e} (q_{K_1, e}^i - q_{K_2, e}^i)^2 \leq C h_e^{d - 1} \int_e (\bm{\mr{n}} \cdot(\bm{q}_1 - \bm{q}_2))^2 \d{s}. \label{eq_q1q2f} \end{equation} We also apply a scaling argument to obtain \eqref{eq_q1q2f}. We first assume that both $K_1$ and $K_2$ are of the reference size. We note that the left-hand side of \eqref{eq_q1q2f} vanishes if and only if the right-hand side vanishes; the estimate \eqref{eq_q1q2f} then holds by the equivalence of norms over finite-dimensional spaces. For the general case, we obtain \eqref{eq_q1q2f} from the scaling estimate $\| \wh{\bm{q}} \|_{L^2(e)}^2 \leq C h_K^{d - 1} \| \bm{q} \|_{L^2(e)}^2$. Now we are ready to prove Lemma \ref{le_Hdivprojection} by constructing a new piecewise polynomial $\bm{w}_h \in H(\ifmmode \mathrm{div} \else \text{div}\fi, \Omega)$ satisfying \eqref{eq_Hdivprojection}. Clearly, $\mb{P}_k(K)^d \subset \ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi_k(K)$ for any $K \in \mc{T}_h$, and we let $\{ p_{K, e}^i \}$ and $\{ p_{K, b}^i\}$ be the moments of $\bm{p}_h$ for any $K \in \mc{T}_h$ and any $e \in \mc{E}(K)$. We construct $\bm{w}_h$ by defining the following moments on faces and elements: \begin{equation} w_{K, e}^i := \frac{1}{|N(e)|} \sum_{K' \in N(e)} p_{K', e}^i, \quad 1 \leq i \leq N_e, \quad \forall e \in \mc{E}_h, \label{eq_wKfi} \end{equation} and \begin{equation} w_{K, b}^i := p_{K, b}^i, \quad 1 \leq i \leq N_b, \quad \forall K \in \mc{T}_h, \label{eq_wKbi} \end{equation} where $N(e) := \left\{ K' \in \mc{T}_h \ | \ e \in \mc{E}(K') \right\}$ and $|N(e)|$ denotes the cardinality of $N(e)$. Obviously, $1 \leq |N(e)| \leq 2$, and $|N(e)| = 1$ implies $e \in \mc{E}_h^b$. By the properties of the $\ifmmode \mathrm{\bf RT} \else \text{\bf RT} \fi_k$ space, these moments determine a function $\bm{w}_h \in H(\ifmmode \mathrm{div} \else \text{div}\fi, \Omega)$. It remains to bound $\bm{p}_h - \bm{w}_h$.
On the element $K$, by \eqref{eq_Kdivmoments} and \eqref{eq_wKfi}, we have that \begin{displaymath} h_K^{-2} \|\bm{p}_h - \bm{w}_h \|_{L^2(K)}^2 + \| \nabla \cdot (\bm{p}_h - \bm{w}_h) \|_{L^2(K)}^2 \leq C h_K^{-d} \left( \sum_{e \in \mc{E}(K)} \sum_{i = 1}^{N_e}(p_{K, e}^i - w_{K, e}^i)^2\right). \end{displaymath} On any boundary face $e$, $\bm{p}_h$ and $\bm{w}_h$ clearly have the same moments. A summation over all elements, together with the mesh regularity, \eqref{eq_wKfi} and \eqref{eq_q1q2f}, gives that \begin{displaymath} \begin{aligned} \sum_{K \in \mc{T}_h} \big( h_K^{-2} \|\bm{p}_h &- \bm{w}_h \|_{L^2(K)}^2 + \|\nabla \cdot ( \bm{p}_h- \bm{w}_h )\|_{L^2(K)}^2 \big) \leq C \sum_{e \in \mc{E}_h}\sum_{i=1}^{N_e} h_e^{-d} (p_{K, e}^i - w_{K, e}^i)^2 \\ & \leq C \sum_{e \in \mc{E}_h^i} \sum_{i = 1}^{N_e} h_e^{-d} \left(p_{K, e}^i - \frac{p_{K, e}^i + p_{K', e}^i}{2} \right)^2 \quad (e \text{ is shared by } K \text{ and } K') \\ &\leq C \sum_{e \in \mc{E}_h^i} \sum_{i = 1}^{N_e} h_e^{-d} \left(p_{K, e}^i - p_{K', e}^i \right)^2 \leq C \sum_{e \in \mc{E}_h^i} h_e^{-1} \| \jump{\bm{\mr{n}} \cdot \bm{p}_h} \|_{L^2(e)}^2. \end{aligned} \end{displaymath} This gives the estimate \eqref{eq_Hdivprojection} and completes the proof. \end{proof} Now we are ready to show that the bilinear form $a_h(\cdot; \cdot)$ is coercive under the energy norm $\enorm{\cdot}$. \begin{lemma} Let the bilinear form $a_h(\cdot; \cdot)$ be defined as in \eqref{eq_bilinearform}. Then there exists a constant $C$ such that \begin{equation} a_h(u_h, \bm{p}_h; u_h, \bm{p}_h) \geq C k^{-2} (1 + h + k^2 h^2)^{-1} \enorm{(u_h, \bm{p}_h)}^2, \label{eq_coercivity} \end{equation} for any $(u_h, \bm{p}_h) \in \bmr{V}_h^m \times \bmr{\Sigma}_h^m$. \label{le_coercivity} \end{lemma} \begin{proof} Clearly, we have that \begin{displaymath} \begin{aligned} a_h(u_h, \bm{p}_h; u_h&, \bm{p}_h) = \sum_{K \in \mc{T}_h} \left( \| \nabla u_h - k \bm{p}_h \|^2_{L^2(K)} + \| \nabla \cdot \bm{p}_h + k u_h \|^2_{L^2(K)} \right) \\ &+ \sum_{e \in \mc{E}_h^i} \frac{1}{h_e} \left( \| \jump{u_h} \|^2_{L^2(e)} + \| \jump{\bm{\mr{n}} \cdot \bm{p}_h} \|^2_{L^2(e)} \right) \\ &+ \sum_{e \in \mc{E}_h^D} \frac{1}{h_e} \| u_h \|^2_{L^2(e)} + \sum_{e \in \mc{E}_h^R} \frac{1}{h_e} \| \bm{\mr{n}} \cdot \bm{p}_h + \bm{\mr i} u_h \|^2_{L^2(e)}. \end{aligned} \end{displaymath} By Lemmas \ref{le_projection1} and \ref{le_Hdivprojection}, there exist a polynomial $v_h \in \bmr{V}_h^m \cap H^1_D(\Omega)$ and a polynomial $\bm{q}_h \in \bmr{\Sigma}_h^m \cap H(\ifmmode \mathrm{div} \else \text{div}\fi,\Omega)$ such that \begin{displaymath} \unorm{u_h - v_h}^2 \leq C \sum_{e \in \mc{E}_h^i \cup \mc{E}_h^D} (h_e^{-1} + k^2 h_e) \| \jump{u_h} \|^2_{L^2(e)} \leq C (1 + k^2 h^2) a_h(u_h,\bm{p}_h; u_h, \bm{p}_h), \end{displaymath} and \begin{displaymath} \pnorm{\bm{p}_h - \bm{q}_h}^2 \leq C \sum_{e \in \mc{E}_h^i} (h_e^{-1} + k^2 h_e) \| \jump{\bm{\mr{n}} \cdot \bm{p}_h} \|^2_{L^2(e)} \leq C (1 + k^2 h^2) a_h(u_h, \bm{p}_h; u_h, \bm{p}_h). \end{displaymath} Hence, \begin{displaymath} \begin{aligned} \enorm{(u_h, \bm{p}_h)}^2 &\leq C \left(\unorm{u_h - v_h}^2 + \pnorm{\bm{p}_h - \bm{q}_h}^2 + \unorm{v_h}^2 + \pnorm{\bm{q}_h}^2 + \sum_{e \in \mc{E}_h^R} \frac{1}{h_e} \| \bm{\mr{n}} \cdot \bm{p}_h + \bm{\mr i} u_h \|^2_{L^2(e)} \right) \\ &\leq C \left( (1+k^2h^2) a_h(u_h, \bm{p}_h; u_h, \bm{p}_h) + \unorm{v_h}^2 + \pnorm{\bm{q}_h}^2 + \sum_{e \in \mc{E}_h^R} \frac{1}{h_e} \| \bm{\mr{n}} \cdot \bm{p}_h + \bm{\mr i} u_h \|^2_{L^2(e)} \right).
\end{aligned} \end{displaymath} By Lemma \ref{le_HelmholtzInequality}, we get that \begin{displaymath} (\unorm{v_h} + \pnorm{\bm{q}_h})^2 \leq C k^2 \left(\| \nabla v_h - k\bm{q}_h \|_{L^2(\Omega)} + \| \nabla \cdot \bm{q}_h + kv_h \|_{L^2(\Omega)} + \| \bm{\mr{n}} \cdot \bm{q}_h + \bm{\mr i} v_h \|_{L^2(\Gamma_R)} \right)^2. \end{displaymath} We apply the triangle inequality to derive that \begin{displaymath} \begin{aligned} \| \nabla v_h - k\bm{q}_h \|^2_{L^2(\Omega)} &\leq C \left( \| \nabla u_h - k\bm{p}_h \|^2_{L^2(\mc{T}_h)} + \| \nabla (u_h - v_h) \|^2_{L^2(\mc{T}_h)} + k^2 \|\bm{p}_h - \bm{q}_h \|^2_{L^2(\mc{T}_h)} \right) \\ &\leq C\left(\| \nabla u_h - k\bm{p}_h \|^2_{L^2(\mc{T}_h)} + \unorm{u_h - v_h}^2 + \pnorm{\bm{p}_h - \bm{q}_h}^2 \right) \\ &\leq C (1 + k^2 h^2) a_h(u_h, \bm{p}_h; u_h, \bm{p}_h). \end{aligned} \end{displaymath} Similarly, \begin{displaymath} \| \nabla \cdot \bm{q}_h + kv_h \|^2_{L^2(\Omega)} \leq C (1 + k^2 h^2) a_h(u_h, \bm{p}_h; u_h, \bm{p}_h). \end{displaymath} From the proof of Lemma \ref{le_Hdivprojection}, $\bm{p}_h$ and $\bm{q}_h$ have the same moments on any boundary face $e$, which implies $\|\bm{\mr{n}} \cdot \bm{q}_h - \bm{\mr{n}} \cdot \bm{p}_h \|_{L^2(e)} = 0$ for any $e \in \mc{E}_h^R$. Together with the triangle inequality, we have that \begin{displaymath} \| \bm{\mr{n}} \cdot \bm{q}_h + \bm{\mr i} v_h \|_{L^2(\Gamma_R)}^2 \leq \sum_{e \in \mc{E}_h^R} \left( \| \bm{\mr{n}} \cdot \bm{p}_h + \bm{\mr i} u_h \|_{L^2(e)}^2 + \| u_h - v_h \|_{L^2(e)}^2 \right). \end{displaymath} The trace inequality gives us \begin{displaymath} h_e^{-1} \| u_h-v_h \|^2_{L^2(e)} \leq C \left( h_e^{-2} \|u_h-v_h\|^2_{L^2(K)} + \| \nabla(u_h-v_h) \|^2_{L^2(K)} \right), \qquad \forall e \in \mc{E}_h^R, \end{displaymath} where $K$ is an element such that $e \in \mc{E}(K)$. We apply Lemma \ref{le_projection1} to conclude that \begin{displaymath} \|\bm{\mr{n}} \cdot \bm{q}_h + \bm{\mr i} v_h \|_{L^2(\Gamma_R)}^2 \leq C h a_h(u_h, \bm{p}_h; u_h, \bm{p}_h). \end{displaymath} Combining all the inequalities above, we arrive at \begin{displaymath} a_h(u_h, \bm{p}_h; u_h, \bm{p}_h) \geq C k^{-2} (1 + h + k^2 h^2)^{-1} \enorm{(u_h, \bm{p}_h)}^2, \end{displaymath} which gives the estimate \eqref{eq_coercivity} and completes the proof. \end{proof} In addition, the bilinear form $a_h(\cdot; \cdot)$ satisfies the Galerkin orthogonality: \begin{lemma} Let the bilinear form $a_h(\cdot; \cdot)$ be defined as in \eqref{eq_bilinearform}. Let $(u, \bm{p}) \in H^1(\Omega) \times H(\ifmmode \mathrm{div} \else \text{div}\fi, \Omega)$ be the exact solution to \eqref{eq_firstHelmholtz}, and let $(u_h, \bm{p}_h) \in \bmr{V}_h^m \times \bmr{\Sigma}_h^m$ be the solution to \eqref{eq_bilinear}. Then the following identity holds: \begin{equation} a_h(u - u_h, \bm{p} - \bm{p}_h; v_h, \bm{q}_h) = 0, \label{eq_orthogonality} \end{equation} for any $(v_h, \bm{q}_h) \in \bmr{V}_h^m \times \bmr{\Sigma}_h^m$. \label{le_othogonality} \end{lemma} \begin{proof} The regularity of the exact solution $(u, \bm{p})$ directly gives \begin{equation*} \jump{u} = 0, \qquad \jump{\bm{\mr{n}} \cdot \bm{p}} = 0, \qquad \text{on each } e \in \mc{E}_h^i.
\end{equation*} Hence, \begin{align*} a_h(u-u_h, &\bm{p}-\bm{p}_h; v_h, \bm{q}_h) = \sum_{K \in \mc{T}_h} \int_K (\nabla \cdot (\bm{p} - \bm{p}_h) + k(u - u_h)) \ \overline{(\nabla \cdot \bm{q}_h + k v_h)} \d{x} \\ &+ \sum_{K \in \mc{T}_h} \int_K (\nabla (u - u_h) - k(\bm{p} - \bm{p}_h)) \cdot \overline{(\nabla v_h - k \bm{q}_h)} \d{x} \\ &- \sum_{e \in \mc{E}_h^i} \frac{1}{h_e} \int_e \jump{u_h}\ \overline{\jump{v_h}} \d{s} - \sum_{e \in \mc{E}_h^i} \frac{1}{h_e} \int_e \jump{\bm{\mr{n}} \cdot \bm{p}_h} \ \overline{\jump{\bm{\mr{n}} \cdot \bm{q}_h}} \d{s} \\ &+ \sum_{e \in \mc{E}_h^D} \frac{1}{h_e} \int_e (u-u_h)\ \overline{v_h} \d{s} + \sum_{e \in \mc{E}_h^R} \frac{1}{h_e} \int_e (\bm{\mr{n}} \cdot (\bm{p} - \bm{p}_h) + \bm{\mr i}(u-u_h)) \ \overline{(\bm{\mr{n}} \cdot \bm{q}_h + \bm{\mr i} v_h)} \d{s} \\ & = -\sum_{K \in \mc{T}_h} \int_K \wt{f} \ \overline{(\nabla \cdot \bm{q}_h + k v_h)} \d{x} + \sum_{e \in \mc{E}_h^D} \frac{1}{h_e} \int_e g_0 \ \overline{v_h} \d{s} \\ &\quad + \sum_{e \in \mc{E}_h^R} \frac{1}{h_e} \int_e \wt{g} \ \overline{\bm{\mr{n}} \cdot \bm{q}_h + \bm{\mr i} v_h} \d{s} - a_h(u_h, \bm{p}_h;v_h, \bm{q}_h) \\ & = l_h( v_h, \bm{q}_h) - a_h(u_h, \bm{p}_h; v_h, \bm{q}_h) \\ & = 0, \end{align*} which yields the identity \eqref{eq_orthogonality} and completes the proof. \end{proof} Finally, we arrive at the {\it a priori} error estimate (with respect to a fixed wavenumber $k$) of the method under the energy norm $\enorm{\cdot}$. \begin{theorem} Let $(u, \bm{p}) \in H^{m+1}(\Omega) \times H^{m+1}(\Omega)^d$ be the exact solution to \eqref{eq_firstHelmholtz}, and let $(u_h, \bm{p}_h) \in \bmr{V}_h^m \times \bmr{\Sigma}_h^m$ be the numerical solution to \eqref{eq_bilinear}. Then there exists a constant $C$ such that \begin{equation} \enorm{(u-u_h, \bm{p}-\bm{p}_h)} \leq C k^2 (1+h+k^2h^2) (1+ k^2 h^2)^{\frac{1}{2}} h^m (\| u \|_{H^{m+1}(\Omega)} + \| \bm{p} \| _{H^{m+1}(\Omega)}). \label{eq_estimate} \end{equation} \label{th_estimate} \end{theorem} \begin{proof} By Lemma \ref{le_othogonality}, we have that \begin{displaymath} a_h(u - u_h, \bm{p} - \bm{p}_h; v_h, \bm{q}_h) = 0, \quad \forall (v_h, \bm{q}_h) \in \bmr{V}_h^m \times \bmr{\Sigma}_h^m. \end{displaymath} Together with Lemma \ref{le_coercivity} and Lemma \ref{le_continuity}, we obtain that \begin{displaymath} \begin{aligned} \enorm{(u_h-v_h, \bm{p}_h-\bm{q}_h)}^2 &\leq C k^2 (1 + h + k^2 h^2) a_h(u_h-v_h, \bm{p}_h - \bm{q}_h; u_h-v_h, \bm{p}_h - \bm{q}_h) \\ &= C k^2 (1 + h + k^2 h^2) a_h(u-v_h, \bm{p}-\bm{q}_h; u_h-v_h, \bm{p}_h - \bm{q}_h) \\ &\leq C k^2(1 + h + k^2 h^2) \enorm{(u-v_h, \bm{p}-\bm{q}_h)} \enorm{(u_h-v_h, \bm{p}_h - \bm{q}_h)}, \end{aligned} \end{displaymath} for any $(v_h, \bm{q}_h) \in \bmr{V}_h^m \times \bmr{\Sigma}_h^m$. We cancel the common factor $\enorm{(u_h-v_h, \bm{p}_h-\bm{q}_h)}$ on both sides and apply the triangle inequality to get that \begin{equation} \enorm{(u-u_h, \bm{p}-\bm{p}_h)} \leq C k^2(1 + h + k^2 h^2) \inf_{(v_h, \bm{q}_h) \in \bmr{V}_h^m \times \bmr{\Sigma}_h^m} \enorm{(u - v_h, \bm{p} - \bm{q}_h)}. \label{eq_Cea} \end{equation} We denote by $u_I \in \bmr{V}_h^m$ the standard Lagrange interpolant of the exact solution $u$, and by $\bm{p}_I \in \bmr{\Sigma}_h^m$ the BDM interpolant of the exact solution $\bm{p}$. We refer to \cite{ciarlet2002finite} and \cite{Brezzi1985two} for details of these two interpolation operators.
By the approximation properties of these interpolation operators, we get that \begin{equation} \begin{aligned} &\| u-u_I \|_{L^2(\Omega)} \leq Ch^{m+1} \|u\|_{H^{m+1}(\Omega)}, \quad \| \nabla(u-u_I) \|_{L^2(\Omega)} \leq Ch^m \|u\|_{H^{m+1} (\Omega)}, \\ &\| \bm{p}-\bm{p}_I \|_{L^2(\Omega)} \leq C h^{m+1} \|\bm{p}\|_{H^{m+1}(\Omega)}, \quad \| \nabla \cdot (\bm{p}- \bm{p}_I) \|_{L^2(\Omega)} \leq Ch^{m} \|\nabla \cdot \bm{p}\| _{H^{m}(\Omega)}. \end{aligned} \label{eq_interpolant} \end{equation} We refer to \cite[Theorem 3.2.1]{ciarlet2002finite} and \cite[Proposition 2.5.4]{boffi2013mixed} for the proof of these inequalities. Since $u_I \in H^1(\Omega)$ and $\bm{p}_I \in H(\ifmmode \mathrm{div} \else \text{div}\fi, \Omega)$, we have \begin{equation} \jump{u-u_I} = 0, \quad \jump{\bm{\mr{n}} \cdot (\bm{p} - \bm{p}_I)} = 0, \qquad \text{ on each } e \in \mc{E}_h^i. \label{eq_jump} \end{equation} The trace inequality brings us that \begin{equation} h_e^{-1} \| u-u_I \|^2_{L^2(e)} \leq C \left( h_e^{-2} \|u-u_I\|^2_{L^2(K)} + \| \nabla(u-u_I) \|^2_{L^2(K)} \right), \qquad \forall e \in \mc{E}_h^b, \label{eq_trace} \end{equation} where $K$ is an element having $e$ as a face. Denote by $\Pi_h^0$ the $L^2$ projection onto $\bmr{\Sigma}^m_h$. Using \eqref{eq_trace} and the inverse inequality, we derive that \begin{equation} \begin{aligned} h_e^{-1} \| \bm{\mr{n}} \cdot (\bm{p} - \bm{p}_I) &+ \bm{\mr i} (u-u_I) \|^2_{L^2(e)} \leq C h_e^{-1} \left( \| \bm{p} - \bm{p}_I \|^2_{L^2(e)} + \| u-u_I \|^2_{L^2(e)} \right) \\ & \leq C h_e^{-1} \left( \| \Pi_h^0(\bm{p} - \bm{p}_I)\|^2_{L^2(e)} + \| \bm{p} - \Pi_h^0 \bm{p} \|^2_{L^2(e)} + \|u-u_I\|^2_{L^2(e)} \right) \\ &\leq C \left( h_e^{-2} \| \bm{p}-\bm{p}_I \|^2_{L^2(K)} + h_e^{-2} \| u-u_I \|^2_{L^2(K)} + \| \nabla(u-u_I) \|^2_{L^2(K)} \right. \\ &\ \ \left. + h_e^{-1} \|\bm{p} - \Pi_h^0 \bm{p} \|^2_{L^2(e)} \right). \end{aligned} \label{eq_Robin} \end{equation} Combining \eqref{eq_interpolant}, \eqref{eq_jump}, \eqref{eq_trace}, \eqref{eq_Robin} and the approximation property of the $L^2$ projection \cite[Lemma 4.3]{Houston2005interior}, we arrive at \begin{displaymath} \enorm{(u-u_I, \bm{p}-\bm{p}_I)}^2 \leq C(1+k^2h^2)h^{2m}(\| u \|_{H^{m+1}(\Omega)}^2 + \| \bm{p} \|_{H^{m+1}(\Omega)}^2). \end{displaymath} Taking $v_h = u_I$ and $\bm{q}_h = \bm{p}_I$ in \eqref{eq_Cea}, the above estimate gives the error estimate \eqref{eq_estimate}, which completes the proof. \end{proof} \begin{remark} We have proved that the numerical solution $(u_h, \bm{p}_h)$ of our method has the optimal convergence rate under the energy norm $\enorm{\cdot}$. By the definition of the energy norm, the error under the $L^2$ norm for both variables converges at least at a sub-optimal rate, i.e. \begin{displaymath} \begin{aligned} \|u-u_h\|_{L^2(\Omega)} &+ \| \bm{p}-\bm{p}_h \|_{L^2(\Omega)} \\ &\leq C k (1+h+k^2h^2) (1+ k^2h^2)^{\frac{1}{2}} h^m (\| u \|_{H^{m+1}(\Omega)} + \| \bm{p} \|_{H^{m+1}(\Omega)}). \end{aligned} \end{displaymath} It can be seen that the degree of $k$ in the $L^2$ error estimate is one less than that in the estimate under the energy norm $\enorm{\cdot}$. In the numerical experiments in the next section, we observe the optimal convergence rate for the variable $u$ and a sub-optimal rate for the variable $\bm{p}$ in the $L^2$ error.
\end{remark} Another advantage of our method is that the least squares functional \eqref{eq_functional} provides a natural mesh refinement indicator $\eta_K$ for each element $K$, defined by \begin{equation} \begin{aligned} \eta_K^2 := &\| \nabla \cdot \bm{p}_h + ku_h + \wt{f} \|_{L^2(K)}^2 + \| \nabla u_h - k \bm{p}_h \|_{L^2(K)}^2 \\ &+ \sum_{e \in \mc{E}_h^i \cap \mc{E}(K)} \frac{1}{h_e} ( \| \jump{u_h} \|_{L^2(e)}^2 + \| \jump{\bm{\mr{n}} \cdot \bm{p}_h} \|_{L^2(e)}^2) \\ &+ \sum_{e \in \mc{E}_h^D \cap \mc{E}(K)} \frac{1}{h_e} \| u_h - g_0 \|_{L^2(e)}^2 + \sum_{e \in \mc{E}_h^R \cap \mc{E}(K)} \frac{1}{h_e} \| \bm{\mr{n}} \cdot \bm{p}_h + \bm{\mr i} u_h - \wt{g} \|_{L^2(e)}^2, \end{aligned} \label{eq_etaK} \end{equation} where $\mc{E}(K)$ is the set of $(d-1)$-dimensional faces of $K$. The following lemma shows that the indicator is bounded by the true error in the energy norm $\enorm{\cdot}$. \begin{lemma} Let $(u, \bm{p})$ be the exact solution to \eqref{eq_firstHelmholtz}, and let $(u_h, \bm{p}_h) \in \bmr{V}_h^m \times \bmr{\Sigma}_h^m$ be the numerical solution to \eqref{eq_bilinear}. Then there exists a constant $C$ such that \begin{equation} \sum_{K \in \mc{T}_h} \eta_K^2 \leq C \enorm{(u - u_h, \bm{p} - \bm{p}_h)} ^2. \label{eq_estimatorup} \end{equation} \label{le_estimatorup} \end{lemma} \begin{proof} From the definition of $\eta_K$, it is easy to see that $\sum_{K \in \mc{T}_h} \eta_K^2 \leq C a_h(u - u_h, \bm{p} - \bm{p}_h; u - u_h, \bm{p} - \bm{p}_h)$. The estimate \eqref{eq_estimatorup} then follows directly from the boundedness property \eqref{eq_continuity}. \end{proof} The adaptive procedure consists of loops of the standard form: \begin{displaymath} \text{Solve} \ \rightarrow \ \text{Estimate}\ \rightarrow \ \text{Mark} \ \rightarrow \ \text{Refine}. \end{displaymath} The longest-edge bisection algorithm is used to adaptively refine the mesh, and the detailed adaptive procedure is presented as follows: \begin{enumerate}[Step 1] \item Given the initial mesh $\mc{T}_0$ and a positive parameter $\lambda$, set the iteration number $l = 0$; \item Solve the Helmholtz equations on the mesh $\mc{T}_l$; \item Compute the error indicator $\eta_K$ for all $K \in \mc{T}_l$ with respect to the numerical solution from Step 2; \item Find the minimal subset $\mc{M} \subset \mc{T}_l$ such that $\lambda \sum_{K \in \mc{T}_l} \eta_K^2 \leq \sum_{K \in \mc{M}} \eta_K^2$ and mark all elements in $\mc{M}$; \item Refine all marked elements to generate the next level mesh $\mc{T}_{l + 1}$; \item If the stopping criterion is not satisfied, set $l = l + 1$ and go to Step 2. \end{enumerate}
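Step 4 is a bulk-chasing (D\"orfler) marking step: elements with the largest indicators are collected until they account for the prescribed fraction $\lambda$ of the total estimated error. A minimal sketch of this step (in Python with numpy; the function and array names are ours):
\begin{verbatim}
import numpy as np

def dorfler_mark(eta, lam):
    """Return indices of a minimal element set M with
       lam * sum(eta^2) <= sum over M of eta_K^2."""
    eta2 = eta**2
    order = np.argsort(eta2)[::-1]        # sort indicators, largest first
    cumulative = np.cumsum(eta2[order])
    n = np.searchsorted(cumulative, lam * eta2.sum()) + 1
    return order[:n]

# e.g. with lam = 0.45, as used in Example 6 below:
marked = dorfler_mark(np.array([0.3, 0.05, 0.2, 0.01]), 0.45)
\end{verbatim}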
\section{Numerical Results} \label{sec_numericalresults} In this section, we present several numerical examples in two and three dimensions to demonstrate the performance of the proposed method. Unless otherwise indicated, we assume that $D = \emptyset$, so that the Dirichlet boundary is empty. We adopt the BiCGstab solver together with the ILU preconditioner to solve the resulting linear algebraic system. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{./figure/tri1-crop.pdf} \hspace{25pt} \includegraphics[width=0.4\textwidth]{./figure/cubic-crop.pdf} \caption{2d triangular partition with $h = 1/10$ (left) and 3d tetrahedral partition with $h = 1/4$ (right).} \label{fig_partition} \end{figure} \noindent \textbf{Example 1.} First, we consider a smooth problem defined on the unit square domain $\Omega = (0,1)^2$. The exact solution for the Helmholtz equation is given by \cite{Lee2000first}, \begin{displaymath} u(x,y) = \mr{e}^{\bm{\mr i} k (x \cos{\frac{\pi}{5}} + y \sin{\frac{\pi}{5}})}, \end{displaymath} where the source term $f$ and the Robin boundary data $g$ are chosen accordingly. To obtain the convergence order, we solve this problem on a series of shape-regular meshes with the mesh sizes $h = 1/5$, $1/10$, $1/20$, $1/40$; see Fig.~\ref{fig_partition}. The convergence histories with the wavenumbers $k = 1, 2, 8$ for the accuracy $m = 1, 2, 3, 4$ are presented in Tab.~\ref{tab_ex1k1}, Tab.~\ref{tab_ex1k2} and Tab.~\ref{tab_ex1k8}, respectively. From the numerical errors, we observe that the convergence order of the error under the energy norm $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ is $O(h^m)$, which is consistent with the theoretical result in Section \ref{sec_method}. In addition, for the $L^2$ errors, we can see that $\|u - u_h \|_{L^2(\Omega)}$ and $ \| \bm{p} - \bm{p}_h \| _{L^2(\Omega)}$ converge to zero at the rates $O(h^{m+1})$ and $O(h^m)$, respectively, as the mesh is refined. Owing to the finite machine precision, the observed order falls below the expected value for the case $m=4$ on the finest mesh. The pollution effect becomes visible as the wavenumber $k$ increases, since all the errors between the numerical solution and the exact solution become larger. \begin{table} \centering \renewcommand\arraystretch{1.3} \scalebox{1.}{ \begin{tabular}{p{0.5cm} | p{3.3cm} | p{1.6cm} | p{1.6cm} | p{1.6cm} | p{1.6cm} | p{1cm} } \hline\hline $m$ & mesh size & $1/5$ & $1/10$ & $1/20$ & $1/40$ & order \\ \hline \multirow{3}{*}{$1$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 6.468e-2 & 3.240e-2 & 1.620e-2 & 8.095e-3 & 1.00 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 2.382e-3 & 6.074e-4 & 1.532e-4 & 3.844e-5 & 2.00 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 1.980e-2 & 1.008e-2 & 5.026e-3 & 2.498e-3 & 0.99 \\ \hline \multirow{3}{*}{$2$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 1.385e-3 & 3.492e-4 & 8.758e-5 & 2.193e-5 & 2.00 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 1.905e-5 & 2.378e-6 & 2.968e-7 & 3.707e-8 & 3.00 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 4.084e-4 & 1.082e-4 & 2.765e-5 & 6.981e-6 & 1.99 \\ \hline \multirow{3}{*}{$3$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 2.085e-5 & 2.624e-6 & 3.301e-7 & 4.145e-8 & 3.00 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 2.407e-7 & 1.514e-8 & 9.533e-10 & 5.987e-11 & 4.00 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 5.464e-6 & 7.259e-7 & 9.509e-8 & 1.227e-8 & 2.99 \\ \hline \multirow{3}{*}{$4$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 2.482e-7 & 1.552e-8 & 9.704e-10 & 1.756e-10 & 3.48 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 2.250e-9 & 6.999e-11 & 2.191e-12 & 1.232e-12 & 4.72 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 6.688e-8 & 4.229e-9 & 2.659e-10 & 1.330e-10 & 2.99 \\ \hline \end{tabular}} \caption{Convergence history for Example 1 with $k=1$.} \label{tab_ex1k1} \end{table} \begin{table} \centering \renewcommand\arraystretch{1.3} \scalebox{1.}{ \begin{tabular}{p{0.5cm} | p{3.3cm} | p{1.6cm} | p{1.6cm} | p{1.6cm} | p{1.6cm} | p{1cm} } \hline\hline $m$ & mesh size & $1/5$ & $1/10$ & $1/20$ & $1/40$ & order \\ \hline \multirow{3}{*}{$1$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 2.803e-1 & 1.327e-1 & 6.520e-2 & 3.243e-2 & 1.00 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 2.112e-2 & 5.557e-3 & 1.412e-3 & 3.550e-4 & 1.99 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 4.890e-2 &
2.154e-2 & 1.024e-2 & 5.020e-3 & 1.10 \\ \hline \multirow{3}{*}{$2$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 1.109e-2 & 2.793e-3 & 7.006e-4 & 1.754e-4 & 2.00 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 1.628e-4 & 1.937e-5 & 2.386e-6 & 2.970e-7 & 3.00 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 1.629e-3 & 4.323e-4 & 1.106e-4 & 2.792e-5 & 1.99 \\ \hline \multirow{3}{*}{$3$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 3.340e-4 & 4.199e-5 & 5.281e-6 & 6.632e-7 & 3.00 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 3.858e-6 & 2.424e-7 & 1.525e-8 & 9.578e-10 & 4.00 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 4.395e-5 & 5.814e-6 & 7.609e-7 & 9.812e-8 & 2.99 \\ \hline \multirow{3}{*}{$4$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 7.939e-6 & 4.967e-7 & 3.104e-8 & 1.941e-9 & 3.99 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 7.229e-8 & 2.242e-9 & 6.978e-11 & 2.367e-12 & 4.96 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 1.067e-6 & 6.763e-8 & 4.236e-9 & 2.664e-10 & 3.99 \\ \hline \end{tabular} } \caption{Convergence history for Example 1 with $k=2$.} \label{tab_ex1k2} \end{table} \begin{table} \centering \renewcommand\arraystretch{1.3} \scalebox{1.}{ \begin{tabular}{p{0.5cm} | p{3.3cm} | p{1.6cm} | p{1.6cm} | p{1.6cm} | p{1.6cm} | p{1cm} } \hline\hline $m$ & mesh size & $1/5$ & $1/10$ & $1/20$ & $1/40$ & order \\ \hline \multirow{3}{*}{$1$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 1.220e+1 & 9.277e+0 & 5.208e+0 & 1.963e+0 & 0.87 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 7.373e-1 & 5.652e-1 & 3.170e-1 & 1.174e-1 & 0.87 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 7.511e-1 & 5.715e-1 & 3.201e-1 & 1.193e-1 & 0.87 \\ \hline \multirow{3}{*}{$2$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 2.916e+0 & 3.122e-1 & 4.786e-2 & 1.127e-2 & 2.66 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 1.753e-1 & 1.586e-2 & 1.048e-3 & 6.796e-5 & 3.76 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 1.772e-1 & 1.724e-2 & 2.041e-3 & 4.507e-4 & 2.90 \\ \hline \multirow{3}{*}{$3$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 1.066e-1 & 1.084e-2 & 1.353e-3 & 1.698e-4 & 3.10 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 3.945e-3 & 9.116e-5 & 4.071e-6 & 2.463e-7 & 4.63 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 4.905e-3 & 3.859e-4 & 4.888e-5 & 6.281e-6 & 3.23 \\ \hline \multirow{3}{*}{$4$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 8.138e-3 & 5.083e-4 & 3.180e-5 & 1.987e-6 & 4.00 \\ \cline{2-7} & $\| u - u_h \|_{L^2(\Omega)}$ & 7.508e-5 & 1.828e-6 & 5.472e-8 & 1.688e-9 & 5.14 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)}$ & 2.777e-4 & 1.753e-5 & 1.105e-6 & 6.919e-8 & 3.99 \\ \hline \end{tabular} } \caption{Convergence history for Example 1 with $k=8$.} \label{tab_ex1k8} \end{table} \noindent \textbf{Example 2.} For the second example, we consider a 2d problem defined on $\Omega = (-0.5,0.5)^2$ \cite{Feng2009discontinuous}, \begin{eqnarray*} \left\{ \begin{array}{ll} -\Delta u - k^2 u &= f := \frac{\sin(kr)}{r}, \qquad \text{in } \Omega, \\ \npar{u} + \bm{\mr i} k u &= g, \qquad \text{on } \partial \Omega. \end{array} \right. \end{eqnarray*} The analytical solution can be written as \begin{displaymath} u = \frac{\cos(kr)}{k} - \frac{\cos k + \bm{\mr i} \sin k}{k (J_0(k) + \bm{\mr i} J_1(k))} J_0(kr), \end{displaymath} in the polar coordinates $(r, \theta)$, where $J_\nu(z)$ denotes the Bessel function of the first kind of order $\nu$.
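For reference, the exact solution above is straightforward to evaluate numerically; a short sketch (in Python, assuming scipy for the Bessel functions; the function name is ours):
\begin{verbatim}
import numpy as np
from scipy.special import jv   # Bessel function of the first kind

def u_exact(r, k):
    """Exact solution of Example 2 in polar coordinates (theta-independent)."""
    c = (np.cos(k) + 1j * np.sin(k)) / (k * (jv(0, k) + 1j * jv(1, k)))
    return np.cos(k * r) / k - c * jv(0, k * r)
\end{verbatim}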
First, we test the convergence order for the case $k=1$. We set the initial mesh size to $h = 1/5$ and uniformly refine the mesh three times to solve this problem. The numerical errors are shown in Tab.~\ref{tab_ex2k1} with the degrees of the approximation spaces $m = 1,2,3$. We observe that the numerical error under the energy norm tends to zero at the rate $O(h^m)$ as the mesh size approaches zero, and the convergence orders of the $L^2$ errors are $O(h^{m+1})$ for the variable $u$ and $O(h^m)$ for the variable $\bm{p}$. We note that all these results are still consistent with the theoretical error estimates. Fig.~\ref{fig_surface} exhibits the surface plots of the exact solution and the numerical solution for $k=100$. Next, we numerically examine the behavior of the error under the energy norm when the wavenumber $k$ and the mesh size $h$ are correlated. We use piecewise linear spaces to approximate the variables $u$ and $\bm{p}$, so that the error estimate in Theorem \ref{th_estimate} suggests that \begin{displaymath} \enorm{(u-u_h, \bm{p}-\bm{p}_h)} \leq C k^2 h(1+h+k^2h^2) (1+ k^2 h^2)^{\frac{1}{2}} (\| u \|_{H^{2}(\Omega)} + \| \bm{p} \|_{H^{2}(\Omega)}). \end{displaymath} In Fig.~\ref{fig_k2h}, we plot the relative energy error of the discontinuous least squares method for $k$ and $h$ determined by $k^2h = 1$. We see that the error gradually decreases and tends to become invariant as $k$ becomes large, which verifies our $k$-explicit error estimates. \begin{table} \centering \renewcommand\arraystretch{1.3} \scalebox{1.}{ \begin{tabular}{p{0.5cm} | p{3.3cm} | p{1.6cm} | p{1.6cm} | p{1.6cm} | p{1.6cm} | p{1cm} } \hline\hline $m$ & mesh size & $1/5$ & $1/10$ & $1/20$ & $1/40$ & order \\ \hline \multirow{3}{*}{$1$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 3.386e-2 & 1.692e-2 & 8.466e-3 & 4.234e-3 & 1.00 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 1.848e-3 & 4.664e-4 & 1.170e-4 & 2.929e-5 & 1.99 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 2.430e-3 & 1.763e-3 & 9.741e-4 & 4.993e-4 & 0.76 \\ \hline \multirow{3}{*}{$2$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 1.135e-3 & 2.841e-4 & 7.106e-5 & 1.777e-5 & 2.00 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 6.182e-6 & 7.257e-7 & 8.910e-8 & 1.107e-8 & 3.03 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 9.629e-5 & 2.723e-5 & 7.185e-6 & 1.839e-6 & 1.90 \\ \hline \multirow{3}{*}{$3$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 1.543e-5 & 1.938e-6 & 2.428e-7 & 3.039e-8 & 3.00 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 2.622e-7 & 1.633e-8 & 1.022e-9 & 6.403e-11 & 4.00 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 1.778e-6 & 2.772e-7 & 3.738e-8 & 4.845e-9 & 2.83\\ \hline \end{tabular} } \caption{Convergence history for Example 2 with $k=1$.} \label{tab_ex2k1} \end{table} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{./figure/sol2realexact.pdf} \hspace{25pt} \includegraphics[width=0.4\textwidth]{./figure/sol2ureal.pdf} \caption{Surface plots for the exact solution of Example 2 (left) and the numerical solution with $k=100$ and $m=3$ (right). The number of elements is 139264.} \label{fig_surface} \end{figure} \begin{figure} \centering \includegraphics[width = 0.6\textwidth]{./figure/k2h.pdf} \caption{Relative error of Example 2 with $k^2 h = 1$.} \label{fig_k2h} \end{figure} \noindent \textbf{Example 3.} In this example, we solve a three-dimensional problem defined in the cube $\Omega = (-1,1)^3$.
The analytical solution is selected as \begin{displaymath} u(x,y,z) = \mr{e}^{\bm{\mr i} k (x \sin \theta \cos \phi + y \sin \theta \sin \phi + z \cos \theta)}, \end{displaymath} where the parameters $\theta$ and $\phi$ are set to $\frac{\pi}{4}$ and $\frac{\pi}{5}$, respectively. We solve this test problem on a series of tetrahedral meshes with the resolutions $h = 1/4$, $1/8$, $1/16$, and $1/32$, see Fig.~\ref{fig_partition}. We use the approximation spaces $\bmr{V}_h^m$ and $\bmr{\Sigma}_h^m$ to approximate $u$ and $\bm{p}$, respectively. The convergence histories for $k=1$ are displayed in Tab.~\ref{tab_ex3k1}. We observe that the convergence order under the energy norm $\enorm{\cdot}$ is still the optimal order $O(h^m)$, and the $L^2$ errors for $u$ and $\bm{p}$ are still $O(h^{m+1})$ and $O(h^{m})$, respectively. We note that all numerical convergence orders are consistent with the theoretical error estimates, as before. \begin{table} \centering \renewcommand\arraystretch{1.3} \scalebox{1.}{ \begin{tabular}{p{0.5cm} | p{3.3cm} | p{1.6cm} | p{1.6cm} | p{1.6cm} | p{1.6cm} | p{1cm} } \hline\hline $m$ & mesh size & $1/4$ & $1/8$ & $1/16$ & $1/32$ & order \\ \hline \multirow{3}{*}{$1$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 1.940e-1 & 9.754e-2 & 4.898e-2 & 2.458e-2 & 0.99 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 1.591e-2 & 4.333e-3 & 1.117e-3 & 2.829e-4 & 1.94 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 6.325e-2 & 3.538e-2 & 1.875e-2 & 9.595e-3 & 0.90 \\ \hline \multirow{3}{*}{$2$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 1.218e-2 & 3.181e-3 & 8.055e-4 & 2.030e-4 & 1.96 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 4.952e-4 & 6.136e-5 & 7.612e-6 & 9.542e-7 & 3.00 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 3.781e-3 & 1.194e-3 & 3.209e-4 & 8.287e-5 & 1.83 \\ \hline \multirow{3}{*}{$3$} & $\enorm{(u-u_h, \bm{p}-\bm{p}_h)}$ & 5.628e-4 & 7.438e-5 & 9.458e-6 & 1.198e-6 & 2.95 \\ \cline{2-7} & $\| u-u_h \|_{L^2(\Omega)}$ & 1.770e-5 & 1.153e-6 & 7.299e-8 & 4.618e-9 & 3.96 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 1.741e-4 & 2.808e-5 & 3.818e-6 & 4.991e-7 & 2.81\\ \hline \end{tabular} } \caption{Convergence history for Example 3 with $k=1$.} \label{tab_ex3k1} \end{table} \noindent \textbf{Example 4.} In this test, we apply the proposed method to a problem with low regularity near the origin. The domain is selected to be the L-shaped domain $\Omega = (-1,1)^2 \backslash \left( [0,1) \times (-1,0] \right)$. We set $f=0$ and choose the exact solution, in polar coordinates $(r, \theta)$, to be \begin{equation*} u(x,y) = J_{\alpha}(kr) \cos(\alpha \theta). \end{equation*} This exact solution belongs to the space $H^{\alpha + 1-\epsilon}(\Omega)$ for any $\epsilon > 0$. We select the parameter $\alpha = 2/3$ and set the initial mesh size to $h = 1/4$. We uniformly refine the mesh three times to solve this problem for $k=1$. Tab.~\ref{tab_ex4k1} shows the convergence rates of $\|u-u_h\|_{L^2(\Omega)}$ and $\|\bm{p} - \bm{p}_h \|_{L^2(\Omega)}$ with $m=1,2,3$. The convergence rate of $\|\bm{p} - \bm{p}_h \| _{L^2(\Omega)}$ is about $0.67$, which is in agreement with the regularity of the exact solution and the error estimates. For the error $\|u-u_h\|_{L^2(\Omega)}$, we note that the convergence rate is lower than the regularity exponent suggests, and seems to decrease as $m$ increases.
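The ``order'' columns reported in the tables are the observed rates computed from the errors on two successive meshes, $\log(e_{2h}/e_h)/\log 2$. A minimal sketch of this computation (in Python with numpy; the function name is ours):
\begin{verbatim}
import numpy as np

def observed_order(errors, h):
    """Rates log(e_i / e_{i+1}) / log(h_i / h_{i+1}) on successive meshes."""
    e, h = np.asarray(errors), np.asarray(h)
    return np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])

# e.g. the u-error column of the Example 4 table for m = 1:
rates = observed_order([1.019e-2, 3.307e-3, 1.109e-3, 3.872e-4],
                       [1/4, 1/8, 1/16, 1/32])
# gives approximately [1.62, 1.58, 1.52], consistent with the reported 1.57
\end{verbatim}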
\begin{table} \centering \renewcommand\arraystretch{1.3} \scalebox{1.}{ \begin{tabular}{p{0.5cm} | p{3.3cm} | p{1.6cm} | p{1.6cm} | p{1.6cm} | p{1.6cm} | p{1cm} } \hline\hline $m$ & mesh size & $1/4$ & $1/8$ & $1/16$ & $1/32$ & order \\ \hline \multirow{2}{*}{$1$} & $\| u-u_h \|_{L^2(\Omega)}$ & 1.019e-2 & 3.307e-3 & 1.109e-3 & 3.872e-4 & 1.57 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 8.031e-2 & 4.900e-2 & 3.061e-2 & 1.920e-2 & 0.67 \\ \hline \multirow{2}{*}{$2$} & $\| u-u_h \|_{L^2(\Omega)}$ & 1.292e-3 & 4.483e-4 & 1.641e-4 & 6.226e-5 & 1.45 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 4.023e-2 & 2.619e-2 & 1.652e-2 & 1.041e-2 & 0.66 \\ \hline \multirow{2}{*}{$3$} & $\| u-u_h \|_{L^2(\Omega)}$ & 7.098e-4 & 2.677e-4 & 1.032e-4 & 4.309e-5 & 1.37 \\ \cline{2-7} & $\| \bm{p} - \bm{p}_h \|_{L^2(\Omega)} $ & 2.073e-2 & 1.706e-2 & 1.075e-2 & 6.776e-3 & 0.66\\ \hline \end{tabular} } \caption{Convergence history for Example 4 with $k=1$.} \label{tab_ex4k1} \end{table} \noindent \textbf{Example 5.} In this example, we consider circumferentially harmonic radiation from a rigid infinite circular cylinder of radius $a$ \cite{Harari1992galerkin}. The exact solution is given by \begin{equation} u(x,y) = \frac{H^{(1)}_n(kr) \cos n\theta}{H^{(1)}_n (ka)}, \end{equation} where $H^{(1)}_n$ is the Hankel function of the first kind of order $n$. The domain is set to be the circular ring $\Omega = B(0,2a) \backslash B(0,a)$. We apply the Dirichlet boundary condition on $\partial B(0,a)$ and the Robin boundary condition on $\partial B(0,2a)$. In our numerical simulation, we compute the fifth circumferential mode ($n = 4$) and choose $k=\pi$, $a=1$. We use the discontinuous piecewise linear approximation spaces $\bmr{V}_h^1 \times \bmr{\Sigma}_h^1$ for this example. We use a polygonal approximation to the domain $\Omega$ and triangulate it into a shape-regular mesh, see Fig.~\ref{fig_ex5mesh}. In Fig.~\ref{fig_exhan}, we show the contours of the real part of the numerical solution and of the exact solution, respectively. We observe that the least squares discontinuous finite element solution recovers the essential features of the exact solution. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{./figure/circlemesh.pdf} \caption{The mesh used in Example 5 with 8908 elements.} \label{fig_ex5mesh} \end{figure} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{./figure/8908case2.pdf} \hspace{15pt} \includegraphics[width=0.4\textwidth]{./figure/8908case2exact.pdf} \caption{The real part of the numerical solution (left) and the exact solution (right) with $a=1$, $k=\pi$ and $m=1$.} \label{fig_exhan} \end{figure} \noindent \textbf{Example 6.} In this example, we test the performance of the adaptive algorithm proposed in Section \ref{sec_method}. We solve the low-regularity problem defined in Example 4 with $\alpha = 2/3$. For the adaptive algorithm, we choose the parameter $\lambda = 0.45$, and we use the longest-edge bisection algorithm to refine the mesh. We use approximation spaces with $m=1$ to solve the problem. In Fig.~\ref{fig_ex6}, we compare the initial mesh (left) with the mesh after 5 adaptive refinement steps (right). The mesh is markedly refined around the corner $(0, 0)$, where the exact solution has a singularity. The convergence history under the $L^2$ norms is displayed in Fig.~\ref{fig_adaptive}.
From Fig.~\ref{fig_adaptive}, we see that the convergence orders of $\| u- u_h \|_{L^2(\Omega)}$ and $\| \bm{p}-\bm{p}_h \|_{L^2(\Omega)}$ are $O(N^{-1})$ and $O(N^{-1/2})$, respectively, where $N$ is the number of degrees of freedom. These results match the convergence rates for the smooth cases in Example 1 and Example 2. The convergence rates are better than those in Tab.~\ref{tab_ex4k1}, where the $L^2$ errors tend to zero at the rates $O(N^{-1.57/2})$ and $O(N^{-0.67/2})$ for the variables $u$ and $\bm{p}$, respectively. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{./figure/Lshapemesh.pdf} \hspace{15pt} \includegraphics[width=0.4\textwidth]{./figure/admesh.pdf} \caption{The initial mesh (left) and the mesh after 5 adaptive refinement steps (right).} \label{fig_ex6} \end{figure} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{./figure/adaptiveuL22.pdf} \hspace{15pt} \includegraphics[width=0.4\textwidth]{./figure/adaptivepL22.pdf} \caption{Convergence history for Example 6.} \label{fig_adaptive} \end{figure} \section{Preliminaries} \label{sec_preliminaries} Let $\Omega_1 \subset \mb{R}^d$ be an open, bounded, strictly star-shaped polygonal (polyhedral) domain, where $d = 2$ or $3$, and let $D \subset \Omega_1$ be a star-shaped domain, which represents a scatterer. We define $\Omega = \Omega_1 \backslash D$ and $\Gamma_R = \partial \Omega_1$, $\Gamma_D = \partial D$. In this paper, we are concerned with the following Helmholtz problem: seek $u$ such that \begin{equation} \begin{aligned} -\Delta u - k^2 u &= f &&\text{in } \Omega, \\ u &= g_0 &&\text{on } \Gamma_D, \\ \npar{u} + \bm{\mr i} k u &= g && \text{on } \Gamma_R, \\ \end{aligned} \label{eq_H3} \end{equation} where $k>0$ is the wavenumber, $\bm{\mr i} = \sqrt{-1}$ is the imaginary unit, and $\bm{\mr{n}}$ denotes the unit outward normal to $\partial \Omega$. The Robin boundary condition of \eqref{eq_H3} is known as the first-order absorbing boundary condition \cite{Engquist1979radiation}. We allow the case $D = \varnothing$. We denote by $\mc{T}_h$ a shape-regular triangulation of the domain $\Omega$. Let $\mc{E}_h^i$ be the collection of all $(d-1)$-dimensional interior faces with respect to the partition $\mc{T}_h$, let $\mc{E}_h^D$ be the collection of all $(d-1)$-dimensional faces that lie on the boundary $\Gamma_D$, and let $\mc{E}_h^R$ be the collection of all $(d-1)$-dimensional faces that lie on the boundary $\Gamma_R$. We then set $\mc{E}_h:= \mc{E}_h^i \cup \mc{E}_h^D \cup \mc{E}_h^R$. For any element $K \in \mc{T}_h$ and any face $e \in \mc{E}_h$, we let $h_K$ and $h_e$ be their respective diameters, and we denote by $h := \max_{K \in \mc{T}_h} h_K$ the mesh size of $\mc{T}_h$. The shape regularity of $\mc{T}_h$ is understood in the following sense: there exists a constant $C>0$ such that \begin{displaymath} \frac{h_K}{\rho_K} \leq C, \end{displaymath} for any element $K \in \mc{T}_h$, where $\rho_K$ denotes the diameter of the largest disk (ball) inscribed in $K$. Next, we introduce the following trace operators, which are commonly used in the DG framework.
For a scalar-valued piecewise smooth function $v$ and a vector-valued piecewise smooth function $\bm{v}$, we define the jumps of $v$ and $\bm{v}$ on the interior face $e = \partial K^+ \cap \partial K^-$ as \begin{displaymath} \begin{aligned} \jump{v} &:= v|_{K^+} \bm{\mr{n}}^+ + v|_{K^-} \bm{\mr{n}}^-, \text{ for scalar-valued } v, \\ \jump{\bm{\mr{n}} \cdot \bm{v}} &:= \bm{\mr{n}}^+ \cdot \bm{v}|_{K^+} + \bm{\mr{n}}^- \cdot \bm{v}|_{K^-}, \text{ for vector-valued } \bm{v}, \\ \end{aligned} \end{displaymath} where $\bm{\mr{n}}^+$ and $\bm{\mr{n}}^-$ are the unit outward normals to $e$ of $K^+$ and $K^-$, respectively. For a boundary face $e \in \mc{E}_h^D \cup \mc{E}_h^R$, we set \begin{displaymath} \begin{aligned} \jump{v} &:= v \bm{\mr{n}}, \text{ for scalar-valued } v, \\ \jump{\bm{\mr{n}} \cdot \bm{v}} &:= \bm{\mr{n}} \cdot \bm{v}, \text{ for vector-valued } \bm{v}, \\ \end{aligned} \end{displaymath} where $\bm{\mr{n}}$ is the unit outward normal to $e$. Given a bounded domain $Q$, we follow the standard notations $L^2(Q)$, $L^2(Q)^d$, $H^r(Q)$ and $H^r(Q)^d$ for the {\it complex-valued} Sobolev spaces with the regularity exponent $r \geq 0$. The $L^2$ inner products on these spaces are defined as \begin{displaymath} \begin{aligned} (u,v)_{L^2(Q)} &:= \int_Q u \ \overline{v} \ \d{x}, \text{ for scalar-valued Sobolev spaces}, \\ (\bm{u},\bm{v})_{L^2(Q)} &:= \int_Q \bm{u} \cdot \overline{\bm{v}} \ \d{x},\text{ for vector-valued Sobolev spaces}, \end{aligned} \end{displaymath} and the corresponding semi-norms and norms are induced from the $L^2$ inner products. Further, we denote by $H^r_D(\Omega)$ the space of functions in $H^r(\Omega)$ with vanishing trace on $\Gamma_D$, \begin{displaymath} H^r_D(\Omega) := \left\{ v \in H^r(\Omega) \ | \ v = 0, \text{ on } \Gamma_D \right\}. \end{displaymath} In addition, the following space will be used in our analysis, \begin{displaymath} H(\ifmmode \mathrm{div} \else \text{div}\fi, \Omega) := \left\{ \bm{v} \in L^2(\Omega)^d \ | \ \nabla \cdot \bm{v} \in L^2(\Omega) \right\}, \end{displaymath} equipped with the norm \begin{displaymath} \| \bm{v} \|_{H(\ifmmode \mathrm{div} \else \text{div}\fi, \Omega)}^2 := \| \bm{v} \|_{L^2(\Omega)}^2 + \| \nabla \cdot \bm{v} \|_{L^2(\Omega)}^2. \end{displaymath} For the partition $\mc{T}_h$, we will use the standard notations and definitions of the broken Sobolev spaces $L^2(\mc{T}_h)$, $L^2(\mc{T}_h)^d$, $H^r(\mc{T}_h)$ and $H^r(\mc{T}_h)^d$ with the exponent $r \geq 0$, together with their associated inner products and norms \cite{arnold2002unified}. We note that the capital letter $C$, with or without subscripts, denotes a generic positive constant, possibly different from line to line, but independent of the mesh size $h$ and the wavenumber $k$. Under the above assumptions on the domain $\Omega$, the following $k$-explicit stability result for the Helmholtz equation holds, which is critical in our error estimates: \begin{theorem} Suppose $g_0 = 0$, $\Omega_1$ is a strictly star-shaped domain and $D \subset \Omega_1$ is a star-shaped domain. Let $k_0$ be an arbitrary strictly positive number. Then there is a constant $C > 0$ such that for any $f \in L^2(\Omega)$, $g \in L^2(\Gamma_R)$, and $k \geq k_0$, the Helmholtz equation \eqref{eq_H3} has a unique solution $u \in H_D^1(\Omega)$ satisfying \begin{equation} k\| u \|_{L^2(\Omega)} + \| \nabla u \|_{L^2(\Omega)} \leq C \left( \| f \|_{L^2(\Omega)} + \| g \|_{L^2(\Gamma_R)} \right).
\label{eq_stability} \end{equation} \label{th_stability} \end{theorem} We refer to \cite[Section 3.4]{Hetmaniuk2007stability} for details of this result. In this paper, we propose a least squares finite element method for the Helmholtz equation \eqref{eq_H3} based on discontinuous approximations. We begin by introducing an auxiliary variable $\bm{p} = \frac{1}{k} \nabla u$ to recast the Helmholtz equation \eqref{eq_H3} into a first-order system, \begin{equation} \begin{aligned} -\nabla \cdot \bm{p} - k u = \wt{f}, &\quad \text{in } \Omega, \\ \nabla u - k \bm{p} = \bm{0}, &\quad \text{in } \Omega, \\ u = g_0, &\quad \text{on } \Gamma_D, \\ \bm{\mr{n}} \cdot \bm{p} + \bm{\mr i} u = \wt{g}, &\quad \text{on } \Gamma_R, \end{aligned} \label{eq_firstHelmholtz} \end{equation} where $\wt{f} = \frac{1}{k} f$ and $\wt{g} = \frac{1}{k} g$. The variables $u$ and $\bm{p}$ give the electric field and the magnetic field, respectively. Rewriting the problem as a first-order system is a fundamental idea in modern least squares finite element methods \cite{Bochev1998review, Lee2000first, Chen2017first}, and our discontinuous least squares method is based on the system \eqref{eq_firstHelmholtz}.
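Eliminating $\bm{p}$ from \eqref{eq_firstHelmholtz} recovers the original equation \eqref{eq_H3} with $f = k\wt{f}$; this equivalence is easy to verify symbolically, e.g. for the plane wave of Example 1 (a sketch in Python, assuming sympy):
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y', real=True)
k = sp.symbols('k', positive=True)
u = sp.exp(sp.I * k * (x * sp.cos(sp.pi / 5) + y * sp.sin(sp.pi / 5)))

p = [sp.diff(u, x) / k, sp.diff(u, y) / k]        # p = (1/k) grad u
helmholtz = -sp.diff(u, x, 2) - sp.diff(u, y, 2) - k**2 * u
first_eq = -(sp.diff(p[0], x) + sp.diff(p[1], y)) - k * u   # = f / k
assert sp.simplify(first_eq - helmholtz / k) == 0
\end{verbatim}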
\section{Introduction} Domain walls are sheet-like topological defects, which may be created in the early Universe when a discrete symmetry is spontaneously broken. A domain wall separates the space into two distinguishable vacuum states. The fundamental constants (such as particle masses, the fine structure constant, etc.) can change after crossing a domain wall \cite{Kibble:1976sj,Vilenkin:1982ks,Vilenkin:1984ib}. The energy density of the domain wall network might dominate the total energy density of the Universe if the domain walls are generated at early times \cite{1975ZhETF..67....3Z}. This problem could be avoided if the walls are unstable, or by invoking some other elaborate mechanism \cite{Vilenkin:1981zs,Gelmini:1988sf,Larsson:1996sp,Dvali:1995cc, Stojkovic:2005zh,Stojkovic:2004hz}. On the other hand, if the walls are formed in late-time epochs of the Universe, they will be light and thus not easily detectable. Such domain walls can even play the role of dark matter candidates or generate other interesting structures \cite{Dai:2020rnc}. Since domain walls can change the fundamental constants, many experiments have been devised to test their existence. These methods include acceleration caused by mass differences \cite{McNally:2019lcg}, variations of the fine structure constant \cite{Roberts:2019sfo}, electric dipole moment measurements \cite{Stadnik:2014cea}, magnetometer measurements \cite{Pospelov:2012mt,2013arXiv1303.5524P,JacksonKimball:2017qgk,Afach:2021pfd}, satellite synchronization \cite{Derevianko:2013oaa,Roberts:2017hla,Kalaydzhyan:2017jtv} and gravitational wave detectors \cite{Hall:2016usm,Jaeckel:2020mqa,Grote:2019uvn}. The interaction between a domain wall and regular matter can be classified using a collection of gauge-invariant operators parameterizing the Standard Model interactions with the fields that make up the composition of the wall (see e.g. \cite{Essig:2013lka}). However, in these studies matter was considered to be a fundamental point-like particle which does not distort the domain wall during the crossing. In contrast, this paper focuses on classical objects of finite size. Interaction of black holes with domain walls was studied in \cite{Stojkovic:2004wy,Frolov:2003mc,Frolov:2004wy,Frolov:2004bq,Frolov:1998td,Christensen:1998hg}; however, no study was performed for finite but regular classical objects. Earth, or a satellite in Earth's orbit, is such a classical object which, in principle, can distort a domain wall. We find that when a macroscopic object encounters a domain wall, it may be reflected or it may pass through, depending on the strength of the interaction, the relative velocity and the size. During the encounter, the object loses energy to deformations of the wall. In addition, the object changes its mass by passing from one side of the wall to the other. We track the motions of satellites in Earth's orbit with great precision, so minor changes in a satellite's parameters are observable in principle. For example, a typical velocity precision for a satellite is about $0.5$ mm/s, which directly puts an upper limit on its mass change of $\Delta M/M \lessapprox 5\times 10^{-17}$. Therefore, this effect can offer a new method for the detection of domain walls, by observing whether a satellite's velocity changes suddenly. The famous flyby anomaly -- an anomalous increase in speed observed during a planetary flyby by a satellite -- is very similar to this effect. In the second part of the paper we show how matter can trigger the decay of the false vacuum.
The presence of matter modifies the scalar field potential and can locally create a bubble of the true vacuum. For the bubble to become critical and able to expand, the interaction with the wall must be strong enough. \section{Interaction of a scalar field with classical objects} Consider a solid, massive, spherically symmetric object with a uniform mass distribution and radius $R$ in its own rest frame. The object is moving with velocity $v_O$ in the $z$-direction. The location of the center of the object along the $z$-axis is labeled by $z_O$. The normalized mass distribution function of the object in axially symmetric (around the $z$-axis) coordinates is \begin{equation} \label{dist} f(z,r,z_O,v_O)=\Theta(R-\sqrt{r^2+\frac{(z-z_O)^2}{1-v_O^2}}) , \end{equation} where $\Theta(x)$ is the Heaviside function. The radial coordinate is $r^2=x^2+y^2$, while $1/\sqrt{1-v_O^2}$ is the standard Lorentz contraction factor. The relativistic action for this object is \begin{equation} S_m= -m\int \sqrt{1-v_O^2} dt , \end{equation} where $m$ is the object's mass in the absence of any couplings. Consider now a domain wall made of a scalar field $\phi$. The scalar field's action is \begin{equation} S_\phi =\int \frac{(\partial_t \phi)^2}{2} - \frac{( \nabla \phi )^2}{2} - V(\phi ) d^4x . \end{equation} The coupling between the massive object and the scalar field can be described by \begin{equation} S_{\rm int}= -A \int f(\vec{r}) I(\phi) d^4x , \end{equation} where $A$ is the coupling constant, while $f(\vec{r})$ is given in Eq.~(\ref{dist}). $I(\phi)$ is a coupling function, which is model dependent. The equations of motion that follow from the total action $S_{\rm tot}= S_m+S_\phi+S_{\rm int}$ are \begin{eqnarray} \label{eom} &&\partial_t^2 \phi-\partial_r^2 \phi -\frac{1}{r}\partial_r \phi-\partial_z^2 \phi+V'+A f(\vec{r})I'=0\\ &&\frac{d}{dt} \Big(\frac{v_O}{\sqrt{1-v_O^2}} M\Big)=A\int_0^R (I(\phi_{d} )-I(\phi_{u}))2 \pi rdr\\ &&M=m+A \int_0^R (I(\phi_{u})+I(\phi_{d}))\sqrt{R^2-r^2}2\pi rdr , \end{eqnarray} where $\phi_{u}=\phi(r,z_{u})$, $\phi_{d}=\phi(r,z_{d})$, with $z_{d}=z_O-\sqrt{R^2-r^2}\sqrt{1-v_O^2}$ and $z_{u}=z_O+\sqrt{R^2-r^2}\sqrt{1-v_O^2}$. The quantities $z_d$ and $z_u$ are the $z$-coordinates of the lower and upper surfaces of the (Lorentz-contracted) object at radial distance $r$. $M$ is the object's effective mass, which is modified by the presence of $\phi$. The effective potential of the scalar field can now be written as \begin{equation} V_{eff}=V+AfI . \end{equation} \subsection{Concrete model} We choose to work with a well-known scalar field potential that admits domain wall solutions, \begin{eqnarray} \label{scalar-potential} V= B (\phi^2-\Lambda^2)^2, \end{eqnarray} where $B$ and $\Lambda$ are constants. In particular, $\Lambda$ is the vacuum expectation value of the scalar field. One-dimensional kink and anti-kink solutions are described by the following profile \begin{equation} \label{wall-solution} \phi=\Lambda \tanh (\pm \Lambda \sqrt{2B} \frac{z-vt-z_i}{\sqrt{1-v^2}}) , \end{equation} where $z_i$ is the initial position of the kink, while $v$ is its velocity. Without any loss of generality, we work in a frame where the domain wall is initially at rest, i.e. $v=0$, with its center located at $z_i=0$. These are the initial conditions for the domain wall before it interacts with the massive object. The kink solution is shown in the $t=0$ panel of Fig.~\ref{collide}. The energy per unit area of the domain wall is \begin{equation} \rho=\frac{4}{3}\sqrt{2B}\Lambda^3.
\end{equation} We take the coupling function to be \begin{equation} \label{I-term} I=(\phi+\Lambda)^2 . \end{equation} With this choice, the object does not acquire any extra mass in the $\phi=-\Lambda$ vacuum, so $M= m$ before the object encounters the wall. Fig.~\ref{potential-mass} shows that the presence of the object inevitably modifies the potential. The true and false vacuum states in empty space are different from the same states inside the matter distribution. The presence of matter can therefore change the scalar field's vacuum state. \begin{figure} \includegraphics[width=8cm]{potential-mass.eps} \caption{ The effective potential of a scalar field which interacts with a massive object. The solid line (no interaction) shows a potential with two equal minima at $\phi =\pm 1$. The potential and couplings are chosen as in Eqs.~\eqref{scalar-potential} and \eqref{I-term} with $B=\Lambda =1$. The dashed line, $A=0.1$, shows a potential with the true vacuum at $\phi =-1$ and the false vacuum at $\phi=1$. For larger values of $A$ there is no false vacuum. } \label{potential-mass} \end{figure} \section{Method} The equations of motion in Eq.~(\ref{eom}) must be solved numerically. We replace the derivatives with \begin{eqnarray} \partial_t^2\phi&=&\frac{\phi (t+\Delta t)-2\phi(t)+\phi(t-\Delta t)}{\Delta t^2}\\ \partial_z^2\phi&=&\frac{\phi (z+\Delta z)-2\phi(z)+\phi(z-\Delta z)}{\Delta z^2}\\ \partial_r^2\phi&=&\frac{\phi (r+\Delta r)-2\phi(r)+\phi(r-\Delta r)}{\Delta r^2}\\ \partial_r\phi&=&\frac{\phi (r+\Delta r)-\phi(r-\Delta r)}{2\Delta r} , \end{eqnarray} where $\phi(t=0)$, $\phi (t=\Delta t)$, $v_O(t=0)$ and $z_O(t=0)$ are given as initial conditions. The evolution of the field and of the object is obtained by iteration. We choose $\Delta r=\Delta z= 3\Delta t=10^{-2}$ to preserve the stability of the numerical method. The numerical range is $0\leq r\leq 30$ and $-30\leq z\leq 30$. The singularity at $r=0$ is avoided by replacing \begin{equation} \lim_{r\rightarrow 0}\frac{1}{r}\partial_r \phi =\lim_{r\rightarrow 0}\partial_r^2 \phi . \end{equation} The integral terms in Eq.~(\ref{eom}) are calculated using Simpson's method.
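A condensed sketch of the resulting leapfrog update for the field (in Python with numpy; only the interior update is shown, the coupling term $A f(\vec{r}) I'(\phi)$ is passed in precomputed, and the boundary treatment is omitted):
\begin{verbatim}
import numpy as np

# grid as in the text: Delta r = Delta z = 3 * Delta t = 1e-2
dr = dz = 1e-2
dt = dr / 3.0
r = np.arange(0.0, 30.0 + dr, dr)        # radial coordinate, array axis 0
z = np.arange(-30.0, 30.0 + dz, dz)      # axial coordinate,  array axis 1
B = Lam = 1.0

def Vprime(phi):                         # V'(phi) for V = B (phi^2 - Lam^2)^2
    return 4.0 * B * phi * (phi**2 - Lam**2)

def leapfrog_step(phi_old, phi, src):
    """One step of phi_tt = phi_rr + phi_r / r + phi_zz - V'(phi) - src."""
    acc = np.zeros_like(phi)
    acc[1:-1, 1:-1] = (
        (phi[2:, 1:-1] - 2.0 * phi[1:-1, 1:-1] + phi[:-2, 1:-1]) / dr**2
        + (phi[2:, 1:-1] - phi[:-2, 1:-1]) / (2.0 * dr * r[1:-1, None])
        + (phi[1:-1, 2:] - 2.0 * phi[1:-1, 1:-1] + phi[1:-1, :-2]) / dz**2
        - Vprime(phi[1:-1, 1:-1]) - src[1:-1, 1:-1]
    )
    # on the axis r = 0, the term (1/r) d_r phi is replaced by its limit d_rr phi
    return 2.0 * phi - phi_old + dt**2 * acc
\end{verbatim}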
\begin{figure*} \centering \subfigure{ \includegraphics[width=0.4\textwidth]{collide-0.eps} \includegraphics[width=0.4\textwidth]{collide-20.eps} } \subfigure{ \includegraphics[width=0.4\textwidth]{collide-40.eps} \includegraphics[width=0.4\textwidth]{collide-60.eps} } \subfigure{ \includegraphics[width=0.4\textwidth]{collide-80.eps} \includegraphics[width=0.4\textwidth]{collide-90.eps} } \subfigure{ \includegraphics[width=0.4\textwidth]{collide-94.eps} \includegraphics[width=0.4\textwidth]{collide-100.eps} } \caption{Distortion of the wall in cylindrical coordinates $(r,z)$ during the collision with a massive object moving in the $z$-direction. The color code represents the values of the scalar field. The initial conditions are chosen to be $m=10^4$, $R=1.5$, $v_O=0.1$, $A= 0.5$ and $z_O =-3.5$. At $t=0$, the domain wall is located at $z=0$ without any distortion. At $t=20$, the wall is pushed by the object to the right around $r=0$ and $z=0$. From $t =40$ to $t= 80$, the distortion becomes clearer. At $t=90$, the object starts to separate from the wall, where a significant bump can be observed. From $t=94$ to $t =100$, the object is completely separated from the wall, and a separate closed region with $\phi=\Lambda$ (new vacuum) is created. This is a bubble of the scalar field formed around the massive object. The wall bounces back to its equilibrium configuration. Waves are also generated on the domain wall in this process, but they cannot be clearly seen in these plots because of their small amplitudes. } \label{collide} \end{figure*} \section{Results} We discuss two distinct cases. The first case is an object crossing the domain wall from one vacuum to another. The second case is an object triggering the vacuum decay by generating a bubble of the local vacuum state. \subsection{Collision between a domain wall and a classical object} We consider the scalar potential in Eq.~\eqref{scalar-potential} with $B=1$ and $\Lambda=1$. The scalar field generates a domain wall structure as in Eq.~\eqref{wall-solution}. In the absence of the interaction, i.e. $A=0$, Fig.~\ref{potential-mass} shows that the potential has two equal minima, at $\phi =\pm \Lambda$. Initially, the domain wall is at rest, located at $z_i=0$. The $t=0$ panel in Fig.~\ref{collide} shows the scalar field with the domain wall structure. There is an obvious transition at $z=0$. The vacuum state at $z<0$ ($z>0$) is $\phi=-\Lambda$ ($\phi=\Lambda$). Initially the object is at $z_O=-3.5$ and moves to the right with velocity $v_O$. The crossing starts when the object reaches the wall. The wall is pushed to the right and distorted (panels from $t=0$ to $t=90$ in Fig.~\ref{collide}). The wall is then pushed farther, and if the object has enough energy it crosses the wall, while the wall bounces back to its original place (panels from $t=90$ to $t=100$ in Fig.~\ref{collide}). If the object does not have enough energy, it is reflected back (as illustrated by the $m<5000$ cases in Fig.~\ref{path}). In this process, the object changes its mass and velocity, and its energy goes into distorting the wall and generating waves, which cannot be seen directly in the plots in Fig.~\ref{collide} because of their small amplitude. These waves can be seen in Figs.~\ref{true-vacuum} and \ref{false-vacuum}. We note that we can choose the values of $R$ and $m$ independently, since gravity is not included in the action. In other words, an object with $R=1$ and $m=10^5$ is not necessarily within its own Schwarzschild radius. Fig.~\ref{velocity} shows how the velocity of an object changes in the process. Very light objects are reflected completely, which resembles the classical collision between a very light and a very heavy object. As the object becomes more massive, it distorts the wall more and more. The object's energy goes into the wall's distortion and wave dispersion. The reflected object's speed is lower than the initial speed. An object with a mass in the medium range may gain some distortion energy back when the wall bounces back to its equilibrium position (e.g. the $m=10^3$ case). In this case the object loses less energy than in some of the lower-mass cases, since the wall acts like a spring. In contrast, very massive objects just pass through the wall and do not recover any energy from the distorted wall, so they lose more energy than less massive objects. Fig.~\ref{energy} shows how the energy of an object changes during the collision. The energy is calculated according to \begin{equation} E_O=M/\sqrt{1-v_O^2}, \end{equation} where $v_O$ is the velocity of the object. At the beginning, the object gains some energy, because the value of $\phi$ is increasing and the effective mass, $M$, increases along with it. Later this gain is not sufficient to compensate for the loss, and more and more energy is released into the wall distortion and waves.
At the end, the energy lost is higher than the energy gained from the wall, and this lost energy does not come back to the object. The velocity of the object plays a twofold role during the crossing. First, the velocity increases the available kinetic energy, and second, it causes the Lorentz contraction, which also changes the interaction between the object and the wall. Fig.~\ref{vary-velocity} shows that only objects with high enough velocity can cross the wall. Fig.~\ref{radius} shows how the motion of the object varies as a function of its radius. If the radius is small, the wall resistance is smaller, and the object penetrates the wall more easily. Objects of larger radius induce greater distortion during the crossing, and in turn more energy is lost. \begin{figure} \includegraphics[width=8cm]{path.eps} \caption{ The path of the object during its interaction with the wall. On the y-axis we have $z_O$, which represents the location of the center of the object at a given moment in time. The parameters are chosen to be $A=0.5$, initial $v_O=0.1$, $R=1.5$, while the values of the object's mass $m$ are marked in the plot. If the object is light, its momentum and energy are low, so it bounces back from the wall. As it becomes more massive, the wall bends more and more, up until the point where energy and momentum are high enough so that the wall cannot stop it any more, and the object passes through the wall. } \label{path} \end{figure} \begin{figure} \includegraphics[width=8cm]{velocity.eps} \caption{ Evolution of the velocity of the object, $v_O$, during the collision with the wall as a function of time. The parameters are chosen to be $A=0.5$, initial $v_O=0.1$, $R=1.5$, while the values of the object's mass $m$ are marked in the plot. A very light object is bounced back (its velocity is reversed) without any energy loss or gain. Medium-mass objects are reflected by the wall, but they also gain some wall distortion energy back during the collision; in these cases the wall acts like a spring. Therefore their velocity can be larger than in some of the lighter cases. Very massive objects pass through the wall and cannot gain the distortion energy back, so they lose the highest amount of energy. } \label{velocity} \end{figure} \begin{figure} \includegraphics[width=8cm]{energy.eps} \caption{ The relative change in energy of the object, $\Delta E/E_i$, in the collision process with the wall as a function of time. The parameters are chosen to be $A=0.5$, initial $v_O=0.1$, $R=1.5$, while the values of the object's mass $m$ are marked in the plot. As the object approaches the wall, the value of the scalar field at its location increases, so the effective mass $M$, and hence the energy of the object, increases. During and after the interaction, the energy decreases since the velocity decreases, and some of the energy is also used to deform the wall. For medium-mass cases, the object regains some energy when the wall bounces back (e.g. for $m=10^3$), so the energy increases again at late times. } \label{energy} \end{figure} \begin{figure} \includegraphics[width=8cm]{vary-velocity.eps} \caption{The path of the object in the collision process with the wall as a function of its velocity. On the y-axis we have $z_O$, which represents the location of the center of the object at a given moment in time.
The parameters are chosen to be $A=5\times 10^{-4}$, $m=1$, $R=1.5$, while the initial values of $v_O$ are marked in the plot. For low velocities the object cannot cross the wall. For high enough velocities the object has enough energy to pass through the wall. } \label{vary-velocity} \end{figure} \begin{figure} \includegraphics[width=8cm]{radius.eps} \caption{The path of the object in the collision process with the wall as a function of its radius. On the y-axis we have $z_O$, which represents the location of the center of the object at a given moment in time. The parameters are chosen to be $m=5000$, $A=0.5$, initial $v_O =0.1$, while the values of $R$ are marked in the plot. If the radius of the object is small, the resistance of the wall is small, and it is easier for the object to pass through the wall. Objects of larger radii create greater distortion of the wall and lose more energy. } \label{radius} \end{figure} \subsection{Vacuum decay in the presence of a massive object} The presence of matter can change the scalar field effective potential, as we have seen in Fig.~\ref{potential-mass}. The true vacuum can be modified by the matter distribution and become a false vacuum locally. The opposite is also true: a region of false vacuum can be converted into a true vacuum state. In other words, the presence of matter can trigger a vacuum decay. To study this effect, we add an extra term to the scalar field potential which breaks the $\phi \rightarrow -\phi$ symmetry and introduces a difference between the vacua, \begin{equation} \label{VTF} V=B(\phi^2-\Lambda^2)^2+ a (\phi^3-3\Lambda^2 \phi -2\Lambda^3) . \end{equation} This potential is plotted in Fig.~\ref{potential-decay}. We see that for $a=-0.2$ ($a=0.2$), $\phi=\Lambda$ ($\phi=-\Lambda$) is the false vacuum expectation value, while $\phi=-\Lambda$ ($\phi=\Lambda$) is the true vacuum expectation value. Consider first a region which is in the true vacuum ($a=0.2$ and $\phi=\Lambda$). The presence of the massive object perturbs the space and modifies the scalar field potential. Fig.~\ref{true-vacuum} shows that the scalar field is in the $\phi=\Lambda$ state at the beginning. The scalar field inside the object evolves toward lower values, because in this region the true vacuum is no longer at $\phi=\Lambda$. Outside of the object the field stays in the true vacuum at $\phi=\Lambda$. At the same time, waves are generated which propagate outward and carry energy away. The object changes the vacuum state mostly in its own neighborhood (within the bubble which is formed around it), while the energy from the vacuum decay is released. This is a clear demonstration that a massive object can affect the vacuum state around it. Consider now the opposite situation, i.e. a region of space which is in the false vacuum ($a=-0.2$ and $\phi=\Lambda$) at the beginning. The presence of the massive object again perturbs the space and modifies the scalar field potential. Fig.~\ref{false-vacuum} shows that the scalar field is in the $\phi=\Lambda$ state at the beginning, but it starts to evolve toward lower values inside the object. The crucial difference from the previous case is that after the transition starts, the region converted into the true vacuum grows. In other words, the true vacuum bubble expands, which means that a massive object can trigger (or catalyze) the vacuum decay. At the end of the process the whole space could be converted into the true vacuum.
Whether this will happen or not depends on whether there is enough energy to support the domain wall bubble expansion. Fig.~\ref{decay-not} shows that if the strength of the interaction (determined by the parameter $A$) is low, then the true vacuum will be created only locally around the object. This happens because outside of the matter the scalar field still sits in a region where the potential is higher than in the false vacuum, even though inside the matter the scalar field may reach its true minimum. In that case, the bubble needs some extra energy source to keep expanding. In other words, if expanding the domain wall bubble costs more energy than is released by the difference in vacua, the scalar field cannot freely settle down into the true vacuum in the region outside of the matter. The condition for the false vacuum decay to release enough energy to support an expanding bubble is \begin{equation} 4\pi R_b^2 \frac{\Delta \phi^2}{L}\lessapprox \frac{4\pi}{3} R^3\Delta V , \end{equation} where $L$ is the domain wall width, $R_b$ is the domain wall bubble radius, and $\Delta V$ is the potential difference between the false and true vacuum. The left-hand side represents the surface energy of the domain wall bubble, while the right-hand side is the volume energy released by the vacuum decay. If the volume of the matter distribution is large enough, then a large enough region populated by the scalar field will be pushed to its true vacuum, and once it passes the potential barrier outside the matter distribution, the false vacuum decay will become spontaneous. Figs.~\ref{decay-not} and \ref{decay-large} demonstrate that larger massive objects can trigger a successful vacuum decay more easily than smaller ones. Higher values of the coupling parameter, $A$, can also trigger vacuum decay more easily (as shown in Figs.~\ref{false-vacuum} and \ref{decay-not}), because more energy is generated in the process. \begin{figure} \includegraphics[width=8cm]{potential-decay.eps} \caption{The potential of a scalar field in the presence of a massive object as given in Eq.~\eqref{VTF}. We set $\Lambda=B=1$. The solid line, $a=0$, shows the potential with two equal minima at $\phi =\pm 1$. The dotted line, $a=-0.2$, shows the potential with a false vacuum at $\phi =1$ and a true vacuum at $\phi=-1$. The dashed line, $a=0.2$, shows a potential with a false vacuum at $\phi =-1$ and a true vacuum at $\phi=1$. } \label{potential-decay} \end{figure} \begin{figure} \includegraphics[width=8cm]{false-1.eps} \caption{ A massive object affecting the vacuum in its neighborhood. We set initial $v_O=0$, $A=20$, $R=1.5$, $\Lambda =1$, $B=1$, $m= 10^{10}$, $a=0.2$, $r=0$. The whole space was initially occupied by $\phi=\Lambda$. The scalar field inside the object evolves toward lower values, because in this region the true vacuum is no longer at $\phi=\Lambda$. Outside of the object the field stays in the true vacuum at $\phi=\Lambda$. In this process, waves are generated that propagate away and carry energy away from this region. The vacuum state is modified only locally (in the bubble) around the object. } \label{true-vacuum} \end{figure} \begin{figure} \includegraphics[width=8cm]{true-2.eps} \caption{ A massive object triggering a false vacuum decay. We set initial $v_O=0$, $A=20$, $R=1.5$, $\Lambda =1$, $B=1$, $m= 10^{10}$, $a=-0.2$, $r=0$. The whole space was initially occupied by $\phi=\Lambda$. The scalar field starts to evolve toward lower values (the true minimum) inside the object.
The crucial difference from the previous case in Fig.~\ref{true-vacuum} is that after the transition starts, the region converted into the true vacuum grows. In other words, the true vacuum bubble expands, which means that a massive object can trigger (or catalyze) the vacuum decay. } \label{false-vacuum} \end{figure} \begin{figure} \includegraphics[width=8cm]{true-1.eps} \caption{Not all of the bubbles of the true vacuum are able to expand. If there is not enough energy to support the expansion of the bubble (e.g. if the strength of the interaction between the object and the field is low), the vacuum decay process cannot be completed. We set here initial $v_O=0$, $A=2$, $R=1.5$, $m= 10^{10}$, $a=-0.2$, $r=0$. Note that the value of the parameter $A$ is much lower than in Fig.~\ref{false-vacuum}. } \label{decay-not} \end{figure} \begin{figure} \includegraphics[width=8cm]{true-3.eps} \caption{ If the massive object is large enough, the bubble of true vacuum will expand. We set here initial $v_O=0$, $A=2$, $R=3$, $m= 10^{10}$, $a=-0.2$, $r=0$. } \label{decay-large} \end{figure} \section{Conclusions} On the quantum level, a microscopic particle can either tunnel through or be reflected by a domain wall. However, a large classical object cannot tunnel through a domain wall, so it can either pass through, if it has enough energy, or bounce back, if it is not energetic enough. A classical object encountering a domain wall must therefore be treated differently from a microscopic particle. In our analysis, we considered a very general Lagrangian for a massive classical object interacting with a domain wall. By solving the equations of motion, we showed that the domain wall gets distorted as the classical object encounters it. The degree of distortion depends on the object's mass, velocity and size, and on the strength of the interaction between the object and the wall. Some of the energy of the object is dissipated away by exciting waves on the wall. To cross the wall, the object must have enough energy to overcome the wall distortion and energy dissipation. Otherwise, the object will rebound and gain some energy back, since the wall acts as a spring in that case. Our results imply that if Earth (or any other planet), or a satellite in Earth's orbit, crosses a domain wall, it will lose (or under certain conditions gain) energy and momentum, and change its mass. One may then use this fact to put a constraint on theories that allow for the existence of domain walls by studying the orbits of planets or satellites. Sub-decimeter position accuracy and sub-mm/s velocity accuracy can be achieved in ground-based reduced-dynamic orbit determination using dual-frequency carrier-phase measurements along with precise GPS ephemeris products and auxiliary environmental information. Here we adopt $0.5$~mm/s as the relevant velocity precision \cite{Montenbruck}. When the satellite's mass changes as it passes through the wall, its velocity changes too. If the interaction is weak, we may estimate the mass difference from energy conservation, \begin{equation} \frac{M_f}{\sqrt{1-v_f^2}} \lessapprox \frac{M_i}{\sqrt{1-v_i^2}} , \end{equation} where the subscripts $f$ and $i$ label the final and initial states respectively. Since the satellite's velocity is of the order of $10$~km/s, i.e. $v\approx 3\times 10^{-5}$ in units of the speed of light, while the velocity precision above corresponds to $\delta v\approx 2\times 10^{-12}$, the satellite's mass change is \begin{equation} \frac{\delta M}{M}\lessapprox v\,\delta v \approx 5\times 10^{-17} .
\end{equation} We note that this is an underestimate of the effect, because the domain wall will also be distorted, which will in turn cause additional energy and momentum changes of the satellite. Therefore, the existence of any domain wall that causes a relative satellite mass change greater than this is already excluded. This method is different from using atomic clocks to constrain domain wall crossings \cite{Derevianko:2013oaa,Roberts:2017hla}. On the other hand, it is known that the velocities of some satellites deviate from theoretical predictions when they pass their perigee. This is known as the flyby anomaly \cite{2017arXiv170402094S}. Such an effect can easily be explained by the satellite crossing a domain wall. In Fig.~\ref{collide} we saw that a closed domain wall (bubble) can be formed when a massive object like Earth passes through the domain wall. The boundary of the bubble is also a domain wall of the same kind. Depending on the exact radius and shape of the bubble, a satellite's orbit might or might not intersect the bubble. This might explain why the anomaly does not show up consistently for all the satellites all the time. Since the typical magnitude of the flyby anomaly is $\delta v \sim 10~\mathrm{mm/s}$, a relative mass change of $\frac{\delta M}{M} \sim v\,\delta v \sim 10^{-15}$ would be sufficient to explain the anomaly. We also showed that a massive object can trigger vacuum decay by creating a bubble of true vacuum around it. However, to create a critical bubble which is able to expand, enough energy must be released in this process. The released energy depends on the strength of the coupling between the massive object and the scalar field, and also on the size of the object. A similar effect was described in \cite{Burda:2015isa}, where black holes trigger the vacuum decay. \begin{acknowledgments} D. C. Dai is supported by the National Natural Science Foundation of China (Grant No. 11775140). D. M. is supported in part by the US Department of Energy (under grant DE-SC0020262) and by the Julian Schwinger Foundation. D. S. is partially supported by the US National Science Foundation, under Grant No. PHY-2014021. We thank Yu Sang for very useful suggestions. \end{acknowledgments}
{ "timestamp": "2021-05-06T02:11:23", "yymm": "2105", "arxiv_id": "2105.01894", "language": "en", "url": "https://arxiv.org/abs/2105.01894" }
\section{Introduction} Polymer models have recently been used to obtain algorithms for spin systems in regimes where standard algorithmic tools (such as correlation-decay algorithms or Gibbs sampling/Glauber dynamics) are inefficient. The prototypical classes of graphs to which polymer models have been applied are expander graphs and random regular graphs \cite{JKP, Cannon, cluster, BR19, liao2019counting, chen2019fast, biclique}, see also \cite{helmuth2019algorithmic,Borgs,flows} for applications on the grid. Random bipartite regular graphs are particularly tantalizing \cite{JKP,liao2019counting, biclique}, since on the one hand there is a somewhat standard probabilistic framework to obtain rough analytic estimates for arbitrary spin systems on them (using first/second moment arguments \cite{antiferro}), but on the other hand the corresponding algorithmic framework, and in particular the development of efficient sampling/counting algorithms, is lacking. This paper will focus on finding the algorithmic limits of the polymer method for the two canonical models of interest, $q$-colorings and the hard-core model (weighted independent sets), though our results apply much more generally, as we will detail later. One of the main contributions of this work is to elevate the rough guarantees obtained by analytic/probabilistic methods into efficient approximate sampling/counting algorithms. We begin with the colorings problem: given an integer $q\geq 3$ and a graph $G=(V,E)$ of maximum degree $\Delta$, the goal is to approximate the number of proper $q$-colorings of $G$, and to sample a proper $q$-coloring uniformly at random. For general graphs there is an intriguing computational phase transition that is conjectured to occur at the statistical physics phase transition for uniqueness/non-uniqueness of the Gibbs measure on the infinite $\Delta$-regular tree. When $q\geq\Delta+2$ it is conjectured that the simple single-site update Markov chain known as the Glauber dynamics is rapidly mixing on any graph of maximum degree $\Delta$ (rapid mixing refers to a convergence rate which is polynomial in $n=|V|$). In contrast, when $q\leq\Delta$ it is believed that the problem is intractable. Current bounds are far from resolving this conjecture, but considerable progress has been made. On the algorithmic side, recent results establish $O(n\log{n})$ mixing time of the Glauber dynamics on an $n$-vertex graph of maximum degree $\Delta$ when $q>(11/6-\varepsilon_0)\Delta$ for a positive constant $\varepsilon_0\approx 10^{-5}$~\cite{BCCPSV,liu,CDMPP19} and on triangle-free graphs when $q>1.764\Delta$~\cite{CLV20,FGYZ21,CGSV21}. On the negative side, it was shown in~\cite{antiferro} that for {\em even} $q<\Delta$ it is NP-hard to approximate the number of $q$-colorings. The restriction that $q$ is even in this hardness result is rather technical and is a byproduct of a certain maximisation which was carried out in \cite{antiferro} for even $q$. The above results address the problem on worst-case graphs; in this paper we address the behavior on \emph{typical/random} graphs. In this vein, random regular {\em bipartite} graphs are particularly interesting as they manifest the phase transition of regular trees, and consequently they serve as the key gadget in hardness results~\cite{Sly,SlySun,GSV-ising,antiferro,DFJ,CCGL}.
However, standard approximate counting techniques, such as Markov Chain Monte Carlo (MCMC), fail in the non-uniqueness region; e.g., the Glauber dynamics is exponentially slow to converge, with high probability over the choice of the random regular bipartite graph, for even $q<\Delta$~\cite{antiferro}. Intriguing algorithmic results for the non-uniqueness region of $q\ll\Delta$ on random bipartite graphs were devised using the recently introduced polymer method of~\cite{JKP} and~\cite{helmuth2019algorithmic}. Jenssen, Keevash, and Perkins~\cite{JKP} presented an $\mathsf{FPTAS}$ for almost every regular bipartite graph when $q\leq C\tfrac{\sqrt{\Delta}}{(\log \Delta)^2}$ for a constant $C>0$ (see also the independent result of Liao, Lin, Lu, and Mao~\cite{liao2019counting}). The running time of these algorithms was improved to $O(n^2(\log n)^3)$ in~\cite{chen2019fast} using a randomized method; see Remark~\ref{rem:faster} below. For a graph $G=(V,E)$ and an integer $q\geq 3$, the partition function $Z_G$ is the number of $q$-colorings of $G$. An algorithm $\mathcal{A}$ is an $\mathsf{FPRAS}$ for the partition function on almost all $\Delta$-regular bipartite graphs if, with probability $1-o(1)$ over a graph $G$ chosen u.a.r. from the $n$-vertex $\Delta$-regular bipartite graphs, given $G$, an accuracy $\epsilon>0$, and a tolerance $\delta>0$, the algorithm $\mathcal{A}$ produces in time $\mathrm{poly}(n,1/\epsilon,\log(1/\delta))$ an estimate $\hat{Z}$ of the partition function $Z_G$ satisfying $(1-\epsilon)Z_G\leq \hat{Z}\leq (1+\epsilon)Z_G$ with probability $\geq 1-\delta$. The algorithm is an $\mathsf{FPTAS}$ if it achieves $\delta=0$. Here we present an $\mathsf{FPRAS}$ for $q$-colorings on almost every regular bipartite graph for even $q=O(\tfrac{\Delta}{\log{\Delta}})$. This improves significantly over the best previously known bound of $q=O\big(\tfrac{\sqrt{\Delta}}{(\log\Delta)^2}\big)$ given in \cite{JKP}, and is within only an $O(\log \Delta)$-factor of the uniqueness/hardness threshold. In fact, we also provide strong evidence that this is the limit of the polymer method up to the implicit constants in the given bounds; see the upcoming Lemma~\ref{lem:fail} for details. \begin{theorem}\label{thm:main1} For all even $q\geq 4$ and all $\Delta \ge 100 q\log q$, there is an $\mathsf{FPRAS}$ for the number of $q$-colorings on almost all $\Delta$-regular bipartite graphs. \end{theorem} We provide analogous results for the hard-core model on weighted independent sets. For a graph $G=(V,E)$, let $\Omega_G$ denote the collection of independent sets of $G$. For a parameter $\lambda>0$, let an independent set $\sigma\in\Omega_G$ have weight $w(\sigma)=\lambda^{|\sigma|}$. The partition function for the hard-core model on graph $G$ at fugacity $\lambda$ is defined as $Z_G = \sum_{\sigma\in\Omega_G} w(\sigma)$ and the Gibbs distribution is $\mu(\sigma) = w(\sigma)/Z_G$. The hard-core model on the infinite $\Delta$-regular tree undergoes a phase transition, corresponding to uniqueness vs. non-uniqueness of the infinite-volume Gibbs measure, at $\lambda_c(\Delta) = \tfrac{(\Delta-1)^{\Delta-1}}{(\Delta-2)^\Delta}\sim\tfrac{\mathrm{e}}{\Delta}$. For any graph $G$ of maximum degree $\Delta$ and all $\lambda<\lambda_c(\Delta)$, the Glauber dynamics mixes in $O(n\log{n})$ time~\cite{CLV20}. On the other hand, when $\lambda>\lambda_c(\Delta)$, the problem of approximating the partition function is NP-hard on $\Delta$-regular graphs~\cite{Sly,SlySun,GSV-ising}.
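As a concrete reference point for these thresholds, the critical fugacity evaluates, for instance, to $\lambda_c(3)=\tfrac{2^2}{1^3}=4$, and in general \[ \lambda_c(\Delta) = \frac{1}{\Delta-2}\Big(1+\frac{1}{\Delta-2}\Big)^{\Delta-1}\sim\frac{\mathrm{e}}{\Delta}, \] so the regime $\lambda=\Omega(\tfrac{\log\Delta}{\Delta})$ considered below sits a $\Theta(\log\Delta)$-factor above the uniqueness threshold.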
For random $\Delta$-regular bipartite graphs, \cite{JKP} presented an $\mathsf{FPTAS}$ for $\lambda>50\tfrac{(\log\Delta)^2}{\Delta}$ when $\Delta$ is sufficiently large, and \cite{liao2019counting} for $\lambda\geq 1$ and $\Delta\geq 53$; see also \cite{Cannon, BR19,liulu} for related results on bipartite graphs. We obtain an improved range of $\lambda=\Omega(\tfrac{\log{\Delta}}{\Delta})$, which is again within an $O(\log \Delta)$-factor of the uniqueness/hardness threshold. \begin{theorem}\label{thm:main2} For all $\Delta\geq 53$ and all $\lambda > 100\frac{\log\Delta}{\Delta}$, there is an $\mathsf{FPRAS}$ for the partition function of the hard-core model with parameter $\lambda$ on almost all $\Delta$-regular bipartite graphs. \end{theorem} \begin{remark}\label{rem:faster} In Theorems~\ref{thm:main1} and~\ref{thm:main2}, we can also obtain deterministic approximation schemes ($\mathsf{FPTAS}$) by applying the interpolation method, analogously to \cite{JKP}. Here, we follow the Markov-chain framework of \cite{chen2019fast}, which provides substantially stronger running-time guarantees than the ones stated, for simplicity, in the theorems above. In particular, the FPRASes in Theorems~\ref{thm:main1} and~\ref{thm:main2} run in time $O\big((\tfrac{n}{\epsilon})^2\log^3(\tfrac{n}{\epsilon})\big)$ when the desired accuracy error is not exponentially small (i.e., $\epsilon\geq \mathrm{e}^{-\Omega(n)}$). Moreover, in the same range of the parameters, we additionally obtain approximate samplers from the Gibbs distribution with analogous running-time guarantees. \end{remark} We remark that the condition that $q$ is even in Theorem~\ref{thm:main1} is for the same technical reasons that the hardness results of \cite{antiferro}, mentioned earlier, were obtained for even $q$; we conjecture that the result can be extended to odd $q$, and our proof approach extends verbatim (once one has the analogue of the upcoming Lemma~\ref{lem:coloring}). In fact, Theorems~\ref{thm:main1} and~\ref{thm:main2} will be proved as special cases of a general algorithmic result that applies to arbitrary spin systems on random bipartite regular graphs. We first introduce general spin systems following the framework of \cite{biclique}. Note that the techniques in \cite{biclique} were targeted at obtaining bounds for general spin systems and do not yield tight results; e.g., for colorings the bound obtained therein is roughly $q=O(\Delta^{1/4})$, cf. the bound on $q$ in Theorem~\ref{thm:main1}. Also, to obtain the result for the hard-core model in Theorem~\ref{thm:main2}, we need to explicitly account for the presence of external fields, as detailed in the next section. \section{Proof Outline} \subsection{Preliminaries: general spin systems and bicliques} Let $q\geq 2$ be an integer. A general $q$-spin system $(\mathbf{B},\boldsymbol{\lambda})$ consists of a symmetric interaction matrix $\mathbf{B}=\{B_{ij}\}_{i,j\in [q]}$, whose entries are between 0 and 1, and an activity vector $\boldsymbol{\lambda}=\{\lambda_i\}_{i\in [q]}$ with strictly positive entries which are $\leq 1$. Note that, up to normalising, we may assume that $\mathbf{B}$ and $\boldsymbol{\lambda}$ each have at least one entry equal to 1. For a graph $G=(V,E)$, an assignment $\sigma:V\rightarrow[q]$ has weight $w_G(\sigma)=\prod_{u\in V}\lambda_{\sigma(u)}\prod_{(u,v)\in E}B_{\sigma(u),\sigma(v)}$. The Gibbs distribution is given by $\mu_G(\sigma)=w_G(\sigma)/Z_G$, where $Z_G=\sum_{\sigma:V\rightarrow[q]}w_G(\sigma)$ is the partition function. We let $\Sigma_G$ be the set of all spin assignments.
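As a quick sanity check of these definitions, consider the hard-core system of Example~\ref{example} below ($q=2$, $B_{11}=0$, $\lambda_0=1$, $\lambda_1=\lambda$) on a graph $G$ consisting of a single edge $\{u,v\}$: the four spin assignments have weights \[ w_G(0,0)=1, \qquad w_G(0,1)=w_G(1,0)=\lambda, \qquad w_G(1,1)=\lambda^2 B_{11}=0 , \] so that $Z_G=1+2\lambda$, matching the direct enumeration of the weighted independent sets of an edge.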
For a spin system on a bipartite graph $G$, the following notion of \emph{bicliques} is relevant \cite{biclique,JKP, galanis2016approximately, sampling}. \begin{definition}[Biclique] For a $q$-spin system with interaction matrix $\mathbf{B}$, we say that a pair $(S,T)$ where $S,T \subseteq [q]$ is a \emph{biclique} if $B_{ij} = 1$ for all $i \in S, j\in T$. A biclique $(S,T)$ is \emph{maximal} if there is \emph{no} other biclique $(S',T')\neq (S,T)$ satisfying $S \subseteq S' \subseteq [q]$ and $T \subseteq T' \subseteq [q]$. \end{definition} \noindent Note that bicliques are defined using only the interaction matrix $\mathbf{B}$ and do not depend on $\boldsymbol{\lambda}$. \begin{example}\label{example} For the $q$-colorings model, we have that $\mathbf{B}$ is the $q\times q$ matrix with all ones except on the diagonal where the entries are zero (and $\boldsymbol{\lambda}$ is the all-ones vector). The bicliques $(S,T)$ are given by pairs of disjoint sets $S,T\subseteq [q]$, whereas the maximal bicliques are given by pairs $S,T\subseteq[q]$ that form a partition of $[q]$. For the hard-core model, we have $\mathbf{B}=\big[\begin{smallmatrix} 1&1\\1&0\end{smallmatrix}\big]$ and $\boldsymbol{\lambda}=\big[\begin{smallmatrix} 1\\ \lambda\end{smallmatrix}\big]$. Indexing the rows/columns of $\mathbf{B}$ with $\{0,1\}$ (instead of $\{1,2\}$), the bicliques are $\{(0, 0), (0, 1), (1, 0), (0, 01), (01, 0)\}$ and the maximal bicliques are $\{(0,01), (01,0)\}$. \end{example} \subsection{Our approach: phase vectors and phase maximality} \label{subsec:our_approach} Let $(\mathbf{B},\boldsymbol{\lambda})$ be an arbitrary spin system and $G$ be a $\Delta$-regular bipartite graph, whose vertex set $V$ is partitioned as $(L,R)$ with $|L|=|R|=n$. Our approach to obtaining approximation algorithms is to consider the likely frequencies of the spins on each side of the graph in the Gibbs distribution of $G$. Adapting methods from \cite{biclique, JKP}, we show that we can obtain efficient approximation schemes for those spin systems where the ``likely'' frequency vectors are captured by maximal bicliques $(S,T)$; see the upcoming Definition~\ref{def:maximality}. The main new ingredient in our work is a tight method to study when this condition is satisfied for general spin systems, which ultimately yields Theorems~\ref{thm:main1} and~\ref{thm:main2} as special cases. To formalise the above, for $q$-dimensional probability vectors $\boldsymbol{\alpha}=\{\alpha_i\}_{i\in [q]},\boldsymbol{\beta}=\{\beta_i\}_{i\in [q]}$, we let \begin{equation}\label{eq:sigmaalphabeta} \Sigma^{\boldsymbol{\alpha},\boldsymbol{\beta}}_G=\{\sigma:V\rightarrow[q]\, \big|\, |\sigma^{-1}(i)\cap L|= n\alpha_i, |\sigma^{-1}(i)\cap R|= n\beta_i \mbox{ for all } i\in[q]\} \end{equation} be the set of spin assignments where exactly $n\alpha_i,n\beta_i$ vertices are assigned the spin $i\in [q]$ on $L,R$, respectively. Denote by $Z_G^{\boldsymbol{\alpha},\boldsymbol{\beta}}$ the contribution to the partition function from configurations in $\Sigma^{\boldsymbol{\alpha},\boldsymbol{\beta}}_G$, i.e., $Z_G^{\boldsymbol{\alpha},\boldsymbol{\beta}}=\sum_{\sigma\in \Sigma^{\boldsymbol{\alpha},\boldsymbol{\beta}}_G} w_G(\sigma)$. We will be interested in those pairs $(\boldsymbol{\alpha},\boldsymbol{\beta})$ that contribute significantly to the partition function, as detailed below. \begin{definition}[Phase vectors] Let $\eta>0$.
For a $q$-spin system on an $n$-vertex regular bipartite graph $G$, we say that a pair $(\boldsymbol{\alpha},\boldsymbol{\beta})$ of $q$-dimensional probability vectors is an $\eta$-phase vector of $G$ if $Z_G^{\boldsymbol{\alpha},\boldsymbol{\beta}}/Z_G\geq \mathrm{e}^{-\eta n}$. \end{definition} Understanding the phase vectors is in general a hard task. For random bipartite regular graphs, they have been identified to lie among the set of fixpoints $(\ensuremath{\mathbf{r}},\ensuremath{\mathbf{c}})$ of the following so-called tree recursions on the $\Delta$-regular tree~\cite{antiferro}: \begin{equation}\label{eq:tr} r_i \propto \lambda_i \left(\mbox{$\sum_{j\in [q]}$} B_{ij} c_j\right)^{\Delta-1} \mbox{ for $i \in [q]$}; \qquad c_j \propto \lambda_j \left(\mbox{$\sum_{i\in [q]}$} B_{ij} r_i\right)^{\Delta-1} \mbox{ for $j \in [q]$}. \end{equation} The principle underpinning this correspondence is that the neighbourhood structure of a random $\Delta$-regular bipartite graph is similar to that of the $\Delta$-regular tree. Nevertheless, identifying the actual phase vectors, even among the finite set of fixpoints in \eqref{eq:tr}, has turned out to be rather challenging. Even in the canonical case of $q$-colorings, the best known analysis works only for even $q$ and is the result of technically involved arguments. Before looking into this in more detail, we first explain how to convert information about the phase vectors into algorithms. Adapting methods from \cite{biclique, JKP}, we show that this is feasible when the phase vectors correspond to maximal bicliques. More precisely, for a non-empty set $S \subseteq [q]$, define the $q$-dimensional probability vector $\mathbf{g}_S$ whose $i$-th entry is given by $\frac{\lambda_i}{\sum_{j \in S} \lambda_j}$ for $i \in S$, and 0 otherwise. The following notion of ``phase maximality'' will be important in what follows. \begin{definition}[Phase Maximality]\label{def:maximality} Let $(\mathbf{B},\boldsymbol{\lambda})$ be a $q$-spin system and $\Delta\geq 3$. For $\rho>0$ and a set of maximal bicliques $\mathcal{B}_\Delta$, we say that the spin system is $\rho$-maximal with respect to $\mathcal{B}_\Delta$ if there is $\eta>0$ such that, for almost all $\Delta$-regular bipartite graphs, every $\eta$-phase vector $(\boldsymbol{\alpha},\boldsymbol{\beta})$ satisfies $\norm{(\boldsymbol{\alpha},\boldsymbol{\beta})-(\mathbf{g}_S,\mathbf{g}_T)}_\infty \le \rho$ for some maximal biclique $(S,T)\in \mathcal{B}_\Delta$. \end{definition} The key new ingredient to prove Theorems~\ref{thm:main1} and \ref{thm:main2} is to establish maximality for the colorings and hard-core models in the corresponding parameter regimes, as detailed in the following lemmas. \newcommand{\statelemmacoloring}{For even $q\geq 4$ and $\Delta \ge 8 q \log \Delta$, the $q$-colorings model is $\tfrac{1}{12\Delta q}$-maximal with respect to the set of bicliques $\Bc_\Delta=\{(S,[q]\backslash S) \, \big| \, |S|=\frac{q}{2}\}$.} \begin{lemma}\label{lem:coloring} \statelemmacoloring
\end{lemma} \newcommand{\statelemmaind}{For $\Delta\geq 50$ and $\lambda \ge \tfrac{50}{\Delta}$, the hard-core model with fugacity $\lambda$ is $\tfrac{1}{24\Delta}$-maximal with respect to the set of bicliques $\Bc_\Delta=\{(0,01),(01,0)\}$.} \begin{lemma}\label{lem:ind} \statelemmaind \end{lemma} Previous approaches in \cite{JKP,biclique,liao2019counting} to establish the analogues of Lemmas~\ref{lem:coloring} and~\ref{lem:ind} used expansion properties of random $\Delta$-regular bipartite graphs which, however, do not give tight results in terms of the range of parameters to which they apply. Instead, we follow a more direct analytical approach, using the tree-recursions view mentioned in \eqref{eq:tr}; further details are given in Section~\ref{sec:phases}, with the final technical bounds obtained in Section~\ref{sec:maximal}. These more precise bounds allow us to push the applicability of the polymer method significantly further; see also the beginning of Section~\ref{sec:alg} for further explanation. Indeed, we show that $\rho$-maximality yields approximation algorithms on random $\Delta$-regular bipartite graphs, provided that $\rho$ is sufficiently small and that the weight of configurations corresponding to maximal bicliques is sufficiently large relative to that of other types of configurations. To capture the latter condition, recall that the entries of $\mathbf{B},\boldsymbol{\lambda}$ are between 0 and 1, and that each of them includes at least one entry equal to 1. We say that $\mathbf{B}$ is a $\delta$-matrix for some $\delta\in [0,1)$ if the second largest entry of $\mathbf{B}$ is $\leq \delta$, and we denote by $\min(\boldsymbol{\lambda})$ the minimum entry in $\boldsymbol{\lambda}$ (note that this is strictly bigger than 0). By applying the polymer method appropriately (inspired by \cite{biclique}), we show the following in Section~\ref{sec:alg}. \newcommand{\statelemmamainthree}{Let $(\mathbf{B},\boldsymbol{\lambda})$ be a $q$-spin system, $\Delta \geq 3$ be an integer, and $\rho=\tfrac{1}{12\Delta q}$. Suppose further that $\mathbf{B}$ is a $\delta$-matrix for some $\delta\in [0,1)$ and that $\Delta(1-\delta)\min(\boldsymbol{\lambda})\geq 7q\big(5+\log \tfrac{(q-1)\Delta^3}{\min(\boldsymbol{\lambda})}\big)$. If the spin system is $\rho$-maximal, then there is an $\mathsf{FPRAS}$ for the partition function for almost all $\Delta$-regular bipartite graphs. In fact, for almost all $\Delta$-regular bipartite graphs, for any $\epsilon\geq\exp(-\Omega(n))$, the algorithm produces an $\epsilon$-estimate for the partition function and an $\epsilon$-sample from the Gibbs distribution in time $O\big((\tfrac{n}{\epsilon})^2(\log\tfrac{n}{\epsilon})^3\big)$.} \begin{lemma}\label{lem:main3} \statelemmamainthree \end{lemma} Using the above ingredients, we can prove our main Theorems~\ref{thm:main1} and~\ref{thm:main2}. \begin{proof}[Proof of Theorems~\ref{thm:main1} and~\ref{thm:main2}] We first prove the result for colorings, Theorem~\ref{thm:main1}. We just need to combine Lemmas~\ref{lem:coloring} and~\ref{lem:main3}.
In the setting of Lemma~\ref{lem:main3} and Example~\ref{example}, we have that the interaction matrix for colorings is a $\delta$-matrix with $\delta=0$ and that $\min(\boldsymbol{\lambda})=1$. Hence, for $\Delta\geq 100q\log q$, we have that $\Delta(1-\delta)\min(\boldsymbol{\lambda})\geq 7q\big(5+\log \tfrac{(q-1)\Delta^3}{\min(\boldsymbol{\lambda})}\big)$, as needed. Moreover, Lemma~\ref{lem:coloring} establishes the required $\rho$-maximality. Therefore, the conclusion of Lemma~\ref{lem:main3} applies and we obtain Theorem~\ref{thm:main1}. The proof for independent sets, Theorem~\ref{thm:main2}, is analogous, now combining Lemmas~\ref{lem:ind} and~\ref{lem:main3}. We may assume that $\lambda<1$, since otherwise the result follows from the $\mathsf{FPTAS}$ for $\Delta\geq 53$ in \cite[Theorem 1]{liao2019counting}. In the setting of Example~\ref{example}, we have that $q=2$, $\delta=0$ and $\min(\boldsymbol{\lambda})=\lambda$. Then, for $\lambda>100\tfrac{\log \Delta}{\Delta}$, we have that $\Delta(1-\delta)\min(\boldsymbol{\lambda})\geq 7q\big(5+\log \tfrac{(q-1)\Delta^3}{\min(\boldsymbol{\lambda})}\big)$, and the result follows analogously to the above. \end{proof} Finally, as mentioned in the introduction, we give evidence that the bounds on $q$ in Theorem~\ref{thm:main1} capture the limit of the polymer method for colorings, by showing that maximality fails when we go beyond the relevant range (note that some form of maximality is shown, either implicitly or explicitly, in all previous works on these problems). \begin{lemma}\label{lem:fail} For all even $q\geq 4$ and $\Delta=O(q \log q)$, for the $q$-colorings model, $O(\tfrac{1}{\Delta q})$-maximality fails with respect to any set of bicliques on almost all $\Delta$-regular bipartite graphs. \end{lemma} We note that Lemma~\ref{lem:fail} does not exclude the possibility of some exotic polymer model that can perhaps break the barrier therein. It does show, however, that the current approach cannot go substantially beyond the guarantee in Theorem~\ref{thm:main1}, and that at the very least some major refinement of the framework would be needed. We conjecture that a similar barrier applies to the result of Theorem~\ref{thm:main2}, though here the bottleneck is in Lemma~\ref{lem:main3}. More precisely, for $\lambda=O(\tfrac{\log \Delta}{\Delta})$ in the non-uniqueness region, polymers can be of size roughly $n^{\Omega(1)}$, in contrast to what happens when the polymer method applies (where the size of polymers turns out to be logarithmic in $n$). \section{Phase vectors on random bipartite regular graphs}\label{sec:phases} Let $(\mathbf{B},\boldsymbol{\lambda})$ be a $q$-spin system. In this section, we use results from \cite{antiferro} to pinpoint the phase vectors on random $\Delta$-regular bipartite graphs, and we give a sufficient condition to conclude maximality (Corollary~\ref{lem:phases}). We will invoke this in Section~\ref{sec:maximal} to prove Lemmas~\ref{lem:coloring} and~\ref{lem:ind}.
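Before setting up the variational framework, we remark that the fixpoint structure behind the phase vectors is easy to explore numerically. The following minimal sketch (in Python; it is purely illustrative, plays no role in our algorithms or proofs, and its parameter choices are ours) iterates the hard-core tree recursions \eqref{eq:tr} in ratio form (cf. \eqref{eq:hc-tr} in Section~\ref{sec:maximal}) from an asymmetric starting point. \begin{verbatim} # Illustrative numerical sketch (not part of the paper's algorithms): # iterate the hard-core tree recursions in ratio form, # x <- lam / (1 + y)^(Delta - 1), y <- lam / (1 + x)^(Delta - 1), # starting from an asymmetric point; in the non-uniqueness regime the # iteration settles on a fixpoint with x != y. import math Delta = 50 lam = 100 * math.log(Delta) / Delta # parameter regime of Theorem 2 def step(t): # one application of the recursion t -> lam / (1 + t)^(Delta - 1) return lam / (1.0 + t) ** (Delta - 1) x, y = 0.0, lam # asymmetric initial point for _ in range(100): x, y = step(y), step(x) print(x, y) # x is essentially 0, y is close to lam print(x < 1.0 / (Delta - 2) < y) # True: the fixpoint is asymmetric \end{verbatim} For $\Delta=50$ and $\lambda=100\tfrac{\log\Delta}{\Delta}$ the iteration converges after a handful of steps, with $x$ essentially $0$ and $y$ close to $\lambda$, consistent with the bounds $x<\tfrac{1}{\Delta-2}<y$ derived in Section~\ref{sec:maximal}.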
For $q$-dimensional probability vectors $\ensuremath{\mathbf{r}},\ensuremath{\mathbf{c}}$, we will consider the function \[\Phi_{\mathbf{B},\boldsymbol{\lambda},\Delta}(\ensuremath{\mathbf{r}},\ensuremath{\mathbf{c}})= \frac{\ensuremath{\mathbf{r}}^\intercal \mathbf{B} \ensuremath{\mathbf{c}}}{\|\boldsymbol{\Lambda}^{-1}\ensuremath{\mathbf{r}}\|_p \|\boldsymbol{\Lambda}^{-1}\ensuremath{\mathbf{c}}\|_p},\] where $p=\tfrac{\Delta}{\Delta-1}$ and $\boldsymbol{\Lambda}$ is the $q\times q$ diagonal matrix whose $i$-th diagonal entry is equal to $\lambda_i^{1/\Delta}$. We will be interested in the maximizers $(\ensuremath{\mathbf{r}},\ensuremath{\mathbf{c}})$ of $\Phi$. \begin{lemma}\label{lem:maxima} Suppose that the interaction matrix $\mathbf{B}$ is ergodic, i.e., irreducible and aperiodic. Then, the maximizers of $\Phi_{\mathbf{B},\boldsymbol{\lambda},\Delta}(\ensuremath{\mathbf{r}},\ensuremath{\mathbf{c}})$ are fixpoints of the tree recursions~\eqref{eq:tr}. \end{lemma} \begin{proof} The proof follows by a relatively standard Lagrange-multiplier argument. The assumption that $\mathbf{B}$ is ergodic is needed to exclude maximizers at the boundary, i.e., maximizers in which some entry of $\ensuremath{\mathbf{r}},\ensuremath{\mathbf{c}}$ is equal to zero. A closely related argument in the case $\boldsymbol{\lambda}=\ensuremath{\mathbf{1}}$ can be found in \cite[Lemma 4.11]{antiferro}. \end{proof} Let $\mathbf{L}$ denote the matrix $\big\{\tfrac{B_{ij}r_ic_j}{\sqrt{r_i'c_j'}}\big\}_{i,j\in [q]}$, where $r_i':=r_i(\sum_{j\in [q]} B_{ij} c_j)$ for $i\in [q]$ and $c_j':=c_j(\sum_{i\in [q]}B_{ij}r_i)$ for $j\in [q]$. A maximiser $(\ensuremath{\mathbf{r}},\ensuremath{\mathbf{c}})$ of $\Phi$ is called Hessian dominant in \cite{antiferro} if all eigenvalues of the matrix $\mathbf{L}$ apart from the largest (which equals 1) are less than $\tfrac{1}{\Delta-1}$ in absolute value. Let $\mathbf{f}: \ensuremath{\mathbf{r}}\mapsto \boldsymbol{\alpha}$ be the map given by $\alpha_i=(\lambda^{-1/\Delta}_ir_i/\|\boldsymbol{\Lambda}^{-1}\ensuremath{\mathbf{r}}\|_p)^p$ for $i\in[q]$. \begin{lemma}[\cite{antiferro}]\label{lem:hessiandominant} Let $\Delta\geq 3$ be an integer and consider a $q$-spin system $(\mathbf{B},\boldsymbol{\lambda})$. Suppose that all the maximizers of $\Phi_{\mathbf{B},\boldsymbol{\lambda},\Delta}$ are Hessian dominant. Then, for every $\kappa>0$, there is $\eta>0$ such that for almost all $\Delta$-regular bipartite graphs, every $\eta$-phase vector $(\boldsymbol{\alpha},\boldsymbol{\beta})$ satisfies $\|(\boldsymbol{\alpha},\boldsymbol{\beta})-(\boldsymbol{\alpha}^*,\boldsymbol{\beta}^*)\|_\infty\leq \kappa$, where $(\boldsymbol{\alpha}^*,\boldsymbol{\beta}^*)=(\mathbf{f}(\ensuremath{\mathbf{r}}^*),\mathbf{f}(\ensuremath{\mathbf{c}}^*))$ and $(\ensuremath{\mathbf{r}}^*,\ensuremath{\mathbf{c}}^*)$ is a maximizer of $\Phi_{\mathbf{B},\boldsymbol{\lambda},\Delta}$. \end{lemma} \begin{proof} The lemma is proved in \cite[Section 6.4.1]{antiferro} for $\boldsymbol{\lambda}=\mathbf{1}$. To extend it to general $\boldsymbol{\lambda}$, consider the spin system $(\widehat{\mathbf{B}},\widehat{\boldsymbol{\lambda}})$ where $\widehat{\mathbf{B}}=\boldsymbol{\Lambda}\mathbf{B}\boldsymbol{\Lambda}$ and $\widehat{\boldsymbol{\lambda}}=\ensuremath{\mathbf{1}}$.
Note that, on $\Delta$-regular bipartite graphs and for arbitrary $\eta>0$, an $\eta$-phase vector $(\boldsymbol{\alpha},\boldsymbol{\beta})$ of the spin system $(\mathbf{B},\boldsymbol{\lambda})$ is also an $\eta$-phase vector of the spin system with interaction matrix $\widehat{\mathbf{B}}$ and activity vector $\mathbf{1}$, and vice versa. Moreover, the maximizers $(\ensuremath{\mathbf{r}}^*,\ensuremath{\mathbf{c}}^*)$ of $\Phi=\Phi_{\mathbf{B},\boldsymbol{\lambda},\Delta}(\ensuremath{\mathbf{r}},\ensuremath{\mathbf{c}})$ are in 1-1 correspondence with the maximizers $(\widehat{\ensuremath{\mathbf{r}}}^*,\widehat{\ensuremath{\mathbf{c}}}^*)$ of $\widehat{\Phi}=\widehat{\Phi}_{\widehat{\mathbf{B}},\widehat{\boldsymbol{\lambda}},\Delta}$ via the relation $(\ensuremath{\mathbf{r}}^*,\ensuremath{\mathbf{c}}^*)=(\boldsymbol{\Lambda}\widehat{\ensuremath{\mathbf{r}}}^*,\boldsymbol{\Lambda} \widehat{\ensuremath{\mathbf{c}}}^*)$. Note also that $(\ensuremath{\mathbf{r}}^*,\ensuremath{\mathbf{c}}^*)$ is Hessian dominant for $\Phi$ iff $(\widehat{\ensuremath{\mathbf{r}}}^*,\widehat{\ensuremath{\mathbf{c}}}^*)$ is Hessian dominant for $\widehat{\Phi}$, which establishes the result for general $\boldsymbol{\lambda}$. \end{proof} Using Lemma~\ref{lem:hessiandominant} and the definition of maximality (cf. Definition~\ref{def:maximality}), we obtain the following. \begin{corollary}\label{lem:phases} Let $(\mathbf{B},\boldsymbol{\lambda})$ be a $q$-spin system and $\Delta \geq 3$ be an integer. Suppose that there is a set of maximal bicliques $\mathcal{B}_\Delta$ such that all maximizers $(\ensuremath{\mathbf{r}}^*,\ensuremath{\mathbf{c}}^*)$ of $\Phi_{\mathbf{B},\boldsymbol{\lambda},\Delta}$ are Hessian dominant and satisfy $\|(\boldsymbol{\alpha}^*,\boldsymbol{\beta}^*)-(\mathbf{g}_S,\mathbf{g}_T)\|_\infty\leq \tfrac{1}{15\Delta q}$ for some maximal biclique $(S,T)\in \mathcal{B}_\Delta$, where $(\boldsymbol{\alpha}^*,\boldsymbol{\beta}^*)=(\mathbf{f}(\ensuremath{\mathbf{r}}^*),\mathbf{f}(\ensuremath{\mathbf{c}}^*))$. Then, the spin system is $\tfrac{1}{12\Delta q}$-maximal with respect to $\mathcal{B}_\Delta$. \end{corollary} \section{Algorithms from maximality: Proof of Lemma~\ref{lem:main3}}\label{sec:alg} Let $\Delta\geq 3$ be an integer, and let $(\mathbf{B},\boldsymbol{\lambda})$ be a $q$-spin system which is $\rho$-maximal for $\rho=\tfrac{1}{12\Delta q}$. Consider also a bipartite graph $G=(V,E)$ with vertex bipartition $(L, R)$ and $|L|=|R|=n$. The following expansion properties of sets $U\subseteq V$ in random regular bipartite graphs relax the expansion properties that were used in \cite{JKP,liao2019counting}, which needed to consider bigger sets $U$; instead, whenever the spin system is $\tfrac{1}{12\Delta q}$-maximal, we only need to consider sets $U$ of size roughly $\tfrac{1}{\Delta}|V|$, whose expansion is $\Omega(\Delta)$. For a set $U\subseteq V$, we use $\partial U$ to denote the set of vertices in $G$ which have a neighbor in $U$ but do not belong to $U$, and $U^+$ to denote the set $U \cup \partial U$. \begin{lemma}\label{lem:expansion} Let $\Delta\geq 3$ be an integer. For almost all $\Delta$-regular bipartite graphs $G=(V,E)$ with bipartition $(L,R)$, the following expansion properties hold: \begin{enumerate} \item every set $U\subseteq V$ with $|U\cap L|\leq \tfrac{1}{3\Delta}|L|$ and $|U\cap R|\leq \tfrac{1}{3\Delta}|R|$ satisfies $|U^+|\geq \tfrac{\Delta-1}{2}|U|$.
\item every set $U\subseteq V$ with $|U\cap L|\leq \tfrac{1}{6\Delta}|L|$ and $|U\cap R|\leq \tfrac{1}{6\Delta}|R|$ satisfies $|\partial U|\geq \tfrac{\Delta}{7}|U|$. \end{enumerate} \end{lemma} \begin{proof} For the first item, consider a subset $U\subseteq V$ with $|U\cap L|\leq \tfrac{1}{3\Delta}|L|$ and $|U\cap R|\leq \tfrac{1}{3\Delta}|R|$. We will show that \begin{equation}\label{eq:expansion1} |\partial (U \cap L)|\geq \tfrac{\Delta-1}{2}|U\cap L| \mbox{ and } |\partial (U \cap R)|\geq \tfrac{\Delta-1}{2}|U\cap R|. \end{equation} From this, we obtain that $|U^+|=|U\cup \partial U|\geq |\partial (U \cap L)|+|\partial (U \cap R)|\geq \tfrac{\Delta-1}{2}|U|$. To verify \eqref{eq:expansion1}, we use a sufficient condition due to Bassalygo \cite{bassalygo1981}, see also \cite[Theorem 22]{JKP}. Namely, for $a=\tfrac{1}{3\Delta}$, $b= \tfrac{\Delta-1}{2}$ and $H(x)=-x \log_2(x)-(1-x)\log_2(1-x)$, we check that $\Delta>\frac{H(a)+H(a b)}{H(a)- a b H(1/b)}$, which indeed holds for all $\Delta\geq 3$. The proof of the second item is analogous. Consider a subset $U\subseteq V$ with $|U\cap L|\leq \tfrac{1}{6\Delta}|L|$ and $|U\cap R|\leq \tfrac{1}{6\Delta}|R|$. We will show that \begin{equation}\label{eq:expansion2} |\partial (U \cap L)|\geq \big(\tfrac{\Delta}{7}+1\big)|U\cap L| \mbox{ and } |\partial (U \cap R)|\geq \big(\tfrac{\Delta}{7}+1\big)|U\cap R|. \end{equation} From this, we obtain that $|\partial U|\geq |\partial (U \cap L)|+|\partial (U \cap R)|-|U|\geq \tfrac{\Delta}{7}|U|$. The proof of \eqref{eq:expansion2} is by verifying again the same condition as above, now for the values $a=\tfrac{1}{6\Delta}$ and $b= \tfrac{\Delta}{7}+1$. \end{proof} Following \cite{biclique}, we will define a polymer model corresponding to a biclique $(S,T)$ of the spin system. Let $G^3$ be the graph on vertex set $V$ where two vertices $u,v$ are adjacent iff $\mathrm{dist}(u,v)\le 3$. A subset $U\subseteq V$ of vertices is said to be $G^3$-connected if the induced subgraph $G^3[U]$ is connected. A polymer $\gamma=(V_\gamma,\sigma_\gamma)$ consists of a subset $V_\gamma$ of vertices of $G$ which is $G^3$-connected, together with a spin assignment $\sigma_\gamma:V_\gamma\rightarrow [q]$ on $V_\gamma$, such that every vertex in $V_\gamma \cap L$ gets a spin in $[q]\backslash S$ and every vertex in $V_\gamma \cap R$ gets a spin in $[q]\backslash T$. Two polymers $\gamma_1,\gamma_2$ are compatible (written as $\gamma_1 \sim \gamma_2$) if and only if $\mathrm{dist}(\gamma_1,\gamma_2) > 3$, i.e., $\gamma_1 \cup \gamma_2$ is not $G^3$-connected. The size of a polymer $\gamma$, denoted by $|\gamma|$, is the number of vertices it contains. We use $E_\gamma$ to denote the edges of $G$ both of whose endpoints lie in $V_\gamma$, $\partial V_\gamma$ to denote the vertices in $G$ which have a neighbor in $V_\gamma$ but do not belong to $V_\gamma$, and $V_\gamma^+$ to denote the set $V_\gamma \cup \partial V_\gamma$.
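To illustrate the definition, consider the hard-core model with the maximal biclique $(S,T)=(\{0\},\{0,1\})$ from Example~\ref{example}. Here $[q]\setminus T=\emptyset$, so polymers contain no vertices of $R$ at all: a polymer is simply a $G^3$-connected set of vertices of $L$, each assigned the occupied spin $1$. Polymers thus record the ``defects'' relative to the dominant phase in which $L$ is unoccupied and $R$ is unconstrained.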
For a polymer $\gamma$, the weight $w^{S,T}_G(\gamma)$ of the polymer is given by \begin{equation}\label{eq:wSTg} w^{S,T}_G(\gamma)=\frac{\prod_{u\in V_\gamma}\lambda_{\sigma_\gamma(u)}\prod_{(u,v)\in E_\gamma}B_{\sigma_\gamma(u),\sigma_\gamma(v)}\prod_{u\in \partial V_\gamma}F_u}{\big(\sum_{i\in S} \lambda_i\big)^{|V_\gamma^+\cap L|}\big(\sum_{j\in T} \lambda_j\big)^{|V_\gamma^+\cap R|}}, \end{equation} where \begin{equation}\label{eq:Fu} F_u=\sum_{i\in S}\lambda_i\prod_{v\in V_\gamma\cap \partial u}B_{i,\sigma_\gamma(v)} \mbox{ if } u\in \partial V_\gamma\cap L,\quad F_u=\sum_{j\in T}\lambda_j\prod_{v\in V_\gamma\cap \partial u}B_{j,\sigma_\gamma(v)} \mbox{ if } u\in \partial V_\gamma\cap R. \end{equation} Let $\mathcal{P}^{S,T}_G$ be the set of all polymers $\gamma=(V_\gamma,\sigma_\gamma)$ with $|V_\gamma|\leq 2q \rho n=\tfrac{n}{6\Delta}$. A configuration $\Gamma=(V_\Gamma, \sigma_\Gamma)$ of polymers is a collection of mutually compatible polymers $\gamma_1,\hdots, \gamma_k\in \mathcal{P}^{S,T}_G$ with $V_\Gamma=\cup_{t\in [k]} V_{\gamma_t}$ and $\sigma_\Gamma$ the spin assignment on $V_\Gamma$ which agrees with $\sigma_{\gamma_t}$ on $V_{\gamma_t}$ for each $t\in [k]$. Let $\Omega^{S,T}_G$ be the set of all possible configurations $\Gamma$. The size of a configuration is $|\Gamma| = \sum_{\gamma\in\Gamma} |V_\gamma|$. \begin{lemma}\label{lem:configupper} Every configuration $\Gamma$ satisfies $|V_\Gamma|\leq 12n/\Delta$. \end{lemma} \begin{proof} Suppose that there exists a configuration $\Gamma$ with $|V_\Gamma|> 12n/\Delta$. Then, we can greedily extract disjoint sub-collections $\Gamma_1,\hdots, \Gamma_{36}\subseteq \Gamma$ (each consisting of polymers belonging to $\Gamma$) such that $\tfrac{n}{6\Delta}< |\Gamma_t|\leq\tfrac{n}{3\Delta}$ for each $t\in[36]$. By Lemma~\ref{lem:expansion}, we have that $|V_{\Gamma_t}^+|\geq \tfrac{\Delta-1}{2}|\Gamma_t|>\frac{n}{6\Delta}\cdot\tfrac{\Delta-1}{2}$ and therefore $\sum^{36}_{t=1}|V_{\Gamma_t}^+|> \frac{6n}{\Delta}\cdot\tfrac{\Delta-1}{2}\geq 2n$. Therefore, since $G$ has $2n$ vertices, the sets $V_{\Gamma_1}^+,\hdots, V_{\Gamma_{36}}^+$ cannot be pairwise disjoint, contradicting the fact that the configuration $\Gamma$ consists of pairwise compatible polymers. \end{proof} The weight $w^{S,T}_G(\Gamma)$ of a configuration $\Gamma$ is given by the product of the weights of the polymers that $\Gamma$ consists of. We define the partition function of the polymer model as \[Z^{S,T}_G=\mbox{$\sum_{\Gamma\in \Omega^{S,T}_G}$}\, w^{S,T}_G(\Gamma), \mbox{ and its Gibbs distribution by } \mu^{S,T}_G(\Gamma)=w^{S,T}_G(\Gamma)/Z^{S,T}_G \mbox{ for $\Gamma\in \Omega^{S,T}_G$}.\] Finally, we let $Z^{\mathrm{pmer}}_G=\sum_{(S,T)\in \mathcal{B}_\Delta}\,\big(\mbox{$\sum_{i\in S}$}\,\lambda_i\big)^n \big(\mbox{$\sum_{j\in T}$}\,\lambda_j\big)^n Z^{S,T}_G$. \begin{lemma}\label{lem:estimate} Let $\Delta\geq3$ be an integer, and let $(\mathbf{B},\boldsymbol{\lambda})$ be a $q$-spin system which is $\tfrac{1}{12\Delta q}$-maximal with respect to a set of maximal bicliques $\mathcal{B}_\Delta$. Suppose further that $\Delta \min(\boldsymbol{\lambda})\geq 15q$. Then, there is $\epsilon=\mathrm{e}^{-\Omega(n)}$ such that, for almost all $\Delta$-regular bipartite graphs $G$ with $n$ vertices on each part, it holds that $(1-\epsilon)Z_G\leq Z^{\mathrm{pmer}}_G\leq (1+\epsilon) Z_G$. \end{lemma} \begin{proof} By the $\tfrac{1}{12\Delta q}$-maximality of the spin system with respect to $\mathcal{B}_\Delta$ (cf.
Definition~\ref{def:maximality}), there is an $\eta>0$ such that for almost all $\Delta$-regular graphs $G$, every $\eta$-phase vector $(\boldsymbol{\alpha},\boldsymbol{\beta})$ of $G$ belongs to \[\mathcal{F}_\Delta:=\Big\{(\boldsymbol{\alpha},\boldsymbol{\beta})\,\Big|\, \| (\boldsymbol{\alpha},\boldsymbol{\beta})-(\mathbf{g}_S,\mathbf{g}_T)\|_\infty \le \tfrac{1}{12\Delta q} \mbox{ for some maximal biclique } (S,T)\in \mathcal{B}_\Delta\Big\}.\] Let $\Sigma_G^{\max}=\{\sigma \mid \sigma\in \Sigma^{\boldsymbol{\alpha},\boldsymbol{\beta}}_G \mbox{ for some } (\boldsymbol{\alpha},\boldsymbol{\beta})\in \mathcal{F}_\Delta\}$, where we recall from \eqref{eq:sigmaalphabeta} that $\Sigma^{\boldsymbol{\alpha},\boldsymbol{\beta}}_G$ is the set of spin assignments where exactly $n\alpha_i,n\beta_i$ vertices are assigned the spin $i\in [q]$ on $L,R$, respectively. We first show the lower bound on $Z^{\mathrm{pmer}}_G$. Consider the polymer model corresponding to a maximal biclique $(S,T)\in \mathcal{B}_\Delta$. Every configuration $\Gamma\in \Omega^{S,T}_G$ maps to a set of spin assignments \[\Sigma^{S,T}_G(\Gamma)=\{\sigma:V\rightarrow [q]\mid \sigma(V_\Gamma)=\sigma_\Gamma,\, \sigma(L\backslash V_\Gamma)\subseteq S,\, \sigma(R\backslash V_\Gamma)\subseteq T\},\] where we recall that $\sigma_\Gamma$ is a spin assignment on $V_\Gamma$ that satisfies $\sigma_\Gamma(V_\Gamma\cap L)\subseteq [q]\backslash S$ and $\sigma_\Gamma(V_\Gamma\cap R)\subseteq [q]\backslash T$. Therefore, for distinct $\Gamma,\Gamma'\in \Omega^{S,T}_G$ we have that the sets $\Sigma^{S,T}_G(\Gamma)$ and $\Sigma^{S,T}_G(\Gamma')$ are disjoint. Let $\Sigma^{S,T}_G=\bigcup_{\Gamma\in \Omega^{S,T}_G} \Sigma^{S,T}_G(\Gamma)$. Using that configurations $\Gamma$ consist of disjoint $G^3$-connected sets, we obtain that the aggregate weight $\sum_{\sigma\in \Sigma^{S,T}_G(\Gamma)}w_G(\sigma)$ equals $\big(\mbox{$\sum_{i\in S}$}\,\lambda_i\big)^n \big(\mbox{$\sum_{j\in T}$}\,\lambda_j\big)^n w^{S,T}_G(\Gamma)$ (see for example \cite[Lemma 17]{biclique}), and therefore \[\big(\mbox{$\sum_{i\in S}$}\,\lambda_i\big)^n \big(\mbox{$\sum_{j\in T}$}\,\lambda_j\big)^n Z^{S,T}_G=\sum_{\sigma\in \Sigma^{S,T}_G}w_G(\sigma).\] Moreover, note that for $(\boldsymbol{\alpha},\boldsymbol{\beta})$ with $\| (\boldsymbol{\alpha},\boldsymbol{\beta})-(\mathbf{g}_S,\mathbf{g}_T)\|_\infty \le \tfrac{1}{12\Delta q}$, the number of vertices in $L$ that do not get a spin in $S$ is at most $\tfrac{n}{12\Delta}$, and similarly for vertices in $R$ that do not get a spin in $T$, for a total of at most $\tfrac{n}{6\Delta}$ vertices, giving that $\Sigma^{\boldsymbol{\alpha},\boldsymbol{\beta}}_G\subseteq \Sigma^{S,T}_G$. Observe now that every $(\boldsymbol{\alpha},\boldsymbol{\beta})\notin\mathcal{F}_\Delta$ is not an $\eta$-phase vector and therefore $Z^{\boldsymbol{\alpha},\boldsymbol{\beta}}_G\leq \mathrm{e}^{-\eta n} Z_G$. There are at most $n^{2q}$ such pairs with $n\boldsymbol{\alpha},n\boldsymbol{\beta}\in\mathbb{Z}^q$ and therefore, combining the above, it follows that \[Z_G-Z^{\mathrm{pmer}}_G\leq \sum_{(\boldsymbol{\alpha},\boldsymbol{\beta})\notin\mathcal{F}_\Delta} Z^{\boldsymbol{\alpha},\boldsymbol{\beta}}_G\leq n^{2q} \mathrm{e}^{-\eta n} Z_G\leq \mathrm{e}^{-\Omega(n)}Z_G,\] showing that $Z^{\mathrm{pmer}}_G\geq (1-\mathrm{e}^{-\Omega(n)})Z_G$. We next show the upper bound on $Z^{\mathrm{pmer}}_G$. Consider $\Sigma^{\mathrm{overlap}}_G=\bigcup_{(S,T)\neq (S',T')\in \mathcal{B}_\Delta} (\Sigma^{S,T}_G\cap \Sigma^{S',T'}_G)$. We will show shortly that $\Sigma^{\mathrm{overlap}}_G\subseteq \Sigma_G\backslash \Sigma_G^{\max}$.
Assuming this for the moment, we conclude the proof by noting first that for $(\boldsymbol{\alpha},\boldsymbol{\beta})$ which is not an $\eta$-phase vector it holds that $Z^{\boldsymbol{\alpha},\boldsymbol{\beta}}_G/Z_G<\mathrm{e}^{-\eta n}$. Therefore we obtain that the aggregate weight of spin assignments in $\Sigma^{\mathrm{overlap}}_G$ is at most $n^{2q} \mathrm{e}^{-\eta n}Z_G=\mathrm{e}^{-\Omega(n)}Z_G$, yielding that $Z_G\geq (1-\mathrm{e}^{-\Omega(n)})Z^{\mathrm{pmer}}_G$. It remains to prove that $\Sigma^{\mathrm{overlap}}_G\subseteq \Sigma_G\backslash \Sigma_G^{\max}$. For the sake of contradiction, suppose otherwise. Then there exist a spin assignment $\sigma$, distinct bicliques $(S,T), (S',T')\in \mathcal{B}_\Delta$, and a biclique $(S^*,T^*)\in \mathcal{B}_\Delta$ such that $\sigma\in \Sigma^{S,T}_G\cap \Sigma^{S',T'}_G$ and $\sigma\in \Sigma^{\boldsymbol{\alpha},\boldsymbol{\beta}}_G$ for some $(\boldsymbol{\alpha},\boldsymbol{\beta})$ with $\| (\boldsymbol{\alpha},\boldsymbol{\beta})-(\mathbf{g}_{S^*},\mathbf{g}_{T^*})\|_\infty \le \tfrac{1}{12\Delta q}$. Since $(S,T)$ and $(S',T')$ are distinct and maximal, we may assume w.l.o.g. that $S\neq S^*$ and $T\neq T^*$. Since $(S^*,T^*)$ is maximal, it cannot be the case that $S^*\subseteq S$ and $T^*\subseteq T$, so assume w.l.o.g. that $i\in S^*\backslash S$. Let $n_i$ be the number of vertices in $L$ that have the spin $i$ under $\sigma$. Since $\sigma\in \Sigma^{S,T}_G(\Gamma)$ for some $\Gamma\in \Omega^{S,T}_G$ and $i\notin S$, from Lemma~\ref{lem:configupper} we have that $n_i\leq |V_{\Gamma}|\leq 12n/\Delta$. Then, using the assumption $\Delta \min(\boldsymbol{\lambda})\geq 15q$ and the fact that the entries of $\boldsymbol{\lambda}$ are $\leq 1$, we have the crude bound $\frac{\lambda_i}{\sum_{i'\in S^*}\lambda_{i'}}\geq \min (\boldsymbol{\lambda})/q\geq 15/\Delta$, and therefore $\big|\frac{\lambda_i}{\sum_{i'\in S^*}\lambda_{i'}}-\tfrac{n_i}{n}\big|\geq \tfrac{3}{\Delta }>\tfrac{1}{12\Delta q}$, contradicting the choice of $(S^*,T^*)$. \end{proof} We are now ready to prove Lemma~\ref{lem:main3}, which we restate here for convenience. The proof uses the Markov chain approach for studying polymer models in \cite{chen2019fast}, as employed for general spin systems in \cite{biclique}. \begin{lemmamainthree} \statelemmamainthree \end{lemmamainthree} \begin{proof} The main ingredient that we need to check is the so-called polymer sampling condition \cite[Definition 4]{chen2019fast} for each polymer model defined by bicliques $(S,T)\in \mathcal{B}_{\Delta}$; this gives an $\epsilon$-counting algorithm for $Z^{S,T}_G$ and an $\epsilon$-sampling algorithm for $\mu^{S,T}_G$ with the desired guarantees. The estimates in Lemma~\ref{lem:estimate} then yield that these algorithms can be extended to algorithms for $Z_G$, by the argument in \cite[Proof of Theorem 3]{biclique}. The polymer sampling condition captures that the weight of the polymers, as a function of their size, decays exponentially relative to the growth rate of the number of polymers containing a given vertex; in this case, since we are working with $G^3$, whose degree is bounded by $\Delta^3$, the condition we need to check, cf. \cite[Definition 4]{chen2019fast}, is that $w^{S,T}_G(\gamma)\leq \mathrm{e}^{-\tau |\gamma|}$ for some constant $\tau\geq 5+3\log((q-1)\Delta^3)$. Let $\gamma=(V_\gamma,\sigma_\gamma)\in \mathcal{P}^{S,T}_G$.
Since the entries of $\mathbf{B},\boldsymbol{\lambda}$ are $\leq 1$, we have from \eqref{eq:wSTg} that \[w^{S,T}_G(\gamma)\leq \frac{\prod_{u\in \partial V_\gamma}F_u}{\big(\sum_{i\in S} \lambda_i\big)^{|V_\gamma^+\cap L|}\big(\sum_{j\in T} \lambda_j\big)^{|V_\gamma^+\cap R|}}.\] Now for $u\in \partial V_\gamma\cap L$, recall that $F_u=\sum_{i\in S}\lambda_i\prod_{v\in V_\gamma\cap \partial u}B_{i,\sigma_\gamma(v)}$. We have that there exist $i\in S$ and $v\in V_\gamma\cap\partial u$ such that $B_{i,\sigma_\gamma(v)}\leq \delta$; otherwise, since $\mathbf{B}$ is a $\delta$-matrix, we would have that $B_{i,\sigma_\gamma(v)}=1$ for all $i\in S$ and all $v\in V_\gamma\cap\partial u$, and therefore $(S,T\cup \{\sigma_\gamma(v)\})$ would be a biclique, contradicting the maximality of $(S,T)$ since $\sigma_\gamma(v)\notin T$ (by the definition of $\mathcal{P}^{S,T}_G$). We therefore obtain that $F_u\leq \sum_{i\in S}\lambda_i -(1-\delta)\min(\boldsymbol{\lambda})$. Similarly, for $u\in \partial V_\gamma\cap R$, we have that $F_u\leq \sum_{j\in T}\lambda_j -(1-\delta)\min(\boldsymbol{\lambda})$. Using the crude bounds $\min(\boldsymbol{\lambda})\leq \sum_{i\in S}\lambda_i,\sum_{j\in T}\lambda_j\leq q$, we obtain that \[w^{S,T}_G(\gamma)\leq (\min(\boldsymbol{\lambda}))^{-|V_\gamma|}\Big(1-\frac{(1-\delta)\min(\boldsymbol{\lambda})}{q}\Big)^{|\partial V_{\gamma}|}.\] By the $\rho$-maximality assumption (or, more precisely, by the definition of the set of polymers $\mathcal{P}^{S,T}_G$), we have that $|V_{\gamma}|\leq 2q \rho n\leq \tfrac{n}{6\Delta}$, and therefore, by Lemma~\ref{lem:expansion}, $|\partial V_{\gamma}|\geq \tfrac{\Delta}{7} |V_{\gamma}|$. We therefore have that \[w^{S,T}_G(\gamma)\leq \mathrm{e}^{-\big(\log(\min(\boldsymbol{\lambda}))+\tfrac{\Delta(1-\delta)\min(\boldsymbol{\lambda})}{7q}\big)|V_{\gamma}|}.\] Thus the polymer sampling condition is satisfied as long as $\tfrac{\Delta(1-\delta)\min(\boldsymbol{\lambda})}{7q}+\log(\min(\boldsymbol{\lambda}))\geq 5+3\log((q-1)\Delta^3)$, which gives the desired conclusion. \end{proof} \section{Establishing phase maximality}\label{sec:maximal} In this section we establish phase maximality for the colorings and hard-core models. In particular, we prove Lemmas~\ref{lem:coloring} and \ref{lem:ind} from Section~\ref{subsec:our_approach}. Recall that the tree-recursion on the $\Delta$-regular tree for a general $q$-spin system with interaction matrix $\mathbf{B}$ and activity vector $\boldsymbol{\lambda}$ is given by \begin{equation}\tag{\ref{eq:tr}} r_i \propto \lambda_i \left(\mbox{$\sum_{j\in [q]}$} B_{ij} c_j\right)^{\Delta-1} \mbox{ for $i \in [q]$}; \qquad c_j \propto \lambda_j \left(\mbox{$\sum_{i\in [q]}$} B_{ij} r_i\right)^{\Delta-1} \mbox{ for $j \in [q]$}. \end{equation} For the colorings and hard-core models, Lemma~\ref{lem:maxima} shows that the fixpoints of \eqref{eq:tr} include all maximizers of the function \[ \Phi_{\mathbf{B},\boldsymbol{\lambda},\Delta}(\ensuremath{\mathbf{r}},\ensuremath{\mathbf{c}})= \frac{\ensuremath{\mathbf{r}}^\intercal \mathbf{B} \ensuremath{\mathbf{c}}}{\|\boldsymbol{\Lambda}^{-1}\ensuremath{\mathbf{r}}\|_p \|\boldsymbol{\Lambda}^{-1}\ensuremath{\mathbf{c}}\|_p}, \] where $p=\frac{\Delta}{\Delta-1}$ and $\boldsymbol{\Lambda}$ is the diagonal matrix whose $i$-th diagonal entry is $\lambda_i^{1/\Delta}$.
Finally, Corollary~\ref{lem:phases} implies that, to show $\tfrac{1}{12\Delta q}$-maximality with respect to a set of maximal bicliques $\mathcal{B}_\Delta$, it is enough to show that all maximizers $(\ensuremath{\mathbf{r}}^*,\ensuremath{\mathbf{c}}^*)$ of $\Phi_{\mathbf{B},\boldsymbol{\lambda},\Delta}$ are Hessian dominant and satisfy $\|(\boldsymbol{\alpha}^*,\boldsymbol{\beta}^*)-(\mathbf{g}_S,\mathbf{g}_T)\|_\infty\leq \tfrac{1}{15\Delta q}$ for some $(S,T)\in \mathcal{B}_\Delta$, where $(\boldsymbol{\alpha}^*,\boldsymbol{\beta}^*)=(\mathbf{f}(\ensuremath{\mathbf{r}}^*),\mathbf{f}(\ensuremath{\mathbf{c}}^*))$ is given by \[ \alpha^*_i= \frac{ ( \lambda_i^{-1/\Delta} r^*_i )^p }{\|\boldsymbol{\Lambda}^{-1}\ensuremath{\mathbf{r}}^*\|_p^p} \quad\text{and}\quad \beta^*_i= \frac{ ( \lambda_i^{-1/\Delta} c^*_i )^p }{\|\boldsymbol{\Lambda}^{-1}\ensuremath{\mathbf{c}}^*\|_p^p} \quad\text{for~} i \in [q]. \] \subsection{Phase maximality for colorings} In this subsection we prove Lemma~\ref{lem:coloring} by showing phase maximality for colorings. Let $q,\Delta\geq 3$ be integers and let $d = \Delta - 1$. For $q$-colorings, using the correspondence in Example~\ref{example}, the tree-recursion can be written as: \begin{equation}\label{eq:coloring-tr} r_i = \frac{(1-c_i)^{\Delta-1}}{\sum_{j\in [q]}(1-c_j)^{\Delta-1}}, \quad c_i=\frac{(1-r_i)^{\Delta-1}}{\sum_{j\in [q]}(1-r_j)^{\Delta-1}} \quad\mbox{for~} i\in [q]. \end{equation} Note that $(\mathbf{g}_{[q]}, \mathbf{g}_{[q]})$ is a trivial solution to \eqref{eq:coloring-tr}. The following lemma summarizes results from \cite[Section 7]{antiferro} and describes the nontrivial fixpoints of the tree recursion \eqref{eq:coloring-tr} when $q\ge4$ is an even integer in the non-uniqueness region $q<\Delta$. \begin{lemma}[\cite{antiferro}] \label{lem:tree-dphase} Suppose that $q\ge 4$ is even and $\Delta > q$. Then there is a one-to-one correspondence between all maximizers of $\Phi_{\mathbf{B},\mathbf{1},\Delta}$ and all bicliques in $\mathcal{B}_\Delta=\{(S,[q]\backslash S) \, \big| \, |S|=\frac{q}{2}\}$: there exist $a=a(\Delta,q)$, $b=b(\Delta,q)$ satisfying $0< b < a < \frac{2}{q}$ and $a+b=\frac{2}{q}$, such that every biclique $(S,[q] \setminus S)\in \mathcal{B}_\Delta$ corresponds to a maximizer $(\ensuremath{\mathbf{r}}^*,\ensuremath{\mathbf{c}}^*)$ of the form \[ r^*_i = \left\{ \begin{aligned} a,\quad & i\in S;\\ b,\quad & i\in [q] \setminus S, \end{aligned} \right. \quad\text{and}\quad c^*_i = \left\{ \begin{aligned} b,\quad & i\in S;\\ a,\quad & i\in [q] \setminus S. \end{aligned} \right. \] Furthermore, all maximizers of $\Phi_{\mathbf{B},\mathbf{1},\Delta}$ are Hessian dominant. \end{lemma} We now prove Lemma~\ref{lem:coloring}, which we restate here for convenience. \begin{lemmacoloring} \statelemmacoloring \end{lemmacoloring} \begin{proof} Let $k = \frac{q}{2}$ and $d = \Delta - 1$ for convenience.
By Lemmas~\ref{lem:maxima} and \ref{lem:tree-dphase}, for a given biclique $(S, [q] \setminus S) \in \mathcal{B}_\Delta$, the corresponding maximizer $(\ensuremath{\mathbf{r}}^*,\ensuremath{\mathbf{c}}^*)$ of $\Phi_{\mathbf{B},\mathbf{1},\Delta}$ satisfies the tree-recursion \eqref{eq:coloring-tr} as follows: \[ \left\{ \begin{aligned} a &= \frac{\left( ka + (k-1)b \right)^d}{k \left[ \left( ka + (k-1)b \right)^d + \left( (k-1)a + kb \right)^d \right]};\\ b &= \frac{\left( (k-1)a + kb \right)^d}{k \left[ \left( ka + (k-1)b \right)^d + \left( (k-1)a + kb \right)^d \right]}, \end{aligned} \right. \] where $a,b$ are the constants given in Lemma~\ref{lem:tree-dphase}. We are going to show that, for sufficiently large $\Delta$, the constant $a$ is close to $\frac{2}{q}$ and the constant $b$ is close to $0$. Taking the ratio of $a$ and $b$, we get \[ \frac{a}{b} = \left( \frac{ka + (k-1)b}{(k-1)a + kb} \right)^d, \mbox{ and therefore } h = \left( \frac{h + t}{th + 1} \right)^d, \] where $h = \frac{a}{b} > 1$ and $t = \frac{k-1}{k} = 1- \frac{2}{q}$. Consider the function $f(x) = \left( \frac{x+t}{tx+1} \right)^d$. Then $h$ is a fixpoint of $f$ (i.e., $f(h) = h$). In fact, the function $f$ has three fixpoints: $x = h>1$, $x = 1$, and $x = \frac{1}{h} < 1$. Let $h_0 = \mathrm{e}^{\frac{d}{2q}}$; note that $h_0 > 3$ since $d\ge 3q$ in our parameter regime. We show next that $h > h_0$. By considering the monotone intervals of $f(x) - x$, it suffices to show that $f(h_0) > h_0$. We then compute that \begin{align*} \frac{1}{d} \log f(h_0) &= \log \left( \frac{h_0+t}{th_0+1} \right) = \log \left( 1 + \frac{(1-t)(h_0-1)}{th_0+1} \right)\\ &> \frac{(1-t)(h_0-1)}{2(th_0+1)} > \frac{1-t}{4} = \frac{1}{2q}, \end{align*} where the first inequality follows from $\log(1+\varepsilon) > \frac{\varepsilon}{2}$ for $\varepsilon\in(0,1]$ and the second inequality is due to $h_0>3$ and $t<1$. Therefore, $f(h_0) > \mathrm{e}^{\frac{d}{2q}} = h_0$ and thus $h > h_0$. Finally, notice that $(\boldsymbol{\alpha}^*,\boldsymbol{\beta}^*)=(\mathbf{f}(\ensuremath{\mathbf{r}}^*),\mathbf{f}(\ensuremath{\mathbf{c}}^*))$ is also of the form \[ \alpha^*_i = \left\{ \begin{aligned} a',\quad & i\in S;\\ b',\quad & i\in [q] \setminus S, \end{aligned} \right. \quad\text{and}\quad \beta^*_i = \left\{ \begin{aligned} b',\quad & i\in S;\\ a',\quad & i\in [q] \setminus S, \end{aligned} \right. \] where $0< b'<a'<\frac{2}{q}$, $a'+b' = \frac{2}{q}$, and $\frac{a'}{b'} = \left( \frac{a}{b} \right)^p > \frac{a}{b} = h > h_0 = \mathrm{e}^{\frac{d}{2q}}$. It follows that \[ \norm{(\boldsymbol{\alpha}^*,\boldsymbol{\beta}^*)-(\mathbf{g}_S,\mathbf{g}_{[q] \setminus S})}_\infty = b' = \frac{2}{q (\frac{a'}{b'} + 1)} < \frac{2\mathrm{e}^{-\frac{d}{2q}}}{q} \le \frac{1}{15 \Delta q}, \] where the last inequality holds for $\Delta \ge 8 q \log \Delta$. The lemma then follows from Corollary~\ref{lem:phases}, using the fact from Lemma~\ref{lem:tree-dphase} that all maximizers are Hessian dominant. \end{proof} Our proof of Lemma~\ref{lem:coloring} can also be modified to show that $O(\frac{1}{\Delta q})$-maximality fails when $q = \Omega(\frac{\Delta}{\log \Delta})$. This allows us to prove Lemma~\ref{lem:fail} from Section~\ref{subsec:our_approach}. \begin{proof}[Proof of Lemma~\ref{lem:fail}] We use the same notation and approach as in the proof of Lemma~\ref{lem:coloring}. In particular, we show that $h < h_1$ for $h_1 = \mathrm{e}^{\frac{4d}{q}}$, which can be deduced from $f(h_1) < h_1$.
We have that \[ \frac{1}{d} \log f(h_1) = \log \left( \frac{h_1+t}{th_1+1} \right) = \log \left( 1 + \frac{(1-t)(h_1-1)}{th_1+1} \right) < \frac{(1-t)(h_1-1)}{th_1+1} \le \frac{1-t}{t} \le \frac{4}{q}, \] where the last inequality is because $t = 1-\frac{2}{q} \ge \frac{1}{2}$. Therefore, we get $f(h_1) < \mathrm{e}^{\frac{4d}{q}} = h_1$ and consequently $h < h_1$. It follows that $\frac{a'}{b'} = h^p < \mathrm{e}^{\frac{4dp}{q}} = \mathrm{e}^{\frac{4\Delta}{q}}$, and thus for $q \ge \frac{4\Delta}{\log \Delta}$ one has \[ \norm{(\boldsymbol{\alpha}^*,\boldsymbol{\beta}^*)-(\mathbf{g}_S,\mathbf{g}_{[q] \setminus S})}_\infty = b' = \frac{2}{q (\frac{a'}{b'} + 1)} > \frac{\mathrm{e}^{-\frac{4\Delta}{q}}}{q} \ge \frac{1}{\Delta q}. \] Combining with Lemma~\ref{lem:hessiandominant}, this gives that $\frac{1}{2\Delta q}$-maximality fails when $q \ge \frac{4\Delta}{\log \Delta}$. \end{proof} \subsection{Phase maximality for hard-core model} In this subsection we consider the hard-core model and establish phase maximality. The goal is to prove Lemma~\ref{lem:ind} from Section~\ref{subsec:our_approach}. Let $\Delta \ge 3$ be an integer and $\lambda > 0$ be a real number. Recall from Example~\ref{example} that the interaction matrix $\mathbf{B}=\{B_{ij}\}_{i,j\in\{0,1\}}$ for the hard-core model is given by $B_{00} = B_{01} = B_{10} = 1$ and $B_{11} = 0$, and the activity vector $\boldsymbol{\lambda}=\{\lambda_i\}_{i\in\{0,1\}}$ for fugacity $\lambda$ is given by $\lambda_0 = 1$ and $\lambda_1 = \lambda$. Hence, the tree-recursion \eqref{eq:tr} becomes: \[ r_1 = \frac{\lambda c_0^{\Delta-1}}{\lambda c_0^{\Delta-1} + 1}, \quad r_0 = \frac{1}{\lambda c_0^{\Delta-1} + 1}, \quad c_1 = \frac{\lambda r_0^{\Delta-1}}{\lambda r_0^{\Delta-1} + 1}, \quad c_0 = \frac{1}{\lambda r_0^{\Delta-1} + 1}. \] As is standard, it is easier to work with the ratios $x = \frac{r_1}{r_0}$ and $y = \frac{c_1}{c_0}$, so that the tree-recursion can be equivalently written as \begin{equation}\label{eq:hc-tr} x = \frac{\lambda}{(1+y)^{\Delta-1}}, \quad y = \frac{\lambda}{(1+x)^{\Delta-1}}. \end{equation} Note that the function $f(x) = \frac{\lambda}{(1+x)^{\Delta-1}}$ has a unique fixpoint $x_0$, and we are interested in the nontrivial solutions to \eqref{eq:hc-tr} (i.e., $(x,y) \neq (x_0,x_0)$). We restate Lemma~\ref{lem:ind} here for convenience. \begin{lemmaind} For $\Delta\geq 50$ and $\lambda \ge \tfrac{50}{\Delta}$, the hard-core model with fugacity $\lambda$ is $\tfrac{1}{24\Delta}$-maximal with respect to the set of bicliques $\Bc_\Delta=\{(0,01),(01,0)\}$. \end{lemmaind} \begin{proof} Take an arbitrary maximizer $(\ensuremath{\mathbf{r}}^*,\ensuremath{\mathbf{c}}^*)$ of $\Phi_{\mathbf{B},\boldsymbol{\lambda},\Delta}$ and let $x = r^*_1 / r^*_0$, $y = c^*_1 / c^*_0$. It is known that $x \neq x_0$, $y \neq x_0$, and $(\ensuremath{\mathbf{r}}^*,\ensuremath{\mathbf{c}}^*)$ is Hessian dominant when $\lambda > \lambda_c(\Delta)$ is in the non-uniqueness region; see, e.g., \cite{GSV-ising,antiferro}. Suppose that $x < y$ without loss of generality. We first show that $x \le \frac{1}{30\lambda\Delta^2}$ when $\Delta \ge 50$ and $\lambda \ge \frac{50}{\Delta}$. By \eqref{eq:hc-tr} we have \[ \lambda = x(1+y)^{\Delta-1} = y(1+x)^{\Delta-1}. \] Define $g(t) = \frac{(1+t)^{\Delta-1}}{t}$, and note that $g(x) = g(y)$. The function $g(t)$ is monotone decreasing when $t < \frac{1}{\Delta-2}$ and monotone increasing when $t > \frac{1}{\Delta-2}$. This implies $x < \frac{1}{\Delta-2} < y$.
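(As a quick numeric sanity check, outside the proof: plainly iterating \eqref{eq:hc-tr} from an asymmetric start converges to the nontrivial fixpoint in this deep non-uniqueness regime, and both the ordering above and the bound being established can be verified directly; a minimal sketch with illustrative parameters follows.)

```python
Delta, lam = 50, 1.0      # the lemma's regime: Delta >= 50, lam >= 50/Delta
x, y = 0.0, lam           # asymmetric start, targeting the fixpoint with x < y
for _ in range(200):      # plain iteration of eq. (hc-tr)
    x = lam / (1.0 + y) ** (Delta - 1)
    y = lam / (1.0 + x) ** (Delta - 1)
print(x < 1.0 / (Delta - 2) < y)         # True: x < 1/(Delta-2) < y
print(x <= 1.0 / (30 * lam * Delta**2))  # True: the bound being established
```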
We then deduce from \eqref{eq:hc-tr} that for $\Delta \ge 50$ and $\lambda \ge \frac{50}{\Delta}$, \[ y = \frac{\lambda}{(1+x)^{\Delta-1}} \ge \frac{\lambda}{( 1+\frac{1}{\Delta-2})^{\Delta-1}} \ge \frac{\lambda}{3} > \frac{1}{\Delta -2}. \] Hence, \[ g(x) = g(y) \ge g\left( \frac{\lambda}{3} \right) = \frac{3}{\lambda} \left( 1 + \frac{\lambda}{3} \right)^{\Delta-1}. \] Meanwhile, we have \[ g\left( \frac{1}{30\lambda \Delta^2} \right) = 30\lambda \Delta^2 \left( 1 + \frac{1}{30\lambda \Delta^2} \right)^{\Delta-1} \le 30\lambda \Delta^2 \mathrm{e}^{\frac{1}{30\lambda \Delta}} \le 33 \lambda \Delta^2. \] We claim that \begin{equation}\label{eq:toshow} 11 (\lambda \Delta)^2 \le \left( 1 + \frac{\lambda}{3} \right)^{\Delta-1} \end{equation} when $\Delta \ge 50$ and $\lambda \ge \frac{50}{\Delta}$. Given \eqref{eq:toshow}, we get \[ g\left( \frac{1}{30\lambda \Delta^2} \right) \le 33 \lambda \Delta^2 \le \frac{3}{\lambda} \left( 1 + \frac{\lambda}{3} \right)^{\Delta-1} \le g(x) \] and thus $x \le \frac{1}{30\lambda \Delta^2}$ as wanted. It remains to prove \eqref{eq:toshow}. We consider two cases. If $\lambda \le 1$, then we have \[ \frac{4}{3} \left( 1 + \frac{\lambda}{3} \right)^{\Delta-1} \ge \left( 1 + \frac{\lambda}{3} \right)^\Delta \ge \mathrm{e}^{\frac{\lambda \Delta}{3+\lambda}} \ge \mathrm{e}^{\frac{\lambda \Delta}{4}} \ge 15 (\lambda \Delta)^2, \] where the second inequality follows from $1+\varepsilon \ge \exp(\frac{\varepsilon}{1+\varepsilon})$ for $\varepsilon \in [0,1]$, and the last inequality holds when $\lambda \Delta \ge 50$. Meanwhile, if $\lambda > 1$ then we have \[ \frac{9}{\lambda^2} \left( 1 + \frac{\lambda}{3} \right)^{\Delta-1} \ge \left( 1 + \frac{\lambda}{3} \right)^{\Delta-3} \ge \left( \frac{4}{3} \right)^{\Delta-3} \ge 100 \Delta^2, \] where the last inequality holds when $\Delta \ge 50$. Therefore, \eqref{eq:toshow} holds when $\Delta \ge 50$ and $\lambda \ge \frac{50}{\Delta}$, and we conclude with $x \le \frac{1}{30\lambda\Delta^2}$ in this parameter regime. Now, for a fixed $\Delta$, both the fixpoint $(\ensuremath{\mathbf{r}}^*,\ensuremath{\mathbf{c}}^*)$ of the tree recursion with $r^*_1 / r^*_0 < c^*_1 / c^*_0$ and the ground state $(\mathbf{g}_0,\mathbf{g}_{01})$ converge to $(1,0,0,1)$ as $\lambda$ tends to infinity. Consequently, $(\boldsymbol{\alpha}^*,\boldsymbol{\beta}^*)$ converges to the same point as well. Hence, for $3\le \Delta < 50$, there exists a universal constant $C>0$ such that $\norm{(\boldsymbol{\alpha}^*,\boldsymbol{\beta}^*)-(\mathbf{g}_0,\mathbf{g}_{01})}_\infty \le \frac{1}{30 \Delta}$ whenever $\lambda \ge \frac{C}{\Delta}$. It remains to deal with the case that $\Delta \ge 50$ and $\lambda \ge \frac{50}{\Delta}$. Observe that \[ \norm{(\boldsymbol{\alpha}^*,\boldsymbol{\beta}^*)-(\mathbf{g}_0,\mathbf{g}_{01})}_\infty = \max\left\{ \alpha^*_1, \frac{\lambda}{1+\lambda}-\beta^*_1 \right\}. \] We will upper bound the two terms using our bound on $x$. Recall that $p = \frac{\Delta}{\Delta-1}$. First, we have \[ \alpha^*_1 \le \frac{\alpha^*_1}{\alpha^*_0} = \left( \lambda^{-\frac{1}{\Delta}} \frac{r^*_1}{r^*_0} \right)^p = \lambda^{-\frac{1}{\Delta-1}} x^p \le 2 x \le \frac{1}{15 \lambda \Delta^2} \le \frac{1}{30\Delta}, \] where $\lambda^{-\frac{1}{\Delta-1}} \le 2$ in our parameter regime.
Next, notice that \[ \frac{\lambda}{1+\lambda}-\beta^*_1 = \frac{\lambda}{1+\lambda} - \frac{\beta^*_1 / \beta^*_0}{1 + \beta^*_1 / \beta^*_0} = \frac{\lambda - \beta^*_1 / \beta^*_0}{(1+\lambda) ( 1 + \beta^*_1 / \beta^*_0 )} \le \lambda - \frac{\beta^*_1}{\beta^*_0}. \] Since we have \[ \frac{1}{\lambda} \frac{\beta^*_1}{\beta^*_0} = \frac{1}{\lambda} \left( \lambda^{-\frac{1}{\Delta}} \frac{c^*_1}{c^*_0} \right)^p = \left( \frac{y}{\lambda} \right)^p = \frac{1}{(1+x)^\Delta}, \] it follows that \[ \frac{\lambda}{1+\lambda}-\beta^*_1 \le \lambda \left( 1 - \frac{1}{(1+x)^\Delta} \right) \le \lambda \left( 1 - \mathrm{e}^{-\Delta x} \right) \le \lambda\Delta x \le \frac{1}{30\Delta}. \] This yields $\norm{(\boldsymbol{\alpha}^*,\boldsymbol{\beta}^*)-(\mathbf{g}_0,\mathbf{g}_{01})}_\infty \le \frac{1}{30\Delta}$. The lemma then follows from Corollary~\ref{lem:phases}. \end{proof} \bibliographystyle{plain}
{ "timestamp": "2021-05-06T02:05:50", "yymm": "2105", "arxiv_id": "2105.01784", "language": "en", "url": "https://arxiv.org/abs/2105.01784" }
\section{Introduction} \label{sec:intro} Economists, social scientists, engineers, and computer scientists have long studied models for human preferences, under the broad umbrella of social choice theory~\cite{black1948, arrow1951}. Learning from human preferences has found applications in interactive robotics for learning reward functions~\cite{sadigh2017, palan2019}, in medical domains for personalizing assistive devices~\cite{zhang2017, biyik2020}, and in recommender systems for optimizing search engines~\cite{chapelle2012, hofmann2011}. The recent focus on safety in AI has popularized human-in-the-loop learning methods that use human preferences in order to promote value alignment~\cite{christiano2017, saunders2018, amershi2014}. The most popular form of preference elicitation is to make pairwise comparisons~\cite{thurstone1927, bradley1952, luce1959}. Eliciting such feedback involves showing users a pair of objects and asking them a query: Do you prefer object A or object B? Depending on the application, an object could correspond to a product in a search query, or a policy or reward function in reinforcement learning. A vast body of classical work dating back to Condorcet and Borda~\cite{condorcet1785, borda1784} has focused on defining and producing a ``winning'' object from the result of a set of pairwise comparisons. Dudik et al.~\cite{dudik2015} proposed the concept of a von Neumann winner, corresponding to a distribution over objects that beats or ties every other object in the collection. They showed that under an expected utility assumption, such a randomized winner always exists and overcomes limitations of existing winning concepts---the Condorcet winner does not always exist, while the Borda winner fails an independence of clones test~\cite{schulze2011}. However, the assumption of expected utility relies on a strong hypothesis about how humans evaluate distributions over objects: it posits that the probability with which any distribution over objects $\pi$ beats an object is linear in $\pi$. \begin{figure}[t!] \centering\hspace*{-4ex} \captionsetup{font=small} \begin{tabular}{cc} \includegraphics[width=.50\textwidth]{figures/traj_comp.png}& \hspace{3ex} \includegraphics[width=.32\textwidth]{figures/pref_t2.png}\\ (a)&(b) \end{tabular} \caption{\small{(a) Policy A focuses on optimizing comfort, whereas policy B focuses on speed, and we consider pairwise comparisons of these two policies in different environments. (b) Preference matrices, where entry $(i, j)$ of the matrix contains the proportion of comparisons between the pair $(i, j)$ that are won by object $i$. (The diagonals are set to half by convention). The overall pairwise comparisons are given by the matrix $\pref_{\textsf{ex}}^\textsf{Overall}$, and preferences along each of the criteria by matrices $\pref_{\textsf{ex}}^\textsf{Comfort}$ and $\pref_{\textsf{ex}}^\textsf{Speed}$. Policy R is a randomized policy \mbox{$\nicefrac{1}{2}$ A $+ \nicefrac{1}{2}$ B}. While the preference matrices satisfy the linearity assumption individually along speed and comfort, the assumption is violated overall, wherein R is preferred over both A and B.}} \label{fig:intro} \end{figure} \paragraph{Consequences of assuming linearity:} In order to better appreciate these consequences, consider as an example\footnote{Note that while this is an illustrative example, we observe a similar trend in our actual user study in Section~\ref{sec:pol_drive}.} the task of deciding between two policies (say A and B) to deploy in an autonomous vehicle. 
Suppose that these policies have been obtained by optimizing two different objectives, with policy A optimized for comfort and policy B optimized for speed. Figure~\ref{fig:intro}(a) shows a snapshot of these two policies. When compared overall, 60\% of the people preferred Policy A over B -- making A the von Neumann winner. The linearity assumption then posits that a randomized policy that mixes between A and B can \emph{never} be better than both A and B; but we see that Policy R = $\nicefrac{1}{2}$ A $+$ $\nicefrac{1}{2}$ B is actually preferred by a majority over both A and B! Why is the linearity assumption violated here? One possible explanation for such a violation is that the comparison problem is actually \emph{multi-criteria} in nature. If we look at the preferences for the speed and comfort criteria individually in Figure~\ref{fig:intro}(b), we see that Policy A does quite poorly on the speed axis while B lags behind in comfort. In contrast, Policy R does acceptably well along both the criteria and hence is preferred overall to both Policies A and B. It is indeed impossible to come to this conclusion by only observing the overall comparisons. This observation forms the basis of our main proposal: decompose the single overall comparison and ask humans to provide preferences along \emph{simpler} criteria. This decomposition of the comparison task allows us to place structural assumptions on comparisons along each criterion. For instance, we may now posit the linearity assumption along each criterion separately rather than on the overall comparison task. In addition to allowing for simplified assumptions, breaking up the task into such simpler comparisons allows us to obtain richer and more accurate feedback as compared to the single overall comparison. Indeed, such a motivation for eliciting simpler feedback from humans finds its roots in the study of cognitive biases in decision making, which suggests that the human mind resorts to simple heuristics when faced with complicated questions~\cite{tversky1974}. \paragraph{Contributions:} In this paper, we formalize these insights and propose a new framework for preference learning when pairwise comparisons are available along multiple, possibly conflicting, criteria. As shown by our example in Figure~\ref{fig:intro}, a single distribution that is the von Neumann winner along every criterion might not exist. In order to address this issue, we formulate the problem of finding the ``best'' randomized policy by drawing on tools from the literature on vector-valued payoffs in game theory. Specifically, we take inspiration from Blackwell's approachability~\cite{blackwell1956} and introduce the notion of a Blackwell winner. This solution concept generalizes the concept of a von Neumann winner, and recovers the latter when there is only a single criterion present. Section~\ref{sec:prob} describes this framework in detail, and Section~\ref{sec:main} collects our statistical and computational guarantees for learning the Blackwell winner from data. Section~\ref{sec:pol_drive} describes a user study with an autonomous driving environment, in which we ask human subjects to compare self-driving policies along multiple criteria such as safety, aggressiveness, and conservativeness. Our experiments demonstrate that the Blackwell winner is able to better trade off utility along these criteria and produces randomized policies that outperform the von Neumann winner for the overall preferences.
\section{Related work} \label{sec:rw} This paper sits at the intersection of multiple fields of study: learning from pairwise comparisons, multi-objective optimization, preference aggregation, and equilibrium concepts in games. Here we discuss those papers from these areas most relevant to our contributions. \paragraph{Winners from pairwise comparisons.} Most closely related to our work is the field of computational social choice, which has focused on defining notions of winners from overall pairwise comparisons (see the survey~\cite{moulin_2016} for a review). Amongst them, three deterministic notions of a winner---the Condorcet~\cite{condorcet1785}, Borda~\cite{borda1784}, and Copeland~\cite{copeland1951} winners---have been widely studied. In more recent work, Dudik et al.~\cite{dudik2015} introduced the notion of a (randomized) von Neumann winner. Starting with the work of Yue et al.~\cite{yue2012}, there have been several research papers studying an online version of preference learning, called the Dueling Bandits problem. This is a partial-information version of the classic $K$-armed bandit problem, in which feedback takes the form of pairwise comparisons between arms of the bandit. Many algorithms have been proposed, including versions that compete with Condorcet~\cite{zoghi2013, zoghi2015b, ailon2014}, Copeland~\cite{zoghi2015, wu2016}, Borda~\cite{jamieson2015} and von Neumann~\cite{dudik2015} winners. \paragraph{Multi-criteria decision making.} The theoretical foundations of decision making based on multiple criteria have been widely studied within the operations research community. This sub-field---called multiple-criteria decision analysis---has focused largely on scoring, classification, and sorting based on multiple-criteria feedback. See the surveys~\cite{pomerol2012multicriterion, zopounidis2002multicriteria} for thorough overviews of existing methods and their associated guarantees. The problem of eliciting the user's relative weighting of the various criteria has also been considered~\cite{doumpos2007regularized}. However, relatively less attention has been paid to the study of randomized decisions and statistical inference, both of which form the focus of our work. From an applied perspective, the combination of multi-criteria assessments has received attention in disparate fields such as psychometrics~\cite{papay2011different, mcbee2014combining}, healthcare~\cite{teixeira2008statistical}, and recidivism prediction~\cite{walters2011taking}. In many of these cases, a variety of approaches---both linear and non-linear---have been empirically evaluated~\cite{douglas2010estimating}. Justification for non-linear aggregation of scores has a long history in psychology and the behavioral sciences~\cite{goldstein1991judgments,frisch1994beyond,tversky1979prospect}. \paragraph{Blackwell's approachability.} In the game theory literature, Blackwell~\cite{blackwell1956} introduced the notion of approachability as a generalization of zero-sum games to vector-valued payoffs; see Appendix~\ref{app:blackwell} for more details. Blackwell's approachability and its connections with no-regret learning and calibrated forecasting have been extensively studied~\cite{abernethy2011, perchet2013, mannor2014}. These connections have enabled applications of Blackwell's results to problems ranging from constrained reinforcement learning~\cite{miryoosefi2019} to uncertainty estimation for question-answering tasks~\cite{kuleshov2017}.
In contrast with such applications of the repeated vector-valued game, our framework for preference learning along multiple criteria deals with a single-shot game and uses the idea of the target set to define the concept of a Blackwell winner. \paragraph{Stability of Nash equilibria.} Another related body of literature focuses on Nash equilibria in games with perturbed payoffs, under both robust~\cite{aghassi2006robust,lehrer2012partially} and uncertain or Bayesian~\cite{fudenberg1993self} formulations; see the recent survey by Perchet~\cite{perchet2014note}. Perturbation theory for Nash equilibria has been derived in these contexts, and it is well-known that the Nash equilibrium is not (in general) stable to perturbations of the payoff matrix. On the other hand, Dudik et al.~\cite{dudik2015}, working in the context of dueling bandits, consider Nash equilibria of perturbed, symmetric, zero-sum games, but show that the \emph{payoff} of the perturbed Nash equilibrium is indeed stable. That is, even if the equilibrium itself can change substantially with a small perturbation of the payoff matrix, the corresponding payoff is still close to the payoff of the original equilibrium. Our work provides a similar characterization for the multi-criteria setting. \section{Framework for preference learning along multiple criteria} \label{sec:prob} We now set up our framework for preference learning along multiple criteria. We consider a collection of $d$ objects over which comparisons can be elicited along $k$ different criteria. We index the objects by the set $[d] :\,= \{1, \ldots, d\}$ and the criteria by the set $[k]$. \subsection{Probabilistic model for comparisons} Since human responses to comparison queries are typically noisy, we model the pairwise preferences as random variables drawn from an underlying population distribution. In particular, the result of a comparison between a pair of objects $(i_1, i_2)$ along criterion $j$ is modeled as a draw from a Bernoulli distribution, with $p(i_1, i_2; j) = \mathbb{P}(i_1 \succeq i_2 \text{ along criterion } j).$ By symmetry, we must have \begin{equation} \label{eq:symm} p(i_2, i_1; j) = 1 - p(i_1, i_2; j) \quad \mbox{for each triple $(i_1, i_2, j) \in [d] \times [d] \times [k]$.} \end{equation} Letting $\Delta_d$ denote the $d$-dimensional probability simplex, consider two probability distributions $\pi_1 , \pi_2 \in \Delta_{d}$ over the $d$ objects. With a slight abuse of notation, let $p(\pi_1, \pi_2; j)$ denote the probability with which an object drawn from distribution $\pi_1$ beats an object drawn independently from distribution $\pi_2$ along criterion $j$. We assume for each individual criterion $j$ that the probability $p(\pi_1, \pi_2; j)$ is linear in the distributions $\pi_1$ and $\pi_2$, i.e., that it satisfies the relation \begin{equation} \label{eq:expec_pref} p(\pi_1, \pi_2; j) :\,= \ensuremath{\mathbb{E}}_{ \substack{i_1 \sim \pi_1 \\ i_2 \sim \pi_2}} \left[p(i_1, i_2; j) \right]. \end{equation} Equation~\eqref{eq:expec_pref} encodes the per-criterion linearity assumption highlighted in Section~\ref{sec:intro}. We collect the probabilities $\{ p(i_1, i_2; j) \}$ into a \emph{preference tensor} $\mathbf{P} \in [0,1]^{d \times d \times k}$ and denote by $\mathcal{P}_{d, k}$ the set of all preference tensors that satisfy the symmetry condition~\eqref{eq:symm}.
Specifically, we have \begin{equation} \label{eq:pref_set} \mathcal{P}_{d, k} = \{\mathbf{P} \in [0,1]^{d\times d \times k}\; | \; \mathbf{P}(i_1, i_2;j) = 1 - \mathbf{P}(i_2, i_1;j) \text{ for all } (i_1, i_2, j) \}\;. \end{equation} Let $\mathbf{P}^{j}$ denote the $d \times d$ matrix corresponding to the comparisons along criterion $j$, so that $p(\pi_1, \pi_2; j) = \pi_1^\top \mathbf{P}^{j} \pi_2$. Also note that a comparison between a pair of objects $(i_1,i_2)$ induces a \emph{score vector} containing $k$ such probabilities. Denote this vector by $\mathbf{P}(i_1, i_2) \in [0, 1]^k$, whose $j$-th entry is given by $p(i_1, i_2; j)$. Denote by $\mathbf{P}(\pi_1, \pi_2)$ the score vector for a pair of distributions $(\pi_1, \pi_2)$. In the single-criterion case when $k = 1$, each comparison between a pair of objects is along an \emph{overall} criterion. We let ${\pref}_{\textsf{ov}} \in [0,1]^{d \times d }$ represent such an overall comparison matrix. As mentioned in Section~\ref{sec:intro}, most preference learning problems are multi-objective in nature, and the overall preference matrix ${\pref}_{\textsf{ov}}$ is derived as a non-linear combination of per-criterion preference matrices $\{ \mathbf{P}^j \}_{j = 1}^k$. Therefore, even when the linearity assumption~\eqref{eq:expec_pref} holds across each criterion, it might not hold for the \emph{overall} preference ${\pref}_{\textsf{ov}}$. In contrast, when the matrices $\mathbf{P}^j$ are aggregated linearly to obtain the overall matrix ${\pref}_{\textsf{ov}}$, we recover the assumptions of Dudik et al.~\cite{dudik2015}. \subsection{Blackwell winner} \label{sec:prob-bw} Given our probabilistic model for pairwise comparisons, we now describe our notion of a Blackwell winner. When defining a winning distribution for the multi-criteria case, it would be ideal to find a distribution $\robj^*$ that is a von Neumann winner along \emph{each} of the criteria separately, that is, one with $p(\pi^*,i;j)\geq 0.5$ for all objects $i$ and all criteria $j$. However, as shown in our example from Figure~\ref{fig:intro}, such a distribution need not exist: policy A is preferred along the comfort axis, while policy B along speed. We thus need a generalization of the von Neumann winner that explicitly accounts for conflicts between the criteria. \begin{figure}[t!] \centering\hspace*{-4ex} \captionsetup{font=small} \begin{tabular}{cc} \includegraphics[ scale= 0.28]{figures/S1_t2.png}& \hspace{4ex} \includegraphics[scale = 0.28]{figures/S2_t1.png}\\ (a)&(b) \end{tabular} \caption{\small{ In the context of the example introduced in Figure~\ref{fig:intro}, two target sets $S_1$ and $S_2$ that capture trade-offs between comfort and speed. Set $S_1$ requires feasible score vectors to satisfy 40\% of the population along both comfort and speed. Set $S_2$ requires both scores to be greater than $0.3$ but with a linear trade-off: the combined score must be at least $0.9$.}} \label{fig:setup} \end{figure} Blackwell~\cite{blackwell1956} asked a related question for the theory of zero-sum games: how can one generalize von Neumann's minimax theorem to vector-valued games? He proposed the notion of a \emph{target set}: a set of acceptable payoff vectors that the first player in a zero-sum game seeks to attain. Within this context, Blackwell proposed the notion of approachability, i.e., how the player might obtain payoffs in a repeated game that are close to the target set on average. We take inspiration from these ideas to define a solution concept for the multi-criteria preference problem.
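Before doing so, a small numeric sketch of the score-vector notation above may be helpful (numpy assumed; the entries are illustrative toy values, not data from our study): it builds a tensor satisfying the symmetry condition~\eqref{eq:symm} and evaluates a score vector under the per-criterion linearity assumption~\eqref{eq:expec_pref}.

```python
import numpy as np

def score_vector(P, pi1, pi2):
    """Score vector P(pi1, pi2) in [0,1]^k with entries pi1^T P^j pi2,
    i.e., the per-criterion linearity assumption of eq. (expec_pref)."""
    return np.einsum('a,abj,b->j', pi1, P, pi2)

# d = 2 objects, k = 2 criteria; entries chosen to satisfy eq. (symm).
P = np.full((2, 2, 2), 0.5)
P[0, 1, 0], P[1, 0, 0] = 0.9, 0.1   # criterion 1: object 0 usually preferred
P[0, 1, 1], P[1, 0, 1] = 0.2, 0.8   # criterion 2: object 1 usually preferred
pi = np.array([0.5, 0.5])           # an even mixture of the two objects
print(score_vector(P, pi, np.array([1.0, 0.0])))  # mixture vs. pure object 0
```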
Our notion of a winner also relies on a target set, which we denote by $S \subset [0,1]^k$, and which in our setting contains \emph{score vectors}. This set provides a way to combine different criteria by specifying combinations of preference scores that are acceptable. Figure~\ref{fig:setup} provides an example of two such sets. Observe that for our preference learning problem, the target set $S$ is by definition monotonic with respect to the orthant ordering, that is, if $z_1 \geq z_2$ coordinate-wise, then $z_2\in S$ implies $z_1 \in S$. Our goal is then to produce a distribution $\pi^*$ that can achieve a target score vector for any distribution with which it is compared---that is, $ \mathbf{P}(\pi^*, \pi) \in S \text{ for all } \pi \in \Delta_{d}. $ When such a distribution $\pi^*$ exists, we say that the problem instance $(\mathbf{P}, S)$ is \emph{achievable}. On the other hand, it is clear that there are problem instances $(\mathbf{P}, S)$ that are not achievable. While Blackwell's workaround was to move to the setting of repeated games, preference aggregation is usually a one-shot problem. Consequently, our relaxation instead introduces the notion of a \emph{worst-case distance} to the target set. In particular, we measure the distance between any pair of score vectors $u, v \in [0, 1]^k$ as $\rho(u, v) = \| u - v \|$ for some norm $\| \cdot \|$. Using the shorthand $\rho(u, S) :\,= \inf_{v \in S} \| u - v \|$, the \emph{Blackwell winner} for an instance $(\mathbf{P}, S, \|\cdot\|)$ is now defined as the one that minimizes the maximum distance to the set $S$, i.e., \begin{equation} \label{eq:pop-opt} \pi(\mathbf{P}, S, \| \cdot \|) \in \arg \min_{\pi \in \Delta_{d}}[v(\pi; \mathbf{P}, S, \|\cdot\|)], \quad \text{where} \quad v(\pi; \mathbf{P}, S, \|\cdot\|) :\,= \pmax_{\pi' \in \Delta_{d}} \rho(\mathbf{P}(\pi, \pi'), S) \;. \end{equation} Observe that equation~\eqref{eq:pop-opt} has an interpretation as a zero-sum game, where the objective of the minimizing player is to make the score vector $\mathbf{P}(\pi, \pi')$ as close as possible to the target set $S$. We now look at commonly studied frameworks for single-criterion preference aggregation and multi-objective optimization and show how these can be naturally derived from our framework. \paragraph{Example: Preference learning along a single criterion.} A special case of our framework is when we have a single criterion ($k = 1$) and the preferences are given by a matrix ${\pref}_{\textsf{ov}}$. The score ${\pref}_{\textsf{ov}}(i_1, i_2)$ is a scalar representing the probability with which object $i_1$ beats object $i_2$ in an overall comparison. As a consequence of the von Neumann minimax theorem, we have \begin{equation}\label{eq:vn_half} \max_{\pi_1 \in \Delta_{d}}\min_{\pi_2 \in \Delta_{d}} {\pref}_{\textsf{ov}}(\pi_1, \pi_2) {=} \min_{\pi_2 \in \Delta_{d}} \max_{\pi_1 \in \Delta_{d}} {\pref}_{\textsf{ov}}(\pi_1, \pi_2) {=} \frac{1}{2}, \end{equation} with any maximizer above called a von Neumann winner~\cite{dudik2015}. Thus, for \emph{any} preference matrix ${\pref}_{\textsf{ov}}$, a von Neumann winner is preferred to any other object with probability at least $\frac{1}{2}$. Let us show how this uni-criterion formulation can be derived as a special case of our framework. Consider the target set $S = [\frac{1}{2}, 1]$ and choose the distance function $\rho(a, b) = |a - b|$.
By equation~\eqref{eq:vn_half}, the target set $S = [\frac{1}{2}, 1]$ is achievable \emph{for all} preference matrices ${\pref}_{\textsf{ov}}$, and so the von Neumann winner and the Blackwell winner~$\pi({\pref}_{\textsf{ov}}, [\frac{1}{2}, 1], |\cdot|)$ coincide. \hfill $\clubsuit$ \paragraph{Example: Weighted combinations of a multi-criterion problem.} We saw in the previous example that the single criterion preference learning problem is quite special: achievability can be guaranteed by the von Neumann winner for set $S = [\frac{1}{2}, 1]$ for any preference matrix ${\pref}_{\textsf{ov}}$. One of the common approaches used in multi-objective optimization to reduce a multi-dimensional problem to a uni-dimensional counterpart is to introduce a weighted combination of objectives. Formally, consider a weight vector $w \in \Delta_{k}$ and the corresponding preference matrix \begin{align*} \mathbf{P}(w) :\,= \sum_{j \in [k]} w_j \mathbf{P}^j, \end{align*} obtained by combining the preference matrices along the different criteria. A winning distribution can then be obtained by solving for the von Neumann winner of $\mathbf{P}(w)$ given by $\pi(\mathbf{P}(w), [\frac{1}{2}, 1], |\cdot|)$. The following proposition establishes that such an approach is a special case of our framework, and conversely, that there are problem instances in our general framework which cannot be solved by a simple linear weighting of the criteria. \begin{proposition} \label{prop:lin_uni} \begin{enumerate}[label=(\alph*)] \item For every weight vector $w\in \Delta_k$, there exists a target set $S_w \subseteq [0, 1]^k$ such that for any norm $\|\cdot\|$, we have \begin{equation*} \pi(\mathbf{P}, S_w, \|\cdot\|) = \pi(\mathbf{P}(w), [1/2, 1], |\cdot|) \;\; \text{ for all } \;\; \mathbf{P} \in \mathcal{P}_{d, k}. \end{equation*} \item Conversely, there exists a set $S$ and a preference tensor $\mathbf{P}$ with a \emph{unique} Blackwell winner $\robj^*$ such that for all $w \in \Delta_k$, exactly one of the following is true: \begin{equation*} \pi(\mathbf{P}(w), [{1}/{2}, 1], |\cdot|) \neq \robj^* \;\; \text{or} \;\; \arg \max_{\pi \in \Delta_{d}} \big \{ \pmin_{i \in [d]} \mathbf{P}(\pi, i) \big \} = \Delta_d\;. \end{equation*} \end{enumerate} \end{proposition} Thus, while the Blackwell winner is always able to recover any linear combination of criteria, the converse is not true. Specifically, part~(b) of the proposition shows that for a choice of preference tensor $\mathbf{P}$ and target set $S$, either the von Neumann winner for $\mathbf{P}(w)$ is not equal to the Blackwell winner, or it degenerates to the entire simplex $\Delta_d$ and is thus uninformative. Consequently, our framework is strictly more general than weighting the individual criteria. \hfill $\clubsuit$ \section{Statistical guarantees and computational approaches}\label{sec:main} In this section, we provide theoretical results on computing the Blackwell winner from samples of pairwise comparisons along the various criteria. \subsection{Observation model and evaluation metrics.} We operate in the natural passive observation model, where a sample consists of a comparison between two randomly chosen objects along a randomly chosen criterion. Specifically, we assume access to an oracle that when queried with a tuple $\eta = (i_1, i_2, j)$ comprising a pair of objects $(i_1, i_2)$ and a criterion $j$, returns a comparison \mbox{$y(\eta) \sim {\sf Ber}(p(i_1, i_2; j))$}. Each query to the oracle constitutes one sample.
In the passive sampling model, the tuple of objects and criterion is sampled uniformly, with replacement, that is, $(i_1, i_2) {\sim} \mathsf{Unif}\{\binom{[d]}{2}\} \text{ and } j {\sim} \mathsf{Unif}\{[k]\}$, where $\mathsf{Unif}\{A\}$ denotes the uniform distribution over the elements of a set $A$. Given access to samples $\{y_1(\eta_1), \ldots, y_n(\eta_n)\}$ from this observation model, we define the empirical preference tensor (specifically the upper triangular part) \begin{equation} \label{eq:emp_pref} \widehat{\pref}_n(i_1, i_2, j) :\,= \frac{\sum_{\ell = 1}^n y_\ell(\eta_\ell)\mathbb{I}[\eta_\ell = (i_1, i_2, j)]}{1 \vee \sum_{\ell}\mathbb{I}[\eta_\ell = (i_1, i_2, j)]}\quad \text{for } i_1 < i_2 \;, \end{equation} where each entry of the upper-triangular tensor is estimated using a sample average and the remaining entries are calculated to ensure the symmetry relations implied by the inclusion $\widehat{\pref}_n \in \mathcal{P}_{d, k}$. As mentioned before, we are interested in computing the solution $\robj^* :\,= \pi(\mathbf{P}, S, \| \cdot \|)$ to the optimization problem~\eqref{eq:pop-opt}, but with access only to samples from the passive observation model. For any estimator $\widehat{\pi} \in \Delta_d$ obtained from these samples, we evaluate its error based on its value with respect to the tensor $\mathbf{P}$, i.e., \begin{equation} \label{eq:error} \Delta_{\pref} (\widehat{\pi}, \pi^*) :\,= v(\widehat{\pi}; \mathbf{P}, S, \| \cdot \| ) - v(\pi^*; \mathbf{P}, S, \| \cdot \| ). \end{equation} Note that the error $\Delta_{\pref}$ implicitly also depends on the set $S$ and the norm $\| \cdot \|$, but we have chosen our notation to be explicit only in the preference tensor $\mathbf{P}$. For the rest of this section, we restrict our attention to convex target sets $S$ and refer to them as \emph{valid sets}. Having established the background, we are now ready to provide sample complexity bounds on the estimation error $\Delta_{\pref}(\widehat{\pi}, \pi^*)$. \subsection{Upper bounds on the error of the plug-in estimator} Recall the definition of the function $v$ from equation~\eqref{eq:pop-opt}, and define, for each preference tensor $\widetilde{\mathbf{P}}$, an optimizer \begin{equation} \label{eq:problem} \pi(\widetilde{\mathbf{P}}) \in \arg \min_{\pi \in \Delta_{d}} v(\pi; \widetilde{\mathbf{P}}, S, \|\cdot\|)\;. \end{equation} Also recall the empirical preference tensor $\widehat{\pref}_n$ from equation~\eqref{eq:emp_pref}. With this notation, the plug-in estimator is given by $\pihat_{{\sf plug}} = \pi(\widehat{\pref}_n)$ and the target (or true) distribution by $\pi^* = \pi(\mathbf{P})$. While our focus in this section is to provide upper bounds on the error of the plug-in estimator $\pihat_{{\sf plug}}$, we first state a general perturbation bound which relates the error of the optimizer $\pi(\widetilde{\mathbf{P}})$ to the deviation of the tensor $\widetilde{\mathbf{P}}$ from the true tensor $\mathbf{P}$. We use $\mathbf{P}(\cdot, i) \in [0,1]^{d \times k}$ to denote a matrix formed by viewing the $i$-th slice of $\mathbf{P}$ along its second dimension. Finally, recall our definition of the error $\Delta_{\pref}(\widehat{\pi}, \pi^*)$ from equation~\eqref{eq:error}. \begin{theorem}\label{thm:plugin_upper} Suppose the distance $\rho$ is induced by the norm $\|\cdot\|_q$ for some $q \geq 1$.
Then for each valid target set $S$ and preference tensor $\widetilde{\mathbf{P}}$, we have \begin{equation}\label{eq:perturb} \Delta_{\pref}(\pi(\widetilde{\mathbf{P}}), \pi^*) \leq 2 \max_{i \in [d]} \|\widetilde{\mathbf{P}}(\cdot, i) - \mathbf{P}(\cdot, i) \|_{\infty, q}. \end{equation} \end{theorem} Note that this theorem is entirely deterministic: it bounds the deviation in the optimal solution to the problem~\eqref{eq:pop-opt} as a function of perturbations to the tensor $\mathbf{P}$. It also applies uniformly to all valid target sets $S$. In particular, this result generalizes the perturbation result of Dudik et al.~\cite[Lemma~3]{dudik2015} which obtained such a deviation bound for the single criterion problem with $\robj^*$ as the von Neumann winner. Indeed, one can observe that by setting the distance $\rho(u, v) = |u-v|$ in Theorem~\ref{thm:plugin_upper} for the uni-criterion setup, we have the error $\Delta_{\pref}(\pi(\widetilde{\mathbf{P}}), \pi^*) \leq 2 \|\widetilde{\mathbf{P}} - \mathbf{P}\|_{\infty, \infty}$, matching the bound of \cite{dudik2015}. Let us now illustrate a consequence of this theorem by specializing it to the plug-in estimator, and with the distances given by the $\ell_\infty$ norm. \begin{corollary} \label{cor:linfty} Suppose that the distance $\rho$ is induced by the $\ell_\infty$-norm $\|\cdot\|_\infty$. Then there exists a universal constant $c>0$ such that given a sample size $n > cd^2k \log(\frac{c\adimk}{\delta})$, we have for each valid target set $S$ \begin{equation} \label{eq:linfty-exp-ub} \Delta_{\pref}(\pihat_{{\sf plug}}, \pi^*) \leq c \sqrt{ \frac{d^2 k}{n}\log\left(\frac{c d k}{\delta} \right)},\; \end{equation} with probability greater than $1-\delta$. \end{corollary} The bound~\eqref{eq:linfty-exp-ub} implies that the plug-in estimator $\pihat_{{\sf plug}}$ is an $\epsilon$-approximate solution whenever the number of samples scales as $n = \widetilde{ O}(\frac{d^2k}{\epsilon^2})$. Observe that this sample complexity scales quadratically in the number of objects $d$ and linearly in the number of criteria $k$. This scaling represents the effective dimensionality of the problem instance, since the underlying preference tensor $\mathbf{P}$ has $O(d^2k)$ unknown parameters. Notice that the corollary holds for sample size $n= \widetilde{\Omega}(d^2k)$; this should not be thought of as restrictive, since otherwise, the bound~\eqref{eq:linfty-exp-ub} is vacuous.
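To make the estimator concrete, the following sketch implements the passive observation model and the plug-in tensor of equation~\eqref{eq:emp_pref} (numpy assumed; the dimensions and sample size are illustrative, and the observed max-norm error tracks the $\sqrt{d^2 k / n}$ scaling of the corollary):

```python
import numpy as np

rng = np.random.default_rng(0)

def symmetrize(P):
    """Enforce the symmetry of P_{d,k}: P(i2,i1;j) = 1 - P(i1,i2;j), diag = 1/2."""
    d = P.shape[0]
    L = np.tril(np.ones((d, d), dtype=bool), -1)
    for j in range(P.shape[2]):
        M = P[:, :, j]                 # a view, so edits write back into P
        M[L] = (1.0 - M.T)[L]
        np.fill_diagonal(M, 0.5)
    return P

def empirical_tensor(samples, d, k):
    """Plug-in estimate of eq. (emp_pref): sample means on the upper triangle
    (with the 1-or-count denominator), then symmetrized."""
    wins, counts = np.zeros((d, d, k)), np.zeros((d, d, k))
    for (i1, i2, j), y in samples:     # one sample: ((i1, i2, j), y) with i1 < i2
        wins[i1, i2, j] += y
        counts[i1, i2, j] += 1
    return symmetrize(wins / np.maximum(counts, 1.0))

# Passive model: uniform pair (i1 < i2), uniform criterion, Bernoulli outcome.
d, k, n = 4, 2, 20000
P_true = symmetrize(rng.uniform(size=(d, d, k)))
pairs = [(a, b) for a in range(d) for b in range(a + 1, d)]
samples = []
for _ in range(n):
    (i1, i2), j = pairs[rng.integers(len(pairs))], rng.integers(k)
    samples.append(((i1, i2, j), float(rng.random() < P_true[i1, i2, j])))
print(np.abs(empirical_tensor(samples, d, k) - P_true).max())  # ~ sqrt(d^2 k / n)
```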
\subsection{Information-theoretic lower bounds} \label{sec:lower} While Corollary~\ref{cor:linfty} provides an upper bound on the error of the plug-in estimator that holds \emph{for all} valid target sets $S$, it is natural to ask if this bound is sharp, i.e., whether there is indeed a target set $S$ for which one can do no better than the plug-in estimator. In this section, we address this question by providing lower bounds on the minimax risk \begin{equation} \mathfrak{M}_{n, d, k} (S, \| \cdot \|_\infty) :\,= \pinf_{\widehat{\pi}} \psup_{\mathbf{P} \in \mathcal{P}} \ensuremath{\mathbb{E}} \left[ \Delta_{\pref} (\widehat{\pi}, \pi^*) \right], \end{equation} where the infimum is taken over \emph{all} estimators that can be computed from $n$ samples from our observation model. It is important to note that the error $\Delta_{\pref}$ is computed using the $\ell_\infty$ norm and for the set $S$. Our lower bound will apply to the particular choice of target set $S_0 = [1/2, 1]^{k}$. \begin{theorem} \label{thm:lower_passive} There are universal constants $c, c'$ such that for all $d \geq 4$, $k \geq 2$, and $n \geq cd^4k$, we have \begin{equation} \label{eq:minimax-lb} \mathfrak{M}_{n, d, k} (S_0, \| \cdot \|_\infty ) \geq c' \sqrt{\frac{d^2k}{n}}. \end{equation} \end{theorem} Comparing equations~\eqref{eq:linfty-exp-ub} and~\eqref{eq:minimax-lb}, we see that for the $\ell_\infty$-norm and the set $S_0$, we have provided upper and lower bounds that match up to a logarithmic factor in the dimension. Thus, the plug-in estimator is indeed optimal for this pair $(\| \cdot \|_\infty, S_0)$. Further, observe that the above lower bound is non-asymptotic, and holds for all values of $n \gtrsim d^4k$. This condition on the sample size arises as a consequence of the specific packing set used for establishing the lower bound, and improving it is an interesting open problem.
However, Theorem~\ref{thm:lower_passive} raises the question of whether the set $S_0$ is special, or alternatively, whether one can obtain an $S$-dependent lower bound. The following proposition shows that at least \emph{asymptotically}, the sample complexity for \emph{any} polyhedral set $S$ obeys a similar lower bound. \begin{proposition}[Informal]\label{prop:lower_gens} Suppose that we have a valid polyhedral target set $S$, and that $d \geq 4$. There exists a positive integer $n_0(d, k, S)$ such that for all $n \geq n_0(d, k, S)$ we have \begin{equation} \label{eq:minimax-lb-asymptotic} \mathfrak{M}_{n, d, k} (S, \|\cdot\|_\infty ) \gtrsim \sqrt{\frac{d^2k}{n}}\;. \end{equation} \end{proposition} We defer the formal statement and proof of this proposition to Appendix~\ref{app:proof_main}. This proposition establishes that the plug-in estimator $\pihat_{{\sf plug}}$ is indeed optimal in the $\ell_\infty$ norm for a broad class of sets~$S$. Note that the result is asymptotic in nature: in order for the proposition to hold, we require that the number of samples is greater than the value $n_0$. This number $n_0$ depends on problem-dependent parameters, and we provide an exact expression for $n_0$ in the appendix. \subsection{Instance-specific analysis for the plug-in estimator} In the previous section we established that the error $\Delta_{\pref}(\pihat_{{\sf plug}}, \robj^*)$ of the plug-in estimator scales as $\widetilde{O}\left(\sqrt{\frac{d^2k}{n}}\right)$ for any choice of preference tensor $\mathbf{P}$ and target set $S$ when the distance function $\rho = \|\cdot\|_\infty$. In this section, we study the adaptivity properties of the plug-in estimator $\pihat_{{\sf plug}}$ and obtain upper bounds on the error $\Delta_{\pref}(\pihat_{{\sf plug}}, \robj^*)$ that depend on the properties of the underlying problem instance. In the main text, we will restrict our focus to the uni-criterion setup with $k = 1$ and the target set $S = [\frac{1}{2},1]$, in which case the Blackwell winner coincides with the von Neumann winner. Furthermore, we will consider the case where the preference matrix $\mathbf{P}$ has a unique von Neumann winner $\robj^*$. This is formalized in the following assumption. \begin{assumption}[Unique Nash equilibrium]\label{ass:unique-nash-main} The matrix $\mathbf{P}$ belongs to the set of preference matrices $\mathcal{P}_{d,1}$ and has a unique mixed Nash equilibrium $\robj^*$, that is, $\robj^*_i > 0$ for all $i \in [d]$. \end{assumption} For the more general analysis, we refer the reader to Appendix~\ref{app:local-asymp}. For any preference matrix $\mathbf{P} \in \mathcal{P}_{d, 1}$ and the Bernoulli passive sampling model discussed in Section~\ref{sec:main}, let us represent by $\Sigma_i$ the diagonal matrix corresponding to the variances along the $i^{th}$ column of the matrix $\mathbf{P}$ with \begin{align*} \Sigma_i = \text{diag}\big(\mathbf{P}({1,i})\cdot(1-\mathbf{P}({1,i})), \ldots, \mathbf{P}({d,i})\cdot(1-\mathbf{P}({d,i}))\big). \end{align*} Given this notation, we now state an informal corollary (of Theorem~\ref{thm:local-nash} in the appendix) which shows that the error $\Delta_{\pref}(\pihat_{{\sf plug}}, \robj^*)$ depends on the worst-case alignment of the Nash equilibrium $\robj^*$ with the underlying covariance matrices $\Sigma_i$.
\begin{corollary}[Informal]\label{cor:local-mixed-main} For any preference matrix $\mathbf{P}$ satisfying Assumption~\ref{ass:unique-nash-main}, confidence $\delta >0$, and number of samples $n > n_0(\mathbf{P}, \delta)$, we have that the error $\Delta_{\pref}$ of the plug-in estimate $\pihat_{{\sf plug}}$ satisfies \begin{align} \Delta_{\pref}(\pihat_{{\sf plug}}, \robj^*) &\leq c\cdot\sqrt{\frac{\sigma_{\mathbf{P}}^2d^2}{n}\log\left( \frac{d}{\delta}\right)}, \end{align} with probability at least $1-\delta$, where the variance $\sigma_{\mathbf{P}}^2 :\,= \max_{i \in [d]} (\robj^*)^{\top}\Sigma_i \robj^*$. \end{corollary} We defer the proof of the above to Appendix~\ref{app:local-asymp}. A few comments on the above corollary are in order. Observe that it gives a high-probability bound on the error $\Delta_{\pref}$ of the plug-in estimator $\pihat_{{\sf plug}}$. Compared with the upper bounds of Corollaries~\ref{cor:linfty} and~\ref{cor:lone}, the asymptotic bound on the error above is instance-dependent -- the effective variance $\sigma_{\mathbf{P}}^2$ depends on the underlying preference matrix $\mathbf{P}$. In particular, this variance measures how well the underlying von Neumann winner $\robj^*$ aligns with the variance associated with each column of the matrix $\mathbf{P}$. In the worst case, since each entry of $\mathbf{P}$ is bounded above by $1$, the variance satisfies $\sigma_\mathbf{P}^2 \le 1$ and we recover the upper bounds from Corollaries~\ref{cor:linfty} and~\ref{cor:lone} for the uni-criterion case. More interestingly, the bound provided by Corollary~\ref{cor:local-mixed-main} can be significantly sharper (by a possibly dimension-dependent factor) than its worst-case counterpart. We explore concrete examples of this in Appendix~\ref{app:local-asymp}. \subsection{Computing the plug-in estimator} In the last few sections, we discussed the statistical properties of the plug-in estimator, and showed that its sample complexity was optimal in a minimax sense. We now turn to the algorithmic question: how can the plug-in estimator $\pihat_{{\sf plug}}$ be computed? Our main result in this direction is the following theorem that characterizes properties of the objective function $v(\pi; \mathbf{P}, S, \| \cdot \|)$. \begin{theorem} \label{thm:opt} Suppose that the distance function is given by an $\ell_q$ norm $\| \cdot \|_q$ for some $q \geq 1$. Then for each valid target set $S$, the objective function $v(\pi; \mathbf{P}, S, \| \cdot \|_q)$ is convex in $\pi$, and Lipschitz in the $\ell_1$ norm, i.e., \begin{align*} |v(\pi_1; \mathbf{P}, S, \| \cdot \|_q) - v(\pi_2; \mathbf{P}, S, \| \cdot \|_q)| \leq k^{\frac{1}{q}}\cdot \| \pi_1 - \pi_2 \|_1 \text{ for each } \pi_1, \pi_2 \in \Delta_{d}. \end{align*} \end{theorem} Theorem~\ref{thm:opt} establishes that the plug-in estimator can indeed be computed as the solution to a (constrained) convex optimization problem. In Appendix~\ref{app:add_res}, we discuss a few specific algorithms based on zeroth-order and first-order methods for obtaining such a solution and an analysis of the corresponding iteration complexity for these methods; see Propositions~\ref{prop:conv_zero} and~\ref{prop:conv_fopl} in the appendix. These methods differ in the way they access the target set $S$: while zeroth-order methods require a \emph{distance oracle} to the target set, the first-order methods require a stronger \emph{projection oracle} to this constraint set.
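As one concrete instantiation, when $\rho$ is the $\ell_\infty$ distance and $S$ is a box $\prod_{j}[l_j, 1]$, the convex program~\eqref{eq:pop-opt} reduces to a linear program: the score vectors are linear in $\pi$, and since the distance to a convex set is convex in the opponent's mixture, it suffices to guard against pure opponents. The following is a minimal sketch (scipy assumed; a general convex $S$ would instead call the distance or projection oracles discussed above), with toy numbers echoing the introduction rather than study data:

```python
import numpy as np
from scipy.optimize import linprog

def blackwell_winner_box(P, lower):
    """Blackwell winner of eq. (pop-opt) for the l_inf norm and a box target
    set S = prod_j [lower_j, 1], via the LP
        min t  s.t.  lower_j - sum_a pi_a P[a, i, j] <= t  for all i, j,
                     t >= 0,  pi in the simplex.
    Pure opponents i suffice: P(pi, pi') is linear in pi' and the distance
    to the convex set S is convex, so the inner max sits at a vertex."""
    d, _, k = P.shape
    c = np.zeros(d + 1); c[-1] = 1.0            # variables (pi_1..pi_d, t); min t
    A_ub, b_ub = [], []
    for i in range(d):
        for j in range(k):
            row = np.zeros(d + 1)
            row[:d], row[-1] = -P[:, i, j], -1.0  # -pi.P[:,i,j] - t <= -lower_j
            A_ub.append(row); b_ub.append(-lower[j])
    A_eq = np.zeros((1, d + 1)); A_eq[0, :d] = 1.0   # sum(pi) = 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), A_eq=A_eq,
                  b_eq=[1.0], bounds=[(0, None)] * (d + 1))
    return res.x[:d], res.x[-1]                 # (pi_hat, worst-case distance)

# Toy two-policy, two-criteria instance (illustrative numbers only).
P = np.full((2, 2, 2), 0.5)
P[0, 1, 0], P[1, 0, 0] = 0.9, 0.1   # criterion 1: A preferred to B
P[0, 1, 1], P[1, 0, 1] = 0.1, 0.9   # criterion 2: B preferred to A
pi_hat, dist = blackwell_winner_box(P, lower=np.array([0.4, 0.4]))
print(pi_hat, dist)   # ~[0.5, 0.5], 0.1: mixing beats committing to A or B
```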
\section{Autonomous driving user study} \label{sec:pol_drive} In order to evaluate the proposed framework, we applied it to an autonomous driving environment. The objective is to study properties of the randomized policies obtained by our multi-criteria framework---the Blackwell winner for specific choices of the target set---and compare them with the alternative approaches of linear combinations of criteria and the single-criterion (overall) von Neumann winner. We briefly describe the components of the experiment here; see Appendix~\ref{app:exp} for more details. \paragraph{Self-driving Environment.} Figure~\ref{fig:intro}(a) shows a snapshot of one of the worlds in this environment with the autonomous car shown in orange. We construct three different worlds in this environment: \begin{itemize} \item[W1:] The first world comprises an empty stretch of road with no obstacles (20 steps). \vspace{-1mm} \item[W2:] The second world consists of cones placed in a given sequence (80 steps).\vspace{-1mm} \item[W3:] The third world has additional cars driving at varying speeds in their fixed lanes (80 steps).\vspace{-1mm} \end{itemize} \paragraph{Policies.} For our \emph{base policies}, we design five different reward functions encoding different self-driving behaviors. These policies, named Policy A-E, are model predictive control policies based on these reward functions, with the planning horizon fixed to $6$. See Appendix~\ref{app:exp} for a detailed description of these reward functions. A \emph{randomized policy} $\pi \in \Delta_5$ is given by a distribution over the base policies A-E. Such a randomized policy is implemented in our environment by randomly sampling a base policy from the mixture distribution after every $H = 18$ time steps and executing this selected policy for that duration. To account for the randomization, we execute each such policy for $5$ independent runs in each of the worlds and record these behaviors. \paragraph{Subjective Criteria.} We selected five subjective criteria with which to compare the policies, with questions asking which of the two policies was C1: Less aggressive,\; C2: More predictable,\; C3: More quick,\; C4:~More conservative,\; and had C5: Less collision risk. Such a framing of the questions ensures that a higher score along any of C1-C5 is preferred; thus a higher score along C1 corresponds to a less aggressive policy, while a higher score along C2 corresponds to a more predictable one. In addition to these base criteria, we also consider an \emph{Overall Preference} which compares any pair of policies in an aggregate manner. For this criterion, the users were asked to select the policy they would prefer when riding to their destination. We also asked the users to rate the importance of each criterion in their overall preference. \paragraph{Main Hypotheses.} Our hypotheses focus on comparing the randomized policies given by the Blackwell winner, the overall von Neumann winner, and those given by weighting the criteria linearly. \begin{itemize} \item[MH1] There exists a set $S$ such that the Blackwell winner with respect to $S$ and the $\ell_\infty$-norm produced by our framework outperforms the overall von Neumann winner. \item[MH2] The Blackwell winner for oblivious score sets $S$ outperforms both oblivious\footnote{We use the term oblivious to denote variables that were \emph{fixed} before the data collection phase and data-driven to denote those which are based on collected data.} and data-driven weights for linear combinations of criteria.
\end{itemize} \paragraph{Independent Variables.} The independent variable of our experiment is the choice of algorithms for producing the different randomized winners. These comprise the von Neumann winner based on overall comparisons, Blackwell winners based on two oblivious target sets, and 9 different linear combination weights (3 data-driven and 6 oblivious). We begin with the two target sets $S_1$ and $S_2$ for our evaluation of the Blackwell winner, which were selected in a data-oblivious manner. Set $S_1$ is an axis-aligned set promoting the use of safer policies with score vector constrained to have a larger value along the collision risk axis. Similar to Figure~\ref{fig:setup}(b), the set $S_2$ adds a linear constraint along aggressiveness and collision risk. This target set thus favors policies that are less aggressive and have lower collision risk. For evaluating hypothesis MH2, we considered several weight vectors, both oblivious and data-dependent, comprising the average of the users' self-reported weights, that obtained by regressing the overall criterion on C1-C5, and a set of oblivious weights. See Appendix~\ref{app:exp} for details of the sets $S_1$ and $S_2$, and the weights $w_{1:9}$. \paragraph{Data collection.} The experiment was conducted in two phases, both of which involved human subjects on Amazon Mechanical Turk (MTurk). See Appendix~\ref{app:exp} for an illustration of the questionnaire. The first phase of the experiment involved preference elicitation for the five base policies A-E. Each user was asked to provide comparison data for all ten combinations of policies. The cumulative comparison data is given in Appendix~\ref{app:exp}, and the average weight vector elicited from the users was found to be $w_1 = \left[ 0.21, 0.19, 0.20, 0.18, 0.22\right]$. We ran this study with 50 subjects. In the overall preference elicitation, we saw an approximate ordering amongst the base policies: \mbox{$\text{C} \succ \text{E} \succsim \text{D} \succsim \text{B} \succ \text{A}$}. Thus, Policy C was the von Neumann winner along the overall criterion. For each of the linear combination weights $w_1$ through $w_9$, Policy C was the weighted winner. The Blackwell winners R1 and R2 for the sets $S_1$ and $S_2$ with the $\ell_\infty$ distance were found to be $\text{R1} = [0.09, 0.15, 0.30, 0.15, 0.31]$ and $\text{R2} = [0.01, 0.01, 0.31, 0.02, 0.65]$. In the second phase, we obtained preferences from a set of 41 subjects comparing the randomized policies R1 and R2 with the baseline policies A-E. The results are aggregated in Table~\ref{tab:phase_one} in Appendix~\ref{app:exp}. \paragraph{Analysis for main hypotheses.} Given that the overall von Neumann winner and those corresponding to weights $w_{1:9}$ were all Policy C, hypotheses MH1 and MH2 reduced to whether users prefer at least one of \{R1, R2\} to the deterministic policy C, that is whether ${\pref}_{\textsf{ov}}(\text{C}, \text{R1}) < 0.5$ or ${\pref}_{\textsf{ov}}(\text{C}, \text{R2}) < 0.5$. Policies C and E were preferred to R1 by fractions $0.71$ and $0.61$ of the respondents, respectively. On the other hand, R2 was preferred to the von Neumann winner C by a fraction $0.66$ of the subjects. Using the data, we conducted a hypothesis test with the null and alternative hypotheses given by \begin{align*} H_0: {\pref}_{\textsf{ov}}(\text{C}, \text{R2}) \geq 0.5, \quad \text{ and } \quad H_1: {\pref}_{\textsf{ov}}(\text{C}, \text{R2}) < 0.5.
\end{align*} Among the hypotheses that make up the (composite) null, our samples have the highest likelihood under the distribution ${\sf Ber}(0.5)$. We therefore perform a one-sided hypothesis test with the Binomial distribution with number of samples $n = 41$, success probability $p = 0.5$, and number of successes $x = 14$ (the number of subjects who preferred Policy C to R2). The p-value for this test was $0.0298$, so the null is rejected at the $0.05$ level: users prefer the Blackwell winner R2 to Policy C, which was both the overall von Neumann winner and the weighted winner for every weight vector in $w_{1:9}$. This supports both our claimed hypotheses MH1 and MH2. \section{Discussion and future work} In this paper, we considered the problem of eliciting and learning from preferences along multiple criteria, as a way to obtain rich feedback under weaker assumptions. We introduced the notion of a Blackwell winner, which generalizes many known winning solution concepts. We showed that the Blackwell winner is efficiently computable from samples with a simple and optimal procedure, and also that it outperformed the von Neumann winner in a user study on autonomous driving. Our work raises many interesting follow-up questions: How does the sample complexity vary as a function of the preference tensor $\mathbf{P}$? Can the process of choosing a good target set be automated? What are the analogs of our results in the setting where pairwise comparisons can be elicited actively? \section*{Acknowledgments} We would like to thank Niladri Chatterji, Robert Kleinberg and Karthik Sridharan for helpful discussions, and Andreea Bobu, Micah Carroll, Lawrence Chan and Gokul Swamy for helping with the user study setup. KB is supported by a JP Morgan AI Fellowship, and AP was supported by a Swiss Re research fellowship at the Simons Institute for the Theory of Computing. This work was partially supported by NSF grant DMS-2023505 to PLB, by an Office of Naval Research Young Investigator Award, an NSF CAREER award, and an AFOSR grant to ADD, and by Office of Naval Research Grant DOD ONR-N00014-18-1-2640 to MJW. \newpage
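For reference, the one-sided binomial test above can be reproduced in a few lines of Python; the following is an illustrative sketch of ours (it assumes scipy is available) rather than the original analysis code.
\begin{verbatim}
from scipy.stats import binom

n, x = 41, 14                   # subjects; those preferring Policy C to R2
p_value = binom.cdf(x, n, 0.5)  # one-sided tail P[X <= x] under Ber(0.5)
print(round(p_value, 4))        # should match the reported 0.0298
\end{verbatim}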
{ "timestamp": "2021-05-06T02:09:09", "yymm": "2105", "arxiv_id": "2105.01850", "language": "en", "url": "https://arxiv.org/abs/2105.01850" }
\section{Charging a quantum battery by a qubit}\label{app:infLadderCharging} We model the quantum battery as a regular energy ladder of equidistant levels $|n\rangle$ separated by the energy gap $E$. It is charged in discrete steps of resonant energy exchange with identical qubits prepared in arbitrary quantum states $\rho_Q$, as mediated by the energy-preserving unitary \begin{equation}\label{eq:map} \Op{U}_\theta = \exp \left[-i\theta \left( \Op{A} |e\rangle\langle g| + \Op{A}^\dagger |g\rangle\langle e| \right) \right], \end{equation} and its inverse $\Op{U}^\dagger_\theta = \Op{U}_{-\theta}$. Here, $|g\rangle,|e\rangle$ denote the qubit's ground and excited state and $\Op{A},\Op{A}^\dagger$ denote the lowering and raising operators on the battery ladder. The reduced battery state transforms as $\rho_B \to \rho_B' = \operatorname{tr}_Q \{ \Op{U}_\theta \rho_B \otimes \rho_Q \Op{U}_{-\theta} \}$. For an idealized, infinite battery, the ladder operators are defined through $\Op{A}|n\rangle = |n-1\rangle$ and $\Op{A}^\dagger |n\rangle = |n+1\rangle$ for all $n$, which leads to \begin{equation}\label{eq:U_matrix} \Op{U}_\theta |n,g \rangle = \cos \theta |n,g \rangle - i\sin \theta |n-1,e \rangle, \quad \Op{U}_\theta |n,e \rangle = \cos \theta |n,e \rangle - i\sin \theta |n+1,g \rangle . \end{equation} Given a generic qubit state of the form \begin{equation} \rho_Q = q |g \rangle \langle g| + (1-q) |e \rangle \langle e| + c \sqrt{q(1-q)} \left( e^{-i\alpha} |g \rangle \langle e| + e^{i\alpha} |e \rangle \langle g| \right), \label{eq:qubitState} \end{equation} a straightforward application of \eqref{eq:U_matrix} shows that the transformation of the battery state after one step is, in the energy representation, \begin{align} \langle n |\rho_B' | n'\rangle &= (1-p_\theta) \langle n |\rho_B|n'\rangle + p_\theta \left[ (1-q) \langle n-1 |\rho_B|n'-1\rangle + q \langle n+1 |\rho_B|n'+1\rangle \right] \nonumber \\ &- i c \frac{ \Omega}{2} \left[ e^{i\alpha} \left( \langle n-1 |\rho_B|n'\rangle - \langle n |\rho_B|n'+1\rangle \right) + e^{-i\alpha} \left( \langle n+1 |\rho_B|n'\rangle - \langle n |\rho_B|n'-1\rangle \right) \right], \label{eq:qubitChargeTrafo} \end{align} with $p_\theta = \sin^2 \theta$ and $\Omega = \sqrt{q(1-q)}\sin 2\theta$. This transformation, which describes the time evolution of an infinite battery in discrete charging steps, can generate a classical or quantum random walk process depending on the chosen parameters $\theta,q,\alpha,c$. In the case of a finite battery with $N+1$ levels from the zero-charge state $|0\rangle$ to the fully charged $|N\rangle$, Eqs.~\eqref{eq:U_matrix} and \eqref{eq:qubitChargeTrafo} still hold for any $0 < n,n'< N$. Hence the infinite-battery results remain valid for those battery states that do not occupy $|0\rangle$ or $|N\rangle$. In order to obtain an expression for the transformation of finite-battery states, we can expand the unitary \eqref{eq:map} as \begin{equation}\label{eq:map_finite} \Op{U}_\theta = \cos\theta \, \ensuremath{\mathbbm{1}} - i\sin\theta \left( \Op{A} \otimes |e\rangle\langle g| + \Op{A}^\dagger \otimes |g\rangle\langle e| \right) + (1-\cos\theta) \left( |0,g\rangle\langle 0,g| + |N,e\rangle\langle N,e| \right) , \end{equation} with $\Op{A} = \sum_{n=1}^N |n-1\rangle\langle n|$. Here, the last term accounts for the modified effect at the charge boundaries. 
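As a cross-check of \eqref{eq:map_finite}, the following minimal Python sketch (our own illustration, not part of the original derivation; it assumes numpy and scipy, and the function names are ours) builds the generator $\Op{A} \otimes |e\rangle\langle g| + \Op{A}^\dagger \otimes |g\rangle\langle e|$, exponentiates it, compares the result with the closed form, and implements one charging step $\rho_B \mapsto \operatorname{tr}_Q[\Op{U}_\theta (\rho_B\otimes\rho_Q) \Op{U}_\theta^\dagger]$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def charging_unitary(N, theta):
    """U_theta on the (N+1)-level battery (x) qubit space,
    basis ordering |n> (x) {|g>, |e>}; cf. Eq. (map_finite)."""
    A = np.diag(np.ones(N), k=1)               # lowering operator: A|n> = |n-1>
    e_g = np.array([[0., 0.], [1., 0.]])       # |e><g| in the {|g>, |e>} basis
    H = np.kron(A, e_g) + np.kron(A.T, e_g.T)  # A (x) |e><g| + A^dag (x) |g><e|
    return expm(-1j * theta * H)

def qubit_state(q, c, alpha):
    """Generic qubit state of Eq. (qubitState)."""
    off = c * np.sqrt(q * (1 - q)) * np.exp(-1j * alpha)
    return np.array([[q, off], [np.conj(off), 1 - q]])

def charge_step(rho_B, rho_Q, U):
    """One step: rho_B -> Tr_Q[ U (rho_B (x) rho_Q) U^dag ]."""
    d = rho_B.shape[0]
    rho = U @ np.kron(rho_B, rho_Q) @ U.conj().T
    return rho.reshape(d, 2, d, 2).trace(axis1=1, axis2=3)

# quick check against the closed form of Eq. (map_finite)
N, theta = 10, np.pi / 4
U = charging_unitary(N, theta)
A = np.diag(np.ones(N), k=1)
e_g = np.array([[0., 0.], [1., 0.]])
H = np.kron(A, e_g) + np.kron(A.T, e_g.T)
P = np.zeros((2 * (N + 1), 2 * (N + 1)))
P[0, 0] = P[-1, -1] = 1                        # projectors on |0,g> and |N,e>
U_closed = np.cos(theta) * np.eye(2 * (N + 1)) - 1j * np.sin(theta) * H \
           + (1 - np.cos(theta)) * P
assert np.allclose(U, U_closed)                # the expansion indeed holds
\end{verbatim}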
The final state is given by $\rho_B' = \operatorname{tr}_Q[\Op{U}_\theta \left(\rho_B\otimes \rho_Q\right) \Op{U}_\theta ^\dagger]$ with $\rho_Q$ defined in \eqref{eq:qubitState}. Plugging $\rho_Q$ into $\rho_B'$ and reordering the terms, one can write the incremental change in the reduced battery state $\Delta \rho_B = \rho_B' - \rho_B = \mathcal{L} \rho_B$ in terms of the Lindblad generator $\mathcal{L}$ defined in the main text. In the following, we study the time evolution of the battery state $\rho_B(k) = (\ensuremath{\mathbbm{1}} + \mathcal{L})\rho_B(k-1) = (\ensuremath{\mathbbm{1}} +\mathcal{L})^k \rho_B(0)$ after $k$ charge steps, omitting boundary effects. We distinguish the opposite cases of incoherent charging ($c=0$) and coherent charging ($c=1$). \section{Incoherent battery charging}\label{app:infLadderClassical} Assuming an initially diagonal battery state and diagonal qubits ($c=0$), we are left with a classical discrete-time random walk model. The battery state remains diagonal at all times, and the populations transform according to a simple Markov chain with three branches: no jump with probability $1-p_\theta$, jump up with $p_\theta (1-q)$, jump down with $p_\theta q$. Denoting by $P(n,k) = \langle n|\rho_B (k) | n\rangle$ the populations of the battery at discrete time steps $k=0,1,\ldots$, we get from \eqref{eq:qubitChargeTrafo}, \begin{equation} P(n,k+1) = (1-p_\theta) P(n,k) + p_\theta (1-q) P(n-1,k) + p_\theta q P(n+1,k). \label{eq:classRWprop} \end{equation} This difference equation is \emph{exact} for an infinite ladder as well as for a finite ladder at $0<n<N$, which implies that its solution may serve as an approximation for transient states of finite ladders so long as the population at the charge boundaries is small. Specifically, the first and second moments and cumulants of charge $n$ evolve as \begin{align}\label{eq:classRWmoments} \overline{n}(k) &= \sum_{n=-\infty}^\infty n P(n,k) = \overline{n}(k-1) + p_\theta (1-2q) = \overline{n}(0) + p_\theta (1-2q) k =: \overline{n}(0) + v k, \nonumber \\ \overline{n^2}(k) &= \overline{n^2}(k-1) + p_\theta + 2 p_\theta (1-2q) \overline{n}(k-1) = \overline{n^2}(k-1) + p_\theta + 2 v \overline{n}(k-1), \\ \Delta n^2 (k) &= \overline{n^2}(k) - \overline{n}^2(k) = \Delta n^2 (k-1) + p_\theta - v^2 = \Delta n^2 (0) + (p_\theta - v^2) k, \nonumber \end{align} indicating that the charge distribution spreads and drifts, as described by the stepwise increment $v = p_\theta (1-2q)$ of the mean charge. The time-evolved $P(n,k)$ can be expressed analytically in terms of the initial $P(n,0)$ in various ways. For example, one can view the $P(n,k)$ as the $n$-th discrete Fourier coefficients of a periodic characteristic function $\chi (\phi,k) = \sum_n P(n,k) e^{in\phi}$ with $\phi \in (-\pi,\pi]$, normalized to $\chi(0,k) = 1$. The Markov chain \eqref{eq:classRWprop} translates into \begin{align} \chi(\phi,k+1) &= \left[ (1-p_\theta) + p_\theta (1-q) e^{i\phi} + p_\theta q e^{-i\phi} \right] \chi(\phi,k) \quad \Rightarrow \quad \chi(\phi , k) = \left[ 1 - p_\theta \left( 1 - \cos \phi \right) + i v \sin\phi \right]^k \chi(\phi,0) . \label{eq:chiSol_class} \end{align} This amounts to a geometric progression by a factor of magnitude smaller than unity. Indeed, one can easily check that the magnitude of the square-bracketed expression is never greater than $\sqrt{1-4p_\theta(1-p_\theta)\sin^2(\phi/2)}$ for any $q$. 
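The moment relations \eqref{eq:classRWmoments} are easy to confirm by iterating \eqref{eq:classRWprop} directly; the short sketch below (our illustration; numpy assumed) does so on a lattice wide enough that the boundaries are never reached.
\begin{verbatim}
import numpy as np

p_theta, q, k_max = 0.5, 0.3, 200
v = p_theta * (1 - 2 * q)                     # drift per step

n = np.arange(-600, 601)                      # wide enough to avoid the edges
P = np.zeros_like(n, dtype=float)
P[n == 0] = 1.0                               # pure charge state, n0 = 0

for k in range(k_max):
    P = (1 - p_theta) * P \
        + p_theta * (1 - q) * np.roll(P, 1) \
        + p_theta * q * np.roll(P, -1)

mean = (n * P).sum()
var = (n**2 * P).sum() - mean**2
assert np.isclose(mean, v * k_max)            # n0 + v k
assert np.isclose(var, (p_theta - v**2) * k_max)
\end{verbatim}
With these exact moments confirmed, we return to the long-time behaviour of \eqref{eq:chiSol_class}.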
Hence, after sufficiently many time steps, the bracketed term will have suppressed the initial characteristic function almost everywhere except for a narrow region around $\phi = 0$. We can then perform a Taylor expansion in $|\phi| \ll 1$ to arrive at \begin{align} \chi(\phi,k) &\approx \left[ 1 - p_\theta \frac{\phi^2}{2} + i v \phi + \mathcal{O} \{ \phi^3 \} \right]^k \chi (\phi,0) = \left[ 1 - \frac{\phi^2}{2} \left(p_\theta-v^2 \right) + i v \phi + \frac{1}{2} \left( i v \phi + \mathcal{O}\{\phi^{3/2}\} \right)^2 \right]^k \chi (\phi,0) \nonumber \\ &\approx \exp \left[ - \frac{p_\theta-v^2}{2} k \phi^2 +i v k \phi \right] \chi (\phi,0). \end{align} Note that we have split the second-order term in $\phi$ to obtain a consistent expansion of the exponential function. As $\chi(\phi,k)$ will be narrowly peaked at small angles $\phi$, the corresponding charge distribution will be broad and thus admits a continuous description in $n$. Assuming an initial pure charge state at $n_0$, i.e.~$\chi(\phi,0) = e^{in_0\phi}$, and omitting the periodic boundary conditions, we are left with an approximately Gaussian charge distribution, \begin{align} \label{eq:Pn_Gauss} P(n,k) &= \int_{-\pi}^\pi \frac{\mathrm{d} \phi}{2\pi} \chi(\phi,k) e^{-in\phi} \approx \int_{-\infty}^\infty \frac{\mathrm{d} \phi}{2\pi} \chi(\phi,k) e^{-in\phi} \stackrel{k \gg 1}{\approx} \frac{1}{\sqrt{2\pi k ( p_\theta-v^2 )}} \exp \left[ -\frac{(n-n_0-v k)^2}{2k(p_\theta - v^2)} \right]. \end{align} The normalization has to be corrected manually if the distribution is to be evaluated for discrete $n$. The mean and the variance of this Gaussian agree with the exact infinite-ladder results in \eqref{eq:classRWmoments}. A mixed initial charge state would result in a mixture of Gaussians, which would eventually converge towards a broader Gaussian. However, once the charge distribution hits the boundaries in the case of a finite battery, the Gaussian approximations no longer hold and the modified update rules for $P(0,k+1)$ and $P(N,k+1)$ will cause reflections and ultimately lead to a Gibbs-like steady state. \section{Coherent battery charging}\label{app:infLadderQuantum} We now consider a battery charged by qubits with energy coherences ($c>0$). Once again, the time evolution of the battery state has an exact analytic form so long as the charge boundaries are not yet occupied. To this end, we start again from the state transformation rule \eqref{eq:qubitChargeTrafo}, from which we see that the impact of the coherences is most prominent at half-swaps, $\theta = \pi/4$. The results presented in the main text were evaluated for half-swaps and $c=1$, i.e.~with qubits prepared in the pure superposition state $\ket{\psi} = \sqrt{q}\ket{g} + \sqrt{1-q}\ket{e}$. In this case, we expect the coherences to cause interference effects as known from quantum random walks. In particular, a battery initialized in an intermediate charge state will evolve into a bimodal charge distribution as the charge steps accumulate, and the two branches of the distribution will simultaneously progress up and down the ladder until they hit the charge boundaries. The analytic solution to the battery evolution follows after taking the discrete Fourier transform of the density matrix with respect to $n,n'$, which formally amounts to switching into the periodic phase representation of the infinite-ladder Hilbert space, $\langle \phi | \rho_B |\phi' \rangle := \sum_{n,n'} \langle n|\rho_B |n'\rangle e^{in'\phi'-in\phi}/2\pi$. 
Introducing the short-hand notation $\chi (\Phi, \varphi) := \langle \Phi - \varphi/2 |\rho_B|\Phi + \varphi/2\rangle$, \eqref{eq:qubitChargeTrafo} translates to \begin{align} \langle \phi | \rho_B' |\phi' \rangle &= \left\{ 1-p_\theta + p_\theta \left[ (1-q)e^{i(\phi'-\phi)} + qe^{i(\phi-\phi')} \right] - ic \frac{\Omega}{2} \left[ e^{i\alpha} \left( e^{-i\phi} - e^{-i\phi'} \right) + e^{-i\alpha} \left( e^{i\phi} - e^{i\phi'} \right) \right] \right\} \langle \phi | \rho_B |\phi' \rangle \nonumber \\ &= \left\{ 1 - p_\theta \left[ 1 - \cos(\phi'-\phi) \right] + i v \sin (\phi'-\phi) - i c \Omega \left[ \cos (\phi-\alpha) - \cos (\phi' - \alpha) \right] \right\} \langle \phi | \rho_B |\phi' \rangle , \\ \Rightarrow \chi' (\Phi,\varphi) &= \left[ 1 - p_\theta \left( 1 - \cos \varphi \right) + i v \sin \varphi - 2 i c \Omega \sin (\Phi - \alpha) \sin \frac{\varphi}{2} \right] \chi (\Phi,\varphi). \nonumber \end{align} In this representation, a charge step merely amounts to a multiplicative factor. Notice that, since we employ identically prepared qubits with the same fixed phase angle $\alpha$, we are free to define the battery phase coordinates $\phi,\phi'$ relative to that reference, i.e.~set $\alpha=0$ without loss of generality. The battery state after $k$ charge steps can now be expressed in terms of the initial state as \begin{align} \chi_k (\Phi,\varphi) &= \left[ 1 - p_\theta (1-\cos\varphi) + i v \sin\varphi - 2i c \Omega \sin \Phi \sin \frac{\varphi}{2} \right]^k \chi_0 (\Phi,\varphi). \end{align} One can check that the magnitude of the progression factor in square brackets is a number between zero and one, as it should be. It always assumes its maximum at $\varphi=0$ and decreases at first with growing $\varphi$. However, contrary to the incoherent case \eqref{eq:chiSol_class}, the magnitude does not necessarily stay substantially below one for all $(\Phi,\varphi)$-arguments. Indeed, at $\varphi = \pm \pi$ and $\Phi = \pm \pi/2$, the magnitude reaches up to $\sqrt{1-4p_\theta (1-p_\theta)[1-4c^2 q(1-q)]}$, which is unity for coherent qubit superpositions of equal weight ($c=1$, $q \approx 1/2$) and half-swaps ($p_\theta = 1/2$). Hence, substantial values of the characteristic function at large angles $|\varphi| \sim \pi$ can persist even after many charge steps, which explains why the charge distribution $P(n,k)$ may exhibit high-frequency interference fringes between neighbouring charge levels; the distribution can be expressed as the Fourier integral \begin{align} \label{eq:Pn_chi_Fourier} P(n,k) &= \langle n|\rho_B (k) |n \rangle = \frac{1}{2\pi} \int_{-\pi}^{\pi}\mathrm{d} \phi \mathrm{d} \phi' e^{in(\phi - \phi')} \chi_k \left( \frac{\phi + \phi'}{2},\phi'-\phi \right) = \int_{-\pi}^\pi \frac{\mathrm{d} \Phi}{2\pi} \int_{2|\Phi|-2\pi}^{2\pi - 2|\Phi|} \mathrm{d} \varphi \, e^{-in\varphi} \chi_k (\Phi,\varphi). \end{align} The high-frequency fringes seen in Fig.~2 of the main text neither influence the distinct bimodal structure of the charge distribution after $k\gg 1$ nor do they affect the mean and variance appreciably. We shall therefore ignore those high-frequency components and focus on the coarse-grained, quasi-continuous distribution at long times, assuming that the relevant $\varphi$-values for the characteristic function are small. 
Similar to the incoherent case, we can approximate consistently to 2nd order in $\varphi$ and are left with \begin{align} \chi_k (\Phi,\varphi) &\approx \left[ 1 + i\varphi \underbrace{\left( v - c \Omega \sin \Phi \right)}_{=: \gamma (\Phi)} - p_\theta \frac{\varphi^2}{2} + \mathcal{O} (\varphi^3) \right]^k \chi_0 (\Phi,\varphi) \approx \exp \left[ - \frac{p_\theta - \gamma^2 (\Phi)}{2} k\varphi^2 + ik \gamma (\Phi) \varphi \right] \chi_0 (\Phi,\varphi). \label{eq:chi_Gaussianapprox} \end{align} So the characteristic function $\chi_k$ approaches a Gaussian shape in $\varphi$ whose width decreases like $1/\sqrt{k}$, while its complex phase oscillates at a frequency proportional to $k$; both depend on the other angle coordinate $\Phi$, as determined by \begin{equation}\label{eq:gammaPhi} \gamma(\Phi) = v - c\Omega \sin\Phi = (1-2q) \sin^2 \theta - c\sqrt{q(1-q)} \sin 2\theta \sin \Phi. \end{equation} The charge distribution \eqref{eq:Pn_chi_Fourier} can be seen as the $[n-k\gamma(\Phi)]$-th Fourier component with respect to $\varphi$, truncated and averaged over $\Phi$. For large $k$, the $\Phi$-average will be dominated by the vicinity of the points $\Phi_\pm = \pm \pi/2$ at which the strongly oscillating phase is stationary. Moreover, the width of the Gaussian will be much smaller than $\pi$, which allows us to extend the $\varphi$-integral in \eqref{eq:Pn_chi_Fourier} to infinity. Inserting \eqref{eq:chi_Gaussianapprox} and assuming that the battery is initially prepared in a charge eigenstate $|n_0\rangle$ with $\chi_0 (\Phi,\varphi) = e^{in_0 \varphi}/2\pi$, we get \begin{align} P(n,k) &\approx \frac{1}{4\pi^2} \int_{-\pi}^\pi \mathrm{d} \Phi \int_{-\infty}^{\infty} \mathrm{d} \varphi \, \exp \left\{ - \frac{p_\theta - \gamma^2 (\Phi)}{2} k\varphi^2 - i[n- n_0 - k \gamma (\Phi)] \varphi \right\} \nonumber \\ &= \frac{1}{2\pi} \int_{-\pi}^\pi \mathrm{d} \Phi \frac{1}{\sqrt{2\pi k[p_\theta - \gamma^2 (\Phi)]}} \exp \left\{-\frac{[n - n_0 - k\gamma(\Phi)]^2}{2k [p_\theta - \gamma^2 (\Phi)] }\right\} . \label{eq:Pn_Gauss_coh} \end{align} The result is a smooth average over Gaussians with large widths and displacements. A mixed initial charge state would entail an additional weighted sum over different $n_0$, accordingly. The charge moments are consistently evaluated in the continuum approximation, as we are assuming that the charge distribution \eqref{eq:Pn_Gauss_coh} is broad, $\overline{n^j} (k) = \sum_n n^j P(n,k) \approx \int\mathrm{d} n\, n^j P(n,k)$. Specifically, we get \begin{align} \overline{n} (k) & \approx n_0 + k \frac{1}{2\pi} \int_{-\pi}^\pi \mathrm{d} \Phi \, \gamma(\Phi) = n_0 + vk , \\ \overline{n^2} (k) & \approx \frac{1}{2\pi} \int_{-\pi}^\pi \mathrm{d} \Phi \, \left[ k p_\theta - k\gamma^2 (\Phi) + (k\gamma(\Phi) + n_0)^2 \right] = n_0^2 + k (p_\theta + 2 n_0 v) + k(k-1) \left(v^2 + \frac{c^2 \Omega^2}{2} \right), \nonumber \\ \Delta n^2 (k) &\approx (p_\theta - v^2)k + \frac{c^2\Omega^2}{2} k(k-1), \nonumber \end{align} see also Eq.~9 in the main text. Note that this expression for the charge variance only holds for a \emph{pure} initial charge state, and one must add the initial spread $\Delta n^2(0)$ in the case of a mixture. The average charge moves at the same speed as in the incoherent charging case; there is no difference within the validity bounds of our approximations. The variance, however, gets an additional contribution that grows quadratically with $k$ and thus eventually becomes the dominant cause of charge fluctuations. 
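Both the bimodal profile and the ballistic variance growth can be reproduced by iterating the exact map numerically; a brief sketch (our illustration, assuming the helpers \texttt{charging\_unitary}, \texttt{qubit\_state} and \texttt{charge\_step} from the code sketch above are in scope) is:
\begin{verbatim}
import numpy as np

N, theta, q, c = 400, np.pi / 4, 0.5, 1.0  # half-swaps, pure qubits
U = charging_unitary(N, theta)
rho_Q = qubit_state(q, c, alpha=0.0)

rho_B = np.zeros((N + 1, N + 1), dtype=complex)
rho_B[N // 2, N // 2] = 1.0                # pure charge state, n0 = N/2

for k in range(100):                       # stay clear of the boundaries
    rho_B = charge_step(rho_B, rho_Q, U)

n = np.arange(N + 1)
P = np.real(np.diag(rho_B))                # charge distribution P(n, k=100)
mean = (n * P).sum()
var = (n**2 * P).sum() - mean**2
# Here Omega = sqrt(q(1-q)) sin(2 theta) = 1/2; var should be close to
# (p_theta - v^2) k + (c^2 Omega^2 / 2) k (k - 1), dominated by the k^2
# term, and a plot of P shows two peaks near n0 +/- c Omega k.
\end{verbatim}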
Indeed, our numerical simulations show that the asymptotic steady state of a coherently charged battery typically yields a flat energy distribution that extends over the whole ladder. Finally, in order to see that \eqref{eq:Pn_Gauss_coh} describes a bimodal distribution, recall that the displacement parameter $\gamma(\Phi)$ in \eqref{eq:gammaPhi} oscillates between its two extreme values, $\gamma (\Phi_{\pm}) = v \mp c\Omega$. Since these are stationary points, the $\Phi$-integral will allocate most weight to them and result in distinct maxima around the two respective mean charges $n_{\pm} \approx n_0 + vk \pm c\Omega k$. We can make this explicit by performing a stationary phase approximation with respect to $\Phi$ in the first line of \eqref{eq:Pn_Gauss_coh}. Consider two well-behaved real-valued functions $f(x), g(x)$ for $x\in [a,b]$ and let $x_n$ be non-degenerate stationary points well within that interval, at which $f'(x_n) = 0$ and $f''(x_n) \neq 0$. Then \begin{equation} \int_a^b \mathrm{d} x\, g(x) e^{ik f(x)} \approx \sum_{x_n} \sqrt{\frac{2\pi i }{k f''(x_n)}} g(x_n) e^{ikf(x_n)} \qquad \text{for }\,\, k\to\infty. \end{equation} Applied to \eqref{eq:Pn_Gauss_coh}, the approximation is only strictly justified for $|\varphi| \gg 1/k$ and should therefore be taken as a qualitative estimate that may well lead to deviations in the flat parts of the charge distribution while reproducing its more sharply peaked features. We arrive at \begin{align} P(n,k) &\approx \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \mathrm{d} \varphi \left\{ \sqrt{\frac{2\pi i}{c\Omega k \varphi}} \exp \left[ - \frac{p_\theta - \gamma^2(\Phi_+)}{2} k \varphi^2 + ik\gamma(\Phi_+) \varphi \right] \right. \nonumber \\ &+ \left. \sqrt{-\frac{2\pi i}{c\Omega k \varphi}} \exp \left[ - \frac{p_\theta - \gamma^2(\Phi_-)}{2} k \varphi^2 + ik\gamma(\Phi_-) \varphi \right] \right\} e^{-i(n-n_0)\varphi} \nonumber \\ &= \frac{Q_+ \left[ n-n_0-k\gamma(\Phi_+), p_\theta - \gamma^2(\Phi_+) \right] + Q_- \left[ n-n_0-k\gamma(\Phi_-), p_\theta - \gamma^2(\Phi_-) \right]}{\sqrt{4c \Omega k}}, \label{eq:Pn_statPhase_coh} \end{align} introducing a distinctively peaked, unnormalized combination of Gaussian distribution and modified Bessel functions, \begin{align} Q_\pm [\mu,\sigma^2] &= \frac{1}{\sqrt{4\pi \sigma^2}}e^{-\mu^2/4\sigma^2} \left[ \sqrt{|\mu|} \, I_{-1/4} \left( \frac{\mu^2}{4\sigma^2} \right) \pm \frac{\mu}{\sqrt{|\mu|}} I_{1/4} \left( \frac{\mu^2}{4\sigma^2} \right) \right]. \end{align} Although the stationary phase approximation \eqref{eq:Pn_statPhase_coh} is noticeably less accurate than the Gaussian approximation \eqref{eq:Pn_Gauss_coh}, it makes the double-peak structure of the charge distribution around $n_\pm$ explicit. \section{Charging efficiency using free energy} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{figApp.pdf} \caption{\label{fig:compareFreeEnergy} Ratio of ergotropy $\mathcal{E}_Q$ to free energy difference $\Delta\mathcal{F}_Q(T_R)$ for (a) incoherent ($c=0$) and (b) coherent ($c=1$) qubits at different $q$ and reference temperature $T_R$ used to prepare the qubits. 
Panel (c) shows the difference in ratios $\mathcal{E}_Q^{c=1}/\Delta\mathcal{F}_Q^{c=1}(T_R) - \mathcal{E}_Q^{c=0}/\Delta\mathcal{F}_Q^{c=0}(T_R) $.} \end{figure} In the main text, we defined the average efficiency after a given number of charging steps by the total ergotropy stored in the battery over the total ergotropy provided by the qubits, $\eta(k) = \mathcal{E}_B(k)/k\mathcal{E}_Q$ with \begin{equation} \mathcal{E}_Q = \frac{E}{2} \left[1-2q + \sqrt{(1-2q)^2 + 4c^2q(1-q)} \right]. \end{equation} The definition implies that ergotropy is the relevant input resource, i.e.~qubits in passive zero-ergotropy states of a given von Neumann entropy $\mathcal{S}(\rho_Q) = -\operatorname{tr}\{ \rho_Q \ln \rho_Q \}$ are freely available and the charging cost is the energy required to prepare the population-inverted and/or coherent qubit state \eqref{eq:qubitState} by means of a unitary (isentropic) operation. Alternatively, one could assume that the same qubit state was prepared in an isothermal process at a given reference temperature $T_R$ starting from equilibrium, in which case the required energy cost would be at least the free energy difference to the Gibbs state, \begin{equation} \Delta \mathcal{F}_Q (T_R) = E(1-q) - k_B T_R \left[ \mathcal{S}(\rho_Q) - \ln \left(1+e^{-E/k_B T_R} \right) \right] = \mathcal{F}_Q(T_R) + k_B T_R \ln \left(1+e^{-E/k_B T_R} \right) \geq \mathcal{F}_Q (T_R). \end{equation} The corresponding efficiency would read as $\eta(k,T_R) = \mathcal{E}_B(k)/k \Delta \mathcal{F}_Q (T_R)$. In the case of coherent charging ($c=1$), the qubit is in a pure state with $\mathcal{S}(\rho_Q) = 0$, for which the ergotropy and the free energy content match, $\mathcal{E}_Q = \mathcal{F}_Q (T_R) = E(1-q)$. For incoherent charging with population-inverted qubits ($c=0$ and $q<1/2$), on the other hand, we have $\mathcal{E}_Q = E(1-2q)$ and $\mathcal{F}_Q (T_R) = E(1-q) + k_B T_R [q\ln q + (1-q)\ln(1-q)]$. Hence, the free energy resources in both cases are \begin{align} \Delta\mathcal{F}_Q^{(c=1)}(T_R) &= E(1-q) + k_B T_R \ln \left(1+e^{-E/k_B T_R} \right) \geq E(1-q) = \mathcal{E}_Q^{(c=1)}, \\ \Delta\mathcal{F}_Q^{(c=0)}(T_R) &= E(1-q) + k_B T_R \left[ q\ln q + (1-q)\ln (1-q) + \ln \left(1+e^{-E/k_B T_R} \right) \right] \geq E(1-2q) = \mathcal{E}_Q^{(c=0)}. \end{align} The inequality in the first line is trivial, while the second inequality can be checked numerically. Both inequalities imply $\eta(k) > \eta(k,T_R)$, and they align with the fact that the free energy gives an upper bound on the maximum energy extractable from a state at a given temperature (see e.g. Refs.~\cite{Skrzypczyk2014,Bera2019}). Figure \ref{fig:compareFreeEnergy} shows the ratio of $\mathcal{E}_Q$ to $\Delta \mathcal{F}_Q (T_R)$, or equivalently of $\eta(k,T_R)$ to $\eta(k)$, at different reference temperatures $T_R$ and populations $q$ for incoherent ($c=0$) and coherent ($c=1$) charging qubits. In the case of incoherent charging, the ratio drops with increasing $q$ and $T_R$ (while remaining high when both simultaneously increase), whereas for the coherent case, the ratio remains close to one up until the high-temperature regime $ k_B T_R > E$. In the main text, we use the ergotropy rather than the free energy difference as a figure of merit for quantifying efficiency, because it does not require a reference temperature $T_R$ and only depends on the state. 
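The ratios plotted in Figure~\ref{fig:compareFreeEnergy} amount to evaluating the above expressions on a grid of $(q, T_R)$; a minimal sketch (our illustration, in units $E = k_B = 1$; the function names are ours) is:
\begin{verbatim}
import numpy as np

def ergotropy(q, c, E=1.0):
    # E_Q as given above
    return 0.5 * E * (1 - 2*q + np.sqrt((1 - 2*q)**2 + 4*c**2*q*(1 - q)))

def delta_F(q, c, T, E=1.0):
    # free energy difference to the Gibbs state at reference temperature T
    rho = np.array([[q, c*np.sqrt(q*(1 - q))],
                    [c*np.sqrt(q*(1 - q)), 1 - q]])
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    S = -(lam * np.log(lam)).sum()      # von Neumann entropy of rho_Q
    return E*(1 - q) - T*(S - np.log(1 + np.exp(-E/T)))

for c in (0.0, 1.0):                    # incoherent vs coherent charging
    q, T = 0.3, 0.1
    print(c, ergotropy(q, c) / delta_F(q, c, T))
\end{verbatim}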
While this may over-predict the net efficiency of the charging process with thermodynamically prepared qubits, the advantage of coherent over incoherent charging will persist at low reference temperatures $k_B T_R<0.15E$, since $\mathcal{E}_Q^{c=1}/\Delta\mathcal{F}_Q^{c=1}\gtrsim 0.8$ for all $q$-values (see Fig.~\ref{fig:compareFreeEnergy}(b)) while $\mathcal{E}_Q^{c=0}/\Delta\mathcal{F}_Q^{c=0}$ ranges from $0$ to $1$ for different $q$-values. The comparison of charging efficiency in terms of ergotropy and in terms of free energy difference to a reference bath highlights a subtlety concerning the precise thermodynamic resource that causes the quantum enhancement: is the greater charging power/efficiency at fixed $q$ due to the increased purity (i.e.~lower von Neumann entropy) or the increased coherence of the charge qubits? Our analysis suggests the latter, because it is the presence of coherence that causes the buildup of interference effects in the battery, which in turn lead to the described bimodal energy distribution and the coherent charging speed-up. Low-entropy qubit states without energy coherence, on the other hand, would never generate such interference effects. Specifically, we discuss in the main text that quantum coherent states can beat arbitrary classical strategies, including those using pure excited-state qubits, in terms of charging power. \end{widetext} \end{document}
{ "timestamp": "2021-08-25T02:10:12", "yymm": "2105", "arxiv_id": "2105.01863", "language": "en", "url": "https://arxiv.org/abs/2105.01863" }
\section*{Introduction} The Steenrod problem, posed as Problem 25 in \cite{SamEil}, states the following: \begin{quotation} ``If $z_n\in H_n(K)$ is an $n$-dimensional (integral) homology class in a simplicial complex $K$, does there exist an oriented manifold $M$ and a map $f: M\rightarrow K$ such that $z_n$ is the image of the generator of $H_n(M)$?\,''\end{quotation} Thom \cite{Th} constructed a counterexample with an integral class $\xi\in H_*(L^{7}\times L^7)$, where $L^7$ is the $7$-Lens space. This construction extends to $n=2p+1$ for any odd prime $p$, by taking the product of two $(2p+1)$-Lens spaces. More precisely, the cohomology algebra ($\mathbb{Z}_p$-coefficients) has the form $H^*(L^{2p+1};\mathbb{Z}_p)\cong\mathbb{Z}_p[u,\nu]/(\nu^2=0,u^{p+1}=0)$, with $|\nu|=1$, $|u|=2$ and the Bockstein $u=\beta(\nu)$. Consider the class \begin{equation} X_p:=u_1\nu_2u_2^{p-1}-\nu_1u_2^p\in H^*(L_1^{2p+1}\times L_2^{2p+1};\mathbb{Z}_p)\,,\end{equation} which is an integral class because of the equality $\beta(\nu_1\nu_2u_2^{p-1})=u_1\nu_2u_2^{p-1}-\nu_1u_2^p$. Now, take the Poincar\'e dual \begin{equation} \xi_p\in H_{2p+1}(L_1^{2p+1}\times L_2^{2p+1})\,, \end{equation} and because of the following calculation \begin{equation} \beta P^1(X_p)=\beta\left(P^1(u_1)\nu_2u_2^{p-1}+u_1\nu_2P^1(u_2^{p-1})-\nu_1P^1(u_2^p)\right)= \beta(u_1^p\nu_2u_2^{p-1})= u_1^pu_2^p\neq 0\,,\end{equation} the integral class $\xi_p$ is a counterexample for the Steenrod realization problem. Notice that Thom's counterexample is $\xi=\xi_3$. Thom \cite{Th} portrayed the obstruction to realizability in terms of cohomology. For a solution in terms of homology (as opposed to cohomology) we need a geometric approach for representing the cycles that is robust enough to treat singularities. We choose the theory of ``stratifolds'', developed by Kreck in \cite{Kre}. Indeed, we will see that Thom's counterexample can be represented by a $(2p+1)$-stratifold, where the singular part is a torus whose neighborhood is a cone on $\mathbb{C} P^{p-1}$. Conner and Floyd \cite{CF} rephrased the Steenrod problem in terms of the Atiyah-Hirzebruch spectral sequence $(E_{s,t}^r,d^r_{s,t})$. More precisely, the homomorphism from oriented bordism to integral homology $\Omega_*(X)\rightarrow H_*(X)$ is an epimorphism if and only if the differentials $d^r_{s,t}:E_{s,t}^r\longrightarrow E_{s-r,t+r-1}^r$ are trivial for all $r\geq 2$. Sullivan \cite{Sul} formulated the Steenrod problem in terms of resolving singularities of geometric cycles in manifolds. Indeed, this consists of a blow-up process for simplifying the singularities of geometric cycles, which are constructed inductively by attaching and dragging generalized handles. He shows that for a geometric cycle $V$ with singularity $S(V)$, the obstruction for resolving the singularity lies in $H_s(S(V),\Omega_r)$, with $r+s+1=\operatorname{dim}V$. The Lens space $L^{2p+1}$ includes into $B\mathbb{Z}_p$, where homology classes can be represented by manifolds with a free $\mathbb{Z}_p$-action. The AHSS of the space $X:=B(\mathbb{Z}_p\times \mathbb{Z}_p)$ has the form $E^2\cong \cdots \cong E^5$ and $E^6\cong\cdots \cong E^\infty$. There are generators $\alpha_i\in H_i(B\mathbb{Z}_p;\mathbb{Z}_p)$ such that for $i$ even, $\beta(\alpha_i)$ is the $(i-1)$-dimensional Lens space and for $i$ odd, $\alpha_i$ is the $\operatorname{mod} p$ image of the $i$-dimensional Lens space. 
In this respect, Thom's counterexample corresponds to \begin{equation} \xi_p:=\alpha_{2}\times \alpha_{2p-1}+\alpha_{1}\times \alpha_{2p} \in H_{2p+1}(X)\,, \end{equation} where we take the product of $\mathbb{Z}_p$ actions, producing an action of $\mathbb{Z}_p\times \mathbb{Z}_p$. The description of the generators is as follows: \begin{itemize} \item[i)] $\alpha_2$ is a closed surface of genus $(p-1)(p-2)/2$ with an action of $\mathbb{Z}_p$ with exactly $p$ fixed points; \item[ii)] $\alpha_n$, for $n$ odd, is represented by $S^n$ with the standard action of $\mathbb{Z}_p$ obtaining the $n$-Lens space; and \item[iii)] $\alpha_{2p}$ is obtained by taking the product of $S^1$ and the cone of $\mathbb{C} P^{p-1}$ equipped with the diagonal action of $\mathbb{Z}_p$. \end{itemize} The obstruction to realizability of Thom's counterexample is encoded by the fifth differential \begin{equation} d^5:H_{2p+1}(X;\Omega_0)\longrightarrow H_{2p-4}(X;\Omega_4)\,, \end{equation} and the geometric description of the AHSS \cite{Hag} implies that the image $d^5(\xi_p)$ is \begin{equation}\label{678} [S^1\times S^1\times \mathbb{C} P^{p-1}\rightarrow X^{2p}]\,, \end{equation} where $X^{2p}$ denotes the $2p$-th skeleton. We will show that the element \eqref{678} is not trivial in $H_{2p-4}(X;\Omega_4)$, which shows that $\xi_p$ is not realizable. This article is organized as follows: in Section \ref{sec1}, we give a brief review of the Steenrod realization problem. In Section \ref{georea}, we introduce a basis for the integer homology of $B(\mathbb{Z}_p\times\mathbb{Z}_p)$ and we show the equivalence of Thom's counterexamples in homology and cohomology. Finally, in Section \ref{sec2} we present the stratifold realization of Thom's counterexample and we show that its image under the fifth differential of the AHSS is not trivial. {\bf Acknowledgements:} We thank the UNAM, Oaxaca and Universidad de los Andes for the hospitality and financial support that made this collaboration possible. The second author is supported by c\'atedras CONACYT and Proyecto CONACYT ciencias b\'asicas 2016, No. 284621. \section{The Steenrod problem}\label{sec1} For $K$ a polyhedron of finite dimension $m$ and $z\in H_*(K)$, we say that the integer class $z$ is realizable if there exist a manifold $W$ and a map $f:W\rightarrow K$ such that $z$ is the image of the fundamental class, i.e., $z=f_*([W])$. Take the class $z\in H_{n-k}(K)$ inside a smooth manifold $K\hookrightarrow V^n$, constructed by embedding $K$ into the Euclidean space $\mathbb{R}^n$, with $n\geq 2m+1$. If $z\in H_{n-k}(V^n)$ is realizable by $f:W^{n-k}\hookrightarrow V^n$, then the Poincar\'e dual $u\in H^k(V^n)$ is the pullback of the universal Thom class. The main consequence of this construction is the following. \begin{thm}[\cite{Th}]\label{Thom} A necessary condition for an integer cohomology class $x$ to be realizable is that all $p$-Steenrod powers $\beta P^i(x)$, with degree change $2i(p-1)+1$, vanish for all primes $p$. \end{thm} Novikov gives a specific homology version of the result of Thom as follows. \begin{thm}[\cite{Nov}]\label{Nov} Suppose that the integral homology group $H_{n-2i(p-1)-1}(X)$ has no $p$-torsion for every odd prime $p$ and every $i \geq 1$. Then any homology class $z \in H_n(X)$ is realizable. \end{thm} Consider the homomorphism $\mu:\Omega_n(X)\rightarrow H_n(X)$, which sends every manifold to the image of its fundamental class. 
Conner-Floyd rephrased the Steenrod realization problem in terms of the AHSS $(E_{s,t}^r,d^r_{s,t})$, with the following result. \begin{thm}[\cite{CF}]\label{CF} If $X$ is a CW complex, then for the Atiyah-Hirzebruch spectral sequence $(E_{s,t}^r, d^r_{s,t})$ the differentials $d^r_{s,t}:E^r_{s,t}\rightarrow E^r_{s-r,t+r-1}$ are trivial for all $r\geq 2$ if and only if the map $\mu:\Omega_n(X)\longrightarrow H_n(X)$ is an epimorphism for all $n\geq 0$. \end{thm} In order to find counterexamples to the Steenrod problem, we notice that the AHSS is trivial modulo the odd torsion part, hence every element of $2$-torsion is realizable. Moreover, for a CW complex, if the term $E^r_{s,t}$ consists entirely of elements of order $2$ for $t\neq 0 \operatorname{mod} 4$, then the differential $d^r:E^r_{s,t}\rightarrow E^r_{s-r,t+r-1}$ is trivial unless $t=0\operatorname{mod}4$ and $r=1\operatorname{mod} 4$. Thus we start on the fifth page $E_{n,0}^5\cong E_{n,0}^2\cong H_n(X)$ and, by the previous results, the obstruction to realizability is in degree $n-2i(p-1)-1$. Thom shows that every integer homology class of dimension $n\leq 6$ is realizable. It should be noted that for $n=7$, $i=1$ and $p=3$, we can find the first counterexample, and the obstruction is in dimension $2$. \section{Geometric cycles for $B(\mathbb{Z}_p\times \mathbb{Z}_p)$} \label{georea} The Bockstein exact sequence of $B\mathbb{Z}_p$ implies the isomorphisms $H_{2n-1}(B\mathbb{Z}_p)\stackrel{\operatorname{mod}p}{\cong}H_{2n-1}(B\mathbb{Z}_p;\mathbb{Z}_p)$ and $H_{2n}(B\mathbb{Z}_p;\mathbb{Z}_p)\stackrel{\beta}{\cong}H_{2n-1}(B\mathbb{Z}_p)$ for $n>0$. Take generators $\alpha_i\in H_i(B\mathbb{Z}_p;\mathbb{Z}_p)$, such that $\beta(\alpha_i)=\alpha_{i-1}$ for $i$ even, and $\beta(\alpha_i)=0$ for $i$ odd. Let $X:=B(\mathbb{Z}_p\times \mathbb{Z}_p)$. We consider the following commutative diagram between the two Bockstein sequences, \begin{equation}\label{rel} \xymatrix{\cdots\ar[r]^(0.4){\times p}&H_{n}(X)\ar[r]^{\operatorname{mod}p}\ar[d]&H_{n}(X;\mathbb{Z}_p)\ar[r]^\beta\ar[d]^{=} & H_{n-1}(X)\ar[d]^{\operatorname{mod}p}\ar[r]^(0.6){\times p}&\cdots\\ \cdots\ar[r]^(0.4){\times p}&H_{n}(X;\mathbb{Z}_{p^2})\ar[r]&H_{n}(X;\mathbb{Z}_p)\ar[r]^{\tilde{\beta}} & H_{n-1}(X;\mathbb{Z}_p)\ar[r]^(0.6){\times p}&\cdots\,.} \end{equation} For $n>1$, the map $\operatorname{mod} p$ is injective, hence every element in the kernel of $\tilde{\beta}$ can be extended to an integral class. Thus $H_{2n}(X)$ is generated by $\alpha_{2i-1}\times \alpha_{2n-2i+1}$ ($i=1,\cdots, n$), $H_{2n-1}(X)$ is generated by $\alpha_{2i}\times \alpha_{2n-2i-1}+\alpha_{2i-1}\times \alpha_{2n-2i}$ ($i=0,1,\cdots,n$), and $H_0(X)$ is generated by $\alpha_0\times \alpha_0$. The description of the homology cycles $\alpha_i$ is as follows: \begin{itemize} \item[(i)] $\alpha_2$ is the closed oriented surface of genus $(p-1)(p-2)/2$, where there is an action of $\mathbb{Z}_p$ with exactly $p$ fixed points. For example, for $p=3$, this surface is a torus constructed through the lattice in $\mathbb{R}^2$ generated by $e_1$ and the rotation of $e_1$ by 120 degrees. There is an action of $\mathbb{Z}_3$ with exactly three fixed points, see Figure \ref{fig1}. \begin{figure}[h!] 
\centering \begin{tikzpicture}[scale=1.5] \draw (-1.5,0.866) -- (1.5,0.866); \draw (-1.5,0) -- (1.5,0); \draw (-1.5,-0.866) -- (1.5,-0.866); \draw (-1.5,0.866) -- (-0.5,-0.866); \draw (-0.5,0.866) -- (0.5,-0.866); \draw (0.5,0.866) -- (1.5,-0.866); \draw [fill] (0.5,0.2886) circle [radius=0.05]; \draw (0,0.5) circle [radius=0.05]; \draw (-1,0.5) circle [radius=0.05]; \draw [fill] (-0.5,0.2886) circle [radius=0.05]; \draw [fill] (1,-0.5774) circle [radius=0.05]; \draw [fill] (0,-0.5774) circle [radius=0.05]; \draw (-0.5,-0.366) circle [radius=0.05]; \draw (0.5,-0.366) circle [radius=0.05]; \draw [fill=gray] (0,0) circle [radius=0.05]; \draw [fill=gray] (1,0) circle [radius=0.05]; \draw [fill=gray] (-1,0) circle [radius=0.05]; \draw [fill=gray] (-0.5,0.866) circle [radius=0.05]; \draw [fill=gray] (0.5,0.866) circle [radius=0.05]; \draw [fill=gray] (-1.5,0.866) circle [radius=0.05]; \draw [fill=gray] (0.5,-0.866) circle [radius=0.05]; \draw [fill=gray] (-0.5,-0.866) circle [radius=0.05]; \draw [fill=gray] (1.5,-0.866) circle [radius=0.05]; \end{tikzpicture} \caption{The lattice of the torus with three fixed points.} \label{fig1} \end{figure} \item[(ii)] $\alpha_{2n-1}$ is represented by the sphere $S^{2n-1}=\{(z_1,\cdots,z_n):\sum_{i=1}^n|z_i|^2=1\}$, with an action of $\mathbb{Z}_p$ induced by the diagonal multiplication $T'(z_1,\cdots,z_n)=(\lambda z_1,\cdots,\lambda z_n)$, where $\lambda=\operatorname{exp}(2\pi i/p)$. \item[(iii)] $\alpha_{2n}$ is determined by the identity $\beta(\alpha_{2n})=\alpha_{2n-1}$. We use the following equation in bordism of $B\mathbb{Z}_p$ from Conner-Floyd \cite[p.~144]{CF} \begin{equation}\label{cofo}p\alpha_{2n-1}+[M^4]\alpha_{2n-5}+[M^8]\alpha_{2n-9}+\cdots=0\,, \textrm{ for }n\geq2\,, \end{equation} where the manifolds $M^{4k}$, $k=1,2,\cdots$ are constructed inductively in \cite{CF}. Therefore, there is a compact oriented manifold $V^{2n}$, with a free action of $\mathbb{Z}_p$, such that \begin{equation} \partial V^{2n} = pS^{2n-1}\cup (M^4\times S^{2n-5} )\cup (M^8\times S^{2n-9} )\cup \cdots \end{equation} Denote by $C(M^{4k})$ the cone on $M^{4k}$ and take the gluing of $V^{2n}$ with $C(M^4)\times S^{2n-5} \cup C(M^8)\times S^{2n-9} \cup \cdots$. The boundary of this construction is $pS^{2n-1}$ and therefore the Bockstein is $\alpha_{2n-1}$. In the case $n=p$ is a prime number, we have another equivalent representation for $\alpha_{2p}$. We use the following equation in bordism of $B\mathbb{Z}_p$ from Conner-Floyd \cite[p.~95]{CF} \begin{equation}\label{co3} p[T',S^{2p-1}]=[T_1,S^1][\mathbb{C} P^{p-1}]\,, \end{equation} where the action of $\mathbb{Z}_p$ on $S^1$ is $T_1(z)= \lambda z$ and for the complex projective space, the action of $\mathbb{Z}_p$ is given by $T([z_1:\cdots:z_p])= [z_1:\lambda z_2:\cdots:\lambda^{p-1}z_p]$. Thus we consider the product of $S^1$ with the cone of $\mathbb{C} P^{p-1}$, equipped with the diagonal action of $\mathbb{Z}_p$. From \eqref{co3}, the boundary satisfies $\beta(\alpha_{2p})=\alpha_{2p-1}$. \end{itemize} Thom's counterexample is \begin{equation} \xi_p:=\alpha_{2}\times \alpha_{2p-1}+\alpha_{1}\times \alpha_{2p} \in H_{2p+1}(X)\,. \end{equation} Recall that the cohomology of the Lens space is $H^*(L^{2p+1};\mathbb{Z}_p)\cong\mathbb{Z}_p[u,\nu]/(\nu^2=0,u^{p+1}=0)$, with $|\nu|=1$, $|u|=2$ and $u=\beta(\nu)$. 
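As a quick consistency check of the genus claimed in item (i): for a degree-$p$ cyclic branched cover with exactly $p$ fixed points whose quotient is (as here) a sphere, the Riemann-Hurwitz formula gives $2-2g = p\cdot 2 - p(p-1)$, i.e.~$g=(p-1)(p-2)/2$. A two-line verification (our illustration, assuming the quotient is a sphere and the fixed points lie in distinct orbits):
\begin{verbatim}
for p in (3, 5, 7, 11, 13):
    g = (p*(p - 1) - 2*p + 2) // 2      # solve 2 - 2g = 2p - p(p - 1)
    assert g == (p - 1)*(p - 2)//2      # matches the genus of alpha_2
\end{verbatim}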
The Poincar\'e duality correspondence is $\alpha_1\longleftrightarrow u^p$ and $\alpha_2\longleftrightarrow \nu u^{p-1}$, which agrees with the Bocksteins $\beta(\alpha_2)=\alpha_1$ and $\beta(\nu u^{p-1})=u^p$; conversely, we have $u\longleftrightarrow \alpha_{2p-1}$ and $\nu \longleftrightarrow\alpha_{2p}$, which also agrees with the Bocksteins $\beta(\alpha_{2p})=\alpha_{2p-1}$ and $\beta(\nu)=u$. Therefore, we apply the Poincar\'e duality isomorphism $D$ to Thom's counterexample: \begin{align} D(X_p)=D(u_1\nu_2u_2^{p-1}-\nu_1u_2^p)&=[L_1^{2p+1}\times L_2^{2p+1}]\cap (u_1\times\nu_2u_2^{p-1})- [L_1^{2p+1}\times L_2^{2p+1}]\cap(\nu_1\times u_2^p)\\ & = D(u_1)\times D(\nu_2u_2^{p-1})+D(\nu_1)\times D(u_2^p)\\ & = \alpha_{2p-1}\times \alpha_2+\alpha_{2p}\times \alpha_1=\xi_p\,. \end{align} \section{Stratifolds and the AHSS} \label{sec2} Stratifolds, which were introduced by Kreck \cite{Kre} and generalize the notion of a smooth manifold, provide a positive solution for the Steenrod realization problem. For that matter, in the category of $CW$ complexes, there is a stratifold bordism theory $SH_*$ with a natural isomorphism $\varphi:SH_*\longrightarrow H_*$, where $H_*$ is homology with integral coefficients. A stratifold $S$ of dimension $n$ is a topological space with a sheaf of functions, which divides the topological space into strata $S_0,\cdots, S_n$, where each stratum $S_i$ is an $i$-dimensional manifold. The singular part of the stratifold $S$ is given by the union $\operatorname{sing}(S)=\bigsqcup^{n-1}_{i=0}{S_i}$. When we have stratifolds with boundary, they come with a germ of collars, which gives a well-defined gluing of stratifolds along a common boundary. In addition, we require a regularity condition for stratifolds, with the aim of satisfying the Eilenberg-Steenrod axioms of a generalized homology theory, with the subtle change that the excision axiom is substituted by the Mayer-Vietoris property. We consider stratifolds which are constructed inductively by attaching manifolds together, using the germs of collars and attaching maps. These stratifolds are called parametrized stratifolds or p-stratifolds. The stratifold bordism theory $SH_*$ is composed of oriented stratifolds where the top stratum $S_n$ is an oriented manifold and the stratum $S_{n-1}$ is empty. Notwithstanding the simple definition, many constructions in algebraic topology become simple and intuitive in the language of stratifolds. For instance, the Atiyah-Hirzebruch spectral sequence $(E^r_{s,t},d^r_{s,t})$ has a geometric description \cite{Hag} using a Postnikov tower $\Omega^{(r)}$. The bordism theory $\Omega^{(r)}$ is composed of oriented p-stratifolds, with all strata of codimension $0<k<r+2$ empty. Thus a stratifold $S$ in $\Omega^{(r)}_n$ is an $n$-dimensional stratifold with singular part of dimension at most $(n-r-2)$. We put a similar restriction on the stratifold bordisms, which are $(n+1)$-dimensional stratifolds with boundary whose singular part is of dimension at most $(n-r-1)$. We have maps $\Omega_n\rightarrow\Omega_n^{(r)}$ which are isomorphisms for $n\leq r$, and $\Omega_n^{(r)}$ trivial for $n>r$ ($\Omega_n$ stands for $\Omega_n(*)$, the $n$-th coefficient group). Among other properties, for a $CW$ complex $X$, with $k$-th skeleton $X^k$, we obtain that $\Omega_n^{(r)}(X^k)$ is trivial for $k+r< n$. 
For $r\geq 2$, there are natural isomorphisms \begin{equation} E^{r}_{s,t}\cong\operatorname{Im}( \Omega^{(t+r-2)}_{s+t}( X^s ) \longrightarrow \Omega^{(t)}_{s+t}( X^{s+r-1} ))\,,\end{equation} and the differential $d^r_{s,t}:E^r_{s,t}\longrightarrow E^r_{s-r,t+r-1}$ is induced by the following commutative diagram \begin{equation} \xymatrix{&\Omega_{s+t}^{(t+r-2)}(X^s)\ar[r]\ar[d]_\Phi&\Omega_{s+t}^{(t)}(X^{s+r-1})\ar[d]^\Phi\\ &\Omega_{s+t-1}(X^{s-r+1})\ar[d]\ar[r]&\Omega_{s+t-1}(X^{s-1})\ar[d]\\ \Omega_{s+t-1}^{(t+2r-3)}(X^{s-r})\ar[r]&\Omega_{s+t-1}^{(t+2r-3)}(X^{s-r+1})\ar[r]&\Omega_{s+t-1}^{(t+r-1)}(X^{s-1}) \,,} \end{equation} where $\Phi$ is a natural transformation defined by \begin{equation} \Omega_n^{(r)}(X)\rightarrow\Omega_n^{(r)}(X,X^{n-r-1})\stackrel{\cong}{\rightarrow}\Omega_n(X,X^{n-r-1})\rightarrow\Omega_{n-1}(X^{n-r-1})\,. \end{equation} The isomorphism $\Omega_n^{(r)}(X,X^{n-r-1})\stackrel{\cong}{\rightarrow}\Omega_n(X,X^{n-r-1})$ is given by the restriction to the top stratum and the map $\Omega_n(X,X^{n-r-1})\rightarrow\Omega_{n-1}(X^{n-r-1})$ is the boundary homomorphism. Therefore, for a stratifold $S$ of dimension $s+t$, with a map $f:S\rightarrow X^s$, the image of the differential $d^r_{s,t}$ is induced by \begin{equation} \label{wef}[f:S\rightarrow X^s]\longmapsto [g\circ f|_{\partial W}:\partial W\rightarrow X^{s-1}]\,,\end{equation} where $W$ is the top stratum of $S$ and $g:\partial W\rightarrow\operatorname{sing}(S)$ is the attaching map used to glue $W$ to the singular part $\operatorname{sing}(S)$. The obstruction to the realizability of Thom's counterexample is found in the fifth differential $d^5_{2p+1,0}:E^5_{2p+1,0}\rightarrow E^5_{2p-4,4}$, which has the form \begin{equation} \xymatrix{\operatorname{Im}\left( \Omega^{(3)}_{2p+1}\left( X^{2p+1} \right) \longrightarrow \Omega^{(0)}_{2p+1}\left( X^{2p+5} \right) \right)\ar[d]_{d^5}\\ \operatorname{Im}\left( \Omega^{(7)}_{2p}\left( X^{2p-4} \right) \longrightarrow \Omega^{(4)}_{2p}\left( X^{2p} \right) \right)\,.} \end{equation} Thus Thom's counterexample is represented by an element in $\Omega^{(3)}_{2p+1}\left( X^{2p+1}\right)$. This is a stratifold of dimension $2p+1$, whose top stratum is constructed by first gluing the following manifolds: \begin{itemize} \item[(i)] $V_1:=(M^2-pD^2)\times S^{2p-1}$, where $M^2$ is the 2-dimensional manifold representing $\alpha_2$, $``-pD^2"$ denotes the removal of a small disc around each of the $p$ fixed points, and $S^{2p-1}$ is the $(2p-1)$-dimensional sphere. Notice that $V_1$ has the boundary $\partial V_1=pS^1\times S^{2p-1}=S^1\times pS^{2p-1}$. The action of $\mathbb{Z}_p$ on $M^2$ was described before, while for the sphere, the action is $T'(z_1,\cdots,z_p)=(\lambda z_1,\cdots,\lambda z_p)$, where $\lambda=\exp(2\pi i/p)$. \item[(ii)] $V_2:=S^1\times S^1\times \mathbb{C} P^{p-1}\times [0,1]$, where $S^1$ is the circle and $\mathbb{C} P^{p-1}$ is the complex projective space of dimension $p-1$. The action of $\mathbb{Z}_p$ on $\mathbb{C} P^{p-1}$ was described before, while for the circles, the action is $T_1(z)=\lambda z$. \end{itemize} Now, by \eqref{co3}, there is a bordism $N$ between $S^1\times pS^{2p-1}$ and $S^1\times S^1\times \mathbb{C} P^{p-1}$. Consider the gluing $V_1\sqcup_N V_2$ afforded by this bordism. The boundary $S^1\times S^1\times \mathbb{C} P^{p-1}$ has an action of $\mathbb{Z}_p\times \mathbb{Z}_p$, which can be considered of the form $T_1\times T_1\times \operatorname{id}$, where $T_1$ is the action on the circle. 
Thus the top stratum of the stratifold is given by the quotient $W=(V_1\sqcup_N V_2)/(\mathbb{Z}_p\times \mathbb{Z}_p)$. The stratum of dimension $i$, for $2<i<2p+1$ and $i=0,1$, is empty. For dimension $i=2$, the stratum is $S^1\times S^1$, hence the singular part is the torus $S^1\times S^1$. Furthermore, the attaching map used for gluing the top stratum $W$ to the singular part is given by \begin{equation} g:S^1\times S^1\times \mathbb{C} P^{p-1}\longrightarrow S^1\times S^1 \times \{*\} \,,\end{equation} which projects $\mathbb{C} P^{p-1}$ to a point. As a consequence, the stratifold has singular part given by a torus whose neighborhood is a cone on $\mathbb{C} P^{p-1}$. Recall that $X=B(\mathbb{Z}_p\times \mathbb{Z}_p)$. From \eqref{wef}, the image of the fifth differential applied to the previous stratifold is \begin{equation}\label{sdfg} \left[S^1\times S^1\times \mathbb{C} P^{p-1}\stackrel{g}{\longrightarrow} S^1\times S^1\stackrel{f}{\longrightarrow} X^{2p}\right]\,, \end{equation} which is an element in $\Omega^{(4)}_{2p}\left( X^{2p}\right)$. The AHSS has the form $d^5:H_{2p+1}(X;\Omega_0)\longrightarrow H_{2p-4}(X;\Omega_4)$, since $E^2\cong\cdots \cong E^5$. Thus \eqref{sdfg} must be expressed in terms of an element of $\Omega_4$. For this purpose, we use the identities \eqref{cofo} and \eqref{co3}, obtaining the following identity in $\Omega_{2p}(B\mathbb{Z}_p \times B\mathbb{Z}_p )$, \begin{equation}\label{jkl} S^1 \times (S^1 \times \mathbb{CP}^{p-1} ) = -\left(\left(M^4\times S^1 \times S^{2p-5}\right)\cup \left(M^8\times S^1\times S^{2p-9}\right) \cup \cdots \right)\,.\end{equation} However, in $H_{2p-4}(X;\Omega_4)\cong\operatorname{Im}\left( \Omega^{(7)}_{2p}\left( (B\mathbb{Z}_p \times B\mathbb{Z}_p)^{2p-4} \right) \longrightarrow \Omega^{(4)}_{2p}\left( (B\mathbb{Z}_p \times B\mathbb{Z}_p)^{2p} \right) \right)$, we can cone off $M^{4k}$ for $k>1$, since the bordism produced by these cones is an allowed bordism in $\Omega^{(7)}_{2p}\left( (B\mathbb{Z}_p \times B\mathbb{Z}_p)^{2p-4} \right)$. Thus equation \eqref{jkl} becomes $S^1 \times (S^1 \times \mathbb{CP}^{p-1} ) = -\left(M^4\times S^1 \times S^{2p-5}\right)$ in $H_{2p-4}(X;\Omega_4)$. From Section \ref{georea}, the element $S^1\times S^{2p-5}$ is one of the generators of $H_{2p-4}(X)$. Furthermore, the manifold $M^4$ is not trivial in $\Omega_4$. Therefore, we conclude that Thom's counterexample is not realizable. \bibliographystyle{amsalpha}
{ "timestamp": "2021-05-06T02:07:05", "yymm": "2105", "arxiv_id": "2105.01806", "language": "en", "url": "https://arxiv.org/abs/2105.01806" }
\section{Introduction} In this paper, we consider optimization problems such as: \begin{itemize} \item \textsc{Maximum} $r$-\textsc{Independent Set}, $r \in \mathbb{Z}^+$: Given a graph $G$, the objective is to find a largest subset $X \subseteq V(G)$ such that the distance in $G$ between any two distinct vertices in $X$ is at least $r$. \item \textsc{Maximum weight induced forest}: Given a graph $G$ and an assignment $w:V(G)\to\mathbb{Z}_0^+$ of non-negative weights to vertices, the objective is to find a subset $X \subseteq V(G)$ such that $G[X]$ does not contain a cycle and, subject to that, $w(X)\coloneqq\sum_{v\in X} w(v)$ is maximized. \item \textsc{Maximum} $(F,r)$-\textsc{Matching}, for a fixed connected graph $F$ and $r \in \mathbb{Z}^+$: Given a graph $G$, the objective is to find a largest subset $X \subseteq V(G)$ such that $G[X]$ can be partitioned into vertex-disjoint copies of $F$ such that the distance in $G$ between any two vertices belonging to different copies is at least $r$. \end{itemize} To be precise, to fall into the scope of our work, the problem must satisfy the following conditions: \begin{itemize} \item It must be a \textbf{maximization problem on certain subsets of vertices} of an input graph, possibly with non-negative weights. That is, the problem specifies which subsets of vertices of the input graph are \emph{admissible}, and the goal is to find an admissible subset of largest size or weight. \item The problem must be \textbf{defined in terms of distances between the vertices, up to some fixed bound}. That is, there exists a parameter $r\in \mathbb{Z}^+$ such that for any graphs $G$ and $G'$, sets $X\subseteq V(G)$ and $X'\subseteq V(G')$, and a bijection $f:X\to X'$, if $\min(r,d_G(u,v))=\min(r,d_{G'}(f(u),f(v)))$ holds for all $u,v\in X$, then $X$ is admissible in $G$ if and only if $X'$ is admissible in $G'$. \item The problem must be \textbf{monotone} (i.e., all subsets of an admissible set must be admissible), or at least \textbf{near-monotone} (as happens for example for \textsc{Maximum} $(F,r)$-\textsc{Matching}) in the following sense: There exists a parameter $c\in \mathbb{Z}^+$ such that for any admissible set $A$ in a graph $G$, there exists a system $\{R_v\subseteq A:v\in A\}$ of subsets of $A$ such that every vertex belongs to $R_v$ for at most $c$ vertices $v\in A$, $v\in R_v$ for each $v\in A$, and for any $Z\subseteq A$, the subset $A\setminus \bigcup_{v\in Z} R_v$ is admissible in $G$. \item The problem must be \textbf{tractable in graphs of bounded treewidth}, that is, there must exist a function $g$ and a polynomial $p$ such that given any graph $G$, its tree decomposition of width $t$, an assignment $w$ of non-negative weights to the vertices of $G$, and a set $X_0\subseteq V(G)$, it is possible to find a maximum-weight admissible subset of $X_0$ in time $g(t)p(|V(G)|)$. \end{itemize} Let us call such problems \emph{$(\le\!r)$-distance determined $c$-near-monotone $(g,p)$-tw-tractable}. Note that a convenient way to verify these assumptions is to show that the problem is expressible in \emph{solution-restricted Monadic Second-Order Logic} ($\msol$) \emph{with bounded-distance predicates}, i.e., by a $\msol$ formula with one free variable $X$ such that the quantification is restricted to subsets and elements of $X$, and using binary predicates $d_1$, \ldots, $d_r$, where $d_i(u,v)$ is interpreted as testing whether the distance between $u$ and $v$ in the whole graph is at most $i$. 
This ensures that the problem is $(\le\!r)$-distance determined, and $(g,O(n))$-tw-tractable for some function $g$ by Courcelle's meta-algorithmic result~\cite{Courcelle90}. Of course, the problems satisfying the assumptions outlined above are typically hard to solve optimally, even in rather restrictive circumstances. For example, \textsc{Maximum Independent Set} is $\np$-hard even in planar graphs of maximum degree at most $3$ and arbitrarily large (fixed) girth~\cite{AlekseevLMM08}. Moreover, it is hard to approximate it within a factor of $0.995$ in graphs of maximum degree at most three~\cite{BermanK99}. Hence, to obtain polynomial-time approximation schemes ($\ptas$), i.e., polynomial-time algorithms for approximating within any fixed precision, further restrictions on the considered graphs are needed. A natural restriction that has been considered in this context is the requirement that the graphs have sublinear separators (a set $S$ of vertices of a graph $G$ is a \emph{balanced separator} if every component of $G\setminus S$ has at most $|V(G)|/2$ vertices, and a hereditary class $\mathcal{G}$ of graphs has \emph{sublinear separators} if for some $c<1$, every graph $G \in \mathcal{G}$ has a balanced separator of size $O(|V(G)|^c)$). This restriction still lets us speak about many interesting graph classes (planar graphs~\cite{LiptonT79} and more generally proper minor-closed classes~\cite{AlonST90}, many geometric graph classes~\cite{MillerTTV97}, \ldots). Moreover, the problems discussed above admit $\ptas$ in all classes with sublinear separators or at least in substantial subclasses of these graphs: \begin{itemize} \item \textsc{Maximum Independent Set} has been shown to admit $\ptas$ in graphs with sublinear separators already in the foundational paper of Lipton and Tarjan~\cite{LiptonT80}. \item For any positive integer $r$, \textsc{Maximum} $r$-\textsc{Independent Set} and several other problems are known to admit $\ptas$ in graphs with sublinear separators by a straightforward local search algorithm~\cite{Har-PeledQ17}. \item All of the problems mentioned above (and more) are known to admit $\ptas$ in planar graphs by a layering argument of Baker~\cite{Baker94}; this approach can be extended to some related graph classes, including all proper minor-closed classes~\cite{DawarGKS06,Dvorak20}. \item The problems also admit $\ptas$ in graph classes that admit thin systems of overlays~\cite{Dvorak18}, a technical property satisfied by all proper minor-closed classes and by all hereditary classes with sublinear separators and bounded maximum degree. \item Bidimensionality arguments~\cite{DemaineH05} apply to a wide range of problems in proper minor-closed graph classes. \end{itemize} However, each of the outlined approaches has drawbacks. On one side, the local search approach only applies to specific problems and does not work at all in the weighted setting. On the other side of the spectrum, Baker's approach is quite general as far as the problems go, but there are many hereditary graph classes with sublinear separators to which it does not seem to apply. The approach through thin systems of overlays tries to balance these concerns, but it is rather technical and establishing this property is difficult. Another option that has been explored is via \emph{fractional treewidth-fragility}. 
For a function $f \colon \mathbb{Z}^+ \times \mathbb{Z}^+ \to \mathbb{Z}^+$ and a polynomial $p$, a class of graphs $\mathcal{G}$ is \emph{$p$-efficiently fractionally treewidth-$f$-fragile} if there exists an algorithm that for every $k \in \mathbb{Z}^+$ and every graph $G \in \mathcal{G}$ returns in time $p(|V(G)|)$ a collection of subsets $X_1, X_2, \dots, X_m \subseteq V(G)$ such that each vertex of $G$ belongs to at most $m/k$ of the subsets, and moreover, for $i=1,\ldots,m$, the algorithm also returns a tree decomposition of $G \setminus X_i$ of width at most $f(k, |V(G)|)$. We say a class is \emph{$p$-efficiently fractionally treewidth-fragile} if $f$ does not depend on its second argument (the number of vertices of $G$). This property turns out to hold for basically all known natural graph classes with sublinear separators. In particular, a hereditary class $\mathcal{G}$ of graphs is efficiently fractionally treewidth-fragile if \begin{itemize} \item $\mathcal{G}$ has sublinear separators and bounded maximum degree~\cite{Dvorak16}, \item $\mathcal{G}$ is proper minor-closed~\cite{DeVosDOSRSV04, Dvorak20}, or \item $\mathcal{G}$ consists of intersection graphs of convex objects with bounded aspect ratio in a finite-dimensional Euclidean space and the graphs have bounded clique number, as can be seen by a modification of the argument of Erlebach et al.~\cite{ErlebachJS05}. This includes all graph classes with polynomial growth~\cite{KrauthgamerL07}. \end{itemize} In fact, Dvo\v{r}\'ak conjectured that every hereditary class with sublinear separators is fractionally treewidth-fragile, and gave the following result towards this conjecture. \begin{theorem}[Dvo\v{r}\'ak~\cite{Dvorak18a}]\label{thm-pll} There exists a polynomial $p$ so that the following claim holds. For every hereditary class $\mathcal{G}$ of graphs with sublinear separators, there exists a polynomial $q$ such that $\mathcal{G}$ is $p$-efficiently fractionally treewidth-$f$-fragile for the function $f(k,n)=q(k\log n)$. \end{theorem} Moreover, Dvo\v{r}\'ak~\cite{Dvorak16} observed that weighted \textsc{Maximum Independent Set} admits a $\ptas$ in any efficiently fractionally treewidth-fragile class of graphs. Indeed, the algorithm is quite simple, based on the observation that for the sets $X_1$, \ldots, $X_m$ from the definition of fractional treewidth-fragility, at least one of the graphs $G \setminus X_1$, \ldots, $G \setminus X_m$ (of bounded treewidth) contains an independent set whose weight is within the factor of $1-1/k$ from the optimal solution. A problem with this approach is that it does not seem to generalize to more general problems; even for the \textsc{Maximum $2$-Independent Set} problem, the approach fails, since a $2$-independent set in $G \setminus X_i$ is not necessarily $2$-independent in $G$. Indeed, this observation served as one of the motivations behind the more restrictive (and more technical) concepts employed in~\cite{Dvorak18,Dvorak20}. As our main result, we show that this intuition is in fact false: There is a simple way to extend the approach outlined in the previous paragraph to all bounded distance determined near-monotone tw-tractable problems. \begin{theorem}\label{thm-main} For every class $\mathcal{G}$ of graphs with bounded expansion, there exists a function $h:\mathbb{Z}^+\times\mathbb{Z}^+\to\mathbb{Z}^+$ such that the following claim holds.
Let $c$ and $r$ be positive integers, $g:\mathbb{Z}^+\to\mathbb{Z}^+$ and $f:\mathbb{Z}^+\times\mathbb{Z}^+\to\mathbb{Z}^+$ functions, and $p$ and $q$ polynomials. If $\mathcal{G}$ is $q$-efficiently fractionally treewidth-$f$-fragile, then for every $(\le\!r)$-distance determined $c$-near-monotone $(g,p)$-tw-tractable problem, there exists an algorithm that given a graph $G\in \mathcal{G}$, an assignment of non-negative weights to vertices, and a positive integer $k$, returns in time $h(r,c)|V(G)|+q(|V(G)|)\cdot p(|V(G)|)\cdot g(f(h(r,c)k,|V(G)|))$ an admissible subset of $V(G)$ whose weight is within the factor of $1-1/k$ from the optimal one. \end{theorem} Note that the assumption that $\mathcal{G}$ has bounded expansion is of little consequence---it is true for any hereditary class with sublinear separators~\cite{DvorakN16} as well as for any fractionally treewidth-fragile class~\cite{Dvorak16}; see Section~\ref{sec-dist} for more details. The time complexity of the algorithm from Theorem~\ref{thm-main} is polynomial if $f$ does not depend on its second argument, and quasipolynomial (exponential in a polylogarithmic function) if $f$ is polylogarithmic in its second argument and $g$ is single-exponential (i.e., if $\log \log g(n)=O(\log n)$). Hence, we obtain the following corollaries. \begin{corollary} Let $c$ and $r$ be positive integers, $g:\mathbb{Z}^+\to\mathbb{Z}^+$ a function and $p$ a polynomial. Every $(\le\!r)$-distance determined $c$-near-monotone $(g,p)$-tw-tractable problem admits a $\ptas$ in any efficiently fractionally treewidth-fragile class of graphs. \end{corollary} We say a problem admits a \emph{quasipolynomial-time approximation scheme} ($\mathsf{QPTAS}$) if there exist quasipolynomial-time algorithms for approximating the problem within any fixed precision. Combining Theorems~\ref{thm-pll} and \ref{thm-main}, we obtain the following result. \begin{corollary} Let $c$ and $r$ be positive integers, $g:\mathbb{Z}^+\to\mathbb{Z}^+$ a single-exponential function, and $p$ a polynomial. Every $(\le\!r)$-distance determined $c$-near-monotone $(g,p)$-tw-tractable problem admits a $\mathsf{QPTAS}$ in any hereditary class of graphs with sublinear separators. \end{corollary} The idea of the algorithm from Theorem~\ref{thm-main} is quite simple: We consider the sets $X_1, \ldots, X_m$ from the definition of fractional treewidth-$f$-fragility, extend them to suitable supersets $Y_1$, \ldots, $Y_m$, and argue that for $i=1,\ldots, m$, any admissible set in $G \setminus X_i$ disjoint from $Y_i$ is also admissible in $G$, and that for some $i$, the weight of the heaviest admissible set in $G \setminus X_i$ disjoint from $Y_i$ is within the factor of $1-1/k$ from the optimal one. The construction of the sets $Y_1$, \ldots, $Y_m$ is based on the existence of orientations with bounded outdegrees that represent all short paths, a result of independent interest that we present in Section~\ref{sec-dist}. Let us remark that one can develop the idea of this paper in further directions. Dvo\v{r}\'ak proved in~\cite{Dvorak_new} (via a substantially more involved argument) that every monotone maximization problem expressible in first-order logic admits a $\ptas$ in any efficiently fractionally treewidth-fragile class of graphs.
Note that this class of problems is incomparable with the one considered in this paper (e.g., \textsc{Maximum Induced Forest} is not expressible in first-order logic, while the variant of \textsc{Maximum Independent Set} in which all chosen vertices must belong to triangles is expressible in first-order logic but does not fall into the scope of the current paper). Finally, it is worth mentioning that our results only apply to maximization problems. We were able to extend the previous uses of fractional treewidth-fragility by giving a way to handle dependencies over any bounded distance. However, for minimization problems, we do not know whether fractional treewidth-fragility is sufficient even for distance-$1$ problems. For a simple example, consider the \textsc{Minimum Vertex Cover} problem in fractionally treewidth-fragile graphs, or more generally in hereditary classes with sublinear separators. While the unweighted version can be dealt with by the local search method~\cite{Har-PeledQ17}, we do not know whether there exists a $\ptas$ for the weighted version of this problem. \section{Paths and orientations in graphs with bounded expansion}\label{sec-dist} For $r \in \mathbb{Z}^+_0$, a graph $H$ is an \emph{$r$-shallow minor} of a graph $G$ if $H$ can be obtained from a subgraph of $G$ by contracting pairwise vertex-disjoint connected subgraphs, each of radius at most $r$. For a function $f \colon \mathbb{Z}^+_0 \to \mathbb{Z}^+$, a class $\mathcal{G}$ of graphs has \emph{expansion bounded} by $f$ if for all non-negative integers $r$, all $r$-shallow minors of graphs from $\mathcal{G}$ have average degree at most $f(r)$. A class has bounded expansion if its expansion is bounded by some function $f$. The theory of graph classes with bounded expansion has been developed in the last 15 years, and the concept has found many algorithmic and structural applications; see~\cite{NesetrilM12} for an overview. Crucially for us, this theory includes a number of tools for dealing with short paths. Moreover, as we have pointed out before, all hereditary graph classes with sublinear separators~\cite{DvorakN16} as well as all fractionally treewidth-fragile classes~\cite{Dvorak16} have bounded expansion. Let $\vec{G}$ be an orientation of a graph $G$, i.e., $uv$ is an edge of $G$ if and only if the directed graph $\vec{G}$ contains at least one of the directed edges $(u,v)$ and $(v,u)$; note that we allow $\vec{G}$ to contain both of them at the same time, and thus for the edge $uv$ to be oriented in both directions. We say that a directed graph $\vec{H}$ with the same vertex set is a \emph{$1$-step fraternal augmentation of $\vec{G}$} if $\vec{G}\subseteq \vec{H}$, for all distinct edges $(x,y),(x,z)\in E(\vec{G})$, either $(y,z)$ or $(z,y)$ is an edge of $\vec{H}$, and for each edge $(y,z)\in E(\vec{H})\setminus E(\vec{G})$, there exists a vertex $x\in V(\vec{G})\setminus \{y,z\}$ such that $(x,y),(x,z)\in E(\vec{G})$. That is, to obtain $\vec{H}$ from $\vec{G}$, for each pair of edges $(x,y),(x,z)\in E(\vec{G})$ we add an edge between $y$ and $z$ in one of the two possible directions (we do not specify the direction, but in practice we would choose directions of the added edges that minimize the maximum outdegree of the resulting directed graph).
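To make the augmentation operation concrete, the following Python sketch performs a single $1$-step fraternal augmentation on a directed graph stored as an out-adjacency dictionary. It is only an illustration under simplifying assumptions: vertices are assumed to be comparable (e.g., integers), and the direction of each added edge is chosen by a greedy outdegree heuristic, which by itself does not guarantee the outdegree bounds discussed below.

\begin{verbatim}
def fraternal_step(out_edges):
    # out_edges: dict mapping each vertex to the set of its out-neighbours.
    # First collect the pairs {y, z} that have a common in-neighbour x but
    # are not yet adjacent; only afterwards add the missing edges.
    missing = set()
    for x, succ in out_edges.items():
        for y in succ:
            for z in succ:
                if y < z and z not in out_edges[y] and y not in out_edges[z]:
                    missing.add((y, z))
    for y, z in missing:
        # Greedy heuristic: orient the new edge towards the endpoint whose
        # current outdegree is smaller.
        if len(out_edges[y]) <= len(out_edges[z]):
            out_edges[y].add(z)
        else:
            out_edges[z].add(y)
    return out_edges

# Example: out-edges 0 -> 1 and 0 -> 2; the step adds one edge between
# the fraternal pair {1, 2}.
fraternal_step({0: {1, 2}, 1: set(), 2: set()})
\end{verbatim}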
For an integer $a\ge 0$, we say $\vec{F}$ is an \emph{$a$-step fraternal augmentation of $\vec{G}$} if there exists a sequence $\vec{G}=\vec{G}_0,\vec{G}_1,\ldots,\vec{G}_a=\vec{F}$ where for $i=1,\ldots, a$, $\vec{G}_i$ is a $1$-step fraternal augmentation of $\vec{G}_{i-1}$. We say $\vec{F}$ is an $a$-step fraternal augmentation of an undirected graph $G$ if $\vec{F}$ is an $a$-step fraternal augmentation of some orientation of $G$. A key property of graph classes with bounded expansion is the existence of fraternal augmentations with bounded outdegrees. Let us remark that whenever we speak about an algorithm returning an $a$-step fraternal augmentation $\vec{H}$ or taking one as an input, this implicitly includes outputting or taking as an input the whole sequence of $1$-step fraternal augmentations ending in $\vec{H}$. \begin{lemma}[Ne\v{s}et\v{r}il and Ossona de Mendez~\cite{NesetrilM08a}]\label{lemma-frat} For every class $\mathcal{G}$ with bounded expansion, there exists a function $d:\mathbb{Z}^+_0\to\mathbb{Z}^+$ such that for each $G\in\mathcal{G}$ and each non-negative integer $a$, the graph $G$ has an $a$-step fraternal augmentation of maximum outdegree at most $d(a)$. Moreover, such an augmentation can be found in time $O(d(a)|V(G)|)$. \end{lemma} As shown already in~\cite{NesetrilM08a}, fraternal augmentations can be used to succinctly represent distances between vertices of the graph. For the purposes of this paper, we need a more explicit representation by an orientation of the original graph (without the additional augmentation edges). By a \emph{walk} in a directed graph $\vec{G}$, we mean a sequence $W=v_0v_1v_2\ldots v_b$ such that for $i=1,\ldots,b$, $(v_{i-1},v_i)\in E(\vec{G})$ or $(v_i,v_{i-1})\in E(\vec{G})$; that is, the walk does not have to respect the orientation of the edges. The walk $W$ is \emph{inward-directed} if for some $c\in\{0,\ldots,b\}$, we have $(v_i,v_{i+1})\in E(\vec{G})$ for $i=0,\ldots,c-1$ and $(v_i,v_{i-1})\in E(\vec{G})$ for $i=c+1,\ldots,b$. For a positive integer $r$, an orientation $\vec{G}$ of a graph $G$ \emph{represents $(\le\!r)$-distances} if for each $u,v\in V(G)$ and each $b\in\{0,\ldots,r\}$, the distance between $u$ and $v$ in $G$ is at most $b$ if and only if $\vec{G}$ contains an inward-directed walk of length at most $b$ between $u$ and $v$. Note that given such an orientation with bounded maximum outdegree for a fixed $r$, we can determine the distance between $u$ and $v$ (up to distance $r$) by enumerating all (constantly many) walks of length at most $r$ directed away from $u$ and away from $v$ and inspecting their intersections. Our goal now is to show that graphs from classes with bounded expansion admit orientations with bounded maximum outdegree that represent $(\le\!r)$-distances. Let us define a more general notion used in the proof of this claim, adding to the fraternal augmentations the information about the lengths of the walks in the original graph represented by the added edges. A \emph{directed graph with $(\le\!r)$-length sets} is a pair $(\vec{H},\ell)$, where $\vec{H}$ is a directed graph and $\ell$ is a function assigning a subset of $\{1,\ldots,r\}$ to each \emph{unordered} pair $\{u,v\}$ of vertices of $\vec{H}$, such that if neither $(u,v)$ nor $(v,u)$ is an edge of $\vec{H}$, then $\ell(\{u,v\})=\emptyset$. We say that $(\vec{H},\ell)$ is an \emph{orientation} of a graph $G$ if $G$ is the underlying undirected graph of $\vec{H}$ and $\ell(\{u,v\})=\{1\}$ for each $uv\in E(G)$.
We say that $(\vec{H},\ell)$ is an \emph{$(\le\!r)$-augmentation} of $G$ if $V(\vec{H})=V(G)$, for each $uv\in E(G)$ we have $1\in\ell(\{u,v\})$, and for each $u,v\in V(G)$ and $b\in \ell(\{u,v\})$ there exists a walk of length $b$ from $u$ to $v$ in $G$. Let $(\vec{H}_1,\ell_1)$ be another directed graph with $(\le\!r)$-length sets. We say $(\vec{H}_1,\ell_1)$ is a \emph{$1$-step fraternal augmentation} of $(\vec{H},\ell)$ if $\vec{H}_1$ is a $1$-step fraternal augmentation of $\vec{H}$ and for all distinct $u,v\in V(\vec{H})$ and $b\in\{1,\ldots,r\}$, we have $b\in\ell_1(\{u,v\})$ if and only if $b\in\ell(\{u,v\})$ or there exist $x\in V(\vec{H})\setminus\{u,v\}$, $b_1\in \ell(\{x,u\})$, and $b_2\in \ell(\{x,v\})$ such that $(x,u),(x,v)\in E(\vec{H})$ and $b=b_1+b_2$. Note that a $1$-step fraternal augmentation of an $(\le\!r)$-augmentation of a graph $G$ is again an $(\le\!r)$-augmentation of $G$. The notion of an $a$-step fraternal augmentation of a graph $G$ is then defined in the natural way, by starting with an orientation of $G$ and performing the $1$-step fraternal augmentation operation $a$ times. Let us now restate Lemma~\ref{lemma-frat} in these terms (we just need to maintain the edge length sets, which can be done with $O(a^2)$ overhead per operation). \begin{lemma}\label{lemma-lensets} Let $\mathcal{G}$ be a class of graphs with bounded expansion, and let $d:\mathbb{Z}^+_0\to\mathbb{Z}^+$ be the function from Lemma~\ref{lemma-frat}. For each $G\in\mathcal{G}$ and each non-negative integer $a$, we can in time $O(a^2d(a)|V(G)|)$ construct a directed graph with $(\le\!a+1)$-length sets $(\vec{H},\ell)$ of maximum outdegree at most $d(a)$ such that $(\vec{H},\ell)$ is an $a$-step fraternal augmentation of $G$. \end{lemma} Let $(\vec{H},\ell)$ be an $(\le\!r)$-augmentation of a graph $G$. For $b\le r$, a \emph{length $b$ walk} in $(\vec{H},\ell)$ is a tuple $(v_0v_1\ldots v_t, b_1,\ldots, b_t)$, where $v_0v_1\ldots v_t$ is a walk in $\vec{H}$, $b_i\in\ell(\{v_{i-1},v_i\})$ for $i=1,\ldots,t$, and $b=b_1+\ldots+b_t$. Note that if there exists a length $b$ walk from $u$ to $v$ in $(\vec{H},\ell)$, then there also exists a walk of length $b$ from $u$ to $v$ in $G$. We say that $(\vec{H},\ell)$ \emph{represents $(\le\!r)$-distances} in $G$ if for all vertices $u,v\in V(G)$ at distance $b\le r$ from one another, $(\vec{H},\ell)$ contains an inward-directed length $b$ walk between $u$ and $v$. Next, we show that this property always holds after sufficiently many fraternal augmentation steps. \begin{lemma}\label{lemma-augment} Let $G$ be a graph and $r$ a positive integer and let $(\vec{H},\ell)$ be a directed graph with $(\le\!r)$-length sets. If $(\vec{H},\ell)$ is obtained as an $(r-1)$-step fraternal augmentation of $G$, then it represents $(\le\!r)$-distances in $G$. \end{lemma} \begin{proof} For $b\le r$, consider any length $b$ walk $W=(v_0v_1\ldots v_t, b_1,\ldots, b_t)$ in an $(\le\!r)$-augmentation $(\vec{H}_1,\ell_1)$ of $G$, and let $(\vec{H}_2,\ell_2)$ be a $1$-step fraternal augmentation of $(\vec{H}_1,\ell_1)$. Note that $W$ is also a length $b$ walk between $v_0$ and $v_t$ in $(\vec{H}_2,\ell_2)$. Suppose that $W$ is not inward-directed in $(\vec{H}_1,\ell_1)$, and thus there exists $i\in\{1,\ldots,t-1\}$ such that $(v_i,v_{i-1}),(v_i,v_{i+1})\in E(\vec{H}_1)$.
By the definition of $1$-step fraternal augmentation, this implies $b_i+b_{i+1}\in \ell_2(\{v_{i-1},v_{i+1}\})$, and thus $(v_0\ldots v_{i-1}v_{i+1}\ldots v_t, b_1,\ldots,b_i+b_{i+1},\ldots b_t)$ is a length $b$ walk from $v_0$ to $v_t$ in $(\vec{H}_2,\ell_2)$. Let $(\vec{G}_0,\ell_0)$, \ldots, $(\vec{G}_{r-1},\ell_{r-1})$ be a sequence of $(\le\!r)$-augmentations of $G$, where $(\vec{G}_0,\ell_0)$ is an orientation of $G$, $(\vec{G}_{r-1},\ell_{r-1})=(\vec{H},\ell)$, and for $i=1,\ldots, r-1$, $(\vec{G}_i,\ell_i)$ is a $1$-step fraternal augmentation of $(\vec{G}_{i-1},\ell_{i-1})$. Let $u$ and $v$ be any vertices at distance $b\le r$ in $G$, and let $P$ be a shortest path between them. Then $P$ naturally corresponds to a length $b$ walk $P_0$ in $(\vec{G}_0,\ell_0)$. For $i=1,\ldots, r-1$, if $P_{i-1}$ is inward-directed, then let $P_i=P_{i-1}$, otherwise let $P_i$ be a length $b$ walk in $(\vec{G}_i,\ell_i)$ obtained from $P_{i-1}$ as described in the previous paragraph. Since each application of the operation decreases the number of vertices of the walk, we conclude that $P_{r-1}$ is an inward-directed length $b$ walk between $u$ and $v$ in $(\vec{H},\ell)$. Hence, $(\vec{H},\ell)$ represents $(\le\!r)$-distances in $G$. \end{proof} Next, let us propagate this property back through the fraternal augmentations by orienting some of the edges in both directions. We say that $(\vec{H},\ell)$ is an \emph{$a$-step fraternal superaugmentation} of a graph $G$ if there exists an $a$-step fraternal augmentation $(\vec{F},\ell)$ of $G$ such that $V(\vec{F})=V(\vec{H})$, $E(\vec{F})\subseteq E(\vec{H})$ and for each $(u,v)\in E(\vec{H})\setminus E(\vec{F})$, we have $(v,u)\in E(\vec{F})$. We say that $(\vec{F},\ell)$ is a \emph{support} of $(\vec{H},\ell)$. \begin{lemma}\label{lemma-back} Let $G$ be a graph and $r$ a positive integer and let $(\vec{H},\ell)$ be an $(\le\!r)$-augmentation of $G$ of maximum outdegree $\Delta$ representing $(\le\!r)$-distances. For $a\ge 1$, suppose that $(\vec{H},\ell)$ is an $a$-step fraternal superaugmentation of $G$. Then we can in time $O(r^2\Delta|V(G)|)$ obtain an $(a-1)$-step fraternal superaugmentation of $G$ representing $(\le\!r)$-distances, of maximum outdegree at most $(r+1)\Delta$. \end{lemma} \begin{proof} Let $(\vec{F},\ell)$ be an $a$-step fraternal augmentation of $G$ forming a support of $(\vec{H},\ell)$, obtained as a $1$-step fraternal augmentation of an $(a-1)$-step fraternal augmentation $(\vec{F}_1,\ell_1)$ of $G$. Let $(\vec{H}_1,\ell_1)$ be the $(a-1)$-step fraternal superaugmentation of $G$ obtained from $(\vec{F}_1,\ell_1)$ as follows: \begin{itemize} \item For all distinct vertices $y,z\in V(G)$ such that $(y,z),(z,y)\in E(\vec{H})$, $(y,z)\in E(\vec{F}_1)$, and $(z,y)\not\in E(\vec{F}_1)$, we add the edge $(z,y)$. \item For each edge $(y,z)\in E(\vec{H})$ and integer $b\in\ell(\{y,z\})\setminus\ell_1(\{y,z\})$, we choose a vertex $x\in V(G)\setminus\{y,z\}$ such that $(x,y),(x,z)\in E(\vec{F}_1)$ and $b=b_1+b_2$ for some $b_1\in \ell_1(\{x,y\})$ and $b_2\in\ell_1(\{x,z\})$, and add the edge $(y,x)$. Note that such a vertex $x$ and integers $b_1$ and $b_2$ exist, since $b$ was added to $\ell(\{y,z\})$ when $(\vec{F},\ell)$ was obtained from $(\vec{F}_1,\ell_1)$ as a $1$-step fraternal augmentation. \end{itemize} Each edge $(y,x)\in E(\vec{H}_1)\setminus E(\vec{H})$ arises from an edge $(y,z)\in E(\vec{H})$ leaving $y$ and an element $b\in \ell(\{y,z\})\setminus\ell_1(\{y,z\})$, and each such pair contributes at most one edge leaving $y$.
Hence, the maximum outdegree of $\vec{H}_1$ is at most $(r+1)\Delta$. Consider a length $b$ inward-directed walk $(v_0v_1\ldots v_t,b_1,\ldots,b_t)$ in $\vec{H}$, for any $b\le r$. Then $\vec{H}_1$ contains a length $b$ inward-directed walk from $v_0$ to $v_t$ obtained by natural edge replacements: For any edge $(y,z)\in E(\vec{H})$ of this walk and $b'\in \ell(\{y,z\})$, the construction described above ensures that if $(y,z)\not\in E(\vec{H}_1)$ or $b'\not\in \ell_1(\{y,z\})$, then there exists $x\in V(G)\setminus\{y,z\}$ such that $(y,x),(x,z)\in E(\vec{H}_1)$ and $b'=b''+b'''$ for some $b''\in \ell_1(\{x,y\})$ and $b'''\in\ell_1(\{x,z\})$, and we can replace the edge $(y,z)$ in the walk by the edges $(y,x)$ and $(x,z)$ of $E(\vec{H}_1)$. Since $(\vec{H},\ell)$ represents $(\le\!r)$-distances in $G$, this transformation shows that so does $(\vec{H}_1,\ell_1)$. \end{proof} We are now ready to prove the main result of this section. \begin{lemma}\label{lem:orient} For any class $\mathcal{G}$ with bounded expansion, there exists a function $d':\mathbb{Z}^+\to\mathbb{Z}^+$ such that for each $G\in\mathcal{G}$ and each positive integer $r$, the graph $G$ has an orientation with maximum outdegree at most $d'(r)$ that represents $(\le\!r)$-distances in $G$. Moreover, such an orientation can be found in time $O(r^2d'(r)|V(G)|)$. \end{lemma} \begin{proof} Let $d$ be the function from Lemma~\ref{lemma-frat}, and let $d'(r)=(r+1)^{r-1}d(r-1)$. By Lemma~\ref{lemma-lensets}, we obtain an $(r-1)$-step fraternal augmentation $(\vec{H},\ell)$ of $G$ of maximum outdegree at most $d(r-1)$. By Lemma~\ref{lemma-augment}, $(\vec{H},\ell)$ represents $(\le\!r)$-distances in $G$. Repeatedly applying Lemma~\ref{lemma-back}, we obtain a $0$-step fraternal superaugmentation $(\vec{G},\ell_0)$ of $G$ of maximum outdegree at most $d'(r)$ representing $(\le\!r)$-distances. Clearly, $\vec{G}$ is an orientation of $G$ of maximum outdegree at most $d'(r)$ representing $(\le\!r)$-distances. \end{proof} \section{Approximation schemes} Let us now prove Theorem~\ref{thm-main}. To this end, let us start with a lemma to be applied to the sets arising from fractional treewidth-fragility. \begin{lemma}\label{lemma-avoid} Let $\vec{G}$ be an orientation of a graph $G$ with maximum outdegree $\Delta$. Let $A$ be a set of vertices of $G$ and for a positive integer $c$, let $\{R_v:v\in A\}$ be a system of subsets of $A$ such that each vertex belongs to at most $c$ of the subsets. For $X\subseteq V(G)$ and a positive integer $r$, let $D_{\vec{G},r}(X)$ be the union of the sets $R_v$ for all vertices $v\in A$ such that $\vec{G}$ contains a walk from $v$ to $X$ of length at most $r$ directed away from $v$. For a positive integer $k$, let $X_1$, \ldots, $X_m$ be a system of subsets of $V(G)$ such that each vertex belongs to at most $\frac{m}{c(\Delta+1)^rk}$ of the subsets. For any assignment $w$ of non-negative weights to vertices of $G$, there exists $i\in\{1,\ldots,m\}$ such that $w(A\setminus D_{\vec{G},r}(X_i))\ge (1-1/k)w(A)$. \end{lemma} \begin{proof} For a vertex $z\in A$, let $B(z)$ be the set of vertices reachable in $\vec{G}$ from vertices $v\in A$ such that $z\in R_v$ by walks of length at most $r$ directed away from $v$. Note that $|B(z)|\le c(\Delta+1)^r$ and that for each $X\subseteq V(G)$, we have $z\in D_{\vec{G},r}(X)$ if and only if $B(z)\cap X\neq \emptyset$. Suppose for a contradiction that for each $i$ we have $w(A\setminus D_{\vec{G},r}(X_i))<(1-1/k)w(A)$, and thus $w(D_{\vec{G},r}(X_i))>w(A)/k$.
Then \begin{align*} \frac{m}{k}w(A)&<\sum_{i=1}^m w(D_{\vec{G},r}(X_i))=\sum_{i=1}^m \sum_{z\in D_{\vec{G},r}(X_i)} w(z)=\sum_{i=1}^m \sum_{z\in A:B(z)\cap X_i\neq\emptyset} w(z)\\ &\le \sum_{i=1}^m\sum_{z\in A} w(z)|B(z)\cap X_i|=\sum_{z\in A} w(z) \sum_{i=1}^m |B(z)\cap X_i|\\ &=\sum_{z\in A} w(z)\sum_{x\in B(z)} |\{i\in\{1,\ldots,m\}:x\in X_i\}|\le \sum_{z\in A} w(z)\sum_{x\in B(z)} \frac{m}{c(\Delta+1)^rk}\\ &=\sum_{z\in A} w(z)|B(z)| \frac{m}{c(\Delta+1)^rk}\le \sum_{z\in A} w(z)\frac{m}{k}=\frac{m}{k}w(A), \end{align*} which is a contradiction. \end{proof} Next, let us derive a lemma on admissibility for $(\le\!r)$-distance determined problems. \begin{lemma}\label{lemma-admis} For a positive integer $r$, let $\vec{G}$ be an orientation of a graph $G$ representing $(\le\!r)$-distances. For a set $X\subseteq V(G)$, let $Y_{\vec{G},r}(X)$ be the set of vertices $y$ such that $\vec{G}$ contains a walk from $y$ to $X$ of length at most $r$ directed away from $y$. For any $(\le\!r)$-distance determined problem, a set $B\subseteq V(G)\setminus Y_{\vec{G},r}(X)$ is admissible in $G$ if and only if it is admissible in $G-X$. \end{lemma} \begin{proof} Since the problem is $(\le\!r)$-distance determined, it suffices to show that $\min(r,d_G(u,v))=\min(r,d_{G-X}(u,v))$ holds for all $u,v\in B$. Clearly, $d_G(u,v)\le d_{G-X}(u,v)$, and thus it suffices to show that if the distance between $u$ and $v$ in $G$ is $b\le r$, then $G-X$ contains a walk of length $b$ between $u$ and $v$. Since $\vec{G}$ represents $(\le\!r)$-distances, there exists an inward-directed walk $P$ of length $b$ between $u$ and $v$ in $\vec{G}$. Since $u,v\not\in Y_{\vec{G},r}(X)$, we have $V(P)\cap X=\emptyset$, and thus $P$ is also a walk of length $b$ between $u$ and $v$ in $G-X$. \end{proof} We are now ready to prove the main result. \begin{proof}[Proof of Theorem~\ref{thm-main}] Let $d'$ be the function from Lemma~\ref{lem:orient} for the class $\mathcal{G}$. Let us define $h(r,c)=c(d'(r)+1)^r$. The algorithm is as follows. Since $\mathcal{G}$ is $q$-efficiently fractionally treewidth-$f$-fragile, in time $q(|V(G)|)$ we can find sets $X_1, \ldots, X_m\subseteq V(G)$ such that each vertex belongs to at most $\frac{m}{h(r,c)k}$ of them, and for each $i$, a tree decomposition of $G-X_i$ of width at most $f(h(r,c)k,|V(G)|)$. Clearly, $m\le q(|V(G)|)$. Next, using Lemma~\ref{lem:orient}, we find an orientation $\vec{G}$ of $G$ that represents $(\le\!r)$-distances. Let $Y_{\vec{G},r}$ be defined as in the statement of Lemma~\ref{lemma-admis}. Since the problem is $(g,p)$-tw-tractable, for each $i$ we can in time $p(|V(G)|)\cdot g(f(h(r,c)k,|V(G)|))$ find a maximum-weight subset $A_i$ of $V(G)\setminus Y_{\vec{G},r}(X_i)$ admissible in $G-X_i$. By Lemma~\ref{lemma-admis}, each of these sets is admissible in $G$; the algorithm returns the heaviest of the sets $A_1$, \ldots, $A_m$. As the returned set is admissible in $G$, it suffices to argue about its weight. Let $A$ be a heaviest admissible set in $G$. Let $\{R_v\subseteq A:v\in A\}$ be the system of subsets from the definition of $c$-near-monotonicity, and let $D_{\vec{G},r}$ be defined as in the statement of Lemma~\ref{lemma-avoid}. By the definition of $c$-near-monotonicity, for each $i$ the set $A\setminus D_{\vec{G},r}(X_i)$ is admissible in $G$.
Since $v\in R_v$ for each $v\in A$, we have $A\cap Y_{\vec{G},r}(X_i)\subseteq D_{\vec{G},r}(X_i)$, and thus $A\setminus D_{\vec{G},r}(X_i)$ is disjoint from $Y_{\vec{G},r}(X_i)$; hence, by Lemma~\ref{lemma-admis}, $A\setminus D_{\vec{G},r}(X_i)$ is also admissible in $G-X_i$, and by the choice of $A_i$, we have $w(A_i)\ge w(A\setminus D_{\vec{G},r}(X_i))$. By Lemma~\ref{lemma-avoid}, we conclude that for at least one $i$, we have $w(A_i)\ge (1-1/k)w(A)$, as required. \end{proof}
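In summary, the approximation scheme is short enough to state as code. The following Python sketch mirrors the algorithm from the proof of Theorem~\ref{thm-main}; the names \texttt{tw\_fragile\_cover}, \texttt{distance\_orientation}, and \texttt{solve\_bounded\_tw} are hypothetical stand-ins for the subroutines guaranteed by fractional treewidth-fragility, Lemma~\ref{lem:orient}, and tw-tractability, respectively, and are not implemented here.

\begin{verbatim}
def approximation_scheme(G, w, k, r, h_rc):
    # G: graph with vertex set G.nodes (networkx-style); w: dict of
    # non-negative vertex weights; h_rc: the value h(r, c) from the proof.
    Xs, decompositions = tw_fragile_cover(G, h_rc * k)  # assumed subroutine
    out = distance_orientation(G, r)  # assumed; represents (<= r)-distances,
                                      # returned as dict: vertex -> out-set
    best, best_weight = set(), -1
    for X, T in zip(Xs, decompositions):
        # Y(X): vertices with a walk of length <= r directed away from them
        # and ending in X (the admissibility lemma).
        Y = {v for v in G.nodes if reaches(out, v, set(X), r)}
        allowed = set(G.nodes) - set(X) - Y
        A = solve_bounded_tw(G, X, T, allowed, w)       # assumed exact solver
        weight = sum(w[v] for v in A)
        if weight > best_weight:
            best, best_weight = A, weight
    return best  # admissible in G, weight >= (1 - 1/k) * optimum

def reaches(out, v, X, r):
    # Breadth-first search along out-edges for at most r steps.
    seen, frontier = {v}, {v}
    for _ in range(r):
        if frontier & X:
            return True
        frontier = {z for u in frontier for z in out[u]} - seen
        seen |= frontier
    return bool(frontier & X)
\end{verbatim}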
{ "timestamp": "2021-05-06T02:05:33", "yymm": "2105", "arxiv_id": "2105.01780", "language": "en", "url": "https://arxiv.org/abs/2105.01780" }
\section{Merino-Welsh inequalities} \label{Intro} There are many variations on the Merino-Welsh conjecture. They are of the form ``subject to conditions, there is an inequality involving the evaluations $T(M;2,0)$, $T(M;0,2)$, and $T(M;1,1)$ of the Tutte polynomial of the matroid $M$''. We shall consider the following version. \begin{conj}\label{MW} If a matroid $M$ has no loops or isthmuses, then \[ T(M;2,0)+ T(M;0,2) \geq 2 T(M;1,1), \eqno(1.1)\] with equality if and only if $M$ is a direct sum of copies of $U_{1,2}$, the rank-$1$ matroid consisting of two parallel elements. \end{conj} \noindent The focus of this paper is on finding sufficient conditions on a matroid so that inequality (1.1) holds. As background, we note that $T(M;2,0)$ is the number of no-broken-circuit or nbc sets and $T(M;1,1)$ the number of bases in $M$. Hence Conjecture \ref{MW} asserts that the average number of nbc sets in $M$ and its dual is at least the average number of bases in $M$ and its dual. A counting or algebraic-geometry proof of Conjecture \ref{MW} is quite possible. From the counting interpretation, $T(M;2,0)$ and $T(M;0,2)$ are positive integers. We shall tacitly use this fact, as well as the fact due to Tutte that the coefficients of his polynomial are non-negative. \section{A density condition} \label{sect:density} The first result is that a sufficiently dense matroid satisfies Conjecture \ref{MW}. \begin{thm} \label{density} Let $M$ be an isthmus-free matroid having size $n$ and rank $r$. If $r \geq 5$ and \[ n \geq \lceil r(\log r + \log\log r + \log\log\log r) \rceil, \] with the logarithms in base $2$, then $T(M;0,2) \geq 2T(M;1,1)$. In particular, inequality (1.1) holds for $M$. \end{thm} The theorem is true for the simple reason that exponential growth beats polynomial growth in the long run. \begin{prop}\label{inq1} If $r \geq 5$ and $n \geq \lceil r(\log r + \log\log r + \log\log\log r) \rceil,$ then $2^{n-r} \geq 2 \binom {n}{r}$. \end{prop} Consider the function \[ f(n,r) = 2^{n-r} - 2 \binom {n}{r} = 2^{n-r} - \frac {2n(n-1)\cdots (n-r+1)}{r!}, \] where, in spite of the notation, $n$ could be a real number, although $r$ is always a positive integer. Suppose $n > 2r$. Then its forward difference in the variable $n$ satisfies \begin{eqnarray*} f(n+1,r) - f(n,r) &=& [2^{n+1-r} - 2^{n-r}] - 2\left[\binom {n+1}{r} - \binom {n}{r}\right] \\ &=& 2^{n-r} - 2\binom {n}{r-1} \\ &> & 2^{n-r} - 2 \binom {n}{r} = f(n,r). \end{eqnarray*} This yields the following lemma. \begin{lemma} If $n > 2r$, then $ f(n+1,r) > 2f(n,r). $ In particular, if $f(n,r) \geq 0,$ then $f(n+1,r) > 0$. \end{lemma} By the lemma, to prove Proposition \ref{inq1}, it suffices to find a number $n_r$ such that $2r < n_r \leq \lceil r(\log r + \log\log r + \log\log\log r) \rceil$ and $2^{n_r-r} \geq 2 \binom {n_r}{r}$. For small values of $r$, this can be done by a hand calculation. The results are shown in the following table: \begin{center} \begin{tabular}{l*{16}{c}} \hline $r$ \rule{0pt}{15pt} & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ & $13$ & $14$ & $15$ & $16$ \\ \hline \hline $n_r$ \rule{0pt}{15pt} & $4$ & $8$ & $12$ & $16$ & $20$ & $25$ & $29$ & $33$ & $37$ & $42$ & $46$ & $50$ & $55$ & $59$ & $64$ & $68$ \\ \hline \end{tabular} \end{center} This allows us to assume $r \geq 17$. \footnote{There is nothing special about $17$; it just happens that $16$ entries will fit comfortably in a one-line table.
Any sufficiently large number, like $8$, will work.} We begin with the fact that for $x \geq 17$, \[ (\log x)(\log\log x) \geq \log x+ \log\log x + \log\log\log x + 1. \] This is an easy exercise in calculus. \footnote{To remind myself, with my memory failing in old age, let $f(x)= (\log x)(\log\log x)$ and $g(x) = \log x + \log\log x+\log\log\log x + 1$. Then $f(17) \geq g(17) > 0$ and when we differentiate both functions, it is evident that \[ \frac {\log\log x}{x} + \frac {1}{x} = f'(x) \geq g'(x) = \frac {1}{x} + \frac {1}{x \log x} + \frac {1}{x (\log x) (\log\log x)}> 0 \] when $x \geq 17$. The added $1$ in $g(x)$ is there so that we can round up.} By the lemma, we may assume that $n = \lceil r(\log r + \log\log r + \log\log\log r) \rceil$. Thus, \begin{align*} & 2^{n-r} \geq \frac {2^{r(\log r + \log\log r + \log\log\log r)}}{2^r} \\ &= \frac{[r(\log r)(\log\log r)]^{r}}{2^r} \geq \frac {\lceil r(\log r + \log\log r + \log\log\log r) \rceil^r}{2^r} = \frac {n^r}{2^r} \geq 2 \binom {n}{r}. \end{align*} In the last step, we use $2^{r+1} \leq r!$ when $r \geq 5$. This completes the proof of Proposition \ref{inq1}. Returning to the proof of Theorem \ref{density}, observe that if a matroid $M$ has no isthmuses, then $T(M;x,y) = y^{n-r} + \cdots,$ where the terms of lower degree in $y$ have non-negative coefficients. In addition, $T(M;1,1)$ is the number of bases and is bounded above by $\binom {n}{r}$. Hence, by Proposition \ref{inq1}, under the hypothesis of the theorem, \[ T(M;0,2) \geq 2^{n-r} \geq 2\binom {n}{r} \geq 2T(M;1,1). \] We remark that Theorem \ref{density} says that for a fixed rank $r$, there are only finitely many matroids to check to show that Conjecture \ref{MW} holds at that rank. \section{Cocircuits}\label{sect:cocircuits} A \emph{cocircuit} is the complement of a copoint, that is, of a flat of rank $r-1$. A matroid is isthmus-free if and only if it has no cocircuits of size $1$; thus, one can regard the condition that there are no isthmuses as a condition on the minimum size of a cocircuit. The following theorem says that Conjecture \ref{MW} holds if one assumes that all cocircuits are sufficiently large. \begin{thm} \label{cocircuits} Suppose that every cocircuit in a rank-$r$ size-$n$ matroid $M$ has at least $r + 1$ elements. Then $M$ satisfies inequality (1.1). \end{thm} We begin with an easy binomial identity. \begin{lemma} \begin{eqnarray*} \sum_{j=0}^{r-1} \binom {r-1+j}{j}\, 2^{r-1-j} &=& \binom {2r-2}{r-1} + \binom {2r-3}{r-2}\, 2 + \cdots + \binom {r}{1} 2^{r-2} + 2^{r-1} \\ &=& 2^{2r-2}. \end{eqnarray*} \end{lemma} \noindent For example, \[ \binom {6}{3}2^0 + \binom {5}{2}2^1 + \binom {4}{1}2^2 + \binom {3}{0}2^3 = 20 + 10\cdot 2 + 4 \cdot 2^2 + 2^3 = 64 = 2^6. \] Now observe that if every cocircuit in the matroid $M$ has size at least $m$, then \[ T(M;x,y) = y^{n-r} + \binom {r}{1}y^{n-r-1} + \cdots +\binom {r+m-3}{m-2}y^{n-r-m+2} + p(x,y), \] where $p(x,y)$ is a polynomial with non-negative coefficients of degree at most $n-r-m+1$ in $y$. In particular, if $m = r+1,$ then \[ T(M;0,2) \geq 2^{n-2r+1}\left( \binom {2r-2}{r-1} + \cdots + \binom {r}{1} 2^{r-2} + 2^{r-1} \right) = 2^{n-1}. \] The hypothesis on cocircuits implies that $n \geq 2r$ and under this condition, $2^{n-1} \geq 2 \binom {n}{r}$. We conclude that inequality (1.1) holds. Theorem \ref{cocircuits} is not the sharpest possible. It is possible to replace the lower bound of $r+1$ on the cocircuit size by a bound of the form $r -c\sqrt{r}$ (with $c$ a constant) for sufficiently large $r$ using the normal approximation to the binomial distribution.
It is also possible to combine the arguments to obtain sufficient conditions involving density and size of cocircuits for inequality (1.1) to hold, but neither refinement would prove the full conjecture. We end with one further result of this type. \begin{prop} Let $M$ be a rank-$r$ size-$n$ isthmus-free matroid with at least $r-1$ loops. Then inequality (1.1) holds for $M$. \end{prop} \begin{proof} Loops are never in a basis, so that $T(M;1,1) \leq \binom {n-r+1}{r}$. Hence, \[ T(M;0,2) \geq 2^{n-r} \geq 2\binom {n-r+1}{r} \geq 2T(M;1,1) \] and inequality (1.1) follows. \end{proof}
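As a closing remark, the base cases behind the table in Section~\ref{sect:density} are easily machine-checked. The following Python snippet is a verification aid only, not part of the proofs: it confirms that each tabulated $n_r$ satisfies $2^{n_r-r} \geq 2\binom{n_r}{r}$ and, for $r \geq 5$, the bound of Proposition~\ref{inq1}.

\begin{verbatim}
from math import comb, log2, ceil

n_r = {1: 4, 2: 8, 3: 12, 4: 16, 5: 20, 6: 25, 7: 29, 8: 33,
       9: 37, 10: 42, 11: 46, 12: 50, 13: 55, 14: 59, 15: 64, 16: 68}

for r, n in n_r.items():
    assert 2 ** (n - r) >= 2 * comb(n, r)  # base-case inequality
    if r >= 5:                             # bound of the Proposition
        bound = r * (log2(r) + log2(log2(r)) + log2(log2(log2(r))))
        assert n <= ceil(bound)
print("table verified")
\end{verbatim}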
{ "timestamp": "2021-05-21T02:06:29", "yymm": "2105", "arxiv_id": "2105.01825", "language": "en", "url": "https://arxiv.org/abs/2105.01825" }
\section{Introduction} \label{sec:intro} Real-time volumetric capture of human-centric scenarios is key to a wide range of applications, including telecommunication, education, and entertainment. The underlying technique, volumetric capture, is a challenging and long-standing problem in both computer vision and computer graphics due to the complex shapes, fast motions, and changing topologies (e.g., human-object manipulations and multi-person interactions) that need to be faithfully reconstructed. Although high-end volumetric capture systems \cite{bradley2008markerless, gall2009motion, liu2009point, brox2009combined, liu2011markerless, pons2017clothcap} based on dense camera rigs (up to 100 cameras \cite{collet2015high}) and custom-designed lighting conditions \cite{VlasicPBDPRM09,guo2019relightables} can achieve high-quality reconstruction, they all suffer from complicated system setups and are limited to professional studio usage. In contrast, light-weight volumetric/performance capture systems are more practical and attractive. Given a pre-scanned template, several methods~\cite{li2009robust, zollhofer2014real, guo2015robust} track dense surface deformations, even from single-view RGB input~\cite{habermann2019livecap,habermann2020deepcap}. However, the prerequisite of a fixed-topology template restricts their applications for general volumetric capture. In 2015, DynamicFusion~\cite{newcombe2015dynamicfusion} proposed the first template-free and single-view dynamic 3D reconstruction system. The follow-up works~\cite{yu2017BodyFusion, yu2018doublefusion, Xu2020UnstructuredFusion, su2020robustfusion} further improve the reconstruction quality for human performance capture by incorporating semantic body priors. However, it remains challenging for them to handle large topological changes such as dressing or taking off clothes. Recently, a line of research \cite{natsume2019siclope,saito2019pifu,saito2020pifuhd,li2020monoport} leverages deep implicit functions for textured 3D human reconstruction from only a single RGB image. However, these methods either run offline~\cite{saito2019pifu,saito2020pifuhd} or produce over-smoothed, temporally discontinuous results~\cite{li2020monoport}. State-of-the-art real-time volumetric capture systems are volumetric fusion methods like Fusion4D~\cite{dou2016fusion4d} and Motion2Fusion~\cite{Motion2Fusion}, but both rely on custom-designed high-quality depth sensors (up to 120\,fps and 1k resolution) and multiple (up to 9) high-end GPUs, which is infeasible for consumer usage. In this paper, we propose Function4D, a volumetric capture system using very sparse (as few as 3) consumer RGBD sensors. Compared with existing systems, our system is able to handle various challenging scenarios, including human-object manipulations, dressing or taking off clothes, fast motions, and even multi-person interactions, as shown in Fig.~\ref{fig_teaser}. Our key observations are as follows. To generate complete and temporally consistent results, current volumetric fusion methods have to fuse as many temporal depth observations as possible, which results in a heavy dependency on accurate, long-term non-rigid tracking; this is especially challenging under severe topology changes and large occlusions. In contrast, deep implicit functions are good at completing surfaces, but they cannot recover detailed and temporally continuous results due to insufficient usage of depth information and the severe noise of consumer RGBD sensors.
To overcome all the limitations above, we propose a novel volumetric capture framework that organically combines volumetric fusion with deep implicit functions. By introducing dynamic sliding fusion, we re-design the volumetric fusion pipeline to restrict tracking and fusion to a sliding window, and thereby obtain noise-eliminated, topology-consistent, and temporally continuous volumetric fusion results. Based on the sliding fusion results, we propose detail-preserving deep implicit functions for final surface reconstruction to eliminate the heavy dependency on long-term tracking. Moreover, by encoding truncated projective SDF (PSDF) values explicitly and incorporating an attention mechanism into the multi-view feature aggregation stage, our networks not only achieve detailed reconstruction results but also run orders of magnitude faster than existing methods. Our contributions can be summarized as follows: \begin{itemize} \item The first real-time volumetric capture system that combines volumetric fusion with deep implicit functions using very sparse consumer RGBD sensors. \item Dynamic Sliding Fusion for generating noise-eliminated and topology-consistent volumetric fusion results. \item Detail-preserving Implicit Functions specifically designed to fully utilize RGBD information and generate detailed reconstruction results. \item The training and evaluation dataset, which contains 500 high-resolution scans of various poses and clothes, will be made publicly available to stimulate future research. \end{itemize} \section{Related Work} \label{sec:related} In the following, we focus on 3D human volumetric/performance capture and classify existing methods into four categories according to their underlying techniques. \noindent\textbf{Volumetric capture from multi-view stereo. } Multi-view volumetric capture is an active research area in the computer vision and graphics community. Previous works use multi-view images for human model reconstruction~\cite{Kanade97,StarckCGA07,liu2009point}. Shape cues like silhouettes, stereo, shading, and cloth priors have been integrated to improve reconstruction/rendering performance~\cite{StarckCGA07,liu2009point,WuShadingHuman,WaschbuschWCSG05,VlasicPBDPRM09,bradley2008markerless,Mustafa16,pons2017clothcap,Wu_2020_CVPR}. State-of-the-art methods build extremely sophisticated systems, in which up to 100 cameras~\cite{collet2015high} and even custom-designed gradient lighting~\cite{guo2019relightables} are employed for high-quality volumetric capture. In particular, the methods by~\cite{collet2015high} and~\cite{guo2019relightables} first perform multi-view stereo for point cloud generation, followed by mesh construction, simplification, tracking, and post-processing steps such as UV mapping. Although the results are compelling, the reliance on well-controlled multi-camera studios and a huge amount of computational resources prohibits these systems from being used in living spaces. \noindent\textbf{Template-based Performance Capture. } For performance capture, some of the previous works leverage pre-scanned templates and exploit multi-view geometry to track the motion of the templates. For instance, the methods in~\cite{Vlasic08,gall2009motion,brox2009combined} adopted a template with an embedded skeleton driven by multi-view silhouettes and temporal feature constraints. These methods were later extended to handle multiple interacting characters~\cite{liu2011markerless,LiuPAMI13}.
Besides templates with an embedded skeleton, some works adopted non-rigid deformation for template motion tracking. Li \textit{et~al.}~\cite{li2009robust} utilized the embedded deformation graph of~\cite{Sumner2007embedded} to parameterize the non-rigid deformations of the pre-scanned template. Guo \textit{et~al.}~\cite{guo2015robust} adopted an $\ell_0$-norm constraint to generate articulated motions for bodies and faces without explicitly constructing a skeleton. Zollh{\"o}fer \textit{et~al.}~\cite{zollhofer2014real} took advantage of the GPU to parallelize the non-rigid registration algorithm and achieved real-time performance for general non-rigid tracking. Recently, capturing dense 3D human body deformation with coarse-to-fine registration from a single RGB camera has been enabled~\cite{xu2018monoperfcap} and improved for real-time performance~\cite{habermann2019livecap}. DeepCap~\cite{habermann2020deepcap} introduced a deep learning method that jointly infers the articulated and non-rigid 3D deformation parameters in a single feed-forward pass. Although template-based approaches require less input than multi-view stereo methods, they are incapable of handling topological changes due to the prerequisite of a template with fixed topology. \noindent\textbf{Volumetric Fusion for Dynamic 3D Reconstruction. } To get rid of template priors and realize convenient deployment, researchers turned to single or sparse depth sensors for 3D reconstruction. In 2015, DynamicFusion~\cite{newcombe2015dynamicfusion} proposed the first template-free, single-view, real-time dynamic 3D reconstruction system, which integrates multiple frames into a canonical model to reconstruct a complete surface model. However, it only handles controlled and relatively slow motions due to the challenges of real-time non-rigid tracking. In order to improve the robustness of DynamicFusion, the following works incorporated Killing/Sobolev constraints~\cite{slavcheva2017killingfusion, Slavcheva_2018_CVPR}, articulated priors~\cite{chao2018ArticulatedFusion}, skeleton tracking~\cite{yu2017BodyFusion}, parametric body models~\cite{yu2018doublefusion}, sparse inertial measurements~\cite{Zheng2018HybridFusion}, data-driven priors~\cite{su2020robustfusion}, or learned correspondences~\cite{bozic2020deepdeform} into the non-rigid fusion pipeline. However, all of these methods are prone to tracking failure in invisible areas, which is an inherent drawback of single-view systems. To overcome this limitation, Fusion4D~\cite{dou2016fusion4d} and Motion2Fusion~\cite{Motion2Fusion} focused on real-time multi-view setups using high-end custom-designed sensors, with the notion of key volume updating and learning-based surface matching. Even though the pipelines were carefully designed in~\cite{dou2016fusion4d} and~\cite{Motion2Fusion}, they still suffer from incomplete and noisy reconstructions when severe topological changes occur, especially under very sparse system setups. \noindent\textbf{Learning-based 3D Human Reconstruction. } Fueled by the rapid developments in neural 3D representations (e.g., \cite{park2019deepsdf,Peng2020ECCV,DeepLocalShapes,Occupancy_Networks}), many data-driven methods for 3D human reconstruction have been proposed in recent years. Methods in~\cite{zhu2019hmd,alldieck2019tex2shape} proposed to deform a parametric body model to fit the image observations, including keypoints, silhouettes, and shading.
DeepHuman~\cite{zheng2019deephuman} combined the parametric body model with a coarse-scale volumetric reconstruction network to reconstruct 3D human models from a single RGB image. Some methods infer human shapes on 2D image domains using multi-view silhouettes~\cite{natsume2019siclope} or front-back depth pairs~\cite{gabeur2019moulding,wang2020normalgan}. PIFu~\cite{saito2019pifu} proposed to regress an implicit function using pixel-aligned image features. Unlike voxel-based methods, PIFu is able to reconstruct high-resolution results thanks to the compactness of the implicit representations. PIFuHD~\cite{saito2020pifuhd} extended PIFu to capture more local details. However, both PIFu~\cite{saito2019pifu} and PIFuHD~\cite{saito2020pifuhd} fail to reconstruct plausible models in cases of challenging poses and self-occlusions. PaMIR~\cite{zheng2020pamir} resolved this challenge by using the SMPL model as a prior but suffers from run-time inefficiency since it requires a post-processing optimization step. IFNet~\cite{chibane20ifnet} and IPNet~\cite{bhatnagar2020ipnet} can recover impressive 3D humans from partial point clouds, but their dependency on multi-scale 3D convolutions and parametric body models blocks real-time reconstruction. \section{Overview} \label{sec:overview} As shown in Fig.~\ref{fig_overview}, the proposed volumetric capture pipeline consists of two steps: Dynamic Sliding Fusion and Deep Implicit Surface Reconstruction. Given a group of synchronized multi-view RGBD inputs, we first perform dynamic sliding fusion, fusing the neighboring frames of the current frame to generate noise-eliminated and temporally continuous fusion results. After that, we re-render multi-view RGBD images from the sliding fusion results in the original viewpoints. Finally, in the deep implicit surface reconstruction step, we propose detail-preserving implicit functions (which consist of multi-view image encoders, a feature aggregation module, and an SDF/RGB decoder) for generating detailed and complete reconstruction results. \section{Dynamic Sliding Fusion (DSF)} Different from previous volumetric fusion methods, the proposed DSF method aims at augmenting the current observations rather than completing surfaces. We therefore re-design the fusion pipeline to obtain topologically consistent and noise-eliminated results for the current observations. The proposed DSF mainly contains three steps: topology-aware node graph initialization, non-rigid surface tracking, and observation-consistent truncated SDF (TSDF) fusion. To eliminate the heavy dependency on long-term tracking, instead of fusing all frames as in previous methods, we only perform fusion in a sliding window in DSF. Specifically, we allow a one-frame delay for the reconstruction pipeline and fuse the current frame (indexed by $t$) only with its preceding frame ($t-1$) as well as its succeeding frame ($t+1$) to minimize the topological changes and tracking error accumulation. Note that we only need to perform non-rigid surface tracking for the succeeding frame since the deformation between the current frame and the preceding frame has already been tracked. Regarding the TSDF fusion stage, we propose observation-consistent TSDF fusion to fuse multi-view observations of frame $t+1$ into frame $t$. \subsection{Topology-aware Node Graph Initialization} \label{sec:DSF_node_graph} Previous fusion methods initialize the embedded deformation graph (ED graph)~\cite{Sumner2007embedded} in the canonical frame~\cite{newcombe2015dynamicfusion}.
However, such an ED graph cannot adequately describe the topological changes in live frames. Different from previous methods, we have to initialize the ED graph exactly for the current frame to guarantee that the topology of the node graph is consistent with the current observations. However, it is inefficient to initialize the node graphs from scratch for every frame because of the complexity of node selection, graph connection, and volume KNN-field calculation. To overcome this limitation, we propose topology-aware node graph initialization, which not only leverages the node graph of the previous frame for fast initialization but also generates a topologically consistent node graph for the current observations. As shown in Fig.~\ref{fig_node_graph}, we first initialize the current node graph using the live node graph from the preceding frame. This is achieved by warping the node graph of the preceding frame to the current frame directly. Due to tracking errors and topological changes, the live node graph may not be well aligned with the current observations (Fig.~\ref{fig_node_graph}(c)). We therefore remove the nodes located far from the current observations based on their TSDF values in the current TSDF volume (Fig.~\ref{fig_node_graph}(d)). Specifically, if the magnitude of the normalized TSDF value corresponding to a node is greater than $\delta_t$, we regard the node as relatively far from the observations and delete it to maintain the current mesh topology. Finally, considering that there may still exist newly observed surfaces that are not covered by the cleared node graph, we further refine it based on the current observations by sampling additional nodes (Fig.~\ref{fig_node_graph}(e)) as in DynamicFusion~\cite{newcombe2015dynamicfusion} to make sure that all of the surfaces can be covered by the final node graph (Fig.~\ref{fig_node_graph}(f)). \subsection{Non-rigid Surface Tracking} \label{sec:DSF_nonrigid} For non-rigid surface registration, we follow previous methods and search for projective point-to-plane correspondences between the surface at frame $t$ and the multi-view depth observations at frame $t+1$. The non-rigid tracking energy is defined as: \begin{equation} \label{eqn:tracking_energy} E_{\mathrm{tracking}} = \lambda_{\mathrm{data}}E_{\mathrm{data}} + \lambda_{\mathrm{reg}}E_{\mathrm{reg}}, \end{equation} where $E_{\mathrm{data}}$ and $E_{\mathrm{reg}}$ are the energies of the data and regularization terms, respectively. The data term measures the fitting error between the deformed surface and the depth observations, while the regularization term enforces local as-rigid-as-possible surface deformations. A Gauss-Newton solver with the Preconditioned Conjugate Gradient (PCG) algorithm is used to solve this non-linear optimization problem efficiently on the GPU. Please refer to~\cite{newcombe2015dynamicfusion} for more details. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{./eval_fusion.jpg} \caption{Evaluation of dynamic sliding fusion. From (a) to (d): multi-view RGB references and depth masks; multi-view depth input rendered from a side viewpoint; results without dynamic sliding fusion; and results with dynamic sliding fusion. } \label{fig_eval_fusion} \end{figure} \subsection{Observation-consistent TSDF Fusion} \label{sec:DSF_fusion} After non-rigid surface tracking, we warp the volume of frame $t$ to the depth observations at frame $t+1$ for TSDF fusion.
Since we are focusing on augmenting the current observations rather than completing surfaces, we propose an aggressive TSDF fusion strategy to eliminate the impact of tracking errors and severe topological changes. Specifically, in our fusion method, a voxel in the volume of frame $t$ will update its TSDF value if and only if: i) the tracking error corresponding to this voxel is lower than a threshold $\delta_e$, and ii) there exist valid voxels (voxels that lie in the truncated band of the current observations) located around this voxel (the search radius is set to $3$ in our implementation). Calculating the tracking error for a specific voxel is not straightforward since non-rigid tracking is established between the reference surface and the depth maps~\cite{Motion2Fusion}. We therefore first calculate the tracking error for each node on the ED graph and then obtain the tracking error for each voxel by interpolation. Suppose $\mathcal{C}=\{(v_i,p_i)\}$ is the correspondence set after the last iteration of non-rigid tracking, where $v_i$ is the $i$th vertex on the reference surface and $p_i$ is the corresponding point of $v_i$ on the live depth inputs. The tracking error corresponding to the $j$th node can be calculated as: \begin{equation} \label{eqn:node_tracking_error} \mathbf{e}(n_j) = \frac{\sum_{v_i\in\mathcal{C}_j}\mathbf{r}(v_i',p_i)}{\sum_{v_i\in\mathcal{C}_j}\mathbf{w}(v_i,n_j) + \epsilon}, \end{equation} where $\mathbf{r}(v_i',p_i) = \|pn_i^T\cdot(v_i'-p_i)\|_2^2$ is the residual of $v_i$ after non-rigid tracking, $v_i'$ the warped position of $v_i$, $pn_i$ the surface normal of $p_i$, $\mathcal{C}_j$ a subset of $\mathcal{C}$ which includes all the reference vertices controlled by node $j$, and $\mathbf{w}(v_i,n_j)=\exp(-\|v_i-n_j\|_2^2/(2d^2))$ the blending weight of node $j$ on $v_i$, in which $d$ is the influence radius of $n_j$ and $\epsilon=10^{-6}$ is used to avoid division by zero. For a voxel $x_k$, its tracking error is then interpolated using its $K$ nearest neighbors on the node graph, $\mathcal{N}(x_k)$ (where $K=4$), as: \begin{equation} \label{eqn:voxel_tracking_error} \mathbf{e}(x_k) = \sum_{j\in\mathcal{N}(x_k)}\mathbf{w}(x_k,n_j)\cdot\mathbf{e}(n_j). \end{equation} \section{Deep Implicit Surface Reconstruction} \label{sec_DISR} After dynamic sliding fusion, we obtain noise-eliminated surfaces. However, the surfaces are by no means complete due to the very sparse inputs and occlusions. The goal of the deep implicit surface reconstruction step is to generate complete and detailed surface reconstruction results using deep implicit functions. Since we have already fused a 3D TSDF volume in dynamic sliding fusion, a straightforward idea is to use a 3D convolution-based encoder-decoder network to ``inpaint'' the volume, and the methods in~\cite{chibane20ifnet,bhatnagar2020ipnet} have indeed achieved complete 3D surface reconstructions with multi-scale 3D convolution networks. However, the dependency on inefficient 3D convolution limits their applications in real-time systems, and the huge memory consumption restricts them from generating high-resolution results. In contrast, real-time implicit surface reconstruction can be achieved using 2D pixel-aligned local features combined with positional encoding, as shown in~\cite{li2020monoport}. However, that method was designed for RGB-only input and can only generate over-smoothed results.
Finally, regarding the RGBD-based implicit functions proposed in~\cite{li2020portrait}, simply adding depth as an additional input channel still cannot preserve the geometric details of the depth inputs. To resolve the limitations above, we propose a new deep implicit surface reconstruction method that is specifically designed for RGBD input. The implicit surface reconstruction contains two steps: First, we re-render the multi-view RGBD images from the fused surface after dynamic sliding fusion. Then, given multi-view RGBD inputs, we propose detail-preserving implicit functions to reconstruct a complete surface with texture for the current frame. \subsection{Multi-view RGBD Re-rendering} In this step, we re-render multi-view RGBD images from the fused surfaces using the input camera viewpoints. The re-rendered RGBD images contain much less noise than the original inputs thanks to the dynamic sliding fusion step. Note that another benefit of multi-view RGBD re-rendering is that we can manually fix the perspective projection parameters for all the rendered RGBD images to make sure they are consistent with the projection parameters that were used for rendering the training dataset. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{./networks.jpg} \caption{Network structures of GeoNet and ColorNet. } \label{fig_network_structures} \end{figure} \subsection{Detail-preserving Implicit Functions} We propose two networks, \textit{GeoNet} and \textit{ColorNet}, for inferring detailed and complete geometry with color from multi-view RGBD images. As shown in Fig.~\ref{fig_network_structures}, GeoNet and ColorNet share similar network architectures. Different from~\cite{saito2019pifu,li2020monoport}, we explicitly calculate the truncated PSDF feature in \textit{GeoNet} to preserve the geometric details of the depth maps. Moreover, we use a multi-head transformer network for multi-view feature aggregation in \textit{ColorNet} to generate more plausible color inference results. Empirically, we found that using only depth images is sufficient for training \textit{GeoNet}, so we omit the RGB information from geometry reconstruction for efficiency. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{./illu_psdf.jpg} \caption{Illustration of the truncated PSDF feature. The green line represents the depth input. Note that we visualize the absolute values of the PSDF here for simplicity; the darker the grid cell, the larger the absolute PSDF value. } \label{fig_illu_trunc_PSDF} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{./eval_psdf.jpg} \caption{Evaluation of the truncated PSDF. (a) is the single-view depth input; (b), (c), and (d) are results generated without PSDF, with PSDF, and with truncated PSDF, respectively. } \label{fig_eval_trunc_PSDF} \end{figure} \subsubsection{Truncated PSDF Feature} The feature used for decoding occupancy values in pixel-aligned implicit functions can be decomposed into a 2D image feature and a positional encoding (the $z$ value in~\cite{saito2019pifu} or the one-hot mapping in~\cite{li2020monoport}). The previous method~\cite{li2020portrait} augments the 2D image feature by taking RGBD images as input. Although this can successfully guide the network to resolve the $z$-ambiguity of RGB-only input, it does not preserve the geometric details of the depth inputs.
This is because the variation of the geometric details on the depth inputs is too subtle (compared with the global range of depth values) for 2D convolutions to ``sense''. To fully utilize the depth information, we propose to use truncated PSDF values as an additional feature dimension. The truncated PSDF value is calculated by: \begin{equation} \label{eqn:trunc_psdf} \mathit{f}(q) = \mathbf{T}(q_z-\mathbf{D}(\mathbf{\Pi}(q))), \end{equation} where $q$ is the coordinate of the query point, $q_z$ its depth component, $\mathbf{\Pi}(\cdot)$ is the perspective projection function, $\mathbf{D}(\cdot)$ is a bilinear sampling function used for fetching depth values from the depth image, and $\mathbf{T}(\cdot)$ truncates the PSDF values to $[-\delta_p,\delta_p]$. As shown in Fig.~\ref{fig_illu_trunc_PSDF}, the truncated PSDF value is a strong signal corresponding to the observed depth inputs. It also eliminates the ambiguities of using global depth values. Fig.~\ref{fig_eval_trunc_PSDF}(b) demonstrates that without the PSDF values, we obtain over-smoothed results even for visible regions with detailed observations. Moreover, without truncation, the depth variations of the visible regions (the arms on top of the body) are misleadingly transferred to the invisible regions (Fig.~\ref{fig_illu_trunc_PSDF}) and finally lead to ghost artifacts (the ghost arm in the red circle of Fig.~\ref{fig_eval_trunc_PSDF}(c)). \subsubsection{Multi-view Feature Aggregation} Although PIFu~\cite{saito2019pifu} has demonstrated multi-view reconstruction results by averaging the intermediate features of the SDF/RGB decoders, we argue that the average pooling operation has limited capacity and cannot capture the difference in inference confidence between viewpoints. For color inference, the network should have the ability to ``sense'' the geometric structure and also the visibility of a query point in different viewpoints. To fulfill this goal, we propose to leverage the attention mechanism of~\cite{Vaswani_attention_nips17} for multi-view feature aggregation in \textit{ColorNet}. Compared with direct averaging, the attention mechanism has the advantage of incorporating the inter-feature correlations between different viewpoints, which is necessary for multi-view feature aggregation. Intuitively, for a query point that is visible in $view^0$ but fully occluded in other views, the feature from $view^0$ should play a leading role in the final decoding stage. As shown in Fig.~\ref{fig_eval_attn}, direct averaging of the multi-view features may lead to erroneous texturing results. In contrast, attention-based feature aggregation enables more effective feature merging and thus generates more plausible color inference results. In practice, we follow~\cite{Vaswani_attention_nips17} and use multi-head self-attention with 8 heads and 2 layers, without positional encoding. The input is the concatenation of multi-view geometry features, color features, and RGB values, as in Fig.~\ref{fig_network_structures}. We fuse the output multi-head features through a two-layer fully connected network followed by weighted summation. Moreover, we found that the attention mechanism brings limited improvement for geometry reconstruction, since visibility is already encoded by the truncated PSDF feature in \textit{GeoNet}.
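To illustrate the aggregation described above, here is a simplified PyTorch sketch of attention-based multi-view feature fusion; the feature dimensions, the scalar per-view scoring head, and the class name are illustrative assumptions and do not reproduce our optimized implementation.

\begin{verbatim}
import torch
import torch.nn as nn

class MultiViewAggregator(nn.Module):
    # Fuses per-view point features (e.g., 32-D geometry feature +
    # 32-D color feature + 3 RGB values) with multi-head
    # self-attention, then a learned weighted summation.
    def __init__(self, feat_dim=67, embed_dim=128, heads=8, layers=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, embed_dim)
        self.attn = nn.ModuleList([
            nn.MultiheadAttention(embed_dim, heads, batch_first=True)
            for _ in range(layers)])
        self.score = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 1))     # two-layer FC, per-view score

    def forward(self, view_feats):
        # view_feats: (batch, num_views, feat_dim); the views act as
        # the sequence dimension, and no positional encoding is used.
        x = self.proj(view_feats)
        for layer in self.attn:
            x, _ = layer(x, x, x)
        w = torch.softmax(self.score(x), dim=1)  # (batch, views, 1)
        return (w * x).sum(dim=1)                # fused feature
\end{verbatim}

The fused feature is then consumed by the color decoder MLP to predict the RGB value of the query point.

\section{Results} \label{sec:results} The results of our system are shown in Fig.~\ref{fig_teaser} and Fig.~\ref{fig_results}.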
Note that the temporally continuous results are reconstructed by our system under various challenging scenarios, including severe topological changes, human-object manipulations, and multi-person interactions. \subsection{Real-time Implementation} To achieve real-time performance, we implement our run-time pipeline fully on GPU. Specifically, for deep implicit surface reconstruction, we use TensorRT with mixed precision for fast inference. The remaining efficiency bottleneck of geometry inference lies in the excessive number of voxels to evaluate when querying every voxel in the volume. Since we already have multi-view depth maps as input, we can leverage the depth information directly for acceleration, without resorting to the surface localization method in~\cite{li2020monoport}. Specifically, we first use the depth images to filter out empty voxels. We then follow the octree-based reconstruction algorithm~\cite{Mescheder2019OccupancyNetwork} to perform inference on the remaining voxels in a coarse-to-fine manner, starting from a resolution of $64^3$ and proceeding to the final resolution of $256^3$ (a sketch of this scheme is given after the training details below). To further improve run-time efficiency, we simplify the network architectures as follows. For the image encoders, we follow~\cite{li2020monoport} and use HRNetV2-W18-Small-v2~\cite{SunXLW19Hrnet} as the backbone, setting its output resolution to $64\times 64$ and its channel dimension to $32$. For the SDF/color decoders, we use MLPs with skip connections and hidden layer widths of $(128, 128, 128, 128, 128)$. For dynamic sliding fusion, we set $\delta_t=0.5$, $\delta_e=0.1$ and $\delta_p=0.01\,$m for all the cases. We refer readers to~\cite{izadi2011kinectfusion,guo2017real} for real-time implementation details. For multi-view RGBD re-rendering, we render the multi-view RGBD images in a single render pass with the original color images as textures to improve efficiency. Finally, our system achieves reconstruction at $25\,$fps, with $21\,$ms for dynamic sliding fusion, $17\,$ms for deep implicit surface reconstruction (using $3$ viewpoints), and $2\,$ms for surface extraction using the Marching Cubes algorithm~\cite{lorensen1987marching}. \subsection{Network Training Details} We use 500 high-quality scans for training \textit{GeoNet} and \textit{ColorNet}, covering various poses, clothes, and human-object interactions. We rotate each scan around the yaw axis, apply random shifts, and render 60 views of the scan at an image resolution of 512$\times$512. For color image rendering, we use PRT-based rendering as in~\cite{saito2019pifu}. For depth rendering, we first render ground-truth depth maps and then synthesize the sensor noise of TOF depth sensors on top of them according to~\cite{Kinectv2_noise_2015}. Note that we render all the RGBD images using perspective projection to keep them consistent with real-world sensors. During network training, gradient-based adaptive sampling (in which we use discrete Gaussian curvature and the RGB gradient as references for query point sampling in \textit{GeoNet} and \textit{ColorNet}, respectively) is used for more effective sampling around detailed regions. We randomly select 3 views from the rendered 60 views of a subject for multi-view training.
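As referenced in the real-time implementation above, the following Python sketch outlines the depth-guided coarse-to-fine occupancy evaluation; the truncated PSDF helper follows Eq.~\ref{eqn:trunc_psdf}, while the surface-band threshold, the nearest-pixel depth fetch (in place of bilinear sampling), and the \texttt{predict} callback are simplifying assumptions.

\begin{verbatim}
import numpy as np

def truncated_psdf(pts, depth, K, delta_p=0.01):
    # pts: (N, 3) query points in the camera frame; K: 3x3 intrinsics.
    uv = (K @ (pts / pts[:, 2:3]).T).T[:, :2].astype(int)
    d = depth[uv[:, 1], uv[:, 0]]      # nearest-pixel depth fetch
    return np.clip(pts[:, 2] - d, -delta_p, delta_p)

def subdivide(voxels):
    # Split each voxel index (N, 3) into its 8 children, one level finer.
    offs = np.stack(np.meshgrid([0, 1], [0, 1], [0, 1]), -1).reshape(-1, 3)
    return (voxels[:, None, :] * 2 + offs[None]).reshape(-1, 3)

def coarse_to_fine(predict, voxels, res=64, final_res=256):
    # predict(voxels, res) -> occupancy in [0, 1] for each voxel at a
    # given resolution (e.g., a TensorRT-accelerated GeoNet query).
    while True:
        occ = predict(voxels, res)
        if res == final_res:
            return voxels, occ
        near = voxels[np.abs(occ - 0.5) < 0.4]  # keep near-surface voxels
        voxels, res = subdivide(near), res * 2
\end{verbatim}

\begin{figure}[t] \centering \includegraphics[width=\linewidth]{./comp3.jpg} \caption{Qualitative comparison. For each subject, from left to right are results of our method, Motion2Fusion~\cite{Motion2Fusion} and Multi-view PIFu~\cite{saito2019pifu}, respectively.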
} \label{fig_comp} \end{figure} \subsection{Comparisons} \noindent\textbf{Qualitative Comparison} The qualitative comparison with Motion2Fusion~\cite{Motion2Fusion} and Multi-view PIFu~\cite{saito2019pifu} is shown in Fig.~\ref{fig_comp}. Given very sparse and low-frame-rate depth inputs from consumer RGBD sensors, Motion2Fusion generates noisy results in regions with severe topological changes and under fast motions, owing to deteriorated non-rigid tracking performance. Moreover, the lack of depth information in Multi-view PIFu leads to over-smoothed results. \begin{table} \centering \resizebox{0.45\textwidth}{!}{ \begin{tabular}{c|c|c|c} \hline Method &P2S$\times 10^{-3}${$\downarrow$} & Chamfer$\times 10^{-3}${$\downarrow$} & Normal-Consis{$\uparrow$} \\ \hline\hline Multi-view PIFu~\cite{saito2019pifu}&$4.594$&$4.657$&$0.862$ \\ IPNet~\cite{bhatnagar2020ipnet} &$3.935$&$3.858$&$0.902$\\ \textbf{\textit{GeoNet}}&$\textbf{1.678}$&$\textbf{1.719}$&$\textbf{0.941}$\\ \hline \end{tabular}} \caption{Quantitative comparison of geometry reconstruction with Multi-view PIFu and IPNet. } \label{tab_comp} \end{table} \noindent\textbf{Quantitative Comparison} We compare with two state-of-the-art deep implicit surface reconstruction methods: Multi-view PIFu~\cite{saito2019pifu} (with RGB images as input) and IPNet~\cite{bhatnagar2020ipnet} (with voxelized point clouds as input). We retrain their networks using our training dataset and perform the evaluation on a testing dataset that contains $116$ high-quality scans with various poses, clothes, and human-object interactions. Tab.~\ref{tab_comp} shows the quantitative comparison results. We can see that the lack of depth information deteriorates the reconstruction accuracy of Multi-view PIFu. Moreover, even with multi-view depth images as input, the heavy dependency on SMPL initialization (which is difficult to obtain under large poses and human-object interactions) prevents IPNet from generating highly accurate results. Finally, by explicitly encoding depth observations using the truncated PSDF values, the proposed \textit{GeoNet} not only achieves accurate reconstruction results but also runs orders of magnitude faster than IPNet (which takes approximately $80$ seconds per reconstruction). For a detailed description of the comparison, please refer to the supplementary material. \subsection{Ablation Studies} \begin{table} \centering \resizebox{0.45\textwidth}{!}{ \begin{tabular}{c|c|c|c} \hline Method & P2S$\times 10^{-3}${$\downarrow$} & Chamfer$\times 10^{-3}${$\downarrow$} & Normal-Consis{$\uparrow$} \\ \hline\hline \textbf{w/o} PSDF \textbf{w.}RGBD &$2.36$ & $2.458$ & $0.916$\\ \textbf{w/o} PSDF \textbf{w.}depth-only & $2.264$ & $2.359$ & $0.918$ \\ \textbf{w. PSDF w.depth-only} &$\textbf{1.678}$ & $\textbf{1.719}$ &$\textbf{0.941}$\\ \hline \end{tabular}} \caption{Ablation study on the truncated PSDF feature.} \vspace{-15pt} \label{tab_eval_psdf} \end{table} \noindent\textbf{Dynamic Sliding Fusion} As shown in Fig.~\ref{fig_eval_fusion}(a) and (b), the depth inputs in different views are not consistent with each other due to the challenging hair motion. This leads to incomplete reconstruction (the orange circle). More importantly, without dynamic sliding fusion, the result is much noisier (the red circle). With dynamic sliding fusion, we obtain more complete and noise-eliminated reconstruction results, as shown in Fig.~\ref{fig_eval_fusion}(d). Please refer to the supplementary video for a clearer evaluation.
\noindent\textbf{Truncated PSDF Feature} The qualitative evaluation of the truncated PSDF feature is shown in Fig.~\ref{fig_eval_trunc_PSDF}. Tab.~\ref{tab_eval_psdf} also provides quantitative evaluation results for the networks with and without truncated PSDF values. We conduct two experiments, with RGBD images and depth-mask images as input, respectively. We can see that without the truncated PSDF feature, the depth-only and RGBD models produce similar results. Benefiting from the truncated PSDF feature, our GeoNet achieves much more accurate results, which demonstrates the effectiveness of our method. \begin{figure}[] \centering \includegraphics[width=0.9\linewidth]{./eval_attn2.png} \caption{Evaluation of the attention mechanism in \textit{ColorNet}. From left to right are: input RGB images, texture results with (green) and without (red) attention, respectively. } \vspace{-20pt} \label{fig_eval_attn} \end{figure} \noindent\textbf{Attention-based Feature Aggregation} In Fig.~\ref{fig_eval_attn}, we qualitatively compare color inference with and without the multi-view self-attention mechanism. Benefiting from the multi-view self-attention mechanism, the color inference results become much sharper and more plausible, especially around observation boundaries. This is because self-attention enables dynamic feature aggregation rather than simple average-based feature aggregation, which encourages the MLP-based decoder to learn how multi-view features (including geometric features and texture features) are correlated with each other in 3D space. \section{Conclusion} \label{sec:conclusion} In this paper, we proposed Function4D, a real-time volumetric capture system using very sparse consumer RGBD sensors. By proposing dynamic sliding fusion for topologically consistent volumetric fusion and detail-preserving deep implicit functions for high-quality surface reconstruction, our system achieves detailed and temporally continuous volumetric capture even under various extremely challenging scenarios. We believe that such a lightweight, high-fidelity, and real-time volumetric capture system will enable many applications, especially consumer-level holographic communication, online education, and gaming. \noindent\textbf{Limitations and Future Work} Although we can preserve the geometric details in the visible regions, generating accurate and detailed surfaces \& textures for fully occluded regions remains challenging. This is because current deep implicit functions mainly focus on per-frame independent reconstruction; extending deep implicit functions to use temporal observations may resolve this problem in the future. Moreover, specific materials such as black hair may cause missing observations for depth sensors and therefore severely deteriorate the current system; incorporating RGB information into geometry reconstruction may resolve this limitation, and we leave this as future work. \noindent \textbf{Acknowledgements} This work is supported by the National Key Research and Development Program of China No.~2018YFB2100500; the NSFC No.~61827805 and No.~61861166002; and the China Postdoctoral Science Foundation No.~2020M670340.
{ "timestamp": "2021-05-07T02:11:43", "yymm": "2105", "arxiv_id": "2105.01859", "language": "en", "url": "https://arxiv.org/abs/2105.01859" }
\section{Conclusion} This work compares COBOL and non-COBOL developers' perspectives on software defects and defect-location strategies. We observed that: (1) defects affecting COBOL belong to two major categories (Logic/Flow and Input Data), (2) COBOL and non-COBOL developers follow similar defect location strategies, and (3) adapting debugging tools to the mainframe environment could have a significant impact on COBOL developer productivity. \section{Discussion and Future Work} \noindent \textbf{Typical defects are challenging.} We observed that there is no significant difference between typical and challenging defects in COBOL. In general, COBOL developers reported Logic/Flow and Input data as the top-two major defect categories, which are both typical and challenging. This indicates that a hypothetical defect location tool addressing just these two defect types is likely to significantly improve developer productivity and reduce the time required to locate and fix defects. As indicated by a COBOL developer in a follow-up interview, the lack of tools targeting COBOL is one of the key factors contributing to Logic/Flow being a challenging defect category. On the contrary, non-COBOL developers, who enjoy a wide set of tools, reported that typical defects such as Logic/Flow or Input Data rarely pose any difficulties. We believe that if COBOL developers were given supporting tools addressing their most pressing needs, we would observe a decrease in the perceived difficulty of Logic/Flow and Input data defects. COBOL and non-COBOL developers reported that challenging defects are often not detected by testers; hence they reach production. While undesirable in any application, this is even worse for COBOL applications, as they typically support critical financial, health care, and government systems. Our respondents stated that the key reason for challenging defect propagation is related to differences between the testing and production environments' configuration and capabilities. Taking a cue from the continuous delivery movement~\cite{humble2010continuous}, we recommend that tool-smiths cost-effectively minimize the differences between COBOL development and production environments. \noindent \textbf{Defect location strategies are fairly similar.} Even though our respondents work with different software ecosystems, they reported following fairly similar defect location strategies, indicating that observations and findings made by prior studies on common patterns followed by non-COBOL developers may also apply to COBOL developers. Although confirmation requires further research, the initial results are encouraging. Analyzing the survey results, we also note the benefits that a simple code navigation tool~\cite{henley2018human} could provide to COBOL developers. First, the tool could help developers keep track of the code under analysis, as indicated in the code documentation preferences. Further, it could also be aligned with the observed top-down navigation patterns, where COBOL developers start at the highest granularity (File/Module) and iteratively move to finer granularities (Method/Paragraph). Respondents in both groups indicated the Execution-based pattern, involving running or debugging an application, as the most effective strategy for defect location. However, whereas non-COBOL developers usually have unrestricted access to the application via an IDE, the same cannot be said for COBOL developers. Mainframe time is expensive, so the possibility of debugging or re-running the application is often limited.
On the other hand, follow-up interviews revealed that the sooner developers can reproduce a defect, the faster they can resolve it. Thus, improving the state of the art of debugging in the mainframe environment would greatly benefit developers. \noindent \textbf{Emerging research directions.} For researchers, we recommend expanding the content and scope of this study. One possible research avenue is to conduct an in-situ study of COBOL developers to verify and refine actionable insights towards tools supporting mainframe development. We suspect that such an undertaking would be most fruitful as an industry collaboration, since COBOL projects are typically highly regulated and not accessible to a broader audience. Another direction is to offer COBOL courses at universities and study the student population to get end-user perspectives~\cite{ko2004six} on tool usage~\cite{pandita2018no}. While studying developers' perspectives on software defects gives a good initial overview of defects in the COBOL ecosystem, analysis of actual COBOL defects is imperative to validate our findings~\cite{devanbu2016belief}. Our data is publicly available at~\cite{projectWeb}. We hope researchers and practitioners will curate additional benchmarks to foster future research. We concede that collecting such information is challenging, since enterprise COBOL projects are typically proprietary and highly regulated. Finally, we plan to follow previous studies~\cite{morrison2016veteran,jordan2015} and work with veteran developers to identify optimal strategies to preserve their knowledge and investigate how to remove entry-level barriers by passing hard-earned expertise onto a new generation of developers. With one of the participants reporting 60 years of programming experience, we deem this research direction to be particularly exciting and viable in the context of COBOL. \section{Introduction} \label{sec:intro} Mainframe systems, although perceived by many as technologically outdated, stand at the core of the daily operations of many financial, health care, and governmental institutions. They provide business-essential features that enable rapid, reliable, and secure processing of extremely large volumes of transactions. To put the scale and importance of mainframe development into perspective, there are \textit{``over 220 billion lines of COBOL code being used in production today, and 1.5 billion lines are written every year''}~\cite{forbes-cobol}, while \textit{``on a daily basis, COBOL systems handle USD 3 trillion in commerce''}~\cite{newstack-cobol}. These staggering numbers emphasize two important facts. First, mainframe development is resilient and unlikely to be replaced any time soon due to the significant cost and risk associated with building or migrating to new transaction processing systems that realize existing, well-established functionalities. Second, as the current generation of mainframe developers retires, support for these systems is in jeopardy, as mainframe development is not part of current mainstream curricula~\cite{carr2000cobol,kizior2000does}. We suspect this problem will only worsen as a new generation of developers trained on modern programming stacks takes over the responsibility of maintaining legacy COBOL systems.
While research efforts in studying maintenance in modern PL stacks have resulted in advanced tooling~\cite{sando,martinez2019rtj,mai2020smrl,buhse2019vedebug,lockwood2019mockingbird,meng2019convul,beyer2019testcov} and guidance to improve developer productivity~\cite{fritz2016leveraging, goncales2019measuring}, investigation of the maintenance of mainframe systems is rare, with most studies focusing on language migration~\cite{sneed2001,demarco2018cobol,rodriguez2013bottom}. Given the low success rate (less than 50\%~\cite{tech-republic}) of mainframe migration efforts, we posit that mainframe developers need support and innovation in the most routine software maintenance activities, such as locating and fixing software defects~\cite{latoza2006}. As a step towards better understanding the needs of mainframe developers, \textit{in this study, we explore mainframe developers’ perspectives regarding the scope of software defects affecting mainframe systems and the strategies commonly followed to locate the defects in the legacy codebase.} We further investigate and highlight the differences in defects and defect location strategies as reported from the perspectives of COBOL and non-COBOL developers. Through this comparison, we aim to: (1) provide insights into the types and frequency of defects encountered during the development and maintenance of mainframe COBOL systems; and (2) elicit the key features of a hypothetical defect location tool targeted towards supporting mainframe developers. In particular, this work addresses the following research questions: \noindent\textbf{RQ1: What are the major categories of software defects in COBOL? Are these defect types specific to COBOL?} Previous research identified different defect categories and analyzed how frequently these defects occur in code~\cite{odc,catolino2019,lal2012,bruning2007}. However, most of the analyzed software ecosystems are built around the object-oriented paradigm, leaving a gap in understanding whether the same problems affect other environments. This study focuses on mainframe projects written in COBOL and contrasts frequent defects reported by COBOL and modern PL developers. \noindent\textbf{RQ2: Are challenging software defects the same as typical defects? What are the major challenging software defects in COBOL and non-COBOL environments?} Little is known about developers' points of view on the defects that are most challenging to locate and fix~\cite{kocchar2016,siegmund2014}. The goal of RQ2 is to contrast the types and properties of typical and challenging software defects. This comparison can shed light on the less studied area of challenging software defects and pinpoint specific defect types that mainframe developers need the most assistance with. \noindent\textbf{RQ3: Does the defect location process vary between software ecosystems? What are the similarities and differences in developer approaches to locating defective code?} Locating a software defect is a time- and effort-consuming task. To gain more insight into how developers perform defect location, researchers have conducted multiple lab and field studies to observe developers' work patterns~\cite{kevic2017,damevski2016,wang2011,wang2013,siegmund2014}. In this study, we investigate developers' perceptions of the most effective approaches to locating defects in their respective environments to observe how working in a specific environment (e.g., COBOL or non-COBOL) affects the defect location process.
To answer these questions, we asked software developers about their perspectives on software defects and strategies to localize faulty code. We surveyed 106 professional developers of varying job seniority, specializing in various technologies, including 30 COBOL developers. To support the survey's findings and gain more insight into developer viewpoints, we conducted 10 follow-up interviews. The contributions of this paper are: \begin{itemize} \item identification of the most frequent and challenging software defects in COBOL, \item investigation of developers' perception of the most useful defect location strategies, \item recommendations of the key features of a hypothetical defect location tool to support developers in maintaining mainframe COBOL systems, \item publicly available anonymized study materials to aid replication and foster future research in this area~\cite{projectWeb}. \end{itemize} \section*{Acknowledgment} We thank our participants for their time and effort, Dan Acheff for his insights about the COBOL ecosystem, and Brad Cleavenger and Greg Brueggeman for their help in designing the survey. We also thank Phase Change Software for supporting this work. \bibliographystyle{IEEEtran} \section{Research methodology} To capture developers' perspectives on software defects and defect location strategies, we leveraged prior research to devise taxonomies characterizing each of the two dimensions~\cite{catolino2019,odc,morrison2018,wang2013}. To ensure that the taxonomies correspond with developers' experience and practices, we consulted one COBOL and one non-COBOL developer and altered the taxonomies accordingly. In this section, we describe the process of building and refining the taxonomies used in the study. \subsection{Software defects taxonomy} Previous research has identified taxonomies characterizing software defects at different levels of granularity and with various properties~\cite{odc,catolino2019,freimut2005,grady1992}. In this study, we leverage the bug classification schema proposed by Catolino et al.~\cite{catolino2019}. This schema comprises 9 defect categories, namely Configuration, Database, GUI, Network, Performance, Permission/Deprecation, Program Anomaly, Security, and Test Code; hence, it provides a concise yet comprehensive selection of defect types. However, we identified two major drawbacks of applying this taxonomy in our study. First, as pointed out by Catolino et al., Program Anomaly is a broad category and pertains to the majority of software bugs. The same concern was raised by participants of the pilot study. Hence, based on the defect types indicated during the pilot study, we decided to expand Program Anomaly into 3 categories. Input data refers to issues caused by unexpected or malformed input data, which often occur in the COBOL ecosystem. Concurrency-related defects involve bugs related to, e.g., incorrect access to shared resources or deadlocks, while Logic/Flow defects pertain to programming errors, e.g., incorrect use of data structures. Additionally, as indicated by the participants of the pilot study, we also introduced a new category, Workload/Stress, for defects occurring when the system operates at the limit of its resources. In total, we use the 12 defect categories shown in Tab.~\ref{tab:defects-taxonomy}. To capture the properties of software defects, we leverage Orthogonal Defect Classification (ODC)~\cite{odc}.
ODC describes defect properties at two moments in time: when a defect is reported (opening section) and when it is closed (closing section). Although this taxonomy captures fine-grained details of defects, it is not applicable in the context of a survey, since it requires participants to be familiar with its extensive terminology. Therefore, we opted for a hybrid approach in which we leverage defect categories based on the work of Catolino et al.~\cite{catolino2019}, which to some extent covers the opening-section properties, and we additionally use three of the closing-section dimensions: Age, Source, and Qualifier. In particular, Age refers to the time when a defect was introduced into the codebase, Source indicates which part of the code was affected, while Qualifier identifies the cause of the defect. Compared to the options specified in ODC for each of these dimensions, we decided to introduce two changes. First, we introduce Corrupted Data as a Qualifier to capture one of the frequent causes of defects in COBOL projects. Second, we merge Outsourced and Ported from the Source dimension into one category, Outsourced. This modification was dictated by observing a participant of the pilot study who had difficulties differentiating between the two options. Finally, following Morrison et al.~\cite{morrison2018}, we introduced a dimension, Reporter, to capture the entity that reported a defect. The final version of the software defect taxonomy is presented in Tab.~\ref{tab:defects-taxonomy} and includes the categories of defects and their properties. \begin{table}[ht] \scriptsize \caption{Software defects taxonomy based on \cite{catolino2019} and \cite{odc}.} \label{tab:defects-taxonomy} \begin{tabular}{l|ll} \toprule \textbf{Defect categories} & \multicolumn{2}{l}{\textbf{Defect properties}} \\ \midrule Concurrency & \multirow{3}{*}{Reporter} & Developer \\ Configuration & & Quality Assurance Personnel/ Tester \\ Database & & End user \\ \cline{2-3} GUI & \multirow{4}{*}{Cause} & Not coded to specification \\ Input Data & & Missing or incomplete specification \\ Logic/Flow & & Corrupted data \\ Network & & Extraneous \\ \cline{2-3} Performance & \multirow{3}{*}{Source} & Developed in-house \\ Permission/Deprecation & & External library/API \\ Security & & Outsourced \\ \cline{2-3} Test Code & \multirow{4}{*}{Age} & New feature \\ Workload/Stress & & Legacy code \\ & & Re-factored \\ & & Changed to fix a defect \\ \bottomrule \end{tabular} \end{table} \subsection{Defect location taxonomy} To study the process of defect location from the perspective of a software developer, researchers have conducted exploratory studies to observe developers' actions when working on maintenance tasks~\cite{ko2006,sillito2008, wang2013,kevic2017}. A summary of the identified developer activities is presented in Table~\ref{tab:activities}. We note that across all of the examined studies there exists a shared set of activities that reflect the process of locating a feature in code. More specifically, in all of the studies, the authors observed developers looking for a code entity representing a starting point (Seek, Search, Finding focus point, Searching for a specific string), followed by exploring related program elements (Relate, Expand, Expanding focus points)~\cite{ko2006,wang2013, sillito2008}. Additionally, 2 out of 4 studies identified developers documenting relevant code entities~\cite{ko2006,wang2013}. Overall, across the various studies we note a set of common high-level activities performed by developers when locating a feature.
\begin{table}[t] \centering \scriptsize \caption{Stages of defect (or feature) location process identified in the previous studies.} \label{tab:activities} \begin{tabular}{p{0.5cm}p{0.5cm}p{1.3cm}p{1.5cm}} \toprule \multicolumn{1}{l}{Ko et al.~\cite{ko2006}} & \multicolumn{1}{l}{Wang et al. \cite{wang2013}} & \multicolumn{1}{l}{Sillito et al. \cite{sillito2008}} & \multicolumn{1}{l}{Kevic et al. \cite{kevic2017}} \\ \midrule Seek & Search & Finding focus point & Understanding of source code \\ Relate & Expand & Expanding focus point & Changes to source code \\ Collect & Validate & Understanding a subgraph & Change task examination \\ & Document & Questions over a graph of subgraph & Searching for specific string \\ \bottomrule \end{tabular} \end{table} In this study, we decided to leverage the hierarchical feature location model proposed by Wang et al.~\cite{wang2013}. We deem this model a good starting point as it (1) captures the feature location process at different levels of granularity; (2) is based on three exploratory studies with a large number of developers; and (3) shares a common set of activities with previous studies. To describe the feature location process, the authors used three levels of granularity: phases, patterns, and actions. Each phase represents a high-level stage in the feature location process, namely Search for entrance points, Expanding the search, Validating, and Documenting. Patterns define common strategies developers follow in different phases, related to, e.g., execution or textual search, while actions correspond to fine-grained physical or mental activities undertaken by developers. Across all the phases, the authors identified 11 physical actions and 6 mental actions. The physical actions included, e.g., reading code or exploring source code files, while the mental actions covered, e.g., conceiving an execution scenario or identifying and refining keywords. \begin{figure*}[ht] \centering \includegraphics[width=0.8\textwidth]{figures/locations/paper_drawings.pdf} \caption{Defect location model.} \label{fig:defect-location-model} \end{figure*} Although this hierarchical model provides a comprehensive representation of the behavioral patterns used for locating relevant code entities, we decided to alter some of its components for the purpose of this study. The goal of these changes was (1) to create a more concise representation of the model; and (2) to capture additional activities indicated by the participants of the pilot study. Figure~\ref{fig:defect-location-model} presents an overview of the model, with phases containing the available patterns, while actions and strategies are listed below each phase. We mark the introduced modifications in red. In general, we reuse all of the phases defined in the original model; however, we extend the available patterns. In particular, we included the IR-based pattern in the Expanding the search phase for symmetry with the Search phase. Researchers have noted that developers tend to lose track of relevant program elements due to the large number of examined files and interruptions~\cite{ko2006,wang2013,sillito2008,latoza2006}. To provide more context for the Documentation phase and investigate if and how developers collect information about relevant code components, we curated a list of common practices for documenting code based on prior work~\cite{ko2006, wang2013}. We modified the original list of physical actions in the following manner.
We decided to merge {\em Breakpoints operations}, {\em Step program}, and {\em Run program} into one succinct action, {\em Observing runtime}. As noted by Kr\"{u}ger et al.~\cite{kruger2018}, the location of a specific feature can be discovered by examining the release log and pull requests; thus, to reflect that, we added a new physical action, {\em Examine VCS}. \section{Related work} We contrast our work with prior research, which we grouped into three main areas: (1) COBOL, (2) developers' perception, and (3) software defects and defect location strategies. \noindent\textbf{COBOL ecosystem.} Despite its massive presence in the financial sector, the COBOL ecosystem has seen little attention in recent \textit{software engineering} research. Sellink et al.~\cite{sellink2002restructuring} proposed a set of methods to effectively restructure COBOL code, while Sneed et al.~\cite{sneed2001} described the process of extracting knowledge from a legacy COBOL system. More recently, researchers' efforts have concentrated on the migration of COBOL code to Java. Mossienko~\cite{mossienko2003} introduced an automated technique translating COBOL to Java. Sneed et al.~\cite{sneed2011migrating, sneed2013migrating} reported on two industrial COBOL migration projects involving code restructuring and automated language translation. De Marco et al.~\cite{demarco2018cobol} described a migration methodology designed for a newspaper company, while Rodriguez et al.~\cite{rodriguez2013bottom} discussed two migration approaches based on a case study of 32 COBOL transactions. Although these studies made a significant step towards restructuring and migrating COBOL code, ways to natively support large-scale production COBOL systems have not yet been investigated. Like any programming language, COBOL is not defect-free; however, defects and their properties in the mainframe environment have rarely been investigated. Litecky et al.~\cite{litecky1976} studied errors commonly made by students learning COBOL and observed that 20\% of error types account for 80\% of errors. Veerman et al.~\cite{veerman2006cobol} focused specifically on the \texttt{perform} statement and identified programming constructs leading to unexpected system crashes. In contrast, this study focuses on the general defect types and defect location strategies employed by professional COBOL developers to deepen the understanding of the problems encountered in the field. \noindent\textbf{Studying developers' perceptions.} This work leverages online surveys, a common tool employed by researchers to study developers’ expectations and perceptions. For instance, the survey methodology has been applied to investigate debugging tools and approaches to solve the most difficult defects~\cite{siegmud2014}, and developers' expectations towards fault localization~\cite{kocchar2016}. While our work is similar in spirit, we explicitly study professional COBOL developers. Devanbu and colleagues~\cite{devanbu2016belief} indicate that although developers’ beliefs may differ from empirical evidence, developers’ perceptions should be taken into consideration, especially when prior research is limited and obtaining data at scale is challenging. Although we were not explicitly investigating such discrepancies, we did find outliers in developers' perceptions. However, they were few and far between and were discarded in the aggregate analysis.
\noindent\textbf{Software defects and defect location strategies.} Researchers and practitioners have devised several classification taxonomies to capture and analyze the characteristics of software defects. Chillarege et al.~\cite{odc} introduced Orthogonal Defect Classification (ODC), comprising 8 dimensions describing defect properties. Researchers at Hewlett-Packard~\cite{grady1992} proposed a taxonomy leveraging three dimensions to explain a defect's type, source, and root cause. The defect categories and properties identified in these studies have been successfully applied in various industrial settings~\cite{lutz2004,zheng2006} and led to automatic tools for defect prevention~\cite{shenvi2009defect} and classification~\cite{huang2015autoodc}. Recently, Catolino et al.~\cite{catolino2019} inspected defect reports from 3 large software projects to build a modern taxonomy consisting of 9 defect categories. Our work builds upon the defect taxonomies proposed by Catolino et al.~\cite{catolino2019} and ODC~\cite{odc}. Researchers have conducted numerous lab and field studies observing the strategies leveraged by developers while completing maintenance tasks to delineate the defect location process. For instance, Ko et al.~\cite{ko2006} identified three main activities frequently interleaved by developers: seeking, relating, and collecting information. Wang et al.~\cite{wang2013, wang2011} created a hierarchical model describing the feature location process in terms of phases, patterns, and actions. Based on exploratory studies, Kevic et al.~\cite{kevic2017} identified a set of 6 re-occurring activities related to, e.g., understanding or changing source code. Recently, Chattopadhyay et al.~\cite{chattopadhyay2019} performed an exploratory study with 10 developers to observe how developers maintain the current task context. This work leverages the model of feature location proposed by Wang et al.~\cite{wang2013} to create the survey. Building automatic tools to reduce the effort required to localize relevant code components has been of great interest. However, despite the plethora of advanced automated techniques for defect location~\cite{corley2018,wen2016,moreno2014,dao2017, kruger2018, lam2017}, none of these were evaluated in the context of mainframe development. Therefore, their applicability in the mainframe environment remains largely unknown. \section{Results} \subsection{Demographic Information} Overall, we received 106 responses from software developers located across 11 countries, with the majority of participants residing in the United States, India, and Poland. We discarded the responses from two developers who reported having less than a year of professional industry experience and considered the remaining responses for further analysis. Out of the 104 respondents, 30 are currently, or were recently, involved in a COBOL project, while the remaining 74 work with modern programming languages, including Java, Python, and C\#. In the remainder of this section, we refer to the first group as {\em COBOL} and the latter as {\em non-COBOL} developers. Among the COBOL developers, we noted 25 males and 5 females. The non-COBOL developers included 58 males and 10 females, while 6 developers did not disclose their gender. On average, COBOL developers reported having 22.3 years of professional programming experience ($std=14.84$, $min=4$, $max=60$). In contrast, non-COBOL developers reported having 9.78 years of professional programming experience ($std=7.68$, $min=1$, $max=51$).
\subsection{Software defects} \begin{figure*}[t!] \centering \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=\columnwidth]{figures/defects/typical_defects.pdf} \caption{\footnotesize{Q: \textit{What defect categories do you work on most frequently?}}} \label{fig:defects-typical} \end{subfigure}% ~ \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=\columnwidth]{figures/defects/challenging_defects.pdf} \caption{\footnotesize{Q: \textit{What are the most challenging defects categories?}}} \label{fig:defects-challenging} \end{subfigure} \caption{\small{Developers' perspective on typical (left) and challenging software defects (right).}} \label{fig:defects} \vspace{-5mm} \end{figure*} \noindent\textbf{Typical Software Defects.} Figure~\ref{fig:defects-typical} shows the distribution of developers' perceptions of typical defect types. Logic/Flow and Input data are the most frequently occurring defect categories according to both COBOL and non-COBOL developers. Logic/Flow was selected by 83.3\% of COBOL and 59.5\% of non-COBOL developers, while Input data was reported by 80.0\% and 48.6\% of COBOL and non-COBOL developers, respectively. Our results are consistent with previous work stating that the majority of software defects are related to programming errors leading to exceptions, crashes, or incorrect behavior~\cite{catolino2019,tan2014}. Developer perception varies for the rest of the defect categories. COBOL developers indicated that other typical defects pertain to Configuration (50\%) and Database (42.9\%), whereas Security-, Network-, or Concurrency-related defects rarely happen. We posit that the lack of focus on modules implementing network or security features within COBOL mainframe batch processing explains this distribution. In contrast, non-COBOL developers reported Performance as their top category of frequently occurring typical defects (52.7\%). Overall, we notice that defects in the rest of the categories, Configuration, Database, Test Code, User Interface, Workload/Stress, and Concurrency, are fairly evenly distributed, being reported as common by 25.7\% (Concurrency) to 35.1\% (Test Code) of non-COBOL developers. We ran a chi-squared test to test the null hypothesis: \textit{``the distributions of defect types among COBOL and non-COBOL responses are the same''}. However, the low counts reported in various defect categories by our COBOL developers required us to follow the common recommendation to combine some of the rows to obtain sufficient values for an approximation of a chi-square distribution~\cite{ott1988introduction}. We combined Network, Security, and Permission/Deprecation into a ``Miscellaneous'' category. Based on the $p$-value of 0.02, we reject the null hypothesis, concluding that the typical defect types encountered by COBOL developers \textit{are} significantly different from the typical defects faced by non-COBOL developers.
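For reproducibility, a test of this kind takes only a few lines of SciPy; the contingency counts below are illustrative placeholders rather than our actual survey tallies.

\begin{verbatim}
import numpy as np
from scipy.stats import chi2_contingency

# Rows: respondent groups (COBOL, non-COBOL); columns: defect
# categories after merging Network, Security, and
# Permission/Deprecation into "Miscellaneous". Placeholder counts.
table = np.array([
    [25, 24, 15,  4, 13,  6],   # COBOL selections per category
    [44, 36, 24, 39, 26, 30],   # non-COBOL selections per category
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
# Reject the null hypothesis of identical distributions when p < 0.05.
\end{verbatim}

We investigated whether developers' perceptions of typical defect types follow the distributions reported in previous studies. As a baseline for the non-COBOL responses, we selected data from Morrison et al.~\cite{morrison2018}, which includes a manual annotation of defect types from large open-source projects. Since our survey asked developers to select the top-3 typical defect types, we compared the survey results with the frequencies of the top-3 defect categories reported by the baseline (Performance, Installability, and Capability).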
Note that we mapped the categories of defects used in the baseline to the defect types leveraged by this study. We ran a chi-square test of the null hypothesis that ``\textit{defect categories reported by the survey and the baseline follow the same distribution}''. The test resulted in $p=0.16$; thus, we fail to reject the null hypothesis, which suggests that non-COBOL developers' perceptions of typical defects are fairly accurate. In the case of the COBOL responses, we use annotated defect-related posts from the IBM mainframe forum as a baseline (described in Section~\ref{forumDefectClassification}). We observed that the top-3 most discussed defect types are related to Logic/Flow, Configuration, and Input data, which broadly confirms the survey results. \begin{tcolorbox} \small{\textit{RQ1}: Defects related to Logic/Flow and Input data are the primary defect types in COBOL systems. Typical defect types encountered by COBOL developers are significantly different from typical defects encountered by non-COBOL developers.} \end{tcolorbox} \noindent\textbf{Challenging software defects.} The distribution of developers' perceptions of challenging defect types is shown in Fig.~\ref{fig:defects-challenging}. According to COBOL developers, challenging defects are mostly related to Input data (64.3\%) and Logic/Flow (53.6\%), making those categories typical and challenging simultaneously. As the third most challenging defect category, 35.7\% of COBOL developers selected Workload/Stress. In contrast, only 12.5\% of COBOL developers reported Workload/Stress as a \textit{typical} defect. Non-COBOL developers indicated issues related to Performance (60.8\%), Concurrency (56.8\%), and Workload/Stress (33.8\%) as the most challenging, whereas Logic/Flow and Input data defects were selected by 27\% and 13.5\% of the respondents, respectively. We ran the chi-squared test to compare the distribution of challenging defect types among COBOL and non-COBOL developer responses. In particular, we tested the following null hypothesis: \textit{``challenging defect types reported by COBOL and non-COBOL developer responses follow the same distribution''}. Our test resulted in $p < 0.01$. We therefore reject the null hypothesis, indicating that the challenging defects reported by COBOL and non-COBOL developers \textit{are}, in fact, different. We also performed the chi-squared test to evaluate the differences in the distribution of typical and challenging defects within each respondent group. Specifically, for each group, we tested the null hypothesis that \textit{``typical and challenging defect types have the same distribution''}. In the case of COBOL, our test resulted in $p=0.096$; therefore, we fail to reject the null hypothesis and conclude that typical and challenging COBOL defects are not significantly different. In contrast, the challenging defects reported by non-COBOL respondents are significantly different from the typical defect types, with $p\ll0.01$. In the follow-up interviews, most COBOL and non-COBOL developers agreed with the top typical and challenging defect categories reported by their respective groups. COBOL developers indicated that Logic/Flow and Input data can be challenging since \textit{there is no good IDE for COBOL compared to IntelliJ or Eclipse tools, and the structure of COBOL code can easily lead to programming errors}. In support, another COBOL developer mentioned that one of the hardest defects he faced was related to a missing \textit{null check for file pointer}, which could be easily detected by a modern IDE.
Developers also stated that, since \textit{COBOL is primarily a legacy system, they rarely know the entire process}, making it difficult to locate the root cause effectively. Finally, developers also noted that Workload/Stress and Performance defects pose a challenge due to the difficulty of local reproduction and the unique characteristics of every defect. Overall, developers listed the following challenges associated with specific defects: (1) \textit{reproducing the defect outside the production environment is often impossible}; (2) \textit{defects do not surface immediately; instead, they happen occasionally, leading to numerous problems across components}; (3) \textit{it can take days to find the right solution}. \begin{tcolorbox} \small{\textit{RQ2:} Challenging defects encountered by COBOL developers are not different from typical defects. However, challenging defects encountered by non-COBOL developers vary significantly from typical defects.} \end{tcolorbox} \noindent\textbf{Defect properties.} Developers' perspectives on the properties of software defects are illustrated in Fig.~\ref{fig:properties}, with each property visualized as a Likert scale. Properties of typical defects are marked with (T), whereas challenging defects are marked with (Ch). When asked about defect reporters, 63.2\% of COBOL and 80.7\% of non-COBOL developers agreed that testers are the most likely group to detect typical defects (Fig.~\ref{fig:reporter}). In contrast, only 33.3\% of COBOL and 47.3\% of non-COBOL developers expressed positive opinions about testers’ ability to detect challenging defects. At the same time, both groups indicated that end-users are more likely to report challenging defects, with 50\% of COBOL and 43.2\% of non-COBOL developers supporting this opinion. We conducted unpaired Mann-Whitney tests to verify the null hypothesis: \textit{``testers have the same capability to detect typical and challenging defects''}. The tests were conducted for COBOL and non-COBOL separately. We note $p=0.029$ and $p\ll0.01$ for COBOL and non-COBOL, respectively, indicating a statistically significant difference in perceived testers’ abilities. We conducted analogous tests for end-users and obtained $p=0.052$ and $p=0.143$; thus, we conclude that the observed difference is not significant. In the follow-up interviews, both COBOL and non-COBOL developers supported these findings, indicating that \textit{since end users have a system that has already been tested, then all the easy defects have been already caught. Hence, end-users usually find more complicated issues, which the engineering team has not thought of}. Moreover, testing for, e.g., Performance or Workload/Stress, \textit{is rarely doable in the development environment} and \textit{requires significant time and effort}.
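In the same spirit as the chi-squared sketch above, the unpaired tests here can be reproduced with SciPy's Mann-Whitney U implementation; the Likert ratings below are placeholders, not our survey data.

\begin{verbatim}
from scipy.stats import mannwhitneyu

# Hypothetical Likert ratings (1=Never ... 5=Always) of how frequently
# testers report typical vs. challenging defects. Placeholder values.
typical     = [5, 4, 4, 5, 3, 4, 5, 4, 3, 4]
challenging = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]

stat, p = mannwhitneyu(typical, challenging, alternative="two-sided")
print(f"U={stat}, p={p:.3f}")
# p < 0.05 indicates a significant difference in perceived tester
# ability to detect typical vs. challenging defects; the unpaired
# test mirrors the analysis choice described above.
\end{verbatim}

\begin{figure*}[ht!]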
\begin{subfigure}{0.40\textwidth} \centering \includegraphics[scale=0.7]{figures/defects/reporter_shared1.pdf} \caption{\footnotesize{Q: \textit{How frequently are software defects reported by:}}} \label{fig:reporter} \end{subfigure}% ~ \begin{subfigure}{0.55\textwidth} \centering \includegraphics[scale=0.7]{figures/defects/cause_shared1.pdf} \caption{\footnotesize{Q: \textit{How likely is a software defect caused by:}}} \label{fig:cause} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[scale=0.7]{figures/defects/source_shared1.pdf} \caption{\footnotesize{Q: \textit{How likely is a software defect to occur in code that is:}}} \label{fig:source} \end{subfigure}\hspace{5mm}% ~ \begin{subfigure}{0.50\textwidth} \centering \includegraphics[scale=0.7]{figures/defects/age_shared1.pdf} \caption{\footnotesize{Q: \textit{How likely is a software defect to occur in code that is:}}} \label{fig:age} \end{subfigure}% \caption{\small{Developers' perspective on the properties of software defects.}} \label{fig:properties} \vspace{-3mm} \end{figure*} COBOL and non-COBOL developers agree that typical defects are mostly caused by corrupted data (73.3\% and 55.4\%) or missing specifications (76.7\% and 73\%). In the case of challenging defects, respondents expressed weaker opinions about the root cause, shifting their opinions from \textit{Always} towards \textit{Often} and \textit{Sometimes}. Additionally, we note an increased selection of \textit{Not coded to specification} as a root cause of challenging defects compared to the results obtained for typical defects. Developers also agree that both typical and challenging defects are more likely to be located in code developed in-house or outsourced than in external APIs (Fig.~\ref{fig:source}). The majority of defects are deemed to originate in newly developed code. However, developers ranked legacy code as the second most likely source of issues (Fig.~\ref{fig:age}). \begin{tcolorbox} \small{Testers' abilities to detect challenging defects are limited due to environment restrictions and the lack of tools supporting efficient testing of, e.g., Performance- or Workload/Stress-related scenarios.} \end{tcolorbox} \iffalse \noindent\textbf{Defect report content} \begin{table}[h] \caption{The most useful categories of information enclosed in a defect report and developers' perception of how often a given type of information is present in a defect report.
Availability is measured using a Likert scale with \textcolor{red1}{Never/Rarely}, \textcolor{gray1}{Sometimes} and \textcolor{green1}{Often/Always} options.} \label{tab:content} \centering \scriptsize \begin{tabular}{p{2.3cm}|ll|ll} \toprule \multirow{2}{2.3cm}{Information category} & \multicolumn{2}{c}{COBOL} & \multicolumn{2}{|c}{Non-COBOL} \\ & Mean rank & Availability & Mean rank & Availability \\ \midrule Error log & 3.400 (1)& \PB{0.233}{0.533} & 3.838 (3) & \PB{0.081}{0.324}\\ Expected behavior & 3.500 (2) & \PB{0.133}{0.467} & 4.365 (5) & \PB{0.068}{0.230}\\ Failing test case & 3.667 (3)& \PB{0.2}{0.73} & 4.959 (6) & \PB{0.189}{0.514}\\ Observed behavior & 4.100 (4)& \PB{0.1}{0.4333} & 3.541 (2) & \PB{0.041}{0.135}\\ Code component name & 4.800 (5) & \PB{0.4}{0.7} & 5.513 (7) & \PB{0.473}{0.716}\\ Steps to reproduce & 4.867 (6)& \PB{0.4}{0.7} & 3.081 (1) & \PB{0.095}{0.338}\\ Stack Trace & 5.233 (7) & \PB{0.5}{0.733} & 4.189 (4) & \PB{0.203}{0.595}\\ Program version & 6.433 (8) & \PB{0.5}{0.733} & 6.514 (8) & \PB{0.297}{0.459} \\ \bottomrule \end{tabular} \end{table} Table~\ref{tab:content} shows developers' perspectives on the usefulness of various categories of information enclosed in a defect report. More specifically, for each information category, we show the mean rank, calculated as the average of the ranks assigned by respondents, the absolute rank value, and the availability of the given type of information depicted using a Likert scale. Overall, we note that COBOL developers express strong opinions about the availability of certain information types. Error log, Expected behavior (EB), Failing test case, and Observed behavior (OB) are indicated to be often enclosed in a defect report, while the remaining information sources, including Steps to reproduce (S2R) and Code component name, are less likely to be present. Interestingly, COBOL developers also reported that the most often available categories of information are also the ones that are the most useful. During follow-up interviews, COBOL developers expressed strong support for the Error log, indicating that they \textit{get the useful information, such as errors or warnings, from the log and use other sources of information only if the error log fails}. Analyzing the usefulness of the different categories of information reported by non-COBOL participants, we notice the developers ranked S2R as the most helpful category, followed by OB and Stack Trace. The high rank of S2R comes from the fact that it provides the developer with an exact scenario leading to a defect, while OB is less detailed and requires the developer to find out how the defect can be reproduced. In fact, according to one of the non-COBOL developers, \textit{having steps to reproduce allows one to see the defect happening in front of you. Once you can reproduce the problem, then it often becomes a trivial problem to solve. Most often, as soon as I see the steps to reproduce, I can almost always tell at least which class is involved, if not a method}. \fi \subsection{Defect Location Strategies} We analyzed developers' responses for the three phases of the defect location process outlined in the taxonomy in Section~\ref{studyDesign_defectLocationTaxonomy}: searching for a starting point, expanding relevant program elements, and documenting~\cite{wang2011}. \noindent \textbf{Phase 1: Search.} Table~\ref{tab:start-mental} presents the reasoning strategies employed by developers in the first phase of the defect location process, ranked in order of importance.
We note that each group of respondents has its own distinctive preferences. COBOL developers declare that they first focus their efforts on locating a similar defect that was solved before (47\%), while non-COBOL developers prefer to identify simple steps to reproduce (44\%). Moreover, each group frequently selected the other group's top strategy as its second choice, further emphasizing the usefulness of the two approaches. Next, participants indicated they tend to look into source code files and blocks of code, and finally, they concentrate on keywords present in a defect report. The developers further confirmed these rankings in the follow-up interviews. For instance, COBOL developers mentioned that \textit{seeing a defect which is similar to an issue fixed before gives a good approximation of the potential problem and its location}, therefore their organizations tend to \textit{keep defect databases to retain the knowledge of previous patterns of software failures}. \begin{table}[t] \scriptsize \caption{Reasoning strategies undertaken by developers when examining new defect reports.} \label{tab:start-mental} \centering \scriptsize \begin{tabular}{p{4.8cm}|ll} \toprule \multirow{2}{4.8cm}{Reasoning strategy} & \multicolumn{1}{l}{COBOL} & \multicolumn{1}{c}{Non-COBOL} \\ & Mean rank & Mean rank \\ \midrule Thinking about a similar defect fixed before & 2.567 (1) & 3.378 (2) \\ Thinking about simple steps allowing to reproduce the defect & 3.133 (2) & 2.568 (1) \\ Thinking about source code files that may be related to the defect & 3.200 (3) & 3.689 (4) \\ Thinking about blocks of code that may be related to the defect & 4.167 (4) & 3.500 (3) \\ Identifying keywords that can be related to defect's location & 4.367 (5) & 4.014 (5) \\ Refining or augmenting keywords based on my knowledge & 5.567 (6) & 5.541 (6)\\ Thinking about not obvious steps that can cause the defect & 5.000 (7) & 5.311 (7)\\ \bottomrule \end{tabular} \end{table} Developers' opinions on the effectiveness of physical actions are illustrated in Fig.~\ref{fig:start-actions}. In the initial phase, developers rank reading code and reviewing defect descriptions as the most useful actions for locating an entry point for further exploration. Additionally, we note that COBOL developers prioritized searching for code components and navigating code dependencies, whereas non-COBOL developers prefer to explore source code files. This difference can be attributed to the capabilities of the tools (e.g., providing a summary of related code components) used by non-COBOL developers during their quick file exploration. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/locations/start_actions_shared1.pdf} \caption{\footnotesize{Developers' perspective on the usefulness of physical actions in the Search phase.}} \label{fig:start-actions} \vspace{-5mm} \end{figure} With respect to the most useful patterns in the Search phase of the defect location process, the majority of developers in both groups indicated they prefer to use an execution-based strategy to locate an initial set of entry points, followed by textual search and navigating static code dependencies. \noindent \textbf{Phase 2: Expand.} After locating a set of entry points, developers move to the Expand phase. Table~\ref{tab:expand-mental} depicts the main goals developers aim to achieve at this stage. We observe that both groups are mainly focused on reproducing the defect to confirm whether their location process moves in the direction of the defect's cause.
Additionally, respondents indicated that they need to understand the code behavior better. Follow-up interviews revealed that, as developers are often under time pressure, they prefer to focus on the fastest path to resolving the defect. Therefore, they opt for reproducing the faulty behavior as \textit{once the defect gets reproduced, it is very easy to proceed further}. \begin{table}[h] \scriptsize \caption{\small{Developers' motivations and goals in the Expand phase.}} \label{tab:expand-mental} \centering \scriptsize \begin{tabular}{p{4.8cm}|ll} \toprule \multirow{2}{4.8cm}{Motivation/Goal} & \multicolumn{1}{l}{COBOL} & \multicolumn{1}{c}{Non-COBOL} \\ & Mean rank & Mean rank \\ \midrule Reproducing a defect & 1.600 (1) & 1.905 (1) \\ Understanding code behavior & 2.500 (2) & 2.162 (2) \\ Locating more code components & 2.633 (3) & 2.824 (3) \\ Depicting dependencies & 3.267 (4) & 3.108 (4) \\ \bottomrule \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/locations/expand_actions_shared1.pdf} \caption{\footnotesize{Developers' perspective on the usefulness of physical actions in the Expand phase.}} \label{fig:expand-actions} \vspace{-5mm} \end{figure} Developers' perception of the usefulness of physical actions in the Expand phase is shown in Fig.~\ref{fig:expand-actions}. As in the Search phase, developers rank reading code as the most beneficial activity, followed by navigating structural code dependencies and exploring source code files. Further, when analyzing developers' patterns in this phase, we note that they prefer to use an execution-based approach. In contrast to the Search phase, developers reported leveraging code dependencies more often than performing a textual search. \noindent \textbf{Phase 3: Document.} Preserving information about located relevant files concludes the defect location process. Fig.~\ref{fig:document} illustrates how developers keep track of important code components. Respondents in both groups indicated they frequently write notes (e.g., on paper or in a text editor) and use breakpoints to indicate relevant code pieces. We notice that non-COBOL developers rely more on ``open tabs'' and ``keep in mind'' strategies compared to COBOL developers. Both preferences can be explained in part by the tooling support for non-COBOL developers. For instance, a typical modern IDE provides ample opportunity to keep multiple tabs open and enough visual cues to facilitate the ``keep in mind'' strategy. \begin{tcolorbox} \small{\textit{RQ3:} COBOL and non-COBOL developers approach defect location rather similarly. Minor differences are explainable by varying levels of experience and available tooling support.} \end{tcolorbox} \subsection{Code granularity throughout the defect location process} \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{figures/locations/document.pdf} \caption{\footnotesize{Developers' perspective on documentation.}} \label{fig:document} \vspace{-5mm} \end{figure} In the case of non-COBOL developers, we observe a consistent preference for Method/Function level code granularity across all phases, with support varying between 37.8\% and 44.6\%. We note a slight change in preferences in Phase 3, when developers mark relevant code components to be fixed. At this stage, non-COBOL developers reported increased interest in the Block of code (from 17.6\% to 30\%) and Line of code (from 4.1\% to 21.6\%) levels of granularity.
COBOL developers tend to follow a top-down approach, focusing first on higher granularity levels and progressively moving towards lower ones. In the Search phase, 50\% of COBOL developers prefer to work with Files or Methods. In the Expand phase, they increasingly prefer the Method level, while when documenting, they operate at the level of Blocks or Lines of code. All COBOL developers agreed with these results during follow-up interviews and related the top-down code navigation hypothesis to the COBOL code structure. On the other hand, non-COBOL developers indicated that while they sometimes follow a similar top-down approach, overall, they prefer to reason about functions as \textit{a single line of code rarely provides enough context}, whereas, in \textit{a well-written code, a method encapsulates a specific sub-task and provides just enough information to understand and efficiently solve the defect}. \begin{tcolorbox} \small{COBOL developers typically follow a top-down approach to navigate artifacts during defect location, as reflected by their choices of specific artifacts at different phases.} \end{tcolorbox} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/locations/granularity.pdf} \caption{\footnotesize{Preferred granularity of code components in different phases of defect location process.}} \label{fig:granularity} \vspace{-5mm} \end{figure} \section{Study design} \label{sec:studyDesign} For this study, we employed an online survey for the following reasons. First, the population of COBOL developers is relatively small and scattered; hence, to recruit a sufficient number of participants, we could not be limited to our geographical area. Second, COBOL projects are of crucial value to companies, so accessing their internal data was highly unlikely. Third, due to the rapidly evolving COVID-19 pandemic, an online study remained the safest choice. We employed a mixed-methods approach to ascertain the differences between the COBOL and non-COBOL developer populations regarding defect types and defect location strategies. Specifically, our study comprised the following stages, as depicted in Figure~\ref{fig:approach}: (A) Defect Type Taxonomy Synthesis, (B) Defect Location Strategy Taxonomy Synthesis, (C) Survey Design, (D) Survey Deployment and Participant Recruitment, (E) Result Analysis and Follow-up Validation, and (F) COBOL Forums Defect Classification. We next describe each of these stages in detail. \begin{figure} [ht] \centering \includegraphics[trim=100 80 90 80,clip,width=0.9\linewidth]{figures/Methodology.pdf} \caption{\small{An overview of our study design methodology.}} \label{fig:approach} \vspace{-6mm} \end{figure} \subsection{Defect Type Taxonomy} \label{studyDesign_defectTaxonomy} Previous research has identified various taxonomies characterizing software defects~\cite{odc,catolino2019,freimut2005,grady1992}. In this study, we leverage the classification schema proposed by Catolino et al.~\cite{catolino2019}, which provides a concise, yet comprehensive selection of defect types based on 9 categories: Configuration, Database, GUI, Network, Performance, Permission/Deprecation, Program Anomaly, Security, and Test Code. However, we note two major drawbacks of applying this taxonomy in our study. First, Program Anomaly is a broad category and pertains to the majority of software defects. This concern was identified during our think-aloud mock survey sessions (details in Sec.~\ref{sec:survey-design}) and was also mentioned by Catolino et al.~\cite{catolino2019}.
Therefore, we decided to further expand Program Anomaly into 3 categories based on defect types indicated during the mock surveys. \textit{Input data} refers to issues caused by unexpected or malformed input data, which often occur in COBOL transaction processing systems. \textit{Concurrency-related} defects involve problems caused by, e.g., incorrect access to shared resources or deadlocks. In contrast, \textit{Logic/Flow} defects pertain to programming errors, e.g., incorrect use of data structures. Second, participants of the mock survey sessions additionally identified a separate defect category related to issues arising when the system operates at the limit of its resources: \textit{Workload/Stress}. In total, we use the 11 defect categories shown in Table~\ref{tab:defects-taxonomy}. To capture additional software defect properties, we leverage Orthogonal Defect Classification (ODC)~\cite{odc}. ODC captures defect properties from two perspectives: when the defect is reported and when it is closed. Although ODC captures fine-grained details of defects, it is not applicable in the context of a survey, since it requires participants to be familiar with extensive ODC terminology. Therefore, we opted for a hybrid approach in which we leverage the defect categories of Catolino et al.~\cite{catolino2019}, which to some extent cover ODC's defect-reporting perspective, and add three dimensions from ODC's defect-closing perspective: \textit{Age}, \textit{Source}, and \textit{Qualifier}. In particular, \textit{Age} refers to the time when the defect was introduced into the code base, \textit{Source} indicates which part of the code was affected, whereas \textit{Qualifier} identifies the cause of the defect. Comparing the options specified in ODC for each of these dimensions, we decided to introduce two changes. First, we introduce \textit{Corrupted data} as a qualifier to capture one of the frequent causes of defects in COBOL projects. Second, we merge the \textit{Outsourced} and \textit{Ported} categories in the \textit{Source} dimension into one category, \textit{Outsourced}. The modification was dictated by observing how, during the mock survey sessions, participants struggled to differentiate between the two options. Finally, following Morrison et al.~\cite{morrison2018}, we introduced a dimension to determine the entity that reported a defect, \textit{Reporter}. The final version of our defect taxonomy is presented in Table~\ref{tab:defects-taxonomy} and includes the defect categories and their properties.
\begin{table}[ht] \scriptsize \caption{\small{Our software defect taxonomy based on \cite{odc} and \cite{catolino2019}.}} \label{tab:defects-taxonomy} \begin{tabular}{l|ll} \toprule \textbf{Defect categories} & \multicolumn{2}{l}{\textbf{Defect properties}} \\ \midrule Concurrency & \multirow{3}{*}{Reporter} & Developer \\ Configuration & & Quality assurance personnel/Tester \\ Database & & End user \\ \cline{2-3} GUI & \multirow{4}{*}{Cause} & Not coded to specification \\ Input Data & & Missing or incomplete specification \\ Logic/Flow & & Corrupted data \\ Network & & Extraneous \\ \cline{2-3} Performance & \multirow{3}{*}{Source} & Developed in-house \\ Permission/Deprecation & & External library/API \\ Security & & Outsourced \\ \cline{2-3} Test Code & \multirow{4}{*}{Age} & New feature \\ Workload/Stress & & Legacy code \\ & & Re-factored \\ & & Changed to fix a defect \\ \bottomrule \end{tabular} \end{table} \subsection{Defect Location Taxonomy} \label{studyDesign_defectLocationTaxonomy} There is a rich history of exploratory observational studies investigating the process of defect location~\cite{ko2006,sillito2008,wang2013,kevic2017,damevski2016}. In general, we note that the following common strategies emerge across different studies: looking for initial code entities, exploring related program elements, and documenting relevant code components~\cite{ko2006,wang2013,sillito2008}. In this study, we decided to leverage the hierarchical feature location model proposed by Wang et al.~\cite{wang2013}. The model comprises three levels of granularity: phases, patterns, and actions. Each phase represents a high-level stage in a feature location process, namely, \textit{Search} for entry points, \textit{Expand} the search, \textit{Validate}, and \textit{Document}. Patterns define common strategies developers follow at different phases related to, e.g., execution or textual search, while actions correspond to fine-grained physical or mental activities undertaken by developers. Across all phases, the authors identified 11 physical actions and 6 mental actions. The physical actions included, e.g., reading code or exploring source code files, while mental actions covered, e.g., conceiving an execution scenario or identifying keywords. \begin{figure*}[ht] \centering \includegraphics[width=0.8\textwidth]{figures/locations/paper_drawings_citation.pdf} \caption{Defect location model based on Wang et al.~\cite{wang2013}.} \label{fig:defect-location-model} \vspace{-5mm} \end{figure*} We deem this model a good starting point as it: (1) captures the feature location process at different levels of granularity; (2) is based on three exploratory studies with a large number of developers; and (3) shares a common set of activities with previous studies. Although the model provides a comprehensive description of patterns used for locating relevant code entities, we decided to alter some of its components to include developers' practices discovered by prior studies. These modifications capture relevant activities as indicated by the participants of the mock survey. Figure~\ref{fig:defect-location-model} presents the overview of the model with phases containing available patterns, while physical and mental actions are listed below each phase. We mark the introduced modifications in red. In general, we reuse all of the phases defined in the original model. However, we extend the available patterns.
In particular, we included the IR-based pattern in the \textit{Expand} phase for symmetry with the \textit{Search} phase. Researchers noted that developers tend to lose track of the relevant program elements due to a large number of examined files and interruptions~\cite{ko2006,wang2013,sillito2008,latoza2006}. To provide more context for the \textit{Document} phase and investigate if/how developers preserve the information about relevant code components, we curated a list of common documenting strategies based on prior work~\cite{ko2006, wang2013}. We modified the original list of physical actions in the following ways. We merged {\em Breakpoints operations}, {\em Step program} and {\em Run program} into one succinct action, {\em Observing runtime}. As noted in \cite{kruger2018}, the location of a specific feature can be discovered by examining the release log and pull requests; thus, to reflect that, we added a new physical action, {\em Examine Version Control System (VCS)}. We included all mental actions identified by Wang et al. and added one new action, {\em Thinking about similar defects fixed before}, which aims to measure the importance of developers' experience~\cite{jordan2015}. We decided to refer to mental actions as {\em reasoning strategies} throughout the survey for ease of understanding. Finally, we associated a set of {\em Motivations/Goals} with the {\em Expand} phase to capture the main objectives of developers at this stage~\cite{ko2007information,sillito2008}. \subsection{Survey Design} \label{sec:survey-design} We leveraged the taxonomies developed in the previous steps to formulate our survey questions and asked participants to provide answers in the context of their routine work. We divided the survey into three main sections: \textit{Software Defects, Defect Location,} and \textit{Demographics}. In the \emph{Software Defects} section, we focused on the distribution of the defects taxonomy attributes in the context of typical and challenging defects. We primed the survey participants with the following definitions: \begin{itemize} \item \emph{Typical defects} - software defects faced frequently by study participants in their daily work, \item \emph{Challenging defects} - software bugs that require a considerable amount of time and effort when fixing due to, e.g., difficulties in isolating the root cause, inability to reproduce the defect, or developing a viable solution. \end{itemize} Participants were asked to select between 3 and 5 typical software defect categories and up to 3 challenging categories. A short explanation accompanied each defect category. To capture the properties of typical and challenging bugs, we presented developers with a 5-point Likert scale (Never, Rarely, Sometimes, Often, Always), asking how frequently the participant observes specific properties of a defect, e.g., the reporter being a tester or an end user. In the \emph{Defect Location} section, we focused on collecting the distribution of taxonomy attributes in the context of a general defect location process. We asked questions about the most useful patterns, reasoning strategies or motivations, and physical actions related to the defect location process. Moreover, to investigate how developers move through the code, at the end of each phase, we asked them about their preferred granularity level~\cite{kocchar2016}. Finally, we captured the demographic information in the \textit{Demographics} section.
In particular, we asked about: \begin{itemize} \item Years of paid professional experience, \item Preferred programming languages, \item Current country of residence, \item Willingness to participate in a follow-up interview. \end{itemize} Once we had the initial draft of questions, we ran a mock survey with both COBOL and non-COBOL developers following a think-aloud protocol~\cite{redmiles2017summary} to estimate the average response time and further fine-tune our survey. The mock survey was performed with developers recruited in-house at Phase Change Software via teleconference calls. We were interested in identifying the questions that were often: (1) \textit{misunderstood} - participants consistently erred in understanding the intent behind the question; (2) \textit{irrelevant} - participants consistently questioned the relevance of the question. We promptly removed the irrelevant questions. Furthermore, we did a lightweight thematic analysis of the think-aloud recordings to further refine the questions, which led to three changes. First, we refined the language in questions that caused confusion. Second, we reordered questions to achieve a better flow. Finally, we created two distinct surveys individually catering to the COBOL and the non-COBOL developer population. The surveys had semantically equivalent questions, differing, when necessary, to account for discrepancies in terminology. For instance, classes/files in the non-COBOL survey translate to modules/procedures in the COBOL survey. Based on our mock sessions, we estimated the average response time of our survey to be 30 minutes. All of the questions used in our final survey can be found on the project website~\cite{projectWeb}. \subsection{Survey Deployment and Participant Recruitment} To recruit COBOL developers, we contacted the mailing list maintained by Phase Change Software\footnote{\url{https://www.phasechange.ai/}}. We further posted the survey to various COBOL groups on LinkedIn and IBM-maintained COBOL developer forums. To recruit non-COBOL developers, we reached out to our personal contacts at several worldwide software companies (Spotify, Google, Facebook, Nokia, Capgemini), asking them to distribute the survey among experienced software developers. To capture the perspective of the broader audience of software engineers, we also publicized our survey on professional forums within social platforms such as LinkedIn, Twitter, and Facebook. To improve our response rate, we offered rewards in the form of a chance to win an Amazon gift card. For COBOL developers, we set the reward to USD 25 for 25 participants selected at random. For non-COBOL developers, we set the reward to USD 10 for 25 participants selected at random. The difference in rewards between the groups reflects the disproportionate sizes of the two developer populations, a disproportion that is also visible in the two groups' response rates. \subsection{Analysis and follow-up validation} In this stage, we analyzed the collected responses. A summary of these results was presented to a subset of survey participants as a lightweight mechanism to confirm our synthesis. Each semi-structured session lasted 30 minutes and followed a script prepared beforehand. In all, we recruited 6 COBOL and 4 non-COBOL survey participants at random for the follow-up interviews. These sessions were conducted on Zoom, an online video conferencing platform, and recorded with the participants' permission to be transcribed and analyzed offline.
\subsection{COBOL Forums Defect Classification} \label{forumDefectClassification} Studying developers' perspectives can be biased by individual perception; hence, we decided to compare the defect types reported in the survey with results obtained by data mining. We referred to prior research~\cite{morrison2018} to triangulate the defect types reported by non-COBOL developers in the survey. However, no such baseline existed for COBOL developers. To address this gap, we manually classified the queries posted in the COBOL Programming forums of the IBM Mainframe Experts group~\cite{ibmmainframes}, which we believe is representative of our target COBOL developer population. The forum has over 6000 posts; to reach a 95\% confidence level with a 10\% confidence interval for this population, we needed to annotate at least 100 posts. However, not all of the posts were defect-related. Thus, the authors ended up coding 400 posts, only to discard nearly 300 of them as non-defects. The authors independently classified the defect types of 20 defect-related posts. We captured our agreement with a 0.32 Fleiss' Kappa score, which can be interpreted as ``fair agreement''. After discussing and resolving the differences, the authors again classified the next 20 defect-related posts, reaching a 0.64 Fleiss' Kappa score indicating ``substantial agreement''. The authors then resolved their differences and split the rest of the posts to code them independently. In all, we coded 107 defect-related posts. \section{Threats to validity} This study suffers from several limitations that can impact the validity and generalizability of the results. \noindent\textbf{Internal validity.} Terminology employed in the mainframe environment differs from that leveraged by non-COBOL developers. Providing developers with imprecise or misleading wording may increase interpretation bias, which, in turn, leads to unreliable results. We developed two independent surveys with the same content to mitigate this threat, with terminology adjusted to the target group of respondents. Examples of such adjustments include using code granularity levels from the mainframe environment, such as Job, Section, or Paragraph, and modifying {\em External libraries/API} to {\em External libraries/COTS} (commercial off-the-shelf software). \noindent\textbf{External validity.} The survey measures developers' perception, which is strictly subjective and affected by interpretation bias; hence it may not reflect actual defect characteristics and defect location strategies. We mitigated this threat with the following steps. First, we conducted a pilot study with a COBOL and a non-COBOL developer to ensure the usage of correct and clear terminology. Second, we recruited a large number of participating developers to account for potential noise in the data. Finally, we conducted follow-up interviews to validate our observations and gather developers' rationale for them. Furthermore, we triangulated the non-COBOL defect-type results with an independent baseline~\cite{morrison2018} and the COBOL defect types by mining defects from online forums.
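\medskip \noindent\textbf{Reproducing the sampling and agreement figures.} The sample-size estimate and the Fleiss' Kappa scores quoted in Sec.~\ref{forumDefectClassification} can be reproduced with a short script. The following is a minimal sketch in Python; the finite-population correction shown is one standard way to arrive at the quoted figure, and the \texttt{ratings} array is toy data standing in for our actual annotations:

\begin{verbatim}
import math
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Required sample size: 95% confidence (z = 1.96), 10% margin of error,
# worst-case proportion p = 0.5, finite population of ~6000 forum posts.
z, e, p, N = 1.96, 0.10, 0.5, 6000
n0 = z**2 * p * (1 - p) / e**2      # infinite-population estimate (~96)
n = n0 / (1 + (n0 - 1) / N)         # finite-population correction (~95)
print(math.ceil(n0), math.ceil(n))  # consistent with "at least 100 posts"

# Inter-rater agreement: ratings[i, r] is the defect category assigned
# to post i by rater r (toy data: 4 posts, 3 raters).
ratings = np.array([[0, 0, 1],
                    [2, 2, 2],
                    [0, 1, 1],
                    [3, 3, 3]])
table, _ = aggregate_raters(ratings)  # posts-by-categories count matrix
print(fleiss_kappa(table, method="fleiss"))
\end{verbatim}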
{ "timestamp": "2021-07-16T02:04:01", "yymm": "2105", "arxiv_id": "2105.01830", "language": "en", "url": "https://arxiv.org/abs/2105.01830" }
\section{Introduction}\label{sec1} Topological phases of matter have been largely understood and fully classified in terms of symmetries (time-reversal, particle-hole, chiral and crystal symmetries) of the quantum systems under investigation~\cite{Chiu16, Kruthoff17, Zhang19, Vergniory19, Tang19}. Such phases can be characterized by their topological invariants and are robust against weak time-reversal invariant perturbations~\cite{Bansil16}. In ultracold atomic systems, the preparation of topologically non-trivial states via dynamically engineered perturbations has been experimentally realized and measured~\cite{Jotzu14, Aidelsburger15}. However, it has been demonstrated that in the thermodynamic limit, topological characteristics of the time-dependent wave function are preserved under local unitary evolution~\cite{Chen10, Foster13, Sacramento14, Alessio15, Sacramento16, Caio16}. As a direct consequence, this calls into question how dynamical topological transitions can take place in isolated quantum systems. To overcome this obstacle, approaches have been proposed that include dephasing~\cite{Ying16} or the presence of interactions~\cite{Kruckenhauser18, Michael19}. The latter may allow a system to act as its own bath and stabilize a dynamically induced topological phase, despite still being governed by a unitary evolution. In particular, for two-dimensional systems in equilibrium, the celebrated TKNN expression~\cite{Thouless82} directly relates the mathematical topological invariant, the Chern number $C$, with a physical observable, the Hall response $\sigma_{xy}$. Departing from equilibrium conditions, growing evidence shows that such a relation breaks down; that is, an induced Hall response may coexist with an invariant Chern number~\cite{Ying16,Wilson16,Michael19,Ge21,Peralta18}. Our starting point is an interacting model in which the formation of a local order parameter directly competes with the topological phase~\cite{Varney10, Varney11}, part of a large body of studies with similar scenarios~\cite{Rachel10, Yamaji11, Zheng11, Yu11, Griset12, Hohenadler11, Hohenadler12, Reuther12, Laubach14, Shao21}. These are to be contrasted with investigations wherein topological Mott insulators are argued to be directly induced by the interactions~\cite{Raghu08, Wen10, Weeks10, Ruegg11, Yang11, Budich12, Dauphin12, LeiWang12, Yoshida14}. The fundamental question we address is whether a non-trivial Hall response can be obtained by applying dynamical perturbations in the otherwise topologically trivial phase within the strong-coupling regime. For that, we explore the route of promoting engineered perturbations, in particular by subjecting the system to circularly polarized light. This has been shown in the past, for non-interacting schemes, to induce a non-trivial topological response~\cite{Oka09, Inoue10, Kitagawa11, Lindner11, Perez-Piskunow14, Usaj14}. Going beyond the usual Floquet picture of time-periodic drivings, our approach is connected to recent experiments that make use of a short-lived perturbation, i.e., a femtosecond pulse of circularly polarized light~\cite{McIver20}. Such a method makes it possible to fine-tune the amount of energy deposited in the system and, consequently, to resonantly explore some of its excited states~\cite{Shao19}. Unlike in experiments, since our model does not possess explicit dephasing mechanisms, the effect of the pulse on engineering non-trivial topological characteristics is long-lived, in spite of the perturbation being constrained in time.
In detail, we study the half-filled spinless Haldane model with nearest-neighbor repulsion in- and out-of-equilibrium, by means of exact diagonalization (ED). The phase transition of this model, from a Chern insulator (CI) towards a trivial charge-density-wave (CDW) Mott insulator with growing interactions, has been studied by Varney \emph{et al.}~\cite{Varney10, Varney11}. They further demonstrated that clusters with sixfold rotational symmetry and with the $K$ point included in the discrete momentum space display reduced finite-size effects. Hence, we adopt a 24-site cluster [see inset in Fig.~\ref{fig_1}(b)] that is amenable to ED and satisfies the above conditions. Our study in equilibrium confirms that once a CDW order is formed after a level crossing (indicating the first-order phase transition), the topological characteristics of the ground state (GS) are no longer present. We then notice that a low-energy excited state within the CDW regime, which is smoothly connected to the GS in the parent Chern insulating phase, preserves its finite Chern number and nonzero Hall response before merging into the continuum of the spectrum for even larger interactions. Based on the energy difference between this topological excited state and the GS, a circularly polarized pump is resonantly applied to stimulate the initial CDW ground state. We find that the overlap of the time-evolving wave function oscillates between the two states. The oscillation frequency depends mainly on the pump amplitude, so that the post-pump nonequilibrium state can be tuned by the laser strength. Finally, we contrast these results with the ones from a quench protocol, and we observe that such a dynamical topological phase cannot be recovered in that case. The presentation is organized as follows: In Sec.~\ref{sec_model}, we introduce the model, methods and all the relevant quantities. An analysis of our results, in- and out-of-equilibrium, is shown in Sec.~\ref{sec_Results}, and a conclusion, accompanied by a discussion of the results, is given in Sec.~\ref{conclusion}. \section{Model and method}\label{sec_model} The model under investigation is the half-filled spinless Haldane model with repulsive nearest-neighbor interactions: \begin{eqnarray} \hat H=&-&t_1\sum_{\langle i,j\rangle}(\hat c^{\dagger}_{i} \hat c^{\phantom{}}_{j}+\text{H.c.}) -t_2\sum_{\langle\langle i,j\rangle\rangle}(e^{{\rm i}\phi_{ij}}\hat c^{\dagger}_{i} \hat c^{\phantom{}}_{j}+\text{H.c.}) \nonumber \\ &+&V\sum_{\langle i,j\rangle}\hat n_{i}\hat n_{j}. \label{eq:H} \end{eqnarray} Here, $\hat c^{\dagger}_{i}$ ($\hat c^{\phantom{}}_{i}$) represents the fermionic creation (annihilation) operator at site $i$ and $\hat n_{i}$ is the corresponding number operator. $t_1$ ($t_2$) is the nearest-neighbor (next-nearest-neighbor) hopping constant and $V$ is the nearest-neighbor interaction. A phase $\phi_{ij}=\frac{\pi}{2}$ ($-\frac{\pi}{2}$) in the anticlockwise (clockwise) loop is added to the second hopping term, which breaks time-reversal symmetry, resulting in topologically non-trivial behavior in the system. In equilibrium, we calculate the topological invariants of the GS and a selection of low-lying eigenstates.
This is quantified by the Chern number, which is defined as an integration over the torus of twisted boundary phases~\cite{Niu85, Didier91}, \begin{align} C = \int \frac{d\phi_x d\phi_y}{2 \pi i} \left( \langle\partial_{\phi_x} \Psi^\ast | \partial_{\phi_y} \Psi\rangle - \langle{\partial_{\phi_y} \Psi^\ast | \partial_{\phi_x} \Psi\rangle} \right), \label{eq:C} \end{align} with $\ket{\Psi}$ the many-particle wave function, and $\phi_x$ ($\phi_y$) the twisted phase along the $x$ ($y$) direction. A discretized version of this continuous expression has been shown to converge to the correct result for sufficiently fine discretizations~\cite{Fukui05, Varney11}. In out-of-equilibrium conditions, we employ the time-dependent Lanczos technique~\cite{Prelovsek} to evolve the many-body wave function via \begin{equation} \ket{\psi(t+\delta{t})}=e^{-\mathrm{i}H(t)\delta t}\ket{\psi(t)} \simeq\sum_{l=1}^{M}{e^{-\mathrm{i}\epsilon_l\delta{t}}}\ket{\phi_l}\braket{\phi_l}{\psi(t)}, \label{eq:lanczos} \end{equation} where $\epsilon_l$ and $|\phi_l\rangle$ are the eigenvalues and eigenvectors of the $M$-dimensional Krylov subspace generated in the Lanczos process at each instant of time $t$. Here, we chose $\delta{t}=0.02$ and $M=30$ to ensure the convergence of the numerical evolution. In what follows, the system can be excited either by a quench, with a sudden change of $V$, or by a pump pulse. In the latter, the external electric field during photoirradiation can be included into the Hamiltonian via the Peierls substitution in the hopping terms: \begin{equation} \hat c^{\dagger}_{i}\hat c^{\phantom{}}_{j}+\text{H.c.}\rightarrow e^{\mathrm{i}\textbf{A}(t)\cdot(\textbf{R}_j-\textbf{R}_i)}\hat c^{\dagger}_{i}\hat c^{\phantom{}}_{j}+\text{H.c.}, \label{eq:Peierls} \end{equation} where \begin{equation} \textbf{A}(t)=A_0e^{-\left(t-t_0\right)^2/2t_d^2} (\cos\left[\omega_0\left(t-t_0\right)\right],\sin\left[\omega_0\left(t-t_0\right)\right]), \label{eq:vpotent} \end{equation} is the vector potential of a circularly polarized pump pulse. Its temporal envelope is centered at $t_0$ and taken to be Gaussian. The parameter $t_d$ controls its width, and $\omega_0$ is the central frequency. Additionally, we define $\Delta t=t-t_0$ as the time difference between the probing and pumping (or quench) instants. To characterize the dynamics and the possible non-trivial Hall response, we calculate the following relevant quantities:\\ \indent i) $|\langle\psi(\Delta t)|\psi_n\rangle|$, the overlap between the time-dependent wave function and the $n$-th eigenstate of the equilibrium Hamiltonian [Eq.~\eqref{eq:H}] in the \textbf{k}=(0,0) quasi-momentum subspace. Since the ground state is located in this momentum sector for our model parameters, and both pump and quench scenarios do not break translational invariance, all the dynamics we explore is constrained to this subspace.\\ \indent ii) $S_{\text{CDW}} = \frac{1}{N}\sum\limits_{i,j}\langle (\hat n_{i}^{A}-\hat n_{i}^{B})(\hat n_{j}^{A}-\hat n_{j}^{B})\rangle$, the structure factor of the CDW, where $N=12$ is the number of unit cells and $\hat n_{i}^{\alpha}$ denotes the number operator at site $i$ of sublattice $\alpha=A$ or $B$.\\ \indent iii) $\tilde{\sigma}_{xy}(t_{\text{H}})$, the dynamical Hall response.
By applying a weak electric field $E_x(t_{\text{H}})=F_0(1-e^{-t_{\text{H}}/\tau})$ along the $x$ direction, with $F_0=10^{-4}$ and $\tau=5$, we measure the induced current in the $y$ direction, $J_y(t_{\text{H}})$, so as to obtain the dynamical Hall response $\tilde\sigma_{xy}(t_{\text{H}})=\frac{J_y(t_{\text{H}})}{F_0\cdot A_s}$, where $A_s$ is the total area of the cluster. In this process, $F_0$ is sufficiently small to mitigate its influence on the system's dynamics. Note that in what follows $E_x(t_{\text{H}})$ will be applied either to the eigenstates of the Hamiltonian~\eqref{eq:H}, such as the GS and second excited state, or to the time-dependent wave function $\psi(\Delta t)$ in out-of-equilibrium conditions. In the latter, we choose the reference probe time $t_{\text{H}}=0$ located at $\Delta t=40$ after the pump, when discussing the results of Fig.~\ref{fig_4}. The electric field $E_x(t_{\text{H}})$ is introduced smoothly from $t_{\text{H}}=0$ by adding a vector potential \begin{eqnarray} A_{x}^{\rm Hall}(t_{\text{H}})=F_0(t_{\text{H}}+\tau\cdot e^{-t_{\text{H}}/\tau}-\tau) \label{eq:Hall} \end{eqnarray} to Eq.~\eqref{eq:Peierls}. In summary, we employ three steps to obtain the Hall response after the pump: \\ \indent 1. Apply a pump expressed by Eq.~\eqref{eq:vpotent}, calculating the current $J'_y(t_{\text{H}})$ from $\Delta t=40$; \\ \indent 2. Apply both the pump and the weak electric field (starting from $\Delta t=40$) described by Eq.~\eqref{eq:vpotent} and Eq.~\eqref{eq:Hall}, respectively, to calculate $J^{''}_y(t_{\text{H}})$; \\ \indent 3. Obtain the net current $J_y(t_{\text{H}})=J^{''}_y(t_{\text{H}})-J^{'}_y(t_{\text{H}})$ so that $\tilde\sigma_{xy}(t_{\text{H}})=\frac{J_y(t_{\text{H}})}{F_0\cdot A_s}$ represents the nonequilibrium Hall response. Alternatively, we can use the direct computation of the Kubo formula to benchmark $\tilde{\sigma}_{xy}(t_{\text{H}})$ in equilibrium: \begin{equation} \sigma_{xy}=\frac{{\rm i} \hbar}{A_s} \sum_{n \neq 0} \frac{\langle \psi_0|\hat J_y|\psi_n\rangle \langle \psi_n|\hat J_x|\psi_0\rangle-\langle \psi_0|\hat J_x|\psi_n\rangle \langle \psi_n|\hat J_y|\psi_0\rangle}{(E_n-E_0)^2}, \label{eq:lin_resp} \end{equation} where $(E_\alpha,\psi_\alpha)$ are the eigenpairs of the Hamiltonian~\eqref{eq:H}, and $\hat J_\mu$ is the current operator along the $\mu$ direction. In what follows, we set the parameter $t_2=0.2$, using units where $e=\hbar=1$ and the lattice spacing $a_0=1$. In these units, $t_1$ and $t_1^{-1}$ are set to be the units of energy and time, respectively. \section{Results and Analysis}\label{sec_Results} We start by calculating the low-lying energy spectrum versus the interaction strength $V$ for the Hamiltonian~\eqref{eq:H}, as shown in Fig.~\ref{fig_1}(a). A level crossing, associated with a first-order phase transition between the CI and CDW phases, is located at $V\simeq1.9$. The Chern number and CDW structure factor of the ground state also suddenly change at this point, see Fig.~\ref{fig_1}(b) -- these results are consistent with those in Ref.~[\onlinecite{Varney11}]. A close inspection of Fig.~\ref{fig_1}(a) reveals that the original CI ground state is smoothly connected to the \emph{second} excited state $|\psi_2\rangle$ when $V>1.9$ (green line), whereas the CDW ground state is nearly degenerate with the first excited state (dashed-blue line) in our finite lattice.
We compute the Chern number of the $|\psi_2\rangle$ state and find that for $V\in[2,4]$ it always has $C=1$ [denoted by the black square markers in Fig.~\ref{fig_1}(a)]. The Chern numbers of other excited states can be unstable: they are either non-integer or take different values for different $V$. This is attributed to the method used to calculate the Chern number, which requires the manifold of eigenenergies $E_\alpha(\phi_x,\phi_y)$ of the state $|\psi_\alpha\rangle$ on the torus of twisted phases to remain gapped~\cite{Fukui05,Varney11}, where $\phi_x$ and $\phi_y$ represent the twisted phases in the two directions. Fortunately, the second excited state is always separated from the bulk of the spectrum in a finite range of interactions within the CDW phase, as one would expect from states that just underwent first-order phase transitions~\cite{Chen10}. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{Figure_1.pdf} \caption{Six lowest-lying energy levels (a), the ground-state Chern number and CDW structure factor (b) \emph{vs.} $V$ for the Hamiltonian \eqref{eq:H} in the $\textbf{k}=(0,0)$ subspace. The dynamical Hall response, $\tilde{\sigma}_{xy}(t_{\text{H}})$, of the GS with $V=0.0$, $1.0$, $3.0$, $4.0$ (c) and second excited state with $V=2.5$, $3.0$, $3.5$, $4.0$ (d), respectively. In (c), the equilibrium Hall response ${\sigma}_{xy}$ obtained from the Kubo formula, Eq.~\eqref{eq:lin_resp} (using a total of 2,000 low-lying eigenstates), is shown as horizontal dash-dotted lines. The cartoon in (b) depicts the 24-site cluster.} \label{fig_1} \end{figure} To corroborate the Chern number results, we examine the dynamical Hall response, $\tilde{\sigma}_{xy}(t_{\text{H}})$, for the GS and the second excited state. For that, as mentioned in Sec.~\ref{sec_model}, a weak electric field along the $x$ direction is applied at time $t_{\text{H}}=0$, and we subsequently quantify the induced current in the $y$ direction. In Fig.~\ref{fig_1}(c), we show $\tilde{\sigma}_{xy}(t_{\text{H}})$ computed for the GS in the CI ($V=0$ and $1$) and CDW ($V=3$ and $4$) phases, respectively. We observe that the ground states with $C=1$ ($C=0$) have a nonzero (zero) Hall response, whose long-time values oscillate around the equilibrium ones obtained from the Kubo formula [Eq.~\eqref{eq:lin_resp}], which are denoted by the dash-dotted lines. This indicates that the Hall response can be used to characterize the topological characteristics of the many-body wave function even out of equilibrium, where an exact definition of the Chern number is elusive. We now proceed with the same protocol to compute $\tilde{\sigma}_{xy}(t_{\text{H}})$, but using the second excited state with $V=2.5$, $3$, $3.5$, $4$, as shown in Fig.~\ref{fig_1}(d). Although the Hall response decreases deep in the CDW phase, it is always finite, in direct agreement with the analysis of the corresponding Chern number of $|\psi_2\rangle$ in this regime. An important remark is that a quantization of the Hall response in these `equilibrium' settings can be obtained only for larger lattice sizes, similar to what occurs in non-interacting systems. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{Figure_2.pdf} \caption{Overlaps between several eigenstates $|\psi_n\rangle$ and the time-dependent wave function $|\psi(\Delta t)\rangle$ under a pump with $A_0=0.1$ (a), $A_0=0.2$ (b), $A_0=0.3$ (c) and $A_0=0.4$ (d), respectively.
Pump parameters are $\omega_0=1.63$ and $t_d=4.0$; the interaction strength is set at $V=2.5$, within the CDW phase for the GS.} \label{fig_2} \end{figure} \subsubsection{Pump dynamics} Now that we have characterized the equilibrium scenario and used an out-of-equilibrium scheme to understand a possible dynamical Hall response, we turn our focus to the possibility of generating nontrivial topological properties from a topologically trivial state, considering two protocols: pump and quench. In the former, we emulate the ultrafast laser pulse by using the vector potential in Eqs.~\eqref{eq:Peierls} and \eqref{eq:vpotent}. By setting $V=2.5$, we investigate the overlaps between the time-dependent wave function $|\psi(\Delta t)\rangle$ and many eigenstates $|\psi_n\rangle$ of the equilibrium Hamiltonian, as shown in Fig.~\ref{fig_2}. The initial state $|\psi(\Delta t=-\infty)\rangle$ is the CDW ground state at $V=2.5$; pumps with $\omega_0=1.63$ (that is, the pulse is made resonant with the energy difference $E_2-E_0\simeq1.63$), $t_d=4.0$, and four different values of $A_0$ are applied. The parameter settings are chosen after careful tuning, which will become clearer when we describe Fig.~\ref{fig_3}. In Figs.~\ref{fig_2}(a) and (c), with $A_0=0.1$ and $A_0=0.3$, respectively, we find that the overlap of $|\psi(\Delta t)\rangle$ with $|\psi_2\rangle$ at long times after the pump can be larger than $0.98$, while the overlap with the initial CDW state is smaller than $0.1$, indicating a radical switch of the contributions of the two states to the time-dependent wave function. In contrast, the second excited state is mostly not excited in the cases of $A_0=0.2$ and $A_0=0.4$; the reason lies in the oscillation of the overlaps during the pulse, whose frequency increases with growing $A_0$ (similar results with $V=4$ are shown in Appendix~\ref{appendix1}). In addition, the evolution of the CDW structure factor for different $A_0$'s and the associated discussion can be found in Appendix~\ref{appendix2}. We conjecture that this oscillating behavior of the overlaps can be well explained via an analogy with a dynamical picture of the two-level Rabi model~\cite{Lu_2013}. Consider a two-level quantum system with level spacing $2\epsilon$, coupled to a single-mode classical external field; the time-dependent Hamiltonian can be written as \begin{eqnarray} H_R(t)=\epsilon\sigma_z+{\sf g}(t)\sigma_x, \label{eq:Rabi} \end{eqnarray} where $\sigma_a$ are Pauli matrices and ${\sf g}(t)=2g\cos(\omega t)$, which describes the coupling of the two-level system to an external field. The parameter $g$ here represents the strength of the external field and can be compared with the amplitude of the pump, i.e., $A_0$ in our case. When $\omega\approx2\epsilon$, which is the energy difference of the two levels, the rotating-wave approximation can be applied. The Rabi frequency, which characterizes the system's oscillation between the two levels, can then be expressed as \begin{eqnarray} \Omega_R=[(\epsilon-\omega/2)^2+g^2]^{1/2}. \label{eq:RabiFreq} \end{eqnarray} Since $\Omega_R$ increases with $g$, this directly explains why a larger $A_0$ leads to a higher oscillation frequency of the overlaps in Fig.~\ref{fig_2}. Since the pulse is short-lived, the long-time overlaps can be maximized at either the original GS or the second excited state. Moreover, a pulse with higher $A_0$ provides sufficient energy to allow the participation of other eigenstates, and the two-level picture may eventually break down.
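This two-level picture is straightforward to check numerically. The following is a minimal sketch (Python with NumPy/SciPy) that integrates the Schr\"odinger equation for $H_R(t)$ and compares the observed population transfer with the rotating-wave prediction, whose first maximum occurs at $t=\pi/(2\Omega_R)$; the values of $\epsilon$ and $\omega$ below are illustrative stand-ins chosen to mirror our resonance condition $\hbar\omega_0\simeq E_2-E_0$, not outputs of the ED calculation:

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Two-level Rabi model: H(t) = eps*sz + 2*g*cos(omega*t)*sx, with hbar = 1.
eps, omega = 0.815, 1.63            # resonant drive: omega = 2*eps
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def rhs(t, psi, g):
    """Right-hand side of i d(psi)/dt = H(t) psi."""
    H = eps * sz + 2.0 * g * np.cos(omega * t) * sx
    return -1j * (H @ psi)

for g in (0.05, 0.1, 0.2):          # g plays the role of the pump amplitude A_0
    Omega_R = np.hypot(eps - omega / 2.0, g)   # generalized Rabi frequency
    T = np.pi / Omega_R                        # one population-transfer period
    sol = solve_ivp(rhs, (0.0, T), np.array([0.0, 1.0], complex), args=(g,),
                    t_eval=np.linspace(0.0, T, 400), rtol=1e-9, atol=1e-11)
    p_up = np.abs(sol.y[0])**2                 # upper-level population
    print(f"g={g}: RWA first maximum at t={np.pi/(2*Omega_R):5.1f}, "
          f"observed at t={sol.t[np.argmax(p_up)]:5.1f}")
\end{verbatim}

At resonance, $\Omega_R=g$, so doubling the coupling halves the transfer period; this mirrors the faster overlap oscillations observed for larger $A_0$ in Fig.~\ref{fig_2}.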
To systematically investigate the influence of the various pump parameters on these results, we show in Figs.~\ref{fig_3}(a) and (b) contour plots of the injected energy $\Delta E = E(\Delta t\to \infty) - E_0$ and of the overlap between $|\psi(\Delta t=10 t_d)\rangle$ and the eigenstate $|\psi_2\rangle$, as a function of $A_0$ and $t_d$. As before, $\omega_0$ is set to $1.63$, and we find that $A_0=0.1$ and $t_d=4.0$ is the optimal combination to increase the overlap with $|\psi_2\rangle$, which was detailed in Fig.~\ref{fig_2}(a). We plot the same quantities as a function of $A_0$ and $\omega_0$ in Figs.~\ref{fig_3}(c) and (d). The results in Fig.~\ref{fig_3}(d) confirm that $\omega_0\simeq1.63$ is the resonant frequency to excite $|\psi_2\rangle$. In addition, we observe that the overlaps have a staggered dependence on $A_0$, together with a staggered injected energy $\Delta E$, especially when $\omega_0\approx1.6$ and $t_d=4$. That is, the photoinduced increase of the overlap with eigenstate $|\psi_2\rangle$ strongly depends on $\Delta E$, and the parameter settings that predominantly excite $|\psi_2\rangle$ always occur with injected energy $\Delta E\approx E_2 - E_0 \simeq 1.6$, which is just the photon energy $\hbar\omega_0$. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{Figure_3.pdf} \caption{Contour plots of the injected energy $\Delta E$ (a) and the overlap between $|\psi(\Delta t=10 t_d)\rangle$ and the eigenstate $|\psi_2\rangle$ (b) as a function of $A_0$ and $t_d$ with $\omega_0=1.63$. Contour plots of the injected energy $\Delta E$ (c) and the overlap $\langle\psi(\Delta t=10 t_d)|\psi_2\rangle$ (d) as a function of $A_0$ and $\omega_0$ with $t_d=4.0$. The color bars are adjusted in order to highlight the best conditions to resonantly excite the state $|\psi_2\rangle$, via the energy difference and the corresponding overlap with this state. In all cases, we fix the interaction at $V=2.5$. } \label{fig_3} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{Figure_4.pdf} \caption{The dynamical Hall response $\tilde{\sigma}_{xy}(t_{\text{H}})$ of the second excited state $|\psi_2\rangle$ with $V=2.5$ (blue dashed line) and of the time-dependent wave function after the pump (red dashed lines) with $A_0=0.1$ (a) and $A_0=0.3$ (b). Red solid lines show the corresponding time averages of $\tilde{\sigma}_{xy}(t_{\text{H}})$. In (c) and (d), we show overlaps between the time-dependent wave function and the second excited state, $\langle\psi(t_{\rm H})|\psi_2\rangle$, after the weak electric field is applied to $|\psi_2\rangle$ (blue dashed lines) and to the time-dependent wave function after the pump (green dashed lines) with $A_0=0.1$ and $A_0=0.3$, respectively. Green solid lines show $\langle\psi(t_{\rm H})|\psi_2\rangle$ just after the pump without the weak electric field applied. As explained in the text, the reference probe time $t_{\text{H}}=0$ is located at $\Delta t=40$. Parameters used: $V=2.5$, $\omega_0=1.63$ and $t_d=4.0$.} \label{fig_4} \end{figure} To certify that a photoinduced topological phase transition takes place, we calculate the dynamical Hall response after a pump with $A_0=0.1$ (probing from $\Delta t=40$, i.e., turning on the probe electric field after a substantial delay from the central pump time), as shown in Fig.~\ref{fig_4}(a) by a red dashed line.
For comparison, we also plot the `equilibrium' Hall response for the state $|\psi_2\rangle$ [blue dashed line, similar to the data obtained in Fig.~\ref{fig_1}(d)]; we find that the two are almost identical for $t_{\rm H} < 30$. At long times, however, the continuous application of the weak electric field leads the Hall response to display large oscillations, whose time-averaged value (red solid line) nevertheless remains close to the Hall response associated with $|\psi_2\rangle$. This points to the conclusion that the dynamical transition from a topologically trivial to a non-trivial state can be obtained with a well-tuned pump. We now explain the large oscillations of the Hall response for the post-pump out-of-equilibrium state, which are not observed for the equilibrium states in Fig.~\ref{fig_1}(c). The effect of the weak electric field on the overlaps between the time-dependent wave functions and the second excited state, $\langle\psi(t_{\rm H})|\psi_2\rangle$, is shown in Fig.~\ref{fig_4}(c) for $A_0 = 0.1$. If we just apply the weak field to $|\psi_2\rangle$, its influence on the overlap is rather small (see blue dashed line). However, if we apply the weak field to the post-pump time-dependent wave function (green dashed line), there is an apparent deviation and oscillation compared with the case without the weak electric field applied (green solid line). The reason is that the time-dependent wave function never perfectly turns into $|\psi_2\rangle$ (the maximum overlap is 0.996), as expected from a unitary evolution that prevents $\langle \psi(t+\Delta t)|\psi(t)\rangle=0$~\cite{Ge21}. Even in the perfectly resonant case, $|\psi(\Delta t\to\infty)\rangle$ is never an eigenstate of the Hamiltonian \eqref{eq:H}, in spite of abundant evidence showing the emergence of topologically non-trivial properties. Moreover, the oscillation frequencies of the Hall response and of the overlap are identical. We then show the Hall response and overlaps with $A_0=0.3$ in Figs.~\ref{fig_4}(b) and (d), from which one can see that when the post-pump state is further away from $|\psi_2\rangle$ (maximum overlap 0.987), it becomes hard to obtain a correct dynamical Hall response for $t_{\rm H}>10$, leading to an unstable time-averaged result. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{Figure_5.pdf} \caption{Overlaps between several eigenstates $|\psi_n\rangle$ and the time-dependent wave function $|\psi(\Delta t)\rangle$ after a quench from $V=3$ to $V=1$ (a) as well as from $V=2.5$ to $V=1.5$ (b). (c) and (d) show the Hall responses (red dashed lines) and their time averages (red solid lines) of the nonequilibrium states after the corresponding quenches shown in (a) and (b), respectively.} \label{fig_5} \end{figure} \subsubsection{Quench dynamics} Finally, we contrast these results with a more commonly used protocol for studying non-equilibrium topological transitions, that is, a quench from the trivial to the non-trivial regime of parameters. As the `knob' in our model that destroys the topological characteristics of the ground state is the interaction, we study a quench scenario from large interactions ($V>V_c$) to smaller ones ($V<V_c$). In Figs.~\ref{fig_5}(a) and (b), after the quenches $V=3.0\rightarrow1.0$ and $V=2.5\rightarrow1.5$, we calculate the overlaps between the time-dependent wave function and several eigenstates of the equilibrium Hamiltonian before the quench. Although the overlap with the GS is substantially decreased in both cases, $|\psi_2\rangle$ is not excited.
Instead, the overlap with the fourth excited state, whose Chern number cannot be identified (owing to the small gaps to other states in the spectrum), increases considerably. In direct contrast to the pump case, the nonequilibrium state after the quench is largely unstable, at least on short time scales. This can be inferred from Figs.~\ref{fig_5}(c) and (d), where we show the Hall responses after the quench (red dashed lines) and their time-averaged results (red solid lines). We find no apparent features to characterize a potential topological phase transition. \section{conclusions and discussion}\label{conclusion} We investigated the interacting Haldane model that hosts a topological (Chern insulating) phase at small interaction strengths. In the strong-coupling regime with a CDW ground state, we studied the topological features (including Chern number and Hall response) of an excited state, often neglected in previous investigations. For the first time, we drive this interacting system out of equilibrium via the application of an external pump, instead of the more common quench scenarios, and demonstrate that resonant targeting of topologically non-trivial excited states is possible, as long as they are separated from the continuum of the spectrum. This situation is precisely satisfied when a first-order phase transition occurs~\cite{Varney10,Varney11}, even in finite lattices. In contrast, if one focuses on quench dynamics from topologically trivial to non-trivial cases, such an excited state cannot be stimulated, and no evident Hall response is found for the model under study. To better understand how the dynamical topological transition is possible, we compare the time dependence of the overlaps and of the system's instantaneous energy under resonant conditions, using either the original ground state $|\psi_0\rangle$ or the corresponding second excited eigenstate $|\psi_2\rangle$ as the initial state. By observing the overlaps with $|\psi_2\rangle$ [Fig.~\ref{fig_concl}(a)] and the total energy [Fig.~\ref{fig_concl}(b)] during the pump process, it becomes clear that a dynamical level crossing occurs for the two states, where they switch roles. This is at the core of our observation of an induced Hall response. Given the first-order character of the topological transition in equilibrium, such scenarios could be achieved in other models with engineered perturbations, attesting to the generality of our results. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{Figure_conclusion.pdf} \caption{Nonequilibrium dynamics after the pump under resonant conditions when starting from the ground state ($n=0$) and the second excited state ($n=2$) of the equilibrium Hamiltonian~\eqref{eq:H}. In (a) and (b), the overlaps with the $|\psi_2\rangle$ state and the instantaneous energies are shown for both initializations; a characteristic dynamical level crossing occurs around the pump's central time. Here the pump parameters are the ones that optimize the targeting of the second excited state: $A_0=0.1$, $\omega_0 = 1.63$, and $t_d=4.0$, at interaction strength $V=2.5$.} \label{fig_concl} \end{figure} \begin{acknowledgments} The authors acknowledge insightful discussions with H.~Lu and H.-Q.~Lin. C.~S.~acknowledges support from the National Natural Science Foundation of China (NSFC; Grant No.~12104229). P.~D.~S.~acknowledges the support from FCT through the Grant UID/CTM/04540/2019. R.~M.~acknowledges support from NSFC (Grants No.
NSAF-U1930402, No.~11974039, No.~12050410263, and No.~12111530010). Computations were performed on the Tianhe-2JK at the Beijing Computational Science Research Center. \end{acknowledgments}
{ "timestamp": "2021-09-07T02:05:44", "yymm": "2105", "arxiv_id": "2105.01845", "language": "en", "url": "https://arxiv.org/abs/2105.01845" }
\section{Introduction} Recursion schemes are faithful and algorithmically manageable abstractions of the control flow of programs involving higher-order functions~\cite{Kobayashi-jacm}. Such functions are nowadays widely used not only in functional programming languages such as Haskell and the OCaml family, but also in mainstream languages such as Java, JavaScript, Python, and C++. The formalism of recursion schemes is equivalent, via direct translations, to the simply-typed $\lambda Y$-calculus~\cite{lambdaY}. Collapsible pushdown systems~\cite{collapsible-translation} and ordered tree-pushdown systems~\cite{tree-pushdown} are other equivalent formalisms. Recursion schemes also subsume other models such as indexed grammars~\cite{indexed} and ordered multi-pushdown automata~\cite{multi-pushdown}. The most celebrated algorithmic result in the analysis of recursion schemes is the decidability of the \emph{model-checking problem} against regular properties of trees: given a recursion scheme $\mathcal{G}$ and a parity tree automaton $\mathcal{A}$, one can decide whether the tree generated by $\mathcal{G}$ is accepted by $\mathcal{A}$~\cite{Ong-schemes}. This fundamental result has been reproved several times, using collapsible higher-order pushdown automata~\cite{collapsible-orig}, intersection types~\cite{KobayashiOng-types}, and Krivine machines~\cite{Krivine}, and it has been extended in diverse directions such as global model checking~\cite{global}, logical reflection~\cite{reflection}, effective selection~\cite{effective-selection}, and a transfer theorem via models of lambda-calculus~\cite{models}. The model-checking problem for recursion schemes of order $n$ is complete for $n$-fold exponential time~\cite{Ong-schemes}. Despite this hardness result, the model-checking problem can be solved efficiently on multiple nontrivial examples, thanks to the development of several recursion-scheme model checkers~\cite{practical-apta, travmc2, horsats} (including some model checkers that work only for automata models weaker than parity tree automata~\cite{trecs, gtrecs, horsat, travmc, preface}). In this paper, we give a new, simple algorithm solving the above-mentioned model-checking problem for recursion schemes. The algorithm amounts to a procedure that transforms a recursion scheme of order $n$ into a recursion scheme of order $n-1$, preserving acceptance, and increasing the size only exponentially. After repeating the procedure $n$ times, we obtain a recursion scheme of order $0$, for which acceptance boils down to winning a finite parity game. Since the size grows exponentially at each step, we reach the optimal overall complexity of $n$-fold exponential time. Viewed in more detail, the complexity is even better: the size growth is exponential only in the arity of types appearing in the recursion scheme and in the size of the parity automaton; if these two parameters are bounded by a constant, the transformation is linear in the size of the recursion scheme. Since solving a finite parity game is \textsf{FPT} in the number of priorities~\cite{Calude}, our algorithm for the model-checking problem is \textsf{FPT} in the two parameters.% \footnote{This is not new.
Actually, most previous algorithms reduce the model-checking problem to the problem of solving a parity game whose size is polynomial (for a polynomial of a fixed degree, for some algorithms just linear) in the size of the input, assuming that the arity of types appearing in the recursion scheme and the size of the parity automaton are fixed. Thus, only the method introduced by us is new, not the complexity results.} The main difference between our algorithm and all the others is that we solve the problem step by step, repeatedly reducing the order by one, while most previous algorithms work ``in one step'' and are consequently more complicated. The only algorithms that reduce the order by one were those using collapsible pushdown automata~\cite{collapsible-orig,reflection,effective-selection}. Notice, however, that these algorithms: first, are very technical; second, are contained only in unpublished appendices and in an arXiv paper~\cite{collapsible-arxiv}; third, if we want to use them for recursion schemes, it is necessary to employ a (nontrivial) translation from recursion schemes to collapsible pushdown automata~\cite{collapsible-translation,lambdaY,effective-selection}. A reduction of order was also possible for a subclass of recursion schemes, called \emph{safe} recursion schemes~\cite{easy-trees}, but it was not known how to extend it to all recursion schemes. The transformation presented in this paper generalizes a previous transformation by the author~\cite{trans-nonempty}, which worked for reachability automata in place of parity automata. It also has a close relationship with a transformation given by Asada and Kobayashi~\cite{word2tree}. \section{Preliminaries}\label{sec:prelim} For a number $k\in\mathbb{N}$ we write $\scope{k}$ for $\set{1,\dots,k}$. For any relation $\longrightarrow$ we write $\longrightarrow^*$ for the reflexive transitive closure of $\longrightarrow$. For a function $Z$ we write $Z\mapch{z\mapsto r}$ to denote the function that maps $z$ to $r$ while all other elements of the domain of $Z$ are mapped as in $Z$. Likewise, we write $Z\mapch{z_i\mapsto r_i\mid i\in I}$ to denote the function that maps $z_i$ to $r_i$ for all $i\in I$, while all other elements of the domain of $Z$ are mapped as in $Z$. We also use this notation without the ``$Z$'' part, for a function $Z$ with empty domain. \subparagraph{Recursion schemes.} The set of \emph{(simple) types} is constructed from a unique ground type $\mathsf{o}$ using a binary operation $\mathbin{\to}$; namely $\mathsf{o}$ is a type, and if $\alpha$ and $\beta$ are types, so is $\alpha\mathbin{\to}\beta$. By convention, $\mathbin{\to}$ associates to the right, that is, $\alpha\mathbin{\to}\beta\mathbin{\to}\gamma$ is understood as $\alpha\mathbin{\to}(\beta\mathbin{\to}\gamma)$. We often abbreviate $\underbrace{\alpha\mathbin{\to}\dots\mathbin{\to}\alpha}_\ell\to\beta$ as $\alpha^\ell\mathbin{\to}\beta$. The \emph{order} of a type $\alpha$, denoted $\mathsf{ord}(\alpha)$, is defined by induction: $\mathsf{ord}(\alpha_1\mathbin{\to}\dots\mathbin{\to}\alpha_k\mathbin{\to}\mathsf{o})=\max(\set{0}\cup\setof{\mathsf{ord}(\alpha_i)+1}{i\in\scope{k}})$; for example $\mathsf{ord}(\mathsf{o})=0$, $\mathsf{ord}(\mathsf{o}\mathbin{\to}\mathsf{o}\mathbin{\to}\mathsf{o})=1$, and $\mathsf{ord}((\mathsf{o}\mathbin{\to}\mathsf{o})\mathbin{\to}\mathsf{o})=2$.
Having a set of typed nonterminals $\mathcal{X}$, a set of typed variables $\mathcal{Y}$, and a set of symbols $\Sigma$, \emph{terms} over $(\mathcal{X},\mathcal{Y},\Sigma)$ are defined by induction: \begin{itemize} \item nonterminal: every nonterminal $X\in\mathcal{X}$ of type $\alpha$ is a term of type $\alpha$; \item variable: every variable $y\in\mathcal{Y}$ of type $\alpha$ is a term of type $\alpha$; \item node constructor: if $K_1,\dots,K_k$ are terms of type $\mathsf{o}$ and $a\in\Sigma$, then $\symb{a,K_1,\dots,K_k}$ is a term of type $\mathsf{o}$; \item application: if $K$ is a term of type $\alpha\mathbin{\to}\beta$, and $L$ is a term of type $\alpha$, then $K\,L$ is a term of type $\beta$. \end{itemize} The type of a term $K$ is denoted $\mathsf{tp}(K)$. The order of a term $K$, written $\mathsf{ord}(K)$, is defined as the order of its type. A \emph{(higher-order) recursion scheme} is a tuple $\mathcal{G}=(\mathcal{X},X_0,\Sigma,\mathcal{R})$, where $\mathcal{X}$ is a finite set of typed nonterminals, and $X_0\in\mathcal{X}$ is a \emph{starting nonterminal} of type $\mathsf{o}$, and $\Sigma$ is a finite set of symbols (called an \emph{alphabet}), and $\mathcal{R}$ is a function assigning to every nonterminal $X\in\mathcal{X}$ a \emph{rule} of the form $X\,y_1\,\dots\,y_k\to R$, where $\mathsf{tp}(X)=(\mathsf{tp}(y_1)\mathbin{\to}\dots\mathbin{\to}\mathsf{tp}(y_k)\mathbin{\to}\mathsf{o})$, and $R$ is a term of type $\mathsf{o}$ over $(\mathcal{X},\set{y_1,\dots,y_k},\Sigma)$. The order of a recursion scheme, $\mathsf{ord}(\mathcal{G})$, is defined as the maximum of orders of its nonterminals. Having a recursion scheme $\mathcal{G}=(\mathcal{X},X_0,\Sigma,\mathcal{R})$, for every set of variables $\mathcal{Y}$ we define a \emph{reduction relation} $\longrightarrow_\mathcal{G}$ between terms over $(\mathcal{X},\mathcal{Y},\Sigma)$ as the least relation such that \begin{itemize} \item $X\,K_1\,\dots\,K_k\longrightarrow_\mathcal{G} R\subst{K_1/y_1,\dots,K_k/y_k}$ if the rule for $X$ is $X\,y_1\,\dots\,y_k\to R$, where $R\subst{K_1/y_1,\allowbreak\dots,K_k/y_k}$ denotes the term obtained from $R$ by substituting $K_i$ for $y_i$ for all $i\in\scope{k}$. \end{itemize} A (potentially infinite) \emph{tree} over an alphabet $\Sigma$ is defined by coinduction: every tree over $\Sigma$ is of the form $\symb{a,T_1,\dots,T_k}$, where $a\in\Sigma$ and $T_1,\dots,T_k$ are again trees over $\Sigma$ (for an introduction to coinductive definitions and proofs see, e.g., Czajka~\cite{Czajka}). We employ the usual notions of nodes, children, branches, etc. Formally, we can define nodes as sequences of natural numbers; then for a tree $T=\symb{a,T_1,\dots,T_k}$, the empty sequence $()$ is a node of $T$ labeled by $a$, and any longer sequence $(i_1,i_2,\dots,i_n)$ is a node of $T$ labeled by $b$ if $i_1\in\scope{k}$ and $(i_2,\dots,i_n)$ is a node of $T_{i_1}$ labeled by $b$. For a tree $T$ and its node $v$, we write $T\!{\restriction}\!_v$ for the subtree of $T$ starting at $v$. 
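To illustrate the reduction relation on a toy example (introduced here only for illustration), consider a nonterminal $\mathsf{A}$ of type $\mathsf{o}\mathbin{\to}\mathsf{o}$ with the rule $\mathsf{A}\,z\to\symb{a,z,\mathsf{A}\,\symb{b}}$. Then $\mathsf{A}\,\symb{c}\longrightarrow_\mathcal{G}\symb{a,\symb{c},\mathsf{A}\,\symb{b}}$. Note that the reduction relation, as defined, rewrites only a full application of a nonterminal standing at the root of a term; redexes located deeper, like $\mathsf{A}\,\symb{b}$ above, are unfolded by the tree-generation procedure defined next.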
Again by coinduction, we define the tree \emph{generated} by a recursion scheme $\mathcal{G}=(\mathcal{X},X_0,\Sigma,\allowbreak\mathcal{R})$ from a term $M$ of type $\mathsf{o}$ (over $(\mathcal{X},\emptyset,\Sigma)$), denoted $\mathsf{BT}_\mathcal{G}(M)$: \begin{itemize} \item if $M\longrightarrow_\mathcal{G}^*\symb{a,K_1,\dots,K_k}$, then $\mathsf{BT}_\mathcal{G}(M)=\symb{a,\mathsf{BT}_\mathcal{G}(K_1),\dots,\mathsf{BT}_\mathcal{G}(K_k)}$; \item otherwise, $\mathsf{BT}_\mathcal{G}(M)=\symb{\omega}$ for a special symbol $\omega\not\in\Sigma$. \end{itemize} The tree generated by $\mathcal{G}$ (without mentioning a term), denoted $\mathsf{BT}(\mathcal{G})$, is defined as $\mathsf{BT}_\mathcal{G}(X_0)$. \subparagraph{Parity games.} As already said, in the model-checking problem we are given a recursion scheme $\mathcal{G}$ and an alternating parity automaton $\mathcal{A}$, and we are asked whether the tree $T_\mathcal{G}$ generated by $\mathcal{G}$ is accepted by $\mathcal{A}$. One can, however, create a product of $\mathcal{G}$ and $\mathcal{A}$, which is a recursion scheme $\mathcal{G}_\mathcal{A}$ generating the tree of all possible runs of $\mathcal{A}$ on $T_\mathcal{G}$. This tree is a parity game; the game is won by Eve if and only if $\mathcal{A}$ accepts $T_\mathcal{G}$ (see \cref{app:product} for more details). Due to this reduction, it is enough to work with recursion schemes generating parity games, and consider the problem of finding a winner in such games. For every $d\in\mathbb{N}_+$ we consider the alphabet $\Sigma_d=\set{\mathrm{Adam},\mathrm{Eve}}\times\scope{d}$. A \emph{parity tree} is a tree over $\Sigma_d$ where every node has at least one child. A \emph{parity recursion scheme} is a recursion scheme generating a parity tree (in particular the generated tree cannot have nodes without children, including $\omega$-labeled nodes). For a node labeled by $(\wp,p)\in\Sigma_d$, we say that it \emph{belongs} to the player $\wp$, and that it has \emph{priority} $p$. For trees and terms we write $\symb{\wp,p,K_1,\dots,K_k}$ instead of $\symb{(\wp,p),K_1,\dots,K_k}$, avoiding excessive brackets. A branch $\xi$ in a parity tree $T$ is \emph{won by Eve} (\emph{Adam}) if the greatest priority appearing infinitely often on $\xi$ is even (odd, respectively). A \emph{strategy} $\rho$ of a player $\wp\in\set{\mathrm{Adam},\mathrm{Eve}}$ in a parity tree $T$ is a function that assigns numbers to nodes of $T$ belonging to the player $\wp$; if a node $v$ has $k$ children, we require that $\rho(v)\in\scope{k}$. A branch $\xi$ \emph{agrees} with $\rho$ if for every node $v$ on $\xi$ that belongs to $\wp$, the next node of $\xi$ is the $\rho(v)$-th child of $v$. A strategy $\rho$ of $\wp$ is \emph{winning} if all branches that agree with $\rho$ are winning for $\wp$. Finally, $\wp$ \emph{wins} in $T$ if $\wp$ has a winning strategy in $T$; otherwise $\wp$ \emph{loses} in $T$. It is a standard result that in every parity tree $T$ exactly one of the players wins. It is useful to consider the following order $\preceq$ on positive natural numbers (priorities): $\dots\preceq 5\preceq 3\preceq 1\preceq 2\preceq 4\preceq 6\preceq\dots$ (first we have odd numbers in the reversed order, and then positive even numbers). We use the words \emph{worse} and \emph{better} to say that a priority is, respectively, earlier or later in this order. The intuition is that while playing a parity game, Eve always prefers to see better priorities. 
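For example, if the priorities on a branch read $1,2,1,2,\dots$, then the greatest priority appearing infinitely often is $2$, so the branch is won by Eve; if they read $1,2,3,1,2,3,\dots$, then it is $3$, and the branch is won by Adam. In terms of the order $\preceq$, seeing $2$ infinitely often is better for Eve than seeing $3$ infinitely often.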
\section{Transformation}\label{sec:transformation} In this section we present a transformation, called the \emph{order-reducing transformation}, resulting in the main theorem of this paper: \begin{theorem}\label{thm:main} For any $n\geq 1$, there exists a transformation from order-$n$ parity recursion schemes to order-$(n-1)$ parity recursion schemes, and a polynomial $p_n$ such that, for any order-$n$ parity recursion scheme $\mathcal{G}$, the winner in the tree generated by the resulting recursion scheme $\mathcal{G}^\dag$ is the same as in the tree generated by $\mathcal{G}$, and $|\mathcal{G}^\dag|\leq 2^{p_n(|\mathcal{G}|)}$. \end{theorem} \subparagraph{Intuitions.} Let us first present the intuitions behind our transformation. While reducing the order, we have to replace, in particular, order-$1$ functions by order-$0$ terms. Consider, for example, a tree $T$ generated from a term $K\,L$ of type $\mathsf{o}$, where $K$ has type $\mathsf{o}\mathbin{\to}\mathsf{o}$. Essentially, $T$ consists of a context $C_K$, generated by $K$, where the tree $T_L$ generated by $L$ is inserted in some ``holes''. Instead of playing in $T$, we propose the following modification of the game. At the beginning, we ask Eve a question: how is she going to reach subtrees $T_L$ while playing in $T$? She may declare that, according to her winning strategy, \begin{itemize} \item she is able to ensure that the greatest priority seen before reaching $T_L$ will not be worse than $r$, for some number $r$ of her choice, or \item she will not reach subtrees $T_L$ at all, which amounts to choosing for $r$ an even number greater than $d$, say $r=2d$. \end{itemize} Then, we ask Adam whether he believes this declaration. If so, we simply read the declared worst-case priority $r$, and we continue playing in $T_L$ (this possibility is unavailable for Adam if Eve declared that she will not visit $T_L$). Otherwise, we check the declaration: we start playing in $C_K$; upon reaching a place where $T_L$ should be placed, Eve immediately wins (loses) if her declaration is fulfilled (not fulfilled, respectively). We can see that such a modification of the game (even applied in infinitely many places of the considered tree) does not change the winner. A subtle point is that, in the modified game, Eve has to make her declaration of the priority $r$ before actually starting the game in the tree generated from $K\,L$, and it is not completely obvious why the need for the declaration introduces no disadvantage for Eve. Nevertheless, for a fixed winning strategy of Eve, the worst greatest priority seen before reaching $T_L$ is fixed, so Eve can declare it as $r$. In the transformation, we change the order-$1$ term $K$ into several order-$0$ terms: $K_r$ for $r\in\set{1,\dots,d,2d}$ (where $d$ is a bound on priorities in the considered parity recursion scheme $\mathcal{G}$). These terms generate trees of the same shape as the context $C_K$ generated by $K$, but with some fixed trees substituted in place of the holes of $C_K$ (where originally trees generated by the argument $L$ were attached). The generated trees correspond to particular declarations made by Eve, as described above. Namely, we consider some fixed trees $\bot$ and $\top$ in which Eve loses and wins, respectively. Then, in the tree generated by $K_r$, the tree $\top$ is placed in holes such that the greatest priority on the path from the root to the hole is not worse than $r$, and the tree $\bot$ is placed in the remaining holes.
In particular, the tree $\bot$ is placed in all holes of the tree generated by $K_{2d}$, because all priorities actually appearing in the tree are worse than $2d$. Finally, we replace $K\,L$ by $\symb{\mathrm{Eve},1,K_1^L,K_2^L,\dots,K_d^L,K_{2d}}$, where $K_r^L=\symb{\mathrm{Adam},1,K_r,\allowbreak\symb{\mathrm{Eve},\allowbreak r,\allowbreak L}}$. In this way we realize the modified game described above: first Eve chooses a declaration $r$, and then Adam either proceeds to $K_r$ or, after seeing priority $r$, to $L$ (the latter possibility is disabled for $r=2d$). The priority $1$ of the newly created tree nodes should be seen as a neutral priority; higher priorities visited later will be more important anyway. When a term $K$ of order $1$ takes multiple arguments (instead of one argument $L$), we proceed in the same way, allowing Eve to make declarations for each of the arguments. While applying the above-described transformation to recursion schemes, it is possible that the term $K$ considered above contains some nonterminals or variables. Then, in order to realize the transformation, we need to create multiple copies of these nonterminals and variables, corresponding to particular declarations of Eve. For example, say that in a recursion scheme we have (among others) the following two rules: \begin{align*} &\mathsf{X}\to\mathsf{Y}\,\mathsf{Z},\\ &\mathsf{Y}\,\mathsf{z}\to\symb{\mathrm{Eve},1,\mathsf{z},\symb{\mathrm{Eve},2,\mathsf{z}}}. \end{align*} Here $\mathsf{X}$ and $\mathsf{Z}$ are of type $\mathsf{o}$, and $\mathsf{Y}$ is of type $\mathsf{o}\mathbin{\to}\mathsf{o}$, so $\mathsf{Y}\,\mathsf{Z}$ is an application that should be replaced by the transformation. Assuming $d=2$, we should obtain the following rules: \begin{align*} &\mathsf{X}'\to\symb{\mathrm{Eve},1,\symb{\mathrm{Adam},1,\mathsf{Y}_1,\symb{\mathrm{Eve},1,\mathsf{Z}'}}, \symb{\mathrm{Adam},1,\mathsf{Y}_2,\symb{\mathrm{Eve},2,\mathsf{Z}'}}, \mathsf{Y}_4},\\ &\mathsf{Y}_1\to\symb{\mathrm{Eve},1,\downVdash,\symb{\mathrm{Eve},2,\downVdash}},\\ &\mathsf{Y}_2\to\symb{\mathrm{Eve},1,\upVdash,\symb{\mathrm{Eve},2,\downVdash}},\\ &\mathsf{Y}_4\to\symb{\mathrm{Eve},1,\upVdash,\symb{\mathrm{Eve},2,\upVdash}}, \end{align*} where $\upVdash$ and $\downVdash$ are nonterminals from which the trees $\bot$ and $\top$ (in which Eve loses and wins, respectively) are generated. Another possibility is that in the original recursion scheme we have $\mathsf{y}\,\mathsf{Z}$ instead of $\mathsf{Y}\,\mathsf{Z}$: \begin{align*} &\mathsf{S}\to\mathsf{T}\,\mathsf{Y},\\ &\mathsf{T}\,\mathsf{y}\to\mathsf{y}\,\mathsf{Z}. \end{align*} Then, the single parameter $\mathsf{y}$ gets transformed into three parameters: \begin{align*} &\mathsf{S}'\to\mathsf{T}'\,\mathsf{Y}_1\,\mathsf{Y}_2\,\mathsf{Y}_4,\\ &\mathsf{T}'\,\mathsf{y}_1\,\mathsf{y}_2\,\mathsf{y}_4\to\symb{\mathrm{Eve},1,\symb{\mathrm{Adam},1,\mathsf{y}_1,\symb{\mathrm{Eve},1,\mathsf{Z}'}}, \symb{\mathrm{Adam},1,\mathsf{y}_2,\symb{\mathrm{Eve},2,\mathsf{Z}'}}, \mathsf{y}_4}. \end{align*} \subparagraph{Formal definition.} We now formalize the above intuitions. Fix a parity recursion scheme $\mathcal{G}=(\mathcal{X},X_0,\Sigma_d,\mathcal{R})$; in particular fix a bound $d$ on priorities appearing in $\mathcal{G}$. A set $D_d$ of Eve's \emph{declarations} is defined as $D_d=\set{1,\dots,d,2d}$.
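For instance, for $d=3$ we have $D_3=\set{1,2,3,6}$: the declarations $1$, $2$, and $3$ correspond to possible values of the worst greatest priority seen before reaching an argument, while the additional declaration $2d=6$ encodes Eve's claim that the argument will not be reached at all, as in the intuitive description above.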
For a priority $p\in\scope{d}$ and a declaration $r\in D_d$ we define a \emph{shifted} declaration $r\!{\restriction}\!_p$ (obtained from $r$ after seeing priority $p$): \begin{align*} r\!{\restriction}\!_p=\left\{\begin{array}{ll} p+1&\mbox{if $p$ is odd and $p>r$,}\\ p-1&\mbox{if $p$ is even and $p\geq r$,}\\ r&\mbox{otherwise.} \end{array}\right. \end{align*} We remark that the same definition appears in Tsukada and Ong~\cite{shift-was-here} (where shifts are called left-residuals); a slightly different representation is present also in Salvati and Walukiewicz~\cite{Krivine} (with declarations called residuals and shifts called liftings). The \emph{leader} (``most important priority'') of a sequence of priorities $\pi$ is the greatest priority appearing in $\pi$, or $1$ if $\pi$ is empty. A sequence of priorities $\pi$ \emph{fulfils} a declaration $r\in D_d$ if $r$ is worse than or equal to the leader of $\pi$ (where ``worse'' refers to the $\preceq$ order defined in \cref{sec:prelim}). For example, $1,4,2$ and $1,1,1$ both fulfil $3$, but $1,5,4$ does not. The empty sequence fulfils $r$ exactly when $r$ is odd. No sequence of priorities from $\scope{d}$ fulfils $2d$. The following \lcnamecref{shift2fulfilled} is obtained by a direct analysis (see \cref{app:shift2fulfilled}): \begin{restatable}{lemma}{shiftfulfilled}\label{shift2fulfilled} A sequence of priorities $p_1,p_2,\dots,p_k\in\scope{d}$ fulfils a declaration $r\in D_d$ if and only if $p_2,\dots,p_k$ fulfils $r\!{\restriction}\!_{p_1}$. \end{restatable} Given a type, we are interested in cutting off its order-$1$ suffix. Thus, we use the notation $\alpha_1\mathbin{\to}\dots\mathbin{\to}\alpha_k\mathbin{\Rightarrow}\mathsf{o}^\ell\mathbin{\to}\mathsf{o}$ for a type $\alpha_1\mathbin{\to}\dots\mathbin{\to}\alpha_k\mathbin{\to}\mathsf{o}^\ell\mathbin{\to}\mathsf{o}$ such that either $k=0$ or $\alpha_k\neq\mathsf{o}$. Notice that every type $\alpha$ can be uniquely represented in this form. We remark that some among the types $\alpha_1,\dots,\alpha_{k-1}$ (but not $\alpha_k$) may be $\mathsf{o}$. For a type $\alpha$ we write $\mathsf{gar}(\alpha)$ (``ground arity'') for the number $\ell$ for which we can write $\alpha=(\alpha_1\mathbin{\to}\dots\mathbin{\to}\alpha_k\mathbin{\Rightarrow}\mathsf{o}^\ell\mathbin{\to}\mathsf{o})$; we also extend this to terms: $\mathsf{gar}(M)=\mathsf{gar}(\mathsf{tp}(M))$. We transform terms of type $\alpha$ to terms of type $\alpha^{\dag_d}$, which is defined by induction: \begin{align*} (\alpha_1\mathbin{\to}\dots\mathbin{\to}\alpha_k\mathbin{\Rightarrow}\mathsf{o}^\ell\mathbin{\to}\mathsf{o})^{\dag_d} = \left((\alpha_1^{\dag_d})^{|D_d|^{\mathsf{gar}(\alpha_1)}}\mathbin{\to}\dots\mathbin{\to}(\alpha_k^{\dag_d})^{|D_d|^{\mathsf{gar}(\alpha_k)}}\mathbin{\to}\mathsf{o}\right). \end{align*} Thus, we remove all trailing order-$0$ arguments, and we multiply (and recursively transform) the remaining arguments. The number of copies depends on the bound $d$ on priorities appearing in the considered parity recursion scheme. For a finite set $S$, we write $D_d^S$ for the set of functions $A\colon S\to D_d$. Moreover, we assume some fixed order on functions in $D_d^S$, and we write $P\,(Q_A)_{A\in D_d^S}$ for an application $P\,Q_{A_1}\,\dots\,Q_{A_{|D_d|^{|S|}}}$, where $A_1,\dots,A_{|D_d|^{|S|}}$ are all the functions from $D_d^S$ listed in the fixed order. The only function in $D_d^\emptyset$ is denoted $\emptyset$.
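To illustrate the type transformation on a small example: for $d=2$ we have $D_2=\set{1,2,4}$, so $|D_2|=3$. Then $(\mathsf{o}\mathbin{\to}\mathsf{o})^{\dag_2}=\mathsf{o}$ (the single trailing order-$0$ argument disappears), while $((\mathsf{o}\mathbin{\to}\mathsf{o})\mathbin{\to}\mathsf{o}\mathbin{\to}\mathsf{o})^{\dag_2}=(\mathsf{o}^3\mathbin{\to}\mathsf{o})$, since $\mathsf{gar}(\mathsf{o}\mathbin{\to}\mathsf{o})=1$ and thus the order-$1$ argument is transformed and repeated $|D_2|^1=3$ times. In particular the order drops from $2$ to $1$, a fact used in \cref{sec:complexity}.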
For every variable $y$ and for every function $A\in D_d^{\scope{\mathsf{gar}(y)}}$ we consider a variable $y_A^{\dag_d}$ of type $(\mathsf{tp}(y))^{\dag_d}$. Likewise, for every nonterminal $X$ of $\mathcal{G}$ and for every function $A\in D_d^{\scope{\mathsf{gar}(X)}}$ we consider a nonterminal $X_A^{\dag_d}$ of type $(\mathsf{tp}(X))^{\dag_d}$. As the new set of nonterminals we take $\mathcal{X}^{\dag_d}=\setof{X_A^{\dag_d}}{X\in\mathcal{X},A\in D_d^{\scope{\mathsf{gar}(X)}}}\cup\set{\upVdash,\downVdash}$. We now define a function $\mathsf{tr}_d$ transforming terms. Its value $\mathsf{tr}_d(A,Z,M)$ is defined when $M$ is a term over $(\mathcal{X},\mathcal{Y},\Sigma_d)$ for some set of variables $\mathcal{Y}$, and $A\in D_d^{\scope{\mathsf{gar}(M)}}$, and $Z\colon\mathcal{Y}\rightharpoonup D_d$ is a partial function such that $\mathrm{dom}(Z)$ contains only variables of type $\mathsf{o}$. The intention is that $A$ specifies Eve's declarations for trailing order-$0$ arguments, and $Z$ specifies them for order-$0$ variables (among those in $\mathrm{dom}(Z)$). The transformation is defined by induction on the structure of $M$, as follows: \begin{bracketenumerate} \item $\mathsf{tr}_d(A,Z,X)=X_A^{\dag_d}$ for $X\in\mathcal{X}$; \item $\mathsf{tr}_d(A,Z,y)=y_A^{\dag_d}$ for $y\in\mathcal{Y}\setminus\mathrm{dom}(Z)$; \item\label[case-br]{tr:case:3} $\mathsf{tr}_d(\emptyset,Z,z)=\downVdash$ if $Z(z)$ is odd; \item\label[case-br]{tr:case:4} $\mathsf{tr}_d(\emptyset,Z,z)=\upVdash$ if $Z(z)$ is even; \item\label[case-br]{tr:case:5} $\mathsf{tr}_d(\emptyset,Z,\symb{\wp,p,K_1,\dots,K_k})=\symb{\wp,p,\mathsf{tr}_d(\emptyset,Z\!{\restriction}\!_p,K_1),\dots,\mathsf{tr}_d(\emptyset,Z\!{\restriction}\!_p,K_k)}$, where $Z\!{\restriction}\!_p$ is the function defined by $Z\!{\restriction}\!_p(z)=(Z(z))\!{\restriction}\!_p$ for all $z\in\mathrm{dom}(Z)$; \item\label[case-br]{tr:case:6} $\mathsf{tr}_d(A,Z,K\,L)=\symb{\mathrm{Eve},1,K_1^L,K_2^L,\dots,K_d^L,K_{2d}}$ if $\mathsf{tp}(K)=(\mathsf{o}^{\ell+1}\mathbin{\to}\mathsf{o})$, where $K_r^L=\symb{\mathrm{Adam},1,K_r,\symb{\mathrm{Eve},r,\allowbreak\mathsf{tr}_d(\emptyset,Z\!{\restriction}\!_r,L)}}$ for $r\in\scope{d}$ and $K_r=\mathsf{tr}_d(A\mapch{\ell+1\mapsto r},Z,K)$ for $r\in D_d$; \item $\mathsf{tr}_d(A,Z,K\,L)=(\mathsf{tr}_d(A,Z,K))\,(\mathsf{tr}_d(B,Z,L))_{B\in D_d^{\scope{\mathsf{gar}(L)}}}$ if $\mathsf{tp}(K)=(\alpha_1\mathbin{\to}\dots\mathbin{\to}\alpha_k\mathbin{\Rightarrow}\mathsf{o}^\ell\mathbin{\to}\mathsf{o})$ with $k\geq 1$. \end{bracketenumerate} In \cref{tr:case:3,tr:case:4,tr:case:5} the term is of type $\mathsf{o}$, so the ``$A$'' argument is necessarily $\emptyset$ (a function with an empty domain). For every rule $X\,y_1\,\dots\,y_k\,z_1\,\dots\,z_\ell\to R$ in $\mathcal{R}$, where $\ell=\mathsf{gar}(X)$, and for every function $A\in D_d^{\scope{\ell}}$, we add to $\mathcal{R}^{\dag_d}$ the rule \begin{align*} X_A^{\dag_d}\,(y_{1,B}^{\dag_d})_{B\in D_d^{\scope{\mathsf{gar}(y_1)}}}\,\dots\,(y_{k,B}^{\dag_d})_{B\in D_d^{\scope{\mathsf{gar}(y_k)}}}\to\mathsf{tr}_d(\emptyset,\mapch{z_i\mapsto A(\ell+1-i)\mid i\in\scope{\ell}},R). \end{align*} In the function $A$ it is more convenient to count arguments from right to left (then we do not need to shift the domain in \cref{tr:case:6} above), but it is more natural to have variables $z_1,\dots,z_\ell$ numbered from left to right; this is why in the rule for $X_A^{\dag_d}$ we assign to $z_i$ the value $A(\ell+1-i)$, not $A(i)$.
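For instance, for a rule $X\,z_1\,z_2\to R$ with $\mathsf{gar}(X)=2$ (so $k=0$) and a function $A\in D_d^{\scope{2}}$, the new rule is $X_A^{\dag_d}\to\mathsf{tr}_d(\emptyset,\mapch{z_1\mapsto A(2),z_2\mapsto A(1)},R)$: the leftmost trailing argument $z_1$ receives the declaration stored at the highest index, $A(2)$, in accordance with the right-to-left counting used in \cref{tr:case:6}.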
Additionally, in $\mathcal{R}^{\dag_d}$ we have rules $\upVdash\to\symb{\mathrm{Eve},1,\upVdash}$ and $\downVdash\to\symb{\mathrm{Eve},2,\downVdash}$. Then Eve loses (wins) in the tree $\bot$ ($\top$) generated by $\mathcal{G}^\dag$ from $\upVdash$ ($\downVdash$, respectively). Finally, the resulting recursion scheme $\mathcal{G}^\dag$ is $(\mathcal{X}^{\dag_d},X_{0,\emptyset}^{\dag_d},\Sigma_d,\mathcal{R}^{\dag_d})$. This finishes the definition of the transformation. In the next \lcnamecref{sec:complexity} we analyze its complexity, and in \cref{sec:correctness} we justify its correctness. \begin{remark} Let us briefly compare our transformation with a transformation by Broadbent et al.\@ \cite{collapsible-arxiv} reducing the order of a collapsible pushdown automaton by one while preserving the winner of the generated parity game. Although their transformation seems technically more complicated, its overall idea is quite similar to what we do in this paper. Their transformation is split into three independent steps. First, they make the automaton ``rank-aware'', which means that it knows the highest priority visited between the creation of a collapse link and its usage. This corresponds to adding the parameters $A$ and $Z$ to our transformation, so that we know whether a declaration is fulfilled when a variable $z$ is used. Second, they eliminate collapse links of order $n$, which in our case corresponds to removing trailing arguments of order $0$ and introducing the gadget asking Eve for a declaration. Third, they reduce the order of the automaton by one, which we also do for recursion schemes. \end{remark} \section{Complexity}\label{sec:complexity} In this section we analyze the complexity of our transformation. First, we formally define the \emph{size} of a recursion scheme. The size of a term is defined by induction on its structure: \begin{gather*} |X|=|y|=1,\qquad |K\,L|=1+|K|+|L|,\\ |\symb{a,K_1,\dots,K_k}|=1+|K_1|+\dots+|K_k|. \end{gather*} Then $|\mathcal{G}|$, the size of $\mathcal{G}$, is defined as the sum of $|R|+k$ over all rules $X\,y_1\,\dots\,y_k\to R$ of $\mathcal{G}$. In Asada and Kobayashi~\cite{word2tree} such a size is called \emph{Curry-style} size; it does not include the sizes of types of the employed variables. We say that a type $\alpha$ \emph{appears in the definition} of a type $\beta$ if either $\alpha=\beta$, or $\beta=(\beta_1\mathbin{\to}\beta_2)$ and $\alpha$ appears in the definition of $\beta_1$ or of $\beta_2$. We write $A_\mathcal{G}$ for the largest arity of types appearing in the definition of types of nonterminals in a recursion scheme $\mathcal{G}$. Notice that types of other objects used in $\mathcal{G}$, namely variables and subterms of right-hand sides of rules, appear in the definition of types of nonterminals, hence their arity is also bounded by $A_\mathcal{G}$. It is reasonable to consider large recursion schemes, consisting of many rules, in which the maximal arity $A_\mathcal{G}$ is nevertheless small. While the exponential bound mentioned in \cref{thm:main} is obtained by applying the order-reducing transformation to an arbitrary parity recursion scheme, the complexity becomes slightly better if we first apply a preprocessing step. This is in particular necessary if we want to obtain a dependence that is linear in the size of $\mathcal{G}$ (and exponential only in the maximal arity $A_\mathcal{G}$).
The preprocessing, making sure that the recursion scheme is in a \emph{simple form} (defined below), amounts to splitting large rules into multiple smaller rules. A similar preprocessing is present already in prior work~\cite{Kobayashi-jacm,word2tree,diagonal-arxiv,trans-nonempty}. The \emph{application depth} of a term $R$ is defined as the maximal number of applications on a single branch in $R$, where a compound application $K\,L_1\,\dots\,L_k$ counts only once. More formally, we define by induction: \begin{align*} &\mathsf{ad}(\symb{a,K_1,\dots,K_k})=\max(\set{0}\cup\set{\mathsf{ad}(K_i)\mid i\in\scope{k}}),\\ &\mathsf{ad}(X\,K_1\,\dots\,K_k)=\mathsf{ad}(y\,K_1\,\dots\,K_k)=\max(\set{0}\cup\set{\mathsf{ad}(K_i)+1\mid i\in\scope{k}}). \end{align*} We say that a recursion scheme $\mathcal{G}$ is in a \emph{simple form} if the right-hand side of each of its rules has application depth at most $2$. We have the following: \begin{lemma}[{\cite[Lemma~4.1]{trans-nonempty}}]\label{simpl-complexity} For every recursion scheme $\mathcal{G}$ there exists a recursion scheme $\mathcal{G}'$ being in a simple form, generating the same tree as $\mathcal{G}$, and such that $\mathsf{ord}(\mathcal{G}')=\mathsf{ord}(\mathcal{G})$, and $A_{\mathcal{G}'}\leq 2A_\mathcal{G}$, and $|\mathcal{G}'|=\mathcal{O}(A_\mathcal{G}\cdot|\mathcal{G}|)$. The recursion scheme $\mathcal{G}'$ can be created in time linear in its size. \end{lemma} We now state and prove the main lemma of this section: \begin{lemma}\label{trans-complexity} For every parity recursion scheme $\mathcal{G}=(\mathcal{X},X_0,\Sigma_d,\mathcal{R})$ in a simple form, the recursion scheme $\mathcal{G}^\dag$ (i.e., the result of the order-reducing transformation) is also in a simple form, and $\mathsf{ord}(\mathcal{G}^\dag)=\max(0,\mathsf{ord}(\mathcal{G})-1)$, and $A_{\mathcal{G}^\dag}\leq A_\mathcal{G}\cdot (d+1)^{A_\mathcal{G}}$, and $|\mathcal{G}^\dag|=\mathcal{O}(|\mathcal{G}|\cdot (d+1)^{5\cdot A_\mathcal{G}})$. Moreover, $\mathcal{G}^\dag$ can be created in time linear in its size. \end{lemma} \begin{proof} The part about the running time is obvious. It is also easy to see by induction that $\mathsf{ord}(\alpha^{\dag_d})=\max(0,\mathsf{ord}(\alpha)-1)$. It follows that the order of the recursion scheme satisfies the same equality, because nonterminals of $\mathcal{G}^\dag$ have type $\alpha^{\dag_d}$ for $\alpha$ being the type of a corresponding nonterminal of $\mathcal{G}$. Recall that in the type $\alpha^{\dag_d}$ obtained from $\alpha=(\alpha_1\mathbin{\to}\dots\mathbin{\to}\alpha_k\mathbin{\to}\mathsf{o})$, every $\alpha_i$ either disappears or becomes (transformed and) repeated $|D_d|^{\mathsf{gar}(\alpha_i)}$ times, that is, at most $(d+1)^{A_\mathcal{G}}$ times. This implies the inequality concerning $A_{\mathcal{G}^\dag}$. Every compound application can be written as $f\,K_1\,\dots\,K_k\,L_1\,\dots\,L_\ell$, where $f$ is a nonterminal or a variable, and $\ell=\mathsf{gar}(f)$. In such a term, every $K_i$ (after being transformed) gets repeated $|D_d|^{\mathsf{gar}(K_i)}$ times, that is, at most $(d+1)^{A_\mathcal{G}}$ times. Then, for every $L_i$ we replicate the outcome $d+1$ times, and we append a small prefix; this replication happens $\ell$ times (and $\ell\leq A_\mathcal{G}$). In consequence, we easily see by induction that while transforming a term of application depth $c$, its size gets multiplied by at most $\mathcal{O}((d+1)^{2c\cdot A_\mathcal{G}})$.
Moreover, every nonterminal $X$ is repeated $|D_d|^{\mathsf{gar}(X)}$ times, that is, at most $(d+1)^{A_\mathcal{G}}$ times. Because the application depth of right-hand sides of rules is at most $2$, this bounds the size of the new recursion scheme by $\mathcal{O}(|\mathcal{G}|\cdot (d+1)^{5\cdot A_\mathcal{G}})$. Looking again at the above description of the transformation, we can notice that the application depth cannot grow; in consequence the property of being in a simple form is preserved. \end{proof} Thus, if we want to check whether Eve wins in the tree generated by a parity recursion scheme $\mathcal{G}$ of order $n$, we can first convert $\mathcal{G}$ to a simple form, and then apply the order-reducing transformation $n$ times. This gives us a parity recursion scheme of order $0$, which can be seen as a finite parity game with $d$ priorities. Such a game can be solved in time $\mathcal{O}(N^4\cdot 2^d)$, where $N$ is its size~\cite{Calude}. Thus, by \cref{simpl-complexity,trans-complexity}, the whole algorithm works in time $n$-fold exponential in $A_\mathcal{G}$ and $d$, and polynomial (quartic) in $|\mathcal{G}|$. If $\mathcal{G}$ is created as a product of a recursion scheme $\mathcal{H}$ and an alternating parity automaton $\mathcal{A}$, the running time is $n$-fold exponential in $A_\mathcal{H}$ and $|\mathcal{A}|$, and quartic in $|\mathcal{H}|$ (cf.~\cref{app:product}). \section{Correctness}\label{sec:correctness} In this section we finish the proof of \cref{thm:main} by showing that the winner in the tree generated by the recursion scheme $\mathcal{G}^\dag$, resulting from transforming a recursion scheme $\mathcal{G}$, is the same as in the tree generated by the original recursion scheme $\mathcal{G}$. Our proof consists of three parts. First, we show that reductions performed by $\mathcal{G}$ can be reordered, so that we can postpone substituting for (trailing) variables of order $0$. To store such postponed substitutions, called \emph{explicit substitutions}, we introduce \emph{extended trees}. Second, we show that such reordered reductions in $\mathcal{G}$ are in a direct correspondence with reductions in $\mathcal{G}^\dag$. Finally, we show how winning strategies of particular players from the tree generated by $\mathcal{G}^\dag$ can be transferred to the tree generated by $\mathcal{G}$. \subparagraph{Extended trees and terms.} In the sequel, trees and terms defined previously are sometimes called non-extended trees and non-extended terms, in order to distinguish them from extended trees and extended terms defined below. Having a set $\mathcal{Z}$ of variables of type $\mathsf{o}$ and a set of symbols $\Sigma$, (potentially infinite) \emph{extended trees} over $(\mathcal{Z},\Sigma)$ are defined by coinduction: every extended tree over $(\mathcal{Z},\Sigma)$ is of the form either \begin{itemize} \item $\symb{a,T_1,\dots,T_k}$, where $a\in\Sigma$ and $T_1,\dots,T_k$ are again extended trees over $(\mathcal{Z},\Sigma)$, or \item $z$ for some variable $z\in\mathcal{Z}$, or \item $\esubst{T}{U}{z}$, where $z\not\in\mathcal{Z}$ is a variable of type $\mathsf{o}$, and $T$ is an extended tree over $(\mathcal{Z}\cup\set{z},\Sigma)$, and $U$ is an extended tree over $(\mathcal{Z},\Sigma)$. \end{itemize} The construction of the form $\esubst{T}{U}{z}$ is called an \emph{explicit substitution}. Intuitively, it denotes the tree obtained by substituting $U$ for $z$ in $T$. Notice that the variable $z$ being free in $T$ becomes bound in $\esubst{T}{U}{z}$.
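As a small example, taking $T=\symb{a,z,z}$ and $U=\symb{b}$, the extended tree $\esubst{T}{U}{z}$ denotes the tree $\symb{a,\symb{b},\symb{b}}$; this intuition is made precise by the simplification relation defined below.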
Likewise, having a set of typed nonterminals $\mathcal{X}$, a set $\mathcal{Z}$ of variables of type $\mathsf{o}$, and a set of symbols $\Sigma$, \emph{extended terms} over $(\mathcal{X},\mathcal{Z},\Sigma)$ are defined by induction: \begin{itemize} \item if $z\not\in\mathcal{Z}$ is a variable of type $\mathsf{o}$, and $E$ is an extended term over $(\mathcal{X},\mathcal{Z}\cup\set{z},\Sigma)$, and $L$ is a non-extended term of type $\mathsf{o}$ over $(\mathcal{X},\mathcal{Z},\Sigma)$, then $\esubst{E}{L}{z}$ is an extended term over $(\mathcal{X},\mathcal{Z},\Sigma)$; \item every non-extended term of type $\mathsf{o}$ over $(\mathcal{X},\mathcal{Z},\Sigma)$ is an extended term over $(\mathcal{X},\mathcal{Z},\Sigma)$. \end{itemize} Notice that explicit substitutions can be placed anywhere inside an extended tree, while in an extended term they are allowed only to surround a non-extended term. Of course an extended tree over $(\mathcal{Z},\Sigma)$ can also be seen as an extended tree over $(\mathcal{Z}',\Sigma)$, where $\mathcal{Z}'\supseteq\mathcal{Z}$; likewise for extended terms. In the sequel, such an extension of the set of variables is often performed implicitly. Having a recursion scheme $\mathcal{G}=(\mathcal{X},X_0,\Sigma,\mathcal{R})$, for every set $\mathcal{Z}$ of variables of type $\mathsf{o}$ we define an \emph{ext-reduction} relation $\rightsquigarrow_\mathcal{G}$ between extended terms over $(\mathcal{X},\mathcal{Z},\Sigma)$, as the least relation such that \begin{itemize} \item $X\,K_1\,\dots\,K_k\,L_1\,\dots\,L_\ell\rightsquigarrow_\mathcal{G}\esubstdots{R\subst{K_1/y_1,\dots,K_k/y_k,z_1'/z_1,\dots,z_\ell'/z_\ell}}{L_1}{z_1'}{L_\ell}{z_\ell'}$ if\linebreak $\ell=\mathsf{gar}(X)$, and $\mathcal{R}(X)=(X\,y_1\,\dots\,y_k\,z_1\,\dots\,z_\ell\to R)$, and $z_1',\dots,z_\ell'$ are fresh variables of type $\mathsf{o}$ not appearing in $\mathcal{Z}$. \end{itemize} Then, we define by coinduction the extended tree (over $(\mathcal{Z},\Sigma)$) \emph{ext-generated} by $\mathcal{G}$ from an extended term $E$ (over $(\mathcal{X},\mathcal{Z},\Sigma)$), denoted $\mathsf{BT}^\mathsf{ext}_\mathcal{G}(E)$: \begin{itemize} \item if $E\rightsquigarrow_\mathcal{G}^*\symb{a,F_1,\dots,F_k}$, then $\mathsf{BT}^\mathsf{ext}_\mathcal{G}(E)=\symb{a,\mathsf{BT}^\mathsf{ext}_\mathcal{G}(F_1),\dots,\mathsf{BT}^\mathsf{ext}_\mathcal{G}(F_k)}$; \item if $E\rightsquigarrow_\mathcal{G}^*\esubst{F}{L}{z}$, then $\mathsf{BT}^\mathsf{ext}_\mathcal{G}(E)=\esubst{\mathsf{BT}^\mathsf{ext}_\mathcal{G}(F)}{\mathsf{BT}^\mathsf{ext}_\mathcal{G}(L)}{z}$; \item otherwise, $\mathsf{BT}^\mathsf{ext}_\mathcal{G}(E)=\symb{\omega}$. \end{itemize} The extended tree ext-generated by $\mathcal{G}$ (without mentioning a term), denoted $\mathsf{BT}^\mathsf{ext}(\mathcal{G})$, is defined as $\mathsf{BT}^\mathsf{ext}_\mathcal{G}(X_0)$. Formally, the ext-generated extended tree is not unique, because arbitrary fresh names may be used for bound variables; we should thus identify extended trees differing only in names of bound variables. Finally, we explain how to convert extended trees to trees, by performing all the postponed substitutions.
To this end, having fixed a set $\Sigma$ of symbols, we define a \emph{simplification} relation $\rightarrowtail$ between extended trees over $(\emptyset,\Sigma)$ as the least relation such that \begin{itemize} \item $\esubstdots{\symb{a,T_1,\dots,T_k}}{L_1}{z_1}{L_\ell}{z_\ell}\rightarrowtail\symb{a,\esubstdots{T_1}{L_1}{z_1}{L_\ell}{z_\ell},\dots,\esubstdots{T_k}{L_1}{z_1}{L_\ell}{z_\ell}}$, and \item $\esubstdots{z_i}{L_1}{z_1}{L_\ell}{z_\ell}\rightarrowtail\esubstdots{L_i}{L_{i+1}}{z_{i+1}}{L_\ell}{z_\ell}$. \end{itemize} Then, we define by coinduction the \emph{expansion} of an extended tree $T$ over $(\emptyset,\Sigma)$, being a tree over $\Sigma$, and denoted $\mathsf{BT}^\mathsf{s}(T)$: \begin{itemize} \item if $T\rightarrowtail^*\symb{a,T_1,\dots,T_k}$, then $\mathsf{BT}^\mathsf{s}(T)=\symb{a,\mathsf{BT}^\mathsf{s}(T_1),\dots,\mathsf{BT}^\mathsf{s}(T_k)}$; \item otherwise, $\mathsf{BT}^\mathsf{s}(T)=\symb{\omega}$. \end{itemize} The following \lcnamecref{std2ext} says that instead of generating a tree, we can first ext-generate an extended tree, and then expand all the explicit substitutions: \begin{lemma}\label{std2ext} For every recursion scheme $\mathcal{G}$ it holds that $\mathsf{BT}(\mathcal{G})=\mathsf{BT}^\mathsf{s}(\mathsf{BT}^\mathsf{ext}(\mathcal{G}))$. \end{lemma} The lemma can be proved in a standard way; a proof is contained in \cref{app:std2ext} (similar lemmata appear in previous work~\cite[Lemma 18]{word2tree},~\cite[Lemma 5.1]{trans-nonempty}). \subparagraph{Transforming extended parity trees.} An \emph{extended parity tree} is an extended tree whose expansion is a parity tree. We now show how the transformation, defined previously for terms, can be applied to extended parity trees. Namely, we define $\mathsf{tr}^\mathsf{t}_d(Z,T)$ when $T$ is an extended tree over $(\mathcal{Z},\Sigma_d)$ for some set $\mathcal{Z}$ of variables of type $\mathsf{o}$, and $Z\colon\mathcal{Z}\to D_d$ (we do not need an ``$A$'' argument, used previously to store declarations for arguments, because extended trees have no arguments). The definition is by coinduction: \begin{enumerate}[(1')] \setcounter{enumi}{2} \item $\mathsf{tr}^\mathsf{t}_d(Z,z)=\top$ if $Z(z)$ is odd; \item $\mathsf{tr}^\mathsf{t}_d(Z,z)=\bot$ if $Z(z)$ is even; \item $\mathsf{tr}^\mathsf{t}_d(Z,\symb{\wp,p,K_1,\dots,K_k})=\symb{\wp,p,\mathsf{tr}^\mathsf{t}_d(Z\!{\restriction}\!_p,K_1),\dots,\mathsf{tr}^\mathsf{t}_d(Z\!{\restriction}\!_p,K_k)}$; \setcounter{enumi}{7} \item $\mathsf{tr}^\mathsf{t}_d(Z,\esubst{T}{U}{z})=\symb{\mathrm{Eve},1,T_1^U,T_2^U,\dots,T_d^U,T_{2d}}$, where we take $T_r^U=\symb{\mathrm{Adam},1,T_r,\symb{\mathrm{Eve},\allowbreak r,\allowbreak\mathsf{tr}^\mathsf{t}_d(Z\!{\restriction}\!_r,\allowbreak U)}}$ for $r\in\scope{d}$ and $T_r=\mathsf{tr}^\mathsf{t}_d(Z\mapch{z\mapsto r},T)$ for $r\in D_d$. \end{enumerate} Notice that $\mathsf{tr}_d$ transforms a term $z$ to nonterminals $\downVdash$ or $\upVdash$, while $\mathsf{tr}^\mathsf{t}_d$ transforms an extended tree $z$ to trees $\top$ or $\bot$, generated from those nonterminals. In the next \lcnamecref{ext2trans} we observe that the tree generated by the transformed recursion scheme $\mathcal{G}^\dag$ can be obtained by transforming the extended tree ext-generated by the original recursion scheme $\mathcal{G}$: \begin{lemma}\label{ext2trans} For every parity recursion scheme $\mathcal{G}$ it holds that $\mathsf{tr}^\mathsf{t}_d(\emptyset,\mathsf{BT}^\mathsf{ext}(\mathcal{G}))=\mathsf{BT}(\mathcal{G}^\dag)$. 
\end{lemma} The proof is purely syntactical, and is contained in \cref{app:ext2trans}. \subparagraph{Transforming strategies.} We finish our correctness proof by showing the following \lcnamecref{strat-trans}: \begin{lemma}\label{strat-trans} Let $T$ be an extended parity tree over $(\emptyset,\Sigma_d)$. If a player $\wp\in\set{\mathrm{Adam},\mathrm{Eve}}$ wins in $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)$, then $\wp$ wins also in $\mathsf{BT}^\mathsf{s}(T)$. \end{lemma} Recall that the goal of this section is to prove that the winner in $\mathsf{BT}(\mathcal{G}^\dag)$ is the same as in $\mathsf{BT}(\mathcal{G})$, for every parity recursion scheme $\mathcal{G}$. This follows from the above \lcnamecref{strat-trans} used for $T=\mathsf{BT}^\mathsf{ext}(\mathcal{G})$, because $\mathsf{BT}(\mathcal{G}^\dag)=\mathsf{tr}^\mathsf{t}_d(\emptyset,\mathsf{BT}^\mathsf{ext}(\mathcal{G}))$ by \cref{ext2trans} and $\mathsf{BT}(\mathcal{G})=\mathsf{BT}^\mathsf{s}(\mathsf{BT}^\mathsf{ext}(\mathcal{G}))$ by \cref{std2ext}. We now come to the proof of \cref{strat-trans}. In the sequel we assume a fixed extended parity tree $T$ over $(\emptyset,\Sigma_d)$. Suppose first that it is Eve who wins in $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)$; thus, we also fix her winning strategy $\rho$ in this tree. Our goal is to construct Eve's winning strategy $\rho'$ in $\mathsf{BT}^\mathsf{s}(T)$. In the proof, we use two additional notions. First, we say that a sequence of priorities $r_1,\dots,r_k$ is a \emph{$\preceq$-contraction} of a sequence of priorities $p_1,\dots,p_n$ if the latter can be split at some indices $i_0,i_1,\dots,i_k$, where $0=i_0\leq i_1\leq\dots\leq i_k=n$, so that for every $j\in\scope{k}$ the infix $p_{i_{j-1}+1},p_{i_{j-1}+2},\dots,p_{i_j}$ fulfils the declaration $r_j$. Likewise we define $\preceq$-contractions for infinite sequences, except that there are infinitely many splitting indices (which necessarily tend to infinity, meaning that the whole infinite sequence is split). Notice that we allow empty infixes, so one can arbitrarily insert odd numbers $r_j$ (i.e., numbers $r_j$ fulfilled by the empty sequence) into the $\preceq$-contraction. For example, $3,4,2$ is a $\preceq$-contraction of $4,3,2,3,4$ because the empty sequence fulfils $3$, and $4,3$ fulfils $4$, and $2,3,4$ fulfils $2$. On the other hand, $3,4,2$ is not a $\preceq$-contraction of $4,3,2,3$. The idea of $\preceq$-contractions is to describe what happens when we move from $\mathsf{BT}^\mathsf{s}(T)$ to $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)$. Indeed, if $T$ has a subtree of the form $\esubst{U}{V}{z}$, then in $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)$ the play can continue to $V$ after playing only Eve's declaration $r$ (completely skipping $U$), while in $\mathsf{BT}^\mathsf{s}(T)$ before reaching $V$ we traverse through $U$, where the visited priorities are intended to fulfil $r$. It is easy to see that $\preceq$-contractions are transitive, and that they can make the situation only worse for Eve: \begin{lemma}\label{contraction-transitive} If a sequence $\pi_1$ is a $\preceq$-contraction of a sequence $\pi_2$, which is in turn a $\preceq$-contraction of a sequence $\pi_3$, then $\pi_1$ is a $\preceq$-contraction of $\pi_3$. \end{lemma} \begin{lemma}\label{contsctions-preserves-win} If an infinite sequence $\pi_1$ is a $\preceq$-contraction of an infinite sequence $\pi_2$, and the greatest priority appearing infinitely often in $\pi_1$ is even, then the greatest priority appearing infinitely often in $\pi_2$ is even as well.
\end{lemma} We now introduce the second notion (it concerns only finite sequences, and is relative to the bound $d$ on priorities): for a declaration $r\in D_d$ and two sequences $\pi_1,\pi_2$ of priorities from $\scope{d}$ we say that $\pi_1$ is an \emph{$r$-extension} of $\pi_2$ if for every sequence $\pi_3$ of priorities from $\scope{d}$ that fulfils the declaration $r$, the sequence $\pi_1$ is a $\preceq$-contraction of the concatenation $\pi_2\cdot\pi_3$. For example, the sequence $3,4,4$ is a $5$-extension of the sequence $4,3,6$ (independently of the value of $d\geq 6$), because the empty sequence fulfils $3$, and $4,3$ fulfils $4$, and $6,p_1,\dots,p_k$ fulfils $4$ whenever $p_1,\dots,p_k$ fulfils $5$ (i.e., the maximum among $p_1,\dots,p_k$ is either even or at most $5$). Notice, moreover, that every sequence is a $2d$-extension of every sequence, because no sequence of priorities from $\scope{d}$ can fulfil the declaration $2d$. The following \lcnamecref{extension-shifted} is a direct consequence of the definition and of \cref{shift2fulfilled}: \begin{lemma}\label{extension-shifted} If a sequence $\pi$ is an $r$-extension of a sequence $p_1,\dots,p_n$, then $\pi$ is also an $r\!{\restriction}\!_{p_{n+1}}$-extension of $p_1,\dots,p_n,p_{n+1}$ for every priority $p_{n+1}\in\scope{d}$. \end{lemma} Additionally, for a node $v$ (of some parity tree) we write $\pi(v)$ for the sequence of priorities of the ancestors of $v$ (not including the priority of $v$ itself). We now come back to the proof, showing how to construct the new strategy $\rho'$, winning for Eve in $\mathsf{BT}^\mathsf{s}(T)$. In order to describe $\rho'$, we play simultaneously in both trees, $\mathsf{BT}^\mathsf{s}(T)$ and $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)$, and we use moves in one tree to choose moves in the other tree. Namely, at every moment of the play, we remember \begin{itemize} \item a current node $v$ in $\mathsf{BT}^\mathsf{s}(T)$, \item nodes $w_0,w_1,\dots,w_\ell$ in $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)$, for some $\ell\in\mathbb{N}$, \item variables $z_1,\dots,z_\ell$ of type $\mathsf{o}$, \item functions $Z_0,Z_1,\dots,Z_\ell$ storing Eve's declarations, where $Z_i\colon\set{z_{i+1},\dots,z_\ell}\to D_d$ for every $i$, and \item extended trees $U_0,U_1,\dots,U_\ell$, where every $U_i$ is over $(\set{z_{i+1},\dots,z_\ell},\Sigma_d)$. \end{itemize} They satisfy the following invariant: \begin{enumerate}[(a)] \item $\mathsf{BT}^\mathsf{s}(T)\!{\restriction}\!_v=\mathsf{BT}^\mathsf{s}(\esubstdots{U_0}{U_1}{z_1}{U_\ell}{z_\ell})$, \item $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)\!{\restriction}\!_{w_i}=\mathsf{tr}^\mathsf{t}_d(Z_i,U_i)$ for all $i\in\set{0,1,\dots,\ell}$, \item\label{inv:c} $\pi(w_0)$ is a $\preceq$-contraction of $\pi(v)$, and \item\label{inv:d} $\pi(w_j)$ is a $Z_i(z_j)$-extension of $\pi(w_i)$, for all $i,j$ such that $0\leq i<j\leq\ell$. \end{enumerate} We start with $\ell=0$, with $v$ and $w_0$ at the root of $\mathsf{BT}^\mathsf{s}(T)$ and $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)$, respectively, with $Z_0=\emptyset$, and with $U_0=T$. The invariant is clearly satisfied. Then, during the play, we have one of three cases, depending on the shape of $U_0$: \begin{enumerate} \item\label[case]{case:1} First, assume that $U_0=\symb{\wp,p,T_1,\dots,T_k}$.
Then \begin{align*} \mathsf{BT}^\mathsf{s}(T)\!{\restriction}\!_v&=\symb{\wp,p,\mathsf{BT}^\mathsf{s}(\esubstdots{T_1}{U_1}{z_1}{U_\ell}{z_\ell}),\dots,\mathsf{BT}^\mathsf{s}(\esubstdots{T_k}{U_1}{z_1}{U_\ell}{z_\ell})};\\ \mathsf{tr}^\mathsf{t}_d(\emptyset,T)\!{\restriction}\!_{w_0}&=\symb{\wp,p,\mathsf{tr}^\mathsf{t}_d(Z_0,T_1),\dots,\mathsf{tr}^\mathsf{t}_d(Z_0,T_k)}. \end{align*} If $\wp=\mathrm{Adam}$, Adam chooses some child of $v$ in $\mathsf{BT}^\mathsf{s}(T)$, and we choose the same child of $w_0$ in $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)$. If $\wp=\mathrm{Eve}$, Eve chooses some child of $w_0$ in $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)$, according to her strategy $\rho$, and in $\rho'$ we choose the same child of $v$. Thus, in both cases, we move both $v$ and $w_0$ to their $c$-th child, for some $c\in\scope{k}$. We also take $Z_0\!{\restriction}\!_p$ as the new $Z_0$ and $T_c$ as the new $U_0$. \cref{extension-shifted} ensures that \cref{inv:d} of the invariant is preserved. \item\label[case]{case:2} Another possibility is that $U_0$ is a variable, that is, $U_0=z_c$ for some $c\in\scope{\ell}$. Then $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)\!{\restriction}\!_{w_0}$ (i.e., $\mathsf{tr}^\mathsf{t}_d(Z_0,U_0)$) is either $\bot$ or $\top$, depending on the parity of $Z_0(z_c)$. But our play in $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)$ follows a winning strategy of Eve, so it will be won by Eve, thus the subtree cannot be $\bot$, in which Eve loses. In consequence $Z_0(z_c)$ is odd, so the empty sequence fulfils $Z_0(z_c)$. This implies that $\pi(w_c)$, being a $Z_0(z_c)$-extension of $\pi(w_0)$, is its $\preceq$-contraction, and thus also a $\preceq$-contraction of $\pi(v)$ (by \cref{contraction-transitive}). We discard $w_i,z_i,Z_i,U_i$ for $i<c$ (so that $w_c$ now becomes $w_0$, etc.). \item\label[case]{case:3} Finally, assume that $U_0=\esubst{V}{W}{z}$. Then $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)\!{\restriction}\!_{w_0}=\symb{\mathrm{Eve},1,V_1^W,\dots,V_d^W,V_{2d}}$, where $V_r^W=\symb{\mathrm{Adam},1,V_r,\symb{\mathrm{Eve},r,\mathsf{tr}^\mathsf{t}_d(Z_0\!{\restriction}\!_r,W)}}$ for $r\in\scope{d}$ and $V_r=\mathsf{tr}^\mathsf{t}_d(Z_0\mapch{z\mapsto r},V)$ for $r\in D_d$. In such a node $w_0$ Eve, according to her strategy $\rho$, chooses a declaration $r$ by going to an appropriate subtree $V_r^W$ (or $V_r$ if $r=2d$). We then update our memory as follows: \begin{itemize} \item We leave $v$ and $w_i,z_i,Z_i,U_i$ for $i\geq 1$ unchanged. \item We move $w_0$ to the root of $V_r$ (this adds the priority $1$ once or twice to $\pi(w_0)$, hence \cref{inv:c} of the invariant is preserved). \item Let $r'=r$ if $r\in\scope{d}$, and $r'=1$ if $r=2d$. \item We add an additional node $w_{0.5}$ between $w_0$ and $w_1$ (put differently, we shift $w_i$ for $i\geq 1$ by one, and we insert the new node in place of $w_1$). For $w_{0.5}$ we choose the root of $\mathsf{tr}^\mathsf{t}_d(Z_0\!{\restriction}\!_{r'},W)$. Notice that $\pi(w_{0.5})$ is an $r$-extension of $\pi(w_0)$ (for $r\in\scope{d}$ because $\pi(w_{0.5})$ is obtained from $\pi(w_0)$ by appending the priority $r'=r$, and for $r=2d$ because no sequence of priorities from $\scope{d}$ fulfils $2d$), and that every $\pi(w_j)$ for $1\leq j\leq\ell$ is a $Z_0\!{\restriction}\!_{r'}(z_j)$-extension of $\pi(w_{0.5})$ (by \cref{extension-shifted}). \item As $Z_0$, $U_0$, $z_{0.5}$, $Z_{0.5}$, and $U_{0.5}$ we take $Z_0\mapch{z\mapsto r}$, $V$, $z$, $Z_0\!{\restriction}\!_{r'}$, and $W$, respectively.
\end{itemize} \end{enumerate} Observe that after finitely many repetitions of \cref{case:2,case:3} necessarily \cref{case:1} has to occur, where the play advances in $\mathsf{BT}^\mathsf{s}(T)$. Indeed, $\esubstdots{U_0}{U_1}{z_1}{U_\ell}{z_\ell}$ has to generate the next node of $\mathsf{BT}^\mathsf{s}(T)$ in finitely many steps; in particular, the number of explicit substitutions at the head of $U_0$ has to be finite. We have to prove that the infinite branch $\xi$ of $\mathsf{BT}^\mathsf{s}(T)$ obtained this way is won by Eve. To this end, consider the corresponding sequence of ``$w_0$'' nodes in the construction and observe that this sequence converges to some infinite branch $\zeta$ in $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)$. Indeed, whenever the sequence enters a subtree of the form $\mathsf{tr}^\mathsf{t}_d(Z_0,\esubst{V}{W}{z})$ and stays there forever, then either it enters the subtree $V_r=\mathsf{tr}^\mathsf{t}_d(Z_0\mapch{z\mapsto r},V)$ for some $r$ and stays there forever, or, after some time, it enters the subtree $\mathsf{tr}^\mathsf{t}_d(Z_0\!{\restriction}\!_r,W)$ for some $r$ and stays there forever. Moreover, the sequence of priorities on $\zeta$ is a $\preceq$-contraction of the sequence of priorities on $\xi$ (the function from elements of the former sequence to infixes of the latter sequence, as needed for a $\preceq$-contraction, is obtained as the limit of such functions witnessing that always $\pi(w_0)$ is a $\preceq$-contraction of $\pi(v)$). Since $\zeta$ agrees with the strategy $\rho$, it is won by Eve, hence by \cref{contsctions-preserves-win} also $\xi$ is won by Eve, as required. This finishes the proof in the case of Eve winning in $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)$. Suppose now that it is Adam who wins in $\mathsf{tr}^\mathsf{t}_d(\emptyset,T)$. The proof in this case is similar, so we only list the differences. First, a \emph{$\succeq$-contraction} is defined like a $\preceq$-contraction, but for every infix $p_{i_{j-1}+1},p_{i_{j-1}+2},\dots,p_{i_j}$ in the split we require that $r_j$ is $\succeq$ (instead of $\preceq$) the leader of the infix. Second, we say that a sequence $\pi_1$ of priorities from $\scope{d}$ is an \emph{$r$-neg-extension} of a sequence $\pi_2$ of priorities from $\scope{d}$ if for every sequence $\pi_3$ of priorities from $\scope{d}$ that does \emph{not} fulfil the declaration $r$, the sequence $\pi_1$ is a $\succeq$-contraction of the concatenation $\pi_2\cdot\pi_3$. In \cref{inv:c,inv:d} of the invariant we replace $\preceq$-contraction by $\succeq$-contraction, and $r$-extension by $r$-neg-extension. Then, in \cref{case:1} of the construction we only swap the roles of Eve and Adam. In \cref{case:2} we now have that the play is won by Adam, so $Z_0(z_c)$ is even, that is, not fulfilled by the empty sequence; this implies that $\pi(w_c)$, being a $Z_0(z_c)$-neg-extension of $\pi(w_0)$, is also its $\succeq$-contraction. The main difference is in \cref{case:3}. For every $r\in\scope{d}$ we know Adam's decision in the root of $V_r^W$, according to his winning strategy. Take the worst $r\in\scope{d}$ such that in $V_r^W$ Adam goes to the left subtree, or $r=2d$ if he goes right everywhere; in both cases, Adam's strategy allows us to enter $V_r$. Let also $s$ be the best among priorities that are worse than $r$; in $V_s^W$ Adam goes to the right subtree (if there are no priorities worse than $r$, we choose $s$ arbitrarily, e.g., $s=1$).
Then as the new $w_0$ we take the root of $V_r$, and as $w_{0.5}$ we take the root of $\mathsf{tr}^\mathsf{t}_d(Z_0\!{\restriction}\!_s,W)$. Notice that $\pi(w_{0.5})$ is an $r$-neg-extension of $\pi(w_0)$: $s$ is better than or equal to the leader of every sequence not fulfilling $r$ (also when $r$ is the worst priority, because then no such sequence exists), which ensures that the invariant is preserved. \section{Final remarks} We have presented a new, simple model-checking algorithm for higher-order recursion schemes. One may ask whether this algorithm can be used in practice. Of course, the complexity $n$\textsf{-EXPTIME} for recursion schemes of order $n$ is unacceptably large (even if we take into account the fact that we are $n$-fold exponential only in the arity of types and in the size of an automaton, not in the size of a recursion scheme), but one should recall that there exist tools solving the considered problem in exactly this complexity. The reason these tools work is that the time they spend on ``easy'' inputs is much smaller than the worst-case complexity (and many ``typical'' inputs are indeed easy). Unfortunately, this is not the case for our algorithm: the size of the recursion scheme resulting from our transformation is always large. Moreover, it seems unlikely that any simple analysis of the resulting recursion scheme (like removing useless nonterminals or some control-flow analysis) can help in reducing its size. Indeed, one can see that if no nonterminals or arguments were useless in the original recursion scheme, then no nonterminals or arguments are useless in the resulting recursion scheme either. Thus, our algorithm is mainly of theoretical interest. It seems feasible that a transformation similar to the one presented in this paper can be used to solve the simultaneous unboundedness problem (a.k.a.\ the diagonal problem)~\cite{diagonal-arxiv} for recursion schemes. Developing such a transformation is a possible direction for further work.
{ "timestamp": "2021-05-06T02:09:26", "yymm": "2105", "arxiv_id": "2105.01861", "language": "en", "url": "https://arxiv.org/abs/2105.01861" }
\section{Introduction}\label{sec:intro} The Event Horizon Telescope (EHT) Collaboration has recently published the first images of a black hole (\citealt{PaperI,PaperII,PaperIII,PaperIV,PaperV,PaperVI,PaperVII,PaperVIII}; hereafter EHTC I-VIII). These images achieve a diffraction-limited angular resolution that corresponds to approximately $5 G M/c^2$, where $M$ is the mass of the black hole. They reveal a bright ring of emission with a twisting polarization pattern and a prominent rotationally symmetric mode. The polarization structure in the EHT images depends on details of the emitting plasma, principally the magnetic field geometry. However, it is also affected by the strongly curved spacetime near the black hole. Over the past few decades, simulated polarimetric images of black holes have been studied as a means to understand astrophysical properties of their surrounding accretion flows \citep[e.g.,][]{Bromley_2001,Shcherbakov_2012,Moscibrodzka_2017,Jimenez_2018,PWP_2020} and to infer the disk inclination and black hole spin through the effects of parallel transport \citep[e.g.,][]{Connors_1980,Broderick_Loeb_2006,Li_et_al_2009,Schnittman_Krolik_2009,Gold_2017,Marin_et_al_2018}. While they are becoming increasingly realistic, these simulations are generally difficult to use for broad parameter surveys because of their computational cost, and they often provide little insight into how to decouple astrophysical and relativistic effects. In this article, we develop a simple toy model to understand polarimetric images of black holes. This model consists of a ring of magnetized fluid orbiting a Schwarzschild black hole. Our model allows arbitrary emission radius, magnetic field geometry, equatorial fluid velocity, and observer inclination. With a single approximation, described in \autoref{sec:model}, we can analytically compute the resulting polarimetric image and can assess its dependence on the input parameters. In \autoref{sec:model}, we describe the toy ring model and work out the relevant relativistic transformations from the frame of a radiating fluid element in the ring to the image as seen on the sky by an observer. In \autoref{sec:examples}, we present a series of examples to illustrate the primary model features. In \autoref{sec:analytic}, we provide analytic estimates of image diagnostics -- the apparent shape of the ring, the vector polarization, and the coefficient of rotational symmetry \citep[$\beta_2$;][]{PWP_2020}. In \autoref{sec:observation_comparisons}, we discuss the suitability of our model for comparisons with observations, focusing on the EHT images of M87* and polarization ``loops'' seen during flares of Sagittarius A* (Sgr~A*). In \autoref{sec:summary}, we summarize our results. \section{The Model}\label{sec:model} We consider an accretion disk around a Schwarzschild black hole of mass $M$. We use standard geometrized units: $G=c=1$. The fluid radiates from the equatorial plane within a narrow range of radii centered on a dimensionless radius $R$, measured in units of $M$ (or $GM/c^2$, including the physical constants). With respect to a distant observer, the ring is tilted from a face-on orientation by an angle $\theta_{\rm o}$. We assume that the tilt is towards the North, so that the line-of-nodes between the ring orbital plane and the observer's sky plane is in the East-West direction. 
We take the sky angular coordinate $x$ to be oriented towards the West (i.e., to the right), and the coordinate $y$ towards the North (i.e., towards the top). The fluid has radial and tangential components of velocity in the plane of the ring, but no vertical velocity. In the comoving frame of the fluid, the magnetic field has radial, azimuthal and vertical components. For simplicity, we assume that both the velocity field and the magnetic field are axisymmetric, though the equations developed in this section are valid even without this assumption. We wish to compute the following primary observables: (1) the shape of the ring as viewed by the distant observer, (2) the variation of the polarized intensity around the observed ring, and (3) the orientation and pattern of the polarization vectors around the ring. An exact calculation requires integrating the geodesic equation, which has to be done numerically. However, with one simplification, described below, it is possible to do all the calculations analytically. This simplified model provides a convenient method for investigating polarization properties of idealized models. \subsection{Geometry, Lensing and Special Relativity} In the ring plane, we consider a fluid element P located at azimuthal angle $\phi$ measured from the line-of-nodes. We are interested in a null geodesic, a light ray, that travels from P to the observer. This geodesic lies in a plane that includes the line from the black hole O to the point P, as well as the line from O to the observer (see Fig.~\ref{fig:gframe}). We set up Cartesian coordinates in the geodesic plane so that the unit vector along the $x$-axis $\hat{x}$ is oriented along OP and the observer lies on the $\hat{x}$-$\hat{z}$~plane. We call this the geodesic frame, or G-frame. The angle $\psi$ between $\hat{x}$ and the unit vector $\hat{n}$ towards the observer satisfies \begin{align} \label{eq:psi} \cos\psi &= -\sin\theta_{\rm o}\,\sin\phi, \nonumber\\ \sin\psi &= (1-\cos^2\psi)^{1/2}. \end{align} \begin{figure}[t] \includegraphics[width=\columnwidth]{Geodesic_Frame-crop.pdf} \caption{Geometry in the geodesic frame, or G-frame. In the Schwarzschild metric, each null geodesic is confined to a plane that intersects the black hole. The G-frame, defined for photons emitted at point P and reaching a distant observer at relative angle $\psi$, corresponds to Cartesian axes centered on the black hole, with $\hat{x}$ in the direction of P and the $\hat{x}$-$\hat{z}$~plane given by the geodesic plane. We approximate the emission angle $\alpha$ in this frame using \autoref{belo}. } \label{fig:gframe} \end{figure} We consider a null geodesic with conserved energy\footnote{This is the photon energy measured by an observer at infinity, and we normalize it to unity.} $k_t=-1$ traveling from P to the observer. At the location P, the orthonormal time component $k_{({\rm G})}^{\hat{t}}$ of its 4-wavevector is given by (the redshift factor here is calculated using the Schwarzschild metric, as appropriate for the assumed non-spinning black hole) \begin{equation} k_{({\rm G})}^{\hat{t}} = -\frac{k_t}{\sqrt{-g_{tt}}} = \frac{1}{\left(1-\frac{2}{R}\right)^{1/2}}, \end{equation} where the subscript `$(G)$' indicates that this quantity is measured in the G-frame. Also, since the geodesic lies in the $xz$-plane, we have $k_{(G)}^{\hat{y}}=0$. 
To determine the other two components of $k$, we need the angle $\alpha$ in Fig.~\ref{fig:gframe}, in terms of which we can write \begin{equation} k_{({\rm G})}^{\hat{x}} = k_{({\rm G})}^{\hat{t}}\cos\alpha, \qquad k_{({\rm G})}^{\hat{z}}=k_{({\rm G})}^{\hat{t}}\sin\alpha. \end{equation} Instead of attempting to calculate $\alpha$ precisely, which would require a numerical integration of the geodesic equation, we use the following approximate formula obtained by \citet{Beloborodov_2002}, \begin{align} \label{belo} \cos\alpha &= \cos\psi + \frac{2}{R}\,(1-\cos\psi), \nonumber\\ \sin\alpha &= (1-\cos^2\alpha)^{1/2}. \end{align} This approximation is surprisingly accurate even for values of $R$ of order a few (see sec.~\ref{sec:numerical} and \autoref{sec:errorAnalysis}). \begin{figure}[t] \includegraphics[width=\columnwidth]{P_Frame-crop.pdf} \caption{Geometry in the P-frame. This frame is aligned with the rotating gas at emission radius $R$ and emission azimuth $\phi$. The $\hat{x}$ direction lies along the radial line from the black hole at O to the emission point P, and $\hat{y}$ is the azimuthal direction. The equatorial magnetic field $\vec{B}_{\rm eq}$ and fluid velocity $\vec{\beta}$ lie at angles $\eta$ and $\chi$ to $\hat{x}$ in the $x$-$y$ plane, respectively. Our model allows these angles to be specified independently, but we will later focus on the physically motivated choices of $\eta = \chi$ and $\eta = \chi+\pi$ (see \autoref{sec:examples}).} \label{fig:pframe} \end{figure} We now switch to a Cartesian frame that is aligned with the orbiting fluid ring. We take $\hat{x}$ along OP, $\hat{y}$ in the azimuthal direction at P parallel to $\hat{\phi}$, and $\hat{z}$ perpendicular to the orbital plane. We call this the P-frame (see Fig.~\ref{fig:pframe}). The G-frame and P-frame have a common $\hat{x}$-axis. Therefore, transforming from one to the other involves rotation by some angle $\xi$ around the $x$-axis. To determine $\xi$, we note that the unit vector $\hat{n}$ from the black hole O towards the observer has Cartesian components $(\cos\psi, ~0, ~\sin\psi)$ in the G-frame, and Cartesian components $(-\sin\theta_{\rm o} \sin\phi, ~-\sin\theta_{\rm o} \cos\phi, ~\cos\theta_{\rm o})$ in the P-frame. Since a rotation by angle $\xi$ transforms one set of components to the other, we obtain \begin{equation} \cos\xi = \frac{\cos\theta_{\rm o}}{\sin\psi}, \qquad \sin\xi = \frac{\sin\theta_{\rm o}\,\cos\phi}{\sin\psi}. \end{equation} Applying this rotation to the orthonormal components of $k_{\rm (G)}$, we obtain the corresponding orthonormal components in the P-frame, \begin{alignat}{3} k_{({\rm P})}^{\hat{t}} &= \frac{1}{\left(1-\frac{2}{R}\right)^{1/2}},\qquad & k_{({\rm P})}^{\hat{x}} &= \frac{\cos\alpha}{\left(1-\frac{2}{R}\right)^{1/2}},\\ k_{({\rm P})}^{\hat{y}} &=-\frac{\sin\xi\,\sin\alpha}{\left(1-\frac{2}{R}\right)^{1/2}},\qquad & k_{({\rm P})}^{\hat{z}} &=\frac{\cos\xi\,\sin\alpha}{\left(1-\frac{2}{R}\right)^{1/2}}. \end{alignat} The fluid at the point P moves in the $xy$-plane of the local P-frame with a velocity $\vec{\beta}$, which we write in the local Cartesian coordinate frame as (see Fig.~\ref{fig:pframe}) \begin{equation} \vec{\beta} = \beta \left(\cos\chi\,\hat{x} + \sin\chi\,\hat{y}\right). \label{velocity2} \end{equation} Our sign convention is that radial motion towards the black hole corresponds to $\cos\chi<0$, and clockwise rotation on the sky corresponds to $\sin\chi<0$. In the case of M87*, the rotation is clockwise. 
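The chain of transformations up to this point is simple enough to script directly. The following Python sketch (ours, for illustration only; the function name is not from this paper) evaluates \autoref{eq:psi}, the approximation of \autoref{belo}, and the rotation by $\xi$, and returns the orthonormal P-frame wavevector; the printed null check should vanish to machine precision.

\begin{verbatim}
import numpy as np

def photon_wavevector_P(R, phi, theta_o):
    """Orthonormal P-frame components [t, x, y, z] of the photon
    4-wavevector, for emission at ring radius R (units of M), ring
    azimuth phi, and observer inclination theta_o (radians)."""
    cospsi = -np.sin(theta_o) * np.sin(phi)          # eq. (psi)
    sinpsi = np.sqrt(1.0 - cospsi**2)
    cosalpha = cospsi + (2.0 / R) * (1.0 - cospsi)   # Beloborodov approx.
    sinalpha = np.sqrt(1.0 - cosalpha**2)
    g = np.sqrt(1.0 - 2.0 / R)                       # redshift factor
    cosxi = np.cos(theta_o) / sinpsi                 # G- to P-frame rotation
    sinxi = np.sin(theta_o) * np.cos(phi) / sinpsi
    return np.array([1.0, cosalpha,
                     -sinxi * sinalpha, cosxi * sinalpha]) / g

k_P = photon_wavevector_P(6.0, np.radians(45.0), np.radians(20.0))
print("k_P =", k_P)
print("null check:", -k_P[0]**2 + np.sum(k_P[1:]**2))  # ~ 0
\end{verbatim}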
The velocity $\vec{\beta}$ describes motion of the fluid through the ring; the ring model itself is not expanding or contracting. We now transform to the fluid frame --- the F-frame --- by applying a Lorentz boost with velocity $\vec{\beta}$. This gives the following orthonormal components of $k$, \begin{eqnarray} k_{({\rm F})}^{\hat{t}} &=& \gamma\, k_{({\rm P})}^{\hat{t}} -\gamma\beta\cos\chi\, k_{({\rm P})}^{\hat{x}}-\gamma\beta\sin\chi\, k_{({\rm P})}^{\hat{y}}, \nonumber \\ k_{({\rm F})}^{\hat{x}} &=& -\gamma\beta\cos\chi\, k_{({\rm P})}^{\hat{t}} +(1+(\gamma-1)\cos^2\chi)\, k_{({\rm P})}^{\hat{x}} \nonumber \\ &~& \qquad\qquad\qquad +(\gamma-1)\cos\chi\sin\chi\, k_{({\rm P})}^{\hat{y}}, \nonumber \\ k_{({\rm F})}^{\hat{y}} &=& -\gamma\beta\sin\chi\, k_{({\rm P})}^{\hat{t}} +(\gamma-1)\cos\chi\sin\chi\, k_{({\rm P})}^{\hat{x}} \nonumber \\ &~& \qquad\qquad\qquad +(1+(\gamma-1)\sin^2\chi)\, k_{({\rm P})}^{\hat{y}}, \nonumber \\ k_{({\rm F})}^{\hat{z}} &=& k_{({\rm P})}^{\hat{z}}. \end{eqnarray} \subsection{Transformation of Polarized Intensity} Any radiation emitted along $k_{\rm(F)}^{\hat{\mu}}$ in the F-frame is Doppler-shifted by the time it reaches the observer. Since $k_{\rm (O)}^{\hat{t}}$ in the observer frame is equal to unity, the Doppler factor $\delta$ is \begin{equation} \delta = \frac{k_{\rm (O)}^{\hat{t}}}{k_{\rm (F)}^{\hat{t}}} = \frac{1}{k_{\rm (F)}^{\hat{t}}}. \end{equation} This includes both the gravitational redshift and the kinematic Doppler shift due to the fluid velocity. In the fluid frame, there is a magnetic field which we write as\footnote{Because the emission of synchrotron radiation is best described in the fluid frame, we find it convenient to specify the magnetic field components in this frame. The $\hat{x}$, $\hat{y}$, $\hat{z}$ axes in the fluid frame are related to the corresponding axes in the P-frame (equivalently, the Schwarzschild frame, e.g., eq.~\ref{Schwarzschild}), via a Lorentz transformation with velocity $\vec{\beta}$. The transformation of field components between the two frames is worked out in Appendix~\ref{sec:transformations}.} \begin{eqnarray} \vec{B} &=& B_r\hat{x} + B_\phi \hat{y} + B_z \hat{z} \nonumber \\ &=& B_{\rm eq}\,(\cos\eta\,\hat{x} + \sin\eta\,\hat{y}) + B_z\, \hat{z} \label{field_comps} \\ &\equiv& \vec{B}_{\rm eq} + B_z\hat{z}, \nonumber \end{eqnarray} where the second line describes the field components in the equatorial plane in terms of a magnitude $B_{\rm eq}$ and an orientation $\eta$ (see Fig.~\ref{fig:pframe}). The intensity of synchrotron radiation emitted along the 3-vector $\vec{k}_{\rm (F)}$ depends on $\sin\zeta$, where $\zeta$ is the angle between $\vec{k}_{\rm (F)}$ and the magnetic field $\vec{B}$: \begin{equation} \sin\zeta = \frac{|\vec{k}_{\rm (F)} \times \vec{B}|}{|\vec{k}_{\rm (F)}|\,\,|\vec{B}|}. \end{equation} In the case of thermal synchrotron emission, the intensity also depends on the ratio of the emitted photon energy $h\nu$ to the electron temperature $kT_e$. At low frequencies $h\nu \ll kT_e$, the intensity is proportional to $\sin^{2/3}\zeta$ \citep[e.g.,][]{Mahadevan_1996}, whereas in the opposite limit $h\nu \gg kT_e$, the intensity varies as a very large positive power of $\sin\zeta$, because of the exponential cutoff of the particle energy distribution and the corresponding rapid decline of emissivity with increasing frequency. In general, if the emitted intensity varies as $I_\nu \sim \nu^{-\alpha_\nu}$, then the angle dependence goes as $(\sin\zeta)^{1+\alpha_\nu}$. 
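As a concrete illustration of the boost, the sketch below (again ours) applies the transformation above to a P-frame wavevector and evaluates the Doppler factor $\delta$ and the pitch-angle factor $\sin\zeta$. The sample $k_{\rm (P)}$ values are those printed by the previous sketch ($R=6$, $\phi=45^\circ$, $\theta_{\rm o}=20^\circ$); the velocity and field angles ($\beta=0.3$, $\chi=-150^\circ$, $\eta=30^\circ$) anticipate the fiducial model of \autoref{sec:examples}.

\begin{verbatim}
import numpy as np

def boost_to_fluid_frame(k_P, beta, chi):
    """Boost the P-frame wavevector into the comoving fluid (F-)frame;
    chi is the velocity angle of eq. (velocity2)."""
    gam = 1.0 / np.sqrt(1.0 - beta**2)
    c, s = np.cos(chi), np.sin(chi)
    kt, kx, ky, kz = k_P
    return np.array([
        gam * kt - gam * beta * (c * kx + s * ky),
        -gam * beta * c * kt + (1 + (gam - 1) * c**2) * kx
            + (gam - 1) * c * s * ky,
        -gam * beta * s * kt + (gam - 1) * c * s * kx
            + (1 + (gam - 1) * s**2) * ky,
        kz])

# k_P for R = 6, phi = 45 deg, theta_o = 20 deg (previous sketch).
k_P = np.array([1.22474, 0.21078, -0.30072, 1.16838])
k_F = boost_to_fluid_frame(k_P, beta=0.3, chi=np.radians(-150.0))
delta = 1.0 / k_F[0]          # Doppler factor (redshift + velocity)

# Pitch angle zeta for an equatorial field with eta = 30 deg.
B = np.array([np.cos(np.radians(30.0)), np.sin(np.radians(30.0)), 0.0])
sinzeta = (np.linalg.norm(np.cross(k_F[1:], B))
           / (np.linalg.norm(k_F[1:]) * np.linalg.norm(B)))
print(f"delta = {delta:.4f}, sin(zeta) = {sinzeta:.4f}")
\end{verbatim}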
In models of M87*, a dependence $\sim\sin^2\zeta$ is often obtained at 230\,GHz. This corresponds to $\alpha_\nu\sim1$, which is consistent with the synchrotron emission being close to its peak at this frequency ($\nu F_\nu$ roughly constant). In the analysis below, we explicitly retain the $\alpha_\nu$ dependence. However, we set $\alpha_\nu=1$ for the numerical calculations described in sec.~\ref{sec:examples}, and also when we series-expand the equations in Appendix~\ref{sec:series}. The factor $(\sin\zeta)^{1+\alpha_\nu}$ discussed in the previous paragraph is the emission per unit volume. To convert this to the emerging intensity in the fluid frame we need to multiply by the geodesic path length $l_{\rm p}$ through the emitting region. We assume that the medium is optically thin to its own emission. If we model the emitting fluid as a thin disk of vertical thickness $H$, then the path length is \begin{equation} l_{\rm p} = \frac{k_{\rm (F)}^{\hat{t}}} {k_{\rm (F)}^{\hat{z}}}\,H. \label{eq:lp} \end{equation} So far, we have discussed the emitted intensity in the fluid frame. This intensity is Doppler-boosted by a factor of $\delta^{3+\alpha_\nu}$ by the time it reaches the observer.\footnote{In the context of a continuous relativistic jet, a Doppler boost factor of $\delta^{2+\alpha_\nu}$ is generally used \citep[e.g.,][]{Blandford_Konigl}. That corresponds to the combined quantity $l_p\delta^{3+\alpha_\nu}$, where for motion parallel to the jet axis, $l_p\propto \delta^{-1}$. Our formulation, with $l_p$ handled as a separate factor, is more general.} Thus, the intensity $|P|$ of linearly polarized synchrotron radiation that reaches the observer from the location P is \begin{eqnarray} |P| &=& \delta^{3+\alpha_\nu}\,l_{\rm p}\,|\vec{B}|^{1+\alpha_\nu}\,\sin^{1+\alpha_\nu}\zeta \label{absP0}\\ &\to& \delta^4\,l_{\rm p}\,|\vec{B}|^2\sin^2\zeta~~{\rm for~\alpha_\nu=1}, \label{absP} \end{eqnarray} where we have omitted a proportionality constant. Since $|\vec{B}|$ is constant around the ring, the factors involving $|\vec{B}|$ could be eliminated from Equations \ref{absP0} and \ref{absP} and absorbed into the omitted proportionality constant. We retain these factors because keeping track of $|\vec{B}|^2$ and its components is convenient for much of the analysis in \autoref{sec:series}.\footnote{Alternatively, we could assume $|\vec{B}|=1$, as indeed we do in all the plots, eliminate $|\vec{B}|$ from Equations \ref{absP0} and \ref{absP}, but still keep track of the components of $\vec{B}$ in \autoref{sec:series}.} \subsection{Transformation of Polarization Vector} We next work on the polarization vector. In the fluid frame, the $\vec{E}$-vector of the radiation is oriented along $\vec{k}_{\rm (F)} \times \vec{B}$, i.e., perpendicular to both $\vec{k}_{\rm(F)}$ and $\vec{B}$. Therefore, we write the orthonormal components of the polarization 4-vector $f^\mu$ as \begin{eqnarray} f_{\rm (F)}^{\hat{t}} &=& 0, \quad\qquad\qquad f_{\rm (F)}^{\hat{x}} = \frac{\left(\vec{k}_{\rm (F)}\times\vec{B}\right)_{\hat{x}}}{|\vec{k}_{\rm (F)}|}, \nonumber \\ f_{\rm (F)}^{\hat{y}} &=& \frac{\left(\vec{k}_{\rm (F)}\times\vec{B}\right)_{\hat{y}}}{|\vec{k}_{\rm (F)}|}, \quad f_{\rm (F)}^{\hat{z}} = \frac{\left(\vec{k}_{\rm (F)}\times\vec{B}\right)_{\hat{z}}}{|\vec{k}_{\rm (F)}|}. \label{eq:fmuF} \end{eqnarray} By construction, this 4-vector satisfies \begin{equation} f^\mu k_\mu=0, \qquad f^\mu f_\mu= \sin^2\zeta\,|\vec{B}|^2. 
\label{fnorm} \end{equation} An inverse Lorentz boost transforms the 4-vector $f_{\rm (F)}^{\hat\mu}$ back to the P-frame: \begin{eqnarray} f_{({\rm P})}^{\hat{t}} &=& \gamma\, f_{({\rm F})}^{\hat{t}} +\gamma\beta\cos\chi\, f_{({\rm F})}^{\hat{x}}+\gamma\beta\sin\chi\, f_{({\rm F})}^{\hat{y}}, \nonumber \\ f_{({\rm P})}^{\hat{x}} &=& \gamma\beta\cos\chi\, f_{({\rm F})}^{\hat{t}} +(1+(\gamma-1)\cos^2\chi)\, f_{({\rm F})}^{\hat{x}} \nonumber \\ &~& \qquad\qquad\qquad +(\gamma-1)\cos\chi\sin\chi\, f_{({\rm F})}^{\hat{y}}, \nonumber \\ f_{({\rm P})}^{\hat{y}} &=& \gamma\beta\sin\chi\, f_{({\rm F})}^{\hat{t}} +(\gamma-1)\cos\chi\sin\chi\, f_{({\rm F})}^{\hat{x}} \nonumber \\ &~& \qquad\qquad\qquad +(1+(\gamma-1)\sin^2\chi)\, f_{({\rm F})}^{\hat{y}}, \nonumber \\ f_{({\rm P})}^{\hat{z}} &=& f_{({\rm F})}^{\hat{z}}. \end{eqnarray} Since the Cartesian unit vectors $\hat{x}, ~\hat{y}, ~\hat{z}$ in the P-frame are oriented along the spherical polar unit vectors $\hat{r}, ~\hat{\phi}, ~-\hat{\theta}$ of the Schwarzschild frame, the orthonormal components of $k$ and $f$ in Schwarzschild coordinates are \begin{equation} k^{\hat{t}}=k^{\hat{t}}_{\rm (P)}, \quad k^{\hat{r}}=k^{\hat{x}}_{\rm (P)}, \quad k^{\hat{\theta}}=-k^{\hat{z}}_{\rm (P)}, \quad k^{\hat{\phi}}=k^{\hat{y}}_{\rm (P)}, \label{Schwarzschild} \end{equation} \begin{equation} f^{\hat{t}}=f^{\hat{t}}_{\rm (P)}, \quad f^{\hat{r}}=f^{\hat{x}}_{\rm (P)}, \quad f^{\hat{\theta}}=-f^{\hat{z}}_{\rm (P)}, \quad f^{\hat{\phi}}=f^{\hat{y}}_{\rm (P)}. \end{equation} The photon geodesic emitted at P has three conserved quantities (see for instance \citealt{Bardeen1973b}): its energy $k_t=-1$, its angular momentum around the $\hat{z}$ axis $k_\phi = Rk^{\hat{\phi}}$, and the \cite{Carter1968} constant $C$, which is the square of the total angular momentum of the photon for the Schwarzschild metric. In the P-frame the Carter constant is \begin{equation} C = R^2 \left[\left(k^{\hat{\theta}}\right)^2 + \left(k^{\hat{\phi}}\right)^2\right]. \end{equation} Using the conservation of $k_\phi$ and $C$, we find the coordinates $x$ and $y$ of the geodesic at the observer sky plane (recall the orientation of the sky coordinates $x,y$ described at the top of section \ref{sec:model}) \citep{Bardeen1973b}, \begin{align} \label{alphabeta} x &= -\frac{k_\phi}{\sin\theta_{\rm o}}=-\frac{Rk^{\hat\phi}}{\sin\theta_{\rm o}}, \nonumber\\ y &= k_\theta = R\, \left[\left(k^{\hat{\theta}}\right)^2 - \cot^2\theta_{\rm o}\,\left(k^{\hat{\phi}}\right)^2\right]^{1/2} {\rm sgn}(\sin\phi). \end{align} To compute the polarization vector at the observer, we make use of the Walker-Penrose constant $K_1+iK_2$ \citep{Walker_Penrose_1970}, which takes a simple form for a Schwarzschild spacetime. At the position P, we have (using the sign convention in \citealt{Himwich_2020}), \begin{equation} K_1 = R(k^tf^r-k^rf^t), \qquad K_2 = -R^3 (k^\phi f^\theta - k^\theta f^\phi). \end{equation} Both $K_1$ and $K_2$ are conserved along the geodesic. Therefore, knowing their values, we can evaluate the two transverse components of the polarization electric field $\vec{E}$ at the observer. If we adopt the normalization used in \citet{Himwich_2020}, the field components are \begin{align} \label{Enorm} E_{x,\rm norm} &= \frac{y K_2 + x K_1} {[(K_1^2+K_2^2)\, (x^2+y^2)]^{1/2}}, \nonumber\\ E_{y,\rm norm} &= \frac{y K_1 - x K_2} {[(K_1^2+K_2^2)\, (x^2+y^2)]^{1/2}}, \nonumber\\ E_{x,\rm norm}^2 + E_{y,\rm norm}^2 &= 1. \end{align} 
This normalization is suitable for plotting the orientation of polarization vectors in the $xy$-plane. An alternative normalization is \begin{align} E_{x} &= \frac{y K_2 + x K_1} { x^2+y^2}, \nonumber\\ E_{y} &= \frac{y K_1 - x K_2} { x^2+y^2}, \nonumber\\ E_{x}^2 + E_{y}^2 &= \sin^2\zeta\,|\vec{B}|^2. \label{Enormzeta} \end{align} This retains the original normalization of $f^\mu$ in the fluid frame (eq.~\ref{fnorm}), hence the electric field is proportional to $\sin\zeta\,|\vec{B}|$. For computing the observed polarized intensity, we need to include the dependence on the Doppler factor $\delta$ and path length $l_{\rm p}$, and must also ensure the correct powers of $\sin\zeta$ and $|\vec{B}|$ as given in Equations \ref{absP0} and \ref{absP}. Since the intensity is proportional to $|\vec{E}|^2$, we write the observed electric field components as \begin{align} E_{x,\rm obs} &= \delta^{(3+\alpha_\nu)/2}\,l_{\rm p}^{1/2}\,(\sin\zeta)^{(1+\alpha_\nu)/2}\,|\vec{B}|^{(1+\alpha_\nu)/2}\,E_{x,\rm norm} \nonumber\\ &= \delta^{(3+\alpha_\nu)/2}\,l_{\rm p}^{1/2}\,(\sin\zeta)^{(\alpha_\nu -1)/2}|\vec{B}|^{(\alpha_\nu-1)/2}\,E_{x}, \label{ealphaobs}\\ \nonumber E_{y,\rm obs} &= \delta^{(3+\alpha_\nu)/2}\,l_{\rm p}^{1/2}\,(\sin\zeta)^{(1+\alpha_\nu)/2}\,|\vec{B}|^{(1+\alpha_\nu)/2}\,E_{y,\rm norm} \nonumber\\ &= \delta^{(3+\alpha_\nu)/2}\,l_{\rm p}^{1/2}\,(\sin\zeta)^{(\alpha_\nu -1)/2}|\vec{B}|^{(\alpha_\nu-1)/2}\,E_{y}, \label{ebetaobs}\\ \nonumber E_{x,\rm obs}^2 + E_{y,\rm obs}^2 &= |P(\phi)|, \end{align} where $P(\phi)$ is the observed linear polarized intensity of radiation that is originally emitted by a fluid element at ring azimuthal angle $\phi$. We need one more transformation: we must convert the coordinates $(R,\phi)$ of the emitting region in the fluid to the Cartesian sky coordinates $(x,y)$, or equivalently the polar sky coordinates $(\rho,\varphi)$, at which the radiation is observed, \begin{equation} x = \rho\cos\varphi, \quad y = \rho\sin\varphi. \label{rho_varphi} \end{equation} The relation between $(R,\phi)$ and $(\rho,\varphi)$ is worked out in Appendix~\ref{sec:mapping}. The observed linear polarization $P(\phi)$ can then be described in image coordinates by the complex function $P(\varphi)$, \begin{align} P(\varphi) \equiv Q(\varphi) + i U(\varphi), \label{eq:PQU} \end{align} where the Stokes parameters $Q(\varphi)$ and $U(\varphi)$ are obtained from the electric field components $E_{x,\rm obs}$, $E_{y,\rm obs}$ using \autoref{eq:QU_Ealphabeta}. The electric vector position angle (or EVPA) is then \begin{align} {\rm EVPA} \equiv \frac{1}{2}\arctan{\frac{U}{Q}}. \end{align} This completes the calculation of the intensities $Q$, $U$, $P$ on the image plane. If one wishes to calculate fluxes in the sky plane corresponding to specific source configurations in ring coordinates $(R,\phi)$, it would be necessary to apply the Jacobian of the transformation from $(R,\phi)$ to $(\rho,\varphi)$, as in \autoref{fig:gravity_loops}. The Jacobian determinant is evaluated in Appendix~\ref{sec:mapping}. 
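The full sequence of transformations can be assembled into a single short routine. The sketch below (a minimal illustration under our stated reading of the conventions, not code from this paper) maps a fluid element at $(R,\phi)$ to its sky position and observed electric-field components. Two caveats: the expressions for $K_1$ and $K_2$ are written with coordinate components, so with the orthonormal (hatted) components used here ($k^\phi=k^{\hat\phi}/R$ and $k^\theta=k^{\hat\theta}/R$ in the equatorial plane) the factor $R^3$ in $K_2$ reduces to $R$; and since the Stokes conversion of \autoref{eq:QU_Ealphabeta} appears only in the appendix, the sketch stops at $(E_{x,\rm obs},E_{y,\rm obs})$.

\begin{verbatim}
import numpy as np

def ring_polarization(R, phi, theta_o, beta, chi, B, alpha_nu=1.0, H=1.0):
    """Full analytic chain: emission point (R, phi) -> sky position (x, y)
    and observed E-field components (up to the omitted constant)."""
    # G-frame: Beloborodov emission angle (eqs. psi, belo).
    cospsi = -np.sin(theta_o) * np.sin(phi)
    sinpsi = np.sqrt(1 - cospsi**2)
    cosalpha = cospsi + (2 / R) * (1 - cospsi)
    sinalpha = np.sqrt(1 - cosalpha**2)
    g = np.sqrt(1 - 2 / R)
    # Rotate into the P-frame.
    cosxi = np.cos(theta_o) / sinpsi
    sinxi = np.sin(theta_o) * np.cos(phi) / sinpsi
    k_P = np.array([1, cosalpha, -sinxi * sinalpha, cosxi * sinalpha]) / g
    # Boost into the fluid frame.
    gam = 1 / np.sqrt(1 - beta**2)
    c, s = np.cos(chi), np.sin(chi)
    L = np.array([[gam, -gam * beta * c, -gam * beta * s, 0],
                  [-gam * beta * c, 1 + (gam - 1) * c**2, (gam - 1) * c * s, 0],
                  [-gam * beta * s, (gam - 1) * c * s, 1 + (gam - 1) * s**2, 0],
                  [0, 0, 0, 1]])
    k_F = L @ k_P
    delta = 1 / k_F[0]
    # Fluid-frame polarization vector and |P| (eqs. fmuF, lp, absP0).
    f3 = np.cross(k_F[1:], B) / np.linalg.norm(k_F[1:])
    sz2B2 = f3 @ f3                          # f.f = sin^2(zeta) |B|^2
    P = delta**(3 + alpha_nu) * (k_F[0] / k_F[3]) * H \
        * sz2B2**((1 + alpha_nu) / 2)
    # Inverse boost of f back to the P-frame (flip the sign of beta).
    Linv = L.copy(); Linv[0, 1:3] *= -1; Linv[1:3, 0] *= -1
    f_P = Linv @ np.concatenate(([0.0], f3))
    # Schwarzschild orthonormal components (eq. Schwarzschild).
    kt, kr, kth, kph = k_P[0], k_P[1], -k_P[3], k_P[2]
    ft, fr, fth, fph = f_P[0], f_P[1], -f_P[3], f_P[2]
    # Sky position (eq. alphabeta); clip tiny negative roundoff.
    x = -R * kph / np.sin(theta_o)
    y = R * np.sqrt(max(kth**2 - kph**2 / np.tan(theta_o)**2, 0)) \
        * np.sign(np.sin(phi))
    # Walker-Penrose constant (R^3 -> R for orthonormal components).
    K1 = R * (kt * fr - kr * ft)
    K2 = -R * (kph * fth - kth * fph)
    norm = np.sqrt((K1**2 + K2**2) * (x**2 + y**2))
    return x, y, np.sqrt(P) * (y * K2 + x * K1) / norm, \
           np.sqrt(P) * (y * K1 - x * K2) / norm

eta = np.radians(30.0)                       # eta = chi + 180 deg
B = np.array([np.cos(eta), np.sin(eta), 0.0])
for phi in np.radians([45.0, 135.0, 225.0, 315.0]):
    x, y, Ex, Ey = ring_polarization(4.5, phi, np.radians(20.0),
                                     0.3, np.radians(-150.0), B)
    print(f"phi={np.degrees(phi):5.1f}: x={x:+6.2f} y={y:+6.2f} "
          f"Ex={Ex:+.3f} Ey={Ey:+.3f}")
\end{verbatim}

The parameter values in the example loop ($R=4.5$, $\theta_{\rm o}=20^\circ$, $\beta=0.3$, $\chi=-150^\circ$, $\eta=\chi+180^\circ$) correspond to the fiducial M87*-like model discussed in \autoref{sec:examples}.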
To summarize, in this section we showed how, given the position $(R,\phi$, Fig.~\ref{fig:pframe}) and velocity $(\beta,\chi$, eq.~\ref{velocity2}) of a synchrotron-emitting fluid element located on a tilted equatorial plane around a Schwarzschild black hole, and given also the magnetic field configuration $(B_{\rm eq},\eta,B_z$, eq.~\ref{field_comps}) in the frame of the fluid, one can calculate the sky coordinates $(x,y$, equivalently $\rho,\varphi$) of the image of this radiating element, and the linearly polarized intensity and position angle of the observed radiation. The mapping from the radiating element to the observer's image plane is written as a sequence of analytical calculations that do not require numerically integrating the geodesic equation or iteratively solving any equation. The equations are written in sufficient detail for easy incorporation into modeling calculations. \section{Example Models}\label{sec:examples} The simple model considered in the previous section has the following parameters: tilt angle of the ring $\theta_{\rm o}$, ring radius $R$, velocity vector of the fluid $\vec{\beta}$, which is parameterized by $\beta=v/c$ and $\chi$ (eq.~\ref{velocity2}), fluid frame magnetic field $\vec{B}$, which is parameterized by either $B_r$, $B_\phi$, $B_z$, or $B_{\rm eq}$, $\eta$, $B_z$ (eq.~\ref{field_comps}), and spectral index $\alpha_\nu$. Figures \ref{fig:Bz}--\ref{fig:Beq2} show the polarization patterns produced by this model for selected values of the parameters. In all these examples, we choose $\theta_{\rm o}=20^\circ$ and $\alpha_\nu=1$. Before considering the examples, we briefly summarize the salient features of the polarized image of M87* obtained by the EHT \citepalias{PaperVII}. First, the linear polarized flux shows a pronounced asymmetry around the ring. The polarized flux is strong between PA (measured East of North) $\sim150^\circ$ and $\sim300^\circ$; the peak polarized intensity is around PA $200^\circ$ on April~5 and $240^\circ$ on April~11. The linear polarized flux is much weaker at other angles. The large-scale jet in M87* is oriented towards PA $288^\circ$. Presumably, the accretion disk is also tilted toward this direction. Such a tilt is consistent with the EHT total intensity image shown in \citetalias{PaperIV}. Thus, if we measure angles counter-clockwise with respect to the presumed tilt direction in M87*, the polarized flux is strong between angles $\sim+10^\circ$ and $-140^\circ$, with peaks at $-90^\circ$ and $-50^\circ$ on April~5 and April~11, respectively. In our analytic model, the tilt and putative jet are toward the North. Thus, for a direct comparison of this model with the M87* image, we should rotate the calculated image clockwise by $72^\circ$. Alternatively, we can measure angles as offsets from the jet direction, which is North in the model. Hence, for a model to reproduce what is seen in M87*, it should have strong linearly polarized flux between $+10^\circ$ from the jet, i.e., just to the left of North, and $-140^\circ$ from the jet, which is located in the lower-right quadrant. That is, the polarized flux should concentrate in the right half of the panels in plots such as Figs.~\ref{fig:Bz}--\ref{fig:Beq2} below, shading towards the upper right quadrant. As we will see, this is a fairly strong constraint. The second piece of information from the polarized image of M87* is that the polarization vectors show a twisting pattern that wraps around the black hole \citepalias{PaperVII, PaperVIII}. 
The twist is described quantitatively by the $\beta_2$ mode of the azimuthal decomposition of polarization described in \citet{PWP_2020}. The amplitude of $\beta_2$ describes the degree to which the EVPA obeys rotational symmetry and scales linearly with fractional polarization, while the phase of $\beta_2$ describes the twist angle between the EVPA and the local radial unit vector on the image. In the M87* image, the twist angle is fairly stable in the regions where the polarized flux is strong. With respect to the local radial direction, the EVPA of the polarization vector is rotated clockwise by $\sim70^\circ$. This too is a strong constraint on models, as discussed at length in \citetalias{PaperVIII}. \begin{figure*}[t] \includegraphics[width=9cm]{pure_tilt.pdf} \includegraphics[width=9cm]{azimuthal_velocity.pdf} \includegraphics[width=9cm]{radial_velocity.pdf} \includegraphics[width=9cm]{pure_lensing.pdf} \caption{Polarization patterns corresponding to models with a ``vertical'' magnetic field (non-zero $B_z$ in the fluid frame). In each case, the directions of the ticks indicate the orientation of the polarization $\vec{E}$-vector around the ring as viewed on the sky. The lengths of the ticks are proportional to the polarized intensity. Top Left: Ring with a very large radius and no orbital velocity, so that neither velocity aberration nor lensing plays a role. Top Right: Large ring radius (i.e., no lensing), and fluid orbiting with a tangential velocity $\beta=0.3$ in the clockwise direction ($\chi=-90^\circ$). Bottom Left: Large ring radius (no lensing), and fluid flowing with velocity $\beta=0.3$ radially inward ($\chi=-180^\circ$). Bottom Right: Ring with a small radius $R=6$, hence strong gravitational lensing, but with no fluid velocity, hence no aberration.} \label{fig:Bz} \end{figure*} \subsection{Models with Pure Vertical Field} \citet{Gravity_2018_orbit} reported observations of polarized flares in Sgr A$^*$ in the near-IR, and showed that a model with a dominant vertical magnetic field can reproduce the observations. Motivated by this, we begin by studying the predictions of our toy model for a pure vertical field, oriented normal to the plane of the emitting ring. Figure \ref{fig:Bz} shows results from the analytical model for the case when $B_z=1$, $B_r=B_\phi=0$. It explores the two primary physical effects other than magnetic field direction that influence the observed polarization: (i) Doppler beaming and relativistic aberration caused by motion of the radiating fluid, and (ii) gravitational lensing by the black hole. The Top Left panel in Fig.~\ref{fig:Bz} corresponds to a ring with a large radius ($R=10^4$) such that there is negligible gravitational lensing. We also set $\beta=0$, thereby eliminating Doppler beaming and aberration. The only remaining effect is the tilt of the ring, which causes the pure $B_z$ field in the ring frame to appear in projection on the sky as a vertically oriented (North-South) field. The polarized synchrotron emission from the ring has its EVPA perpendicular to the projected field, i.e., in the East-West direction. The observed polarized intensity, which is indicated by the sizes of the polarization ticks in the plot, is uniform around the ring. In this figure and all others shown in this section, ticks are shown at 50 equally spaced positions in $\phi$. 
The Top Right panel in Fig.~\ref{fig:Bz} shows the effect of including a representative relativistic velocity ($\beta=0.3$) for the fluid in the clockwise tangential direction ($\chi=-90^\circ$), but still keeping a large radius, hence no gravitational deflection. In this case, there is a strong asymmetry in the polarized flux around the ring. However, the bright region of the ring is in the left half of the plot, exactly the opposite of what we require to explain M87*. This contrary behavior is actually rather surprising. Given the direction of the tilt and the clockwise sense of rotation, the fluid in the right half of the plot has a component of its motion towards the observer, while the fluid on the left has a component away from the observer. Doppler beaming ought to favor the right side, yet we see the opposite. This paradoxical behavior is caused by aberration, as we explain in sec.~\ref{sec:analytic}. The Bottom Left panel in Fig.~\ref{fig:Bz} shows the effect of a pure inward radial velocity ($\chi=-180^\circ$), again for a large ring radius. Once again, the bright region of the disk is on the wrong side compared to what is seen in M87*. It is also exactly the opposite of what we would expect from Doppler beaming, since the fluid in the upper half has a velocity component towards the observer, and ought to be bright. Once again, aberration is the explanation. Finally, the Bottom Right panel considers a ring at small radius ($R=6$) such that gravitational deflection of light rays is important. For simplicity, we assume that there is no fluid velocity. In this case, the results are similar to the Bottom Left panel, and the strongest polarized flux is at the bottom, which does not match what is seen in M87*. We do not discuss the $\beta_2$ phase of the polarization patterns for models with pure vertical field, except to note that in the regions where M87* has its strongest polarized flux (upper right), the EVPA twist seen in all the examples in Fig.~\ref{fig:Bz} has the wrong sense. The conclusion from these examples is the following. If the polarized emission that we see in M87* at 230\,GHz is from equatorial gas, and if the gas rotates in the clockwise direction, as \citetalias{PaperV} concluded, and/or flows radially inward, as is natural for accretion, then the magnetic field cannot be dominated by a pure vertical component. There must be substantial radial and tangential field components. Note that the observed ring in the Bottom Right panel in Fig.~\ref{fig:Bz} has a radius slightly larger than the original ring radius $R=6$. The ring is also shifted slightly upward relative to the origin. Both effects are the result of gravitational deflection, as we explain in sec.~\ref{sec:analytic}. The effect is seen only when $R$ is small (gravity is strong), which is the case in this panel of Fig.~\ref{fig:Bz}, and in all the panels in Figs.~\ref{fig:Beq1}, \ref{fig:Beq2}. \begin{figure*}[t] \includegraphics[width=9cm]{azifield_azivelocity.pdf} \includegraphics[width=9cm]{azifield_radvelocity.pdf} \includegraphics[width=9cm]{radfield_azivelocity.pdf} \includegraphics[width=9cm]{radfield_radvelocity.pdf} \caption{Polarization patterns for models with magnetic field in the equatorial plane. Top Left: Azimuthal field ($\eta=90^\circ$) with azimuthal clockwise velocity ($\chi=-90^\circ$). Top Right: Azimuthal field ($\eta=90^\circ$) with radial inward velocity ($\chi=-180^\circ$). Bottom Left: Radial field ($\eta=0^\circ$) with azimuthal clockwise velocity ($\chi=-90^\circ$). 
Bottom Right: Radial field ($\eta=0^\circ$) with radial inward velocity ($\chi=-180^\circ$).} \label{fig:Beq1} \end{figure*} \begin{figure*}[t] \includegraphics[width=9cm]{chi_minus120.pdf} \includegraphics[width=9cm]{chi_minus135.pdf} \includegraphics[width=9cm]{chi_minus150.pdf} \includegraphics[width=9cm]{chi_minus165.pdf} \caption{Polarization patterns for four models that include both radial and azimuthal components of velocity and magnetic field. The models correspond to $\chi=-120^\circ$ (Top Left), $\chi=-135^\circ$ (Top Right), $\chi=-150^\circ$ (Bottom Left), $\chi=-165^\circ$ (Bottom Right), each with magnetic field trailing opposite to the velocity ($\eta=\chi+180^\circ$). The two models in the bottom row come closest to reproducing the polarization pattern seen in M87*.}\label{fig:Beq2} \end{figure*} \subsection{Models with Pure Radial or Tangential Field} We now turn our attention to models with magnetic field entirely in the equatorial plane, i.e., $B_z=0$, non-zero $B_r$ or $B_\phi$. We consider a ring with small radius ($R=6$) and include relativistic fluid motion; thus, lensing, Doppler and aberration are all included. Figure~\ref{fig:Beq1} shows four models, two with radial field ($\eta=0^\circ$) and two with tangential field ($\eta=90^\circ$). For each field configuration, we consider two velocity fields, either pure clockwise rotation ($\chi=-90^\circ$) or pure radial infall ($\chi=-180^\circ$). Three of the four panels in Fig.~\ref{fig:Beq1} have their strongest polarized flux in the correct region of the ring (top and/or right) to match what is seen in M87*. Even the fourth (Top Right panel) has slightly stronger polarized flux at the top. The very different behavior of these models, compared to those in Fig.~\ref{fig:Bz}, is explained in detail in the next section. In brief, for models with magnetic field restricted to the equatorial plane, aberration induces the same sense of flux asymmetry as Doppler beaming and therefore enhances the effect of the latter, whereas in the pure $B_z$ models, aberration induces flux asymmetry with the opposite sign of that due to Doppler beaming, and in fact overwhelms the latter and reverses the sign of what is observed. In this sense, equatorial field-dominated models are more promising for M87*. Considering the twist of the polarization pattern, as discussed in \citetalias{PaperVIII}, a pure tangential field is ruled out because the polarization ticks are predicted to be purely radial, which does not match M87*. A pure radial field is also ruled out since it predicts polarization ticks entirely in the tangential direction. However, these models come closer to what is seen in M87*. It would appear that models in which $B_r>B_\phi$ are most suitable. \subsection{Models with Both Radial and Tangential Field} Figure~\ref{fig:Beq2} shows four models in which both $B_r$ and $B_\phi$ are non-zero, and $B_z=0$. All the models have fluid with clockwise rotation in the sky and radial infall, i.e., the angle $\chi$ of the vector $\vec{\beta}$ is in the lower-left quadrant. Since the radial and tangential magnetic field components in the inner regions of an accretion disk are likely oriented parallel to the motion of the fluid -- the field is ``combed out'' by the flow -- we simplify matters by assuming that the field is aligned with the velocity. Specifically, we choose \begin{align} {\rm Pure}~B_{\rm eq}:\quad \eta=\chi ~~{\rm or} ~~\eta = \chi + \pi. 
\label{etachi} \end{align} For the specific case of a purely equatorial field, we can choose either of the two values of $\eta$ indicated above. The two choices correspond to oppositely oriented directions of the magnetic field lines; this ambiguity has no effect on the linear polarized emission. As we discuss in \autoref{subsec:allfield}, we need to be more careful about the choice of $\eta$ when we have both vertical and equatorial field components. In \autoref{fig:Beq2}, the model in the Top Left panel has tangential velocity larger than radial velocity, and correspondingly $B_\phi>B_r$. In the Top Right panel, the radial and tangential components are equal, while in the lower two panels the radial components of velocity and magnetic field are larger than the respective tangential components. All four models have flux asymmetry that qualitatively matches M87*. All four models also have polarization patterns with the same sense of twist, or sign of $\beta_2$ phase, as observed in M87*. Among the four models, the ones in the bottom row come closest to M87*. \subsection{Models with $R=4.5$ and Varying Inclination} We round out the discussion of examples by considering models with a smaller emission radius, $R=4.5$, which is better matched to M87*, and exploring the effect of varying the tilt angle $\theta_{\rm o}$. \autoref{fig:incmodels} shows models with $\chi = -150^\circ$, $\eta = \chi + \pi = 30^\circ$, and four choices of $\theta_{\rm o}$: $20^\circ$, $40^\circ$, $60^\circ$, and $80^\circ$. The Top Left panel has $\theta_{\rm o}=20^\circ$ and is designed to resemble M87*. The polarized intensity asymmetry (relative to the direction of the jet), as well as the twist of the EVPA pattern, are similar to the EHT observations described in \citetalias{PaperVII} and \citetalias{PaperVIII}. This same model is shown again in \autoref{fig:M87_comparison} with the polarization pattern rotated counter-clockwise by $288^\circ$ to match the jet orientation in M87*, and with the emitting fluid spread out in radius with an exponential profile of scale width $2M$ (see \autoref{sec:M87_comparisons} for details), instead of the infinitely thin emitting ring assumed here. The remaining panels in \autoref{fig:incmodels} show the effect of increasing the tilt angle $\theta_{\rm o}$. The Doppler asymmetry in the polarized intensity increases rapidly since the fluid motion has a larger component parallel to the line-of-sight. The orientation of the asymmetry (bright on the right, dim on the left) as well as the twist of the polarization pattern qualitatively resemble what is seen in the $\theta_{\rm o}=20^\circ$ model. The ring appears increasingly flattened as $\theta_{\rm o}$ increases, but it also acquires an additional asymmetry such that, by $\theta_{\rm o}=80^\circ$, it looks more like a semi-circle than an ellipse. This is because of extreme lensing of radiation emitted from the far side of the ring. As in the previous figures, ticks are equally spaced in $\phi$; the large gaps on the north side of the $\theta_{\rm o}=80^\circ$ image indicate the relative stretching between $\varphi$ and $\phi$ at high inclination. \begin{figure*}[t] \includegraphics[width=9cm]{inc_20.pdf} \includegraphics[width=9cm]{inc_40.pdf} \includegraphics[width=9cm]{inc_60.pdf} \includegraphics[width=9cm]{inc_80.pdf} \caption{Polarization patterns for four models with equatorial magnetic field and emission radius $R=4.5$, viewed at different inclination angles. Top left: $\theta_{\rm o}=20^\circ$. 
Top right: $\theta_{\rm o}=40^\circ$. Bottom left: $\theta_{\rm o}=60^\circ$. Bottom right: $\theta_{\rm o}=80^\circ$. All the models have velocity angle $\chi=-150^\circ$, and magnetic field trailing opposite to the velocity ($\eta=\chi+180^\circ$). The model in the top left, rotated counter-clockwise by $288^\circ$ and with emission spread over a finite range of radii, is shown in \autoref{fig:M87_comparison} as a toy model of M87*.} \label{fig:incmodels} \end{figure*} \subsection{Models with All Field Components} \label{subsec:allfield} We finally discuss models in which all three components of the magnetic field are non-zero. In this general case, we need to be careful about the geometry of the magnetic field. In a three-dimensional accretion flow in which magnetic field lines penetrate the disk from one side to the other, as for instance in a magnetically arrested disk (MAD) field geometry \citep{Narayan_et_al_2003,Igumenshchev_et_al_2003,Tchekhovskoy_et_al_2011,Bisnovatyi-Kogan_2019}, one expects a reflection antisymmetry in $B_{\rm eq}$ about the midplane. That is, $B_r$ and $B_\phi$ would flip sign when crossing the midplane, whereas $B_z$ would retain the same sign on the two sides. Let us assume, without loss of generality, that $B_z$ is positive, i.e., the $z$-component of the magnetic field line points towards the observer, and let us also take $B_{\rm eq}$ to be positive. If the magnetic field is dragged and aligned with the flow, as we assumed in the previous two subsections, the field angle $\eta$ and the flow velocity angle $\chi$ must be related as follows on the two sides of the disk, \begin{eqnarray} &z>0 ~{\rm (near~side)}:\quad& \eta = \chi + \pi, \nonumber\\ &z<0 ~{\rm (far~side)}:\quad& \eta = \chi, \end{eqnarray} where ``near side'' means the side of the disk facing the observer. In the absence of Faraday rotation effects, the above antisymmetry affects emission only by changing the relative sign between $B_{\rm eq}$ and $B_z$, hence it is not relevant if either $B_{\rm eq}$ or $B_z$ is zero. However, when both $B_{\rm eq}$ and $B_z$ are non-zero, one should separately compute the polarized image produced by the near side and far side of the disk and add the resulting Stokes parameters. If Faraday effects internal to the flow are strong enough to depolarize the emission from the far side, the polarized image seen by the observer will be dominated by the near side. The simulations considered in \citetalias{PaperVIII}, for instance, generally show large internal Faraday depths. In such cases, we need to compute only a single image from the near side of the disk, setting $\eta=\chi+\pi$. We do not show examples of models with both vertical and equatorial field since the parameter space is large. \subsection{Numerical Geodesics and Effect of Spin}\label{sec:numerical} A general Beloborodov-like analytic approximation for the emission angle of photons from equatorial matter around a spinning black hole is not known. However, it is possible to solve analytically for the observed polarization once the photon's arrival coordinates on the image are determined from a numerical solution to the geodesic equation; this relation can be explicitly expressed in terms of real elliptic integrals (\citealt{GL_lensing,GL_null}, see also \citealt{Li_et_al_2005,Gates2020} for a calculation of images of an orbiting emitter in this formalism). 
For a spinning black hole, we generalize the P-frame to the ``zero-angular-momentum observer'' (ZAMO) frame, and then consider a boost $\vec{\beta}$ as in \eqref{velocity2} into the corresponding F-frame. The semi-analytic result for the polarized image of such a boosted fluid orbiting a spinning black hole is presented in Figure \ref{fig:withspincomparison}, in which spin is indicated by color. The inner and outer rings in the first two panels correspond to emission radii of $R = 4.5$ and $R = 6$, respectively. The results of the Beloborodov approximation are overlaid with black dashed lines and coincide with the low-spin limit of the semi-analytic Kerr solution. The first and second panels of Fig.~\ref{fig:withspincomparison} generalize the scenarios from the bottom right panel of Fig.~\ref{fig:Bz} and the upper left panel of Fig.~\ref{fig:Beq2}, respectively. The small panels zoom in on one set of ticks from the second panel. \autoref{fig:withspincomparison} illustrates that for the idealized case of purely geometric and relativistic effects that we consider here, black hole spin has only a small effect on the observed EVPA and can be reasonably neglected for the purposes of the toy model. It also shows that the Beloborodov approximation is fairly accurate even at radii as small as $R=4.5$. The effects of spin on observed polarization become more pronounced at very small radius and high observer inclination, neither of which is considered in this paper; these regimes will be the subject of future work. \begin{figure*}[t] \includegraphics[width=\textwidth]{figure7_replacement.pdf} \caption{The effects of spin on the observed polarization pattern. Each of the two main panels displays a different configuration of magnetized fluid. The first panel corresponds to the bottom right panel of Fig.~\ref{fig:Bz} and the second panel corresponds to the top left panel of Fig.~\ref{fig:Beq2}. Both panels show an inclination of $20^\circ$ and negative spin (i.e., clockwise rotation on the image). The inner and outer rings of polarization ticks correspond to emission from $R = 4.5$ and $R=6$, respectively. The color bar shows increasing spin from $a=0$ to $|a|=1$, and the Beloborodov approximation for Schwarzschild is shown as overlaid black dashed lines. The two small panels display a zoom-in of one set of ticks at $R=4.5$ (lower) and $R=6$ (upper). }\label{fig:withspincomparison} \end{figure*} \subsection{Generalizations} \label{subsec:generalizations} Although the examples presented in this paper are restricted to axisymmetric models with emission limited to a single radius, the underlying model is more general. The primary result of the analysis presented in sec.~\ref{sec:model} is an analytical method to map emission properties at a given $(R,\phi)$ in the emitting ring to the properties of the observed radiation in the sky plane. This transformation can be easily applied to models with non-axisymmetric emission, as well as to radially extended sources. In such models, $|\vec{B}|$ would be a function of location and this would need to be included in the calculations. Other quantities like the electron temperature and number density that affect the emissivity could also vary with position and would need to be accounted for. Two other approximations in the model, both made in the interests of simplicity, deserve discussion: (1) We restricted the emitting gas to lie in a single equatorial plane. (2) We took the velocity to lie entirely within the same plane (though we did allow for a general magnetic field). 
Both limitations can be eliminated. The Beloborodov approximation can be applied at any emission location $(R,\phi,z)$, not just at equatorial locations. For non-equatorial locations, the geometry of the Geodesic Frame and the computation of $\alpha$ (Fig.~\ref{fig:gframe}) will differ. This will modify the result for the components of $k^{\hat\mu}_{\rm (P)}$. If a given null geodesic has contributions from several emission regions at different heights $z$ from the equatorial plane, one could compute their individual contributions to the Stokes parameters and add them incoherently. Similarly, an off-plane velocity component will modify the Lorentz transformation coefficients between the P-Frame and the F-Frame, and will alter the geometrical factor that enters the path length calculation. The distinction between ``vertical'' and ``in-plane'' magnetic field components would become less clear, but this is merely a matter of definition. The model discussed in this paper has been derived for a non-spinning (Schwarzschild) black hole. However, as shown in \autoref{sec:numerical}, and as discussed also in \citet{Gravity_2020} and \citetalias{PaperVIII}, black hole spin has very little effect on the polarized image, at least for the low inclination angles considered so far. Finally, the analysis here is focused on optically thin synchrotron emission for which the polarization four-vector $f^\mu$ is given by equation~(\ref{eq:fmuF}) and the electric field is normalized as in equation~(\ref{Enormzeta}). For optically thick emission from a thin accretion disk, other prescriptions will need to be substituted, e.g., \cite{Li_et_al_2009} discuss polarization of X-rays emitted by the scattering atmosphere above a black hole X-ray binary disk. Except for this change, the rest of the analysis should remain the same. \section{Analytical Understanding of the Results}\label{sec:analytic} By Taylor-expanding the expressions given in sec.~\ref{sec:model} in suitably chosen ``small'' quantities, and keeping terms up to second order, we can obtain useful analytical approximations for various observables. This provides a physical understanding of the results shown in sec.~\ref{sec:examples}. In the present context of trying to understand M87* and Sgr A$^*$, we have three small quantities, $2/R \approx 1/3$ (lensing), $\beta\approx 1/3$ (Doppler and aberration), $\sin\theta_{\rm o} \approx 1/3$ (ring tilt\footnote{In the case of M87*, observations of the radio jet suggest a tilt $\theta_{\rm o} \sim17^\circ$ \citep{Walker_2018}, and in the case of Sgr~A$^*$, \cite{Gravity_2018_orbit} estimate $\theta_{\rm o} <30^\circ$ based on the polarization signatures of infrared flares.}), where the numerical values correspond to the models shown in sec.~\ref{sec:examples}. We treat all three quantities on an equal footing in the series expansions we carry out. The full results, with all terms up to quadratic order, are listed in Appendix~\ref{sec:series}. The reason for going up to quadratic order is explained below. Here we use the series expansion of the equations to interpret the numerical results presented in sec.~\ref{sec:examples}. \subsection{Shape of the Observed Ring}\label{sec:ring_shape} We begin with the shape of the ring as observed on the sky. 
To quadratic order, the result is \begin{align} \label{alphaseries} x &= (R+1)\cos\varphi\\ \nonumber &\qquad + \left[ -\frac{1}{2R}\cos\varphi + \sin\theta_{\rm o}\sin 2\varphi - \frac{R}{2}\sin^2\theta_{\rm o}\sin^2\varphi \cos\varphi\right],\\ \label{betaseries} y &= (R+1)\sin\varphi\\ \nonumber &\qquad + \left[ -\frac{1}{2R}\sin\varphi + 2\sin\theta_{\rm o}\sin^2\varphi -\frac{R}{2}\sin^2\theta_{\rm o}\sin^3\varphi \right]. \end{align} The first term in each expression gives the answer up to linear order, and the remaining terms inside the square brackets correspond to quadratic order. Up to linear order we see that the observed ring is circular, but with an apparent radius larger by unity (i.e., $GM/c^2$) than the radius of the source ring. The radial ``expansion'' of the observed ring is caused by gravitational deflection (lensing) of geodesics. As shown in Fig.~\ref{fig:gframe}, lensing causes the geodesic to curve around the black hole such that the impact parameter is larger than the naive straight-line estimate $R\sin\psi$. Among the quadratic terms in equations~(\ref{alphaseries}) and (\ref{betaseries}), the terms proportional to $1/R$ are second-order corrections to the ring radius, and the $\sin^2\theta_{\rm o}$ terms describe the flattening of the observed ring because of tilt. The latter is simple geometry: a tilted circular ring appears elliptical in shape, with a minor-axis radius equal to $\cos\theta_{\rm o} \approx 1-(1/2)\sin^2\theta_{\rm o}$ times the original ring radius. The $\sin\theta_{\rm o}$ terms describe the effect of tilt on lensing. Geodesics reaching the observer from the upper half of the ring ($0<\phi<\pi$) travel a longer distance near the black hole and suffer more deflection (this is the case shown schematically in Fig.~\ref{fig:gframe}), while geodesics from the lower half ($\pi<\phi<2\pi$) experience less deflection. This causes an upward shift of the observed ring, i.e., a net positive bias in $y$. The shift is of the order of $\sin\theta_{\rm o}$ in units of $GM/c^2$. The shift is seen in all the models in sec.~\ref{sec:examples} that have a smallish radius ($R=6$, Bottom Right panel in Fig.~\ref{fig:Bz}, and all panels in Figs.~\ref{fig:Beq1}, \ref{fig:Beq2}, \ref{fig:incmodels}). \subsection{Doppler Factor and $\sin\zeta$} Expanding up to second order, we find for the Doppler factor $\delta$, \begin{eqnarray} \delta &=& \left(1-\frac{1}{R}\right) \nonumber \\ &~& -\left[\frac{\beta^2}{2} + \frac{1}{2R^2} - \frac{2\beta}{R}\cos\chi + \beta\sin\theta_{\rm o}\sin(\chi+\varphi)\right], \label{delta} \end{eqnarray} where the second-order terms are shown on the second line inside square brackets. The linear-order term $-1/R$ describes deboosting of the observed intensity by gravitational redshift, and the first three second-order terms describe various other deboosting effects such as second-order Doppler. Since $\cos\chi$ is negative for radial infall, all three terms are positive for the inflowing models we have considered, causing uniform dimming all around the ring. Azimuthal modulation of the intensity from relativistic beaming is described by the final term, $\beta\sin\theta_{\rm o}\sin(\chi+\varphi)$, and this is the only term that varies as a function of $\varphi$. The fact that this important effect appears only at second order is a major reason for expanding the equations up to quadratic order rather than stopping at linear order. Why is it second order? 
It is because azimuthal modulation from Doppler beaming requires both tilt and fluid velocity, each of which is treated as a small quantity in our analysis.\footnote{For the models considered in sec.~\ref{sec:examples}, where each of the three small quantities is $\approx 1/3$, one expects second-order terms to be of order 10\% of the leading-order terms. However, many second-order terms come with large coefficients, e.g., intensity is proportional to $\delta^4$ so Doppler boost goes like $-4\beta\sin\theta_{\rm o}\sin(\chi+\varphi)$. Hence the second-order contributions are often not small. The analysis in this section should thus be used only for qualitative understanding. For accurate results, it is necessary to evaluate numerically the full equations given in sec.~\ref{sec:model}.} Doppler beaming causes an increase in the observed polarized intensity when $\sin(\chi+\varphi)$ is negative, with the maximum boost occurring when $\chi+\varphi=-90^\circ$. For pure clockwise rotation ($\chi=-90^\circ$), the maximum boost is at $\varphi=0$. This is natural since, for a ring tilted towards the North, the fluid at $\varphi=0$ has the largest velocity component towards the observer and hence produces the most Doppler-boosted radiation. For pure radial infall ($\chi=-180^\circ$), the maximum boost is at $\varphi=90^\circ$, again because the fluid there has the maximum velocity towards the observer. Since we consider models that lie between these two extremes, we expect the polarized intensity to be maximum somewhere in the top right quadrant, $0<\varphi<90^\circ$ (for a tilt to the North). This agrees with what is observed in M87* (once we allow for the different tilt/jet direction). Surprisingly, it is not true for the models shown in Fig.~\ref{fig:Bz}. To understand the reason for this discrepancy, we need to consider a second effect. From equation~(\ref{absP}), the observed polarized intensity depends on the Doppler factor $\delta$ as well as the path length $l_{\rm p}$ and the angle $\zeta$ between the photon wave-vector $\vec{k}_{\rm (F)}$ in the fluid frame and the local magnetic field $\vec{B}$. For small tilt angles, the variation in the path length is small and not very important. We ignore it in the discussion below. The angle $\zeta$, however, is crucial since synchrotron emission is maximum when $\vec{k}_{\rm (F)}$ and $\vec{B}$ are orthogonal to each other ($\zeta=\pm\,\pi/2$) and vanishes when they are parallel ($\zeta=0,~\pi$). Appendix~\ref{sec:series} evaluates $|\vec{B}|^2\sin^2\zeta$ up to quadratic order. We consider in the following subsections the effect of various terms in the series expansion. \subsection{Models with Pure Vertical Field} We begin by considering a model with pure $B_z$ and consider the non-zero terms in $|\vec{B}|^2\sin^2\zeta$: \begin{align} \label{sinzeta1} &B_z~{\rm Finite},~B_{\rm eq}=0: \nonumber\\ &|\vec{B}|^2\sin^2\zeta = \biggl[ -\frac{4}{R}\sin\theta_{\rm o}\sin\varphi + \frac{4}{R^2} + \sin^2\theta_{\rm o} - \frac{4\beta}{R}\cos\chi \nonumber\\ &\qquad \qquad \qquad + 2\beta\sin\theta_{\rm o}\sin(\chi+\varphi) + \beta^2 \cdots \biggr]\,B_z^2. \end{align} There are several interesting effects here. First, we have only second-order terms, no zeroth- or first-order terms (this is another reason for going up to second order in the analysis). It suggests that the observed flux should be strongly suppressed. 
This is not surprising since the emission towards the observer goes as $\sin^2\zeta \sim\sin^2\theta_{\rm o}$, which is small for models with small tilt. The lack of zeroth- and first-order terms also means that the importance of the second-order quantities in equation~(\ref{sinzeta1}) is enhanced. Consider first the term $-(4/R)\sin\theta_{\rm o}\sin\varphi$, which describes the combined effect of lensing ($4/R$) and tilt ($\sin\theta_{\rm o}$). Figure~\ref{fig:gframe} shows the origin of this term. In the absence of lensing, a geodesic travels on a straight line to the observer and hence makes an angle $\theta_{\rm o}$ with the (vertical) magnetic field. When gravitational ray deflection is included, the angle at the emission point is modified. For a point on the North or upper half of the ring (the case shown in Fig.~\ref{fig:gframe}), the deflection is such that the photon wave-vector becomes more nearly parallel to the $z$-axis, i.e., more parallel to the magnetic field. Thus $\zeta$ is reduced, and this causes the emissivity to go down. The decrease is largest when $\varphi=90^\circ$, as indeed we find in equation~(\ref{sinzeta1}). If we consider instead a point on the South or lower half of the ring, e.g., $\varphi=-90^\circ$, the gravitational deflection works in the opposite sense and causes $\zeta$ to increase, and the emissivity to correspondingly increase. The net result is an asymmetry in the polarized flux around the ring such that the maximum flux is in the South and the minimum is in the North, precisely as seen in the Bottom Right panel in Fig.~\ref{fig:Bz}. Consider next the term $2\beta\sin\theta_{\rm o}\sin(\chi+\varphi)$, which corresponds to the combined effect of tilt and relativistic motion. Here the relevant effect is aberration. Because of the motion of the fluid, the orientation of the wave-vector $\vec{k}_{\rm (F)}$ in the fluid frame is different from its orientation $\vec{k}_{\rm (P)}$ in the P-frame. The aberration effect is such that fluid that is moving towards the observer has $\vec{k}_{\rm (F)}$ rotated closer to the $z$-axis in the fluid frame, i.e., more nearly parallel to $\vec{B}$, while fluid that is moving away from the observer has the tilt of $\vec{k}_{\rm (F)}$ with respect to $\vec{B}$ increased. The former fluid element thus emits less and the latter more in the direction of the observer. This counteracts the effect of Doppler beaming. In fact, since the constant ($\varphi$-independent) terms in equation~(\ref{sinzeta1}) are of the same order as the modulation term $\sin(\chi+\varphi)$ (note that $2\beta\sin\theta_{\rm o}$ is almost equal to $4/R^2+\sin^2\theta_{\rm o}+\beta^2$), the cancellation tends to be quite pronounced when $\chi+\varphi \sim -90^\circ$. The net effect is that aberration overwhelms Doppler beaming and gives the patterns seen in the Top Right and Bottom Left panels in Fig.~\ref{fig:Bz}. \subsection{Models with Pure Equatorial Field} When we consider models with a pure equatorial field ($B_{\rm eq}$ finite, $B_z=0$), the situation is quite different. Focusing on $|\vec{B}|^2\sin^2\zeta$, we find \begin{align} \label{sinzeta2} &B_{\rm eq}~{\rm Finite},~B_z=0,~\eta=\chi+\pi:\nonumber\\ &|\vec{B}|^2\sin^2\zeta \approx B_{\rm eq}^2 + \left[ - 2\beta\sin\theta_{\rm o}\sin(\chi+\varphi) \cdots \right]\,B_{\rm eq}^2, \end{align} where we have written only one of the second-order terms.
As in sec.~\ref{sec:examples}, we have simplified matters by assuming that the magnetic field is oriented anti-parallel to the velocity: $\eta=\chi+\pi$. The first thing to note is that in the case of an equatorial field there is a non-vanishing zeroth-order term. For small tilt, a magnetic field in the equatorial plane is almost orthogonal to the photon wave-vector, hence the synchrotron emissivity in the direction of the observer is nearly maximum. Correspondingly, the second-order terms are less important. Moreover, the second-order term in equation~(\ref{sinzeta2}) appears with the same sign as the corresponding term in $\delta$ (eq.~\ref{delta}), and the opposite sign as in equation~(\ref{sinzeta1}). The reason is simple. When aberration tilts the wave-vector closer to the $z$-axis, the wave-vector becomes more nearly orthogonal to $\vec{B}$, and hence the emissivity increases. Thus in equatorial field models, the second-order terms in $|\vec{B}|^2\sin^2\zeta$ cooperate with and enhance the effect of Doppler beaming, as seen in the panels in Figs~\ref{fig:Beq1} and \ref{fig:Beq2}. As an aside, when both $B_{\rm eq}$ and $B_z$ are non-zero, and if we assume as before that $\eta=\chi+\pi$, then there is a first-order term $-2\sin\theta_{\rm o}\sin(\eta+\varphi)\,B_{\rm eq}B_z$, which again has the same sign as the corresponding term in $\delta$. \subsection{Twist of the Polarization Pattern} \label{sec:twist} We now briefly discuss the twist of the polarization pattern around the ring. When the field is purely in the equatorial plane, the results are transparent. To zeroth order, the electric field in the sky plane is given by \begin{alignat}{2} &&E_{x,\rm obs} &= -\sin\varphi\, B_r - \cos\varphi\, B_\phi =-\sin(\eta+\varphi)\,B_{\rm eq},\nonumber\\ \label{Ebetaobs} &&E_{y,\rm obs} &= \cos\varphi\, B_r - \sin\varphi\, B_\phi = \cos(\eta+\varphi)\,B_{\rm eq}. \end{alignat} That is, the electric field is oriented perpendicular to the projected magnetic field (and is linear in $B_{\rm eq}$), as one would expect. Instead of considering the electric field, one could consider the Stokes parameters $Q$ and $U$ and look at their Fourier coefficients $\beta_m$ \citep{PWP_2020}, as described in Appendix~\ref{sec:series}. The most useful coefficient is $\beta_2$, whose complex phase directly gives the orientation of the twist. If the electric field is radial, the phase of $\beta_2$ is zero; if it is rotated clockwise from radial by $45^\circ$, the phase is $-90^\circ$; and if the electric field is tangential, the phase is $-180^\circ$. The EHT observations of M87* give a phase $\sim -130^\circ \equiv +230^\circ$. From Appendix~\ref{sec:series}, the leading-order term in $\beta_2$ in the case of a pure equatorial magnetic field is \begin{equation} \beta_2 \approx e^{i(\pi+2\eta)} B_{\rm eq}^2. \label{beta2eq} \end{equation} The phase of this quantity will match the phase observed in M87* if $\eta\sim 25^\circ$. Hence, the magnetic field must be mostly radial. When $B_{\rm eq}=0$ and we have a purely vertical field, the phase of $\beta_2$ is determined by the coefficient of $B_z^2$, which consists entirely of second-order terms: \begin{equation} B_{\rm eq}=0:\quad \beta_2= \left[\left(-\frac{4}{R^2}+\frac{4\beta}{R}e^{i\chi}-\beta^2e^{2i\chi}\right) B_z^2\right]. \label{beta2z} \end{equation} If lensing is unimportant, i.e., $R$ is large, then the $\beta^2$ term dominates and the phase of $\beta_2$ is determined by the orientation angle $\chi$ of the fluid velocity.
For a radial velocity ($\chi=\pi$), the phase of $\beta_2$ is $\pi$, i.e., the polarization vectors should be tangentially oriented. This is indeed seen in the brightest part of the ring in the Bottom Left panel in Fig.~\ref{fig:Bz}. Similarly, for a tangential velocity ($\chi=-\pi/2$), the phase of $\beta_2$ is $0$ and the polarization ticks should be radial, as seen in the Top Right panel of Fig.~\ref{fig:Bz}. Finally, if there is no velocity but we consider strong lensing (small $R$), then equation~(\ref{beta2z}) shows that $\beta_2$ has phase $\pi$ and the polarization should be tangential, as in the Bottom Right panel. (These three limiting phases are verified numerically in the short sketch below.) \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{angelo_ff_plot_blur20uas_v5.pdf} \caption{Comparison of GRMHD simulations to images of the ring model for simulation parameters favored in \citetalias{PaperVIII}. The left three columns show random snapshots, time-averaged images, and blurred time averages of each GRMHD simulation; the right column shows the image generated by the simple ring model when evaluated for magnetic field and fluid velocity values taken from the simulations at $R=4.5$ after azimuthal and temporal averaging. Ticks show polarization magnitude and position angle where total intensity exceeds 5\% of the maximum. Grayscale shows total intensity in linear scale (directly proportional to polarization magnitude for the ring model). The total intensity and polarization magnitude are separately normalized in each panel. Panels show the average fractional polarization weighted by total intensity at bottom left; note that the GRMHD images are heavily depolarized, whereas the ring model images are not. The ring model and averaged images show the argument of the $\beta_2$ PWP mode at top left.} \label{fig:GRMHD} \end{figure*} \begin{figure*}[t] \centering \includegraphics[height=7.7cm]{M87_compare_image-crop.pdf} \hfill\includegraphics[height=7.7cm]{M87_compare_model-crop.pdf} \caption{Comparison of the EHT polarimetric image of M87* on 2017 April~11 (left) with a representative ring model (right). Ticks show polarization fraction (color), magnitude (length), and position angle (direction); grayscale is identical for the two panels and shows total intensity of the EHT image of M87*. Ticks are only plotted where the M87* polarization exceeds 2\% of the maximum intensity. All images are shown after convolution with a circular beam of FWHM $23\,\mu{\rm as}$ (shown in the left panel). As in \autoref{fig:GRMHD}, the total intensity and polarization are individually normalized for each panel. The ring model has clockwise rotation with radial inflow, corresponding to the top left model in \autoref{fig:incmodels} after counterclockwise rotation by $288^\circ$. For complete model details, see \autoref{sec:M87_comparisons}. The fractional polarization of the resolved ring model is set to 70\%; the fractional polarization is reduced only through beam depolarization. Even after blurring, the ring model has significantly higher fractional polarization than the M87* image, although the relative variation in fractional polarization is similar across both images. } \label{fig:M87_comparison} \end{figure*} \section{Comparison to Observations} \label{sec:observation_comparisons} Our ring model provides a convenient framework for direct comparison with a variety of polarimetric observations of near-horizon emission. We now discuss two specific cases of particular interest: polarimetric imaging with the EHT and infrared flares of Sgr~A*.
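Before turning to these comparisons, we note that the twist phases derived in \autoref{sec:twist} are easy to check numerically. The following is an illustrative sketch of our own (the function name is ours; it simply evaluates the bracket in eq.~\ref{beta2z}), not part of the formal analysis:
\begin{verbatim}
import numpy as np

def beta2_vertical(R, beta, chi):
    # Coefficient of B_z^2 in beta_2 for a pure vertical field,
    # following equation (beta2z).
    return -4.0/R**2 + (4.0*beta/R)*np.exp(1j*chi) - beta**2*np.exp(2j*chi)

# Large R (negligible lensing): the phase tracks the velocity angle chi.
np.angle(beta2_vertical(1e3, 0.3, np.pi), deg=True)     # ~180: tangential ticks
np.angle(beta2_vertical(1e3, 0.3, -np.pi/2), deg=True)  # ~0: radial ticks
# No velocity, strong lensing (small R): the -4/R^2 term gives phase 180.
np.angle(beta2_vertical(3.0, 0.0, 0.0), deg=True)       # 180: tangential ticks
\end{verbatim}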
\subsection{Comparison to the M87 Polarized Image} \label{sec:M87_comparisons} Recent EHT observations produced polarized images of M87* \citepalias{PaperVII}. \added{As reported in the one-zone model comparisons performed in \citetalias{PaperV} and \citetalias{PaperVIII}, the brightness, angular size, and expectation of significant Faraday effects coarsely constrain the magnetic field strength $B$, electron number density $n_e$, and electron temperature $T_e$ in the flow imaged by the EHT. The \citetalias{PaperVIII} results suggest that $B \lesssim 30$~G, $10^4<n_e<10^7~{\rm cm^{-3}}$, and $10^{10} < T_e < 1.2\times10^{11}~{\rm K}$.} \replaced{These images}{The reconstructed images in \citetalias{PaperVII}} were compared to general relativistic magnetohydrodynamic (GRMHD) simulations to identify a space of favored model parameters \citepalias{PaperVIII}. We will now explore whether our ring model can reproduce the polarization structure in these favored GRMHD simulations and in EHT images of M87*. For the GRMHD comparison, we first perform an azimuthal and temporal averaging in the fluid domain to approximate a stationary axisymmetric flow. In the fluid frame, the magnetic field in each cell is decomposed in Cartesian Kerr-Schild coordinates, recast into cylindrical coordinates, and azimuthally averaged. These azimuthally averaged magnetic field decompositions are then further averaged over time between $7500\leq t/(GM/c^3) \leq 10000$ (the final quarter of these simulations). We then sample values of the fluid velocity and magnetic field vectors from the averaged simulations and use these values to generate ring models at $\theta_{\rm o} = 17^\circ$. To avoid sampling near the midplane, where the tangential and radial field directions tend to flip sign abruptly, we use $z=1\,M$, just above the midplane. We use $R=4.5\,M$, corresponding to the apparent lensed size of the emission ring in EHT images of M87* (see the later discussion of the observed image). To create an image from the one-dimensional ring model, we adopt a radial profile that decays symmetrically in $R$ about $R=4.5$ as an exponential with a scale width of $2M$ \citepalias[EHT images only constrain this width to be ${<}\,5M$;][]{PaperVI}. We take a pixel-wise fractional polarization $|m|$ of 0.7 before blurring in the ring model. Finally, we convolve both the ring model image and the GRMHD image with a $20\,\mu{\rm as}$ Gaussian kernel. Using this approach, \autoref{fig:GRMHD} compares four favored GRMHD models to the corresponding ring models. In each case, the ring model reproduces the sense of EVPA twist and relative polarized intensity of the averaged and blurred GRMHD image, although discrepancies in $\arg ( \beta_2 )$ suggest contributions from emission away from the midplane or from other effects that are not included in the ring model (e.g., black hole spin or Faraday effects). The $R_{\rm low}$ and $R_{\rm high}$ parameters adapted from \citet{Mosci_2016} for use in \citetalias{PaperV} tune the ratio of electron to ion temperatures depending on the magnetic energy density of the plasma; large values of $R_{\rm high}$ tend to produce significant emission far from the midplane, particularly in SANE models. Also, Faraday effects in MAD models can produce significant coherent rotation of the EVPA and, hence, of $\arg ( \beta_2 )$ \citepalias{PaperVIII}.
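The imaging step just described can be sketched compactly. The following is our own illustrative code, not the pipeline used for the figures: the constants and the placeholder azimuthal profile are assumptions, while the actual $I(\varphi)$, $Q(\varphi)$, $U(\varphi)$ would come from evaluating the ring model of sec.~\ref{sec:model} with the sampled field and velocity.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

FOV, NPIX = 32.0, 256   # field of view (GM/c^2) and pixel count (assumed)
R0, WIDTH = 4.5, 2.0    # ring radius and exponential scale width (GM/c^2)
THETA_G = 3.8           # microarcseconds per GM/c^2 for M87*

def ring_to_image(I_of_phi):
    # Paint a 1-D azimuthal profile I(phi) onto a 2-D grid, extended
    # radially as exp(-|r - R0| / WIDTH).
    x = np.linspace(-FOV/2, FOV/2, NPIX)
    xx, yy = np.meshgrid(x, x)
    r, phi = np.hypot(xx, yy), np.arctan2(yy, xx)
    return I_of_phi(phi) * np.exp(-np.abs(r - R0) / WIDTH)

# Placeholder azimuthal profile; the real one comes from the ring model.
img = ring_to_image(lambda phi: 1.0 + 0.3*np.sin(phi))

# Blur with a 20 microarcsec FWHM Gaussian: sigma = FWHM/(2 sqrt(2 ln 2)).
sigma_pix = (20.0/2.355) / (FOV*THETA_G/NPIX)
img_blurred = gaussian_filter(img, sigma_pix)
\end{verbatim}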
\autoref{fig:M87_comparison} compares a representative ring model to the ``consensus'' EHT polarimetric image for 2017 April 11 (i.e., the method-averaged image; see \citetalias{PaperVII}). The ring model parameters are chosen based on the observed image and a priori expectations for M87*. For simplicity, we take $B_z=0$, although non-zero values of $B_z/B_{\rm eq}$ over a modest range also give similar results. We use $\chi=-150^\circ$ to roughly match the observed $\beta_2$ for M87* (see \autoref{sec:twist}). We take $R = d/(2\theta_{\rm g}) - 1 \approx 4.5$ (\autoref{sec:ring_shape} explains the $-1$ term), where $d \approx 42\,\mu{\rm as}$ is the observed ring diameter and $\theta_{\rm g} \approx 3.8\,\mu{\rm as}$ is the angular gravitational radius \citepalias{PaperVI}. We use $\beta = 0.4$, which is comparable to the equatorial velocity seen in GRMHD simulations \citep[see][]{Ricarte_2020}. We use $\theta_{\rm o} = 20^\circ$ to match the jet inclination of M87*. Thus, this model has a modestly relativistic fluid with clockwise rotation and predominantly radial infall. This model corresponds to the top left panel of \autoref{fig:incmodels} after rotation to match the jet position angle of M87*, $288^\circ$. As with the GRMHD comparison, the ring model is evaluated over an exponential profile with a scale width of $2\,M$ centered at $R=4.5\,M$. The resulting ring model image is broadly consistent with the polarization morphology of the EHT image. Although the qualitative agreement in \autoref{fig:M87_comparison} is encouraging, our simple ring model fundamentally fails to reproduce all the features of the M87* image. Namely, our simplest model would produce a high fractional polarization ($\gtrsim 60\%$), while the M87* image has a low resolved fractional polarization ($\lesssim 20\%$). This suggests that significant depolarization from internal Faraday effects is essential when modeling and interpreting the M87* image. Nevertheless, the success of the ring model in reproducing the structure of some GRMHD images that have significant Faraday effects is encouraging for the prospects of physical inference from this simple model. One possibility for using our model in a more complex emission scenario is to combine multiple ring models that correspond to different emission regions. Specifically, the assumption $\eta = \chi + \pi$ corresponds to emission sourced by entrained magnetic field lines on the near side of the accretion flow (see \autoref{subsec:allfield}). The far side of the flow would instead have $\eta = \chi$, flipping $\vec{B}_{\rm eq}$. Ignoring that contribution is equivalent to assuming that Faraday depolarization effects in the midplane are strong, so that the far-side emission is fully depolarized (as indicated in many models considered in \citetalias{PaperVIII}; see \citealt{Ricarte_2020}). Our ring model could also be adapted to the case of weak Faraday rotation in the midplane; the resulting image would be the sum of two ring models, one with $\eta = \chi$ and the other with $\eta = \chi + \pi$. Both cases would reduce the image polarization substantially and may give better agreement with the M87* image, but we defer a full analysis to a future paper.
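The parameter choices above involve arithmetic that is easy to verify (a sketch using only numbers quoted in the text; the variable names are ours):
\begin{verbatim}
import numpy as np

d, theta_g = 42.0, 3.8   # ring diameter and gravitational radius (microarcsec)
R = d/(2*theta_g) - 1    # subtracting 1 removes the 1 GM/c^2 lensing expansion
print(round(R, 1))       # -> 4.5

# Twist check: for a pure equatorial field with eta = chi + pi and
# chi = -150 deg, eq. (beta2eq) gives arg(beta_2) = pi + 2*eta.
eta = np.deg2rad(-150.0) + np.pi
print(np.angle(np.exp(1j*(np.pi + 2*eta)), deg=True))
# -> -120 deg, close to the observed phase of ~ -130 deg
\end{verbatim}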
\subsection{Comparison to Sgr~A* Polarization} The polarization of Sgr~A* shows continuous variability in the submillimeter \citep{Marrone_2006,Johnson_2015,Bower_2018} and also shows rapid variability during near-infrared (NIR) flares \citep{Eckart_2006,Trippe_2007,Zamaninasab_2010,Gravity_2018}. The variability often appears as ``loops'' in Stokes $Q$-$U$, and is frequently attributed to localized emission from an orbiting ``hotspot'' \citep{Broderick_Loeb_2005,Broderick_Loeb_2006,Fish_2009}. For the case of NIR flares, Faraday effects, absorption, and background emission are insignificant, so we can directly compare observed values of polarization and centroid motion with a simulated hotspot-only model. \autoref{fig:gravity_loops} shows a representative example. In this figure, we compute the hotspot \replaced{polarization}{polarized flux in the} $(Q,U)$ plane over a full period for a set of orbits with varying emission radius and inclination. We hold the underlying magnetic field structure to be vertical and constant, and adopt a relativistic Keplerian velocity for the hotspot: $\beta = 1/\sqrt{r-2}$. Our results are similar to previous studies with fully numerical calculations \citep[see, e.g.,][]{Fish_2009, Gravity_2018_orbit, Gravity_2020}; lensing and aberration compress the imaged azimuthal evolution of the polarization on one side of the flow and expand it on the other. In the formalism of azimuthal Fourier modes on the ring \citep{PWP_2020}, power is shifted from the $m=2$ mode to the $m=1$ mode. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{gravity_test.pdf} \caption{Polarization signatures for a vertically magnetized hotspot on a circular, relativistic Keplerian orbit. Each curve shows the \replaced{polarization}{polarized flux} for a full orbit. Different curves correspond to varying the hotspot radius (top) and viewing inclination (bottom). \added{Note that we use radio astronomy conventions for $Q$ and $U$ here, distinct from those in \autoref{eq:QU_Ealphabeta} by an overall sign.} } \label{fig:gravity_loops} \end{figure} \section{Summary} \label{sec:summary} We have developed an analytical method for computing the polarized image of a synchrotron-emitting fluid ring orbiting a Schwarzschild black hole. Given simple assumptions for the magnetic field geometry and fluid velocity, this model allows us to generate predictions of the EVPA and relative polarized intensity as functions of azimuth in the observed image, at arbitrary viewing inclination. We explored the main features of the model through a number of representative examples and by further expansion in the inverse emission radius (lensing), fluid velocity (Doppler and aberration), and observer inclination (ring tilt). These expansions reveal how the various physical effects influence the polarized image. \added{In its simplest form, the fractional polarization of our model is significantly higher than that seen in EHT images of M87* \citepalias{PaperVII}. This may indicate significant sub-beam depolarization, potentially from strong internal Faraday effects \citepalias{PaperVIII}. If so, observations at higher frequencies, where Faraday effects are suppressed, may show significantly higher image polarizations, while observations at lower frequencies are expected to show a heavily depolarized ``core.''} Our polarized ring model provides intuition and insights about how a black hole's accretion flow and spacetime combine to produce a polarized image.
It also provides a pathway to constrain these physical properties through direct comparisons with data and images from the EHT, GRAVITY, and future X-ray polarimetry studies. Extensions such as non-axisymmetric structure and non-equatorial emission will provide an expanded class of geometrical models to complement the growing library of GRMHD simulations \citepalias{PaperV}. The inclusion of black hole spin will be necessary for a rigorous understanding of the M87* polarization, particularly if emission at small radii is significant. Further studies that examine the model's capability to match snapshots of GRMHD simulations with similar magnetic field and flow conditions will elucidate how readily field geometries can be inferred directly from polarized images. \acknowledgments{We thank the National Science Foundation (awards OISE-1743747, AST-1816420, AST-1716536, AST-1440254, AST-1935980) and the Gordon and Betty Moore Foundation (GBMF-5278) for financial support of this work. This work was supported in part by the Black Hole Initiative, which is funded by grants from the John Templeton Foundation and the Gordon and Betty Moore Foundation to Harvard University. \added{Support for this work was also provided by the NASA Hubble Fellowship grant HST-HF2-51431.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555.} \replaced{The authors of the present paper thank}{The Event Horizon Telescope Collaboration thanks} the following organizations and programs: the Academy of Finland (projects 274477, 284495, 312496\added{, 315721}); \deleted{the Advanced European Network of E-infrastructures for Astronomy with the SKA (AENEAS) project, supported by the European Commission Framework Programme Horizon 2020 Research and Innovation action under grant agreement 731016;} \added{the Agencia Nacional de Investigación y Desarrollo (ANID), Chile via NCN$19\_058$ (TITANs) and Fondecyt 3190878}; the Alexander von Humboldt Stiftung; an Alfred P. Sloan Research Fellowship; Allegro, the European ALMA Regional Centre node in the Netherlands, the NL astronomy research network NOVA and the astronomy institutes of the University of Amsterdam, Leiden University and Radboud University; the Black Hole Initiative at Harvard University, through a grant (60477) from the John Templeton Foundation; the China Scholarship Council; \deleted{Comisión Nacional de Investigación Científica y Tecnológica (CONICYT, Chile, via PIA ACT172033, Fondecyt projects 1171506 and 3190878, BASAL AFB-170002, ALMA-conicyt 31140007);} Consejo Nacional de Ciencia y Tecnolog\'{\i}a (CONACYT, Mexico, projects U0004-246083, U0004-259839, F0003-272050, M0037-279006, F0003-281692, 104497, 275201, 263356); the Delaney Family via the Delaney Family John A.
Wheeler Chair at Perimeter Institute; Dirección General de Asuntos del Personal Académico--Universidad Nacional Autónoma de México (DGAPA--UNAM, projects IN112417 and IN112820); the European Research Council Synergy Grant ``BlackHoleCam: Imaging the Event Horizon of Black Holes'' (grant 610058); the Generalitat Valenciana postdoctoral grant APOSTD/2018/177 and GenT Program (project CIDEGENT/2018/021); MICINN Research Project PID2019-108995GB-C22; the Gordon and Betty Moore Foundation \replaced{(grants GBMF-3561, GBMF-5278)}{(grant GBMF-3561)}; the Istituto Nazionale di Fisica Nucleare (INFN) sezione di Napoli, iniziative specifiche TEONGRAV; the International Max Planck Research School for Astronomy and Astrophysics at the Universities of Bonn and Cologne; \deleted{the Jansky Fellowship program of the National Radio Astronomy Observatory (NRAO);} Joint Princeton/Flatiron and Joint Columbia/Flatiron Postdoctoral Fellowships (research at the Flatiron Institute is supported by the Simons Foundation); the Japanese Government (Monbukagakusho: MEXT) Scholarship; the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for JSPS Research Fellowship (JP17J08829); the Key Research Program of Frontier Sciences, Chinese Academy of Sciences (CAS, grants QYZDJ-SSW-SLH057, QYZDJ-SSW-SYS008, ZDBS-LY-SLH011); the Leverhulme Trust Early Career Research Fellowship; the Max-Planck-Gesellschaft (MPG); the Max Planck Partner Group of the MPG and the CAS; the MEXT/JSPS KAKENHI (grants 18KK0090, JP18K13594, JP18K03656, JP18H03721, 18K03709, 18H01245, 25120007); the Malaysian Fundamental Research Grant Scheme (FRGS) FRGS/1/2019/STG02/UM/02/6; the MIT International Science and Technology Initiatives (MISTI) Funds; the Ministry of Science and Technology (MOST) of Taiwan (105-2112-M-001-025-MY3, 106-2112-M-001-011, 106-2119-M-001-027, 107-2119-M-001-017, 107-2119-M-001-020, \deleted{and }107-2119-M-110-005\added{, 108-2112-M-001-048, and 109-2124-M-001-005}); the National Aeronautics and Space Administration (NASA grant NNX17AL82G, Fermi Guest Investigator grant \replaced{80NSSC17K0649}{80NSSC20K1567}, NASA Astrophysics Theory Program grant 80NSSC20K0527\deleted{, and Hubble Fellowship grant HST-HF2-51431.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555}, \added{NASA NuSTAR award 80NSSC20K0645}); the National Institute of Natural Sciences (NINS) of Japan; the National Key Research and Development Program of China (grants 2016YFA0400704, 2016YFA0400702); the National Science Foundation (NSF, grants AST-0096454, AST-0352953, AST-0521233, AST-0705062, AST-0905844, AST-0922984, AST-1126433, AST-1140030, DGE-1144085, AST-1207704, AST-1207730, AST-1207752, MRI-1228509, OPP-1248097, AST-1310896, \deleted{AST-1337663, AST-1440254,} AST-1555365, AST-1615796, AST-1715061, AST-1716327, \deleted{AST-1716536, OISE-1743747, AST-1816420,} AST-1903847\deleted{, AST-1935980}, \added{AST-2034306}); the Natural Science Foundation of China (grants 11573051, 11633006, 11650110427, 10625314, 11721303, 11725312, 11933007, 11991052\added{, 11991053}); a fellowship of China Postdoctoral Science Foundation (2020M671266); the Natural Sciences and Engineering Research Council of Canada (NSERC, including a Discovery Grant and the NSERC Alexander Graham Bell Canada Graduate Scholarships-Doctoral Program); the National Research Foundation of Korea (the Global PhD Fellowship Grant: grants NRF-2015H1A2A1033752,
2015-R1D1A1A01056807, the Korea Research Fellowship Program: NRF-2015H1D3A1066561, Basic Research Support Grant 2019R1F1A1059721); the Netherlands Organization for Scientific Research (NWO) VICI award (grant 639.043.513) and Spinoza Prize SPI 78-409; the New Scientific Frontiers with Precision Radio Interferometry Fellowship awarded by the South African Radio Astronomy Observatory (SARAO), which is a facility of the National Research Foundation (NRF), an agency of the Department of Science and Innovation (DSI) of South Africa; the South African Research Chairs Initiative of the Department of Science and Innovation and National Research Foundation; the Onsala Space Observatory (OSO) national infrastructure, for the provisioning of its facilities/observational support (OSO receives funding through the Swedish Research Council under grant 2017-00648); the Perimeter Institute for Theoretical Physics (research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Research, Innovation and Science); \deleted{the Russian Science Foundation (grant 17-12-01029);} the Spanish Ministerio de Economía y Competitividad (grants \replaced{AYA2015-63939-C2-1-P}{PGC2018-098915-B-C21}, AYA2016-80889-P, PID2019-108995GB-C21); the State Agency for Research of the Spanish MCIU through the ``Center of Excellence Severo Ochoa'' award for the Instituto de Astrofísica de Andalucía (SEV-2017-0709); the Toray Science Foundation; the Consejería de Economía, Conocimiento, Empresas y Universidad of the Junta de Andalucía (grant P18-FR-1769), the Consejo Superior de Investigaciones Científicas (grant 2019AEP112); the US Department of Energy (USDOE) through the Los Alamos National Laboratory (operated by Triad National Security, LLC, for the National Nuclear Security Administration of the USDOE; Contract 89233218CNA000001); \deleted{the Italian Ministero dell'Istruzione Università e Ricerca through the grant Progetti Premiali 2012-iALMA (CUP C52I13000140001);} the European Union's Horizon 2020 research and innovation programme under grant agreement No 730562 RadioNet; ALMA North America Development Fund; the Academia Sinica; Chandra \added{DD7-18089X and }TM6-17006X; the GenT Program (Generalitat Valenciana) Project CIDEGENT/2018/021. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), supported by NSF grant ACI-1548562, and CyVerse, supported by NSF grants DBI-0735191, DBI-1265383, and DBI-1743442. The XSEDE Stampede2 resource at TACC was allocated through TG-AST170024 and TG-AST080026N. The XSEDE JetStream resource at PTI and TACC was allocated through AST170028. The simulations were performed in part on the SuperMUC cluster at the LRZ in Garching, on the LOEWE cluster in CSC in Frankfurt, and on the HazelHen cluster at the HLRS in Stuttgart. This research was enabled in part by support provided by Compute Ontario (http://computeontario.ca), Calcul Quebec (http://www.calculquebec.ca) and Compute Canada (http://www.computecanada.ca). We thank the staff at the participating observatories, correlation centers, and institutions for their enthusiastic support. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2016.1.01154.V.
ALMA is a partnership of the European Southern Observatory (ESO; Europe, representing its member states), NSF, and National Institutes of Natural Sciences of Japan, together with National Research Council (Canada), Ministry of Science and Technology (MOST; Taiwan), Academia Sinica Institute of Astronomy and Astrophysics (ASIAA; Taiwan), and Korea Astronomy and Space Science Institute (KASI; Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, Associated Universities, Inc. (AUI)/NRAO, and the National Astronomical Observatory of Japan (NAOJ). The NRAO is a facility of the NSF operated under cooperative agreement by AUI. APEX is a collaboration between the Max-Planck-Institut f{\"u}r Radioastronomie (Germany), ESO, and the Onsala Space Observatory (Sweden). The SMA is a joint project between the SAO and ASIAA and is funded by the Smithsonian Institution and the Academia Sinica. The JCMT is operated by the East Asian Observatory on behalf of the NAOJ, ASIAA, and KASI, as well as the Ministry of Finance of China, Chinese Academy of Sciences, and the National Key R\&D Program (No. 2017YFA0402700) of China. Additional funding support for the JCMT is provided by the Science and Technologies Facility Council (UK) and participating universities in the UK and Canada. The LMT is a project operated by the Instituto Nacional de Astrofísica, Óptica, y Electrónica (Mexico) and the University of Massachusetts at Amherst (USA), with financial support from the Consejo Nacional de Ciencia y Tecnología and the National Science Foundation. The IRAM 30-m telescope on Pico Veleta, Spain is operated by IRAM and supported by CNRS (Centre National de la Recherche Scientifique, France), MPG (Max-Planck-Gesellschaft, Germany) and IGN (Instituto Geográfico Nacional, Spain). The SMT is operated by the Arizona Radio Observatory, a part of the Steward Observatory of the University of Arizona, with financial support of operations from the State of Arizona and financial support for instrumentation development from the NSF. The SPT is supported by the National Science Foundation through grant PLR-1248097. Partial support is also provided by the NSF Physics Frontier Center grant PHY-1125897 to the Kavli Institute of Cosmological Physics at the University of Chicago, the Kavli Foundation and the Gordon and Betty Moore Foundation grant GBMF 947. The SPT hydrogen maser was provided on loan from the GLT, courtesy of ASIAA. The EHTC has received generous donations of FPGA chips from Xilinx Inc., under the Xilinx University Program. The EHTC has benefited from technology shared under open-source license by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER). The EHT project is grateful to T4Science and Microsemi for their assistance with Hydrogen Masers. This research has made use of NASA's Astrophysics Data System. We gratefully acknowledge the support provided by the extended staff of the ALMA, both from the inception of the ALMA Phasing Project through the observational campaigns of 2017 and 2018. We would like to thank A. Deller and W. Brisken for EHT-specific support with the use of DiFX. We acknowledge the significance that Maunakea, where the SMA and JCMT EHT stations are located, has for the indigenous Hawaiian people.}
{ "timestamp": "2021-05-14T02:19:52", "yymm": "2105", "arxiv_id": "2105.01804", "language": "en", "url": "https://arxiv.org/abs/2105.01804" }
\section{Introduction} Out-of-distribution (OOD) detection has become a central challenge in safely deploying machine learning models in the open world, where the test data may be distributionally different from the training data. A plethora of literature has emerged in addressing the problem of OOD detection~\cite{bevandic2018discriminative,hein2019relu, hendrycks2016baseline, lakshminarayanan2017simple, lee2018simple, liang2018enhancing, mohseni2020self, chen2020robust-new, hsu2020generalized, liu2020energy, lin2021mood}. However, existing solutions are mainly driven by small, low-resolution datasets such as CIFAR~\cite{krizhevsky2009learning} and MNIST~\cite{lecun2010mnist}. Deployed systems like autonomous vehicles often operate on images that have far greater resolution and perceive environments with far more categories. As a result, a critical research gap exists in developing and evaluating OOD detection algorithms for large-scale image classification tasks. While one may be eager to conclude that solutions for small datasets should transfer to a large-scale setting, we argue that this is far from the truth. The main challenges posed in OOD detection stem from the fact that it is impossible to comprehensively define and anticipate anomalous data in advance, resulting in a large space of uncertainty. As the number of semantic classes increases, the number of ways in which OOD data may occur increases correspondingly. For example, our analysis reveals that the average false positive rate (at 95\% true positive rate) of a common baseline~\cite{hendrycks2016baseline} would rise from 17.34\% to 76.94\% as the number of classes increases from 50 to 1,000 on ImageNet-1k~\cite{deng2009imagenet}. Very few works have studied OOD detection in the large-scale setting, with limited evaluations and effectiveness~\cite{roady2019outofdistribution,hendrycks2019benchmark}. This raises the following question: \emph{how can we design an OOD detection algorithm that scales effectively to classification with a large semantic space?} Motivated by this, we take an important step to bridge this gap and propose a group-based OOD detection framework that is effective for large-scale image classification. Our key idea is to decompose the large semantic space into smaller groups with similar concepts, which simplifies the decision boundary and reduces the uncertainty space between in- vs. out-of-distribution data. Intuitively, for OOD detection, it is simpler to estimate whether an image belongs to one of the coarser-level semantic groups than to estimate whether an image belongs to one of the finer-grained classes. For example, consider a model tasked with classifying 200 categories of plants and another 200 categories of marine animals. A \texttt{truck} image can be easily classified as OOD data since it does not resemble either the plant group or the marine animal group. Formally, our proposed method leverages group softmax and derives a novel OOD scoring function. Specifically, the group softmax computes probability distributions within each semantic group. A key component is a category \texttt{others} in each group, which measures the probabilistic score for an image to be OOD with respect to that group. Our proposed OOD scoring function, \emph{Minimum Others Score} (\textbf{MOS}), exploits the information carried by the \texttt{others} category.
As illustrated in Figure~\ref{fig:main_arch}, MOS is higher for OOD inputs as they will be mapped to \texttt{others} with high confidence in all groups, and is lower for in-distribution inputs. We extensively evaluate our approach on models trained with the ImageNet-1k dataset, leveraging the state-of-the-art pre-trained BiT-S models~\cite{kolesnikov2020big} as backbones. We explore label spaces 10--100 times larger than those of previous works~\cite{hendrycks2016baseline,liang2018enhancing,lee2018simple,liu2020energy,chen2020robust-new,hendrycks2018deep}. Compared to the best baseline~\cite{hendrycks2019benchmark}, our method improves the average performance of OOD detection by 14.33\% (FPR95) over four diverse OOD test datasets. More importantly, our method achieves improved OOD detection performance while preserving the classification accuracy on in-distribution datasets. We note that while group-based learning has been used for improving tasks such as long-tail object detection~\cite{li2020overcoming}, our objective and motivation are very different---we are interested in reducing the uncertainty between in- and out-of-distribution data, rather than reducing the confusion among in-distribution data themselves. Below we summarize our \textbf{key results and contributions}: \vspace{-0.1cm} \begin{itemize} \vspace{-0.1cm} \item We propose a group-based OOD detection framework, along with a novel OOD scoring function MOS, that scales substantially better for large label space. Our method establishes the new state-of-the-art performance, reducing the average FPR95 by \textbf{14.33}\% while achieving \textbf{6x} speedup in inference time compared to the best baseline. \vspace{-0.2cm} \item We conduct extensive ablations which improve the understanding of our method for large-scale OOD detection under (1) different grouping strategies, (2) different sizes of semantic class space, (3) different backbone architectures, and (4) varying fine-tuning capacities. \vspace{-0.2cm} \item We curate diverse OOD evaluation datasets from four real-world high-resolution image databases, which enables future research to evaluate OOD detection methods in a large-scale setting\footnote{ Code and data for reproducing our results are available at:~\url{https://github.com/deeplearning-wisc/large_scale_ood}}. \end{itemize} \section{Preliminaries and Analysis} \vspace{-0.1cm} \paragraph{Preliminaries} We consider a training dataset drawn i.i.d.\ from the in-distribution $P_{\bm{X}}$, with label space ${Y} = \{1,2,\cdots,C \}$. For the OOD detection problem, it is common to train a classifier $f(\mathbf{x})$ on the in-distribution $P_{\bm{X}}$, and evaluate on samples that are drawn from a different distribution $Q_{\bm{X}}$. An OOD detector $G(\mathbf{x})$ is a binary classifier: \vspace{-0.1cm} \begin{equation*} G(\mathbf{x}) = \begin{cases} \text{in}, &\text{if}\ S(\mathbf{x}) \geq \gamma \\ \text{out}, &\text{if}\ S(\mathbf{x}) < \gamma, \end{cases} \end{equation*} where $S(\mathbf{x})$ is the scoring function, and $\gamma$ is the threshold chosen so that a high fraction (\eg, 95\%) of in-distribution data is correctly classified. \vspace{-0.3cm} \paragraph{Effect of Number of Classes on OOD Detection} We first revisit the common baseline approach~\cite{hendrycks2016baseline}, which uses the maximum softmax probability (MSP), $S(\*x)=\max_i \frac{e^{f_i(\*x)}}{\sum_{j=1}^C e^{f_j(\*x)}}$, for OOD detection. We investigate the effect of label space size on the OOD detection performance.
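For concreteness, the MSP score and the thresholded detector $G(\mathbf{x})$ defined above can be sketched in a few lines (an illustrative sketch of our own, with random logits standing in for a trained classifier $f$):
\begin{verbatim}
import numpy as np

def msp_score(logits):
    # Maximum softmax probability: S(x) = max_i softmax(f(x))_i.
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

def detect(scores, gamma):
    # G(x): 'in' if S(x) >= gamma, else 'out'.
    return np.where(scores >= gamma, "in", "out")

# gamma is chosen so that 95% of in-distribution inputs are labeled 'in':
scores_id = msp_score(np.random.randn(1000, 10))     # stand-in logits
gamma = np.quantile(scores_id, 0.05)
\end{verbatim}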
In particular, we use a ResNetv2-101 architecture~\cite{he2016identity} trained on different subsets\footnote{To create the training subsets, we first randomly select $C$ ($C \in \{50, 200, 300, 400, 500, 600, 700, 800, 900, 1000\}$) labels from the 1,000 ImageNet classes. For each chosen label, we then sample 700 images for training.} of ImageNet with varying numbers of classes $C$. As shown in Figure~\ref{fig:class_num_ablation_baseline}, the performance (FPR95) degrades rapidly from 17.34\% to 76.94\% as the number of in-distribution classes increases from 50 to 1,000. This trend signifies that current OOD detection methods are indeed challenged by the increasingly large label space, which motivates our work. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figures/class_num_ablation_msp.pdf} \caption{\small OOD detection performance of a common baseline MSP~\cite{hendrycks2016baseline} decreases rapidly as the number of ImageNet-1k classes increases (\emph{left}: AUROC; \emph{right}: FPR95).} \label{fig:class_num_ablation_baseline} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.54\textwidth]{figures/2d_toy_example_v3.pdf} \caption{\small{A toy example in 2D space of the group-based OOD detection framework. \emph{Left}: without grouping, the decision boundary between in- vs. out-of-distribution data becomes increasingly complex with more classes. \emph{Right}: our group-based method simplifies the decision boundary and reduces the uncertainty space for OOD data.}} \vspace{-0.5cm} \label{fig:2d_toy_example} \end{figure} \vspace{-0.4cm} \section{Method} Our novel group-based OOD detection framework is illustrated in Figure~\ref{fig:main_arch}. In what follows, we first provide an overview and then describe the group softmax training technique in Section~\ref{sec:group-softmax}. We introduce our proposed OOD detection algorithm MOS in Section~\ref{sec:gs-ood}, followed by grouping strategies in Section~\ref{sec:grouping}. \vspace{-0.3cm} \paragraph{Method Overview: A Conceptual Example} As noted above, OOD detection performance can suffer notably from an increasing number of in-distribution classes. To mitigate this issue, our key idea is to decompose the large semantic space into smaller groups with similar concepts, which simplifies the decision boundary and reduces the uncertainty space between in- vs. out-of-distribution data. We illustrate this idea with a toy example in Figure~\ref{fig:2d_toy_example}, where the in-distribution data consists of class-conditional Gaussians. Without grouping (left), the decision boundary between in- vs. OOD data is determined by \emph{all} classes and becomes increasingly complex as the number of classes grows. In contrast, with grouping (right), the decision boundary for OOD detection can be significantly simplified, as shown by the dotted curves. In other words, by way of grouping, the OOD detector only needs to make a small number of relatively simple estimations about \textit{whether an image belongs to this group}, as opposed to making a large number of hard decisions about \textit{whether an image belongs to this class}. An image will be classified as OOD if it belongs to none of the groups. We now describe the training mechanism that realizes this idea. \subsection{Group-based Learning} \label{sec:group-softmax} We divide the $C$ categories into $K$ groups, $\mathcal{G}_1, \mathcal{G}_2, ..., \mathcal{G}_K$.
We calculate the standard group-wise softmax for each group $\mathcal{G}_k$: \vspace{-0.2cm} \begin{equation} p_c^k (\mathbf{x}) = \frac{e^{f_c^k (\mathbf{x})}}{\sum_{c' \in \mathcal{G}_k} e^{f_{c'}^k (\mathbf{x})}}, \ c \in \mathcal{G}_k, \vspace{-0.1cm} \end{equation} where $f_c^k (\mathbf{x})$ and $p_c^k (\mathbf{x})$ denote the output logit and the softmax probability for class $c$ in group $\mathcal{G}_k$, respectively. \vspace{-0.2cm} \paragraph{Category ``Others''} The standard group softmax is insufficient, as it can only discriminate among classes within a group but cannot estimate the OOD uncertainty between inside vs. outside the group. To this end, a new category \texttt{others} is introduced to every group, as shown in Figure~\ref{fig:main_arch}. The model can predict \texttt{others} if the input $\*x$ does not belong to this group. In other words, the \texttt{others} category allows explicitly learning the decision boundary between inside vs. outside the group, as illustrated by the dashed curves surrounding classes C1/C2/C3 in Figure~\ref{fig:2d_toy_example}. This is desirable for OOD detection, as an OOD input can be mapped to \texttt{others} for all groups, whereas an in-distribution input will be mapped to one of the semantic categories in some group with high confidence. Importantly, our use of the category \texttt{others} creates ``virtual'' group-level outlier data without relying on any external data. Each training example $\*x$ not only helps estimate the decision boundary for the classification problem, but also effectively improves the OOD uncertainty estimation for groups to which it does not belong. We show that this formulation can in fact achieve the dual objectives of in-distribution classification and OOD detection. \vspace{-0.3cm} \paragraph{Training and Inference} During training, the ground-truth labels are re-mapped in each group: in every group that does not contain the ground-truth class, \texttt{others} is assigned as the ground-truth category. The training objective is a sum of cross-entropy losses in each group: \vspace{-0.2cm} \begin{equation} \mathcal{L}_{GS} = - \frac{1}{N}\sum_{n=1}^N \sum_{k=1}^K \sum_{c\in \mathcal{G}_k}y_c^k\log (p_c^k (\mathbf{x})), \vspace{-0.1cm} \end{equation} where $y_c^k$ and $p_c^k$ represent the label and the softmax probability of category $c$ in $\mathcal{G}_k$, and $N$ is the total number of training samples. We denote the set of all valid (non-\texttt{others}) classes in each group as $\mathcal{G}'_k = \mathcal{G}_k \backslash \{\text{\texttt{others}}\}$. During inference time, we derive the group-wise class prediction in the valid set for each group: \vspace{-0.2cm} \begin{equation*} \hat p^k = \max_{c \in \mathcal{G}'_k} p_c^k(\mathbf{x}),\ \hat c^k = \argmax_{c \in \mathcal{G}'_k} p_c^k(\mathbf{x}). \vspace{-0.1cm} \end{equation*} Then we use the maximum group-wise softmax score and the corresponding class for final prediction: \vspace{-0.2cm} \begin{equation*} k_* = \argmax_{1 \leq k \leq K} \hat p^k. \vspace{-0.1cm} \end{equation*} The final prediction is category $\hat c^{k_*}$ from group $\mathcal{G}_{k_*}$. \subsection{OOD Detection with MOS} \label{sec:gs-ood} For a classification model trained with the group softmax loss, we propose a novel OOD scoring function, \textbf{Minimum Others Score (MOS)}, that allows effective differentiation between in- vs. out-of-distribution data. Our key observation is that the category \texttt{others} carries useful information about how likely an image is to be OOD with respect to each group.
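Before formalizing the score, the group softmax machinery just described can be sketched as follows (an illustrative sketch of our own, not the authors' released code; for simplicity it assumes equal-sized groups with \texttt{others} as the last index in each group):
\begin{verbatim}
import numpy as np

def group_softmax(logits):
    # Softmax within each group. logits: (N, K, G+1); equal-sized groups
    # assumed, with 'others' as the last index of each group.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def group_softmax_loss(logits, y_group, y_class):
    # Sum of per-group cross-entropies; the target is re-mapped to
    # 'others' (index -1) in every group except the true one.
    N, K, _ = logits.shape
    p = group_softmax(logits)
    target = np.full((N, K), -1)
    target[np.arange(N), y_group] = y_class
    logp = np.log(p[np.arange(N)[:, None], np.arange(K)[None, :], target])
    return -logp.sum(axis=1).mean()

def predict(logits):
    # Group-wise max over valid (non-others) classes, then max over groups.
    p = group_softmax(logits)[..., :-1]
    phat = p.max(axis=-1)              # best valid-class score per group
    k_star = phat.argmax(axis=-1)      # winning group
    c_star = p[np.arange(len(p)), k_star].argmax(axis=-1)
    return k_star, c_star
\end{verbatim}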
As discussed in Section~\ref{sec:group-softmax}, an OOD input will be mapped to \texttt{others} with high confidence in all groups, whereas an in-distribution input will have a low score on category \texttt{others} in the group it belongs to. Therefore, the lowest \texttt{others} score among all groups is crucial for distinguishing between in- vs. out-of-distribution data. This leads to the following OOD scoring function, termed the \emph{Minimum Others Score}: \vspace{-0.2cm} \begin{equation} S_\text{MOS}(\mathbf{x}) = -\min_{1 \leq k \leq K}{p_{\text{others}}^k(\mathbf{x})}. \vspace{-0.1cm} \end{equation} Note that we negate the sign to align with the conventional notion that $S_\text{MOS}(\*x)$ is higher for in-distribution data and lower for out-of-distribution data. To provide an interpretation and intuition behind MOS, we show in Figure~\ref{fig:score_dist_sample} the average scores for the category \texttt{others} in each group for both in-distribution and OOD images. For in-distribution data, we select all validation images from the \texttt{animal} group in the ImageNet-1k dataset. The minimum \texttt{others} score among all groups is significantly lower for in-distribution data than that for OOD data, allowing for effective differentiation between them. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figures/probability_score_sample.pdf} \caption{\small{Average of \texttt{others} scores in each group for both in-distribution data (\textit{left}) and OOD data (\textit{right}).}} \label{fig:score_dist_sample} \vspace{-0.4cm} \end{figure} \begin{table*}[ht] \footnotesize{ \centering \begin{tabular}{l|c|cc|cc|cc|cc|cc} \toprule \multirow{3}{*}{\textbf{Method}} & \multirow{3}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Test \\ Time\\ (min)\end{tabular}}} & \multicolumn{2}{c|}{\textbf{iNaturalist}} & \multicolumn{2}{c|}{\textbf{SUN}} & \multicolumn{2}{c|}{\textbf{Places}} & \multicolumn{2}{c|}{\textbf{Textures}} & \multicolumn{2}{c}{\textbf{Average}} \\ \cline{3-12} & & \textbf{AUROC} & \textbf{FPR95} & \textbf{AUROC} & \textbf{FPR95} & \textbf{AUROC} & \textbf{FPR95} & \textbf{AUROC} & \textbf{FPR95} & \textbf{AUROC} & \textbf{FPR95} \\ & & \multicolumn{1}{c}{$\uparrow$} & \multicolumn{1}{c|}{$\downarrow$} & \multicolumn{1}{c}{$\uparrow$} & \multicolumn{1}{c|}{$\downarrow$} & \multicolumn{1}{c}{$\uparrow$} & \multicolumn{1}{c|}{$\downarrow$} & \multicolumn{1}{c}{$\uparrow$} & \multicolumn{1}{c|}{$\downarrow$} & \multicolumn{1}{c}{$\uparrow$} & \multicolumn{1}{c}{$\downarrow$} \\ \midrule MSP~\cite{hendrycks2016baseline} & 3.1 & 87.59 & 63.69 & 78.34 & 79.98 & 76.76 & 81.44 & 74.45 & 82.73 & 79.29 & 76.96 \\ ODIN~\cite{liang2018enhancing} & 23.6 & 89.36 & 62.69 & 83.92 & 71.67 & 80.67 & 76.27 & 76.30 & 81.31 & 82.56 & 72.99 \\ Mahalanobis~\cite{lee2018simple} & 145.4 & 46.33 & 96.34 & 65.20 & 88.43 & 64.46 & 89.75 & 72.10 & 52.23 & 62.02 & 81.69 \\ Energy~\cite{liu2020energy} & 3.1 & 88.48 & 64.91 & 85.32 & 65.33 & 81.37 & 73.02 & 75.79 & 80.87 & 82.74 & 71.03 \\ KL Matching~\cite{hendrycks2019benchmark} & 20.6 & 93.00 & 27.36 & 78.72 & 67.52 & 76.49 & 72.61 & \textbf{87.07} & \textbf{49.70} & 83.82 & 54.30 \\ \midrule \textbf{MOS (ours)} & 3.2 & \textbf{98.15} & \textbf{9.28} & \textbf{92.01} & \textbf{40.63} & \textbf{89.06} & \textbf{49.54} & 81.23 & 60.43 & \textbf{90.11} & \textbf{39.97} \\ \bottomrule \end{tabular} } \caption{\small{OOD detection performance comparison between MOS and baselines.
All methods are fine-tuned from the same pre-trained BiT-S-R101x1 backbone with ImageNet-1k as the in-distribution dataset. The description of the 4 OOD test datasets is provided in Section~\ref{sec:dataset}. $\uparrow$ indicates larger values are better, while $\downarrow$ indicates smaller values are better. All values are percentages. \textbf{Bold} numbers are superior results. Test time for all methods is evaluated with the same in- and out-of-distribution datasets (60k images in total). }} \label{table:main_result} \vspace{-0.3cm} \end{table*} \subsection{Grouping Strategies} \label{sec:grouping} Given the dependency on the group structure, a natural question arises: \emph{how do different grouping strategies affect the performance of OOD detection}? To answer this, we systematically consider three grouping strategies: (1) taxonomy, (2) feature clustering, and (3) random grouping. \vspace{-0.4cm} \paragraph{Taxonomy} The first grouping strategy is applicable when the taxonomy of the label space is known. For example, in the case of ImageNet, each class is associated with a synset in WordNet~\cite{miller1995wordnet}, from which we can build the taxonomy as a hierarchical tree. In particular, we adopt the 8 super-classes defined by ImageNet\footnote{\url{http://image-net.org/explore}} as our groups and map each category into one of the 8 groups: \texttt{animal}, \texttt{artifact}, \texttt{geological formation}, \texttt{fungus}, \texttt{misc}, \texttt{natural object}, \texttt{person}, and \texttt{plant}. \vspace{-0.4cm} \paragraph{Feature Clustering} When a taxonomy is not available, we can approximately estimate the structure of the semantic classes through feature clustering. Specifically, we extract feature representations for each training image from a pre-trained feature extractor. The feature representation of each class is then the average of the feature embeddings in that class. Finally, we perform K-Means clustering~\cite{macqueen1967some} on these per-class feature representations, one for each class (a brief illustrative sketch is given below). \vspace{-0.4cm} \paragraph{Random Grouping} Lastly, we contrast the taxonomy and feature clustering strategies with random grouping, where each class is randomly assigned to a group. This allows us to estimate the lower bound of OOD detection performance with MOS. We use taxonomy as the default grouping strategy unless specified otherwise. In Section~\ref{sec:grouping_method_ablation}, we experimentally compare the OOD detection performance using all three grouping strategies. \section{Experiments} \vspace{-0.1cm} We first describe the evaluation datasets (Section~\ref{sec:dataset}) and experimental setups (Section~\ref{sec:exp_setup}). In Section~\ref{sec:exp_results}, we show that MOS achieves state-of-the-art OOD detection performance, followed by extensive ablations that improve the understanding of MOS for large-scale OOD detection. \subsection{Datasets} \label{sec:dataset} \subsubsection{In-distribution Dataset} \vspace{-0.2cm} We use ImageNet-1k~\cite{deng2009imagenet} as the in-distribution dataset, which covers a wide range of real-world objects. ImageNet-1k has at least 10 times more labels than the CIFAR datasets used in prior literature. In addition, the image resolution is also significantly higher than that of CIFAR (32$\times$32) and MNIST (28$\times$28).
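As an illustration of the feature-clustering strategy from Section~\ref{sec:grouping}, the following is a minimal sketch of our own (it assumes a generic pre-trained feature extractor; \texttt{sklearn}'s \texttt{KMeans} stands in for the K-Means step, and the function name is ours):
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def build_groups(features, labels, K=8):
    # Group classes by K-Means over per-class mean feature embeddings.
    # features: (N, D) embeddings from a pre-trained extractor; labels: (N,).
    classes = np.unique(labels)
    reps = np.stack([features[labels == c].mean(axis=0) for c in classes])
    assignment = KMeans(n_clusters=K, n_init=10).fit_predict(reps)
    return {k: classes[assignment == k] for k in range(K)}
\end{verbatim}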
\vspace{-0.4cm} \subsubsection{Out-of-distribution Datasets} \vspace{-0.2cm}
To evaluate our approach, we consider a diverse collection of OOD test datasets, spanning various domains including fine-grained images, scene images, and textural images. We carefully curate the OOD evaluation benchmarks to make sure concepts in these datasets do not overlap with ImageNet-1k. Below we describe the construction of each evaluation dataset in detail. Samples of each OOD dataset are provided in Figure~\ref{fig:main_arch}. We provide the list of concepts chosen for each OOD dataset in Appendix~\ref{app:ood_class}.
\vspace{-0.4cm} \paragraph{iNaturalist} iNaturalist~\cite{van2018inaturalist} is a fine-grained dataset containing 859,000 images across more than 5,000 species of plants and animals. All images are resized to have a max dimension of 800 pixels. We manually select 110 plant classes not present in ImageNet-1k, and randomly sample 10,000 images for these 110 classes.
\vspace{-0.4cm} \paragraph{SUN} SUN~\cite{xiao2010sun} is a scene database of 397 categories and 130,519 images with sizes larger than $200\times200$. SUN and ImageNet-1k have overlapping categories. Therefore, we carefully select 50 nature-related concepts that are unique to SUN, such as \textit{forest} and \textit{iceberg}. We randomly sample 10,000 images for these 50 classes.
\vspace{-0.4cm} \paragraph{Places} Places365~\cite{zhou2017places} is another scene dataset with concept coverage similar to that of SUN. All images in this dataset have been resized to have a minimum dimension of 512. We manually select 50 categories from this dataset that are not present in ImageNet-1k and then randomly sample 10,000 images for these 50 categories.
\vspace{-0.5cm} \paragraph{Textures} Textures~\cite{cimpoi2014describing} consists of 5,640 images of textural patterns, with sizes ranging between $300\times 300$ and $640 \times 640$. We use the entire dataset for evaluation.
\begin{figure*}[t] \centering \vspace{-0.3cm} \includegraphics[width=0.99\textwidth]{figures/class_num_ablation_fix_each_final.pdf} \caption{\small{OOD detection performance of MOS (blue) and the MSP baseline (gray). MOS exhibits more stable performance as the number of in-distribution classes increases. For each OOD dataset, we show AUROC (\textit{top}) and FPR95 (\textit{bottom}).}} \label{fig:exp_class_num_ablation_fix_each} \vspace{-0.3cm} \end{figure*}
\subsection{Experiment Setup} \label{sec:exp_setup} \vspace{-0.1cm}
\paragraph{Pre-trained Backbone} We use Google BiT-S models~\cite{kolesnikov2020big} as our feature extractor in all experiments. The models are trained on ImageNet-1k, with ResNetv2 architectures~\cite{he2016identity} at varying capacities. Pre-trained models allow extracting high-quality features with minimal time and energy consumption. In practice, one can always choose to train from scratch. For the main results, we use the BiT-S-R101x1 model with depth 101 and width factor 1, unless specified otherwise. We provide a comparison of feature extractors of varying model sizes in Section~\ref{sec:extractor_ablation}. For efficiency, we fix the backbone and only fine-tune the last fully-connected (FC) layer in the main experiments. We additionally explore the effect of fine-tuning more layers beyond the last FC layer in Section~\ref{sec:finetune_ablation}.
\vspace{-0.5cm} \paragraph{Training Details} We follow the procedure in BiT-HyperRule~\cite{kolesnikov2020big} and fine-tune the pre-trained BiT-S model for 20k steps with a batch size of 512.
We use SGD with an initial learning rate of 0.003 and a momentum of 0.9. The learning rate is decayed by a factor of 10 at 30\%, 60\%, and 90\% of the training steps. During training, all images are resized to 512 $\times$ 512 and randomly cropped to 480 $\times$ 480. At test time, all images are resized to 480 $\times$ 480. A learning rate warm-up is used for the first 500 steps. We perform all experiments on NVIDIA GeForce RTX 2080Ti GPUs.
\vspace{-0.5cm} \paragraph{Evaluation Metrics} We measure the following metrics that are commonly used for OOD detection: (1)~the false positive rate of OOD examples when the true positive rate of in-distribution examples is at 95\% (FPR95); (2)~the area under the receiver operating characteristic curve (AUROC). We additionally report the area under the precision-recall curve (AUPR) in Appendix~\ref{app:aupr}.
\vspace{-0.1cm} \subsection{Results} \label{sec:exp_results} \subsubsection{MOS vs. Existing Methods} \label{sec:exp_main_results} \vspace{-0.2cm}
The main results are shown in Table~\ref{table:main_result}. We report performance for each dataset described in Section~\ref{sec:dataset}, as well as the average performance. For fair evaluation, we compare with competitive methods in the literature that derive OOD scoring functions from a model trained on in-distribution data and do not rely on auxiliary outlier data. We first compare with approaches driven by small datasets, including MSP~\cite{hendrycks2016baseline}, ODIN~\cite{liang2018enhancing}, Mahalanobis~\cite{lee2018simple}, as well as Energy~\cite{liu2020energy}. All these methods rely on networks trained with flat softmax. Under the same network backbone (BiT-S-R101x1), MOS outperforms the best baseline Energy~\cite{liu2020energy} by \textbf{31.06}\% in FPR95. It is also worth noting that fine-tuning with group softmax maintains competitive classification accuracy (75.16\%) on in-distribution data compared with its flat softmax counterpart (75.20\%). We also compare our method with KL matching~\cite{hendrycks2019benchmark}, a competitive baseline evaluated on large-scale image classification. MOS reduces FPR95 by \textbf{14.33}\% compared to KL matching. Note that for each input, KL matching needs to calculate its KL divergence to all class centers. Therefore, the running time of KL matching increases linearly with the number of in-distribution categories, which can be computationally expensive for a very large label space. As shown in Table~\ref{table:main_result}, our method achieves a \textbf{6x} speedup compared to KL matching.
\vspace{-0.3cm} \subsubsection{MOS with Increasing Numbers of Classes} \label{sec:class_num_ablation} \vspace{-0.2cm}
In Figure~\ref{fig:exp_class_num_ablation_fix_each}, we show the OOD detection performance as we increase the number of in-distribution classes $C\in \{50, 200, 300, 400, 500, 600, 700, 800, 900, 1000\}$ on ImageNet-1k. For each $C$, we create training data by first randomly sampling $C$ labels from the entire 1k classes, and then sampling 700 images for each chosen label. Importantly, we observe that MOS (in blue) exhibits more stable performance as $C$ increases, compared to MSP~\cite{hendrycks2016baseline} (in gray). For example, on the iNaturalist OOD dataset, FPR95 rises from 21.02\% to 63.36\% using MSP, whilst MOS degrades by only 4.76\%. This trend signifies that MOS is an effective approach for scaling OOD detection towards a large semantic space.
We also explore an alternative setting where we fix the total number of training images while varying the number of classes $C$. In this setting, the model is trained on fewer images per class as the number of classes increases, making the problem even more challenging. We report those results in Appendix~\ref{app:class_num_ablation_fix_total}. Overall, MOS remains less sensitive to the number of classes compared to the MSP baseline.
\begin{figure}[t] \centering \vspace{-0.3cm} \includegraphics[width=0.45\textwidth]{figures/grouping_method_ablation.pdf} \caption{\small{OOD detection performance comparison between MOS with different grouping strategies and the MSP baseline on 4 OOD datasets (\textit{top}: AUROC; \textit{bottom}: FPR95).}} \label{fig:exp_group_method_ablation} \vspace{-0.3cm} \end{figure}
\vspace{-0.3cm} \subsubsection{MOS with Different Grouping Strategies} \vspace{-0.2cm} \label{sec:grouping_method_ablation}
In this ablation, we contrast the performance of the three grouping strategies described in Section~\ref{sec:grouping}. For a fair comparison, we use the number of groups $K=8$ for all methods, since the ImageNet taxonomy has 8 super-classes. For feature clustering, we first extract the feature vector from the penultimate layer of the pre-trained BiT-S model for each training image. The feature representation for each category is the average feature vector among all images in that category. We then perform K-Means clustering on the 1,000 class-level feature vectors (one for each class) with $K=8$. For random grouping, we randomly split the 1,000 classes into 8 groups of equal size (125 classes each). We compare the performance of MOS under different grouping strategies in Figure~\ref{fig:exp_group_method_ablation}. We observe that feature clustering works substantially better than the MSP baseline~\cite{hendrycks2016baseline} while maintaining similar in-distribution classification accuracy (-0.16\%) to the taxonomy-based grouping. Interestingly, random grouping achieves better performance than the MSP baseline~\cite{hendrycks2016baseline} on 3 out of 4 OOD datasets. However, we do observe a drop in in-distribution classification accuracy (-0.98\%) using random grouping, compared to the taxonomy-based grouping. We argue that feature clustering is a viable strategy when a taxonomy is unavailable, as it outperforms MSP by 18.2\% (FPR95) on average. We additionally report how different numbers of groups $K$ affect the OOD detection performance for all three grouping strategies in Appendix~\ref{app:group_num_ablation}.
\begin{figure*}[t] \centering \vspace{-0.2cm} \includegraphics[width=0.98\textwidth]{figures/feature_extractor_ablation_final.pdf} \caption{\small{Effect of using different pre-trained feature extractors. The x-axis indicates feature extractors with larger capacities from left to right. Only the top FC layer is fine-tuned in all experiments. Both OOD detection (\textit{bars}) and image classification (\textit{dashed lines}) benefit from improved feature extractors.}} \vspace{-0.2cm} \label{fig:exp_feature_extractor_ablation} \end{figure*}
\subsubsection{MOS with Different Feature Extractors} \label{sec:extractor_ablation} \vspace{-0.2cm}
We investigate how the performance of OOD detection changes as we employ different pre-trained feature extractors.
In Figure~\ref{fig:exp_feature_extractor_ablation}, we compare the performance of using a family of 5 feature extractors (in increasing size): BiT-S-R50x1, BiT-S-R101x1, BiT-S-R50x3, BiT-S-R152x2, BiT-S-R101x3\footnote{\url{https://github.com/google-research/big_transfer}}. All models are ResNetv2 architectures with varying depths and width factors. It is important to note that since we fix the entire backbone and only fine-tune the last FC layer, this ablation concerns the quality of the feature extractor rather than the fine-tuned model capacity. As we use feature extractors of larger capacity, the classification accuracy increases, with comparable performance between the flat and group softmax. Overall, the OOD detection performance improves as the capacity of the feature extractor increases. More importantly, MOS consistently outperforms MSP~\cite{hendrycks2016baseline} in all cases. These results suggest that using pre-trained models with better feature representations will not only improve classification accuracy but also benefit OOD detection performance.
\begin{figure*}[t] \centering \includegraphics[width=0.98\textwidth]{figures/finetune_layer_ablation_final.pdf} \caption{\small{Effect of fine-tuning different numbers of residual blocks in BiT-S-R101x1. We show both OOD detection (\textit{bars}) and image classification (\textit{dashed lines}) performance.}} \vspace{-0.2cm} \label{fig:exp_finetune_layer_ablation} \end{figure*}
\vspace{-0.4cm} \subsubsection{MOS with Varying Fine-tuning Capacities} \label{sec:finetune_ablation} \vspace{-0.2cm}
In this ablation, we explore the efficacy of fine-tuning more layers. Concretely, we go beyond the FC layer and fine-tune different numbers of residual blocks in BiT-S-R101x1. Figure~\ref{fig:exp_finetune_layer_ablation} shows the classification accuracy and OOD detection performance under different fine-tuning capacities. Noticeably, MOS consistently outperforms MSP~\cite{hendrycks2016baseline} in OOD detection under all fine-tuning capacities. As expected, we observe that fine-tuning more layers leads to better classification accuracy. However, increasing the number of fine-tuned layers can adversely affect OOD detection in some cases. We hypothesize that fine-tuning more layers results in predictions overfitted to the training labels, which undesirably produces higher confidence scores for OOD data. This suggests that fine-tuning only the top FC layer is not only computationally efficient but in fact also desirable for OOD detection performance.
\section{Related Work}
\paragraph{OOD Detection with Pre-trained Models} Hendrycks and Gimpel~\cite{hendrycks2016baseline} establish a common baseline for OOD detection by using the maximum softmax probability (MSP). Several works attempt to improve the OOD uncertainty estimation by using the ODIN score~\cite{liang2018enhancing}, deep ensembles~\cite{lakshminarayanan2017simple}, the Mahalanobis distance-based confidence score~\cite{lee2018simple}, the generalized ODIN score~\cite{hsu2020generalized}, and the energy score~\cite{liu2020energy}. Lin et al.~\cite{lin2021mood} propose a dynamic OOD inference framework that improves computational efficiency. However, previous methods driven by small datasets are suboptimal in a large-scale setting. In contrast, MOS scales better to a large label space, outperforming existing methods by a large margin.
\vspace{-0.4cm} \paragraph{OOD Detection with Model Fine-tuning} An orthogonal line of work explores training with auxiliary outlier data for model regularization~\cite{bevandic2018discriminative,geifman2019selectivenet,malinin2018predictive,mohseni2020self,subramanya2017confidence, liu2020energy}. Auxiliary outlier data can either be realistic images~\cite{hendrycks2018deep,mohseni2020self,papadopoulos2019outlier,liu2020energy,chen2020robust-new} or synthetic images generated by GANs~\cite{lee2018training}. Several loss functions have been designed to regularize model predictions of the auxiliary outlier data towards uniform distributions~\cite{lee2018training}, a background class for OOD data~\cite{chen2020robust-new, mohseni2020self}, or higher energies~\cite{liu2020energy}. In this work, our model is fine-tuned only on in-distribution data, as we do not assume the availability of auxiliary outlier data. In contrast to previous settings, constructing an auxiliary outlier dataset can be prohibitive in large-scale image classification, since the in-distribution data has a much wider coverage of concepts.
\paragraph{Generative Modeling Based OOD Detection} Generative models~\cite{kingma2013auto,tabak2013family,rezende2014stochastic,dinh2017density,van2016conditional} estimate the probability density of input data and can thus be directly utilized as OOD detectors, with high density indicating in-distribution and low density indicating out-of-distribution. However, as shown in~\cite{nalisnick2018deep}, deep generative models can undesirably assign a high likelihood to OOD data. Several strategies have been proposed to mitigate this issue, such as improved metrics~\cite{choi2018generative}, likelihood ratios~\cite{ren2019likelihood,serra2019input}, and modified training techniques~\cite{hendrycks2018deep}. In this work, we mainly focus on discriminative approaches. It is important to note that generative models~\cite{hinz2018generating} can be prohibitively challenging to train and optimize on large-scale real-world datasets.
\vspace{-0.4cm} \paragraph{OOD Detection for Large-scale Classification} Several works make pioneering efforts in large-scale OOD detection. Roady~\etal~\cite{roady2019outofdistribution} sample half of the classes from ImageNet-1k as in-distribution data, and evaluate the other half as OOD test data. They use a one-vs-rest training strategy and background class regularization, which requires access to an auxiliary dataset. KL matching was employed as the OOD scoring function in~\cite{hendrycks2019benchmark}. In this work, we propose a novel group-based solution that scales more effectively and efficiently for large-scale OOD detection. We also perform evaluations on more diverse real-world OOD datasets and conduct thorough ablations that improve the understanding of the problem and its solutions in many aspects.
\vspace{-0.4cm} \paragraph{Learning with Hierarchical Labels} The hierarchical structure of the class categories has been utilized for efficient inference~\cite{deng2011fast,liu2013probabilistic}, improved classification accuracy~\cite{deng2014large}, and stronger object detection performance~\cite{redmon2017yolo9000}. Some works aim to learn a label tree structure when a taxonomy is unavailable~\cite{bengio2010label,deng2011fast,liu2013probabilistic}. As a typical hierarchy, group-based learning has been widely adopted in image classification tasks~\cite{hinton2015distilling,ahmed2016network,yan2015hd,warde2014self,gross2017hard}.
Recently, a group softmax classifier was proposed to tackle the problem of long-tailed object detection, where categories are grouped according to the number of training instances~\cite{li2020overcoming}. We contribute to this field by showing the promise of using a group label structure for effective OOD detection.
\vspace{-0.2cm} \section{Conclusion} \vspace{-0.1cm}
In this paper, we propose a group-based OOD detection framework, along with a novel OOD scoring function, {MOS}, that effectively scales OOD detection to a real-world setting with a large label space. We curate four diverse OOD evaluation datasets that allow future research to evaluate OOD detection methods in a large-scale setting. Extensive experiments show that our group-based framework can significantly improve the performance of OOD detection in this large-scale setting compared to existing approaches. We hope our research draws more attention to expanding the view of OOD detection from small benchmarks to large-scale real-world settings.
\bibliographystyle{ieee_fullname}
{ "timestamp": "2021-05-06T02:10:23", "yymm": "2105", "arxiv_id": "2105.01879", "language": "en", "url": "https://arxiv.org/abs/2105.01879" }
\section{GPU Implementation} \label{sec:gpu_imple}
In this section, implementation details of the Kalman filter used to run track fitting on both CPUs and GPUs are presented. The code can be found in Ref.~\cite{xiaocong_ai_2021_4693389}.
\subsection{Parallelization Strategy}
As discussed in Sect.~\ref{sec:kf}, track fitting is the step in the track reconstruction chain that precisely estimates the reconstructed track parameters and the associated covariance matrices. If track fitting is performed sequentially in a single event, the execution time will increase almost linearly with increasing track multiplicity. However, the dependence of the track fitting execution time on the number of tracks weakens if the track fitting can be parallelized. The implementation of a track-level parallel strategy is straightforward, since the track fitting for each reconstructed track is completely independent. In addition, the algorithm can be parallelized \emph{within} a track fit, i.e.~intra-track parallelization. Possible gains come from the matrix operations, e.g.~the transport of the track parameters in a magnetic field and the Kalman filter update and smoothing, which are computationally expensive and have to be repeated for all the propagation steps and measurements, up to $\mathcal{O}(10)$ times per track. However, in practice only very limited intra-track parallelization of those matrix operations can be achieved by using multiple threads, because the sizes of the matrices in one operation are usually relatively small. For example, the largest matrix operated on in a single track fit in ACTS is the covariance matrix of the track parameters represented in the global coordinate system, which is of size 8$\times$8. This paper discusses both parallelization strategies for track fitting on GPUs:
\begin{enumerate} \item Track-level parallelization: Track fitting for different tracks is executed in parallel using different CUDA threads (or blocks if further intra-track parallelization is used). \item Intra-track parallelization: The matrix operations involved in a single track fit are parallelized as much as possible using multiple threads within a single CUDA block. In this case, the block shared memory is used for the objects relevant to one track fit. \end{enumerate}
The transportation of the track parameters and their associated covariance matrices in a magnetic field requires a numerical solution to the equation of particle motion. The adaptive Runge-Kutta-Nyström~\cite{Myrheim:1979ng} method is used to transport the track parameters in ACTS. When extrapolating the track parameters from one measurement point to the next, the covariance of the track parameters is updated with the transport Jacobian between the measurement points. Because the track parameters are represented in the local coordinate frame of the detector, the Jacobians of the transformation between local and global track parameters at the two measurement points also have to be applied. If the fitting is performed using one CUDA block per track, the matrix multiplication for the covariance transport can be parallelized using multiple threads.
\subsection{CUDA Considerations and Limitations}
Several CUDA programming requirements interact with the problem-specific factors that shape the parallelization strategies, and thus have an impact on the final implementation. The most challenging ones are detailed next.
\subsubsection{Polymorphism}
Virtual functions cannot be called inside a CUDA kernel unless the objects are constructed there. The Curiously Recurring Template Pattern (CRTP) is a C++ design pattern that emulates dynamic polymorphism at compile time by having a base class template that is parameterized on the derived class itself. The ACTS Kalman filter is designed to be independent of the detector's tracking geometry, which could contain surfaces of different concrete types for different tracking detectors. To realize this design pattern on accelerators, the surfaces are implemented with CRTP, instantiated outside of the Kalman filter, and fed to the algorithm. CRTP is successfully used to define the surfaces as shown in the code sample in Listing~\ref{lst:poly}.
\begin{lstlisting}[language=C++, caption={Function definition in Surface base class using CRTP}, label={lst:poly}]
// The base class resolves the call to the derived
// class implementation at compile time.
template <typename Derived>
inline const typename Derived::SurfaceBoundsType *
Surface<Derived>::bounds() const {
  return static_cast<const Derived *>(this)->bounds();
}
\end{lstlisting}
\subsubsection{Thread Memory Limitations}
The amount of memory available to a thread, which includes the stack frame size and the maximum number of registers, is automatically configured by the CUDA runtime environment based on device properties, including the total amount of shared memory and the cache sizes, and also on the number of parallel threads per execution block. Because the GPU's performance gain is based on the ability to run thousands of threads in parallel, this considerably limits the amount of memory available per thread compared to the CPU. This memory limitation is a major concern for recursive functions, which have to be reimplemented using an iterative approach.
\subsubsection{Limited Support for Linear Algebra Libraries} \label{subsubsec:linear_algebra_support}
In the Kalman smoothing, the track parameter covariance matrix needs to be inverted to calculate the gain matrix. While the Eigen-based matrix class has a member function that returns the inverse matrix, this method is not supported in the GPU kernels. Also, CUDA 10.0 discontinued support for invoking cuBLAS functions from within device kernels through the \verb!cublas_device! routine~\cite{no-cublas}. Since at the moment neither of these two linear algebra libraries provides a solution for our scenario, a customized matrix inverter is implemented, whose performance impact is discussed in Sect.~\ref{sec:perf}.
\subsubsection{Precision and Rounding}
Heterogeneous resources produce slightly different results due to different approaches to floating point arithmetic and rounding. CPUs typically promote float operands to doubles when possible, perform the operations in double precision and then truncate the result to single precision. Moreover, the x86 floating point units use extended double precision registers (80-bit), while CUDA limits the register sizes to 32-bit and 64-bit as described by the IEEE standard 754-2008~\cite{ieee_754,cuda_doc_fp}. To mitigate these effects, the customized matrix inverter is always configured to perform the algebra operations in double precision. Single floating point precision is used for the other algebra operations when benchmarking the performance, and the comparison between the results with float and double precision is discussed in Sect.~\ref{subsec:tech_perf}.
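To illustrate, Listing~\ref{lst:inverter} gives a minimal sketch of such a device-side inverter based on Gauss-Jordan elimination with partial pivoting, carried out in double precision as described above. The interface and names are hypothetical and simplified with respect to the actual implementation in Ref.~\cite{xiaocong_ai_2021_4693389}.
\begin{lstlisting}[language=C++, caption={Sketch of a device-side matrix inverter (illustrative)}, label={lst:inverter}]
#include <math.h>

// Gauss-Jordan elimination with partial pivoting on the augmented
// system [A | I]. Always works in double precision, independent of
// the input type T. Returns false if A is singular.
template <typename T, int N>
__host__ __device__ bool invertMatrix(const T *in, T *out) {
  double a[N][2 * N];
  for (int i = 0; i < N; ++i)
    for (int j = 0; j < N; ++j) {
      a[i][j] = static_cast<double>(in[i * N + j]);
      a[i][N + j] = (i == j) ? 1.0 : 0.0;
    }
  for (int c = 0; c < N; ++c) {
    // Select the row with the largest magnitude in column c.
    int pivot = c;
    for (int r = c + 1; r < N; ++r)
      if (fabs(a[r][c]) > fabs(a[pivot][c])) pivot = r;
    if (a[pivot][c] == 0.0) return false;
    for (int j = 0; j < 2 * N && pivot != c; ++j) {
      double tmp = a[c][j]; a[c][j] = a[pivot][j]; a[pivot][j] = tmp;
    }
    // Normalize the pivot row, then eliminate column c elsewhere.
    double inv = 1.0 / a[c][c];
    for (int j = 0; j < 2 * N; ++j) a[c][j] *= inv;
    for (int r = 0; r < N; ++r) {
      if (r == c) continue;
      double f = a[r][c];
      for (int j = 0; j < 2 * N; ++j) a[r][j] -= f * a[c][j];
    }
  }
  for (int i = 0; i < N; ++i)
    for (int j = 0; j < N; ++j) out[i * N + j] = static_cast<T>(a[i][N + j]);
  return true;
}
\end{lstlisting}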
\subsection{Data Structure and Transfer} \label{subsec:data_transfer}
Detector geometry information is a fundamental component for track parameter propagation and the integration of material effects during the track reconstruction. Because track reconstruction with a realistic detector description, as used in full detector simulation, requires significant computational resources, a simplified detector geometry, the so-called \emph{tracking geometry}, is used during track reconstruction in ACTS for fast navigation and extrapolation of tracks. The basic geometrical component of the tracking geometry in ACTS is the surface. The surface object carries information about its geometrical orientation, shape and boundary, the material approximated from a full detector geometry, and a unique hierarchical geometry identifier.
Magnetic fields are used for measuring the momentum of charged particles at high-energy physics experiments. When a charged particle passes through a magnetic field, its trajectory bends, with the curvature inversely proportional to its momentum. In this paper, a constant magnetic field represented as an Eigen matrix of size 3$\times$1 is used for the performance studies presented. It should be noted that this represents a significant simplification with respect to an inhomogeneous magnetic field, as found in many experiments, for which position-specific field information may need to be stored and retrieved, i.e.~more memory might be required. The description of inhomogeneous or nonparametric magnetic fields is however also possible within ACTS.
The ACTS tracking Event Data Model (EDM) includes classes to describe tracking objects such as measurements and track parameters, which are represented with surfaces. The EDM for the track state is designed for the Kalman filter. It consists of a measurement, a predicted track parameter, a filtered track parameter and a smoothed track parameter, all located on a surface. Figure~\ref{fig:vis_ts} shows a track state on a surface.
\begin{figure}[b] \centering \includegraphics[width=1.0\linewidth]{visual_ts.png} \caption{Demonstration of a track state with a measurement (orange), a predicted track parameter (green), a filtered track parameter (yellow) and a smoothed track parameter (blue) on a plane surface (gray). The covariance matrix of the local coordinates for both the measurement and the fitted track parameters is represented by an ellipse. The momentum direction of the fitted track parameters is represented by an arrow with its covariance matrix represented by a cone oriented in the direction of the arrow} \label{fig:vis_ts} \end{figure}
During the construction of the detector geometry, a unique geometrical identifier is assigned to each detector surface. By storing the geometrical identifier in the tracking EDM, information about the geometry can be shared between the measurements and track parameters, and between the CPU and GPU. The Kalman filter, detector geometry and magnetic field are global data shared by different tracks during the track reconstruction, and are therefore stored in the kernel global memory. In addition, four sets of track-specific data are required to execute the track fitting kernel on the GPU:
\begin{enumerate} \item Input measurements of the trajectory \item Starting track parameters to steer the track parameter propagation \item Track fitting configurations, e.g.
a target surface to extract the fitted track parameters \item Fitted results including the fitting status, a collection of track states on the trajectory and the fitted track parameters on the target surface \end{enumerate}
Track-specific data are allocated in pinned, i.e.~page-locked, memory on the host, and this memory is allocated contiguously for each track. The number of detector surfaces intersected by the particle varies with the kinematics of the particle and the detector layout, i.e.~the data loads are different between tracks. Managing this load imbalance would require dedicated memory management and task scheduling strategies. In this paper, the detector surfaces are therefore constructed to be boundless\footnote{Boundless surfaces always have an intersection with a track as long as the track is not parallel to the surface.} to guarantee that the same number of surfaces is traversed by all tracks. Each surface object requires approximately 120 bytes of memory. In addition, the size of the memory allocated and transferred for the different track-specific data for a detector with 10 plane surfaces is summarized in Table~\ref{Tab:data_size}. Considering the relatively large size of the track-specific data, and the fact that they are accessed only once during the track fitting, those data are also stored in the kernel global memory.
\begin{table}[h!] \centering \begin{tabular}{|p{0.6\linewidth}|r|} \hline \textbf{Data type} & \textbf{Size (B)} \\ \hline \hline Input measurements & 280 \\ \hline Seeding parameters & 168 \\ \hline Fitting configurations & 144 \\ \hline Fitting status & 1 \\ \hline Fitted states & 8480 \\ \hline Fitted track parameters & 216 \\ \hline \hline Total & 9289 \\ \hline \end{tabular} \caption{The size (in bytes) of track-specific memory for a single track} \label{Tab:data_size} \end{table}
\section{Conclusion} \label{sec:conclusion}
The reconstruction of charged particle trajectories for current and future high-energy physics experiments is a significant computational challenge. New approaches are needed to cope with the dramatically increased event complexity and rates and with the movement away from x86 architectures. The Kalman filter algorithm is the mainstay of current track reconstruction strategies. We presented a proof-of-concept implementation of a full Kalman filter track fitting algorithm using ACTS on two different Nvidia GPU architectures, using a simplified detector geometry and a constant magnetic field. We have performed studies of its physics and technical performance and compared them to results using CPUs, with a particular focus on the limitations observed. Ideas for improvements in future implementations were discussed. As existing fast matrix inversion algorithms cannot run on GPUs, we developed a custom prototype matrix inversion algorithm, which does not match the CPU performance of highly-optimized state-of-the-art algorithms. When controlling for the matrix inversion algorithm, worse performance for low track multiplicity is obtained with the GPUs compared to the CPUs, and the performance is improved by a factor of up to 4.6 with respect to the CPUs for events with more than 1,000 tracks. Significant performance differences are observed between the different GPU architectures. Parallelization within the track fit was implemented and a performance gain was observed at relatively low track multiplicity. The performance dependence on GPU configurations was also studied.
The performance was largely independent of the grid size and did not change when using multiple kernels. Memory transfer and other overhead can account for up to 30\,\% of the total run time of the track fit. The typical HL-LHC track multiplicity of 10,000 tracks per event is a relatively small workload for GPUs. For events with 10,000 tracks, a performance gain of up to a factor of 1.5 is achieved by using a smaller block size. The small workload is also the main limiting factor in the achieved occupancy on the GPU. We have compared different methods for the Kalman filter implementation and studied the dependence on the GPU configuration. We have identified limitations of the approach and highlighted directions for future work. Specifically, an evaluation of alternative approaches for GPU offloading, especially those provided by vendor-agnostic interfaces such as OpenMP, can be expected to result in improved performance portability. Moreover, further improvements to the GPU-based matrix inversion algorithm can be expected to bring its performance closer to existing CPU implementations.
\section{Discussion} \label{sec:disc}
The timing performance studies in Sect.~\ref{sec:perf} use a single GPU. In addition, potential gains in the timing performance from utilizing multiple GPUs are investigated, as is the GPU occupancy, which could have an impact on the timing performance. While the performance of the GPU-based Kalman filter was tested on one GPU, the implementation can also run on multiple GPUs in parallel. The prototype uses a team of threads on the host, each one fitting the trajectory of a subset of tracks on a different GPU. Despite better problem-size scaling, the multi-device solution has a slightly larger overall execution time for smaller numbers of tracks, as shown in Fig.~\ref{fig:multi_gpu}. Communication via the Message Passing Interface (MPI) would be required to fully exploit the parallelism by reducing the synchronization overhead between the GPUs.
\begin{figure}[!htb] \centering \includegraphics[width=1\linewidth]{float_gpu_count_var.pdf} \caption{The fitting time as a function of the number of tracks executed in one stream per device with a linear grid of size 5120$\times$1 and a block size of 8$\times$8$\times$1, when using one NAF-V100 (solid blue) and two NAF-V100 in parallel (dashed red)} \label{fig:multi_gpu} \end{figure}
In addition, the latest versions of the Nvidia HPC Software Development Kit (SDK) provide new tools and libraries designed to maximize performance by optimizing memory transfers and scaling to multiple devices while targeting heterogeneous resources~\cite{nvidia-sdk}. Additionally, various studies regarding vendor-agnostic offloading approaches show promising results based on standard APIs and/or open-source, non-proprietary solutions~\cite{9309052,DBLP:conf/sc/GayatriYKD18}. These would be very interesting to explore in future iterations.
The warp occupancy, defined as the ratio of active warps on an SM to the maximum number of active warps supported by the SM (e.g.~64 warps per SM on the V100), is analyzed using a different Nvidia performance tool: Nsight Compute~\cite{DBLP:journals/superfri/KnoblochM20}. Figure~\ref{fig:occupancy} shows the dependence of the theoretical and achieved warp occupancy on the number of registers per thread and the block size for a track fitting workload of 10,000 tracks.
The theoretical occupancy is 100\,\% when the number of registers per thread is no larger than 32, and decreases when more registers are required per thread, since fewer threads can then be active. Since the maximum number of thread blocks per SM is 32 on the V100, the block size is not a limiting factor for the theoretical occupancy as long as it is no less than 64. The achieved occupancy is well below the theoretical one, in particular when the block size is small. The reason is that, for a total workload of 10,000 tracks, an average of only 119 tracks is distributed to each SM of Cori-V100. Therefore, only 2 blocks are resident on the SM when the block size is 8$\times$8$\times$1 (128 resident threads), and at most 1 block is resident when the block size is 16$\times$16$\times$1 (256 threads) or 32$\times$32$\times$1 (1024 threads). These correspond to an occupancy of 6.25\,\%, 12.5\,\%, and 50\,\%, respectively. In this case, reducing the number of registers per thread has no impact on the warp occupancy. Event-level parallelization is an effective approach to increase the workload, and hence improve the SM warp occupancy. This can be achieved by an offloading pattern with a fully contained chain of tracking modules that runs on the GPU, requiring minimal data transfer between CPU and GPU~\cite{bocci2020heterogeneous}. The workload also needs to be accounted for when analyzing the impact of the SM warp occupancy on the performance. For instance, better warp occupancy does not necessarily correspond to better timing performance, as shown in Fig.~\ref{fig:max-registers}, for the particular track fitting workload of 10,000 tracks studied here. A further discrepancy between the achieved occupancy and the theoretical one arises from the imbalanced track fitting workload both within and across blocks, due to the different momenta, and hence propagation paths, of the tracks. In this paper, the workload imbalances were already controlled by using a homogeneous detector geometry for all the tracks. For a realistic detector, the concept of tracking regions as presented in Ref.~\cite{Lantz_2020,Cerati_2020} can be used for the parallelization of track reconstruction. In the case of track finding, there is additional workload imbalance from the selection of compatible measurements from a pool of a non-static number of measurements on a detector surface, and from possibly splitting the track propagation into multiple branches if more than one compatible measurement is found. One possible approach to suppress the workload imbalance would be to group the tracks based on their kinematic properties, so that one group of tracks encounters the same segmented detector region, and to assign different groups of tracks to different grids or blocks. See Ref.~\cite{Lantz_2020,Cerati_2020} for further discussion.
\begin{figure}[!htb] \centering \includegraphics[width=1\linewidth]{float_gpu_occupancy.pdf} \caption{Warp occupancy levels using various numbers of registers per thread and block sizes for fitting 10,000 tracks executed in one stream with a linear grid of size 5,120$\times$1 on Cori-V100. The black dashed line is the theoretical warp occupancy, and the blue circles, red crosses and green triangles represent the achieved warp occupancy with block sizes of 8$\times$8$\times$1, 16$\times$16$\times$1 and 32$\times$32$\times$1, respectively.
When there are 1024 threads per block, a maximum of 64 registers per thread is allowed} \label{fig:occupancy} \end{figure}
\section{Introduction} \label{sec:intro}
The reconstruction of the trajectories of charged particles for High-Energy Physics (HEP) experiments is a very computationally demanding task, which is performed both when selecting events in real time with the online \emph{trigger} and during the subsequent high-precision offline reconstruction of events for physics analysis. The most commonly used techniques are adaptive methods based on the Kalman filter~\cite{Billoir:1983mz,Fruhwirth:1987fm}, which account for the trajectories of charged particles in magnetic fields and the energy loss of charged particles in the detector material. See Ref.~\cite{RevModPhys.82.1419} for a review. As the execution time of such algorithms explodes combinatorially with the number of charged particles, the advent of the upgrade to the Large Hadron Collider (LHC), the High-Luminosity LHC (HL-LHC), portends an even greater challenge, with events containing up to 10,000 tracks.
For many years, HEP has relied on Moore's Law~\cite{moore}, the observation that the number of transistors on an integrated circuit doubles approximately every two years. As circuits approach intrinsic limits in terms of density and power, Moore's Law has begun to slow, further complicating potential performance improvements~\cite{Shalf2020TheFO}. In addition, other computing architectures have become increasingly powerful and hence popular, such as graphical processing units (GPUs) and field-programmable gate arrays (FPGAs). Therefore, there has been a shift towards achieving speed improvements by adding additional cores, particularly at high-performance computing centers. These many-core systems require highly parallel code to be fully exploited, requiring additional knowledge from software developers. Moreover, much of the existing code for high-energy physics experiments is not well-suited to such architectures and hence requires significant development and adaptation to be able to exploit them. Porting algorithms to GPUs typically requires specialized code redesign and optimization, whereas performance gains through vectorization using Single Instruction Multiple Data (SIMD) instructions and parallelization on many-core CPU architectures often require less significant changes to the code base.
Several HEP experiments have leveraged the power of many-core systems for real-time online and/or offline track reconstruction~\cite{Cerati_2014,Cerati_2017,Cerati_2020,Lantz_2020,Kisel_2018}. These studies have demonstrated good scalability of the throughput of events per second with the number of CPU cores. GPU-accelerated track reconstruction has also been studied. For example, both the ALICE~\cite{ALICE_2008} and LHCb~\cite{LHCb_2008} experiments at the LHC have proposed a GPU-based High Level Trigger (HLT) to handle the much increased data rate expected during Run 3 of the LHC~\cite{rohr2018track,Rohr_2019,Aaij2020}. In particular, LHCb has implemented a fully GPU-based high-throughput HLT framework, which processes a data rate of up to 40 Tbit/s using approximately 500 GPUs~\cite{Aaij2020}. In these studies, ALICE and LHCb used a simplified or parametrized Kalman filter for track fitting for maximum speed, with some impact on track resolution compared to offline track reconstruction using a full Kalman filter.
The level of resolution loss is either acceptable for the online identification of interesting events for further offline analysis~\cite{Aaij2020} or is recovered through dedicated optimization of the HLT tracking algorithms~\cite{rohr2018track}. Initial studies of porting a full Kalman filter to GPUs can be found in Ref.~\cite{Cerati_2017}. Other track finding algorithms such as the Cellular Automaton and Hough Transform have also been studied on GPUs~\cite{Funke_2014,rinaldi2015gpgpu}. GPUs are also used for accelerating other steps of online event processing at HEP experiments, e.g.~cluster finding~\cite{Aaij2020,bocci2020heterogeneous}, vertex reconstruction~\cite{Bruch_2017} and event selection~\cite{Sen_2015}; Ref.~\cite{Bruch_2020} presents a recent review of applications of GPUs for online event processing in HEP. The trend is generally towards bringing the full reconstruction chain to GPUs in order to minimize the penalties from intermediate data transfer between host and GPUs (see Ref.~\cite{Aaij2020,bocci2020heterogeneous}). Beyond HEP, GPU-accelerated Kalman filtering has been explored for a range of applications~\cite{Huang_2011,Xu_2016}. However, these use cases tend to focus on much larger (up to three orders of magnitude) matrix sizes than are typical in HEP applications, and so the direct applicability is limited.
We present a proof-of-concept of a full Kalman filter algorithm on GPUs utilizing A Common Tracking Software (ACTS)~\cite{Gumpert:2243297,Ai:2019kze,Gessinger:2020nne,Ai:2020jbw,ai2021common}, which provides a toolkit of algorithms for track reconstruction within a generic, framework- and experiment-independent software package. Detailed studies of the physics and technical performance are presented for two different GPU architectures and compared to the performance on CPUs. In particular, we identify and discuss the key challenges in the implementation and highlight future directions towards the development of an even more performant full Kalman filter algorithm.
\section*{Declarations}
\paragraph{Funding} This work was funded by the NSF under Cooperative Agreement OAC-1836650, and supported by DASHH under Grant No.\ HIDSS-0002.
\paragraph{Conflict of interest} The authors declare that they have no conflict of interest.
\paragraph{Availability of data and material} Not applicable. No associated data except for code.
\paragraph{Code availability} The code used for this research (including a Singularity container for reproducibility) is available open source~\cite{xiaocong_ai_2021_4693389}.
\bibliographystyle{spmpsci}
\section{Parallelization and Offloading Techniques}
There is a wide range of tools and frameworks available that can improve the runtime performance of scientific code via parallelization and offloading. Two of the most widely used frameworks are Open Multi-Processing (OpenMP)~\cite{660313} and Compute Unified Device Architecture (CUDA)~\cite{cuda_doc}. While the former traditionally allows parallelization on CPU systems via multiple threads, the latter is used to offload parts of the code to massively parallel Nvidia GPUs.
\subsection{OpenMP}
OpenMP is a compiler-based high-level approach for thread parallelization on shared-memory architectures. One of its outstanding features is that it is very easy to use and does not require knowledge of threading and operating system internals~\cite{DBLP:journals/ieeecc/Clark98}. It is available for C, C++ and Fortran, some of the most widely used programming languages for scientific computing.
OpenMP achieves its simplicity by being integrated into the compiler, which facilitates the parallelization of applications. Compiler support is widely available, which allows it to be used on personal computers as well as supercomputers. Applications are annotated with so-called \emph{pragmas} and turned into parallel code by the compiler; for instance, a simple \verb!#pragma omp parallel for! pragma instructs the compiler to parallelize the subsequent \emph{for} loop. OpenMP takes care of thread management and scheduling as well as data decomposition, which allows developers to focus on the problem they want to solve. One of OpenMP's drawbacks has been its focus on CPU-based parallelism. However, it has recently been extended with improved offloading functionality that allows the compiler to offload certain parts of an application to accelerators such as GPUs and FPGAs. Consequently, OpenMP can now target both CPUs and GPUs, which offers better portability than vendor-specific approaches such as CUDA~\cite{DBLP:conf/iwomp/DaleyAWW20}.
\subsection{CUDA} \label{cuda}
CUDA is a parallel computing platform and application programming interface introduced by Nvidia for their line of GPUs~\cite{DBLP:journals/queue/NickollsBGS08}. It allows highly parallel GPUs to be used for general-purpose computations such as those common in high-energy physics. It is available for a range of programming languages, including C, C++, and Fortran. Wrappers are also available for additional programming languages, such as Python, R, Julia and many others.
GPUs are very specialized processing units and feature a high number of computing cores, which can be leveraged for scientific computations. Programs are offloaded to the GPUs in the form of so-called \emph{compute kernels}, that is, single functions and their associated data. A kernel executes in parallel across a set of threads, which can use per-thread registers. Moreover, threads are aggregated into so-called warps that are executed concurrently. Several of these warps can be grouped into a thread block, which has access to a fast region of shared memory that all threads within the block can access. Finally, thread blocks can be combined into grids by the programmer. Thread blocks in a grid can only share data via global memory. Details of the CUDA programming model can be found in Ref.~\cite{cuda_program}.
CUDA extends existing languages and requires dedicated compilers (\texttt{nvcc} for C/C++ and \texttt{nvfortran} for Fortran). While it allows for the optimal use of Nvidia GPUs, it is not portable and cannot be used for GPUs produced by other vendors. However, there have been attempts to provide abstraction layers or conversion tools for other approaches to be able to run CUDA code via OpenCL~\cite{DBLP:journals/pc/DuWLTPD12,DBLP:conf/iwocl/BabejJ20}. There are also a variety of libraries that automatically offload compute-intensive operations to the GPUs. Examples include libraries such as cuBLAS for linear algebra and cuFFT for fast Fourier transforms~\cite{7476520}. Competing approaches including OpenACC~\cite{DBLP:conf/sc/HerdmanGPBMJ14} and OpenCL are available but have not been as widely adopted so far.
For the parallel GPU implementation presented in this paper, we have chosen CUDA because it is the de-facto standard for GPU-accelerated code and is widely supported. Our attempts to develop a GPU-accelerated solution using OpenMP have not been successful so far due to offloading support in OpenMP still being in an early stage of development.
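As an illustration of the pragma-based model described above, the sketch below parallelizes a hypothetical per-track loop on the CPU and, alternatively, offloads it to an accelerator using the OpenMP \texttt{target} constructs (OpenMP 4.5 and later). The function and data names are placeholders, not part of the presented implementation.
\begin{lstlisting}[language=C++, caption={Sketch of OpenMP CPU parallelization and accelerator offloading (illustrative)}, label={lst:omp_sketch}]
#include <cstddef>

// CPU parallelization: one independent iteration per thread.
void fitTracksCPU(float *params, std::size_t nTracks) {
  #pragma omp parallel for
  for (std::size_t i = 0; i < nTracks; ++i) {
    params[i] *= 1.0f;  // placeholder for a per-track fit
  }
}

// Accelerator offloading (OpenMP 4.5+): the same loop body is
// mapped to the device, together with the data it needs.
void fitTracksOffload(float *params, std::size_t nTracks) {
  #pragma omp target teams distribute parallel for \
      map(tofrom: params[0:nTracks])
  for (std::size_t i = 0; i < nTracks; ++i) {
    params[i] *= 1.0f;  // placeholder for a per-track fit
  }
}
\end{lstlisting}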
\section{Performance Evaluation} \label{sec:perf}
The software performance is studied using a simple telescope-like detector geometry with 10 planar surfaces perpendicular to the global $x$ axis and placed equidistantly with 30\,mm between two adjacent planes. Realistic HEP track reconstruction applications typically involve a more complicated detector geometry. A constant magnetic field of 2\,T along the global $z$ axis is used. Samples containing a single muon per event are used for the performance evaluation. Muons are used for this study because they interact only minimally with the detector material and thus high-quality track fits are expected. The muons have a transverse momentum uniformly distributed between 1 and 10\,GeV, with both the azimuthal angle and the polar angle fixed to zero. The Fatras fast simulation engine~\cite{Edmonds:1091969} within the ACTS toolkit is used to generate simulated hits of the single muons on the detector surfaces. Figure~\ref{fig:vis_simulation} illustrates the simulated hits on the detector surfaces for a sample of 10,000 single muons. The dense tracking environment at the HL-LHC is not expected to exceed 10,000 tracks per event. As the pattern recognition algorithms required to find track candidates from measurements are beyond the scope of this paper, the known trajectories of the simulated particles are used for the fitting in place of track candidates provided by a pattern recognition step. The measurements corresponding to the track candidates are obtained by smearing the positions of the simulated hits with Gaussian distributions to model detector resolution effects. A resolution of 50\,$\mu$m in both the $x$ and $y$ dimensions of the detector, representative of the resolution of current pixel detectors at the LHC, is used. The initial set of track parameters for the track fit is based on the simulated particle vertex and momentum, smeared by Gaussian noise.
\begin{figure}[!htb] \centering \includegraphics[width=1\linewidth]{visual_simulation.png} \caption{The detector configuration used to study the performance. It consists of 10 identical planar surfaces (gray planes) perpendicular to the $x$-axis. The trajectories of 10,000 simulated single muons (blue lines) and the associated simulated hits on the detector surfaces (orange dots) are indicated} \label{fig:vis_simulation} \end{figure}
\subsection{Hardware and Software Environment}
Computing nodes of two supercomputers are used for running the performance tests:
\begin{enumerate} \item Cori Intel Xeon Haswell node, Cori Intel Xeon Phi Knights Landing (KNL) node, and Cori GPU node at the National Energy Research Scientific Computing Center (NERSC)~\cite{cori-specs} \item Intel Xeon Skylake (SL) node and ATLAS-GPU01 node at the National Analysis Facility (NAF) at DESY \end{enumerate}
Only GPUs from the Nvidia Tesla family are studied. All nodes use the CentOS 7 operating system and all GPUs use CUDA version 10.2.89. Tables~\ref{Tab:cpu-config} and~\ref{Tab:gpu-config} show the detailed hardware and software configurations of the systems for CPUs and GPUs, respectively.
\begin{table}[h!] \begin{tabular}{|p{0.06\linewidth}|p{0.24\linewidth}| p{0.15\linewidth}|p{0.14\linewidth}|p{0.1\linewidth}|} \hline \textbf{Sys.} & \textbf{Model Name} & \textbf{S$\times$C$\times$T} & \textbf{Clock Rate (GHz)} & \textbf{Mem.
(GB)} \\ \hline \hline 1 & Intel Xeon E5-2698 v3 \newline (Cori-Haswell) & 2x16x2 & 2.30 & 128 \\ \hline 1 & Intel Xeon Phi 7250 (Cori-KNL) & 1x68x4 & 1.40 & 96 \\ \hline 2 & Intel Xeon Gold 5115 (NAF-SL) & 2x10x2 & 2.40 & 376 \\ \hline \end{tabular} \caption{CPU configurations. The Sys. column specifies whether the CPUs are used in the NERSC (1) or NAF (2) system. The S$\times$C$\times$T column represents the number of sockets (S), cores per socket (C) and threads per core (T)} \label{Tab:cpu-config} \end{table}
\begin{table}[h!] \begin{tabular}{|p{0.06\linewidth}|p{0.21\linewidth}|p{0.1\linewidth}|p{0.1\linewidth}| p{0.1\linewidth}| p{0.1\linewidth}|p{0.1\linewidth}|} \hline \textbf{Sys.} & \textbf{GPU} & \textbf{FP32 Cores} & \textbf{FP64 Cores} & \textbf{Clock Rate (GHz)} & \textbf{Mem. (GB)}\\ \hline \hline 1 & GV100-SXM2 \newline (Cori-V100)& 5120 & 2560 & 1.53 & 16 \\ \hline 2 & GP100-PCIe \newline (NAF-P100)& 3584 & 1792 & 1.48 & 16\\ \hline 2 & GV100-SXM2 \newline (NAF-V100) & 5120 & 2560 & 1.53 & 32 \\ \hline \end{tabular} \caption{GPU configurations. The Sys. column specifies whether the GPUs are used in the NERSC (1) or NAF (2) system. The FP32 and FP64 columns denote the numbers of floating point compute units for single and double precision arithmetic operations, respectively} \label{Tab:gpu-config} \end{table}
These machines cover a wide range of architectures, with Cori-Haswell representing a standard compute node with two processors and a moderate number of cores (for a total of 64 threads), while Cori-KNL features a higher core count due to the Xeon Phi's particular architecture (for a total of 272 threads). Moreover, NAF-P100 and NAF-V100 allow two different GPU generations to be compared, with NAF-P100 being from the older Pascal architecture and NAF-V100 belonging to the newer Volta architecture. The NAF-P100 is connected through a Peripheral Component Interconnect Express (PCIe) serial connector, while the NAF-V100 uses the SXM2 connector, a multi-line serial connector that provides both Nvidia NVLink and PCIe connectivity~\cite{nvidia-doc}. Cori-V100 is identical to NAF-V100 except for a smaller amount of main memory. Each of the GPUs contains multiple Streaming Multiprocessors (SMs), which are loosely analogous to CPU cores. However, each GPU is equipped with a large number of SMs, specifically up to 60 SMs for the P100 and up to 84 SMs for the V100. Each SM contains many CUDA Cores, which execute compute kernels in the form of threads (64 single precision/32 double precision cores per SM for both P100 and V100). Each warp consists of 32 threads, with each warp running on one SM and each SM being able to execute up to 64 warps simultaneously. Due to the large total number of cores, it is important to distribute work across a sufficient number of warps. This allows the dedicated warp schedulers to achieve maximum utilization by keeping the cores busy with instructions. While on the P100 all threads in a warp share a single program counter (and therefore have to execute the same instruction at the same time), the V100 manages execution state per thread, allowing more independence.
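For reference, the GPU properties summarized in Table~\ref{Tab:gpu-config} can be queried directly from the CUDA runtime; a minimal sketch:
\begin{lstlisting}[language=C++, caption={Querying device properties with the CUDA runtime API}, label={lst:devquery}]
#include <cstdio>
#include <cuda_runtime.h>

int main() {
  int nDevices = 0;
  cudaGetDeviceCount(&nDevices);
  for (int d = 0; d < nDevices; ++d) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, d);
    // clockRate is reported in kHz, totalGlobalMem in bytes.
    std::printf("%s: %d SMs, %.2f GHz, %.0f GB\n", prop.name,
                prop.multiProcessorCount, prop.clockRate / 1.0e6,
                prop.totalGlobalMem / 1073741824.0);
  }
  return 0;
}
\end{lstlisting}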
\subsection{Tracking Performance} \label{subsec:trk_perf}
The resolution of the Kalman filter-based track fit is validated by calculating the pulls of the track parameters as follows:
\begin{equation} v_{pull} = \frac{v_{fit} - v_{truth}}{\sigma_{v}}, \end{equation}
where $v_{fit}$ and $\sigma_{v}$ are the value and uncertainty of the fitted track parameter, respectively, and $v_{truth}$ is the corresponding parameter of the simulated particle. The pull distributions of the six fitted perigee track parameters for a simulated sample of 10,000 single muons obtained from Cori-V100 are shown in Fig.~\ref{fig:pull_muon}. The pull distributions have means compatible with zero and widths compatible with one, which demonstrates that the track parameters and their uncertainties are estimated correctly by the track fit.
\begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{fitted_param_Tesla_V100-SXM2-16GB_nTracks_10000.pdf} \caption{The distributions of the pull values of the fitted perigee track parameters, $(d_0, z_0, \phi, \theta, \frac{q}{p}, t)$, for a sample of 10,000 single muons obtained from Cori-V100. The black dots are the pull values, and the blue lines are Gaussian fits to the distributions} \label{fig:pull_muon} \end{figure*}
\subsection{Computing Performance} \label{subsec:tech_perf}
The timing performance of the track fitting for different numbers of tracks on various computing architectures and configurations is measured. Ten tests are run for each measurement. The mean time of the ten tests is taken as the measurement result, and the square root of the average of their squared deviations from the mean is taken as the measurement error. The baseline tests are performed using only track-level parallelization, i.e.~without intra-track parallelization using shared memory. CUDA supports user configuration of the runtime properties of the GPU kernels, e.g.~the grid size and block size, and the launching of multiple kernels with multiple CUDA streams. Furthermore, the number of registers per thread can be controlled via the CUDA \verb!__launch_bounds__()! qualifier when the kernel is defined. Unless explicitly specified, the tests are performed using one CUDA stream with 255 registers per thread as the default configuration~\cite{nvidia-doc}. Moreover, we choose a grid size of 5120$\times$1 to match the number of processing units on the V100 GPU, and a block size of 8$\times$8$\times$1, which also matches the largest matrix size handled by the Kalman filter in ACTS. Performance with intra-track parallelization and different CUDA configurations is also studied for comparison. All the tests use single precision arithmetic operations with the exception of (a) the matrix inversion algorithm required by the smoother, as detailed in Sect.~\ref{subsubsec:linear_algebra_support}, and (b) the explicit scenario comparing the different precisions, described in Sect.~\ref{subsubsec:cpu_vs_gpu_section}. A Singularity container with the executable and the required dependencies used to produce the results presented here is accessible in Ref.~\cite{xiaocong_ai_2021_4693389}.
\subsubsection{Performance of the Custom Matrix Inversion Algorithm}
Because the Eigen-based matrix inversion algorithm used by ACTS cannot be called inside CUDA kernels, a custom algorithm for matrix inversion implemented for this purpose is used. Measurements are performed to compare our custom implementation to the Eigen-based implementation on the CPUs of both systems.
As shown in Fig.~\ref{fig:inverters}, the custom inverter adds additional fitting time once the number of tracks exceeds 100. While the Eigen-based implementation is significantly faster by a constant factor when using only one thread, this effect is much less pronounced when many threads are used. Improving the performance of the custom matrix inversion on GPUs to match that of specialized linear algebra libraries would therefore be expected to further improve the overall performance. \begin{figure}[!htb] \centering \includegraphics[width=1\linewidth]{float_inverters.pdf} \caption{The fitting time as a function of the number of tracks with different matrix inversion algorithms on Cori-Haswell (dashed blue for custom matrix inversion, and solid blue for Eigen-based matrix inversion) and Cori-KNL (dashed red for custom matrix inversion, and solid red for Eigen-based matrix inversion). The top panel shows the results with 60 and 250 threads on Cori-Haswell and Cori-KNL, respectively, and the bottom panel shows the results with a single thread} \label{fig:inverters} \end{figure} \subsubsection{Performance of CUDA Code on CPU} The GPU-based track fitting program compiled with \texttt{nvcc} can also run on CPUs (although the CUDA driver and CUDA runtime need to be accessible in order to successfully allocate page-locked memory on the host). In this case, the track-level parallelization is achieved using OpenMP threads, and the host-device model, including memory and execution offloading, is bypassed. The \texttt{nvcc} compiler uses the host compiler (\texttt{gcc} in this case) to generate the executable. This approach yields comparable execution times when the number of tracks is small, but induces a small performance penalty (between 4\,\% and 18\,\%) compared with the standard C++ implementation compiled with OpenMP support once the number of tracks exceeds 1000, as shown in Fig.~\ref{fig:cuda_on_cpu}. Nevertheless, it demonstrates the potential of single-source code targeting heterogeneous hardware resources. This is especially important for large and long-running software projects that might be used on different hardware architectures. However, this approach still requires the code to be written in CUDA and implies the portability limitations discussed in Section~\ref{cuda}. \begin{figure}[!htb] \centering \includegraphics[width=1\linewidth]{float_cuda-cpp-cpu.pdf} \caption{The fitting time as a function of the number of tracks on NAF-SL using 60 threads with Eigen-based matrix inversion obtained with the \texttt{gcc} (solid blue) and \texttt{nvcc} (dashed red) compiled executables, respectively} \label{fig:cuda_on_cpu} \end{figure} \subsubsection{Performance Comparison Between GPU Architectures} The performance of the Kalman filter-based track fitting on the P100 and V100 Nvidia Tesla cards is compared. Figure~\ref{fig:gpus} shows that the fitting time on NAF-P100 is more than a factor of two longer than on Cori-V100; the following tests therefore focus on the V100 due to its superior performance. These performance differences are expected to a certain extent because the P100 and the V100 come from different hardware generations, with the V100 generally featuring more cores, higher clock rates and an improved interconnect.
\begin{figure}[!htb] \centering \includegraphics[width=1\linewidth]{float_gpu_var.pdf} \caption{The fitting time as a function of the number of tracks on NAF-P100 (dashed red) and Cori-V100 (solid blue)} \label{fig:gpus} \end{figure} \subsubsection{Performance Comparison Between CPU and GPU} \label{subsubsec:cpu_vs_gpu_section} Figure~\ref{fig:cpu-vs-gpu} compares the fitting time on Cori-V100 with that on Cori-Haswell using different approaches to the matrix inversion. As mentioned previously, only the custom matrix inversion is possible on Cori-V100. With the custom matrix inverter, Cori-V100 displays superior performance compared to Cori-Haswell when the number of tracks exceeds 1000. However, the Eigen-based implementation on CPUs still outperforms our custom inverter on GPUs, demonstrating that it is important not only to consider the potential benefits of porting the actual code to GPUs, but also to take supporting libraries into account. \begin{figure}[!htb] \centering \includegraphics[width=1\linewidth]{float_cpu-vs-gpu.pdf} \caption{The fitting time as a function of the number of tracks on Cori-Haswell using 60 threads with Eigen-based matrix inversion (dashed blue) and custom matrix inversion (dotted blue), and on Cori-V100 (solid red)} \label{fig:cpu-vs-gpu} \end{figure} Our matrix inversion algorithm performs all the operations in double precision regardless of the operands' types. Figure~\ref{fig:float_vs_double} shows that there is little variation in performance between the different operand types: when running the code on Cori-Haswell, there is virtually no performance difference for more than 1000 tracks, while double operands are slightly slower on Cori-V100. \begin{figure}[!htb] \centering \includegraphics[width=1\linewidth]{float_vs_double.pdf} \caption{The fitting time as a function of the number of tracks using \texttt{float} and \texttt{double} operands on Cori-Haswell (solid blue for float, and dotted blue for double) and Cori-V100 (dashed red for float, and dotted red for double). The tests are performed using the custom matrix inversion, and the ones executed on Cori-Haswell use 60 OpenMP threads} \label{fig:float_vs_double} \end{figure} As a consequence of parallel execution, the fitting time per track varies inversely with the number of tracks on all the considered platforms. In particular, for an HL-LHC scenario with up to 10,000 tracks, both the current prototype on GPUs and the Eigen-based implementation on CPUs show a fitting time per track on the order of microseconds. \subsubsection{Performance with Different GPU Configurations} The impact of intra-track parallelization with shared memory, and of variations in the number of streams per device, the grid size, the block size and the number of fitted tracks, is investigated; the most important results are discussed next. Figure~\ref{fig:memory-var} shows that a performance gain can be achieved by using intra-track parallelization with shared memory when the number of tracks is below 1000. However, when the number of tracks exceeds 1000, intra-track parallelization results in a performance penalty due to the limited shared memory and number of resident threads per SM. Figure~\ref{fig:nStreams-var-float} shows that no significant performance gain is obtained from using more streams per device.
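For reference, distributing the fit over several streams follows the usual CUDA pattern sketched below; the kernel, buffer and type names are placeholders rather than the actual ACTS code:

\begin{verbatim}
// Illustrative multi-stream offload; TrackData, TrackState and
// fitKernel are placeholders, not the actual ACTS types/kernels.
#include <cuda_runtime.h>

struct TrackData  { float params[8]; };   // placeholder
struct TrackState { float params[8]; };   // placeholder

__global__ void fitKernel(const TrackData* in, TrackState* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i].params[0] = in[i].params[0];  // placeholder fit
}

// hIn/hOut must be page-locked (cudaMallocHost) for async copies.
void fitWithStreams(const TrackData* hIn, TrackState* hOut,
                    TrackData* dIn, TrackState* dOut, int nTracks) {
    const int kStreams = 4;
    cudaStream_t streams[kStreams];
    for (int s = 0; s < kStreams; ++s) cudaStreamCreate(&streams[s]);

    dim3 grid(5120);   // cf. the 5120x1 grid used in the text
    dim3 block(8, 8);  // cf. the 8x8x1 block size
    const int chunk = nTracks / kStreams;  // assume divisible for brevity
    for (int s = 0; s < kStreams; ++s) {
        const size_t off = static_cast<size_t>(s) * chunk;
        cudaMemcpyAsync(dIn + off, hIn + off, chunk * sizeof(TrackData),
                        cudaMemcpyHostToDevice, streams[s]);
        fitKernel<<<grid, block, 0, streams[s]>>>(dIn + off, dOut + off,
                                                  chunk);
        cudaMemcpyAsync(hOut + off, dOut + off, chunk * sizeof(TrackState),
                        cudaMemcpyDeviceToHost, streams[s]);
    }
    for (int s = 0; s < kStreams; ++s) cudaStreamSynchronize(streams[s]);
}
\end{verbatim}

Copies and kernels issued to different streams may overlap, which is exactly the effect probed in Fig.~\ref{fig:nStreams-var-float}.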
\begin{figure}[!htb] \centering \includegraphics[width=1\linewidth]{float_memory_var_nStreams_var.pdf} \caption{The fitting time as a function of the number of tracks with a linear grid of size 100,000$\times$1, with (dashed red) or without (solid blue) intra-track parallelization on Cori-V100} \label{fig:memory-var} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=1\linewidth]{float_gridSize_var_nStreams_var.pdf} \caption{The fitting time as a function of the number of tracks with linear grids of sizes 5120$\times$1 (top) and 100,000$\times$1 (bottom), with one (solid blue) or four (dashed red) streams per device on Cori-V100} \label{fig:nStreams-var-float} \end{figure} Figure~\ref{fig:grid-block-vars} shows the required wall-clock time with different grid and block sizes for 10,000 tracks. Note that when there are 1024 threads per block, the CUDA \verb!__launch_bounds__()! qualifier must be specified with a maximum number of threads per block of at least 1024 to avoid ``too many resources requested for launch'' errors. This results in a maximum of 64 registers per thread on Cori-V100, which has 65,536 32-bit registers per SM (see the code sketch below). While the performance differences between the two grid sizes are negligible, larger block sizes increase the runtime by up to a factor of 1.5. These results show that it is important to choose block sizes appropriate for the underlying hardware: when the overall workload is not large enough to saturate all the SMs on the GPU, a relatively large block size can further imbalance the workload distributed to the SMs and hence compromise the performance. \begin{figure}[!htb] \centering \includegraphics[width=1\linewidth]{float_gridSize_var_nStreams_var_blockSize_var.pdf} \caption{The fitting time for 10,000 tracks as a function of two-dimensional (left) or one-dimensional (right) block sizes with linear grids of sizes 5120$\times$1 (top) and 100,000$\times$1 (bottom), with one (blue) or four (red) streams per device on Cori-V100} \label{fig:grid-block-vars} \end{figure} The number of registers per thread can be reduced with the goal of running more threads per block without exceeding the hardware limits (i.e. the maximum number of blocks per SM). Figure~\ref{fig:max-registers} shows how varying the number of registers per thread affects the overall performance for a track fitting workload of 10,000 tracks. The performance varies little with the number of registers per thread. This is because the number of resident threads on the SMs is not increased by reducing the number of registers per thread when the overall workload is small. Increasing the block size forces at least one block's worth of threads to be resident on some SMs, but this can compromise performance through inefficient GPU utilization. See Section~\ref{sec:disc} for further discussion. \begin{figure}[!htb] \centering \includegraphics[width=1\linewidth]{float_max_registers.pdf} \caption{The fitting time for 10,000 tracks using various numbers of registers per thread and block sizes on the Cori-V100 node. The blue circles, red crosses and green triangles represent block sizes of 8$\times$8$\times$1, 16$\times$16$\times$1 and 32$\times$32$\times$1, respectively. When there are 1024 threads per block, a maximum of 64 registers per thread is allowed} \label{fig:max-registers} \end{figure}
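The register budget is set at kernel-definition time via the \verb!__launch_bounds__()! qualifier mentioned above. A minimal illustration, reusing the placeholder types from the previous sketch and again not the actual ACTS kernel, is:

\begin{verbatim}
// Illustrative use of __launch_bounds__ (placeholder kernel).
// With 1024 threads per block and 65,536 32-bit registers per SM,
// at most 64 registers per thread are available.
__global__ void __launch_bounds__(1024)  // max threads per block
fitKernelCapped(const TrackData* in, TrackState* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) { /* per-track Kalman fit ... */ }
}
\end{verbatim}

Alternatively, nvcc's \texttt{--maxrregcount} option caps the register usage of all kernels in a compilation unit.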
The different components of the fitting time are analysed using Nsight Systems, one of Nvidia's new performance analysis tools~\cite{DBLP:journals/superfri/KnoblochM20}. Figure~\ref{fig:v100_timeline} shows the timeline of the CUDA API activities required for GPU offloading, including the memory allocation on the GPU, kernel launching, device synchronization and memory deallocation on the GPU, as well as the timeline of memory transfers and kernel execution in either one CUDA stream or multiple streams for 10,000 tracks. When using only a single stream, kernel execution accounts for roughly 70\,\% of the total runtime, and memory transfer accounts for roughly 17\,\%, with a significant impact on the performance. The performance gain from overlapping the data transfer and kernel execution with multiple streams is limited by the memory transfer time. A more fine-grained CUDA synchronization scheme might improve this. \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{CUDA_timeline_profiling_nTracks10k_gridsize5120_shared0.pdf} \caption{The timeline of CUDA activities for offloading track fitting for 10,000 tracks to Cori-V100 with one stream (top) and four streams (bottom). The starting times of memory allocation on the GPU are taken as the $0$ point of the timelines} \label{fig:v100_timeline} \end{figure} \section{The Kalman Filter and ACTS} \label{sec:kf} Track reconstruction is typically a multi-stage procedure, wherein candidates can be rejected at each stage. This approach allows high reconstruction efficiency and purity to be achieved in the final output collection, while reducing the overhead from processing unwanted candidates further than necessary. It starts from measurements (deposited energy in sensitive elements of the detector) and combines them in various configurations (including appropriate calibrations at various stages) to form plausible candidate trajectories. Accurate estimates of the parameters that define the mathematical form used to describe the trajectory are then made. After any required pre-processing of the raw measurements, a typical first step is \emph{Seeding}, in which small sets of compatible measurements are grouped using simple criteria and an initial trajectory estimate is made. Seeds passing requirements can then be used as the basis for \emph{Track Finding}, in which additional compatible measurements are added to the trajectory through the detector. Once the full set of measurements for the trajectory is obtained, a \emph{Track Fitting} step can be performed in order to precisely estimate the parameters and their covariance. A commonly used approach and important tool in many track reconstruction applications is the Kalman filter~\cite{Billoir:1983mz,Fruhwirth:1987fm}. The Kalman filter was developed in the late 1950s and first applied in ballistics~\cite{kalman1960}, where it allowed telemetry data on the heading and acceleration of a projectile to be combined with information on its location. The generalized procedure, in which measurements are combined with predictions based on an underlying model, results in state estimates more precise than either the measurements or the predictions alone, and has since been very widely used in many fields. Within track reconstruction, a typical Kalman filter \emph{step} proceeds as follows (see Fig.~\ref{fig:kalmanfilter}): \begin{enumerate} \item An initial estimate of the track state (i.e. helix parameters) at a given position is taken as the starting point \item This track state is propagated according to the track model on to the next \emph{Measurement Surface} (i.e.
the reference plane of a sensitive detector), providing a prediction of the track state on this surface \item The prediction is combined with the measurement at this surface, if present, either through a weighted average or through the so-called \emph{Gain Matrix} formalism, forming a new track state which is used to update the initial estimate \item This new estimate is then used for further Kalman steps, up to the end of the trajectory \end{enumerate} \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{filter_v2.pdf} \caption{Illustration of the different steps of the Kalman filter using a simplified detector model consisting of only two layers. (Left) The track state at the $(k-1)$th surface is indicated with the light green ellipse, and a measurement on the $k$th surface is indicated in red. (Center) A prediction is made for the track state on the $k$th surface, and the prediction and its uncertainty are indicated with the dark green ellipse. (Right) The track state on the $k$th surface is updated by including the measurement on the $k$th surface \label{fig:kalmanfilter}} \end{figure*} The Kalman procedure has the property that the next state estimate can be determined from the one preceding it. While this is a useful property in many cases, as it requires no `history' to be stored, it has the consequence that only the final state contains the full information about all the steps preceding it, and therefore the best possible precision. To allow the prior track states to benefit from this information (e.g. to allow a $\chi^2$ quality metric to be defined based on measurement residuals), an additional stage is needed. This \emph{Smoother} stage can be performed using one of two approaches: either using essentially the same procedure as the forward Kalman filter but in the reverse direction, as illustrated in Fig.~\ref{fig:smoother}, or using the Rauch-Tung-Striebel (RTS) smoother formalism~\cite{RTS} with the Jacobians between states stored during the forward Kalman filter steps. The latter approach does not require a second propagation of the track parameters and is therefore expected to have better timing performance. \begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{filtersmoother_v2.pdf} \caption{Illustration of a forward filter and a backwards smoother on a simplified four layer detector geometry. The red points indicate the measurements and their uncertainties on each layer. The green points indicate the predictions. The predictions from the forward filter (left) are obtained when the filter is run from left to right. The predictions from the backwards filter are obtained during a second pass of the filter when it is run from right to left\label{fig:smoother}} \end{figure} ACTS has its origins in the track reconstruction algorithms used by the ATLAS experiment~\cite{Aad:1129811}. In addition to a tracking toolkit, ACTS also includes a fast simulation package. The ACTS code is designed to be inherently thread-safe to support parallel code execution, and its data structures are vectorised. The implementation is fully agnostic to detection technologies, detector design, and software frameworks, so that it can be used by a range of experiments. The Eigen library~\cite{Guennebaud:2010aa} is used for algebra operations. In addition, ACTS is designed to be an R\&D platform for the development of new algorithms and the porting of existing algorithms to new hardware platforms. See Refs.~\cite{Ai:2020jbw,ai2021common} for further details.
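For reference, in the gain-matrix formalism the filter step described above takes the standard form (the notation here is generic and does not reflect the exact ACTS conventions):
\begin{align}
\mathbf{x}_k^{\mathrm{pred}} &= \mathbf{F}_{k-1}\,\mathbf{x}_{k-1}, \qquad
\mathbf{C}_k^{\mathrm{pred}} = \mathbf{F}_{k-1}\,\mathbf{C}_{k-1}\,\mathbf{F}_{k-1}^{T} + \mathbf{Q}_{k-1}, \nonumber\\
\mathbf{K}_k &= \mathbf{C}_k^{\mathrm{pred}}\,\mathbf{H}_k^{T}\bigl(\mathbf{V}_k + \mathbf{H}_k\,\mathbf{C}_k^{\mathrm{pred}}\,\mathbf{H}_k^{T}\bigr)^{-1}, \nonumber\\
\mathbf{x}_k &= \mathbf{x}_k^{\mathrm{pred}} + \mathbf{K}_k\bigl(\mathbf{m}_k - \mathbf{H}_k\,\mathbf{x}_k^{\mathrm{pred}}\bigr), \qquad
\mathbf{C}_k = \bigl(\mathbf{I} - \mathbf{K}_k\,\mathbf{H}_k\bigr)\,\mathbf{C}_k^{\mathrm{pred}},
\end{align}
where $\mathbf{x}_k$ and $\mathbf{C}_k$ are the track state and its covariance on the $k$th surface, $\mathbf{F}_{k-1}$ is the transport Jacobian of the track model, $\mathbf{Q}_{k-1}$ is the process noise (e.g. from material interactions), $\mathbf{m}_k$ and $\mathbf{V}_k$ are the measurement and its covariance, $\mathbf{H}_k$ is the measurement projection matrix, and $\mathbf{K}_k$ is the gain matrix. The matrix inversion appearing in the gain is one of the operations requiring the device-compatible inverter discussed earlier.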
While various representations of trajectories are possible, in this paper we focus on helical trajectories of charged particles in a solenoidal magnetic field, described using the following parameters: \begin{itemize} \item Two parameters $loc_0$ and $loc_1$ describing the spatial coordinates in the local frame of the measurement plane. In the special case of parameters defined at a perigee surface\footnote{A surface defined at the point of closest approach to a reference point, for example the nominal interaction point in a particle collider}, these become the transverse and longitudinal impact parameters $d_0$ and $z_0$, respectively. \item The azimuthal and polar angles of the particle momentum direction, $\phi$ and $\theta$, at that point. \item A curvature parameter, expressed as the ratio of charge to momentum, $\frac{q}{p}$. \item The time $t$. \end{itemize} Both the backwards-propagation and the RTS Kalman smoothing approaches are available within ACTS. The latter approach is used for the performance studies in this paper.
\section{Introduction} Assessing lesion growth across multiple time points is a major task for radiologists and oncologists. The sizes of lesions are important clinical indicators for monitoring disease progression and therapy response in oncology. A widely used guideline is RECIST (Response Evaluation Criteria In Solid Tumors)~\cite{eisenhauer2009new}, which requires users to first select an axial slice where the lesion has the largest spatial extent, then measure the longest diameter of the lesion (long axis), followed by its longest perpendicular diameter (short axis). This process is highly tedious and time-consuming. More importantly, it is prone to inconsistency between different observers~\cite{tang2018semi}, even among those with considerable clinical knowledge. Segmentation masks provide another quantitative and meaningful measure of lesion size; they are arguably more accurate and precise than RECIST diameters and avoid the subjectivity of selecting the long and short axes. However, it is impractical for radiologists to manually delineate the contour of every target lesion on a daily basis, owing to the heavy workload this would require. Deep learning based computer-aided diagnosis techniques \cite{wang2020knowledge,tang2020automated,tang2020e2net,yan2020learning,cai2020deep,yan2020self,tang2021disentangled,cheng2021scalable} have been extensively studied by researchers, including for automatic lesion segmentation. Most existing works focus on tumors of specific types, such as lung nodules~\cite{WANG2017172,jin2018ct}, liver tumors~\cite{li2018h,tang2020e2net}, and lymph nodes~\cite{zhu2020lymph}. However, radiologists often encounter different types of lesions when reading an image. Universal lesion segmentation~\cite{cai2018accurate,tang2018ct,tang2020one,agarwal2020weakly,tang2021ahrnet} and measurement~\cite{tang2018semi,tang2020one} have therefore drawn attention in recent years, aiming to learn from a large-scale dataset to handle a variety of lesions with one algorithm. These works leverage the NIH DeepLesion dataset~\cite{yan2018deeplesion}, which contains the RECIST annotations of over 30K lesions of various types. Among them, \cite{tang2018semi} requires users to draw a box around the lesion to indicate the lesion of interest. It first employs a spatial transformer network to normalize the lesion region, then adapts a stacked hourglass network to regress the four endpoints of the RECIST diameters. \cite{tang2020one} requires users only to click a point on or near the lesion, which is more convenient and efficient than \cite{tang2018semi}. It uses an improved Mask R-CNN to detect the lesion region and subsequently performs segmentation and RECIST diameter prediction \cite{tang2020one}. The user click information is fed into the model as an input together with the image. This strategy treats lesions of diverse sizes and shapes in the same way, and thus may not be optimal for locating the lesion region precisely. In this paper, we propose a novel framework named prior-guided dual-path network (PDNet). Following \cite{tang2020one}, given a 2D computed tomography (CT) slice and a click guidance in a lesion region, our goal is to segment the lesion and predict its RECIST diameters automatically and reliably. To achieve this goal, we adopt a two-stage framework. The first stage extracts the lesion of interest (LOI) by segmentation rather than by detection as in \cite{tang2020one}, since the detection results sometimes fail to cover the clicked lesion.
The second stage obtains the lesion segmentation and RECIST diameter prediction results from the extracted LOI. We propose a novel prior encoder to encode the click prior information into attention maps, which can deal with the considerable size and shape variations of lesions. We also design a scale-aware attention block with dual-path connections to improve the decoder. PDNet is evaluated on manually labeled lesion masks and RECIST diameters in the DeepLesion dataset~\cite{yan2018deeplesion}. To demonstrate the generalizability of our method, we additionally collected an external test set from 6 public lesion datasets covering 5 organs. Experimental results show that PDNet outperforms the previous state-of-the-art method~\cite{tang2020one} and a strong baseline, nnUNet~\cite{isensee2018nnu}, on both test sets, for both the lesion segmentation and RECIST diameter prediction tasks. \section{Methodology} Our framework includes two stages. The first stage extracts the lesion of interest (LOI) by segmentation; the second stage performs lesion segmentation and RECIST diameter prediction on the extracted LOI. A prior-guided dual-path network (PDNet) is proposed for the tasks at both stages. Fig. \ref{fig:framework} shows an overview of the proposed PDNet. It consists of three components: an image encoder, a prior encoder with click-driven attention, and a decoder with dual-path connections. \begin{figure*}[t!] \centering \includegraphics[width=0.99\linewidth]{framework.pdf} \caption{Overview of the proposed prior-guided dual-path network (PDNet).} \label{fig:framework} \end{figure*} \textbf{Image Encoder:} The image encoder aims to extract highly discriminative features from an input CT image. Representing features at multiple scales is also of great importance for our tasks. Recently, Zhang \textit{et al}. \cite{zhang2020resnest} presented a split-attention block and stacked several such blocks in ResNet style \cite{He_2016_CVPR} to create a split-attention network, named ResNeSt. ResNeSt is able to capture cross-channel feature correlations by combining channel-wise attention with a multi-path network layout. Extensive experiments demonstrate that it universally improves the learned feature representations and boosts performance across numerous vision tasks. Therefore, this work utilizes ResNeSt-50 \cite{zhang2020resnest} as the backbone to extract highly discriminative multi-scale features in the image encoder. As shown in Fig. \ref{fig:framework}, ResNeSt-50 has five blocks that output multi-scale features with different numbers of channels. To relieve the computational burden, these are compressed to 32 channels using a convolutional layer with 32 $3 \times 3$ kernels. \textbf{Prior Encoder with Click-driven Attention:} Given a click guidance, a click image and a distance transform image are generated and treated as prior information following \cite{tang2020one}, as shown in Fig. \ref{fig:framework}. In \cite{tang2020one}, the prior information is integrated into the model by directly treating it as an input for feature extraction. We argue that the representation ability of the features extracted by the image encoder may be weakened by this strategy: the sizes and shapes of different lesions are highly diverse, yet the priors generated by this strategy take the same form for all of them. To avoid this, we separately build a prior encoder (PE) with click-driven attention, which learns lesion-specific attention matrices by effectively exploiting the click prior information.
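As a sketch of these prior channels (our reconstruction; the exact construction and normalization in \cite{tang2020one} may differ), for a click at pixel $\mathbf{c}$ the distance transform image assigns each pixel $\mathbf{p}$ the value
\begin{equation}
D(\mathbf{p}) = \min\bigl(\lVert \mathbf{p} - \mathbf{c} \rVert_2,\, d_{\max}\bigr),
\end{equation}
truncated at some maximum $d_{\max}$, while the click image is simply an indicator map of $\mathbf{c}$. Both channels are stacked with the CT image to form the 3-channel input of the prior encoder.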
With these lesion-specific attention matrices, the representation ability of the multi-scale features extracted by the image encoder is enhanced, improving the performance of our tasks. As shown in Fig. \ref{fig:framework}, the prior encoder takes as input the compressed multi-scale features and a 3-channel image (the original CT image, the click image, and the distance transform image), and outputs attention-enhanced multi-scale features. The prior encoder includes five atrous spatial pyramid pooling (ASPP) \cite{chen2018deeplab} based attention modules and a convolutional layer with 32 3$\times$3 kernels and a stride of 2. The detailed structure of the ASPP-based attention module is shown in the purple box in Fig. \ref{fig:framework}, where 5 side outputs (the pink solid arrows) are added to introduce the deep mask supervision used to learn the attention matrices. \textbf{Decoder with Dual-path Connection:} It is known that low-level scale features focus on fine-grained lesion parts (\textit{e}.\textit{g}., edges) but lack global contextual information, while high-level scale features can segment entire lesion regions coarsely, at the cost of losing some detailed information. With this inspiration, and unlike UNet \cite{ronneberger2015u}, whose decoder gradually combines the features at the current scale only with those of the neighbouring higher-level scale, we build a new decoder that aggregates the attention-enhanced multi-scale features more comprehensively. Specifically, the features at each scale interact in the decoder with those at all lower and higher scales, which is accomplished by dual-path connections (\textit{i}.\textit{e}., a top-down connection and a bottom-up connection). The top-down connection (T2D) applies bilinear interpolation to the higher-level scale features for up-sampling, followed by a convolutional layer with 32 3$\times$3 kernels for smoothing. The bottom-up connection (B2U) performs a convolution with 32 3$\times$3 kernels and a large stride for down-sampling. The current scale features are then concatenated in the channel dimension with all up-sampled and down-sampled features from the other scales, so that the concatenated features represent both the global context and the local details of the lesion. The concatenated features can be used directly for lesion segmentation or RECIST diameter prediction with a convolutional layer of 1 or 4 3$\times$3 kernels, respectively. Before that, to further improve the feature representations, we build a scale-aware attention module (SA) based on the channel attention mechanism of DANet \cite{fu2019dual}, which selectively emphasizes interdependent channel features by integrating associated features among all feature channels (restated below). The SA structure is shown in the red box of Fig. \ref{fig:framework}. Different lesions have different scales, and SA adaptively selects suitable scale (channel) features for each of them, improving the accuracy of our tasks. To obtain a full-size prediction, a deconvolutional layer with 32 4$\times$4 kernels and a stride of 2 is attached to the last SA. Also, 6 and 3 side outputs are added to the decoder to introduce the deep mask supervision (the pink solid arrows) and the deep diameter supervision (the pink dotted arrows), respectively. The deep diameter supervision is only used for the high-resolution side outputs, because high-quality RECIST diameter prediction requires large spatial extent and detailed information.
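For reference, the channel attention underlying SA, as formulated in DANet \cite{fu2019dual} (we restate it schematically here), reshapes the feature map into $C$ channel vectors $\mathbf{A}_i$ and computes
\begin{equation}
x_{ji} = \frac{\exp(\mathbf{A}_i \cdot \mathbf{A}_j)}{\sum_{i=1}^{C}\exp(\mathbf{A}_i \cdot \mathbf{A}_j)}, \qquad
\mathbf{E}_j = \beta \sum_{i=1}^{C} x_{ji}\,\mathbf{A}_i + \mathbf{A}_j,
\end{equation}
where $x_{ji}$ measures the impact of the $i$th channel on the $j$th channel and $\beta$ is a learnable scalar initialized to zero, so that the attended features $\mathbf{E}_j$ gradually blend in cross-channel context during training.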
\textbf{Model Optimization:} Following \cite{tang2018semi,tang2020one}, we convert the RECIST diameter prediction problem into a keypoint regression problem: the model predicts four keypoint heatmaps to locate the four endpoints of the RECIST diameters. For both tasks, a mean squared error loss ($l_{mse}$) is used to compute the errors between predictions and supervisions. As a pixel-wise loss, it is affected by the imbalance between foreground and background pixels, and lesion and non-lesion regions are highly imbalanced at stage 1 of our framework. To deal with this problem, an additional IOU loss \cite{rahman2016optimizing} ($l_{iou}$) is introduced for the lesion segmentation task, which accounts for the global structure of lesions instead of every single pixel. As described above, 11 side outputs with deep mask supervision and 3 side outputs with deep diameter supervision are used in PDNet. Therefore, the loss is $l_{seg}=\sum_{i=1}^{11}{[l^i_{mse}+l^i_{iou}]}$ for lesion segmentation and $l_{dp}=\sum_{i=1}^{3}{l^i_{mse}}$ for RECIST diameter prediction. The final loss is $l=\lambda l_{seg}+(1-\lambda) l_{dp}$, where $\lambda$ is set to 0.01 to balance the magnitudes of the two losses. The two PDNet models used in the two stages are trained separately. For the segmentation task, no manual lesion masks are available in DeepLesion. Therefore, we first construct an ellipse from each RECIST annotation following \cite{tang2019uldor}. The morphological snake (MS) algorithm \cite{marquez2013morphological} is then used to refine the ellipse into a good-quality pseudo mask, which serves as the mask supervision. For the diameter supervision, we generate four 2D Gaussian heatmaps with a standard deviation of $\sigma$, centred at the four endpoints of each RECIST annotation (see the formula below). We set $\sigma=3$ at stage 1 and $\sigma=7$ at stage 2.
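Concretely, the heatmap for an endpoint at $(x_0, y_0)$ takes the standard unnormalized Gaussian form (our reconstruction; the original implementation may scale it differently):
\begin{equation}
H(x, y) = \exp\left(-\frac{(x-x_0)^2 + (y-y_0)^2}{2\sigma^2}\right),
\end{equation}
so that the regression target peaks at the endpoint and decays smoothly around it.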
We also apply an iterative refinement strategy. When training is done, we run the model over all training data to obtain lesion segmentation results, and then use the MS algorithm to refine them. Given an ellipse and a refined segmentation result, we update the pseudo mask by setting their intersection as foreground, their difference as uncertain regions that are ignored in the loss computation during training, and the rest as background. The new pseudo masks are used to retrain the models; the final models are obtained after three training iterations. \section{Experiments} \textbf{Datasets and Evaluation Criteria:} The DeepLesion dataset \cite{yan2018deeplesion} contains $32,735$ CT lesion images with RECIST diameter annotations from $10,594$ studies of $4,459$ patients. Various lesions throughout the whole body are included, such as lung nodules, bone lesions, liver tumors, and enlarged lymph nodes. Following \cite{cai2018accurate,tang2020one}, $1000$ lesion images from 500 patients with manual segmentations serve as a test set. The data of the remaining patients are used for training. An external test set with 1350 lesions from 900 patients is built for external validation by collecting lung, liver, pancreas, and kidney tumors and lymph nodes from multiple public datasets, specifically Decathlon-Lung \cite{simpson2019large} (50), LIDC \cite{armato2011lung} (200), Decathlon-HepaticVessel \cite{simpson2019large} (200), Decathlon-Pancreas \cite{simpson2019large} (200), KiTS \cite{heller2019kits19} (150), and NIH-Lymph Node \cite{roth2014new} (100). Each lesion has a 3D mask. To make this set suitable for evaluation, we select for each lesion the axial slice where the lesion has the largest spatial extent based on its 3D mask. The long and short diameters calculated from the 2D lesion mask of the selected slice are treated as the ground-truth RECIST diameters. We use the same criteria as in \cite{tang2020one} to compute the quantitative results. Pixel-wise precision, recall, and the Dice coefficient (Dice) are used for lesion segmentation. The mean and standard deviation of the differences between the diameter lengths (mm) of the predictions and the manual annotations are used for RECIST diameter prediction. \textbf{Implementation Details:} PDNet is implemented in PyTorch. The image encoder is initialized with ImageNet \cite{DengDSLL009} pre-trained weights. At both stages, we train PDNet using the Adam optimizer \cite{kingma2014adam} with an initial learning rate of 0.001 for 120 epochs, decaying it by a factor of 0.1 after 60 and 90 epochs. During training, all CT images are first resized to 512$\times$512. The input images are then generated by randomly rotating by $\theta \in [-10^{\circ}, 10^{\circ}]$ and cropping a square sub-image, whose size is $s \in [480, 512]$ at stage 1, and 1.5 to 3.5 times the lesion's long side with random offsets at stage 2. They are resized to 512$\times$512 and 256$\times$256 for the two stages, respectively. For testing, input images are generated by resizing to 512$\times$512 at stage 1, or by cropping a square sub-image whose size is 2.5 times the long side of the lesion segmentation result produced by the first PDNet model at stage 2. To mimic the clicking behavior of a radiologist, a point is randomly selected from a region obtained by eroding the ellipse to half of its size. Beyond requiring a click, our method can also cooperate with lesion detection and tracking techniques \cite{tang2019uldor,yan2019mulan,yan2020learning,cai2020deep} to perform fully automatic lesion segmentation and RECIST diameter prediction. \begin{figure*}[t!] \centering \begin{minipage}[b]{\linewidth} \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=0.99\linewidth]{result.png} \\ \end{minipage} \begin{minipage}[b]{0.15\linewidth} \centering \centerline{(a)}\medskip \end{minipage} \begin{minipage}[b]{0.13\linewidth} \centering \centerline{(b)}\medskip \end{minipage} \begin{minipage}[b]{0.13\linewidth} \centering \centerline{(c)}\medskip \end{minipage} \begin{minipage}[b]{0.27\linewidth} \centering \centerline{(d)}\medskip \end{minipage} \begin{minipage}[b]{0.28\linewidth} \centering \centerline{(e)}\medskip \end{minipage} \end{minipage} \caption{Visual examples of results on the DeepLesion test set (the first three rows) and the external test set (the last two rows), where the pink and green curves/crosses are the manual annotations and automatic results. Given a CT image (a) and a click guidance (red spot), the $1^{st}$ PDNet produces an initial lesion segmentation result (c) at stage 1, based on which a LOI (b) is extracted and taken as input of stage 2. The final results of lesion segmentation (left) and RECIST diameter prediction (right) are obtained by the $2^{nd}$ nnUNet (d) and $2^{nd}$ PDNet (e). Best viewed in color.} \label{fig:result} \end{figure*} \textbf{Experimental Results:} As a powerful segmentation framework, nnUNet \cite{isensee2018nnu}, built on UNet \cite{ronneberger2015u}, has been used successfully in many medical image segmentation tasks and therefore serves as a strong baseline for this task.
We train an nnUNet model for each stage, taking the 3-channel image as input and using the same settings as PDNet. At stage 1, nnUNet produces poor segmentation performance, \textit{e}.\textit{g}., a Dice score of about 0.857 on the DeepLesion test set, suggesting that the LOIs extracted from it are not good enough to serve as the input of stage 2. Therefore, the $2^{nd}$ nnUNet takes as input the LOIs extracted by the $1^{st}$ PDNet, achieving a Dice of 0.911. Fig. \ref{fig:result} shows five visual examples of the results produced by nnUNet and PDNet. We can see that \textbf{1)} the $1^{st}$ PDNet can segment the lesion region (Fig. \ref{fig:result}(c)), even when lesions are small (the $4^{th}$ row), heterogeneous (the $2^{nd}$ row), have blurry boundaries (the $5^{th}$ row), or have irregular shapes (the $3^{rd}$ row). This indicates that the LOIs can be extracted reliably at stage 1 (Fig. \ref{fig:result}(b)). \textbf{2)} The lesion segmentation results are improved significantly by the $2^{nd}$ PDNet (Fig. \ref{fig:result}(e)), while some of them become worse when using the $2^{nd}$ nnUNet (\textit{e}.\textit{g}., the $1^{st}$ and $3^{rd}$ rows in Fig. \ref{fig:result}(d)). \textbf{3)} The RECIST diameters predicted by PDNet are much closer to the references than those of nnUNet. These qualitative results validate that the proposed framework can segment lesions and predict their RECIST diameters reliably using only a click guidance. It may still struggle when lesions have highly blurry boundaries or irregular shapes (rows 3 and 5), in which case all methods fail to segment them well. \begin{table}[t!] \begin{center} \caption{Results of lesion segmentation and RECIST diameter prediction on two test sets. The mean and standard deviation of all metrics are reported.} \label{tab:result} { \scriptsize \begin{tabular}{|@{}*{1}{m{2.5cm}<{\centering}@{}}|@{}*{1}{m{1.9cm}<{\centering}@{}|@{}}*{1}{m{1.9cm}<{\centering}@{}|@{}}*{1}{m{1.9cm}<{\centering}@{}|@{}}*{1}{m{1.9cm}<{\centering}@{}|@{}}*{1}{m{1.9cm}<{\centering}@{}|@{}}} \hline & \multicolumn{3}{c|}{Lesion segmentation} & \multicolumn{2}{c|}{RECIST diameter prediction} \\ \cline{2-6} \multirow{-2}{*}{Method} & Precision & Recall & Dice & Long axis & Short axis \\ \hline \multicolumn{6}{|c|}{DeepLesion test set} \\ \hline Cai \textit{et al}. \cite{cai2018accurate} & 0.893$\pm$0.111 & 0.933$\pm$0.095 & 0.906$\pm$0.089 & - & - \\ Tang \textit{et al}. \cite{tang2018semi} & - & - & - & 1.893$\pm$2.185 & 1.614$\pm$1.874 \\ Tang \textit{et al}.
\cite{tang2020one} & 0.883$\pm$0.057 & \textbf{0.947$\pm$0.074} & 0.912$\pm$0.039 & 1.747$\pm$1.983 & 1.555$\pm$1.808 \\ nnUNet \cite{isensee2018nnu} & \textbf{0.977$\pm$0.033} & 0.852$\pm$0.086 & 0.907$\pm$0.050 & 2.108$\pm$1.997 & 1.839$\pm$1.733 \\ PDNet & 0.961$\pm$0.044 & 0.898$\pm$0.077 & \textbf{0.924$\pm$0.045} & \textbf{1.733$\pm$1.470} & \textbf{1.524$\pm$1.374} \\ \hline \multicolumn{6}{|c|}{External test set} \\ \hline nnUNet \cite{isensee2018nnu} & \textbf{0.946$\pm$0.062} & 0.815$\pm$0.099 & 0.870$\pm$0.054 & 2.334$\pm$1.906 & 1.985$\pm$1.644 \\ PDNet & 0.927$\pm$0.074 & \textbf{0.857$\pm$0.093} & \textbf{0.885$\pm$0.049} & \textbf{2.174$\pm$1.437} & \textbf{1.829$\pm$1.339} \\ \hline \end{tabular} } \end{center} \end{table} \begin{table}[!t] \begin{center} { \caption{Category-wise results in terms of segmentation Dice and the prediction error of diameter lengths on the external test set.} \label{tab:category-result} \scriptsize \begin{tabu} to 0.99\textwidth {| X[c] | X[c] | X[c] | X[c] | X[c] | X[c] |} \hline Method & Lung & Liver & Pancreas & Kidney & Lymph node \\ \hline \multicolumn{6}{|c|}{Lesion segmentation (Dice)} \\ \hline nnUNet \cite{isensee2018nnu} & 0.853$\pm$0.054 & 0.876$\pm$0.057 & 0.877$\pm$0.055 & 0.890$\pm$0.057 & 0.865$\pm$0.050 \\ PDNet & 0.876$\pm$0.046 & 0.893$\pm$0.051 & 0.886$\pm$0.050 & 0.911$\pm$0.050 & 0.876$\pm$0.045 \\ \hline \multicolumn{6}{|c|}{RECIST diameter prediction (long axis)} \\ \hline nnUNet \cite{isensee2018nnu} & 2.396$\pm$2.004 & 2.862$\pm$2.090 & 2.655$\pm$2.048 & 2.493$\pm$1.963 & 1.958$\pm$1.639 \\ PDNet & 2.435$\pm$1.461 & 2.378$\pm$1.463 & 2.220$\pm$1.536 & 2.603$\pm$1.533 & 1.897$\pm$1.293 \\ \hline \multicolumn{6}{|c|}{RECIST diameter prediction (short axis)} \\ \hline nnUNet \cite{isensee2018nnu} & 2.223$\pm$1.404 & 2.383$\pm$1.808 & 2.242$\pm$1.637 & 2.342$\pm$1.854 & 1.712$\pm$1.440 \\ PDNet & 2.243$\pm$1.333 & 2.168$\pm$1.405 & 1.977$\pm$1.359 & 2.362$\pm$1.488 & 1.486$\pm$1.174 \\ \hline \end{tabu} } \end{center} \end{table} Table \ref{tab:result} lists the quantitative results of the different methods on the two test sets. It can be seen that \textbf{1)} compared to the best previous work \cite{tang2020one}, PDNet boosts the Dice score by a large margin of 1.2\% (from 0.912 to 0.924) and also achieves smaller diameter errors on the DeepLesion test set. This means that PDNet can simultaneously segment lesions accurately and produce reliable RECIST diameters close to the radiologists' manual annotations. \textbf{2)} Compared to the strong baseline nnUNet, PDNet achieves much better results on both test sets. This is because PDNet is able to extract more comprehensive multi-scale features to better represent the appearances of different kinds of lesions. \textbf{3)} Compared to the DeepLesion test set, the performance drops for both nnUNet and PDNet on the external test set, \textit{e}.\textit{g}., the Dice score of PDNet decreases from 0.924 to 0.885. In the external test set, some lesion masks are not well annotated, and thus the generated ground-truth RECIST diameters are also affected. The $4^{th}$ row of Fig. \ref{fig:result} shows an unsatisfactory annotation, where the manual annotation is larger than the lesion's actual size. Meanwhile, the segmentation results produced by PDNet are visually better aligned with the lesion boundaries. Table \ref{tab:category-result} lists the category-wise results on the external test set.
PDNet achieves better performance in terms of all metrics and categories except the RECIST diameter prediction on lung and kidney tumors. After investigation, a potential reason is that a portion of lung and kidney tumors have highly irregular shapes, whose diameters generated from manual masks are very likely to be large, and nnUNet tends to predict larger diameters than PDNet in these cases. These results demonstrate the effectiveness and robustness of our method. \textbf{Ablation Studies:} To investigate the contributions of PDNet's components, \textit{i}.\textit{e}., the prior encoder (PE), top-down connection (T2D), bottom-up connection (B2U), and scale-aware attention module (SA), we configure different models by sequentially adding them to the base model, which comprises the image encoder with the 3-channel image as input and a UNet-style decoder. Table \ref{tab:ablation} presents their quantitative comparison. As can be seen, \textbf{1)} each added component improves the performance at both stages, demonstrating that the proposed strategies contribute to learning more comprehensive features for our tasks. \textbf{2)} The largest improvement is brought by introducing PE, especially at stage 1, demonstrating that PE can effectively exploit the click prior information to learn lesion-specific attention matrices, which substantially enhance the extracted multi-scale features. \begin{table}[!t] \begin{center} { \caption{Results of different settings of our method in terms of Dice and the prediction error of diameter lengths on the DeepLesion test set.} \label{tab:ablation} \scriptsize \begin{tabu} to 0.99\textwidth {| X[0.3c] | X[0.3c] | X[0.3c] | X[0.3c] | X[0.3c] | X[c] | X[c] | X[c] | X[c] |} \hline \multicolumn{5}{|c|}{Settings} & \multicolumn{1}{c|}{Stage 1} & \multicolumn{3}{c|}{Stage 2} \\ \cline{1-9} \multirow{6}{*}{\rotatebox[origin=c]{270}{Base model}} & PE & T2D & B2U & SA & Dice & Dice & Long axis & Short axis\\ \cline{2-9} & & & & & 0.871$\pm$0.123 & 0.909$\pm$0.068 & 1.961$\pm$2.278 & 1.704$\pm$1.948 \\ & \checkmark & & & & 0.890$\pm$0.089 & 0.915$\pm$0.055 & 1.861$\pm$1.934 & 1.617$\pm$1.684 \\ & \checkmark & \checkmark & & & 0.900$\pm$0.067 & 0.919$\pm$0.054 & 1.809$\pm$1.731 & 1.577$\pm$1.508 \\ & \checkmark & \checkmark & \checkmark & & 0.905$\pm$0.070 & 0.921$\pm$0.050 & 1.758$\pm$1.696 & 1.544$\pm$1.470 \\ & \checkmark & \checkmark & \checkmark & \checkmark & 0.911$\pm$0.060 & 0.924$\pm$0.045 & 1.733$\pm$1.470 & 1.524$\pm$1.374 \\ \hline \end{tabu} } \end{center} \end{table} \section{Conclusions} This paper proposes a novel deep neural network architecture, the prior-guided dual-path network (PDNet), for accurate lesion segmentation and RECIST diameter prediction. It works in a two-stage manner. Given very simple human guidance (a single click), an LOI is extracted precisely by segmentation at stage 1, and its segmentation mask and RECIST diameters are predicted accurately at stage 2. As such, PDNet offers radiologists a useful tool for obtaining reliable lesion size measurements (segmentation and RECIST diameters) with greatly reduced time and labor, and can potentially provide high clinical value. \bibliographystyle{ieeetr}
\section{Introduction} Path planning is an optimization problem with one or multiple objectives that aims to find a desirable path between two poses. Autonomous robots rely on various path planning algorithms to meet specific performance metrics \cite{gonzalez2016review}. In robotics and motion planning, benchmarking and comparison between algorithms are key to the experimental evaluation of newly proposed methods. Papers often report performance scores such as the length or curvature of the path, but also increasingly complex metrics such as execution time and memory consumption. However, as the diversity of algorithms expands, especially with advances in machine learning (ML), benchmarking them efficiently becomes challenging. To address this issue, we present a unified framework that supports the benchmarking and development of classical and learned planning algorithms. In this paper we introduce PathBench, a motion planning platform that can be used to develop, assess, compare, and visualize the performance and behaviour of path planning algorithms (Fig. \ref{fig: sim}). PathBench has three key features. (1) It supports both 2D and 3D classical and learned algorithms. Existing machine learning based algorithms, such as value iteration networks (VIN) \cite{tamar2016value}, gated path planning networks (GPPN) \cite{lee2018gated}, motion planning networks (MPNet) \cite{qureshi2019motion}, as well as the Online LSTM~\cite{nicola2018lstm} and CAE-LSTM~\cite{inoue2019robot} methods, are incorporated into PathBench. PathBench has a structured environment to facilitate the easy development and integration of new classical and ML-based algorithms. (2) PathBench's benchmarking features allow evaluation against the suite of included path planning algorithms, both classical algorithms and machine-learned models, with standardized metrics and environments. (3) PathBench provides a ROS (Robot Operating System) real-time extension for interacting with a real-world robot. Examples are provided for these features. \begin{figure}[t] \centering \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{images/2dastar.png} \caption{\scriptsize A*} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{images/2drrtc.png} \caption{\scriptsize RRT-Connect} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{images/fpo.png} \caption{\scriptsize GPPN } \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{images/2dwpn1.png} \caption{\scriptsize {WPN}} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{images/3dastar.png} \caption{\scriptsize A* } \vspace{0.5pt} \end{subfigure} \hfill \begin{subfigure}[b]{0.31\linewidth} \includegraphics[width=\linewidth]{images/3drrtc.png} \caption{\scriptsize RRT-Connect} \end{subfigure} \vspace{1 mm} \caption{The results of different classical and learned planners in PathBench. The red entity is the agent, light green entities are traces to the goal, black/light gray entities are obstacles and everything else is custom display information (e.g. in (a) and (d), dark gray represents the search space)} \label{fig: sim} \vspace{- 3mm} \end{figure} \section{Related Work} In this section, classical and learned planning algorithms and existing benchmarking frameworks are reviewed briefly.
\subsection{Classical and Learned Planning Algorithms} There are four general categories of planning algorithms: graph search, sampling-based, sensor-based, and numerical optimization \cite{gonzalez2016review}. Graph search algorithms typically operate on grid and lattice maps, representing discretized state spaces. Popular examples of such algorithms are Dijkstra \cite{choset2005principles}, A* \cite{duchovn2014path}, and wavefront \cite{luo2014effective}. Rapidly-exploring random trees (RRT) \cite{lavalle1998rapidly} and probabilistic roadmaps (PRM) \cite{kavraki1994probabilistic} are examples of sampling-based planning algorithms. These algorithms randomly sample the configuration space or the state space to create a path. In high-dimensional planning problems, sampling-based algorithms are efficient compared with graph-based methods in terms of computational resources, at the cost of being non-optimal. In contrast to the sampling- and graph-based methods, sensor-based planning algorithms only plan for the current view~\cite{sensor1997, Paull_TMech_2013}, i.e.~local maps. Examples of such algorithms include Bug1 and Bug2~\cite{choset2005principles, rajko2001pursuit}. The fourth category, numerical optimization planners, operates by optimizing a cost function composed of one or multiple terms. The cost functions can include various constraints, such as kinematics \cite{ziegler2014making} and the smoothness of the trajectory \cite{dolgov2010path}. Recent deep learning methods, such as CoMPNet~\cite{CoMPNet} and VIN~\cite{tamar2016value}, are also considered numerical optimization planners. The availability of large-scale data and parallel processing systems has shifted the attention of researchers to learning-based planning algorithms~\cite{inoue2019robot,Chen2016Humanoids,gupta2017cognitive}. Some of these algorithms aim at improving specific parts of the classical algorithms. For instance, Qureshi {\it et al.} perform sampling on particular regions of the configuration space as opposed to the whole space~\cite{qureshi2018deeply}. Using a similar approach, Chamzas {\it et al.} reduce the computational complexity of the classical planning algorithms \cite{chamzas2019using}. Other planning algorithms generate full paths via neural networks. Motion planning networks (MPNet) generate a path from start to goal via a trained neural network which takes a point cloud map as input~\cite{qureshi2018motion}. Recent versions of MPNet account for kinematic constraints as well \cite{CoMPNet,qureshi2019motion}. Various architectures and machine learning methods are being used in planning, including recurrent neural networks (RNN) in OracleNet~\cite{bency2019neural}, 3D supervised imitation learning in TDPP-Net \cite{TDPPNet}, unsupervised generative adversarial networks (GANs) in \cite{mohammadi2018path} and \cite{choi2020pathgan}, and reinforcement learning (RL) strategies in value iteration networks (VIN)~\cite{tamar2016value}, gated path planning networks (GPPN) \cite{lee2018gated}, universal planning networks (UPN) \cite{srinivas2018universal}, guided policy search (GPS) \cite{Levine2013}, and learning-from-demonstration (LfD)~\cite{Abbeel2010}. With the continual advancement of machine and deep learning techniques and hardware capabilities, increased development of new learning-based path planning algorithms can be foreseen.
\subsection{Simulation and Benchmarking Platforms} Benchmarking of path planning algorithms is the accepted scientific approach to evaluation in the robotics community. Currently, there are a variety of standardized libraries relevant to path planning, such as \textit{ROS} \cite{Quigley09}, \textit{OpenRAVE} \cite{diankov2008openrave}, \textit{OMPL} \cite{sucan2012the_open_motion_planning_library}, \textit{MoveIt} \cite{moveit} (which has benchmarking capabilities \cite{moll2015benchmarking}), \textit{SBPL} \cite{plaku2007oops}, and \textit{OOPS\textsubscript{MP}} \cite{plaku2007oops}. These are briefly summarized below, before comparing them to our platform. {\it ROS.} The Robot Operating System (ROS) is a middleware which contains various planning algorithms and simulation environments (including 2D and 3D) for different types of robots: ground robots with different degrees-of-freedom constraints, flying robots (drones), and manipulator robots. ROS is the standard in robotics for simulation and development. {\it OpenRAVE.} The Open Robotics and Animation Virtual Environment (OpenRAVE) is an open-source cross-platform software architecture, targeted at real-world robots, which includes 3D simulation, visualization, planning, scripting, and control. Compared to ROS, it is focused on autonomous motion planning and high-level scripting rather than low-level control and message protocols. {\it OMPL.} The Open Motion Planning Library (OMPL) is a standalone library which focuses exclusively on motion planning. It is more lightweight than ROS and has reduced capabilities (there is no collision detection). The library of available path planners is limited to sampling-based planners such as RRT or PRM, but there is a variety of optimized implementations for each type of planner. OMPL has a benchmarking extension called Planner Arena, where a community-contributed database of benchmarking data allows algorithm comparisons to be visualized. {\it SBPL.} The Search-Based Planning Library (SBPL) is a small library of graph search implementations with 2D and 3D environments, but no benchmarking capability. {\it MoveIt.} MoveIt combines ROS and OMPL to create a high-level implementation for cleaner and faster development of new algorithms. It has more capabilities than ROS and OMPL and includes custom benchmarking techniques \cite{moll2015benchmarking}. {\it OOPS\textsubscript{MP}}. The Online, Open-source, Programming System for Motion Planning (OOPS\textsubscript{MP}) is an online platform for comparing sampling-based motion planners on a common set of 2D and 3D maps; it also provides implementations of common algorithms, with analysis and visualization tools for benchmarking. {\it PathBench.} Our implementation offers a more abstract view of the environment, naturally allowing for graph-based, sampling-based, and machine learning based approaches to path planning. It supports 2D and 3D environments. PathBench includes not only a simulation environment and benchmarking techniques, but also a generator for creating synthetic datasets for ML applications and an ML training pipeline for generic ML models. \begin{table}[t] \footnotesize \center \caption{Platform capabilities comparison. PathBench supports benchmarking of classical and learned planning algorithms.
} \vspace{1mm} \begin{tabular}{|c| c c | c c c |} \hline {\scriptsize \rotatebox[origin=c]{0}{Platform}} & {\scriptsize \rotatebox[origin=c]{60}{Visualization}} & {\scriptsize \rotatebox[origin=c]{60}{Benchmarking}} & {\scriptsize \rotatebox[origin=c]{60}{Sample-Based}} &{\scriptsize \rotatebox[origin=c]{60}{Graph-Based}} & {\scriptsize \rotatebox[origin=c]{60}{ML-Based}} \\ \hline {ROS} & \textcolor{green}{\checkmark} & \textcolor{red}{$\times$} & \textcolor{green}{\checkmark} & \textcolor{green}{\checkmark} & \textcolor{red}{$\times$} \\ \hline {OpenRAVE} & \textcolor{green}{\checkmark} & \textcolor{red}{$\times$} & \textcolor{green}{\checkmark} & \textcolor{red}{$\times$} & \textcolor{red}{$\times$} \\ \hline {OMPL} & \textcolor{green}{\checkmark} & \textcolor{green}{\checkmark} & \textcolor{green}{\checkmark} & \textcolor{red}{$\times$} & \textcolor{red}{$\times$} \\ \hline {MoveIt} & \textcolor{green}{\checkmark} & \textcolor{green}{\checkmark} & \textcolor{green}{\checkmark} & \textcolor{red}{$\times$} & \textcolor{red}{$\times$} \\ \hline {SBPL} & \textcolor{green}{\checkmark} & \textcolor{red}{$\times$} & \textcolor{red}{$\times$} & \textcolor{green}{\checkmark} & \textcolor{red}{$\times$} \\ \hline {OOPS\textsubscript{MP}} & \textcolor{green}{\checkmark} & \textcolor{green}{\checkmark} & \textcolor{green}{\checkmark} & \textcolor{red}{$\times$} & \textcolor{red}{$\times$} \\ \hline \textbf{PathBench} & \textcolor{green}{\checkmark} & \textcolor{green}{\checkmark} & \textcolor{green}{\checkmark} & \textcolor{green}{\checkmark} & \textcolor{green}{\checkmark} \\ \hline \end{tabular} \label{tab: plat_comparison} \vspace{-5 mm} \end{table} Further to these standard libraries, other works have sought to address the issue of benchmarking for motion planning. Sturtevant provides a standard test set of maps and suggests standardized metrics for grid-based planning in gaming environments~\cite{sturtevant2012benchmarks}. These maps have been ported into PathBench and are discussed further in Sec.~\ref{sec:maps}. Althoff {\it et al.} provide a collection of composable benchmarks for motion planning of cars on roads, allowing reproducible results~\cite{althoff2017commonroad}. The main advantages of PathBench over existing standardized libraries are its native support of machine learning path planning algorithms, and its simple, lightweight, and extensible design, which allows fast prototyping in a research environment. It provides a standardized set of maps and metrics, so that benchmarking of new and existing algorithms can be performed quickly. Moreover, we provide a clean API for the algorithms, which makes them portable to the standardized libraries. We also provide a {ROS} real-time extension which converts the internal map move actions into network messages (velocity control commands) using the {ROS} APIs. See Table~\ref{tab: plat_comparison} for a platform comparison. \section{PathBench Platform} An overview of the architecture of PathBench is shown in Fig. \ref{fig: sim_platform}. PathBench is composed of four main components, \emph{Simulator}, \emph{Generator}, \emph{Trainer}, and \emph{Analyzer}, joined by the \emph{infrastructure} section.
The infrastructure is responsible for linking all other components and provides general service libraries and utilities. The simulator is responsible for environment interactions and algorithm visualization. It provides custom collision detection systems and a graphics framework for rendering the internal state of the algorithms. The generator is responsible for generating and labelling the training data used to train the ML models. The trainer is a class wrapper over the third-party machine learning libraries. It provides a generic training pipeline based on the holdout method and standardized access to the training data. Finally, the analyzer manages the statistical measures used in the practical assessment of the algorithms. Custom metrics can be defined, as well as graphical displays for visual comparisons. PathBench has been written in Python, and uses PyTorch \cite{paszke2017automatic} for ML. \begin{figure}[t] \centering \includegraphics[scale=0.4]{images/sim_high_overview.png} \vspace{1mm} \caption{PathBench structure overview. Arrows represent information flow/usage ($A \xleftarrow{gets/uses} B$). The machine learning section is responsible for training dataset generation and model training. The Environment section controls the interaction between the agent and the map, and supplies graphical visualization. The \textit{ROS} section provides support for real-time interaction with a real physical robot. The Evaluation section provides benchmarking methods for algorithm assessment. For a detailed architecture, please refer to the PathBench website.} \label{fig: sim_platform} \vspace{-5 mm} \end{figure} \subsection{Simulator} The {simulator} is both a visualizer and an engine for developing algorithms (Fig. \ref{fig: sim}). It supports animations and custom map display components which render the {algorithm}'s internal data. The simulator has a {map} that contains different entities such as the {agent}, {goal} and {obstacle}s, and provides a clean interface that defines the movement and interaction between them. Therefore, a {map} can be extended to support various environments; however, each map has to implement its own physics engine or use a third-party one (e.g. the \textit{pymunk} physics engine or \textit{OpenAI Gym}). The current implementation supports three types of 2D/3D maps: {DenseMap}, {SparseMap} and {RosMap}, corresponding to a static grid map, a point cloud map, and a grid map with live updates, respectively. Additionally, the {simulator} provides animations that are achieved through key frames and synchronization primitives. The graphical framework used for the visualization of planners and the GUI is Panda3D \cite{panda}. Simulator configurations and visualization customizations can be controlled directly within the Panda3D GUI, see Fig.~\ref{fig: simulator3d}. \subsection{Generator} \label{sec: generator} The {generator} can execute four actions: (1) map generation, (2) map labelling, (3) map augmentation, and (4) map modification, each explained briefly below. {\it 1) Generation.} The generation procedure accepts as input different hyper-parameters, such as the type of generated maps, number of generated maps, number of dimensions, obstacle fill rate range, number of obstacles range, minimum room size range and maximum room size range. Currently, the generator can produce four types of maps: uniform random fill map, block map, house map and point cloud map (see Fig.
\ref{fig:3maps}), and it can be extended to support other synthetic maps such as mazes and caves generated using cellular automata. All generated maps are placed into a directory in both .pickle and JSON formats. {\it 2) Labelling.} The labelling procedure takes a map and converts it into training data by picking only the specified features and labels. Features/labels include agent and goal positions, global map, local view, valid moves, etc. A* is used as the ground truth for feature/label generation. All features/labels can be saved as a variable-length sequence (needed for LSTMs) or a single global input (needed for auto-encoders). {\it 3) Augmentation.} The augmentation procedure takes an existing training data file and augments it with the specified extra features and labels. It removes the need to re-generate a whole training set. {\it 4) Modification.} A custom lambda function, which takes a {map} as input and returns another {map}, can be defined to modify the underlying structure of the map (e.g. modify the agent position or the goal position, create doors, etc.). \begin{figure}[t] \centering \includegraphics[scale=0.2]{images/simulator3d.png} \vspace{1mm} \caption{The simulator GUI, where path planners and environments are configured and the visualization is customized interactively. The simulator configuration window is used to set up path planning sessions, and the view editor adjusts the appearance of the visualized environment.} \label{fig: simulator3d} \vspace{-5 mm} \end{figure} \subsection{Trainer} The training pipeline is composed of: (1) data pre-processing, (2) data splitting, (3) training, (4) evaluation, (5) results display, and (6) pipeline end, each explained briefly below. {\it 1) Data Pre-processing.} Data is loaded from the specified training sets, and only the features and labels used by the model are picked from the training set and converted to a PyTorch {dataset}. In total, there can be four datasets: one feature sequence, one single feature tensor, one label sequence, and one single label tensor. Sequential data is wrapped into a {PackedDataset} which sorts the input in reverse order of sequence length (max length first, min length last). {\it 2) Data Splitting.} The pre-processed data is shuffled and split into three categories: training, validation and testing (usually 60\%, 20\%, 20\%) according to the holdout method. The {CombinedSubsets} object is used to couple the feature dataset and label dataset of the same category into a single dataset. Then, all data is wrapped into its {DataLoader} object with the same batch size as the training configuration (usually 50). {\it 3) Training.} The training process puts the model into training mode and feeds the training {DataLoader} and validation {DataLoader} through the model $n$ times, where $n$ is the number of specified epochs. The training mode allows the gradients to be updated, and at each new epoch the optimizer resets all gradients to 0. Each model has to implement a special \texttt{batch\_start} hook function which is called on each new batch. The \texttt{batch\_start} function is responsible for passing the data through the network and returning the loss result. The trainer takes the loss result and applies a backward pass by calling the \texttt{.backward()} method on the loss. Afterwards, the optimizer is stepped, and the weights of the model are updated.
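In code, this inner loop takes roughly the following shape. This is a minimal PyTorch sketch of the hook/backward/step pattern described above; the driver function and the exact \texttt{batch\_start} signature are our own illustration rather than PathBench's actual API.
\begin{verbatim}
# Minimal sketch of the training loop (PyTorch); batch_start is the
# per-batch hook described above, assumed to return the loss tensor.
def train(model, optimizer, train_loader, n_epochs):
    model.train()                    # training mode: gradients enabled
    for epoch in range(n_epochs):
        for features, labels in train_loader:
            optimizer.zero_grad()    # reset accumulated gradients
            loss = model.batch_start(features, labels)  # forward + loss
            loss.backward()          # backward pass
            optimizer.step()         # update the model weights
\end{verbatim}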
The statistics, such as the loss over time, for the training and validation sets are logged by two {EvaluationResults} objects (one for training and one for validation) which are returned to the pipeline. The {EvaluationResults} class contains several hook functions which are called at the appropriate times throughout the training process: \texttt{start}, \texttt{epoch\_start}, \texttt{epoch\_finish}, \texttt{batch\_start}, \texttt{batch\_finish}, \texttt{finish}. At the end of each epoch, the {EvaluationResults} object prints the latest results. {\it 4) Evaluation.} The evaluation process puts the model into evaluation mode and has a similar structure to the training process. The evaluation mode does not allow gradients to update. The testing dataset is passed only once through the model, and an {EvaluationResults} object containing the final model statistics is returned to the pipeline. {\it 5) Results Display.} This procedure displays the final results from the three {EvaluationResults} objects (training, validation, testing), and final statistics such as the model loss are printed. The training and validation loss logs are displayed as a \textit{matplotlib} \cite{Hunter:2007} figure. This method can be easily extended to provide more insight into the network architecture (e.g. the convolutional autoencoder (CAE) model displays a plot which contains the original image, the reconstructed version, the latent space snapshot and the resulting feature maps). {\it 6) Pipeline End.} At the end, the model is saved by serialising the model's \texttt{.state\_dict()}, the model configuration, the plots from the results display process, and the full printing log. \subsection{Analyzer} The {analyzer} is used to assess and compare the performance of the path planners. This is achieved by making use of the {BasicTesting} component. When a new session is run through the {AlgorithmRunner}, statistical measures depending on the type of testing can be collected by attaching a {BasicTesting} component. The {BasicTesting} component is also linked to the simulator to enable visualization testing. The key frame feature and synchronization variable are tied to the {BasicTesting} component, which allows the user to enhance each key frame and define custom behaviour. Each {algorithm} instance can create debugging views called {MapDisplay}s which can render custom information on the screen, such as the internal state of the {algorithm} (e.g. search space, total fringe, graph, map and its entities, etc.). In addition to manually running a {simulator} instance to assess an {algorithm}, the {analyzer} supports the following analysis procedures: \begin{itemize} \addtolength{\itemindent}{-.4cm} \item {Simple Analysis.} $n$ map samples are picked from each generated map type, and $m$ algorithms are assessed on them. The results are averaged and printed. Barplots and violinplots, for the metrics discussed in Sec.~\ref{sec:performance metrics}, are generated from the results of Simple Analysis. \item {Complex Analysis.} $n$ maps are selected (generated or hand-made), and $m$ algorithms are run on each map $x$ (usually 50) times with random agent and goal positions. In the end, all $n \times x$ results are averaged and reported. As for Simple Analysis, barplots and violinplots for selected metrics can be generated from the results. \item {Training Dataset Analysis.} A training set analyzer procedure is provided to inspect the training datasets using the basic metrics (e.g.
Euclidean distance, success rate, map obstacle ratio, search space, total fringe, steps; see the project website for the full list of metrics and statistics). \end{itemize} \begin{figure}[t] \begin{minipage}[b]{0.32\columnwidth} \centering \includegraphics[width=.99\columnwidth]{images/urfmap.png}\\ \includegraphics[width=.95\columnwidth]{images/3durf.png} \\ (a) \end{minipage} \begin{minipage}[b]{0.32\columnwidth} \centering \includegraphics[width=.99\columnwidth]{images/blockmap.png}\\ \includegraphics[width=.99\columnwidth]{images/3dblock.png} \\ (b) \end{minipage} \begin{minipage}[b]{0.32\columnwidth} \centering \includegraphics[width=0.99\columnwidth]{images/housemap.png}\\ \includegraphics[width=.99\columnwidth]{images/3dhouse.png} \\ (c) \end{minipage} \vspace{1mm} \caption{The three types of generatable grid maps: (a) uniform random fill map ($64\times64$ 2D and $16\times16$ 3D dimensions, [0.1, 0.3] obstacle fill rate range), (b) block map ($64\times64$ 2D and $16\times16$ 3D dimensions, [0.1, 0.3] obstacle fill rate range, [1, 6] number of obstacles range), (c) house map ($64\times64$ 2D and $16\times16$ 3D dimensions, [8, 15] minimum room size range, [35, 45] maximum room size range). (Note: magenta is used as the goal colour for all 2D maps, as the green goal is difficult to spot.)} \label{fig:3maps} \vspace{-2 mm} \end{figure} \section{{Supported Path Planning Algorithms}} \label{sec:supported algorithms} With the advantage of supporting both classical and machine learning-based path planning algorithms, PathBench provides a lightweight framework where the development and evaluation of new algorithms can be conducted. The currently supported algorithms are categorized and introduced in the following, and additional algorithms can be added to PathBench easily. \subsection{{Classical Algorithms}} The classical path planning algorithms implemented in PathBench fall into four categories: 1) the A*, wavefront and Dijkstra planners in the graph-based category; 2) RRT, the simple probabilistic roadmap (sPRM) and variations such as RRT* and RRT-Connect in the sampling-based category; 3) Bug1 and Bug2~\cite{sensor1997} in the sensory-based category; and 4) the potential field algorithm~\cite{pot1992} in the numerical optimization category. The numerical optimization approach is also a major component of the machine learning methods introduced in the learned planning algorithms section that follows. Moreover, additional sampling-based algorithms from the Open Motion Planning Library (OMPL) have been added to improve the benchmarking capability of PathBench. The algorithms developed inside PathBench support step-by-step planning animation, a useful feature for debugging and educational purposes. \subsection{{Learned Planning Algorithms}}\label{sec:learnt-alg} Several machine learning-based path planning algorithms implemented in PathBench are explained below. {\it 1) Value Iteration Networks (VIN)~\cite{tamar2016value}.} VIN is a fully differentiable neural network with an embedded planning module that performs reinforcement learning (RL) for path planning. VIN improves on standard CNN-based networks that learn reactive policies by representing the classical value-iteration planning algorithm in the form of a CNN. The obtained NN model, VIN, provides useful predictions when computing paths.
{\it 2) Motion Planning Networks (MPNet)~\cite{qureshi2019motion}.} MPNet is a neural network-based algorithm that uses two neural networks to conduct path planning. The first model, the encoder network, embeds the obstacles' point cloud into a latent space. The second model, the planning network, learns to plan paths given the obstacle embedding and the start and goal positions of the robot agent. A hybrid path planning approach combining MPNet and RRT* was developed to improve the planning success rate. {\it 3) Gated Path Planning Networks (GPPN) \cite{lee2018gated}.} GPPN is a deep neural network-based path planning approach that improves on VIN. It shows success even in challenging 3D environments, where the planner is only provided with first-person RGB images. {\it 4) Online LSTM~\cite{nicola2018lstm}.} The Online LSTM (long short-term memory) utilizes an LSTM network to determine which action the agent should take next to reach the goal, given the current pose and measurements. The network takes the start, the goal, and the current range-bearing measurement as input. This algorithm is greedy; it works well on easy maps but gets stuck in local minima on complex maps. {\it 5) CAE-LSTM~\cite{inoue2019robot}.} CAE-LSTM adds a convolutional auto-encoder (CAE) component on top of the LSTM network of the Online LSTM \cite{nicola2018lstm} to improve path generation in complex environments and long corridors. The latent variable of the auto-encoder provides a compact representation of the map of the environment. This algorithm works well in complex maps, but deviates from the shortest path. {\it 6) Bagging LSTM~\cite{wpnconf}.} This algorithm uses an ensemble approach~\cite{dietterich2000ensemble} to get the best out of the Online LSTM and CAE-LSTM. {\it 7) Waypoint Planning Networks~(WPN)~\cite{wpnconf}.} WPN integrates three ML-based planning algorithms (Online LSTM, CAE-LSTM, and Bagging LSTM) with a waypoint module. WPN takes the map and the start and goal positions of the robot as network inputs. Unlike the Online LSTM, CAE-LSTM, and Bagging LSTM, which produce a full path, WPN only suggests waypoints towards the goal. \begin{figure}[t] \centering \includegraphics[width=.24\columnwidth]{images/city1.png} \includegraphics[width=.24\columnwidth]{images/vid1.png} \includegraphics[width=.24\columnwidth]{images/vid2.png} \vspace{2mm} \caption{External maps can be ported into PathBench for benchmarking with ease. The maps above are from video games and a real-world city \cite{sturtevant2012benchmarks}.} \label{fig:extmaps} \vspace{-2 mm} \end{figure} \section{{Supported Maps}} \label{sec:maps} The map is the environment in which simulation and benchmarking of algorithms are performed. A Map contains different entities such as the agent, goal and obstacles, and provides a clean interface that defines the movement and interaction between them. Therefore, a map can be extended to support various environments. The following 2D and 3D map types are currently supported in PathBench. \subsection{Synthetic Maps} As mentioned previously in the Generator section (Sec.~\ref{sec: generator}), four synthetic map types can be created and used inside PathBench. The simplest map type, the block map, contains a random number of randomly sized blocks that act as obstacles. Uniform random fill maps, in contrast, consist of single obstacles placed at random in the map's free space. The third map type, the house map, aims to mimic typical floorplans by placing obstacles in the form of randomly sized and partitioned walls.
Lastly, 3D point cloud maps that contain a set of obstacles in an unbounded 3D space can also be generated and used in PathBench. Point cloud maps are included to facilitate the development and support of algorithms that work exclusively with point clouds, such as MPNet \cite{qureshi2019motion}. Different map types are included in PathBench so that the map-type-specific performance of path planning algorithms can be analyzed further (see Fig.~\ref{fig:3maps}). \subsection{Real Maps} Real-world maps can be utilized inside PathBench with the RosMap class. RosMap extends 2D occupancy grid maps to integrate gmapping~\cite{gmapping} and other similar 2D SLAM algorithms by converting the SLAM output image into an internal map environment. The RosMap environment has support for live updates, meaning that algorithms can query an updated view by running a SLAM scan. The map uses simple callback functions to make SLAM update requests and to convert movement actions into network messages using the ROS publisher-subscriber communication system. \subsection{External Maps} External maps can be imported into PathBench to diversify the datasets. Houseexpo~\cite{houseexpo} is a large dataset of 2D floor plans built on the SUNCG dataset \cite{song2016ssc}. It contains 35,126 2D floor plans with 252,550 rooms in total and can be used for PathBench benchmarking. In addition, other video game and real-world datasets can also be converted easily for use in PathBench. 2D grid world and 3D voxel maps from video games, such as Warcraft III, Dragon Age and Warframe, and real-world 2D street maps from the OpenStreetMaps geo-spatial database have been imported into PathBench to demonstrate the ease of integrating external datasets \cite{sturtevant2012benchmarks,brewer2018voxels}, see Fig. \ref{fig:extmaps}. Benchmarking results on external maps are shown in Sec. \ref{sec:result}. \section{{Performance Metrics}} \label{sec:performance metrics} In order to evaluate and benchmark the performance of the various algorithms inside PathBench, several metrics are chosen: success rate, path length, distance left to goal upon failure, time, path deviation, search space, memory consumption, obstacle clearance and smoothness of trajectory. Algorithm selection can be aided by evaluating the benchmarked results of task-specific metrics. The following outlines the metrics and the rationale behind their selection. {\it 1) Success Rate (\%).} The rate of success in finding a path from start to goal demonstrates the reliability of the algorithm. {\it 2) Path Length (metres).} The total distance taken to reach the goal showcases the efficiency of the generated path. {\it 3) Distance Left To Goal (metres).} The Euclidean distance left from the agent to the goal in the case of an algorithm failure. This shows the extent of the planning failure. {\it 4) Time (seconds).} The total time taken to reach the goal. The time required for planning is an important factor for real-life robotics applications. {\it 5) Path Deviation (\%).} The path length difference when compared to the shortest possible path, generated by A*. This allows comparison to an ``optimal'' planner. {\it 6) Search Space (\%).} The amount of space that was explored and used to find the final path to the goal. {\it 7) Maximum Memory Consumption (MB).} The maximum amount of memory used during a path generation session. Memory usage can be a limiting factor in various robotics settings, making it a relevant benchmarking metric.
{\it 8) Obstacle Clearance (metres).} Obstacle clearance provides the mean distance of the agent from obstacles during traversal. {\it 9) Smoothness of Trajectory (degrees).} The average angle change between consecutive path segments shows how drastic and sudden the agent's changes of direction can be. Beyond the metrics above, additional metrics can be added to PathBench if required. Nowak {\it et al.} provide potential metrics that could be added, including orientation error, number of collisions, number of narrow passages traversed and number of parameters to tune \cite{nowak2010}. \section{Experimental Results} \label{sec:result} In this section, several experiments using PathBench with classical and learned planners on different maps are presented. \subsection{Algorithmic Benchmarking} The classical and learned algorithms currently supported by PathBench are benchmarked inside PathBench on different types of maps. All results are produced by PathBench on Ubuntu 18.04 with an Intel Core i5-6200U CPU and an Nvidia GeForce 940MX GPU. For training of the learned algorithms, three types of synthetic maps of size $64 \times 64$ pixels were procedurally generated: uniform random fill map, block map, and house map. Fig.~\ref{fig:3maps} shows samples of these maps. In these maps, start and goal points are chosen randomly. Evaluations are done on maps that have never been seen by the algorithms. \begin{table}[t] \centering \caption{Results of classical algorithms on 2D 64$\times$64 PathBench built-in maps, 512$\times$512 city maps, and video game maps with 800 to 1200 cells in dimension. Failed cases occur when there is no valid path to the given goal.} \vspace{1mm} \begin{tabular}{|c|c|c|c|c|c|} \hline \makecell {\textbf{\scriptsize map} \\ \textbf{\scriptsize type}} & \textbf{\scriptsize planner} & \makecell {\textbf{\scriptsize path} \\ \textbf{\scriptsize dev.}\scriptsize{(\%)}} & \makecell {\textbf{\scriptsize distance left} \\ (if failed, m)} & \makecell{\textbf{\scriptsize time} \\ (sec)} & \makecell {\textbf{\scriptsize success} \\ \textbf{\scriptsize rate} \scriptsize{(\%)} }\\ \hline \parbox[t]{2mm}{\multirow{7}{*}{\rotatebox[origin=c]{90}{PathBench Maps}}} &{\scriptsize A* \cite{duchovn2014path}} & 0.00 & 0.00 & 0.103 & 100.0 \\ \cline{2-6} &{\scriptsize Wavefront \cite{luo2014effective}} & 0.34 & 0.00 & 0.334 & 100.0 \\ \cline{2-6} &{\scriptsize Dijkstra \cite{choset2005principles}} & 0.00 & 0.00 & 0.578 & 100.0 \\ \cline{2-6} &{\scriptsize SPRM \cite{kavraki1994probabilistic}} & 30.24 & 1.87 & 0.596 & 95.0 \\ \cline{2-6} &{\scriptsize RRT~\cite{lavalle1998rapidly}} & 13.11 & 1.83 & 7.334 & 97.6 \\ \cline{2-6} &{\scriptsize RRT* \cite{rrtstar}} & 6.29 & 1.46 & 9.412 & 94.3 \\ \cline{2-6} &{\scriptsize RRT-Connect~\cite{rrtconnect}} & 17.77 & 0.18 & 0.137 & 99.5 \\ \hline \hline \parbox[t]{2mm}{\multirow{7}{*}{\rotatebox[origin=c]{90}{City Maps}}} &{\scriptsize A*} & 0.00 & 0.00 & 3.816 & 100.0 \\ \cline{2-6} &{\scriptsize Wavefront} & 1.08 & 0.00 & 8.468 & 100.0 \\ \cline{2-6} &{\scriptsize Dijkstra} & 0.00 & 0.00 & 9.928 & 100.0 \\ \cline{2-6} &{\scriptsize SPRM} & 123.64 & 13.68 & 5.377 & 93.6 \\ \cline{2-6} &{\scriptsize RRT} & 54.98 & 3.43 & 39.248 & 95.7 \\ \cline{2-6} &{\scriptsize RRT*} & 32.71 & 4.11 & 43.464 & 95.1 \\ \cline{2-6} &{\scriptsize RRT-Connect} & 65.08 & 5.35 & 3.489 & 96.6 \\ \hline \hline
\parbox[t]{2mm}{\multirow{7}{*}{\rotatebox[origin=c]{90}{Video Game Maps}}} &{\scriptsize A*} & 0.00 & 0.00 & 31.567 & 100.0 \\ \cline{2-6} &{\scriptsize Wavefront} & 1.65 & 0.00 & 43.517 & 100.0 \\ \cline{2-6} &{\scriptsize Dijkstra} & 0.00 & 0.00 & 42.366 & 100.0 \\ \cline{2-6} &{\scriptsize SPRM} & 287.21 & 0.00 & 44.498 & 100.0 \\ \cline{2-6} &{\scriptsize RRT} & 128.90 & 64.38 & 64.881 & 42.6 \\ \cline{2-6} &{\scriptsize RRT*} & 104.30 & 56.21 & 68.971 & 36.7 \\ \cline{2-6} &{\scriptsize RRT-Connect} & 112.90 & 25.92 & 30.84 & 95.3 \\ \hline \end{tabular} \label{tab:classicresult} \vspace{-2 mm} \end{table} \begin{table}[t] \centering \caption{Results of learned algorithms on the same 2D 64$\times$64 PathBench built-in maps, 512$\times$512 city maps, and video game maps as Table~\ref{tab:classicresult}.} \vspace{1mm} \begin{tabular}{|c|c|c|c|c|c|} \hline \makecell {\textbf{\scriptsize map} \\ \textbf{\scriptsize type}} & \textbf{\scriptsize planner} & \makecell {\textbf{\scriptsize path } \\ \textbf{\scriptsize dev.}\scriptsize{(\%)}} & \makecell {\textbf{\scriptsize distance left} \\ (if failed, m)} & \makecell{\textbf{\scriptsize time} \\ (sec)} & \makecell {\textbf{\scriptsize success} \\ \textbf{\scriptsize rate}\scriptsize{(\%)}}\\ \hline \parbox[t]{2mm}{\multirow{7}{*}{\rotatebox[origin=c]{90}{PathBench Maps}}} &{\scriptsize VIN \cite{tamar2016value}} & 76.31 & 21.80 & 0.583 & 28.7 \\ \cline{2-6} &{\scriptsize MPNet \cite{qureshi2019motion}} & 28.17 & 32.60 & 0.988 & 21.4 \\ \cline{2-6} &{\scriptsize GPPN \cite{lee2018gated}} & 5.81 & 26.11 & 5.813 & 35.7 \\ \cline{2-6} &{\scriptsize Online LSTM~\cite{nicola2018lstm}} & 0.45 & 7.65 & 0.172 & 67.6 \\ \cline{2-6} &{\scriptsize CAE-LSTM~\cite{inoue2019robot}} & 0.50 & 9.27 & 0.216 & 61.3 \\ \cline{2-6} &{\scriptsize Bagging LSTM \cite{wpnconf}} & 1.63 & 1.41 & 1.052 & 92.3 \\ \cline{2-6} &{\scriptsize WPN \cite{wpnconf}} & 1.86 & 0.00 & 0.617 & 100.0 \\ \hline \hline \parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{City Maps}}} &{\scriptsize VIN} & 100.00 & 184.21 & 32.612 & 0.0 \\ \cline{2-6} &{\scriptsize GPPN} & 100.00 & 184.21 & 57.633 & 0.0 \\ \cline{2-6} &{\scriptsize Online LSTM} & 1.05 & 87.51 & 3.328 & 16.7 \\ \cline{2-6} &{\scriptsize CAE-LSTM} & 4.49 & 91.20 & 4.241 & 10.0 \\ \cline{2-6} &{\scriptsize Bagging LSTM} & 31.67 & 45.06 & 20.268 & 43.3 \\ \cline{2-6} &{\scriptsize WPN} & 8.44 & 0.00 & 13.816 & 100.0 \\ \hline \hline \parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Video Game}}} &{\scriptsize VIN} & 100.00 & 276.30 & 42.19 & 0.0 \\ \cline{2-6} &{\scriptsize GPPN} & 100.00 & 276.30 & 65.877 & 0.0 \\ \cline{2-6} &{\scriptsize Online LSTM} & 0.00 & 217.10 & 16.380 & 8.3 \\ \cline{2-6} &{\scriptsize CAE-LSTM} & 3.05 & 199.60 & 27.330 & 5.3 \\ \cline{2-6} &{\scriptsize Bagging LSTM} & 0.41 & 155.89 & 123.901 & 20.6 \\ \cline{2-6} &{\scriptsize WPN} & 10.55 & 0.00 & 110.307 & 100.0 \\ \hline \end{tabular} \label{tab:learnedresult} \vspace{-2 mm} \end{table} \begin{figure}[t] \includegraphics[width=1.0\columnwidth]{images/plot.png} \vspace{-4mm} \caption{Graphical analysis of 2D benchmarking for classical and learned algorithms.} \label{fig:barvio} \vspace{-1 mm} \end{figure} \subsubsection{2D Synthetic Maps: Simple Analysis} To demonstrate the benchmarking ability of PathBench
and its support for machine learning algorithms, all the algorithms described in Sec.~\ref{sec:learnt-alg} are analyzed against classical path planning algorithms on $64 \times 64$ 2D PathBench maps. One thousand maps of each of the three types of PathBench maps were used. Table~\ref{tab:classicresult} and Table~\ref{tab:learnedresult} present detailed comparative results for the simple analysis of 3000 2D PathBench maps. Fig.~\ref{fig:barvio} displays some of the key results as bar and violin plots. \subsubsection{2D External Maps: Complex Analysis} Both classical and learned algorithms were also benchmarked using the analyzer's complex analysis tool, in order to demonstrate the framework's ability to evaluate algorithm performance on specific map types. The analysis was performed on $n=30$ external city maps from the OpenStreetMaps geo-spatial database \cite{sturtevant2012benchmarks}, with 10 random samples collected for averaging of results on each $512 \times 512$ map. Thirty video game maps with height and width varying from 800 to 1200 cells were benchmarked in a similar manner. The results of benchmarking on video game and city maps are also listed in Table~\ref{tab:classicresult} and Table~\ref{tab:learnedresult}. The use of external environments in this experiment demonstrates the capability of PathBench to incorporate additional datasets. \subsubsection{3D Maps: Simple Analysis} To demonstrate PathBench's support for 3D path planning, an analysis of path planning algorithms on 3D $28 \times 28 \times 28$ PathBench maps was conducted. The benchmarking results, averaging algorithm performance over 1000 maps of each PathBench map type (uniform random fill, block, and house), are shown in Table~\ref{tab:3dresults}. From these results, we can quickly assess some strengths and weaknesses of each planning approach. For example, the three graph-based algorithms all find a solution 100\% of the time, provided one exists. RRT* paths are always shorter than those of RRT, as expected, and RRT-Connect has a much higher success rate than any other sampling-based method, while being considerably faster. The number of samples taken is a parameter that can be modified easily to configure the behaviour of the sampling-based algorithms. Although A* generates the shortest path in both 2D and 3D planning scenarios, RRT-Connect plans significantly faster in 3D environments. Machine learning algorithms, on the other hand, experience lower success rates on all map types. VIN and GPPN have been shown not to scale well with increasing map size and could not successfully provide any paths on the city and video game datasets. WPN is an exception, with the ability to plan at a 100\% success rate on all map types used. However, machine learning algorithms' planning times are in general higher than those of classical approaches, especially as the map size increases. MPNet was only tested on PathBench maps, due to constraints of the implementation used: the open-source version of the network only allows encoding a limited number of obstacles. Performing this kind of simple and rapid analysis is trivial in PathBench. Furthermore, we note that the algorithms maintain the same relative behaviour across different environments.
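As an illustration of the kind of averaging a simple analysis performs, the snippet below aggregates two of the metrics from Sec.~\ref{sec:performance metrics} over a set of runs. The \texttt{RunResult} record and the exact path-deviation formula are our own simplifications, not PathBench's internal API.
\begin{verbatim}
# Illustrative aggregation of success rate and path deviation.
from dataclasses import dataclass
from statistics import mean

@dataclass
class RunResult:            # one planning session (hypothetical record)
    success: bool
    path_length: float      # metres
    optimal_length: float   # A* baseline, metres

def summarize(runs):
    ok = [r for r in runs if r.success]
    return {
        "success rate (%)": 100.0 * len(ok) / len(runs),
        "path deviation (%)": mean(
            100.0 * (r.path_length / r.optimal_length - 1.0)
            for r in ok),
    }

runs = [RunResult(True, 22.4, 20.7), RunResult(True, 20.7, 20.7),
        RunResult(False, 0.0, 18.9)]
print(summarize(runs))
\end{verbatim}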
\begin{table}[t] \centering \caption{Results of classical algorithms on 3D 28$\times$28$\times$28 PathBench built-in maps.} \vspace{1mm} \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{\scriptsize planner} & \makecell {\textbf{\scriptsize success} \\ \textbf{\scriptsize rate}{\scriptsize (\%)} } & \makecell {\textbf{\scriptsize path } \\ {\scriptsize \textbf{len.}(m)} } & \makecell {\textbf{\scriptsize path} \\ \textbf{\scriptsize dev. {\scriptsize(\%)}} }& \makecell{\textbf{\scriptsize time} \\ (sec)} & \makecell {\textbf{\scriptsize path smooth-} \\ \textbf{\scriptsize ness}{\scriptsize (deg.)}} \\ \hline {\scriptsize A*} & 100.0 & 20.69 & 0.00 & 0.475 & 0.28 \\ \hline {\scriptsize Wavefront} & 100.0 & 21.27 & 0.51 & 6.118 & 0.11 \\ \hline {\scriptsize Dijkstra} & 100.0 & 20.69 & 0.00 & 8.453 & 0.13 \\ \hline {\scriptsize SPRM} & 100.0 & 36.87 & 16.18 & 0.248 & 0.37 \\ \hline {\scriptsize RRT-Connect} & 99.7 & 38.22 & 17.56 & 0.097 & 0.41 \\ \hline \end{tabular} \label{tab:3dresults} \vspace{-2 mm} \end{table} \subsection{Real-world Robot Interfacing} PathBench natively interfaces with ROS and Gazebo to allow seamless path planning for simulated and real-world robotic applications. PathBench is able to visualize and plan in both fixed-map environments and exploration environments. The planning is done in PathBench in real time, and the control commands are sent to ROS to guide the robot. We also demonstrate the live-map capabilities of PathBench using an algorithm with exploration capabilities, WPN-view. The robot is able to plan into known space, as well as into the unknown environment, while the PathBench map is updated as it explores. This exploration is demonstrated in the supplementary video and on GitHub. \vspace{-1 mm} \section{Conclusion} \label{sec:conclusion} PathBench presents a significant advantage for developing and evaluating classical and learned motion planning algorithms, by providing a development environment and benchmarking tools with standard and customizable metrics. PathBench has been demonstrated across a wide range of algorithms and datasets. In the future, PathBench will be extended to allow benchmarking and training of additional learning-based algorithms, along with support for higher-dimensional planning with constraints. \section*{Acknowledgment} This work was partially funded by DRDC-IDEaS (CPCA-0126) and EPSRC (EP/P010040/1). We acknowledge the technical contributions made by Judicael E Clair, Danqing Hu, Radina Milenkova, Zeena Patel, Abel Shields, and John Yao to the programming of the project and to the production of Fig.~\ref{fig: sim}-e-f, Fig.~\ref{fig: simulator3d}, and the second row of Fig.~\ref{fig:3maps}. Notable programming contributions include support for rendering and 3D path planning environments, infrastructural changes for efficiency, and a renewed ROS interface. \bibliographystyle{IEEEtran}
{ "timestamp": "2021-05-06T02:05:24", "yymm": "2105", "arxiv_id": "2105.01777", "language": "en", "url": "https://arxiv.org/abs/2105.01777" }
\section{Introduction} A course on numerical methods for applied scientists and engineers is typically taught with standard predictor-corrector and multistep time-stepping methods applied to ODEs in one chapter, followed by spatial discretization operators for PDEs in another. In real-world applications, the discretization of a PDE model of a physical phenomenon consists of both spatial and temporal components. As a result, the order of convergence of the discretized PDE depends on both the cell width $\Delta x$ and the time step $\Delta t$. In past decades, when low-order finite difference and finite volume discretizations were common, the combined effect of the spatial and temporal discretization errors received little attention. Now that high-order discretizations are ubiquitous in PDE applications, the combined effect of $\Delta x$ and $\Delta t$ on the local truncation error has important consequences. After surveying standard numerical analysis textbooks, including~\citet{burden1985numerical}, \citet{chapra2010numerical}, \citet{cheney2012numerical}, \citet{iserles2009first}, and \citet{strikwerda2004finite}, we found this topic to be missing. Therefore, this paper investigates the simultaneous effects of $\Delta x$ and $\Delta t$ on the local truncation error of hyperbolic PDEs for several standard spatial and temporal discretizations. The scientific literature likewise offers very few references on this topic. One of the best available derivations we found is on a webpage~\citep{Langtangen15}. However, the author did not express the temporal and mixed derivatives of the dependent variable of the PDE as functions of quantities at the current time level, which are known a priori, and this is a key step in arriving at our final form of the local truncation error. Only two published papers on the order of convergence in space and time could be found. \citet{love2013convergence} demonstrate that the order of convergence for finite difference approximations of PDEs may be different for refinement only in time than for refinement in both space and time. They employ linear algebra, especially matrix analysis, to determine the error and convergence rates of linear PDEs with refinement only in time, and substantiate the results with numerical experiments. In contrast, we derive the truncation error of linear and non-linear hyperbolic PDEs from first principles, using theoretical analysis and symbolic algebra, and verify our results numerically using finite difference and finite volume methods, for refinement in both space and time, only in space, and only in time. Our approach is more general and straightforward, and applicable to any discretization. \citet{jeong2019verification} perform convergence tests of three parabolic PDEs---the heat equation, the Allen-Cahn equation, and the Cahn-Hilliard equation---by refining the spatial and temporal steps separately and together. They urge their readers to be cautious of the varying order of convergence, but do not perform any detailed examination of the underlying cause of this discrepancy. The analysis presented in our paper explains the order-of-convergence results in \citet{jeong2019verification}. There exist numerical methods which treat space and time together, such as the Lax-Wendroff and Cauchy-Kowalevski procedures. \citet{qiu2005discontinuous} apply a Lax-Wendroff time discretization procedure to the discontinuous Galerkin method for solving hyperbolic conservation laws.
Arbitrary derivatives (ADER) time-stepping methods involving space-time basis functions also belong to this category. \citet{normanhigh} develops a high-order WENO-limited finite-volume algorithm for modeling atmospheric flow using the ADER-differential transform time discretization. Models like MITgcm~\citep{marshall1997finite} use direct space-time methods for modeling non-linear advection and other physical phenomena in the ocean. Incremental remapping, cell-integrated, and flux-form semi-Lagrangian approaches are other instances where space and time considerations are intertwined. Even though our theory holds for Lax-Wendroff methods, we do not test it on more sophisticated methods that treat space and time together. Finally, our theory is applicable only when the global solution error at a time horizon has the same order of accuracy as the global truncation error. If the order of accuracy of the global solution error exceeds that of the global truncation error, superconvergence (or supraconvergence) is observed. Under such circumstances, consistency is not a necessary condition for convergence, e.g.~\citep{cockburn1997priori}. \citet{cao2018some} discuss some recent developments in the superconvergence of discontinuous Galerkin methods for time-dependent PDEs, while \citet{peixoto2016accuracy} mentions superconvergence effects in the accuracy analysis of mimetic finite volume operators on geodesic grids. Our theory considers neither superconvergence nor the order reduction arising from the application of (a) dissipative Riemann solvers to compute numerical fluxes, and (b) monotone slope-limiting strategies to ensure oscillation-free profiles. Both techniques are common practices in finite volume methods. \subsection{Notation} Consider a differential equation in abstract form \begin{equation} L(u) = f, \label{AbstractEquation} \end{equation} where $L(u)$ is a function of the dependent variable $u$ and its derivatives with respect to the independent variables, and the forcing term $f$ is only a function of the independent variables. We assume that $L$ includes the boundary condition. Discretizing~\eqref{AbstractEquation} results in the discrete difference equation \begin{equation} L_{\Delta}(u_{\Delta}) = f_{\Delta}, \label{DifferentialEquation} \end{equation} where the subscript $\Delta$ represents the set of discretization parameters for the spatial and temporal grids. The exact solution $u$ is to be distinguished from the numerical solution $u_{\Delta}$, which is a function of $\Delta$. In a finite difference approximation, $u_{\Delta}$ is computed at a set of grid points in space and time, and the abstract function $L_\Delta(u_\Delta)$ at any grid point in space and time typically consists of algebraic equations in $u_\Delta$ at that point and some neighboring ones. The local truncation error of the difference equation is the residual obtained when substituting the exact solution $u$ into the difference equation~\eqref{DifferentialEquation}, \begin{equation} \tau_\Delta = L_{\Delta}(u) - f_{\Delta}. \label{DifferenceEquation} \end{equation} A small value of $\tau_\Delta$ indicates that the difference equation closely resembles the differential equation, thereby implying proximity of $u_{\Delta}$ to $u$. A discretization method of a PDE is said to be consistent if the local truncation error $\tau_{\Delta} \to 0$ as $\Delta \to 0$.
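As a simple illustration of this notation (the example is ours), consider the constant-coefficient advection equation $u_t + a u_x = 0$ with $a > 0$, discretized with the forward Euler method in time and the first-order upwind difference in space. Writing $u_j^n$ for the exact solution at $(x_j, t^n)$, the difference operator applied to the exact solution is
\begin{equation*}
L_{\Delta}(u) = \frac{u_j^{n+1} - u_j^n}{\Delta t} + a \, \frac{u_j^n - u_{j-1}^n}{\Delta x},
\end{equation*}
and Taylor expanding about $(x_j, t^n)$ with $f_{\Delta} = 0$ and $u_{tt} = a^2 u_{xx}$ gives the local truncation error
\begin{equation*}
\tau_{\Delta} = \frac{\Delta t}{2} u_{tt} - \frac{a \Delta x}{2} u_{xx} + \mathcal{O}\left(\Delta t^2, \Delta x^2\right) = -\frac{a \Delta x}{2} \left(1 - \frac{a \Delta t}{\Delta x}\right) u_{xx} + \mathcal{O}\left(\Delta t^2, \Delta x^2\right).
\end{equation*}
The scheme is consistent since $\tau_{\Delta} \to 0$ as $\Delta \to 0$, and the leading term already couples $\Delta x$ and $\Delta t$, which is precisely the interplay studied in this paper.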
In the special case when $L_\Delta$ is linear, the error $e_\Delta = u - u_{\Delta}$ can be written in terms of the residual, since \begin{equation} L_{\Delta} e_{\Delta} = L_{\Delta}\left(u - u_{\Delta}\right) = L_{\Delta} u - L_{\Delta} u_{\Delta} = L_{\Delta} u - f_{\Delta} = \tau_{\Delta} \hspace{1mm} \text{ i.e. } \hspace{1mm} e_{\Delta} = L_{\Delta}^{-1} \tau_{\Delta}, \end{equation} which represents the relationship between the actual error and the local truncation error of the difference equation. Taking norms of both sides, \begin{equation} \|e_{\Delta}\| = \|L_{\Delta}^{-1} \tau_{\Delta}\| \le \|L_{\Delta}^{-1}\|\hspace{1mm}\|\tau_{\Delta}\|. \label{HalfOfLaxEquivalenceTheoremProof} \end{equation} The numerical solution of the PDE converges to its exact analytical counterpart if $\|e_{\Delta}\| \to 0$ as $\Delta \to 0$. The numerical scheme is stable if $\|L_{\Delta}^{-1}\|$ is bounded independently of $\Delta$. Therefore, if the numerical scheme is stable, then consistency implies convergence by~\eqref{HalfOfLaxEquivalenceTheoremProof}. This in fact constitutes half of the proof of the Lax Equivalence Theorem (\citet{lax1956survey}), which states that a consistent finite difference method for a well-posed linear initial value problem is convergent if and only if it is stable. \subsection{Outline of the Paper} Our analysis consists of determining the local truncation error of a set of ordinary differential equations (ODEs) and PDEs from first principles for a variety of spatial and temporal discretizations. Even though the derivation of the local truncation error of the characteristic ODE can be found in textbooks, it is instructive to first introduce our method and notation with the generic ODE, which we present in Section~\ref{sec:odes}. We then extend the analysis to the generic hyperbolic PDE in Section~\ref{sec:pdes}, the main subject of this paper. Section~\ref{sec:numerical_results} contains numerical results which demonstrate our theoretical findings. Our conclusions are presented in Section~\ref{sec:conclusion}. The supplementary documents contain two appendices and error expansions for high-order spatial discretizations and non-linear hyperbolic PDEs. Appendix~A outlines the algorithm for implementing Williamson's low-storage third-order Runge-Kutta method~\citep{williamson1980low}, and Carpenter and Kennedy's low-storage fourth-order Runge-Kutta method~\citep{carpenter1994fourth}, to advance an ODE or a PDE over one time step, along with the important coefficients. Finally, Appendix~B lists the leading-order terms in the local truncation error of an ODE and an inhomogeneous variable-coefficient advection equation for a variety of time-stepping methods. \section{Ordinary Differential Equations} \label{sec:odes} We start by considering the generic first-order ODE \begin{equation} u_t = \mathcal{F}(u,t). \label{ODE1D} \end{equation} The right-hand side $\mathcal{F}$ can be a linear or non-linear function of both $u$ and $t$. Given the exact solution $u^n$ at time level~$t^n$, the {\em local truncation error of the difference equation} at time level $t^{n+1} = t^n + \Delta t$ is \begin{equation} \tau^{n+1} = L_{\Delta} u^n - f_{\Delta}^n.
\label{ODE1DTruncationErrorDifferenceEquation_1} \end{equation} For an ODE, $\Delta = \Delta t$, and for any time-stepping method, \eqref{ODE1DTruncationErrorDifferenceEquation_1} can be written as \begin{equation} \tau^{n+1} = \frac{1}{\Delta t} \left(u^{n+1} - \hat{u}^{n+1}\right), \label{ODE1DTruncationErrorDifferenceEquation_2} \end{equation} where $u^{n+1}$ is the exact solution at time $t^{n+1}$, and $\hat{u}^{n+1}$ is its numerical counterpart as a function of quantities known at time level $t^n$. To maintain clarity of the presentation, we call the {\em local truncation error of the numerical solution} the {\em local truncation error}, which is defined as \begin{equation} \hat{\tau}^{n+1} = \Delta t \tau^{n+1} = \Delta t \left(L_{\Delta} u^n - f_{\Delta}^n\right) = u^{n+1} - \hat{u}^{n+1}. \label{ODE1DTruncationErrorNumericalSolution_1} \end{equation} Calculating the local truncation error starts by expanding each term in~\eqref{ODE1DTruncationErrorNumericalSolution_1} about a common center. For example, the Taylor expansion of $u^{n+1}$ centered at $t^n$ is \begin{equation} u^{n+1} = u^n + \Delta t u_t^n + \frac{\Delta t^2}{2} u_{tt}^n + \frac{\Delta t^3}{6} u_{ttt}^n + \frac{\Delta t^4}{24} u_{tttt}^n + \cdots. \label{ODE1DExactSolutionAtTimeLevelNPlusOne_1} \end{equation} Then, the time derivatives in~\eqref{ODE1DExactSolutionAtTimeLevelNPlusOne_1} are written in terms of the $u$ and $t$ derivatives of $\mathcal{F}$ by using the chain rule \begin{subequations} \label{TimeDerivativesOfDependentVariableODE1D} \begin{align} u_{t} &= \mathcal{F} \equiv \mathcal{F}^{(1)}, \\ u_{tt} &= \mathcal{F}_t + \mathcal{F}_u \mathcal{F} \equiv \mathcal{F}^{(2)}, \\ u_{ttt} &= \mathcal{F}_{tt} + 2 \mathcal{F}_{ut} \mathcal{F} + \mathcal{F}_u \mathcal{F}_t + \mathcal{F}_{uu} \mathcal{F}^2 + \mathcal{F}_u^2 \mathcal{F} \equiv \mathcal{F}^{(3)}, \\ u_{tttt} &= \mathcal{F}_{ttt} + 3 \mathcal{F}_{utt} \mathcal{F} + 3 \mathcal{F}_{ut} \mathcal{F}_t + 5 \mathcal{F}_{ut} \mathcal{F}_u \mathcal{F} + 3 \mathcal{F}_{uu} \mathcal{F} \mathcal{F}_t + \mathcal{F}_u \mathcal{F}_{tt} \nonumber \\ &\hspace{0.325cm}+3 \mathcal{F}_{uut} \mathcal{F}^2 + 4 \mathcal{F}_{uu} \mathcal{F}_u \mathcal{F}^2 + \mathcal{F}_{uuu} \mathcal{F}^3 + \mathcal{F}_u^2 \mathcal{F}_t + \mathcal{F}_u^3 \mathcal{F} \equiv \mathcal{F}^{(4)}, \end{align} \end{subequations} and so on. Inserting~\eqref{TimeDerivativesOfDependentVariableODE1D} into~\eqref{ODE1DExactSolutionAtTimeLevelNPlusOne_1}, \begin{equation} u^{n+1} = u^n + \Delta t \left(\mathcal{F}^{(1)}\right)^n + \frac{\Delta t^2}{2} \left(\mathcal{F}^{(2)}\right)^n + \frac{\Delta t^3}{6} \left(\mathcal{F}^{(3)}\right)^n + \frac{\Delta t^4}{24} \left(\mathcal{F}^{(4)}\right)^n + \cdots = u^n + \sum \limits_{k=1}^{\infty} \frac{\Delta t^k}{k!} \left(\mathcal{F}^{(k)}\right)^n. \label{ODE1DExactSolutionAtTimeLevelNPlusOne_2} \end{equation} We then insert~\eqref{ODE1DExactSolutionAtTimeLevelNPlusOne_2} into~\eqref{ODE1DTruncationErrorNumericalSolution_1}, expand each term in the formula for $\hat{u}^{n+1}$ using a Taylor series with a common center, and the result is the final form of $\hat{\tau}^{n+1}$. The numerical solution of any time-stepping method can be expressed as \begin{equation} \hat{u}^{n+1} = u^n + \sum \limits_{k=1}^{\infty} \frac{\Delta t^k}{k!} \left(\widehat{\mathcal{F}}^{(k)}\right)^n, \label{ODE1DNumericalSolutionAtTimeLevelNPlusOne} \end{equation} as we will illustrate with a number of examples. 
Here $\left(\widehat{\mathcal{F}}^{(k)}\right)^n$ is the discrete equivalent of $\left(\mathcal{F}^{(k)}\right)^n$ for $k = 1,2,\ldots$, and is defined by the time-stepping method. More specifically, $\left(\widehat{\mathcal{F}}^{(k)}\right)^n$ represents the coefficient of $\frac{\Delta t^k}{k!}$ in the numerical solution $\hat{u}^{n+1}$. For a time-stepping method of order $\beta$, $\left(\widehat{\mathcal{F}}^{(k)}\right)^n = \left(\mathcal{F}^{(k)}\right)^n$ for $k = 1,2,\ldots,\beta$. Inserting~\eqref{ODE1DExactSolutionAtTimeLevelNPlusOne_2} and~\eqref{ODE1DNumericalSolutionAtTimeLevelNPlusOne} into~\eqref{ODE1DTruncationErrorNumericalSolution_1}, \begin{equation} \hat{\tau}^{n+1} = \sum \limits_{k=\beta+1}^{\infty} \frac{\Delta t^k}{k!} \left\{\left(\mathcal{F}^{(k)}\right)^n - \left(\widehat{\mathcal{F}}^{(k)}\right)^n\right\} = \frac{c_{\beta+1}^n}{(\beta+1)!} \Delta t^{\beta+1} + \mathcal{O}\left(\Delta t^{\beta+2}\right) = \mathcal{O}\left(\Delta t^{\beta+1}\right), \label{ODE1DTruncationErrorNumericalSolution_2} \end{equation} where $c_{\beta+1} = \mathcal{F}^{(\beta+1)} - \widehat{\mathcal{F}}^{(\beta+1)} \ne 0$. At any time horizon $T = N \Delta t$, the global truncation error of the numerical solution results from the accumulation of these local truncation errors over $N = T/\Delta t$ time steps, and is one order of $\Delta t$ lower than the local truncation error. As pointed out by \citet{leveque2002finite}, the order of accuracy of a numerical method is not the only important attribute worth considering. The magnitude of the numerical error also depends on the coefficients of the leading-order terms of the truncation error, which in turn depend on the problem being solved, the spatial and temporal discretizations, and the time horizon at which the error is computed. If this coefficient is a few orders of magnitude larger for a high-order method than for a low-order method, the latter may yield a lower error magnitude and turn out to be the better option. Moreover, it is only in the asymptotic regime, where the discretization parameters tend to zero, that the higher-order terms are negligible with respect to the leading-order terms. In practice, however, one may not employ such small values of the discretization parameters, in which case the coefficients of the higher-order terms cannot be neglected, and the leading-order terms may not dominate the error magnitude. Finally, in some applications like ocean modeling, numerical stability, which guarantees that small errors are not amplified by the numerical method, and the conservation of physical quantities are assigned higher priority than accuracy. In this paper, however, we focus mostly on the order of accuracy and not as much on numerical stability.
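The chain-rule expressions~\eqref{TimeDerivativesOfDependentVariableODE1D} underlying these truncation errors follow from repeatedly applying the total derivative $\frac{d}{dt} = \partial_t + \mathcal{F}\,\partial_u$ along solutions, and can be reproduced mechanically with a computer algebra system. The following SymPy sketch (our own illustration, not part of any published code) prints $\mathcal{F}^{(2)}$ through $\mathcal{F}^{(4)}$:
\begin{verbatim}
# Reproduce the chain-rule expressions F^(k) for u_t = F(u, t).
import sympy as sp

u, t = sp.symbols('u t')
F = sp.Function('F')(u, t)

def total_dt(expr):
    # d/dt along solutions of u_t = F(u, t): g_t + g_u * F
    return sp.diff(expr, t) + sp.diff(expr, u) * F

Fk = F                      # F^(1) = u_t
for k in range(2, 5):
    Fk = sp.expand(total_dt(Fk))
    print(f"F^({k}) =", Fk)
\end{verbatim}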
We now derive the final form of the local truncation error for the following sets of explicit and implicit time-stepping methods, belonging to the Method of Lines, and demonstrate that the numerical solution after one time step assumes the form \eqref{ODE1DNumericalSolutionAtTimeLevelNPlusOne} and the local truncation error assumes the form \eqref{ODE1DTruncationErrorNumericalSolution_2}. \begin{mylist}\mbox{Explicit time-stepping methods for local truncation error analysis:} \begin{enumerate}[label=(\alph*),noitemsep] \label{myListOfExplicitTimeSteppingMethodsForAnalysis} \item first-order Forward Euler method; \item explicit midpoint method, belonging to the second-order Runge-Kutta family; \item low-storage third-order Runge-Kutta method of~\citet{williamson1980low}; \item second-order Adams-Bashforth method; \item third-order Adams-Bashforth method. \end{enumerate} \end{mylist} \begin{mylist}\mbox{Implicit time-stepping methods for local truncation error analysis:} \begin{enumerate}[label=(\alph*),noitemsep] \label{myListOfImplicitTimeSteppingMethodsForAnalysis} \item first-order Backward Euler method; \item second-order implicit midpoint method; \item second-order trapezoidal rule (Crank-Nicolson). \end{enumerate} \end{mylist} \subsection{Forward Euler Time-Stepping Method} The first-order Forward Euler method is \begin{equation} \hat{u}^{n+1} = u^n + \Delta t \mathcal{F}^n, \end{equation} and the local truncation error is \begin{equation} \hat{\tau}^{n+1} = u^{n+1} - \hat{u}^{n+1} = \frac{\Delta t^2}{2!} \left(\mathcal{F}^{(2)}\right)^n + \mathcal{O}\left(\Delta t^3\right) = \mathcal{O} \left(\Delta t^2\right). \end{equation} \subsection{Runge-Kutta Time-Stepping Methods} We next consider two explicit Runge-Kutta methods: the second-order explicit midpoint method and the low-storage third-order method of \citet{williamson1980low}. The explicit midpoint method is \begin{align} \hat{u}^{n+1} = u^n + \Delta t \mathcal{F} \left(\hat{u}^{n+\frac{1}{2}}, t^{n+\frac{1}{2}}\right) &= u^n + \Delta t \mathcal{F} \left(u^n + \frac{\Delta t}{2} \mathcal{F}^n, t^n + \frac{\Delta t}{2}\right) \nonumber \\ &= u^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)}\right)^n + \frac{\Delta t^2}{2!} \left(\mathcal{F}^{(2)}\right)^n + \frac{\Delta t^3}{3!} \left(\widehat{\mathcal{F}}^{(3)}\right)^n + \mathcal{O}\left(\Delta t^4\right), \label{ODE1DDepedentVariableNPlusOneEMM} \end{align} where we have expanded $\mathcal{F} \left(u^n + \frac{\Delta t}{2} \mathcal{F}^n, t^n + \frac{\Delta t}{2}\right)$ in a Taylor series about $u^n$ and $t^n$, with \begin{equation} \left(\widehat{\mathcal{F}}^{(3)}\right)^n = \frac{3}{4} \left(\mathcal{F}_{uu} \mathcal{F}^2 + 2 \mathcal{F}_{ut} \mathcal{F} + \mathcal{F}_{tt}\right)^n \ne \left(\mathcal{F}^{(3)}\right)^n, \end{equation} and where $\hat{u}^{n+\frac{1}{2}} = u^n + \frac{\Delta t}{2} \mathcal{F}^n$ is the predicted solution at time level $t^{n+\frac{1}{2}} = t^n + \frac{\Delta t}{2}$. The local truncation error is \begin{align} \hat{\tau}^{n+1} = u^{n+1} - \hat{u}^{n+1} = \frac{\Delta t^3}{3!} c_3^n + \mathcal{O}\left(\Delta t^4\right) = \mathcal{O}\left(\Delta t^3\right), \end{align} where \begin{equation} c_3^n = \left(\mathcal{F}^{(3)}\right)^n - \left(\widehat{\mathcal{F}}^{(3)}\right)^n = \left(\mathcal{F}_u \mathcal{F}_t + \mathcal{F}_u^2 \mathcal{F}\right)^n + \frac{1}{4} \left(\mathcal{F}_{uu} \mathcal{F}^2 + 2 \mathcal{F}_{ut} \mathcal{F} + \mathcal{F}_{tt}\right)^n \ne 0. \end{equation} The third-order Runge-Kutta method of~\citet{williamson1980low} is outlined in Algorithm~1. This method has the advantage that it requires only two levels of storage.
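For concreteness, the full update can be expressed as a three-stage loop over two storage registers; the Python sketch below is our own condensation of Algorithm~1, with the coefficients that appear in the stages listed next.
\begin{verbatim}
# One step of Williamson's low-storage third-order Runge-Kutta
# method; only two registers (u and q) are kept in memory.
import numpy as np

def williamson_rk3_step(F, u, t, dt):
    A = (0.0, -5.0/9.0, -153.0/128.0)   # recycles the stored stage q
    B = (1.0/3.0, 15.0/16.0, 8.0/15.0)  # weights for updating u
    c = (0.0, 1.0/3.0, 3.0/4.0)         # stage times (fractions of dt)
    q = np.zeros_like(u, dtype=float)
    for a, b, ci in zip(A, B, c):
        q = a * q + dt * F(u, t + ci * dt)
        u = u + b * q
    return u
\end{verbatim}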
The stages are \vspace{1.5mm}\\ \textbf{Stage 1} \begin{align} \hat{u}^{n+\frac{1}{3}} &= u^n + \frac{1}{3} \Delta t \mathcal{F} \left(u^n,t^n\right) \equiv u^n + \Delta \hat{u}^{n + \frac{1}{3}}, \\ \mathcal{F}_{\text{mean}} \left(\hat{u}^{n+\frac{1}{3}},t^{n+\frac{1}{3}}\right) &= -\frac{5}{9} \mathcal{F} \left(u^n,t^n\right) + \mathcal{F} \left(\hat{u}^{n+\frac{1}{3}},t^{n+\frac{1}{3}}\right) \equiv -\frac{5}{9} \mathcal{F}^n + \mathcal{F} \left(u^n + \Delta \hat{u}^{n + \frac{1}{3}}, t^n + \frac{1}{3} \Delta t\right). \end{align} \textbf{Stage 2} \begin{align} \hat{u}^{n+\frac{3}{4}} &= \hat{u}^{n+\frac{1}{3}} + \frac{15}{16} \Delta t \mathcal{F}_{\text{mean}} \left(\hat{u}^{n+\frac{1}{3}},t^{n+\frac{1}{3}}\right) \equiv u^n + \Delta \hat{u}^{n + \frac{3}{4}}, \\ \mathcal{F}_{\text{mean}} \left(\hat{u}^{n+\frac{3}{4}},t^{n+\frac{3}{4}}\right) &= -\frac{153}{128} \mathcal{F}_{\text{mean}} \left(\hat{u}^{n+\frac{1}{3}},t^{n+\frac{1}{3}}\right) + \mathcal{F} \left(\hat{u}^{n+\frac{3}{4}},t^{n+\frac{3}{4}}\right) \equiv -\frac{153}{128} \mathcal{F}_{\text{mean}}^{n+\frac{1}{3}} + \mathcal{F} \left(u^n + \Delta \hat{u}^{n + \frac{3}{4}},t^n + \frac{3}{4} \Delta t\right). \end{align} \textbf{Stage 3} \begin{align} \hat{u}^{n+1} &= \hat{u}^{n+\frac{3}{4}} + \frac{8}{15} \Delta t \mathcal{F}_{\text{mean}} \left(\hat{u}^{n+\frac{3}{4}},t^{n+\frac{3}{4}}\right) = u^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)}\right)^n + \frac{\Delta t^2}{2!} \left(\mathcal{F}^{(2)}\right)^n + \frac{\Delta t^3}{3!} \left(\mathcal{F}^{(3)}\right)^n + \frac{\Delta t^4}{4!} \left(\widehat{\mathcal{F}}^{(4)}\right)^n + \mathcal{O}\left(\Delta t^5\right). \end{align} The full expressions for $\mathcal{F}_{\text{mean}} \left(\hat{u}^{n+\theta_k},t^{n+\theta_k}\right)$ are obtained by Taylor expanding \begin{equation} \mathcal{F} \left(\hat{u}^{n+\theta_k},t^{n+\theta_k}\right) \equiv \mathcal{F} \left(u^n + \Delta \hat{u}^{n+\theta_k}, t^n+\theta_k \Delta t\right), \end{equation} about $u^n$ and $t^n$ for $k=1$, $2$ and $\theta_1 = \frac{1}{3}$, $\theta_2 = \frac{3}{4}$. In the final expression of the numerical solution at time level $t^{n+1}$, the coefficient of $\frac{\Delta t^4}{4!}$ is \begin{align} \left(\widehat{\mathcal{F}}^{(4)}\right)^n &= \frac{1}{18} \Big(17 \mathcal{F}^3 \mathcal{F}_{uuu} + 66 \mathcal{F}^2 \mathcal{F}_u \mathcal{F}_{uu} + 51 \mathcal{F}^2 \mathcal{F}_{uut} + 54 \mathcal{F} \mathcal{F}_t \mathcal{F}_{uu} \nonumber \\ &\hspace{0.975cm}+ 78 \mathcal{F} \mathcal{F}_u \mathcal{F}_{ut} + 51 \mathcal{F} \mathcal{F}_{utt} + 54 \mathcal{F}_t \mathcal{F}_{ut} + 12 \mathcal{F}_{tt} \mathcal{F}_u + 17 \mathcal{F}_{ttt}\Big)^n \ne \left(\mathcal{F}^{(4)}\right)^n. \end{align} Therefore, the local truncation error is \begin{align} \hat{\tau}^{n+1} = u^{n+1} - \hat{u}^{n+1} &= \frac{\Delta t^4}{4!} c_4^n + \mathcal{O}\left(\Delta t^5\right) = \mathcal{O}\left(\Delta t^4\right), \end{align} where \begin{align} c_4^n &= \left(\mathcal{F}^{(4)}\right)^n - \left(\widehat{\mathcal{F}}^{(4)}\right)^n \nonumber \\ &= \frac{1}{18} \left(\mathcal{F}^3 \mathcal{F}_{uuu} + 6 \mathcal{F}^2 \mathcal{F}_u \mathcal{F}_{uu} + 3 \mathcal{F}^2 \mathcal{F}_{uut} + 18 \mathcal{F} \mathcal{F}_u^3 + 12 \mathcal{F} \mathcal{F}_u \mathcal{F}_{ut} + 3 \mathcal{F} \mathcal{F}_{utt} + 18 \mathcal{F}_t \mathcal{F}_u^2 + 6 \mathcal{F}_{tt} \mathcal{F}_u + \mathcal{F}_{ttt}\right)^n \ne 0.
\end{align} Summarizing, for a predictor-corrector Runge-Kutta method of order $\beta$, $\left(\widehat{\mathcal{F}}^{(k)}\right)^n = \left(\mathcal{F}^{(k)}\right)^n$ for $k = 1,2,\ldots,\beta$, and $\left(\widehat{\mathcal{F}}^{(\beta+1)}\right)^n$ only consists of terms in $\left(\mathcal{F}^{(\beta+1)}\right)^n$, but not necessarily with the correct multiplicative factor. As a result, $\left(\widehat{\mathcal{F}}^{(\beta+1)}\right)^n \ne \left(\mathcal{F}^{(\beta+1)}\right)^n$ and the local truncation error assumes the form~\eqref{ODE1DTruncationErrorNumericalSolution_2}. \subsection{Adams-Bashforth Time-Stepping Methods} We now consider multistep Adams-Bashforth methods. These methods involve the solution at time levels $t^{n-m}$ for $m = 1, 2, \ldots$, which is given by \begin{equation} u^{n-m} = u^n - \frac{m \Delta t}{1!} \left(\mathcal{F}^{(1)}\right)^n + \frac{(m \Delta t)^2}{2!} \left(\mathcal{F}^{(2)}\right)^n - \frac{(m \Delta t)^3}{3!} \left(\mathcal{F}^{(3)}\right)^n + \mathcal{O}\left(\Delta t^4\right) \equiv u^n + \Delta u^{n-m}, \end{equation} where $\Delta u^{n-m} = \sum \limits_{k=1}^{\infty} \frac{(-m \Delta t)^k}{k!} \left(\mathcal{F}^{(k)}\right)^n$. The second-order Adams-Bashforth method leads to the numerical solution \begin{align} \hat{u}^{n+1} &= u^n + \Delta t \left\{\frac{3}{2} \mathcal{F}\left(u^n,t^n\right) - \frac{1}{2} \mathcal{F}\left(u^{n-1},t^{n-1}\right)\right\} \equiv u^n + \Delta t \left\{\frac{3}{2} \mathcal{F}^n - \frac{1}{2} \mathcal{F}\left(u^n + \Delta u^{n-1}, t^n - \Delta t\right)\right\} \nonumber \\ &= u^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)}\right)^n + \frac{\Delta t^2}{2!} \left(\mathcal{F}^{(2)}\right)^n + \frac{\Delta t^3}{3!} \left(\widehat{\mathcal{F}}^{(3)}\right)^n + \mathcal{O}\left(\Delta t^4\right), \end{align} where $\left(\widehat{\mathcal{F}}^{(3)}\right)^n = -\frac{3}{2} \left(\mathcal{F}^{(3)}\right)^n \ne \left(\mathcal{F}^{(3)}\right)^n$. Therefore, the local truncation error is \begin{equation} \hat{\tau}^{n+1} = u^{n+1} - \hat{u}^{n+1} = \frac{\Delta t^3}{3!} c_3^n + \mathcal{O}\left(\Delta t^4\right) = \mathcal{O}\left(\Delta t^3\right), \end{equation} where $c_3^n = \left(\mathcal{F}^{(3)}\right)^n - \left(\widehat{\mathcal{F}}^{(3)}\right)^n = \frac{5}{2} \left(\mathcal{F}^{(3)}\right)^n \ne 0$.
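The orders derived so far are straightforward to confirm numerically. The following minimal Python sketch (included purely for illustration, and not part of the analysis; the test problem $u_t = u \cos t$, whose exact solution through $u(0) = 1$ is $u = e^{\sin t}$, is an arbitrary smooth choice) starts each method from exact data and estimates the order of the one-step defect $\hat{\tau}^{n+1} = u^{n+1} - \hat{u}^{n+1}$:
\begin{verbatim}
import numpy as np

# Test problem u_t = F(u,t) = u*cos(t); exact solution u(t) = exp(sin(t)).
F = lambda u, t: u * np.cos(t)
u = lambda t: np.exp(np.sin(t))

def forward_euler(tn, dt):
    return u(tn) + dt * F(u(tn), tn)

def explicit_midpoint(tn, dt):
    u_half = u(tn) + 0.5 * dt * F(u(tn), tn)      # predictor
    return u(tn) + dt * F(u_half, tn + 0.5 * dt)  # corrector

def adams_bashforth2(tn, dt):
    # Multistep: the solution at t^{n-1} is also taken from the exact solution.
    return u(tn) + dt * (1.5 * F(u(tn), tn) - 0.5 * F(u(tn - dt), tn - dt))

for name, step, p in [("Forward Euler", forward_euler, 2),
                      ("explicit midpoint", explicit_midpoint, 3),
                      ("Adams-Bashforth 2", adams_bashforth2, 3)]:
    tau = [abs(u(1.0 + dt) - step(1.0, dt)) for dt in (1e-2, 5e-3)]
    print(name, np.log2(tau[0] / tau[1]))  # observed order; expect p
\end{verbatim}
Halving $\Delta t$ reduces the defect by a factor of approximately $2^p$, so the printed ratios approach the expected local truncation error orders of two, three, and three, respectively.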
Repeating the same calculation for the third-order Adams-Bashforth method, we obtain the numerical solution \begin{align} \hat{u}^{n+1} &= u^n + \Delta t \left\{\frac{23}{12} \mathcal{F}\left(u^n,t^n\right) - \frac{16}{12} \mathcal{F}\left(u^{n-1},t^{n-1}\right) + \frac{5}{12} \mathcal{F}\left(u^{n-2},t^{n-2}\right)\right\} \nonumber \\ &\equiv u^n + \Delta t \left\{\frac{23}{12} \mathcal{F}\left(u^n,t^n\right) - \frac{16}{12} \mathcal{F}\left(u^n + \Delta u^{n-1}, t^n - \Delta t\right) + \frac{5}{12} \mathcal{F}\left(u^n + \Delta u^{n-2}, t^n - 2\Delta t\right)\right\} \nonumber \\ &= u^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)}\right)^n + \frac{\Delta t^2}{2!} \left(\mathcal{F}^{(2)}\right)^n + \frac{\Delta t^3}{3!} \left(\mathcal{F}^{(3)}\right)^n + \frac{\Delta t^4}{4!}\left(\widehat{\mathcal{F}}^{(4)}\right)^n + \mathcal{O}\left(\Delta t^5\right), \end{align} and the local truncation error \begin{equation} \hat{\tau}^{n+1} = u^{n+1} - \hat{u}^{n+1} = \frac{\Delta t^4}{4!} c_4^n + \mathcal{O}\left(\Delta t^5\right) = \mathcal{O}\left(\Delta t^4\right), \end{equation} where $\left(\widehat{\mathcal{F}}^{(4)}\right)^n = -8 \left(\mathcal{F}^{(4)}\right)^n \ne \left(\mathcal{F}^{(4)}\right)^n$, and $c_4^n = \left(\mathcal{F}^{(4)}\right)^n - \left(\widehat{\mathcal{F}}^{(4)}\right)^n = 9 \left(\mathcal{F}^{(4)}\right)^n \ne 0$. Summarizing, a multistep Adams-Bashforth method of order $\beta$ results in $\left(\widehat{\mathcal{F}}^{(k)}\right)^n = \left(\mathcal{F}^{(k)}\right)^n$ for $k = 1,2,\ldots,\beta$, and $\left(\widehat{\mathcal{F}}^{(\beta+1)}\right)^n = \gamma \left(\mathcal{F}^{(\beta+1)}\right)^n \ne \left(\mathcal{F}^{(\beta+1)}\right)^n$ for some $\gamma \ne 1$, which in turn produces the same form of the local truncation error as~\eqref{ODE1DTruncationErrorNumericalSolution_2}. \subsection{Implicit Time-Stepping Methods} We consider the three implicit time-stepping methods of List~\ref{myListOfImplicitTimeSteppingMethodsForAnalysis}. The Backward Euler method is \begin{equation} \hat{u}^{n+1} = u^n + \Delta t \mathcal{F} \left(u^{n+1},t^{n+1}\right) = u^n + \Delta t \mathcal{F} \left(u^n + \Delta u^{n+1},t^n + \Delta t\right) = u^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)}\right)^n + \frac{\Delta t^2}{2!} \left(\widehat{\mathcal{F}}^{(2)}\right)^n + \mathcal{O}\left(\Delta t^3\right), \end{equation} where $\Delta u^{n+1} = \sum \limits_{k=1}^{\infty} \frac{\Delta t^k}{k!} \left(\mathcal{F}^{(k)}\right)^n$ from~\eqref{ODE1DExactSolutionAtTimeLevelNPlusOne_2} and $\left(\widehat{\mathcal{F}}^{(2)}\right)^n = 2 \left(\mathcal{F}^{(2)}\right)^n \ne \left(\mathcal{F}^{(2)}\right)^n$. The local truncation error is \begin{equation} \hat{\tau}^{n+1} = u^{n+1} - \hat{u}^{n+1} = \frac{\Delta t^2}{2!} c_2^n + \mathcal{O}\left(\Delta t^3\right) = \mathcal{O}\left(\Delta t^2\right), \end{equation} where $c_2^n = \left(\mathcal{F}^{(2)}\right)^n - \left(\widehat{\mathcal{F}}^{(2)}\right)^n = -\left(\mathcal{F}^{(2)}\right)^n \ne 0$.
The second-order implicit midpoint method is \begin{align} \hat{u}^{n+1} &= u^n + \Delta t \mathcal{F} \left(\frac{1}{2} \left(u^n + u^{n+1}\right),t^{n+\frac{1}{2}}\right) \equiv u^n + \Delta t \mathcal{F} \left(u^n + \frac{1}{2} \Delta u^{n+1},t^n + \frac{\Delta t}{2}\right) \nonumber \\ &= u^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)}\right)^n + \frac{\Delta t^2}{2!} \left(\mathcal{F}^{(2)}\right)^n + \frac{\Delta t^3}{3!} \left(\widehat{\mathcal{F}}^{(3)}\right)^n + \mathcal{O}\left(\Delta t^4\right), \end{align} where \begin{equation} \left(\widehat{\mathcal{F}}^{(3)}\right)^n = \frac{3}{4} \left(\mathcal{F}_{uu} \mathcal{F}^2 + 2 \mathcal{F}_u^2 \mathcal{F} + 2 \mathcal{F}_{ut} \mathcal{F} + 2 \mathcal{F}_u \mathcal{F}_t + \mathcal{F}_{tt}\right)^n \ne \left(\mathcal{F}^{(3)}\right)^n. \end{equation} The local truncation error is \begin{equation} \hat{\tau}^{n+1} = u^{n+1} - \hat{u}^{n+1} = \frac{\Delta t^3}{3!} c_3^n + \mathcal{O}\left(\Delta t^4\right) = \mathcal{O}\left(\Delta t^3\right), \end{equation} where \begin{equation} c_3^n = \left(\mathcal{F}^{(3)}\right)^n - \left(\widehat{\mathcal{F}}^{(3)}\right)^n = \frac{1}{4} \left(\mathcal{F}_{uu} \mathcal{F}^2 - 2 \mathcal{F}_u^2 \mathcal{F} + 2 \mathcal{F}_{ut} \mathcal{F} - 2 \mathcal{F}_u \mathcal{F}_t + \mathcal{F}_{tt}\right)^n \ne 0. \end{equation} Finally, the trapezoidal rule (Crank-Nicolson) is \begin{align} \hat{u}^{n+1} &= u^n + \frac{\Delta t}{2} \left\{\mathcal{F} \left(u^n,t^n\right) + \mathcal{F} \left(u^{n+1},t^{n+1}\right)\right\} \equiv u^n + \frac{\Delta t}{2} \left\{\mathcal{F} \left(u^n,t^n\right) + \mathcal{F} \left(u^n + \Delta u^{n+1},t^n + \Delta t\right)\right\} \nonumber \\ &= u^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)}\right)^n + \frac{\Delta t^2}{2!} \left(\mathcal{F}^{(2)}\right)^n + \frac{\Delta t^3}{3!} \left(\widehat{\mathcal{F}}^{(3)}\right)^n + \mathcal{O}\left(\Delta t^4\right), \end{align} and the local truncation error is \begin{equation} \hat{\tau}^{n+1} = u^{n+1} - \hat{u}^{n+1} = \frac{\Delta t^3}{3!} c_3^n + \mathcal{O}\left(\Delta t^4\right) = \mathcal{O}\left(\Delta t^3\right), \end{equation} where $\left(\widehat{\mathcal{F}}^{(3)}\right)^n = \frac{3}{2} \left(\mathcal{F}^{(3)}\right)^n \ne \left(\mathcal{F}^{(3)}\right)^n$, and $c_3^n = \left(\mathcal{F}^{(3)}\right)^n - \left(\widehat{\mathcal{F}}^{(3)}\right)^n = -\frac{1}{2} \left(\mathcal{F}^{(3)}\right)^n \ne 0$. Based on our examples, we observe that an implicit time-stepping method of order $\beta$ results in $\left(\widehat{\mathcal{F}}^{(k)}\right)^n = \left(\mathcal{F}^{(k)}\right)^n$ for $k = 1,2,\ldots,\beta$. When considering a predictor-corrector method, $\left(\widehat{\mathcal{F}}^{(\beta+1)}\right)^n$ consists of all the terms as in $\left(\mathcal{F}^{(\beta+1)}\right)^n$, but mostly with different pre-factors, and if considering a multistep method, $\left(\widehat{\mathcal{F}}^{(\beta+1)}\right)^n$ is a scalar multiple of $\left(\mathcal{F}^{(\beta+1)}\right)^n$, with the scalar factor not equal to one. In all cases, though, $\left(\widehat{\mathcal{F}}^{(\beta+1)}\right)^n \ne \left(\mathcal{F}^{(\beta+1)}\right)^n$, resulting in the local truncation error assuming the form~\eqref{ODE1DTruncationErrorNumericalSolution_2}.
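The leading coefficients of the three implicit methods can also be checked numerically. The following Python sketch (an illustration under the same assumed test problem $u_t = \mathcal{F}(u,t) = u \cos t$ with exact solution $u = e^{\sin t}$) substitutes the exact $u^{n+1}$ into each implicit formula, as in the analysis above, and compares the measured ratio $\hat{\tau}^{n+1}/\Delta t^p$ with the predicted leading constant:
\begin{verbatim}
import numpy as np

u = lambda t: np.exp(np.sin(t))   # exact solution of u_t = u*cos(t)
F = lambda w, t: w * np.cos(t)

tn, dt = 1.0, 1e-3
un, un1 = u(tn), u(tn + dt)

# Partial derivatives of F(u,t) = u*cos(t), evaluated at (u^n, t^n):
Fn = F(un, tn)
Fu, Ft = np.cos(tn), -un * np.sin(tn)
Fuu, Fut, Ftt = 0.0, -np.sin(tn), -un * np.cos(tn)

# Backward Euler: tau ~ (dt^2/2!) c_2 with c_2 = -F^(2) = -(Fu*F + Ft).
tau = un1 - (un + dt * F(un1, tn + dt))
print(tau / dt**2, -(Fu * Fn + Ft) / 2)

# Implicit midpoint: tau ~ (dt^3/3!) c_3 with
# c_3 = (Fuu*F^2 - 2*Fu^2*F + 2*Fut*F - 2*Fu*Ft + Ftt) / 4.
c3 = (Fuu * Fn**2 - 2 * Fu**2 * Fn + 2 * Fut * Fn - 2 * Fu * Ft + Ftt) / 4
tau = un1 - (un + dt * F(0.5 * (un + un1), tn + 0.5 * dt))
print(tau / dt**3, c3 / 6)

# Trapezoidal rule: tau ~ (dt^3/3!) c_3 with c_3 = -F^(3)/2, where
# F^(3) = Fuu*F^2 + Fu^2*F + 2*Fut*F + Fu*Ft + Ftt.
F3 = Fuu * Fn**2 + Fu**2 * Fn + 2 * Fut * Fn + Fu * Ft + Ftt
tau = un1 - (un + 0.5 * dt * (F(un, tn) + F(un1, tn + dt)))
print(tau / dt**3, -F3 / 12)
\end{verbatim}
The two numbers in each printed pair agree up to an $\mathcal{O}\left(\Delta t\right)$ relative correction, which in particular confirms the prefactor of $\frac{1}{4}$ in $c_3^n$ for the implicit midpoint method.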
\subsection{Local Truncation Errors of a First-Order Linear ODE} \label{LTE_First_Order_Linear_ODE} Motivated by the form of the linear inhomogeneous variable-coefficient advection equation appearing in Section \ref{LTE_Linear_Inhomogeneous_Variable_Coefficient_Advection_Equation}, we consider the ODE \begin{equation} u_t + \left(p_0 + q_1\right) u = f(t). \label{ODE1DParticularChoice} \end{equation} Using the notation in~\eqref{ODE1D}, we have \begin{equation} \mathcal{F}(u,t) = -\left(p_0 + q_1\right) u + f(t), \label{mathcal_F_Particular_Choice} \end{equation} where $p_0$ and $q_1$ are constants, and $f(t)$ is a function of the independent variable $t$. The last two rows of Table~B.1 in Appendix~B express the analytical second- and third-order derivatives of the dependent variable $u$ as functions of $u$, $p_0$, $q_1$, $f(t)$, and its derivatives, i.e.,~$f_t$, $f_{tt}$, $f_{ttt}$, $\ldots$, at time level $t^n$. Therefore, these rows of Table~B.1 are equivalent to~\eqref{TimeDerivativesOfDependentVariableODE1D} with this specific form of $\mathcal{F}(u,t)$. Table~B.2 lists the local truncation error of~\eqref{ODE1DParticularChoice} advanced with the five explicit time-stepping methods of List~\ref{myListOfExplicitTimeSteppingMethodsForAnalysis}. Unfortunately, we cannot employ implicit time-stepping methods to advance~\eqref{ODE1DParticularChoice}, since doing so would require knowledge of the functional form of $f(t)$. We were able to do so with the generic ODE~\eqref{ODE1D}, since at any later time level $t^{n+k} = t^n + k \Delta t$, for $k > 0$, we could substitute the Taylor expansion of the exact solution $u^{n+k}$ about $u^n$ in $\mathcal{F}\left(u^{n+k},t^{n+k}\right)$, and expand it as a Taylor series about $u^n$ and $t^n$. It is noteworthy that the local truncation error of~\eqref{ODE1DParticularChoice} assumes the form~\eqref{ODE1DTruncationErrorNumericalSolution_2}, as it does for any choice of $\mathcal{F}(u,t)$. The motivation behind the particular choice \eqref{mathcal_F_Particular_Choice} of $\mathcal{F}(u,t)$ in this example will become apparent in Section \ref{LTE_Linear_Inhomogeneous_Variable_Coefficient_Advection_Equation}. \section{Partial Differential Equations} \label{sec:pdes} In this paper, we consider first-order one-dimensional hyperbolic PDEs of the form $u_t = \mathcal{F}(u,u_x,x,t)$. For notational convenience, we replace $u_x$ with $v$ so that the generic hyperbolic PDE we investigate is \begin{equation} u_t = \mathcal{F}(u,v,x,t). \label{AdvectionEquation1DFunctionalForm} \end{equation} The exact solution of~\eqref{AdvectionEquation1DFunctionalForm} at spatial location $x_j = j \Delta x$ and time level $t^{n+1} = t^n + \Delta t$ is \begin{equation} u_j^{n+1} = u_j^n + \Delta t \left(u_t\right)_j^n + \frac{\Delta t^2}{2} \left(u_{tt}\right)_j^n + \frac{\Delta t^3}{6} \left(u_{ttt}\right)_j^n + \frac{\Delta t^4}{24} \left(u_{tttt}\right)_j^n + \cdots.
\label{PDE1DDependentVariableAtTimeLevelNPlusOne_1} \end{equation} Repeatedly differentiating~\eqref{AdvectionEquation1DFunctionalForm} with respect to time, and expressing the right-hand side in terms of known quantities at the current time level, as we did in~\eqref{TimeDerivativesOfDependentVariableODE1D}, we obtain \begin{subequations} \label{TemporalDerivativesOfDependentVariablePDE1D} \begin{align} \frac{\partial^k u}{\partial t^k} &= \mathcal{F}^{(k)}, \label{TemporalDerivativesOfDependentVariablePDE1D_1} \\ \frac{\partial^k v}{\partial t^k} \equiv \frac{\partial^k}{\partial t^k} \left(\frac{\partial u}{\partial x}\right) &= \mathcal{G}^{(k)}, \label{TemporalDerivativesOfDependentVariablePDE1D_2} \end{align} \end{subequations} for $k=1,2,3,\ldots$, with $\mathcal{F}^{(1)} = \mathcal{F}$ and $\mathcal{G}^{(1)} = \mathcal{G} \equiv \mathcal{F}_u v + \mathcal{F}_v u_{xx} + \mathcal{F}_x$, the total spatial derivative of $\mathcal{F}$. Explicit expressions of $\mathcal{F}^{(k)}$ and $\mathcal{G}^{(k)}$ up to the required orders are in Tables~\ref{TemporalDerivativesOfDependentVariableAsFunctionsOfKnownQuantities_GenericPDE} and~\ref{TemporalDerivativesOfSpatialGradientOfDependentVariableAsFunctionsOfKnownQuantities_GenericPDE}. For a generic PDE, all derivatives of $\mathcal{F}$ need to be considered, but since we are interested in linear and non-linear advection equations, we assume that \begin{subequations} \label{myApproximationsForPDEsUsedToModelPhysicalPhenomena} \begin{align} \frac{\partial^{k} \mathcal{F}}{\partial u^{k}} &= 0, \text{ for $k = 2, 3, \ldots$}, \label{myApproximationsForPDEsUsedToModelPhysicalPhenomena_1} \\ \frac{\partial^{k} \mathcal{F}}{\partial v^{k}} &= 0, \text{ for $k = 2, 3, \ldots$}, \label{myApproximationsForPDEsUsedToModelPhysicalPhenomena_2} \\ \frac{\partial^{l}}{\partial t^{l}} \left(\frac{\partial^{k} \mathcal{F}}{\partial u^{k}}\right) &= 0, \text{ for $k = 1, 2, \ldots$, $l = 1, 2, \ldots$}, \label{myApproximationsForPDEsUsedToModelPhysicalPhenomena_3} \\ \frac{\partial^{l}}{\partial t^{l}} \left(\frac{\partial^{k} \mathcal{F}}{\partial v^{k}}\right) &= 0, \text{ for $k = 1, 2, \ldots$, $l = 1, 2, \ldots$}. \label{myApproximationsForPDEsUsedToModelPhysicalPhenomena_4} \end{align} \end{subequations} Assumptions \eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena_1} and \eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena_2} imply that $\mathcal{F}$ consists only of terms that are linear in $u$, linear in $v$, or products of terms linear in $u$ and in $v$. For example, the inviscid Burgers' equation $u_t + u u_x \equiv u_t + uv = 0$ can be expressed as $u_t = \mathcal{F} \equiv -uv$, so it satisfies assumptions \eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena_1} and \eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena_2}. Assumptions \eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena_3} and \eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena_4} rule out terms of the form $f(t) g(u)$ and $f(t) h(v)$, where $f(t)$, $g(u)$, and $h(v)$ are non-constant functions of $t$, $u$, and $v$, respectively. Even though assumptions \eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena_1}--\eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena_4} reduce the leading order terms of the local truncation error from hundreds of pages to a few pages, they are not necessary to arrive at the final results.
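As an explicit worked check of the second row of Table~\ref{TemporalDerivativesOfDependentVariableAsFunctionsOfKnownQuantities_GenericPDE} (included here for illustration), consider again the inviscid Burgers' equation, for which $\mathcal{F} = -uv$, $\mathcal{F}_u = -v$, $\mathcal{F}_v = -u$, and $\mathcal{F}_t = \mathcal{F}_x = 0$. The table then gives \begin{equation*} u_{tt} = \mathcal{F} \mathcal{F}_u + \mathcal{F}_u \mathcal{F}_v v + \mathcal{F}_v^2 w_1 = (-uv)(-v) + (-v)(-u)v + u^2 w_1 = 2 u u_x^2 + u^2 u_{xx}, \end{equation*} which agrees with differentiating $u_t = -u u_x$ directly, since $u_{tt} = -u_t u_x - u u_{xt} = u u_x^2 + u \left(u_x^2 + u u_{xx}\right)$.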
Inserting~\eqref{TemporalDerivativesOfDependentVariablePDE1D_1} into~\eqref{PDE1DDependentVariableAtTimeLevelNPlusOne_1}, we rewrite $u_j^{n+1}$ as a function of known quantities at time level $t^n$ \begin{equation} u_j^{n+1} = u_j^n + \Delta t \left(\mathcal{F}^{(1)}\right)_j^n + \frac{\Delta t^2}{2} \left(\mathcal{F}^{(2)}\right)_j^n + \frac{\Delta t^3}{6} \left(\mathcal{F}^{(3)}\right)_j^n + \frac{\Delta t^4}{24} \left(\mathcal{F}^{(4)}\right)_j^n + \cdots = u_j^n + \sum \limits_{k=1}^{\infty} \frac{\Delta t^k}{k!} \left(\mathcal{F}^{(k)}\right)_j^n. \label{PDE1DDependentVariableAtTimeLevelNPlusOne_2} \end{equation} \begin{table}[!htp] \centering \caption{Temporal derivatives of the dependent variable of the generic hyperbolic PDE \eqref{AdvectionEquation1DFunctionalForm} with approximations~\eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena}, up to fourth order, expressed as functions of quantities known at the current time level, and $v=u_x$, $w_1 = u_{xx}$, $w_2 = u_{xxx}$, and $w_3 = u_{xxxx}$ for notational convenience.} \vspace{3mm} \setlength{\tabcolsep}{0.35em} \begin{tabular}{cc} \toprule {\colorbox{shade_1} {\parbox{1.3cm}{\centering $u_t \equiv \mathcal{F}^{(1)}$}}} & {\colorbox{shade_1} {\parbox{0.3cm}{$\mathcal{F}$}}} \vspace{1mm} \\ {\colorbox{shade_2} {\parbox{1.375cm}{\centering $u_{tt} \equiv \mathcal{F}^{(2)}$}}} & {\colorbox{shade_2} {\parbox{4.75cm}{\centering $\mathcal{F} \mathcal{F}_u + \mathcal{F}_t + \mathcal{F}_u \mathcal{F}_v v + \mathcal{F}_v^2 w_1 + \mathcal{F}_v \mathcal{F}_x$}}} \vspace{1mm} \\ {\colorbox{shade_3} {\parbox{1.45cm}{\centering $u_{ttt} \equiv \mathcal{F}^{(3)}$}}} & {\colorbox{shade_3} {\parbox{10cm}{$\mathcal{F} \mathcal{F}_u^2 + 2 \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} v + 3 \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_v w_1 + 2 \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_x + \mathcal{F} \mathcal{F}_{ux} \mathcal{F}_v + \mathcal{F}_t \mathcal{F}_u + \mathcal{F}_{tt} \vspace{1mm} \\ + 2 \mathcal{F}_u^2 \mathcal{F}_v v + \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v v^2 + 3 \mathcal{F}_u \mathcal{F}_v^2 w_1 + \mathcal{F}_u \mathcal{F}_v \mathcal{F}_{vx} v + 2 \mathcal{F}_u \mathcal{F}_v \mathcal{F}_x + 3 \mathcal{F}_{uv} \mathcal{F}_v^2 v w_1 \vspace{1mm} \\ + \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_x v + 2 \mathcal{F}_{ux} \mathcal{F}_v^2 v + \mathcal{F}_v^3 w_2 + 3 \mathcal{F}_v^2 \mathcal{F}_{vx} w_1 + \mathcal{F}_v^2 \mathcal{F}_{xx} + \mathcal{F}_v \mathcal{F}_{vx} \mathcal{F}_x + \mathcal{F}_v \mathcal{F}_{xt}$}}} \vspace{1mm} \\ {\colorbox{shade_4} {\parbox{1.525cm}{\centering $u_{tttt} \equiv \mathcal{F}^{(4)}$}}} & {\colorbox{shade_4} {\parbox{13.75cm}{$3 \mathcal{F}^2 \mathcal{F}_{uv}^2 w_1 + 3 \mathcal{F}^2 \mathcal{F}_{uv} \mathcal{F}_{ux} + \mathcal{F} \mathcal{F}_u^3 + 8 \mathcal{F} \mathcal{F}_u^2 \mathcal{F}_{uv} v + 3 \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv}^2 v^2 + 16 \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v w_1 + 3 \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_{vx} v \vspace{1mm} \\ + 8 \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_x + 2 \mathcal{F} \mathcal{F}_u \mathcal{F}_{uvx} \mathcal{F}_v v + 3 \mathcal{F} \mathcal{F}_u \mathcal{F}_{ux} \mathcal{F}_v + 14 \mathcal{F} \mathcal{F}_{uv}^2 \mathcal{F}_v v w_1 + 3 \mathcal{F} \mathcal{F}_{uv}^2 \mathcal{F}_x v + 11 \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_{ux} \mathcal{F}_v v \vspace{1mm} \\ + 6 \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_v^2 w_2 + 14 \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{vx} w_1 + 5 \mathcal{F} 
\mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{xx} + 3 \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_{vx} \mathcal{F}_x + 3 \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_{xt} + 4 \mathcal{F} \mathcal{F}_{uvx} \mathcal{F}_v^2 w_1 \vspace{1mm} \\ + 2 \mathcal{F} \mathcal{F}_{uvx} \mathcal{F}_v \mathcal{F}_x + \mathcal{F} \mathcal{F}_{ux} \mathcal{F}_v \mathcal{F}_{vx} + \mathcal{F} \mathcal{F}_{uxx} \mathcal{F}_v^2 + \mathcal{F}_t \mathcal{F}_u^2 + 3 \mathcal{F}_t \mathcal{F}_u \mathcal{F}_{uv} v + 4 \mathcal{F}_t \mathcal{F}_{uv} \mathcal{F}_v w_1 + 3 \mathcal{F}_t \mathcal{F}_{uv} \mathcal{F}_x \vspace{1mm} \\ + \mathcal{F}_t \mathcal{F}_{ux} \mathcal{F}_v + \mathcal{F}_{tt} \mathcal{F}_u + \mathcal{F}_{ttt} + 3 \mathcal{F}_u^3 \mathcal{F}_v v + 8 \mathcal{F}_u^2 \mathcal{F}_{uv} \mathcal{F}_v v^2 + 6 \mathcal{F}_u^2 \mathcal{F}_v^2 w_1 + 3 \mathcal{F}_u^2 \mathcal{F}_v \mathcal{F}_{vx} v + 3 \mathcal{F}_u^2 \mathcal{F}_v \mathcal{F}_x \vspace{1mm} \\ + \mathcal{F}_u \mathcal{F}_{uv}^2 \mathcal{F}_v v^3 + 26 \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v^2 v w_1 + 2 \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{vx} v^2 + 13 \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_x v + 2 \mathcal{F}_u \mathcal{F}_{uvx} \mathcal{F}_v^2 v^2 + 9 \mathcal{F}_u \mathcal{F}_{ux} \mathcal{F}_v^2 v \vspace{1mm} \\ + 4 \mathcal{F}_u \mathcal{F}_v^3 w_2 + 12 \mathcal{F}_u \mathcal{F}_v^2 \mathcal{F}_{vx} w_1 + \mathcal{F}_u \mathcal{F}_v^2 \mathcal{F}_{vxx} v + 3 \mathcal{F}_u \mathcal{F}_v^2 \mathcal{F}_{xx} + \mathcal{F}_u \mathcal{F}_v \mathcal{F}_{vx}^2 v + 3 \mathcal{F}_u \mathcal{F}_v \mathcal{F}_{vx} \mathcal{F}_x + 2 \mathcal{F}_u \mathcal{F}_v \mathcal{F}_{xt} \vspace{1mm} \\ + 7 \mathcal{F}_{uv}^2 \mathcal{F}_v^2 v^2 w_1 + \mathcal{F}_{uv}^2 \mathcal{F}_v \mathcal{F}_x v^2 + 6 \mathcal{F}_{uv} \mathcal{F}_{ux} \mathcal{F}_v^2 v^2 + 6 \mathcal{F}_{uv} \mathcal{F}_v^3 v w_2 + 12 \mathcal{F}_{uv} \mathcal{F}_v^3 w_1^2 + 14 \mathcal{F}_{uv} \mathcal{F}_v^2 \mathcal{F}_{vx} v w_1 \vspace{1mm} \\ + 14 \mathcal{F}_{uv} \mathcal{F}_v^2 \mathcal{F}_x w_1 + 3 \mathcal{F}_{uv} \mathcal{F}_v^2 \mathcal{F}_{xx} v + 2 \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{vx} \mathcal{F}_x v + 5 \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_x^2 + \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{xt} v + 8 \mathcal{F}_{uvx} \mathcal{F}_v^3 v w_1 \vspace{1mm} \\ + 2 \mathcal{F}_{uvx} \mathcal{F}_v^2 \mathcal{F}_x v + 6 \mathcal{F}_{ux} \mathcal{F}_v^3 w_1 + 6 \mathcal{F}_{ux} \mathcal{F}_v^2 \mathcal{F}_{vx} v + 3 \mathcal{F}_{ux} \mathcal{F}_v^2 \mathcal{F}_x + 3 \mathcal{F}_{uxx} \mathcal{F}_v^3 v + \mathcal{F}_v^4 w_3 + 6 \mathcal{F}_v^3 \mathcal{F}_{vx} w_2 \vspace{1mm} \\ + 4 \mathcal{F}_v^3 \mathcal{F}_{vxx} w_1 + \mathcal{F}_v^3 \mathcal{F}_{xxx} + 7 \mathcal{F}_v^2 \mathcal{F}_{vx}^2 w_1 + 3 \mathcal{F}_v^2 \mathcal{F}_{vx} \mathcal{F}_{xx} + \mathcal{F}_v^2 \mathcal{F}_{vxx} \mathcal{F}_x + \mathcal{F}_v^2 \mathcal{F}_{xxt} + \mathcal{F}_v \mathcal{F}_{vx}^2 \mathcal{F}_x + \mathcal{F}_v \mathcal{F}_{vx} \mathcal{F}_{xt} + \mathcal{F}_v \mathcal{F}_{xtt}$}}} \vspace{1mm} \\ \bottomrule \\ \end{tabular} \label{TemporalDerivativesOfDependentVariableAsFunctionsOfKnownQuantities_GenericPDE} \end{table} \begin{table}[!htp] \centering \caption{Temporal derivatives of the spatial gradient of the dependent variable of the generic hyperbolic PDE \eqref{AdvectionEquation1DFunctionalForm} with approximations~\eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena}, up to third order, expressed as functions of quantities known 
at the current time level, and $v=u_x$, $w_1 = u_{xx}$, $w_2 = u_{xxx}$, and $w_3 = u_{xxxx}$ for notational convenience.} \vspace{3mm} \setlength{\tabcolsep}{0.35em} \begin{tabular}{cc} \toprule {\colorbox{shade_1} {\parbox{1.3cm}{\centering $v_t \equiv \mathcal{G}^{(1)}$}}} & {\colorbox{shade_1} {\parbox{2.25cm}{\centering $\mathcal{F}_u v + \mathcal{F}_v w_1 + \mathcal{F}_x$}}} \vspace{1mm} \\ {\colorbox{shade_2} {\parbox{1.375cm}{\centering $v_{tt} \equiv \mathcal{G}^{(2)}$}}} & {\colorbox{shade_2} {\parbox{10.5cm}{$\mathcal{F} \mathcal{F}_{uv} w_1 + \mathcal{F} \mathcal{F}_{ux} + \mathcal{F}_u^2 v + \mathcal{F}_u \mathcal{F}_{uv} v^2 + 2 \mathcal{F}_u \mathcal{F}_v w_1 + \mathcal{F}_u \mathcal{F}_{vx} v + \mathcal{F}_u \mathcal{F}_x + 3 \mathcal{F}_{uv} \mathcal{F}_v v w_1 \vspace{1mm} \\ + \mathcal{F}_{uv} \mathcal{F}_x v + 2 \mathcal{F}_{ux} \mathcal{F}_v v + \mathcal{F}_v^2 w_2 + 3 \mathcal{F}_v \mathcal{F}_{vx} w_1 + \mathcal{F}_v \mathcal{F}_{xx} + \mathcal{F}_{vx} \mathcal{F}_x + \mathcal{F}_{xt}$}}} \vspace{1mm} \\ {\colorbox{shade_3} {\parbox{1.45cm}{\centering $v_{ttt} \equiv \mathcal{G}^{(3)}$}}} & {\colorbox{shade_3} {\parbox{13.75cm}{$4 \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} w_1 + 2 \mathcal{F} \mathcal{F}_u \mathcal{F}_{uvx} v + 2 \mathcal{F} \mathcal{F}_u \mathcal{F}_{ux} + 5 \mathcal{F} \mathcal{F}_{uv}^2 v w_1 + 5 \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_{ux} v + 3 \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_v w_2 + 5 \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_{vx} w_1 \vspace{1mm} \\ + 2 \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_{xx} + 4 \mathcal{F} \mathcal{F}_{uvx} \mathcal{F}_v w_1 + 2 \mathcal{F} \mathcal{F}_{uvx} \mathcal{F}_x + \mathcal{F} \mathcal{F}_{ux} \mathcal{F}_{vx} + \mathcal{F} \mathcal{F}_{uxx} \mathcal{F}_v + \mathcal{F}_t \mathcal{F}_{uv} w_1 + \mathcal{F}_t \mathcal{F}_{ux} + \mathcal{F}_u^3 v \vspace{1mm} \\ + 4 \mathcal{F}_u^2 \mathcal{F}_{uv} v^2 + 3 \mathcal{F}_u^2 \mathcal{F}_v w_1 + 2 \mathcal{F}_u^2 \mathcal{F}_{vx} v + \mathcal{F}_u^2 \mathcal{F}_x + \mathcal{F}_u \mathcal{F}_{uv}^2 v^3 + 17 \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v v w_1 + 2 \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_{vx} v^2 \vspace{1mm} \\ + 6 \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_x v + 2 \mathcal{F}_u \mathcal{F}_{uvx} \mathcal{F}_v v^2 + 7 \mathcal{F}_u \mathcal{F}_{ux} \mathcal{F}_v v + 3 \mathcal{F}_u \mathcal{F}_v^2 w_2 + 9 \mathcal{F}_u \mathcal{F}_v \mathcal{F}_{vx} w_1 + \mathcal{F}_u \mathcal{F}_v \mathcal{F}_{vxx} v + 2 \mathcal{F}_u \mathcal{F}_v \mathcal{F}_{xx} \vspace{1mm} \\ + \mathcal{F}_u \mathcal{F}_{vx}^2 v + 2 \mathcal{F}_u \mathcal{F}_{vx} \mathcal{F}_x + \mathcal{F}_u \mathcal{F}_{xt} + 7 \mathcal{F}_{uv}^2 \mathcal{F}_v v^2 w_1 + \mathcal{F}_{uv}^2 \mathcal{F}_x v^2 + 6 \mathcal{F}_{uv} \mathcal{F}_{ux} \mathcal{F}_v v^2 + 6 \mathcal{F}_{uv} \mathcal{F}_v^2 v w_2 + 9 \mathcal{F}_{uv} \mathcal{F}_v^2 w_1^2 \vspace{1mm} \\ + 14 \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{vx} v w_1 + 8 \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_x w_1 + 3 \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{xx} v + 2 \mathcal{F}_{uv} \mathcal{F}_{vx} \mathcal{F}_x v + 2 \mathcal{F}_{uv} \mathcal{F}_x^2 + \mathcal{F}_{uv} \mathcal{F}_{xt} v + 8 \mathcal{F}_{uvx} \mathcal{F}_v^2 v w_1 \vspace{1mm} \\ + 2 \mathcal{F}_{uvx} \mathcal{F}_v \mathcal{F}_x v + 6 \mathcal{F}_{ux} \mathcal{F}_v^2 w_1 + 6 \mathcal{F}_{ux} \mathcal{F}_v \mathcal{F}_{vx} v + 3 \mathcal{F}_{ux} \mathcal{F}_v \mathcal{F}_x + 3 \mathcal{F}_{uxx} \mathcal{F}_v^2 v + \mathcal{F}_v^3 w_3 + 6 \mathcal{F}_v^2 
\mathcal{F}_{vx} w_2 \vspace{1mm} \\ + 4 \mathcal{F}_v^2 \mathcal{F}_{vxx} w_1 + \mathcal{F}_v^2 \mathcal{F}_{xxx} + 7 \mathcal{F}_v \mathcal{F}_{vx}^2 w_1 + 3 \mathcal{F}_v \mathcal{F}_{vx} \mathcal{F}_{xx} + \mathcal{F}_v \mathcal{F}_{vxx} \mathcal{F}_x + \mathcal{F}_v \mathcal{F}_{xxt} + \mathcal{F}_{vx}^2 \mathcal{F}_x + \mathcal{F}_{vx} \mathcal{F}_{xt} + \mathcal{F}_{xtt}$}}} \vspace{1mm} \\ \bottomrule \\ \end{tabular} \label{TemporalDerivativesOfSpatialGradientOfDependentVariableAsFunctionsOfKnownQuantities_GenericPDE} \end{table} We now derive the local truncation error of the generic hyperbolic PDE~\eqref{AdvectionEquation1DFunctionalForm} discretized in space using a finite difference method, and advanced in time with the time-stepping methods in Lists~\ref{myListOfExplicitTimeSteppingMethodsForAnalysis} and~\ref{myListOfImplicitTimeSteppingMethodsForAnalysis}. The final form of this local truncation error, however, remains the same for any hyperbolic PDE, employing any type of spatial discretization, including finite element and finite volume methods, and any explicit or implicit time-stepping method, including predictor-corrector and multistep. \subsection{Forward Euler Time-Stepping Method} With the first-order Forward Euler time-stepping method, the numerical solution of the advection equation~\eqref{AdvectionEquation1DFunctionalForm} at spatial location $x_j$ and time level $t^{n+1}$ is \begin{equation} \hat{u}_j^{n+1} = u_j^n + \Delta t \widetilde{\mathcal{F}} \left(u^n_{j-j_{\text{lower}}}, u^n_{j-j_{\text{lower}}+1}, \ldots, u^n_j, \ldots, u^n_{j-j_{\text{lower}}+\alpha},x_{j-j_{\text{lower}}}, x_{j-j_{\text{lower}}+1}, \ldots, x_j, \ldots, x_{j-j_{\text{lower}}+\alpha},t^n\right), \label{PDE1DSolutionAtTimelevelNPlusOneUsingForwardEuler_1} \end{equation} where $\widetilde{\mathcal{F}}$ is a spatially discretized version of $\mathcal{F}$ with $\alpha$ being the order of the spatial discretization, and the index $j_{\text{lower}}$ depends on the finite difference scheme. If we apply the first-order upwind finite difference scheme to an advection problem with positive advection velocity, then $\alpha = 1$, $j_{\text{lower}} = 1$, and~\eqref{PDE1DSolutionAtTimelevelNPlusOneUsingForwardEuler_1} reduces to \begin{equation} \hat{u}_j^{n+1} = u_j^n + \Delta t \widetilde{\mathcal{F}} \left(u_{j-1}^n,u_j^n,x_{j-1},x_j,t^n\right). \label{PDE1DSolutionAtTimelevelNPlusOneUsingForwardEuler_1_FirstOrderFiniteVolumeUpwind} \end{equation} Since $\widetilde{\mathcal{F}}$ equals $\mathcal{F}$ evaluated with the spatial derivative $v_j^n$ replaced by its order-$\alpha$ finite difference approximation, we rewrite~\eqref{PDE1DSolutionAtTimelevelNPlusOneUsingForwardEuler_1} as \begin{equation} \hat{u}_j^{n+1} = u_j^n + \Delta t \mathcal{F} \left(u_j^n, v_j^n + \mathcal{O}\left(\Delta x^{\alpha}\right), x_j, t^n\right) = u_j^n + \Delta t \left\{\mathcal{F} \left(u_j^n, v_j^n,x_j, t^n\right) + \mathcal{O}\left(\Delta x^{\alpha}\right)\right\} = u_j^n + \Delta t \left(\mathcal{F}^{(1)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n, \label{PDE1DSolutionAtTimelevelNPlusOneUsingForwardEuler_2} \end{equation} and the local truncation error is \begin{equation} \hat{\tau}_j^{n+1} = u_j^{n+1} - \hat{u}_j^{n+1} = \Delta t \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^2}{2} \left(\mathcal{F}^{(2)}\right)_j^n + \mathcal{O}\left(\Delta t^3\right) = \Delta t \mathcal{O}\left(\Delta x^{\alpha}\right) + \mathcal{O}\left(\Delta t^2\right).
\end{equation} \subsection{Runge-Kutta Time-Stepping Methods} We next derive the local truncation error of the numerical solution resulting from the explicit second-order midpoint method and Williamson's low-storage third-order Runge-Kutta method~\citep{williamson1980low}. The explicit midpoint method is \begin{equation} \hat{u}_j^{n+1} = u_j^n + \Delta t \mathcal{F} \left(\hat{u}_j^{n+\frac{1}{2}},\hat{v}_j^{n+\frac{1}{2}} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^{n+\frac{1}{2}}\right), \label{PDE1DSolutionAtTimelevelNPlusOneUsingEMM_1} \end{equation} where \begin{equation} \hat{u}_j^{n+\frac{1}{2}} = u_j^n + \frac{\Delta t}{2} \mathcal{F} \left(u_j^n,v_j^n + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n\right) = u_j^n + \frac{\Delta t}{2} \left(\mathcal{F} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n \equiv u_j^n + \Delta \hat{u}^{n+\frac{1}{2}}_j. \label{PDE1DSolutionAtTimelevelNPlusHalfUsingEMM} \end{equation} Equation~\eqref{PDE1DSolutionAtTimelevelNPlusHalfUsingEMM} is the predicted value of $u$ at spatial location $x_j$ and time level $t^{n+\frac{1}{2}} = t^n + \frac{\Delta t}{2}$. The exact spatial derivative of this predicted solution is \begin{equation} \hat{v}_j^{n+\frac{1}{2}} = v_j^n + \frac{\Delta t}{2} \left(\mathcal{F}_x + \mathcal{F}_u v + \mathcal{F}_v w_1 + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n \equiv v_j^n + \Delta \hat{v}_j^{n+\frac{1}{2}}, \label{PDE1DSolutionSpatialDerivativeAtTimelevelNPlusHalfUsingEMM} \end{equation} where $w_1 = v_x = u_{xx}$. Inserting~\eqref{PDE1DSolutionAtTimelevelNPlusHalfUsingEMM} and~\eqref{PDE1DSolutionSpatialDerivativeAtTimelevelNPlusHalfUsingEMM} into~\eqref{PDE1DSolutionAtTimelevelNPlusOneUsingEMM_1}, \begin{align} \hat{u}_j^{n+1} &= u_j^n + \Delta t \mathcal{F} \left(u_j^n + \Delta \hat{u}^{n+\frac{1}{2}}_j,v_j^n + \Delta \hat{v}^{n+\frac{1}{2}}_j + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n + \frac{\Delta t}{2}\right) \nonumber \\ &= u_j^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \frac{\Delta t^2}{2!} \left(\mathcal{F}^{(2)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \frac{\Delta t^3}{3!} \left(\widehat{\mathcal{F}}^{(3)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^4\right), \end{align} where $\left(\widehat{\mathcal{F}}^{(3)}\right)_j^n \ne \left(\mathcal{F}^{(3)}\right)_j^n$. The local truncation error is \begin{align} \hat{\tau}_j^{n+1} &= u_j^{n+1} - \hat{u}_j^{n+1} \nonumber \\ &= \frac{\Delta t}{1!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^2}{2!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^3}{3!} \left(c_3 + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^4\right) = \Delta t \mathcal{O}\left(\Delta x^{\alpha}\right) + \Delta t^2 \mathcal{O}\left(\Delta x^{\alpha}\right) + \mathcal{O}\left(\Delta t^3\right), \end{align} where $\left(c_3\right)_j^n = \left(\mathcal{F}^{(3)}\right)_j^n - \left(\widehat{\mathcal{F}}^{(3)}\right)_j^n \ne 0$. The full expressions for $\frac{1}{3!} \widehat{\mathcal{F}}^{(3)}$ and $\frac{1}{3!} c_3$ are in Table~\ref{FHat3c3ExplicitMidpointMethod}.
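This mixed space--time error structure is easy to confirm numerically. The following Python sketch (an illustration, not part of the analysis; the advection velocity, sampling point, and refinement path are arbitrary choices) takes a single step of the first-order upwind discretization ($\alpha = 1$) of $u_t = -c u_x$, advanced with either Forward Euler or the explicit midpoint method, starting from exact data $u(x,t) = \sin(x - ct)$:
\begin{verbatim}
import numpy as np

c = 1.0
u = lambda x, t: np.sin(x - c * t)   # exact solution of u_t = -c*u_x

def defect_fe(dx, dt, x=0.3, t=0.0):
    # Forward Euler with first-order upwinding, started from exact data.
    u_hat = u(x, t) - c * dt * (u(x, t) - u(x - dx, t)) / dx
    return u(x, t + dt) - u_hat

def defect_emm(dx, dt, x=0.3, t=0.0):
    # Predictor on the grid, then an upwind derivative of the predicted field.
    uh = lambda y: u(y, t) - 0.5 * c * dt * (u(y, t) - u(y - dx, t)) / dx
    u_hat = u(x, t) - c * dt * (uh(x) - uh(x - dx)) / dx
    return u(x, t + dt) - u_hat

# Along the refinement path dt = dx^2 the purely temporal terms are of
# higher order, so the defect of both schemes is dominated by the common
# dt*O(dx) term, and defect/(dt*dx) -> -(c/2)*u_xx = sin(0.3)/2 = 0.1477...
for dx in (1e-2, 5e-3, 2.5e-3):
    dt = dx * dx
    print(dx, defect_fe(dx, dt) / (dt * dx), defect_emm(dx, dt) / (dt * dx))
\end{verbatim}
The two schemes share the leading $\Delta t \, \mathcal{O}\left(\Delta x^{\alpha}\right)$ contribution and differ only in the purely temporal terms, $\mathcal{O}\left(\Delta t^2\right)$ versus $\mathcal{O}\left(\Delta t^3\right)$, exactly as in the expressions above.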
To derive the local truncation error of the numerical solution resulting from Williamson's low-storage third-order Runge-Kutta time-stepping method~\citep{williamson1980low}, we proceed through the following stages: \vspace{1.5mm} \\ \textbf{Stage 1} \begin{align} \hat{u}_j^{n+\frac{1}{3}} &= u_j^n + \frac{\Delta t}{3} \mathcal{F} \left(u_j^n,v_j^n + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n\right) = u_j^n + \frac{\Delta t}{3} \left(\mathcal{F} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n \equiv u_j^n + \Delta \hat{u}_j^{n + \frac{1}{3}}, \\ \hat{v}_j^{n+\frac{1}{3}} &= v_j^n + \frac{\Delta t}{3} \left(\mathcal{F}_x + \mathcal{F}_u v + \mathcal{F}_v w_1 + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n \equiv v_j^n + \Delta \hat{v}_j^{n + \frac{1}{3}}, \\ \widetilde{\mathcal{F}}_{\text{mean}} \left(\hat{u}_j^{n+\frac{1}{3}},\hat{v}_j^{n+\frac{1}{3}},x_j,t^{n+\frac{1}{3}}\right) &= -\frac{5}{9} \left(\mathcal{F} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{F} \left(\hat{u}_j^{n+\frac{1}{3}},\hat{v}_j^{n+\frac{1}{3}} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^{n+\frac{1}{3}}\right) \nonumber \\ &= -\frac{5}{9} \left(\mathcal{F} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{F} \left(u_j^n + \Delta \hat{u}_j^{n+\frac{1}{3}}, v_j^n + \Delta \hat{v}_j^{n+\frac{1}{3}} + \mathcal{O}\left(\Delta x^{\alpha}\right), x_j, t^n+\frac{\Delta t}{3}\right). \end{align} \textbf{Stage 2} \begin{align} \hat{u}_j^{n+\frac{3}{4}} &= \hat{u}_j^{n+\frac{1}{3}} + \frac{15}{16} \Delta t \left(\widetilde{\mathcal{F}}_{\text{mean}}\right)_j^{n+\frac{1}{3}} \equiv u_j^n + \Delta \hat{u}_j^{n + \frac{3}{4}}, \\ \hat{v}_j^{n+\frac{3}{4}} &= \hat{v}_j^{n+\frac{1}{3}} + \frac{15}{16} \Delta t \left(\widetilde{\mathcal{F}}_{\text{mean},x} + \widetilde{\mathcal{F}}_{\text{mean},u} v + \widetilde{\mathcal{F}}_{\text{mean},v} w_1\right)_j^{n+\frac{1}{3}} \equiv v_j^n + \Delta \hat{v}_j^{n + \frac{3}{4}}, \\ \widetilde{\mathcal{F}}_{\text{mean}} \left(\hat{u}_j^{n+\frac{3}{4}},\hat{v}_j^{n+\frac{3}{4}},x_j,t^{n+\frac{3}{4}}\right) &= -\frac{153}{128} \left(\widetilde{\mathcal{F}}_{\text{mean}}\right)_j^{n+\frac{1}{3}} + \mathcal{F} \left(\hat{u}_j^{n+\frac{3}{4}},\hat{v}_j^{n+\frac{3}{4}} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^{n+\frac{3}{4}}\right) \nonumber \\ &\equiv -\frac{153}{128} \left(\widetilde{\mathcal{F}}_{\text{mean}}\right)_j^{n+\frac{1}{3}} + \mathcal{F} \left(u_j^n + \Delta \hat{u}_j^{n+\frac{3}{4}}, v_j^n + \Delta \hat{v}_j^{n+\frac{3}{4}} + \mathcal{O}\left(\Delta x^{\alpha}\right), x_j, t^n+\frac{3}{4} \Delta t\right).
\end{align} \begin{table}[!htp] \centering \caption{The term $\frac{1}{3!} \widehat{\mathcal{F}}^{(3)}$ in the numerical solution $\hat{u}_j^{n+1}$ given by \eqref{Exact_Solution_Time_Level_nP1_Theorem}, and the term $\frac{1}{3!} c_3$ in the local truncation error $\hat{\tau}_j^{n+1}$ given by \eqref{LocalTruncationErrorNumericalSolutionFinalForm_1}, of the generic hyperbolic PDE~\eqref{AdvectionEquation1DFunctionalForm} with approximations~\eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena}, advanced in time with the explicit midpoint method, at spatial location~$x_j$ and time level $t^{n+1} = t^n + \Delta t$, expressed as functions of quantities known at the current time level~$t^n$, and $v=u_x$, $w_1 = u_{xx}$, and $w_2 = u_{xxx}$ for notational convenience.} \vspace{3mm} \setlength{\tabcolsep}{0.35em} \begin{tabular}{cc} \toprule {\colorbox{shade_1} {\parbox{0.85cm}{\centering $\frac{1}{3!} \widehat{\mathcal{F}}^{(3)}$}}} & {\colorbox{shade_1} {\parbox{6.125cm}{$\frac{1}{4} \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} v + \frac{1}{4} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_v w_1 + \frac{1}{4} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_x + \frac{1}{8} \mathcal{F}_{tt}$}}} \vspace{1mm} \\ {\colorbox{shade_2} {\parbox{0.55cm}{\centering $\frac{1}{3!} c_3$}}} & {\colorbox{shade_2} {\parbox{11.25cm}{$\frac{1}{6} \mathcal{F} \mathcal{F}_u^2 + \frac{1}{12} \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} v + \frac{1}{4} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_v w_1 + \frac{1}{12} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_x + \frac{1}{6} \mathcal{F} \mathcal{F}_{ux} \mathcal{F}_v + \frac{1}{6} \mathcal{F}_t \mathcal{F}_u + \frac{1}{24} \mathcal{F}_{tt} \vspace{1mm} \\ + \frac{1}{3} \mathcal{F}_u^2 \mathcal{F}_v v + \frac{1}{6} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v v^2 + \frac{1}{2} \mathcal{F}_u \mathcal{F}_v^2 w_1 + \frac{1}{6} \mathcal{F}_u \mathcal{F}_v \mathcal{F}_{vx} v + \frac{1}{3} \mathcal{F}_u \mathcal{F}_v \mathcal{F}_x + \frac{1}{2} \mathcal{F}_{uv} \mathcal{F}_v^2 v w_1 \vspace{1mm} \\ + \frac{1}{6} \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_x v + \frac{1}{3} \mathcal{F}_{ux} \mathcal{F}_v^2 v + \frac{1}{6} \mathcal{F}_v^3 w_2 + \frac{1}{2} \mathcal{F}_v^2 \mathcal{F}_{vx} w_1 + \frac{1}{6} \mathcal{F}_v^2 \mathcal{F}_{xx} + \frac{1}{6} \mathcal{F}_v \mathcal{F}_{vx} \mathcal{F}_x + \frac{1}{6} \mathcal{F}_v \mathcal{F}_{xt}$}}} \vspace{1mm} \\ \bottomrule \\ \end{tabular} \label{FHat3c3ExplicitMidpointMethod} \end{table} \begin{table}[!htp] \centering \caption{The term $\frac{1}{4!} \widehat{\mathcal{F}}^{(4)}$ in the numerical solution $\hat{u}_j^{n+1}$ given by \eqref{Exact_Solution_Time_Level_nP1_Theorem}, and the term $\frac{1}{4!} c_4$ in the local truncation error $\hat{\tau}_j^{n+1}$ given by \eqref{LocalTruncationErrorNumericalSolutionFinalForm_1}, of the generic hyperbolic PDE~\eqref{AdvectionEquation1DFunctionalForm} with approximations~\eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena}, advanced in time with the low-storage third-order Runge-Kutta method of \citet{williamson1980low}, at spatial location~$x_j$ and time level $t^{n+1} = t^n + \Delta t$, expressed as functions of quantities known at the current time level~$t^n$, and $v=u_x$, $w_1 = u_{xx}$, and $w_2 = u_{xxx}$ for notational convenience.} \vspace{3mm} \setlength{\tabcolsep}{0.35em} \begin{tabular}{cc} \toprule {\colorbox{shade_1} {\parbox{0.85cm}{\centering $\frac{1}{4!} \widehat{\mathcal{F}}^{(4)}$}}} & {\colorbox{shade_1} {\parbox{13.75cm}{$\frac{1}{8} \mathcal{F}^2 \mathcal{F}_{uv}^2 w_1 
+ \frac{1}{8} \mathcal{F}^2 \mathcal{F}_{uv} \mathcal{F}_{ux} + \frac{11}{36} \mathcal{F} \mathcal{F}_u^2 \mathcal{F}_{uv} v + \frac{1}{8} \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv}^2 v^2 + \frac{35}{72} \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v w_1 + \frac{1}{8} \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_{vx} v \vspace{1mm} \\ + \frac{11}{36} \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_x + \frac{1}{18} \mathcal{F} \mathcal{F}_u \mathcal{F}_{uvx} \mathcal{F}_v v + \frac{35}{72} \mathcal{F} \mathcal{F}_{uv}^2 \mathcal{F}_v v w_1 + \frac{1}{8} \mathcal{F} \mathcal{F}_{uv}^2 \mathcal{F}_x v + \frac{13}{36} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_{ux} \mathcal{F}_v v \vspace{1mm} \\ + \frac{13}{72} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_v^2 w_2 + \frac{35}{72} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{vx} w_1 + \frac{13}{72} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{xx} + \frac{1}{8} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_{vx} \mathcal{F}_x + \frac{1}{8} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_{xt} + \frac{1}{18} \mathcal{F} \mathcal{F}_{uvx} \mathcal{F}_v^2 w_1 \vspace{1mm} \\ + \frac{1}{18} \mathcal{F} \mathcal{F}_{uvx} \mathcal{F}_v \mathcal{F}_x + \frac{1}{8} \mathcal{F}_t \mathcal{F}_u \mathcal{F}_{uv} v + \frac{1}{8} \mathcal{F}_t \mathcal{F}_{uv} \mathcal{F}_v w_1 + \frac{1}{8} \mathcal{F}_t \mathcal{F}_{uv} \mathcal{F}_x + \frac{1}{36} \mathcal{F}_{tt} \mathcal{F}_u + \frac{17}{432} \mathcal{F}_{ttt} + \frac{13}{72} \mathcal{F}_u^2 \mathcal{F}_{uv} \mathcal{F}_v v^2 \vspace{1mm} \\ + \frac{13}{36} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v^2 v w_1 + \frac{13}{36} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_x v + \frac{13}{72} \mathcal{F}_{uv} \mathcal{F}_v^3 w_1^2 + \frac{13}{36} \mathcal{F}_{uv} \mathcal{F}_v^2 \mathcal{F}_x w_1 + \frac{13}{72} \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_x^2 + \frac{1}{36} \mathcal{F}_v \mathcal{F}_{xtt}$}}} \vspace{1mm} \\ {\colorbox{shade_2} {\parbox{0.55cm}{\centering $\frac{1}{4!} c_4$}}} & {\colorbox{shade_2} {\parbox{14.25cm}{$\frac{1}{24} \mathcal{F} \mathcal{F}_u^3 + \frac{1}{36} \mathcal{F} \mathcal{F}_u^2 \mathcal{F}_{uv} v + \frac{13}{72} \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v w_1 + \frac{1}{36} \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_x + \frac{1}{36} \mathcal{F} \mathcal{F}_u \mathcal{F}_{uvx} \mathcal{F}_v v + \frac{1}{8} \mathcal{F} \mathcal{F}_u \mathcal{F}_{ux} \mathcal{F}_v \vspace{1mm} \\ + \frac{7}{72} \mathcal{F} \mathcal{F}_{uv}^2 \mathcal{F}_v v w_1 + \frac{7}{72} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_{ux} \mathcal{F}_v v + \frac{5}{72} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_v^2 w_2 + \frac{7}{72} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{vx} w_1 + \frac{1}{36} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{xx} + \frac{1}{9} \mathcal{F} \mathcal{F}_{uvx} \mathcal{F}_v^2 w_1 \vspace{1mm} \\ + \frac{1}{36} \mathcal{F} \mathcal{F}_{uvx} \mathcal{F}_v \mathcal{F}_x + \frac{1}{24} \mathcal{F} \mathcal{F}_{ux} \mathcal{F}_v \mathcal{F}_{vx} + \frac{1}{24} \mathcal{F} \mathcal{F}_{uxx} \mathcal{F}_v^2 + \frac{1}{24} \mathcal{F}_t \mathcal{F}_u^2 + \frac{1}{24} \mathcal{F}_t \mathcal{F}_{uv} \mathcal{F}_v w_1 + \frac{1}{24} \mathcal{F}_t \mathcal{F}_{ux} \mathcal{F}_v + \frac{1}{72} \mathcal{F}_{tt} \mathcal{F}_u \vspace{1mm} \\ + \frac{1}{432} \mathcal{F}_{ttt} + \frac{1}{8} \mathcal{F}_u^3 \mathcal{F}_v v + \frac{11}{72} \mathcal{F}_u^2 \mathcal{F}_{uv} \mathcal{F}_v v^2 + \frac{1}{4} 
\mathcal{F}_u^2 \mathcal{F}_v^2 w_1 + \frac{1}{8} \mathcal{F}_u^2 \mathcal{F}_v \mathcal{F}_{vx} v + \frac{1}{8} \mathcal{F}_u^2 \mathcal{F}_v \mathcal{F}_x + \frac{1}{24} \mathcal{F}_u \mathcal{F}_{uv}^2 \mathcal{F}_v v^3 \vspace{1mm} \\ + \frac{13}{18} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v^2 v w_1 + \frac{1}{12} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{vx} v^2 + \frac{13}{72} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_x v + \frac{1}{12} \mathcal{F}_u \mathcal{F}_{uvx} \mathcal{F}_v^2 v^2 + \frac{3}{8} \mathcal{F}_u \mathcal{F}_{ux} \mathcal{F}_v^2 v + \frac{1}{8} \mathcal{F}_u \mathcal{F}_v^3 w_2 \vspace{1mm} \\ + \frac{1}{2} \mathcal{F}_u \mathcal{F}_v^2 \mathcal{F}_{vx} w_1 + \frac{1}{24} \mathcal{F}_u \mathcal{F}_v^2 \mathcal{F}_{vxx} v + \frac{1}{8} \mathcal{F}_u \mathcal{F}_v^2 \mathcal{F}_{xx} + \frac{1}{24} \mathcal{F}_u \mathcal{F}_v \mathcal{F}_{vx}^2 v + \frac{1}{8} \mathcal{F}_u \mathcal{F}_v \mathcal{F}_{vx} \mathcal{F}_x + \frac{1}{12} \mathcal{F}_u \mathcal{F}_v \mathcal{F}_{xt} + \frac{7}{24} \mathcal{F}_{uv}^2 \mathcal{F}_v^2 v^2 w_1 \vspace{1mm} \\ + \frac{1}{24} \mathcal{F}_{uv}^2 \mathcal{F}_v \mathcal{F}_x v^2 + \frac{1}{4} \mathcal{F}_{uv} \mathcal{F}_{ux} \mathcal{F}_v^2 v^2 + \frac{1}{4} \mathcal{F}_{uv} \mathcal{F}_v^3 v w_2 + \frac{23}{72} \mathcal{F}_{uv} \mathcal{F}_v^3 w_1^2 + \frac{7}{12} \mathcal{F}_{uv} \mathcal{F}_v^2 \mathcal{F}_{vx} v w_1 + \frac{2}{9} \mathcal{F}_{uv} \mathcal{F}_v^2 \mathcal{F}_x w_1 \vspace{1mm} \\ + \frac{1}{8} \mathcal{F}_{uv} \mathcal{F}_v^2 \mathcal{F}_{xx} v + \frac{1}{12} \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{vx} \mathcal{F}_x v + \frac{1}{36} \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_x^2 + \frac{1}{24} \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_{xt} v + \frac{1}{3} \mathcal{F}_{uvx} \mathcal{F}_v^3 v w_1 + \frac{1}{12} \mathcal{F}_{uvx} \mathcal{F}_v^2 \mathcal{F}_x v \vspace{1mm} \\ + \frac{1}{4} \mathcal{F}_{ux} \mathcal{F}_v^3 w_1 + \frac{1}{24} \mathcal{F}_{ux} \mathcal{F}_v^3 w_2 + \frac{1}{4} \mathcal{F}_{ux} \mathcal{F}_v^2 \mathcal{F}_{vx} v + \frac{1}{8} \mathcal{F}_{ux} \mathcal{F}_v^2 \mathcal{F}_x + \frac{1}{8} \mathcal{F}_{uxx} \mathcal{F}_v^3 v + \frac{1}{24} \mathcal{F}_v^4 w_3 + \frac{1}{4} \mathcal{F}_v^3 \mathcal{F}_{vx} w_2 + \frac{1}{6} \mathcal{F}_v^3 \mathcal{F}_{vxx} w_1 \vspace{1mm} \\ + \frac{1}{24} \mathcal{F}_v^3 \mathcal{F}_{xxx} + \frac{7}{24} \mathcal{F}_v^2 \mathcal{F}_{vx}^2 w_1 + \frac{1}{8} \mathcal{F}_v^2 \mathcal{F}_{vx} \mathcal{F}_{xx} + \frac{1}{24} \mathcal{F}_v^2 \mathcal{F}_{vxx} \mathcal{F}_x + \frac{1}{24} \mathcal{F}_v^2 \mathcal{F}_{xxt} + \frac{1}{24} \mathcal{F}_v \mathcal{F}_{vx}^2 \mathcal{F}_x + \frac{1}{24} \mathcal{F}_v \mathcal{F}_{vx} \mathcal{F}_{xt} + \frac{1}{72} \mathcal{F}_v \mathcal{F}_{xtt}$}}} \vspace{1mm} \\ \bottomrule \\ \end{tabular} \label{FHat4c4WilliamsonLowStorageThirdOrderRungeKuttaMethod} \end{table} \noindent \textbf{Stage 3} \begin{align} \hat{u}_j^{n+1} &= \hat{u}_j^{n+\frac{3}{4}} + \frac{8}{15} \Delta t \left(\widetilde{\mathcal{F}}_{\text{mean}}\right)_j^{n+\frac{3}{4}} \nonumber \\ &= u_j^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \frac{\Delta t^2}{2!} \left(\mathcal{F}^{(2)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n \nonumber \\ &\hspace{0.35cm}+ \frac{\Delta t^3}{3!} \left(\mathcal{F}^{(3)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \frac{\Delta t^4}{4!} \left(\widehat{\mathcal{F}}^{(4)} + \mathcal{O}\left(\Delta
x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^5\right), \end{align} where $\left(\widehat{\mathcal{F}}^{(4)}\right)_j^n \ne \left(\mathcal{F}^{(4)}\right)_j^n$. The full expressions for $\widetilde{\mathcal{F}}_{\text{mean}} \left(\hat{u}_j^{n+\theta_k},\hat{v}_j^{n+\theta_k},x_j,t^{n+\theta_k}\right)$ are found by Taylor expanding $\mathcal{F} \left(\hat{u}_j^{n+\theta_k},\hat{v}_j^{n+\theta_k},x_j,t^{n+\theta_k}\right) \equiv \mathcal{F} \left(u_j^n + \Delta \hat{u}_j^{n+\theta_k}, v_j^n + \Delta \hat{v}_j^{n+\theta_k}, x_j, t^n+\theta_k \Delta t\right)$ about $u_j^n$, $v_j^n$, and $t^n$ for $k=1$, $2$ and $\theta_1 = \frac{1}{3}$, $\theta_2 = \frac{3}{4}$. After Stage 3, we can determine the local truncation error \begin{align} \hat{\tau}_j^{n+1} = u_j^{n+1} - \hat{u}_j^{n+1} &= \frac{\Delta t}{1!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^2}{2!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^3}{3!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^4}{4!} \left(c_4 + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^5\right) \nonumber \\ &=\Delta t \mathcal{O}\left(\Delta x^{\alpha}\right) + \Delta t^2 \mathcal{O}\left(\Delta x^{\alpha}\right) + \Delta t^3 \mathcal{O}\left(\Delta x^{\alpha}\right) + \mathcal{O}\left(\Delta t^4\right), \end{align} where $\left(c_4\right)_j^n = \left(\mathcal{F}^{(4)}\right)_j^n - \left(\widehat{\mathcal{F}}^{(4)}\right)_j^n \ne 0$. The full expressions for $\frac{1}{4!} \widehat{\mathcal{F}}^{(4)}$ and $\frac{1}{4!} c_4$ are in Table~\ref{FHat4c4WilliamsonLowStorageThirdOrderRungeKuttaMethod}. Summarizing, a predictor-corrector Runge-Kutta method of order $\beta$ results in $\left(\widehat{\mathcal{F}}^{(k)}\right)_j^n = \left(\mathcal{F}^{(k)}\right)_j^n$ for $k = 1,2,\ldots,\beta$, and $\left(\widehat{\mathcal{F}}^{(\beta+1)}\right)_j^n$ only consists of some of the terms in $\left(\mathcal{F}^{(\beta+1)}\right)_j^n$ but not necessarily with the same multiplicative factors, so $\left(\widehat{\mathcal{F}}^{(\beta+1)}\right)_j^n \ne \left(\mathcal{F}^{(\beta+1)}\right)_j^n$. \subsection{Adams-Bashforth Time-Stepping Methods} We now consider Adams-Bashforth methods, which involve the solution $u_j^{n-m} = u_j^n + \Delta u_j^{n-m}$ and its spatial derivative $v_j^{n-m} = v_j^n + \Delta v_j^{n-m}$, at spatial location~$x_j$ and time level $t^{n-m}$, where \begin{align} \Delta u_j^{n-m} = \sum_{k=1}^{\infty} \frac{(-m \Delta t)^k}{k!} \left(\mathcal{F}^{(k)}\right)_j^n \quad \text{and} \quad \Delta v_j^{n-m} = \sum_{k=1}^{\infty} \frac{(-m \Delta t)^k}{k!} \left(\mathcal{G}^{(k)}\right)_j^n.
\end{align} The second-order Adams-Bashforth method results in the numerical solution \begin{align} \hat{u}_j^{n+1} &= u_j^n + \Delta t \left\{\frac{3}{2} \mathcal{F}\left(u_j^n,v_j^n + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n\right) - \frac{1}{2} \mathcal{F}\left(u_j^{n-1},v_j^{n-1} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^{n-1}\right)\right\} \nonumber \\ &= u_j^n + \Delta t \left\{\frac{3}{2} \mathcal{F}\left(u_j^n,v_j^n + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n\right) - \frac{1}{2} \mathcal{F}\left(u_j^n + \Delta u_j^{n-1},v_j^n + \Delta v_j^{n-1} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n - \Delta t\right)\right\} \nonumber \\ &= u_j^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \frac{\Delta t^2}{2!} \left(\mathcal{F}^{(2)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \frac{\Delta t^3}{3!}\left(\widehat{\mathcal{F}}^{(3)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^4\right), \end{align} and the local truncation error \begin{align} \hat{\tau}_j^{n+1} = u_j^{n+1} - \hat{u}_j^{n+1} &= \frac{\Delta t}{1!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^2}{2!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^3}{3!} \left(c_3 + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^4\right) \nonumber \\ &= \Delta t \mathcal{O}\left(\Delta x^{\alpha}\right) + \Delta t^2 \mathcal{O}\left(\Delta x^{\alpha}\right) + \mathcal{O}\left(\Delta t^3\right), \end{align} where $\left(\widehat{\mathcal{F}}^{(3)}\right)_j^n = -\frac{3}{2} \left(\mathcal{F}^{(3)}\right)_j^n \ne \left(\mathcal{F}^{(3)}\right)_j^n$, and $\left(c_3\right)_j^n = \left(\mathcal{F}^{(3)}\right)_j^n - \left(\widehat{\mathcal{F}}^{(3)}\right)_j^n = \frac{5}{2} \left(\mathcal{F}^{(3)}\right)_j^n \ne 0$. The third-order Adams-Bashforth method results in the numerical solution \begin{align} \hat{u}_j^{n+1} &= u_j^n + \Delta t \left\{\frac{23}{12} \mathcal{F}\left(u_j^n,v_j^n + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n\right) - \frac{16}{12} \mathcal{F}\left(u_j^{n-1},v_j^{n-1} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^{n-1}\right)\right. \nonumber \\ &\hspace{1.625cm} \left.+ \frac{5}{12} \mathcal{F}\left(u_j^{n-2},v_j^{n-2} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^{n-2}\right)\right\} \nonumber \\ &\equiv u_j^n + \Delta t \left\{\frac{23}{12} \mathcal{F}\left(u_j^n,v_j^n + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n\right) - \frac{16}{12} \mathcal{F}\left(u_j^n + \Delta u_j^{n-1},v_j^n + \Delta v_j^{n-1} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n - \Delta t\right)\right.
\nonumber \\ &\hspace{1.625cm} \left.+ \frac{5}{12} \mathcal{F}\left(u_j^n + \Delta u_j^{n-2},v_j^n + \Delta v_j^{n-2} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n - 2 \Delta t\right)\right\} \nonumber \\ &= u_j^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \frac{\Delta t^2}{2!} \left(\mathcal{F}^{(2)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n \nonumber \\ &\hspace{0.35cm}+ \frac{\Delta t^3}{3!}\left(\mathcal{F}^{(3)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \frac{\Delta t^4}{4!}\left(\widehat{\mathcal{F}}^{(4)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^5\right), \end{align} and the local truncation error \begin{align} \hat{\tau}_j^{n+1} = u_j^{n+1} - \hat{u}_j^{n+1} &= \frac{\Delta t}{1!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^2}{2!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^3}{3!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^4}{4!} \left(c_4 + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^5\right) \nonumber \\ &= \Delta t \mathcal{O}\left(\Delta x^{\alpha}\right) + \Delta t^2 \mathcal{O}\left(\Delta x^{\alpha}\right) + \Delta t^3 \mathcal{O}\left(\Delta x^{\alpha}\right) + \mathcal{O}\left(\Delta t^4\right), \end{align} where $\left(\widehat{\mathcal{F}}^{(4)}\right)_j^n = -8 \left(\mathcal{F}^{(4)}\right)_j^n \ne \left(\mathcal{F}^{(4)}\right)_j^n$, and $\left(c_4\right)_j^n = \left(\mathcal{F}^{(4)}\right)_j^n - \left(\widehat{\mathcal{F}}^{(4)}\right)_j^n = 9 \left(\mathcal{F}^{(4)}\right)_j^n \ne 0$. Summarizing, when applying an Adams-Bashforth method of order $\beta$, we obtain $\left(\widehat{\mathcal{F}}^{(k)}\right)_j^n = \left(\mathcal{F}^{(k)}\right)_j^n$ for $k = 1,2,\ldots,\beta$, and $\left(\widehat{\mathcal{F}}^{(\beta+1)}\right)_j^n = \gamma \left(\mathcal{F}^{(\beta+1)}\right)_j^n \ne \left(\mathcal{F}^{(\beta+1)}\right)_j^n$ where $\gamma \ne 1$. \subsection{Implicit Time-Stepping Methods} We consider the three implicit time-stepping methods in List~\ref{myListOfImplicitTimeSteppingMethodsForAnalysis}. Each method's local truncation error involves the solution $u_j^{n+1} = u_j^n + \Delta u_j^{n+1}$ and its spatial derivative $v_j^{n+1} = v_j^n + \Delta v_j^{n+1}$ at spatial location $x_j$ and time level $t^{n+1}$, where \begin{align} \Delta u_j^{n+1} = \sum_{k=1}^{\infty} \frac{(\Delta t)^k}{k!} \left(\mathcal{F}^{(k)}\right)_j^n \quad \text{and} \quad \Delta v_j^{n+1} = \sum_{k=1}^{\infty} \frac{(\Delta t)^k}{k!} \left(\mathcal{G}^{(k)}\right)_j^n. \end{align} Applying the first-order Backward Euler method, \begin{align} \hat{u}_j^{n+1} &= u_j^n + \Delta t \mathcal{F} \left(u_j^{n+1},v_j^{n+1} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^{n+1}\right) \equiv u_j^n + \Delta t \mathcal{F} \left(u_j^n + \Delta u_j^{n+1},v_j^n + \Delta v_j^{n+1} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n + \Delta t\right) \nonumber \\ &= u_j^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \frac{\Delta t^2}{2!} \left(\widehat{\mathcal{F}}^{(2)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^3\right), \end{align} where $\left(\widehat{\mathcal{F}}^{(2)}\right)_j^n = 2 \left(\mathcal{F}^{(2)}\right)_j^n \ne \left(\mathcal{F}^{(2)}\right)_j^n$.
The local truncation error is \begin{equation} \hat{\tau}_j^{n+1} = u_j^{n+1} - \hat{u}_j^{n+1} = \frac{\Delta t}{1!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^2}{2!} \left(c_2 + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^3\right) = \Delta t \mathcal{O}\left(\Delta x^{\alpha}\right) + \mathcal{O}\left(\Delta t^2\right), \end{equation} where $\left(c_2\right)_j^n = \left(\mathcal{F}^{(2)}\right)_j^n - \left(\widehat{\mathcal{F}}^{(2)}\right)_j^n = -\left(\mathcal{F}^{(2)}\right)_j^n \ne 0$. \begin{table}[!htp] \centering \caption{The term $\frac{1}{3!} \widehat{\mathcal{F}}^{(3)}$ in the numerical solution $\hat{u}_j^{n+1}$ given by \eqref{Exact_Solution_Time_Level_nP1_Theorem}, and the term $\frac{1}{3!} c_3$ in the local truncation error $\hat{\tau}_j^{n+1}$ given by \eqref{LocalTruncationErrorNumericalSolutionFinalForm_1}, of the generic hyperbolic PDE~\eqref{AdvectionEquation1DFunctionalForm} with approximations~\eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena}, advanced in time with the implicit midpoint method, at spatial location~$x_j$ and time level $t^{n+1} = t^n + \Delta t$, expressed as functions of quantities known at the current time level~$t^n$, and $v=u_x$, $w_1 = u_{xx}$, and $w_2 = u_{xxx}$ for notational convenience.} \vspace{3mm} \setlength{\tabcolsep}{0.35em} \begin{tabular}{cc} \toprule {\colorbox{shade_1} {\parbox{0.85cm}{\centering $\frac{1}{3!} \widehat{\mathcal{F}}^{(3)}$}}} & {\colorbox{shade_1} {\parbox{11.25cm}{$\frac{1}{4} \mathcal{F} \mathcal{F}_u^2 + \frac{1}{4} \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} v + \frac{1}{2} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_v w_1 + \frac{1}{4} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_x + \frac{1}{4} \mathcal{F} \mathcal{F}_{ux} \mathcal{F}_v + \frac{1}{4} \mathcal{F}_t \mathcal{F}_u + \frac{1}{8} \mathcal{F}_{tt} \vspace{1mm} \\ + \frac{1}{2} \mathcal{F}_u^2 \mathcal{F}_v v + \frac{1}{4} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v v^2 + \frac{3}{4} \mathcal{F}_u \mathcal{F}_v^2 w_1 + \frac{1}{4} \mathcal{F}_u \mathcal{F}_v \mathcal{F}_{vx} v + \frac{1}{2} \mathcal{F}_u \mathcal{F}_v \mathcal{F}_x + \frac{3}{4} \mathcal{F}_{uv} \mathcal{F}_v^2 v w_1 \vspace{1mm} \\ + \frac{1}{4} \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_x v + \frac{1}{2} \mathcal{F}_{ux} \mathcal{F}_v^2 v + \frac{1}{4} \mathcal{F}_v^3 w_2 + \frac{3}{4} \mathcal{F}_v^2 \mathcal{F}_{vx} w_1 + \frac{1}{4} \mathcal{F}_v^2 \mathcal{F}_{xx} + \frac{1}{4} \mathcal{F}_v \mathcal{F}_{vx} \mathcal{F}_x + \frac{1}{4} \mathcal{F}_v \mathcal{F}_{xt}$}}} \vspace{1mm} \\ {\colorbox{shade_2} {\parbox{0.55cm}{\centering $\frac{1}{3!} c_3$}}} & {\colorbox{shade_2} {\parbox{11.25cm}{$-\frac{1}{12} \mathcal{F} \mathcal{F}_u^2 + \frac{1}{12} \mathcal{F} \mathcal{F}_u \mathcal{F}_{uv} v + \frac{1}{12} \mathcal{F} \mathcal{F}_{uv} \mathcal{F}_x - \frac{1}{12} \mathcal{F} \mathcal{F}_{ux} \mathcal{F}_v - \frac{1}{12} \mathcal{F}_t \mathcal{F}_u + \frac{1}{24} \mathcal{F}_{tt} - \frac{1}{6} \mathcal{F}_u^2 \mathcal{F}_v v \vspace{1mm} \\ - \frac{1}{12} \mathcal{F}_u \mathcal{F}_{uv} \mathcal{F}_v v^2 - \frac{1}{4} \mathcal{F}_u \mathcal{F}_v^2 w_1 - \frac{1}{12} \mathcal{F}_u \mathcal{F}_v \mathcal{F}_{vx} v - \frac{1}{6} \mathcal{F}_u \mathcal{F}_v \mathcal{F}_x - \frac{1}{4} \mathcal{F}_{uv} \mathcal{F}_v^2 v w_1 - \frac{1}{12} \mathcal{F}_{uv} \mathcal{F}_v \mathcal{F}_x v \vspace{1mm} \\ - \frac{1}{6} \mathcal{F}_{ux} \mathcal{F}_v^2 v - \frac{1}{12} \mathcal{F}_v^3 w_2 - \frac{1}{4} \mathcal{F}_v^2
\mathcal{F}_{vx} w_1 - \frac{1}{12} \mathcal{F}_v^2 \mathcal{F}_{xx} - \frac{1}{12} \mathcal{F}_v \mathcal{F}_{vx} \mathcal{F}_x - \frac{1}{12} \mathcal{F}_v \mathcal{F}_{xt}$}}} \vspace{1mm} \\ \bottomrule \\ \end{tabular} \label{FHat3c3ImplicitMidpointMethod} \end{table} The second-order predictor-corrector implicit midpoint method is \begin{align} \hat{u}_j^{n+1} &= u_j^n + \Delta t \mathcal{F} \left(\frac{1}{2} \left(u_j^n + u_j^{n+1}\right),\frac{1}{2} \left(v_j^n + v_j^{n+1}\right) + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^{n+\frac{1}{2}}\right) \nonumber \\ &\equiv u_j^n + \Delta t \mathcal{F} \left(u_j^n + \frac{1}{2} \Delta u_j^{n+1},v_j^n + \frac{1}{2} \Delta v_j^{n+1} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n + \frac{\Delta t}{2}\right) \nonumber \\ &= u_j^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \frac{\Delta t^2}{2!} \left(\mathcal{F}^{(2)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \frac{\Delta t^3}{3!} \left(\widehat{\mathcal{F}}^{(3)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^4\right), \end{align} with the local truncation error \begin{align} \hat{\tau}_j^{n+1} = u_j^{n+1} - \hat{u}_j^{n+1} &= \frac{\Delta t}{1!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^2}{2!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^3}{3!} \left(c_3 + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^4\right) \nonumber \\ &= \Delta t \mathcal{O}\left(\Delta x^{\alpha}\right) + \Delta t^2 \mathcal{O}\left(\Delta x^{\alpha}\right) + \mathcal{O}\left(\Delta t^3\right), \end{align} where $\left(\widehat{\mathcal{F}}^{(3)}\right)_j^n \ne \left(\mathcal{F}^{(3)}\right)_j^n$ and $\left(c_3\right)_j^n = \left(\mathcal{F}^{(3)}\right)_j^n - \left(\widehat{\mathcal{F}}^{(3)}\right)_j^n \ne 0$. The full expressions for $\left(\widehat{\mathcal{F}}^{(3)}\right)_j^n$ and $\left(c_3\right)_j^n$ are in Table~\ref{FHat3c3ImplicitMidpointMethod}.
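As an aside, the structure of these coefficients can be checked symbolically for the ODE analogue of the implicit midpoint method, obtained by dropping the $\mathcal{O}\left(\Delta x^{\alpha}\right)$ terms. The following minimal SymPy sketch, which is illustrative only and assumes the concrete tendency $\mathcal{F}(u,t) = -u^2 + \sin t$, confirms that the coefficients of $\Delta t$ and $\Delta t^2$ in the numerical solution match those of the exact Taylor series, while the coefficient of $\Delta t^3$ does not:

\begin{verbatim}
import sympy as sp

t, dt = sp.symbols('t dt')
u = sp.Function('u')
F = lambda U, T: -U**2 + sp.sin(T)   # assumed concrete tendency

F1 = F(u(t), t)                                      # F^(1)
F2 = sp.simplify(F1.diff(t).subs(u(t).diff(t), F1))  # F^(2): total t-derivative
F3 = sp.simplify(F2.diff(t).subs(u(t).diff(t), F1))  # F^(3)

du = dt*F1 + dt**2/2*F2 + dt**3/6*F3       # exact increment, truncated at O(dt^4)
u_exact = u(t) + du
u_hat = u(t) + dt*F(u(t) + du/2, t + dt/2) # implicit midpoint, exact u^{n+1} inserted

tau = sp.expand(sp.series(u_exact - u_hat, dt, 0, 4).removeO())
for k in (1, 2, 3):
    print(k, sp.simplify(tau.coeff(dt, k)))  # zero for k = 1, 2; c_3/3! for k = 3
\end{verbatim}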
Finally, the trapezoidal rule results in the numerical solution \begin{align} \hat{u}_j^{n+1} &= u_j^n + \frac{\Delta t}{2} \left\{\mathcal{F} \left(u_j^n,v_j^n + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n\right) + \mathcal{F} \left(u_j^{n+1},v_j^{n+1} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^{n+1}\right)\right\} \nonumber \\ &\equiv u_j^n + \frac{\Delta t}{2} \left\{\mathcal{F} \left(u_j^n,v_j^n + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n\right) + \mathcal{F} \left(u_j^n + \Delta u_j^{n+1},v_j^n + \Delta v_j^{n+1} + \mathcal{O}\left(\Delta x^{\alpha}\right),x_j,t^n + \Delta t\right)\right\} \nonumber \\ &= u_j^n + \frac{\Delta t}{1!} \left(\mathcal{F}^{(1)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \frac{\Delta t^2}{2!} \left(\mathcal{F}^{(2)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \frac{\Delta t^3}{3!} \left(\widehat{\mathcal{F}}^{(3)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^4\right), \end{align} and the local truncation error \begin{align} \hat{\tau}_j^{n+1} = u_j^{n+1} - \hat{u}_j^{n+1} &= \frac{\Delta t}{1!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^2}{2!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^3}{3!} \left(c_3 + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^4\right) \nonumber \\ &= \Delta t \mathcal{O}\left(\Delta x^{\alpha}\right) + \Delta t^2 \mathcal{O}\left(\Delta x^{\alpha}\right) + \mathcal{O}\left(\Delta t^3\right), \end{align} where $\left(\widehat{\mathcal{F}}^{(3)}\right)_j^n = \frac{3}{2} \left(\mathcal{F}^{(3)}\right)_j^n \ne \left(\mathcal{F}^{(3)}\right)_j^n$, and $\left(c_3\right)_j^n = \left(\mathcal{F}^{(3)}\right)_j^n - \left(\widehat{\mathcal{F}}^{(3)}\right)_j^n = -\frac{1}{2} \left(\mathcal{F}^{(3)}\right)_j^n \ne 0$. Based on our examples, an implicit time-stepping method of order $\beta$ results in $\left(\widehat{\mathcal{F}}^{(k)}\right)_j^n = \left(\mathcal{F}^{(k)}\right)_j^n$ for $k = 1,2,\ldots,\beta$. For predictor-corrector methods, the term $\left(\widehat{\mathcal{F}}^{(\beta+1)}\right)_j^n$ consists of the same terms as $\left(\mathcal{F}^{(\beta+1)}\right)_j^n$, but mostly with different pre-factors, and for multistep methods, the term $\left(\widehat{\mathcal{F}}^{(\beta+1)}\right)_j^n$ is a scalar multiple of $\left(\mathcal{F}^{(\beta+1)}\right)_j^n$, with the scalar factor not equal to one. In either case, $\left(\widehat{\mathcal{F}}^{(\beta+1)}\right)_j^n \ne \left(\mathcal{F}^{(\beta+1)}\right)_j^n$. \subsection{Local and Global Truncation Errors of a Hyperbolic PDE} We have observed that the local truncation errors of both the predictor-corrector and the multistep time-stepping methods, when applied to the first-order hyperbolic PDE, have the same generic form. The result is the following Theorem.
\begin{thm} \label{Theorem_1} The global truncation error of a hyperbolic PDE $u_t = \mathcal{F}(u,u_x,x,t)$, discretized in space with an $\alpha$-order method on a uniform mesh with spacing $\Delta x$ and advanced in time with a $\beta$-order method, after an integral number of time steps of magnitude $\Delta t$ is \begin{equation} \hat{\tau}_G = \mathcal{O}\left({\Delta x}^{\alpha}\right) + \Delta t \mathcal{O}\left({\Delta x}^{\alpha}\right) + {\Delta t}^2 \mathcal{O}\left({\Delta x}^{\alpha}\right) + \cdots + {\Delta t}^{\beta-1} \mathcal{O}\left({\Delta x}^{\alpha}\right) + \mathcal{O}\left({\Delta t}^{\beta}\right), \label{GlobalTruncationErrorNumericalSolutionFinalForm} \end{equation} which reduces to \begin{equation} \hat{\tau}_G \approx \mathcal{O}\left({\Delta x}^{\alpha}\right) + \mathcal{O}\left({\Delta t}^{\beta}\right) \text{ for $\Delta t \ll 1$}. \label{GlobalTruncationErrorNumericalSolutionFinalForm_ApproximationForDeltaTMuchLessThanOne} \end{equation} \begin{proof} Given the exact solution $u_j^n$ of a hyperbolic PDE $u_t = \mathcal{F}(u,u_x,x,t)$ on a uniform mesh with spacing $\Delta x$, at spatial locations $x_j$ for $j=1,2,\ldots$, and at time level $t^n$, the exact solution at time level $t^{n+1} = t^n + \Delta t$ may be obtained by Taylor expanding the solution about time level $t^n$ as \begin{equation} u_j^{n+1} = u_j^n + \sum \limits_{k=1}^{\infty} \frac{\Delta t^k}{k!} \left(\frac{\partial^k u}{\partial t^k}\right)_j^n \equiv u_j^n + \sum \limits_{k=1}^{\infty} \frac{\Delta t^k}{k!} \left(\mathcal{F}^{(k)}\right)_j^n, \label{Exact_Solution_Time_Level_nP1_Theorem} \end{equation} where $\left(\mathcal{F}^{(k)}\right)_j^n = \left(\frac{\partial^k u}{\partial t^k}\right)_j^n$ is the $k^{\text{th}}$-order temporal derivative at $x_j$ and $t^n$. The numerical solution at time level $t^{n+1}$, obtained with a time-stepping method belonging to the Method of Lines, may be written in the general form \begin{equation} \hat{u}_j^{n+1} = u_j^n + \sum \limits_{k=1}^{\infty} \frac{\Delta t^k}{k!} \left(\widehat{\mathcal{F}}^{(k)} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n, \label{Numerical_Solution_Time_Level_nP1_Theorem} \end{equation} where $\alpha$ is the order of the spatial discretization and $\widehat{\mathcal{F}}^{(k)}$ is specified by the time-stepping method. If $\beta$ represents the order of the time-stepping method, \begin{equation} \left(\widehat{\mathcal{F}}^{(k)}\right)_j^n = \left(\mathcal{F}^{(k)}\right)_j^n \equiv \left(\frac{\partial^k u}{\partial t^k}\right)_j^n \mbox{ for } k = 1,2,\ldots,\beta. \label{FHat_F_Theorem} \end{equation} Subtracting \eqref{Numerical_Solution_Time_Level_nP1_Theorem} from \eqref{Exact_Solution_Time_Level_nP1_Theorem}, we obtain the local truncation error \begin{align} \hat{\tau}_j^{n+1} &= u_j^{n+1} - \hat{u}_j^{n+1} \nonumber \\ &= \sum \limits_{k=1}^{\infty} \frac{\Delta t^k}{k!} \left\{\left(\mathcal{F}^{(k)}\right)_j^n - \left(\widehat{\mathcal{F}}^{(k)}\right)_j^n + \mathcal{O}\left(\Delta x^{\alpha}\right)\right\}.
\label{Local_Truncation_Error_Time_Level_nP1_Theorem} \end{align} Combining~\eqref{Local_Truncation_Error_Time_Level_nP1_Theorem} and \eqref{FHat_F_Theorem}, \begin{align} \hat{\tau}_j^{n+1} &= \sum \limits_{k=1}^{\beta} \frac{\Delta t^k}{k!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \sum \limits_{k=\beta+1}^{\infty} \frac{\Delta t^k}{k!} \left\{\left(\mathcal{F}^{(k)}\right)_j^n - \left(\widehat{\mathcal{F}}^{(k)}\right)_j^n + \mathcal{O}\left(\Delta x^{\alpha}\right)\right\} \nonumber \\ &= \frac{\Delta t}{1!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^2}{2!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^3}{3!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \cdots + \frac{\Delta t^{\beta}}{{\beta}!} \mathcal{O}\left(\Delta x^{\alpha}\right) + \frac{\Delta t^{\beta+1}}{(\beta+1)!} \left(c_{\beta+1} + \mathcal{O}\left(\Delta x^{\alpha}\right)\right)_j^n + \mathcal{O}\left(\Delta t^{\beta+2}\right) \label{LocalTruncationErrorNumericalSolutionFinalForm_1} \\ &= \Delta t \mathcal{O}\left({\Delta x}^{\alpha}\right) + {\Delta t}^2 \mathcal{O}\left({\Delta x}^{\alpha}\right) + {\Delta t}^3 \mathcal{O}\left({\Delta x}^{\alpha}\right) + \cdots + {\Delta t}^{\beta} \mathcal{O}\left({\Delta x}^{\alpha}\right) + \mathcal{O}\left({\Delta t}^{\beta+1}\right), \label{LocalTruncationErrorNumericalSolutionFinalForm} \end{align} where $\left(c_{\beta+1}\right)_j^n = \left(\mathcal{F}^{(\beta+1)}\right)_j^n - \left(\widehat{\mathcal{F}}^{(\beta+1)}\right)_j^n \ne 0$. The global truncation error at a time horizon, after an integral number of time steps, is one order of $\Delta t$ less than its local counterpart, since the number of time steps required to reach the horizon is proportional to $1/\Delta t$, and can be expressed as \begin{equation} \left(\hat{\tau}_G\right)_j = \mathcal{O}\left({\Delta x}^{\alpha}\right) + \Delta t \mathcal{O}\left({\Delta x}^{\alpha}\right) + {\Delta t}^2 \mathcal{O}\left({\Delta x}^{\alpha}\right) + \cdots + {\Delta t}^{\beta-1} \mathcal{O}\left({\Delta x}^{\alpha}\right) + \mathcal{O}\left({\Delta t}^{\beta}\right), \label{GlobalTruncationErrorNumericalSolutionFinalForm_GivenSpatialLocation} \end{equation} which reduces to \begin{equation} \left(\hat{\tau}_G\right)_j = \mathcal{O}\left({\Delta x}^{\alpha}\right) + \mathcal{O}\left({\Delta t}^{\beta}\right) \text{ for $\Delta t \ll 1$}. \label{GlobalTruncationErrorNumericalSolutionFinalForm_ApproximationForDeltaTMuchLessThanOne_GivenSpatialLocation} \end{equation} Replacing $\left(\hat{\tau}_G\right)_j$ by its norm over all spatial locations $x_j$, \eqref{GlobalTruncationErrorNumericalSolutionFinalForm_GivenSpatialLocation} and \eqref{GlobalTruncationErrorNumericalSolutionFinalForm_ApproximationForDeltaTMuchLessThanOne_GivenSpatialLocation} become \eqref{GlobalTruncationErrorNumericalSolutionFinalForm} and \eqref{GlobalTruncationErrorNumericalSolutionFinalForm_ApproximationForDeltaTMuchLessThanOne} respectively. \end{proof} If we employ a stable numerical scheme and the global solution error is of the same order of accuracy as the global truncation error, we arrive at the following Corollaries. \end{thm} \begin{cor} \label{Corollary_1} The order of convergence of a hyperbolic PDE in the asymptotic regime at constant ratio of time step to cell width is specified by the minimum of the orders of the spatial and temporal discretizations.
\end{cor} \begin{cor} \label{Corollary_2} To achieve the maximum possible order of convergence in the asymptotic regime at constant ratio of time step to cell width, the time-stepping method used to advance a hyperbolic PDE should at least have the same order of accuracy as the spatial discretization. \end{cor} \begin{cor} \label{Corollary_3} The discretization of a hyperbolic PDE under only spatial or only temporal refinement in the asymptotic regime is not guaranteed to converge. \end{cor} We can compare the behavior of the local truncation error of the generic hyperbolic PDE with that of the generic ODE. We know that an order $\beta$ time-stepping method is constructed so that if the solution of an ODE were exact at time $t^n$, then the error at time step $t^{n+1} = t^n + \Delta t$ will be $\hat{\tau}^{n+1} = \mathcal{O}\left(\Delta t^{\beta + 1}\right)$. If we express the exact solution $u^{n+1}$ and the numerical solution $\hat{u}^{n+1}$ as polynomials in $\Delta t$, the coefficients of $\Delta t^k$ for $k = 0, 1, 2, \ldots, \beta$ in $\hat{u}^{n+1}$ match those of $u^{n+1}$. When the ODE is expressed in the generic form~\eqref{ODE1D}, these coefficients of $\Delta t^k$ for $k = 1, 2, \ldots$ consist of partial and mixed derivatives of the right-hand side term $\mathcal{F}(u,t)$, to be referred to as the tendency term from here onward, with respect to $u$ and $t$. To pinpoint the source of these derivatives, we recapitulate that $\hat{u}^{n+1}$ consists of tendency terms, either at intermediate time levels between (and including) $t^n$ and $t^{n+1}$ for a predictor-corrector time-stepping method, or at current and previous time levels $t^{n-k}$ for $k=0,1,2,\ldots$ for a multistep time-stepping method. When the tendency terms are Taylor expanded about $(u^n,t^n)$, the result is the above-mentioned partial and mixed $u$- and $t$-derivatives of $\mathcal{F}(u,t)$ in the polynomial expression for $\hat{u}^{n+1}$. After expressing the mixed and $t$-derivatives of $\mathcal{F}(u,t)$ as functions of known quantities at time level $t^n$, we observe that the coefficients of $\Delta t^k$ for $k=0,1,\ldots,\beta$ in this polynomial are equal to those of $u^{n+1}$, which in turn results in $\hat{\tau}^{n+1} = u^{n+1} - \hat{u}^{n+1} = \mathcal{O}\left(\Delta t^{\beta+1}\right)$. A fundamental reason for this result is that for given values of $u$ and $t$, the tendency term $\mathcal{F}(u,t)$ is exact for an ODE. The derivation of the local truncation error of a generic hyperbolic PDE involves the same operations as the generic ODE, but with one fundamental difference---the above-mentioned tendency terms are replaced with their spatially discretized versions. Since the time derivative of the dependent variable of a hyperbolic PDE is a function of the dependent and independent variables, and also the spatial derivatives of the dependent variable, we need to perform a discretization in space while computing the tendency term at any instant of time. Whenever we perform this operation, we introduce an $\mathcal{O}\left(\Delta x^{\alpha}\right)$ term, where $\alpha$ is the order of the spatial discretization. 
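The following minimal SymPy sketch (a hypothetical illustration, not the code used for this paper's results) makes the ODE part of this comparison concrete for the Forward Euler method applied to the generic ODE $u_t = \mathcal{F}(u,t)$: once the time derivatives of $u$ are rewritten via the ODE, the coefficient of $\Delta t$ in $u^{n+1} - \hat{u}^{n+1}$ vanishes and the leading term of the local truncation error is $\frac{\Delta t^2}{2} \left(\mathcal{F}_t + \mathcal{F} \mathcal{F}_u\right)$:

\begin{verbatim}
import sympy as sp

t, dt = sp.symbols('t dt')
u = sp.Function('u')
F = sp.Function('F')

rhs = F(u(t), t)                             # u_t = F(u, t)
u2 = rhs.diff(t).subs(u(t).diff(t), rhs)     # u_tt = F_t + F F_u via the chain rule

exact = u(t) + dt*u(t).diff(t) + dt**2/2*u2  # exact Taylor step, truncated
exact = exact.subs(u(t).diff(t), rhs)
numerical = u(t) + dt*F(u(t), t)             # Forward Euler

tau = sp.expand(exact - numerical)
print(tau.coeff(dt, 1))   # 0: coefficients match through order beta = 1
print(tau.coeff(dt, 2))   # F^(2)/2 = (F_t + F F_u)/2, in SymPy's Subs notation
\end{verbatim}

In the PDE setting, the same computation goes through with the tendency replaced by its spatially discretized counterpart, which is precisely what injects the $\mathcal{O}\left(\Delta x^{\alpha}\right)$ terms discussed above.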
Denoting $v=u_x$, $w_1=u_{xx}$, $w_2=u_{xxx}$, $\ldots$, the Taylor expansion of the spatially discretized tendency terms centered at $u_j^n$, $v_j^n$, $\left(w_1\right)_j^n$, $\left(w_2\right)_j^n$, $\ldots$, $x_j$ and $t^n$, contains \begin{enumerate}[label=(\alph*),noitemsep] \item the dependent variables $u$, $v$, $w_1$, $w_2$, $\ldots$ defined at $x_j$ and $t^n$; \item the partial and mixed derivatives of $\mathcal{F}(u,v,x,t)$ with respect to the dependent and independent variables $u$, $v$, $x$, and $t$ all defined at $x_j$ and $t^n$; \item the $\mathcal{O}\left(\Delta x^{\alpha}\right)$ terms. \end{enumerate} If the spatial discretization operator were exact, the $\mathcal{O}\left({\Delta x}^{\alpha}\right)$ terms would be absent. In other words, the discretization error would only consist of its temporal component, and the local truncation error would assume the form $\mathcal{O}\left({\Delta t}^{\beta+1}\right)$, identical to the result when applying an order $\beta$ time-stepping method to an ODE. However, since we cannot make this assumption for a general PDE, terms involving $\mathcal{O}\left({\Delta x}^{\alpha}\right)$ are expected to appear when we replace a tendency term with its spatially discretized version. This introduces an $\mathcal{O}\left({\Delta x}^{\alpha}\right)$ term in the coefficient of ${\Delta t}^k$ for $k = 1, 2, 3, \ldots$ in the final expression for $\hat{u}_j^{n+1}$ that is not present in the corresponding coefficient of ${\Delta t}^k$ in $u_j^{n+1}$. This $\mathcal{O}\left({\Delta x}^{\alpha}\right)$ term in the coefficient of ${\Delta t}^k$ for $k = 1, 2, 3, \ldots$ cannot be ignored in the final expression for $\hat{\tau}_j^{n+1}$ defined as the difference between $u_j^{n+1}$ and $\hat{u}_j^{n+1}$. \subsection{Convergence at Constant Ratio of Time Step to Cell Width} Here we assume that our numerical scheme is stable, and the global solution error is of the same order of accuracy as the global truncation error. Then, in the asymptotic regime, where the magnitude of the truncation error is dominated by the powers of $\Delta t$ and $\Delta x$ rather than their coefficients, the order of convergence of the global solution error norm is determined by the minimum of the orders of the $\mathcal{O}\left(\Delta x^{\alpha}\right)$ and the $\mathcal{O}\left(\Delta t^{\beta}\right)$ terms in~\eqref{GlobalTruncationErrorNumericalSolutionFinalForm}. If $\Delta t$ is proportional to $\Delta x$, meaning that the ratio of the time step to cell width is held fixed (or the Courant number is kept constant for a one-dimensional linear constant-coefficient advection problem), the order of the global solution error becomes \begin{equation} \hat{\tau}_G = \mathcal{O}\left({\Delta x}^{\alpha}\right) + \mathcal{O}\left({\Delta t}^{\beta}\right) = \mathcal{O}\left({\Delta x}^{\alpha}\right) + \mathcal{O}\left(\gamma^{\beta} {\Delta x}^{\beta}\right) = \mathcal{O}\left({\Delta x}^{\alpha}\right) + \mathcal{O}\left({\Delta x}^{\beta}\right) \approx \mathcal{O}\left({\Delta x}^{\text{min}({\alpha},{\beta})}\right), \end{equation} where $\gamma = \Delta t/\Delta x$. Therefore, the order of convergence cannot exceed the order of the spatial discretization~$\alpha$, and to achieve this order of convergence, we need to apply a time-stepping method of order~$\beta$ with $\beta \ge \alpha$. Within a specific family of time-stepping methods, the most computationally efficient choice is $\beta = \alpha$.
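This prediction is easy to check numerically. The sketch below (hypothetical code with an assumed setup, not our experiment suite) advances $u_t + u_x = 0$ with periodic boundary conditions using the first-order upwind scheme in space ($\alpha = 1$) and Forward Euler in time ($\beta = 1$) at a fixed Courant number of $0.5$; the observed order of the $L^2$ error norm approaches $\min(\alpha, \beta) = 1$:

\begin{verbatim}
import numpy as np

def l2_error(n_cells, courant=0.5, t_final=0.5):
    # Upwind + Forward Euler for u_t + u_x = 0 on [0, 1], periodic BCs.
    dx = 1.0/n_cells
    dt = courant*dx
    x = np.arange(n_cells)*dx
    u = np.sin(2*np.pi*x)
    n_steps = int(round(t_final/dt))
    for _ in range(n_steps):
        u = u - courant*(u - np.roll(u, 1))   # upwind difference, wave speed +1
    exact = np.sin(2*np.pi*(x - n_steps*dt))
    return dx, np.sqrt(dx*np.sum((u - exact)**2))

results = [l2_error(n) for n in (64, 128, 256, 512)]
for (dx1, e1), (dx2, e2) in zip(results, results[1:]):
    print(np.log(e1/e2)/np.log(dx1/dx2))      # observed order, approaching one
\end{verbatim}

Swapping in a higher-order time-stepping method leaves the observed order pinned at $\alpha = 1$, consistent with the entries of Table~\ref{Table_GobalSolutionErrorNorm} discussed next.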
Table~\ref{Table_GobalSolutionErrorNorm} lists the order of convergence of the error norm in the asymptotic regime at constant ratio of $\Delta t$ to $\Delta x$ for varying orders of spatial and temporal discretizations. \begin{table}[!tp] \centering \caption{Order of convergence of the error norm in the asymptotic regime at constant ratio of time step to cell width with spatial and temporal discretizations up to order four. FE denotes first-order Forward Euler, RK2, RK3, RK4 denote the second-, third-, fourth-order (predictor-corrector) Runge-Kutta methods, respectively, and AB2, AB3, AB4 denote the second-, third-, fourth-order (multistep) Adams-Bashforth methods, respectively.} \vspace{3mm} \begin{tabular}{cccc} \toprule {Order of} & {Time-Stepping} & {Order of} & {Order of Convergence of Error Norm in} \\ {Spatial} & {Method} & {Time-Stepping} & {Asymptotic Regime at Constant Ratio} \\ {Discretization} & {Employed} & {Method} & {of Time Step to Cell Width} \\ \midrule 1 & FE & 1 & 1 \\ 1 & RK2 or AB2 & 2 & 1 \\ 1 & RK3 or AB3 & 3 & 1 \\ 1 & RK4 or AB4 & 4 & 1 \\ 2 & FE & 1 & 1 \\ 2 & RK2 or AB2 & 2 & 2 \\ 2 & RK3 or AB3 & 3 & 2 \\ 2 & RK4 or AB4 & 4 & 2 \\ 3 & FE & 1 & 1 \\ 3 & RK2 or AB2 & 2 & 2 \\ 3 & RK3 or AB3 & 3 & 3 \\ 3 & RK4 or AB4 & 4 & 3 \\ 4 & FE & 1 & 1 \\ 4 & RK2 or AB2 & 2 & 2 \\ 4 & RK3 or AB3 & 3 & 3 \\ 4 & RK4 or AB4 & 4 & 4 \\ \bottomrule \\ \end{tabular} \label{Table_GobalSolutionErrorNorm} \end{table} \subsection{Refinement Only in Space or Only in Time} If only spatial or temporal refinement is performed with a stable numerical scheme, the leading order terms of the global solution error are the $\mathcal{O}\left(\Delta x^{\alpha}\right)$ and the $\mathcal{O}\left(\Delta t^{\beta}\right)$ terms. In this case, convergence cannot be guaranteed due to the $\mathcal{O}\left(\Delta t^{\beta}\right)$ term for refinement only in space, and due to the $\mathcal{O}\left(\Delta x^{\alpha}\right)$ term for refinement only in time. More specifically, under only spatial or temporal refinement, the global truncation error does not necessarily converge to zero. As a result, our numerical solution may not even be consistent, and convergence may be impossible. Under certain circumstances, the magnitude of the global solution error norm can even increase with only temporal refinement. The simplest example is the one-dimensional linear homogeneous constant-coefficient advection equation \begin{equation} u_t + a u_x = 0, \label{LinearAdvecton1D_SimplestExample} \end{equation} discretized in space with the first-order upwind finite difference scheme and advanced in time with the first-order Forward Euler method. The local truncation error for this problem at spatial location $x_j$ and time level $t^{n+1}$ is \begin{equation} \hat{\tau}_j^{n+1} = \left(-\frac{1}{2} |a| \Delta t \Delta x + \frac{1}{2} a^2 \Delta t^2\right) \left(u_{xx}\right)_j^n + \cdots, \end{equation} where $|a|$ is the magnitude of the constant wave speed $a$. 
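This expression can be reproduced symbolically. The following SymPy sketch (illustrative only, and restricted to $a > 0$ so that $|a| = a$) Taylor expands the upwind/Forward Euler update about $(x_j, t^n)$ and recovers the $\left(u_{xx}\right)_j^n$ coefficient $-\frac{1}{2} a \Delta t \Delta x + \frac{1}{2} a^2 \Delta t^2$:

\begin{verbatim}
import sympy as sp

x, t, dx, dt, a = sp.symbols('x t dx dt a', positive=True)
u = sp.Function('u')

# u_{j-1}^n Taylor expanded about (x, t), truncated at fourth order.
u_jm1 = sum((-dx)**k/sp.factorial(k)*u(x, t).diff(x, k) for k in range(4))
u_hat = u(x, t) - a*dt/dx*(u(x, t) - u_jm1)   # upwind + Forward Euler update

# Exact update: Taylor series in time, with u_t = -a u_x used to convert
# temporal derivatives into spatial ones (u_tt = a^2 u_xx, ...).
u_exact = (u(x, t) - dt*a*u(x, t).diff(x)
           + dt**2/2*a**2*u(x, t).diff(x, 2)
           - dt**3/6*a**3*u(x, t).diff(x, 3))

tau = sp.expand(u_exact - u_hat)
print(sp.collect(tau, u(x, t).diff(x, 2)))
# The u_xx coefficient is -a*dt*dx/2 + a**2*dt**2/2, as quoted above.
\end{verbatim}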
The leading order term of the local truncation error is diffusive in nature, and can be expressed as \begin{equation} \left[\hat{\tau}_j^{n+1}\right]_{\text{leading order}} = \left(-\frac{1}{2} |a| \Delta t \Delta x + \frac{1}{2} a^2 \Delta t^2\right) \left(u_{xx}\right)_j^n = -\frac{1}{2} |a| \Delta x \Delta t \left(1 - \frac{|a| \Delta t}{\Delta x}\right) \left(u_{xx}\right)_j^n \equiv -\frac{1}{2} |a| \Delta x \Delta t \left(1 - C\right) \left(u_{xx}\right)_j^n, \end{equation} where $C = |a| \Delta t/\Delta x$ is the Courant number, which is positive and must be less than one to ensure numerical stability. The global truncation error is one order of $\Delta t$ less, and can be approximated as \begin{equation} \left[\left(\hat{\tau}_G\right)_j\right]_{\text{leading order}} = -\frac{1}{2} |a| \Delta x \left(1 - \frac{|a| \Delta t}{\Delta x}\right) \left(u_{xx}\right)_j^n = -\frac{1}{2} |a| \Delta x \left(1 - C\right) \left(u_{xx}\right)_j^n. \end{equation} Maintaining $C < 1$, if $\Delta x$ is held constant and $\Delta t$ is refined, then $(1-C)$ increases towards $1$, and the magnitude of the global truncation error increases. Moreover, the error will be diffusive in nature. Figure~\ref{LinearConstantCoefficientAdvection1D_ReductionInTimeStep} shows the numerical solution of the linear advection equation~\eqref{LinearAdvecton1D_SimplestExample} on the domain $[0,1]$ with wave speed $a=1$, periodic boundary conditions, initial condition $u(x,0) = u_0(x) = \sin (2\pi x)$, and spatial resolution $\Delta x = 1/2^8$. The exact solution is $u(x,t) = u_0(x-t) = \sin(2\pi(x-t))$. At $t=1.0$, we see that the error is larger with a time step $\Delta t = 10^{-4}$ when compared to the error with a 20 times larger time step. This is because the numerical diffusion, contributing to the error, is larger for the numerical solution using a smaller value of $\Delta t$, as evidenced by the greater reduction in the solution amplitude. To consider the effect of refinement only in space, we write the leading order term of the global truncation error as \begin{equation} \left[\left(\hat{\tau}_G\right)_j\right]_{\text{leading order}} = -\frac{1}{2} a^2 \Delta t \left(\frac{\Delta x}{|a| \Delta t} - 1\right) \left(u_{xx}\right)_j^n \equiv -\frac{1}{2} a^2 \Delta t \left(\frac{1}{C} - 1\right) \left(u_{xx}\right)_j^n. \end{equation} If $\Delta x$ is refined and $\Delta t$ held fixed so that $C < 1$ at all spatial resolutions, then $\left(\frac{1}{C} - 1\right)$ decreases towards zero, and the magnitude of the global truncation error, approximating the global solution error, decreases. The unexpected behavior of the error norm with only temporal refinement can be attributed to the interaction of the leading order terms in the global truncation error. Since these terms have opposite signs, the magnitude of their difference increases with the reduction in $\Delta t$ at constant $\Delta x$ in the regime of $C \in (0,1)$. If, however, refinement were performed in both space and time by keeping $\Delta t$ proportional to $\Delta x$, the global truncation error would be dominated by the term (or the sum of the terms) with the lowest power of $\Delta x$ (or $\Delta t$) and only its magnitude, and not its sign, will play the pivotal role in the error. \begin{figure}[!htp] \centering \includegraphics[scale=.35]{fig1.pdf} \hspace{0.5cm} \includegraphics[scale=.35]{fig2.pdf} \caption{The numerical solution of $u_t + u_x = 0$ with periodic boundary conditions at $t=1.0$ with $\Delta x = 1/2^8$ and two different time step sizes.
As predicted by the theory, larger errors are incurred when a smaller time step size is used.} \label{LinearConstantCoefficientAdvection1D_ReductionInTimeStep} \end{figure} \subsection{Verification of the Spatial or Temporal Order of Accuracy} We have established that asymptotic convergence may not be achieved with only spatial or temporal refinement. However, we can apply a technique to capture the order of the spatial and temporal discretizations. By considering only the leading order terms, the global solution error at a spatial location $x_j$ and a time horizon can be approximated as \begin{equation} \left(\hat{\tau}_G\right)_j \approx \left[\left(\hat{\tau}_G\right)_j\right]_{\text{leading order}} = \mathcal{O}\left({\Delta x}^{\alpha}\right) + \mathcal{O}\left({\Delta t}^{\beta}\right) = \zeta \Delta x^{\alpha} + {\zeta}_{\beta+1} \Delta t^{\beta}, \label{Global_Solution_Error_with_Leading_Order_Terms} \end{equation} where the coefficients $\zeta$ and ${\zeta}_{\beta+1}$ are independent of $\Delta x$ and $\Delta t$, and ${\zeta}_{\beta+1} = c_{\beta+1}/(\beta+1)!$ from~\eqref{LocalTruncationErrorNumericalSolutionFinalForm_1}. If ${\zeta} \Delta x^{\alpha} \gg {\zeta}_{\beta+1} \Delta t^{\beta}$, then the spatial order of convergence can be calculated by refining $\Delta x$ while keeping $\Delta t$ fixed. However, a general setting requires an alternative method to find the spatial order of convergence. This can be done by considering two uniform meshes with cell widths $\Delta x_i$ and $\Delta x_{i+1}$ with $\Delta x_{i+1} < \Delta x_i$. Then we can write \begin{subequations} \begin{align} \left(\hat{\tau}_{G^x_i}\right)_j &\approx {\zeta} \Delta x_i^{\alpha} + {\zeta}_{\beta+1} \Delta t^{\beta}, \\ \left(\hat{\tau}_{G^x_{i+1}}\right)_j &\approx {\zeta} \Delta x_{i+1}^{\alpha} + {\zeta}_{\beta+1} \Delta t^{\beta}. \end{align} \end{subequations} Assuming $\left(\hat{\tau}_{G^x_{i+1}}\right)_j < \left(\hat{\tau}_{G^x_i}\right)_j$, we define \begin{equation} \Delta \left\{\left(\hat{\tau}_{G^x_{i,i+1}}\right)_j\right\} = \left(\hat{\tau}_{G^x_i}\right)_j - \left(\hat{\tau}_{G^x_{i+1}}\right)_j = {\zeta} \left(\Delta x_i^{\alpha} - \Delta x_{i+1}^{\alpha}\right) = {\zeta} \Delta x_{i+1}^{\alpha} \left\{\left(\frac{\Delta x_i}{\Delta x_{i+1}}\right)^{\alpha} - 1\right\} > 0. \end{equation} Defining $p = \Delta x_{i+1}/\Delta x_i < 1$ to be the ratio between the two mesh sizes, we can write \begin{equation} \Delta \left\{\left(\hat{\tau}_{G^x_{i,i+1}}\right)_j\right\} = {\zeta} \Delta x_{i+1}^{\alpha} \left(p^{-\alpha} - 1\right). \end{equation} Taking the logarithm of both sides, \begin{equation} \log \left[\Delta \left\{\left(\hat{\tau}_{G^x_{i,i+1}}\right)_j\right\}\right] = \theta + \alpha \log \left(\Delta x_{i+1}\right), \end{equation} where $\theta = \log \left\{{\zeta} \left(p^{-\alpha} - 1\right)\right\}$ is constant. So, we can compute the spatial order of accuracy by first choosing a sequence of grids with ${\Delta x}_{i+1}/{\Delta x}_i = p$ for $i=1,2,\ldots,M-1$, all satisfying the relevant CFL condition. Then, after interpolating the error to the coarsest mesh with spacing $\Delta x_1$, we can find the line of best fit of the norm of the difference between successive global solution errors $\Delta \left\{\left(\hat{\tau}_{G^x_{i,i+1}}\right)_{\text{norm}}\right\}$ vs.~the cell width~$\Delta x_{i+1}$ on a log-log scale for $i=1,2,\ldots,M-1$, and determine its slope, which is the spatial order of accuracy.
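A minimal sketch of this procedure is shown below (hypothetical code; the solver mirrors the upwind/Forward Euler example from earlier in this section). The errors are computed under refinement only in space at a fixed, stable $\Delta t$, interpolated to the coarsest mesh, and the slope of the successive differences recovers $\alpha = 1$ even though the $\mathcal{O}\left({\Delta t}^{\beta}\right)$ contribution never vanishes:

\begin{verbatim}
import numpy as np

def error_field(n_cells, dt, t_final=0.25):
    # Upwind + Forward Euler for u_t + u_x = 0 on [0, 1], periodic BCs.
    dx = 1.0/n_cells
    x = np.arange(n_cells)*dx
    u = np.sin(2*np.pi*x)
    n_steps = int(round(t_final/dt))
    for _ in range(n_steps):
        u = u - dt/dx*(u - np.roll(u, 1))
    return x, u - np.sin(2*np.pi*(x - n_steps*dt))

dt = 1.0/8192                 # fixed; Courant number stays below one on all meshes
meshes = (64, 128, 256, 512)  # refinement ratio p = 1/2
coarse_x = np.arange(meshes[0])/meshes[0]
errors = []
for n in meshes:
    x, e = error_field(n, dt)
    errors.append(np.interp(coarse_x, x, e, period=1.0))
diffs = [np.max(np.abs(e1 - e2)) for e1, e2 in zip(errors, errors[1:])]
dxs = [1.0/n for n in meshes[1:]]
print(np.polyfit(np.log(dxs), np.log(diffs), 1)[0])  # slope, approaching alpha = 1
\end{verbatim}

The fixed-$\Delta t$ contribution cancels in the successive differences, which is what restores a clean slope.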
Proceeding in a similar fashion, by refining only the time step by a constant ratio at a fixed spatial resolution, and plotting the norm of the difference between successive global solution errors, we can obtain the temporal order of accuracy. Since the spatial resolutions remain the same, we skip the interpolation step for refinement only in time. Now, the exact solution is independent of the spatial resolution and the time step. So, for refinement only in time, if $u_j^{n+1}$ represents the exact solution at time level $t^{n+1}$ and $\left(\hat{u}_{i}\right)_j^{n+1}$ represents its numerical counterpart obtained with time step~$\Delta t_i$, we can write \begin{equation} \left(\Delta \hat{\tau}_{G^t_{i,i+1}}\right)_j^{n+1} \equiv \left(\hat{\tau}_{G^t_i}\right)_j^{n+1} - \left(\hat{\tau}_{G^t_{i+1}}\right)_j^{n+1} = \left\{u_j^{n+1} - \left(\hat{u}_i\right)_j^{n+1}\right\} - \left\{u_j^{n+1} - \left(\hat{u}_{i+1}\right)_j^{n+1}\right\} = \left(\hat{u}_{i+1}\right)_j^{n+1} - \left(\hat{u}_i\right)_j^{n+1} \equiv \left(\Delta \hat{u}_{i,i+1}\right)_j^{n+1}. \end{equation} So, if we take the difference $\left(\Delta \hat{u}_{i,i+1}\right)_j^{n+1}$ between the numerical solutions $\left(\hat{u}_i\right)_j^{n+1}$ and $\left(\hat{u}_{i+1}\right)_j^{n+1}$ obtained with time steps $\Delta t_i$ and $\Delta t_{i+1}$ for $i = 1,2,\ldots,M-1$ at every mesh point $x_j$ and time level $t^{n+1}$, compute its norm $\left(\Delta \hat{u}_{G^t_{i,i+1}}\right)_{\text{norm}}^{n+1}$ and plot it against $\Delta t_{i+1}$, we will attain convergence with order equal to that of the time-stepping method. If we are performing only a spatial refinement, we first need to interpolate the numerical solution to the coarsest mesh and then follow the same steps to obtain convergence with the same order as that of the spatial discretization. From practical considerations, this approach has the clear advantage of not having to deal with an exact or manufactured solution. It is worth keeping in mind that the sole purpose of these atypical convergence exercises is to verify the correct implementation of the spatial or temporal discretizations. The solution error norm under only spatial or temporal refinement is not expected to converge in the asymptotic regime. It is only when the time step and the cell width are refined simultaneously while keeping their ratio constant that we can expect convergence. As an alternative, one can perform refinement in both $\Delta x$ and $\Delta t$, while maintaining $\Delta x^{\alpha} \propto \Delta t^{\beta}$, and plot the error norm (a) against $\Delta x$ to capture the spatial order of accuracy, and (b) against $\Delta t$ to capture the temporal order of accuracy. In this paper, we have not performed convergence studies with this refinement strategy, but we want to mention it for the sake of completeness. The first limitation of this refinement strategy is that for a high-order spatial and a low-order temporal discretization, keeping $\Delta t$ proportional to $\Delta x^{\alpha/\beta}$ can refine the time step to such an extent that the machine precision error dominates the discretization error. The second limitation is that one needs to know the order of the spatial and temporal discretizations, i.e.\ the values of $\alpha$ and $\beta$, a priori. This knowledge is not necessary for plotting the differences in the numerical solution (or the error) for successive resolutions with refinement only in space (or only in time) to capture the spatial (or temporal) order of accuracy.
As a result, we can even apply this technique to obtain the order of accuracy of complex spatial or temporal discretizations, even when it is difficult to extract the orders of accuracy analytically. \subsection{Reduction in the Observed Order of Convergence} Under certain circumstances, order reduction in the global solution is observed. Tables~\ref{Table_NatureOfConvergence_SpaceAndTime} and~\ref{Table_NatureOfConvergence_OnlySpace_OnlyTime} discuss the nature of convergence of the global solution error approximated as~\eqref{Global_Solution_Error_with_Leading_Order_Terms} before and after reaching the asymptotic regime. We consider the behavior when refining in space and time, and when refining only in space or time. As we reach the asymptotic regime, we can observe reduction in the order of convergence if \begin{enumerate}[label=(\alph*),noitemsep] \item $\alpha > \beta$ and ${\zeta} \Delta x^{\alpha} \gg {\zeta}_{\beta+1} \Delta t^{\beta}$, or $\alpha < \beta$ and ${\zeta} \Delta x^{\alpha} \ll {\zeta}_{\beta+1} \Delta t^{\beta}$, for refinement in both space and time while keeping $\Delta t$ proportional to $\Delta x$; \item ${\zeta} \Delta x^{\alpha} \gg {\zeta}_{\beta+1} \Delta t^{\beta}$ for refinement only in space; \item ${\zeta} \Delta x^{\alpha} \ll {\zeta}_{\beta+1} \Delta t^{\beta}$ for refinement only in time. \end{enumerate} In Section \ref{sec:convergence_plots}, we will encounter order reduction with the convergence plots of a linear variable-coefficient advection equation and a non-linear advection equation, with refinement in both space and time, when they are (a) discretized in space with a non-monotone finite volume method, and advanced in time with the explicit midpoint and the second-order Adams-Bashforth methods, and (b) discretized in space with a monotone finite volume method, and advanced in time with Forward Euler, the explicit midpoint and the second-order Adams-Bashforth methods. \begin{table}[!t] \centering \caption{Convergence of the global solution error approximated as \eqref{Global_Solution_Error_with_Leading_Order_Terms} before and after reaching the asymptotic regime, while maintaining $\Delta t/\Delta x = r$.} \vspace{3mm} \setlength{\tabcolsep}{0.25em} \begin{tabular}{ccc} \toprule & $\alpha > \beta$ & $\alpha < \beta$ \\ {\colorbox{shade_0} {\parbox{2.8cm}{${\zeta} \Delta x^{\alpha} \gg {\zeta}_{\beta+1} \Delta t^{\beta} \\ \text{i.e. } {\zeta} \Delta x^{\alpha-\beta} \gg r {\zeta}_{\beta+1}$}}} & {\colorbox{shade_1} {\parbox{5.9cm}{Before asymptotic regime: $\tau_G \approx {\zeta} \Delta x^{\alpha}$ \\ Convergence: Attained with slope $\alpha$ \\ After asymptotic regime: $\tau_G \approx r {\zeta}_{\beta+1} \Delta x^{\beta}$ \\ Convergence: Attained with slope $\beta$ \\ Order reduction: Observed}}} & {\colorbox{shade_1} {\parbox{5.9cm}{Before asymptotic regime: $\tau_G \approx {\zeta} \Delta x^{\alpha}$ \\ Convergence: Attained with slope $\alpha$ \\ After asymptotic regime: $\tau_G \approx {\zeta} \Delta x^{\alpha}$ \\ Convergence: Attained with slope $\alpha$ \\ Order reduction: Not observed}}} \vspace{1mm} \\ {\colorbox{shade_0} {\parbox{2.8cm}{${\zeta} \Delta x^{\alpha} \ll {\zeta}_{\beta+1} \Delta t^{\beta} \\ \text{i.e. 
} {\zeta} \Delta x^{\alpha-\beta} \ll r {\zeta}_{\beta+1}$}}} & {\colorbox{shade_1} {\parbox{5.9cm}{Before asymptotic regime: $\tau_G \approx r {\zeta}_{\beta+1} \Delta x^{\beta}$ \\ Convergence: Attained with slope $\beta$ \\ After asymptotic regime: $\tau_G \approx r {\zeta}_{\beta+1} \Delta x^{\beta}$ \\ Convergence: Attained with slope $\beta$ \\ Order reduction: Not observed}}} & {\colorbox{shade_1} {\parbox{5.9cm}{Before asymptotic regime: $\tau_G \approx r {\zeta}_{\beta+1} \Delta x^{\beta}$ \\ Convergence: Attained with slope $\beta$ \\ After asymptotic regime: $\tau_G \approx {\zeta} \Delta x^{\alpha}$ \\ Convergence: Attained with slope $\alpha$ \\ Order reduction: Observed}}} \vspace{1mm} \\ \bottomrule \\ \end{tabular} \label{Table_NatureOfConvergence_SpaceAndTime} \end{table} \begin{table}[!t] \centering \caption{Convergence of the global solution error approximated as \eqref{Global_Solution_Error_with_Leading_Order_Terms} before and after reaching the asymptotic regime with refinement only in space or only in time.} \vspace{3mm} \setlength{\tabcolsep}{0.25em} \begin{tabular}{ccc} \toprule & {Refinement only in space} & {Refinement only in time} \\ {\colorbox{shade_0} {\parbox{2.3cm}{\centering ${\zeta} \Delta x^{\alpha} \gg {\zeta}_{\beta+1} \Delta t^{\beta}$}}} & {\colorbox{shade_1} {\parbox{5.65cm}{Before asymptotic regime: $\tau_G \approx {\zeta} \Delta x^{\alpha}$ \\ Convergence: Attained with slope $\alpha$ \\ After asymptotic regime: $\tau_G \approx {\zeta}_{\beta+1} \Delta t^{\beta}$ \\ Convergence: Not attained \\ Order reduction: Observed}}} & {\colorbox{shade_1} {\parbox{5.65cm}{Before asymptotic regime: $\tau_G \approx {\zeta} \Delta x^{\alpha}$ \\ Convergence: Not attained \\ After asymptotic regime: $\tau_G \approx {\zeta} \Delta x^{\alpha}$ \\ Convergence: Not attained \\ Order reduction: Not applicable}}} \vspace{1mm} \\ {\colorbox{shade_0} {\parbox{2.3cm}{\centering ${\zeta} \Delta x^{\alpha} \ll {\zeta}_{\beta+1} \Delta t^{\beta}$}}} & {\colorbox{shade_1} {\parbox{5.65cm}{Before asymptotic regime: $\tau_G \approx {\zeta}_{\beta+1} \Delta t^{\beta}$ \\ Convergence: Not attained \\ After asymptotic regime: $\tau_G \approx {\zeta}_{\beta+1} \Delta t^{\beta}$ \\ Convergence: Not attained \\ Order reduction: Not applicable}}} & {\colorbox{shade_1} {\parbox{5.65cm}{Before asymptotic regime: $\tau_G \approx {\zeta}_{\beta+1} \Delta t^{\beta}$ \\ Convergence: Attained with slope $\beta$ \\ After asymptotic regime: $\tau_G \approx {\zeta} \Delta x^{\alpha}$ \\ Convergence: Not attained \\ Order reduction: Observed}}} \vspace{1mm} \\ \bottomrule \\ \end{tabular} \label{Table_NatureOfConvergence_OnlySpace_OnlyTime} \end{table} \subsection{Local Truncation Error of a Linear Inhomogeneous Variable-Coefficient Advection Equation} \label{LTE_Linear_Inhomogeneous_Variable_Coefficient_Advection_Equation} We consider the linear variable-coefficient one-dimensional inhomogeneous advection equation \begin{equation} u_t + p(x) u + (q(x) u)_x = f(x,t), \label{LinearAdvection1D_1} \end{equation} which can be expressed as \begin{equation} u_t + F_x = s \equiv -p(x) u + f(x,t), \label{LinearAdvection1D} \end{equation} where $F = q(x) u$ is the flux with $q(x) > 0$, and the source term $s(u,x,t) = -p(x) u + f(x,t)$ consists of two parts: a linear variable-coefficient function of the dependent variable $-p(x) u$ and a function of the independent variables $f(x,t)$. 
The term $-p(x) u$ is motivated by the Coriolis acceleration $f \hat{k} \times \vec{u}$ appearing in the horizontal momentum equations of geophysical flows. Here $f$ denotes the Coriolis parameter, which may be constant on the idealized f-plane or vary linearly with latitude on the beta-plane, an example of the variable-coefficient case in equation \eqref{LinearAdvection1D}. Leveraging the computational power of SymPy, a symbolic algebra package for Python, we calculate the first few relevant terms containing ${\Delta t}^l {\Delta x}^k$ for $l = 1,2,\ldots$ and $k=0,1,\ldots$ of the local truncation error of~\eqref{LinearAdvection1D} for various spatial and temporal discretizations. Tables~B.7--B.11b list these terms for the first-order upwind finite difference spatial discretization, and the five explicit time-stepping methods of List~\ref{myListOfExplicitTimeSteppingMethodsForAnalysis}. The supplementary text file `LocalTruncationError\_Output.rtf' contains these results for second- and third-order upwind finite difference spatial discretizations. Determining symbolic representations of the local truncation error of~\eqref{LinearAdvection1D} with SymPy consists of a few steps. We start by using the spatial and temporal discretizations to find expressions of $\hat{u}_j^{n+1}$ as functions of quantities defined at spatial locations adjacent to and including $x_j$ and temporal locations adjacent to and including~$t^n$. Next, Taylor expansions are used to expand every term about $(x_j,t^n)$. The third step requires expressing the temporal and mixed derivatives of $u$ as functions of quantities at the current time level $t^n$, which are assumed to be known a priori. For the advection equation~\eqref{LinearAdvection1D}, these derivatives are in Tables~B.3--B.6. In the final step, we compute the difference between the exact solution $u_j^{n+1}$ and its numerical counterpart~$\hat{u}_j^{n+1}$ to arrive at the final form of the local truncation error $\hat{\tau}_j^{n+1}$. For all spatial and temporal discretizations we consider, $\hat{\tau}_j^{n+1}$ can be expressed as \begin{equation} \hat{\tau}_j^{n+1} = \sum \limits_{k=1}^{\infty} \frac{\Delta t^k}{k!} \left(c_k + \mathcal{O} \left(\Delta x^{\alpha}\right)\right)_j^n, \label{LocalTruncationErrorNumericalSolutionFinalForm_Compact} \end{equation} where $\alpha$ and $\beta$ represent the orders of the spatial and temporal discretizations, $c_k = 0$ for $k=1,2,\ldots,\beta$, and this expression is a compact form of~\eqref{LocalTruncationErrorNumericalSolutionFinalForm}. Now we consider the special situation when $p(x)$ is constant, $p(x) = p_0$, and $q(x)$ is linear, $q(x) = q_0 + q_1 x$, so that $q_x(x) = q_1$, and $u(x,t)$ and $f(x,t)$ are functions of only $t$. Then, \begin{enumerate}[label=(\alph*),noitemsep] \item the linear advection equation~\eqref{LinearAdvection1D} reduces to the linear ODE~\eqref{ODE1DParticularChoice}; \item the coefficients of $\Delta x^k$ for $k \ge \alpha$ in the $\mathcal{O}\left(\Delta x^{\alpha}\right)$ terms reduce to zero; \item the local truncation error of~\eqref{LinearAdvection1D}, assuming form~\eqref{LocalTruncationErrorNumericalSolutionFinalForm_Compact}, reduces to that of~\eqref{ODE1DParticularChoice}, assuming form~\eqref{ODE1DTruncationErrorNumericalSolution_2} given by \begin{equation} \hat{\tau}^{n+1} = \sum \limits_{k=\beta+1}^{\infty} \frac{c_k^n}{k!} \Delta t^k = \frac{c_{\beta+1}}{(\beta+1)!} \Delta t^{\beta+1} + \mathcal{O} \left(\Delta t^{\beta+2}\right).
\end{equation} \end{enumerate} Tables B.12--B.16 contain the coefficients of $\Delta t^l \Delta x^k$ for $l=1,2,\ldots,\beta+1$ and $k = 0,1,2,3$ in the local truncation error of the linear advection equation~\eqref{LinearAdvection1D} discretized in space with the first-order upwind finite difference scheme ($\alpha=1$), and advanced in time with the five explicit time-stepping methods of List~\ref{myListOfExplicitTimeSteppingMethodsForAnalysis} i.e.~the first-order Forward Euler method ($\beta=1$), the second-order explicit midpoint method ($\beta=2$), the second-order Adams-Bashforth method ($\beta=2$), Williamson's low-storage third-order Runge-Kutta method~\citep{williamson1980low} ($\beta=3$), and the third-order Adams-Bashforth method ($\beta=3$). Since $c_{\beta+1}/(\beta+1)!$ is the coefficient of $\Delta t^{\beta+1} \Delta x^0$ in the local truncation error, its explicit expression is present in the row with $l=\beta+1$ and $k=0$. With the assumption \begin{equation} p_x(x) = q_{xx}(x) = u_x(x,t) = f_x(x,t) = 0, \end{equation} the linear advection equation~\eqref{LinearAdvection1D} reduces to the linear ODE~\eqref{ODE1DParticularChoice}, and the above-mentioned coefficient of $\Delta t^{\beta+1} \Delta x^0$ reduces to that of $\Delta t^{\beta+1}$ in the local truncation error of~\eqref{ODE1DParticularChoice} advanced with the same time-stepping method and listed in Table~B.2. One can also verify that the local truncation error of the generic hyperbolic PDE~\eqref{AdvectionEquation1DFunctionalForm} of Section~\ref{sec:pdes} using any of the time-stepping methods of List~\ref{myListOfExplicitTimeSteppingMethodsForAnalysis} reduces to that of the generic ODE~\eqref{ODE1D} of Section~\ref{sec:odes} with approximations~\eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena_1} and~\eqref{myApproximationsForPDEsUsedToModelPhysicalPhenomena_3}, advanced with the same time-stepping method. Finally, expressing the particular ODE~\eqref{ODE1DParticularChoice} as \begin{equation} u_t = \mathcal{F}(u,t) \equiv -\left(p_0 + q_1\right) u + f(t), \label{ODE1DParticularChoice_ExpressedInGenericForm} \end{equation} and the particular PDE~\eqref{LinearAdvection1D} as \begin{equation} u_t = \mathcal{F} \left(u,u_x,x,t\right) \equiv -p(x) u - (q(x) u)_x + f(x,t), \label{LinearAdvection1D_ExpressedInGenericForm} \end{equation} the local truncation error of the generic ODE~\eqref{ODE1D} and the generic hyperbolic PDE~\eqref{AdvectionEquation1DFunctionalForm} advanced with any of the time-stepping methods of List~\ref{myListOfExplicitTimeSteppingMethodsForAnalysis} reduce to that of the particular ODE~\eqref{ODE1DParticularChoice} and the particular advection equation~\eqref{LinearAdvection1D} with the specific formulation of $\mathcal{F}$ given by~\eqref{ODE1DParticularChoice_ExpressedInGenericForm} and~\eqref{LinearAdvection1D_ExpressedInGenericForm}, respectively. \subsection{Local Truncation Error of a Non-Linear Inhomogeneous Advection Equation} We conclude our analysis by considering the non-linear advection equation \begin{equation} \left(\bar{u} + u\right)_t + \left(\bar{u} + u\right) \left(\bar{u} + u\right)_x + p_0 \left(\bar{u} + u\right) = \hat{f}(x,t) \label{NonLinearAdvection1D_1}, \end{equation} which can be expressed in conservative form as \begin{equation} u_t + \left(\bar{u} u + \frac{u^2}{2}\right)_x + p_0 u = f(x,t), \label{NonLinearAdvection1D} \end{equation} where $p_0$ is a constant and $f(x,t) = \hat{f}(x,t) - \bar{u} \bar{u}_x - p_0 \bar{u}$. 
Motivated by applications in fluid dynamics, the advected quantity has been decomposed into a constant mean component $\bar{u}$ and a perturbation $u$, which is a function of space and time. If $\bar{u}$, $p_0$ and $f(x,t)$ are set to zero,~\eqref{NonLinearAdvection1D} reduces to the inviscid Burgers' equation. The supplementary text file `LocalTruncationError\_Output.rtf' contains the leading order terms of the local truncation error of~\eqref{NonLinearAdvection1D} discretized in space with the first-, second-, and third-order upwind finite difference schemes and advanced in time with the five explicit time-stepping methods of List~\ref{myListOfExplicitTimeSteppingMethodsForAnalysis}. Similar to our reasoning in Section \ref{LTE_First_Order_Linear_ODE}, we cannot employ implicit time-stepping methods to advance~\eqref{LinearAdvection1D_1} and~\eqref{NonLinearAdvection1D_1}, since doing so would require knowledge of the functional forms of $p(x)$, $q(x)$, and $f(x,t)$ for~\eqref{LinearAdvection1D_1}, and $\bar{u}(x)$ and $f(x,t)$ for~\eqref{NonLinearAdvection1D_1}. \section{Numerical Results} \label{sec:numerical_results} In this section, we numerically verify our theoretical findings for the spatial and temporal order of convergence of hyperbolic PDEs. We perform convergence studies on the linear variable-coefficient inhomogeneous advection equation \begin{equation} u_t + x u_x + 2 u \equiv u_t + (x u)_x + u = s, \quad x \in [0,1], \: t>0, \label{LinearAdvection1D_NumericalExperiment} \end{equation} and the nonlinear inhomogeneous advection equation \begin{equation} (1 + u)_t + (1 + u) (1 + u)_x + (1 + u) \equiv u_t + \left(u + \frac{1}{2} u^2\right)_x + u + 1 = s, \quad x \in [0,1], \: t>0, \label{NonLinearAdvection1D_NumericalExperiment} \end{equation} with periodic boundary conditions. The linear advection equation~\eqref{LinearAdvection1D_NumericalExperiment} is a special case of~\eqref{LinearAdvection1D} with $p(x) = 1$ and $q(x) = x$, while the non-linear advection equation~\eqref{NonLinearAdvection1D_NumericalExperiment} is a special case of~\eqref{NonLinearAdvection1D} with $\bar{u}=p_0=1$. The exact solution is chosen to be \begin{equation} u_{\text{exact}}(x,t) = \hat{u} \sin (kx - \omega t) + 2 \hat{u} \cos (2kx - \omega t), \label{ExactSolution_NumericalExperiments} \end{equation} which is a superposition of two sinusoidal wave modes, with $\hat{u}$, $k$, and $\omega$ representing the amplitude, wavenumber, and angular frequency of the first wave mode. The second wave mode has twice the amplitude, half the wavelength, and half the phase speed of the first one, and leads in phase by 90 degrees. If $c$ denotes the phase speed of the first wave mode, we can write $\omega = ck = \frac{c}{2} (2k)$, so that the angular frequency remains the same for both wave modes. We specify $k=2\pi$, $c=1$, and $\hat{u} = 1$ for the linear advection equation and $\hat{u} = 0.01$ for the non-linear advection equation. By substituting $t=0$ in \eqref{ExactSolution_NumericalExperiments}, we obtain the initial condition \begin{equation} u_{\text{exact}}(x,0) = \hat{u} \sin (kx) + 2 \hat{u} \cos (2kx). \label{Initialondition_NumericalExperiments} \end{equation} The motivation behind the choice \eqref{ExactSolution_NumericalExperiments} for the exact solution is to eliminate the artificially high rates of convergence that sometimes occur with `nice' test problems, such as a single sinusoidal wave mode whose leading order terms in the local truncation error can be zero.
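The next step substitutes \eqref{ExactSolution_NumericalExperiments} into the left-hand sides to generate the source terms; a minimal SymPy sketch of this substitution for the linear case \eqref{LinearAdvection1D_NumericalExperiment}, under the parameter choices stated above, reads:

\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t')
uhat, k, omega = 1, 2*sp.pi, 2*sp.pi   # parameters for the linear experiment
u = uhat*sp.sin(k*x - omega*t) + 2*uhat*sp.cos(2*k*x - omega*t)

# Substitute the exact solution into u_t + (x u)_x + u to obtain s(x, t).
s = sp.simplify(u.diff(t) + (x*u).diff(x) + u)
print(s)
\end{verbatim}

The nonlinear source term follows in the same way from the left-hand side of \eqref{NonLinearAdvection1D_NumericalExperiment}.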
Substituting the exact solution~\eqref{ExactSolution_NumericalExperiments} into the left-hand sides of~\eqref{LinearAdvection1D_NumericalExperiment} and~\eqref{NonLinearAdvection1D_NumericalExperiment} in this manner, we obtain the corresponding source terms on the right-hand sides. We employ finite difference and finite volume methods for spatial discretization and the following set of predictor-corrector and multistep time-stepping methods, ranging from first- to fourth-order, to advance our numerical solution in time: \begin{mylist}\mbox{Time-stepping methods for numerical experiments:} \begin{enumerate}[noitemsep] \label{myListOfTimeSteppingMethodsForNumericalExperiments} \item[FE1:] first-order Forward Euler method \item[RK2:] explicit midpoint method, belonging to the second-order Runge-Kutta family \item[RK3:] low-storage third-order Runge-Kutta method of~\citet{williamson1980low} \item[RK4:] low-storage five-stage fourth-order Runge-Kutta method of~\citet{carpenter1994fourth} \item[AB2:] second-order Adams-Bashforth method \item[AB3:] third-order Adams-Bashforth method \item[AB4:] fourth-order Adams-Bashforth method \end{enumerate} \end{mylist} \subsection{Spatial Discretization} \label{SpatialDiscretization} We consider two spatial discretization methods: a standard first-order finite difference upwind scheme, and a piecewise parabolic reconstruction (PPR) finite volume scheme that is equivalent to the spatial discretization part of the piecewise parabolic method (PPM) of~\citet{colella1984piecewise}. The PPR involves fitting a parabolic profile within each cell. The three constants needed to uniquely define this parabola are determined by solving a linear system for the cell-averaged solution and the left and right edge estimates. Finalizing the values of these edge estimates consists of a few steps. Starting with the mean solution within each cell, PPR first interpolates the solution to the edges. This interpolation is fourth-order accurate on a uniform mesh. Then PPR applies the monotonized-central slope limiter and adjusts the edge estimates to flatten any local maximum or minimum within the cell. A close variant of the original PPR scheme appears in~\citet{engwirda2016weno}, where the application of the slope limiter and adjustments of the edge estimates are performed in the reverse order. We have used both versions of PPR, and obtained similar results. In this paper, however, we present our results with the original version. The application of the slope limiter and the edge adjustments guarantee that the parabolic profile within each cell is oscillation-free, monotonicity-preserving, and total variation diminishing. Even though these monotone slope-limiting strategies ensure numerical stability, they come at the cost of accuracy. More specifically, the combined effect of the slope limiter and the flattening of new local extrema manifests as spurious numerical dissipation and results in discontinuities at cell edges, thereby compromising both the spatial and temporal orders of approximation. The numerical flux at every edge is typically a function of these edge estimates among other parameters, and is inevitably different from the flux computed with the edge estimate from the first interpolation step, which would be fourth-order accurate on a uniform mesh. In the original PPM~\cite{colella1984piecewise}, the numerical flux is computed as the integral average of the flux passing through each edge from the current time step to the next one.
Using this time-centered approximation of the numerical fluxes, the time-centered tendencies are computed and used to determine the solution at the next time step. Instead of adopting this approach to advance our numerical solution in time, we combine PPR with the seven time-stepping methods of List~\ref{myListOfTimeSteppingMethodsForNumericalExperiments}. However, we still need to compute numerical fluxes at a previous time level for a multistep time-stepping method, or at a fraction of a time step for a predictor-corrector time-stepping method. This is where we employ the slightly dissipative local Lax-Friedrichs Riemann solver of~\citet{rusanov1961calculation}, which expresses the dot product of the numerical flux vector at every edge with the outward unit normal vector as \begin{equation} \mathbb{F}^*\left(u^{\text{int}},u^{\text{ext}};\hat{n}\right) = \vec{\mathbb{F}} \cdot \hat{n} = \frac{1}{2} \left\{\left(F^{\text{int}} + F^{\text{ext}}\right) \text{sign}\left(\hat{n}\right) - \left|\lambda\right|_{\max} \left(u^{\text{ext}} - u^{\text{int}}\right)\right\}. \end{equation} Here $F^{\text{int}}$ and $F^{\text{ext}}$ are the fluxes of $u^{\text{int}}$ and $u^{\text{ext}}$, the internal and external states at the edge of the cell where the solution tendency is being computed, and $\hat{n}$ is the unit normal vector directed from the internal state to the external one. For example, to determine the numerical flux at the right edge of the cell $[x_{j-\frac{1}{2}},x_{j+\frac{1}{2}}]$, we specify $u^{\text{int}} = u^R_j$, $u^{\text{ext}} = u^L_{j+1}$, and $\hat{n} = \hat{x}$, where the superscripts $L$ and $R$ represent the left and right edge estimates. The term $\left|\lambda\right|_{\max} = \max\left(\left|F^{\text{int}}_u\right|,\left|F^{\text{ext}}_u\right|\right)$ is the larger of the magnitudes of the two wave speeds, the first one computed as a function of the internal state $u^{\text{int}}$, and the second one as a function of the external state $u^{\text{ext}}$. We use this formulation of the numerical flux in our experiments modeling both linear and non-linear advection. We evaluate the gradient of the flux within the tendency of the cell-averaged solution $\bar{u}_j$, with $\hat{n}$ taken as the outward unit normal at each edge, as \begin{equation} \left[F_x\right]_j = \frac{1}{\Delta x}\left\{\left.\mathbb{F}^*\left(u^{\text{int}},u^{\text{ext}};\hat{n}\right)\right|_{j-\frac{1}{2}} + \left.\mathbb{F}^*\left(u^{\text{int}},u^{\text{ext}};\hat{n}\right)\right|_{j+\frac{1}{2}}\right\}. \end{equation} Equipped with the flux gradient and the source terms, we compute the solution tendency and advance the solution to the next time step. \subsection{Computing the Error Norm} In addition to the slope limiter, the monotonicity-preserving strategies, and the (dissipative) Riemann solver, the order of convergence of a hyperbolic PDE with refinement in both space and time also depends on whether or not the prognostic variable is chosen to be cell-integrated or cell-averaged for a finite volume method, and whether the numerical solution (or the error) is interpolated to the coarsest mesh for a finite difference method. For refinement only in time, we use a mesh with the same spatial resolution. When we refine only in space, we need to interpolate the numerical solution (or the error) to the coarsest mesh so that we can compute the difference between the numerical solution (or the error) for successive pairs of spatial resolutions.
However, when we perform a refinement in space and time simultaneously, it is not immediately clear if we need to perform the above-mentioned interpolation to the coarsest mesh. \subsubsection{Interpolation to the Coarsest Mesh for a Finite Difference Method} If $\alpha$ and $\beta$ denote the spatial and temporal orders of accuracy of a PDE, we know that the coefficients of $\Delta x^k$ for $k = \alpha, \alpha+1, \ldots$ in the coefficients of $\Delta t^l$ for $l = 1,2,\ldots$ within the local truncation error of a PDE are functions of the spatial gradients of the dependent variable, the coefficients of the PDE, and the source terms at the current time. We now consider the error of the numerical solution computed at a set of grid points, as in a finite difference method. If we compute the error norm over the entire spatial interval at a certain time horizon using the magnitudes of the error at every grid point, we may obtain a higher error norm for a fine mesh than a coarse one. The reason for this discrepancy is that the decrease in the magnitude of the local truncation error at a set of points on the fine mesh due to the reduction in the magnitude of $\Delta x$ may be offset by the increase in the magnitude of the above-mentioned spatial gradients at a subset of these points, which may not even exist on the coarse mesh. This is more pronounced in convergence studies for higher-dimensional problems involving unstructured meshes where a fine mesh is not necessarily embedded within a coarse one. With the global solution error being approximated by the global truncation error, which in turn is one order of $\Delta t$ less than the local truncation error, we may not even obtain numerical convergence by computing the error norm based on the magnitudes of the error obtained at the native set of points within each mesh. Even if we achieve convergence, the order may be less than the expected one. Therefore, for refinement in both space and time using a finite difference method, it is advisable to interpolate the error to the set of coarsest mesh points and then determine the error norm. \subsubsection{Cell-Integrated vs.~Cell-Averaged Quantity as Prognostic Variable for a Finite Volume Method} We consider the formulation of any finite volume method for solving the one-dimensional inhomogeneous advection equation, expressed in conservative form \begin{equation} u_t + F_x = s, \label{FiniteVolume1DConservativeForm} \end{equation} where $u$ is the scalar quantity being advected, and the flux $F$ can be a linear or non-linear function of $u$ with constant or spatially dependent coefficients. The source term $s$ can be a function of $x$, $t$ and $u$, but not of $u_x$. Integrating~\eqref{FiniteVolume1DConservativeForm} with respect to $x$ over the cell~$j$ spanning $\left[x_{j-\frac{1}{2}},x_{j+\frac{1}{2}}\right]$, we obtain \begin{equation} U_t = F_{j-\frac{1}{2}} - F_{j+\frac{1}{2}} + S, \label{FiniteVolume1DConservativeFormIntegratedOverCell} \end{equation} where \begin{equation} U = \int \limits_{x_{j-\frac{1}{2}}}^{x_{j+\frac{1}{2}}} u dx, \quad \text{and} \quad S = \int \limits_{x_{j-\frac{1}{2}}}^{x_{j+\frac{1}{2}}} s dx. \end{equation} Therefore, the spatial order of accuracy of the right-hand side of~\eqref{FiniteVolume1DConservativeFormIntegratedOverCell} depends on how the flux terms are constructed. In the PPR scheme, before the application of the slope limiter and the monotonicity-preserving strategies, this order of accuracy is 4.
If $m$ is the interpolant's order of accuracy, and we define the prognostic variable to be the cell-integrated solution $U$, then the spatial order of accuracy of the finite volume method is $m$. If, however, we define our prognostic variable to be the cell-averaged solution \begin{align} \bar{u} = \frac{U}{\Delta x} = \frac{1}{\Delta x} \int \limits_{x_{j-\frac{1}{2}}}^{x_{j+\frac{1}{2}}} u \, dx, \end{align} which is the standard practice with finite volume methods, our prognostic equation becomes \begin{equation} \bar{u}_t = \frac{F_{j-\frac{1}{2}} - F_{j+\frac{1}{2}}}{\Delta x} + \bar{s}, \quad \text{where} \quad \bar{s} = \frac{S}{\Delta x} = \frac{1}{\Delta x} \int \limits_{x_{j-\frac{1}{2}}}^{x_{j+\frac{1}{2}}} s \, dx, \label{FiniteVolume1DConservativeFormAveragedOverCell} \end{equation} and the spatial order of accuracy of the finite volume method drops to $m-1$. Based on Theorem~\ref{Theorem_1}, the orders of convergence of $U$ and $\bar{u}$ at constant ratio of time step to cell width will be $\min(m,n)$ and $\min(m-1,n)$, respectively, where $n$ is the order of the time-stepping method. \subsubsection{Interpolation to the Coarsest Mesh for a Finite Volume Method} Since a finite volume method advances the cell-integrated or the cell-averaged solution in time, it accounts for the entire variation of the solution within the cells. Therefore, it may not be necessary to interpolate the finite volume solution (or its error) to the coarsest mesh for simultaneous refinement in space and time, unlike a finite difference solution (or its error) defined only at a set of grid points. However, if the finer meshes are not embedded within the coarsest mesh, or if we are refining only in space, we need to perform this interpolation. \begin{figure}[!htp] \centering \includegraphics[scale=.305]{fig3.pdf} \caption{A schematic of the cell-integrated solution of a fine mesh consisting of five cells (between solid lines) being interpolated to the coarsest mesh consisting of three cells (shown by fill colors), for a finite volume method.} \label{Figure_PPR_InterpolationToCoarsestMesh} \end{figure} By integrating the parabolic profiles of the solution within the cells of a fine mesh that are either partially or entirely contained within the cells of the coarsest mesh, we can interpolate the cell-integrated solution to the cells of the coarsest mesh. Let $U^{f \to c}$ denote the cell-integrated solution and $E^{f \to c}$ the cell-integrated error interpolated to the coarsest mesh. Figure \ref{Figure_PPR_InterpolationToCoarsestMesh} illustrates an example where five cells of the fine mesh are contained within three cells of the coarsest mesh. The cell-integrated solutions of the fine mesh interpolated to cells $AB$, $BC$, and $CD$ of the coarsest mesh are the blue, green, and red shaded areas, respectively. Now, the cell-averaged solution and error interpolated to the coarsest mesh are $\bar{u}^{f \to c} = U^{f \to c}/\Delta x_c$ and $\bar{e}^{f \to c} = E^{f \to c}/\Delta x_c$, respectively, where $\Delta x_c$ is the cell width of the coarsest mesh. Since $\Delta x_c$ is constant, a log-log plot of $U^{f \to c}$ $\left(\text{or }E^{f \to c}\right)$ versus $\Delta x_f$ will have the same slope as, but a different intercept from, that of $\bar{u}^{f \to c}$ $\left(\text{or }\bar{e}^{f \to c}\right)$ versus $\Delta x_f$, where $\Delta x_f$ represents the cell width of the fine meshes.
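A minimal Python sketch of this fine-to-coarse interpolation is given below. For brevity, it assumes a piecewise-constant sub-cell profile, whereas the PPR scheme would integrate its parabolic reconstructions; the function name is illustrative.
\begin{verbatim}
import numpy as np

def integrate_to_coarse(u_bar_f, x_edges_f, x_edges_c):
    # Accumulate the overlap of every fine cell with every coarse
    # cell to form the cell-integrated solution on the coarse mesh.
    # A piecewise-constant sub-cell profile is assumed for brevity;
    # PPR would integrate its parabolic profiles instead.
    U_c = np.zeros(len(x_edges_c) - 1)
    for j, u in enumerate(u_bar_f):
        xl, xr = x_edges_f[j], x_edges_f[j + 1]
        for k in range(len(U_c)):
            overlap = max(0.0, min(xr, x_edges_c[k + 1])
                          - max(xl, x_edges_c[k]))
            U_c[k] += u * overlap
    dx_c = np.diff(x_edges_c)
    return U_c, U_c / dx_c  # cell-integrated and cell-averaged
\end{verbatim}
For the configuration of Figure~\ref{Figure_PPR_InterpolationToCoarsestMesh}, \texttt{x\_edges\_f} would hold the six edges of the fine mesh and \texttt{x\_edges\_c} the four edges of the coarsest mesh.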
\subsection{Convergence Plots} \label{sec:convergence_plots} Figures~\ref{Figure_Numerical_Convergence_Plots_1}--\ref{Figure_Numerical_Convergence_Plots_3} show the convergence plots of the linear variable-coefficient advection equation~\eqref{LinearAdvection1D_NumericalExperiment} and the non-linear advection equation~\eqref{NonLinearAdvection1D_NumericalExperiment} using first-order upwind spatial discretization (Figure~\ref{Figure_Numerical_Convergence_Plots_1}), and fourth-order accurate PPR in space, without (Figure~\ref{Figure_Numerical_Convergence_Plots_2}) and with (Figure~\ref{Figure_Numerical_Convergence_Plots_3}) the application of the slope limiter and monotonicity-preserving strategies. From here onward, we refer to these two finite volume methods as non-monotone and monotone, respectively. The seven time-stepping methods of List \ref{myListOfTimeSteppingMethodsForNumericalExperiments} are applied. Refinement is performed in both space and time (first row), only in space (second row), and only in time (third row). Since our domain size is one, the cell width is $\Delta x = 1/N_{\text{cells}}$, where $N_{\text{cells}}$ is the number of cells. Table~\ref{Table_Numerical_Experiments_Parameters} lists the values of $N_{\text{cells}}$ for the various combinations of spatial and temporal discretizations, and refinement types. Letting $\eta = \Delta t/\Delta x$ denote the ratio of the time step to the cell width, we define two more parameters $\hat{\eta}_{\text{space}}$ and $\hat{\eta}_{\text{time}}$, and assign to them the values listed in Table~\ref{Table_Numerical_Experiments_Parameters}. For refinement in both space and time, we specify $\eta = \hat{\eta}_{\text{space}}$ and $\Delta t = \hat{\eta}_{\text{space}} \Delta x$. For refinement only in space, we specify $\eta = \hat{\eta}_{\text{space}}$ for the smallest value of $\Delta x$, say $\Delta x_{\text{smallest}}$, and use $\Delta t = \hat{\eta}_{\text{space}} \Delta x_{\text{smallest}}$ throughout the study. For refinement only in time, we specify the largest $\Delta t$, say $\Delta t_{\text{largest}}$, as $\Delta t_{\text{largest}} = \hat{\eta}_{\text{time}} \Delta x$, and vary $\Delta t$ from $\Delta t_{\text{largest}}$ to $\Delta t_{\text{largest}}/2^5$ by factors of $1/2$. We know that the width of the absolute stability region of the Runge-Kutta methods, used to advance the characteristic ODE, either remains the same or slightly increases with the order, whereas that of the Adams-Bashforth methods shrinks by almost half with each increase in order. This motivated us to specify smaller values of $\hat{\eta}_{\text{space}}$ and $\hat{\eta}_{\text{time}}$ for the Adams-Bashforth methods than for the Runge-Kutta methods. Table~\ref{Table_Numerical_Experiments_Parameters} also lists the values of the time horizon $T_{\text{horizon}}$ as fractions of the time period of the first wave mode, $T_1$. Since stability is not guaranteed for the non-monotone finite volume method, we are sometimes compelled to use smaller values of $N_{\text{cells}}$, $\hat{\eta}_{\text{space}}$, $\hat{\eta}_{\text{time}}$, and $T_{\text{horizon}}$, so that the numerical solutions remain bounded. Even though it is advisable to ensure monotonicity in practice, for the purpose of numerical verification of our theory, the non-monotone method has proven helpful.
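The three refinement protocols described above can be summarized programmatically. The following sketch, with illustrative names and default parameter values taken from Table~\ref{Table_Numerical_Experiments_Parameters}, returns the $(\Delta x, \Delta t)$ pairs used in each study:
\begin{verbatim}
import numpy as np

def refinement_schedule(refinement, dx_values,
                        eta_space=0.25, eta_time=0.16):
    # Return the (dx, dt) pairs used in each convergence study.
    # dx_values is a decreasing array, e.g. 1.0 / 2**np.arange(6, 13).
    dx = np.asarray(dx_values, dtype=float)
    if refinement == "space-time":  # dt proportional to dx
        return dx, eta_space * dx
    if refinement == "space":       # dt fixed by the smallest dx
        return dx, np.full_like(dx, eta_space * dx.min())
    if refinement == "time":        # dx fixed, dt halved repeatedly
        dt = eta_time * dx[0] / 2.0 ** np.arange(6)
        return np.full_like(dt, dx[0]), dt
    raise ValueError(refinement)
\end{verbatim}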
\begin{table}[!htp] \centering \caption{Choice of parameters $N_{\text{cells}}$, $\hat{\eta}_{\text{space}}$, $\hat{\eta}_{\text{time}}$, and $T_{\text{horizon}}$ for the linear variable-coefficient advection equation \eqref{LinearAdvection1D_NumericalExperiment} and the non-linear advection equation \eqref{NonLinearAdvection1D_NumericalExperiment} using first-order upwind and fourth-order PPR in space, with and without the application of the slope limiter and monotonicity-preserving strategies, for refinement in both space and time, refinement only in space, and refinement only in time. The parameter $\hat{\eta}_{\text{space}}$ is defined for refinement in both space and time, and for refinement only in space, whereas the parameter $\hat{\eta}_{\text{time}}$ is defined for refinement only in time. $T_1$ represents the time period of the first wave mode.} \setlength{\tabcolsep}{0.6em} \renewcommand{\arraystretch}{1.125} \begin{tabular}{cccccc} \toprule \multirow{2}{*}{Parameter} & Advection & Refinement & Spatial & Time & \multirow{2}{*}{Values} \\ & Type & Type & Discretization & Integrators & \\ \midrule \multirow{10}{*}{$N_{\text{cells}}$} & \multirow{2}{*}{Both} & Space-Time, & First-Order Upwind, & \multirow{2}{*}{All} & \multirow{2}{*}{$2^6,\ldots,2^{12}$} \\ & & Space & Fourth-Order PPR (Monotone) & & \\ & \multirow{2}{*}{Both} & Space-Time, & \multirow{2}{*}{Fourth-Order PPR (Non-Monotone)} & \multirow{2}{*}{All} & \multirow{2}{*}{$2^5,\ldots,2^{10}$} \\ & & Space & & & \\ & \multirow{2}{*}{Linear} & \multirow{2}{*}{Time} & First-Order Upwind, & \multirow{2}{*}{All} & \multirow{2}{*}{$2^7$} \\ & & & Fourth-Order PPR (Monotone) & & \\ & \multirow{2}{*}{Non-Linear} & \multirow{2}{*}{Time} & First-Order Upwind, & \multirow{2}{*}{All} & \multirow{2}{*}{$2^6$} \\ & & & Fourth-Order PPR (Monotone) & & \\ & Both & Time & Fourth-Order PPR (Non-Monotone) & All & $2^7$ \\ \midrule \multirow{10}{*}{$\hat{\eta}_{\text{space}}$} & \multirow{2}{*}{Both} & \multirow{2}{*}{Both} & \multirow{2}{*}{First-Order Upwind} & AB4 & 0.125 \\ & & & & Rest & 0.25 \\ & Both & Both & Fourth-Order PPR (Monotone) & FE1 & 0.2 \\ & Linear & \multirow{2}{*}{Space-Time} & \multirow{2}{*}{Fourth-Order PPR (Non-Monotone)} & \multirow{2}{*}{FE1} & 0.15 \\ & Non-Linear & & & & 0.2 \\ & Linear & \multirow{2}{*}{Space} & \multirow{2}{*}{Fourth-Order PPR (Non-Monotone)} & \multirow{2}{*}{FE1} & 0.0125 \\ & Non-Linear & & & & 0.1 \\ & \multirow{3}{*}{Both} & \multirow{3}{*}{Both} & \multirow{3}{*}{Fourth-Order PPR (Both)} & AB2, AB3, AB4 & 0.15 \\ & & & & RK2 & 0.2 \\ & & & & RK3, RK4 & 0.25 \\ \midrule \multirow{3}{*}{$\hat{\eta}_{\text{time}}$} & \multirow{2}{*}{Both} & \multirow{2}{*}{Time} & \multirow{2}{*}{First-Order Upwind} & RK4 & 0.32 \\ & & & & Rest & 0.16 \\ & Both & Time & Fourth-Order PPR (Both) & All & 0.16 \\ \midrule \multirow{7}{*}{$T_{\text{horizon}}$} & Both & All & First-Order Upwind & All & $0.25T_1$ \\ & Both & All & Fourth-Order PPR (Monotone) & All & $0.25T_1$ \\ & \multirow{2}{*}{Linear} & \multirow{2}{*}{Space} & \multirow{2}{*}{Fourth-Order PPR (Non-Monotone)} & FE1 & $0.03125T_1$ \\ & & & & Rest & $0.125T_1$ \\ & \multirow{2}{*}{Linear} & Space-Time, & \multirow{2}{*}{Fourth-Order PPR (Non-Monotone)} & \multirow{2}{*}{All} & \multirow{2}{*}{$0.125 T_1$} \\ & & Time & & & \\ & Non-Linear & All & Fourth-Order PPR (Non-Monotone) & All & $0.125 T_1$ \\ \bottomrule \end{tabular} \label{Table_Numerical_Experiments_Parameters} \end{table} \begin{table}[!htp] \centering \caption{Spatial and temporal 
discretizations for which the error of the linear variable-coefficient advection equation \eqref{LinearAdvection1D_NumericalExperiment} and the non-linear advection equation \eqref{NonLinearAdvection1D_NumericalExperiment} decreases with increase in $\Delta \xi$, for at least some values of $\Delta \xi$, where $\Delta \xi = \Delta x$ for refinement only in space, and $\Delta \xi = \Delta t$ for refinement only in time.} \setlength{\tabcolsep}{0.6em} \renewcommand{\arraystretch}{1.125} \begin{tabular}{cccc} \toprule Advection & Refinement & Spatial & Time \\ Type & Type & Discretization & Integrators \\ \midrule Linear & Time & First-Order Upwind & FE1, RK2, AB2, RK4 \\ Non-Linear & Time & First-Order Upwind & FE1, AB3, RK4, AB4 \\ Linear & Time & Fourth-Order PPR (Non-Monotone) & RK2, AB2, AB3 \\ Non-Linear & Time & Fourth-Order PPR (Non-Monotone) & RK2, AB2, AB3 \\ Linear & Space & Fourth-Order PPR (Monotone) & FE1 \\ Non-Linear & Space & Fourth-Order PPR (Monotone) & FE1 \\ Linear & Time & Fourth-Order PPR (Monotone) & RK3, AB3, AB4 \\ Non-Linear & Time & Fourth-Order PPR (Monotone) & RK3, AB3 \\ \bottomrule \end{tabular} \label{Table_Numerical_Experiments_Reduction_in_Error_With_Refinement} \end{table} Before discussing the nature of the convergence plots, we point out that an increase in the error with refinement only in space or only in time is a more common phenomenon than one might think. By studying the behavior of the actual error norm with only spatial or temporal refinement, we have noted all such occurrences in our numerical experiments and listed them in Table~\ref{Table_Numerical_Experiments_Reduction_in_Error_With_Refinement}. \pagebreak \begin{figure}[!htp] \centering \includegraphics[scale=.3175]{fig4.pdf} \hspace{0.15cm} \includegraphics[scale=.3175]{fig5.pdf} \caption{Variation of the error of the linear variable-coefficient advection equation \eqref{LinearAdvection1D_NumericalExperiment} (left) and the non-linear advection equation \eqref{NonLinearAdvection1D_NumericalExperiment} (right) employing the monotone finite volume method and advanced with Forward Euler, with refinement only in space.} \label{NonConvergentError} \end{figure} The convergence curves of the linear and non-linear advection equations are similar in nature, and the following explanations are applicable to both. With the finite difference method, the spatial and temporal resolutions have reached the asymptotic regime, and the order of convergence at $\eta = \hat{\eta}_{\text{space}}$ is limited by the first-order accuracy of the spatial approximation. By plotting differences in the numerical solution or the error between successive pairs of spatial (or temporal) resolutions, we capture the true order of the spatial (or temporal) discretization. With the finite volume method, we know that the spatial approximation of the cell-integrated solution, as determined by the flux approximation of the PPR, is fourth-order accurate on a uniform mesh. With the non-monotone finite volume method, the resolutions have reached the asymptotic regime for all but the convergence study in both space and time using third-order Runge-Kutta and Adams-Bashforth methods. For these methods, the slope of the convergence curves is~4 instead of 3, which can happen for one of two reasons. The first reason is that the coefficient of $\Delta t^3$ in the global truncation error is actually zero. 
However, if this were the case, we would not have attained third-order convergence by plotting the difference in the norm of the numerical solution for successive temporal resolutions with refinement only in time. The second reason, and the only plausible explanation, is that $\mathcal{O}\left(\Delta x^4\right) + \mathcal{O}\left(\Delta t^4\right) \gg \mathcal{O}\left(\Delta t^3\right)$ for the range of values of $\Delta x$ and $\Delta t$ used in the convergence study in both space and time. As a result, when $\Delta t \propto \Delta x$, the convergence slope is 4, and not 3. With further refinement in $\Delta x$ and $\Delta t$, we expect to reach the asymptotic regime and obtain a convergence slope of 3. However, as mentioned before, we were unable to do so and keep the solution stable, without applying the slope limiter and the monotonicity-preserving strategies. With the explicit midpoint method and the second-order Adams-Bashforth method, we observe order reduction as we reach the asymptotic regime, and the slope of the convergence curves drops to 3 (as expected) after the first 3 points. Just as with the finite difference method, plotting the differences in the numerical solution or the error between successive pairs of spatial (or temporal) resolutions reveals the true order of the spatial (or temporal) discretization. For the monotone finite volume method, the combined effect of (a) the slope limiter, (b) the monotonicity-preserving strategies, and (c) the dissipative Riemann solver modifies the expression of the truncation error and reduces the spatial and temporal orders of accuracy. For spatial and temporal discretizations of orders $\alpha$ and $\beta$, we expect the global truncation error to assume the form \begin{equation} \hat{\tau}_G = \mathcal{O}\left({\Delta x}^{\alpha}\right) + \Delta t \mathcal{O}\left({\Delta x}^{\alpha}\right) + {\Delta t}^2 \mathcal{O}\left({\Delta x}^{\alpha}\right) + \cdots + {\Delta t}^{\beta-1} \mathcal{O}\left({\Delta x}^{\alpha}\right) + \mathcal{O}\left({\Delta t}^{\beta}\right). \label{GlobalTruncationErrorNumericalSolutionFinalForm_Chapter3Summary} \end{equation} However, operations (a)--(c) can modify the global truncation error to \begin{equation} \left[\hat{\tau}_G\right]_{\text{modified}} = \mathcal{O}\left({\Delta x}^{\alpha_0}\right) + \Delta t \mathcal{O}\left({\Delta x}^{\alpha_1}\right) + {\Delta t}^2 \mathcal{O}\left({\Delta x}^{\alpha_2}\right) + \cdots + {\Delta t}^{\beta-1} \mathcal{O}\left({\Delta x}^{\alpha_{\beta-1}}\right) + \mathcal{O}\left({\Delta t}^{\beta}\right), \label{ModifiedGlobalTruncationErrorNumericalSolutionFinalForm_Chapter3Summary} \end{equation} where $\alpha_i \le \alpha$ for $i=0,1,\ldots,\beta-1$. This effectively reduces the spatial order of accuracy to \begin{equation} [\alpha]_{\text{modified}} = \min\left(\alpha_0,\alpha_1,\ldots,\alpha_{\beta-1}\right), \end{equation} and the temporal order of accuracy to \begin{equation} [\beta]_{\text{modified}} = r, \end{equation} where $r \in [1, \beta]$ marks the first occurrence of $\alpha_r = 0$, i.e.~of an $\mathcal{O}(1)$ term as the coefficient of ${\Delta t}^r$. The continued application of operations (a)--(c) can keep introducing an $\mathcal{O}(1)$ term into every coefficient of ${\Delta t}^i$ for $i=0,1,\ldots$, and as a result, we cannot expect to attain convergence at all in the asymptotic regime. This is indeed what we observe in the case of refinement only in space with the Forward Euler method.
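In practice, slopes such as these are diagnosed by comparing error norms at successive resolutions. A minimal sketch, with an illustrative name, is:
\begin{verbatim}
import numpy as np

def observed_order(errors, xi):
    # Slope between successive resolutions:
    # p_k = log(E_k / E_{k+1}) / log(xi_k / xi_{k+1}),
    # where xi is dx or dt and both arrays are decreasing.
    E = np.asarray(errors, dtype=float)
    xi = np.asarray(xi, dtype=float)
    return np.log(E[:-1] / E[1:]) / np.log(xi[:-1] / xi[1:])
\end{verbatim}
Applied pairwise, this exposes the drop in slope after the first few points that we describe next.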
We will discuss the nature of this particular plot (refinement only in space with Forward Euler) in more detail later. In many practical applications, we do not reach the asymptotic regime with the specified values of $\Delta x$ and $\Delta t$, in which case the dominant coefficients come to our rescue. We observe this phenomenon for every other convergence plot with the monotone finite volume method. With $[\alpha]_{\text{modified}} \approx 3.0$, we do not attain a convergence slope larger than 3 even with the fourth-order Runge-Kutta or Adams-Bashforth methods. We observe order reduction with the Forward Euler, the explicit midpoint, and the second-order Adams-Bashforth methods, as the slope of the convergence curve drops to almost 1 for Forward Euler and to close to 2 for the other two methods after the first 4 points. For refinement only in space, we obtain a convergence slope of almost 3 for all but the Forward Euler method. For refinement only in time, we obtain the expected orders of convergence for the second- and third-order methods. With Forward Euler, we observe order reduction as the convergence slope drops to 1 (as expected) after the first 3 points. However, unlike the previous experiments (employing finite difference and non-monotone PPR in space), with the fourth-order Runge-Kutta and Adams-Bashforth methods, the convergence slope drops from 4 to 2. So, the only reasonable explanation is that operations (a)--(c) reduce the value of $r$ and $[\beta]_{\text{modified}}$ to at most 2, and the coefficient of ${\Delta t}^2$ turns out to be the dominant one. \begin{figure}[!htp] \centering \hspace{0.165cm} \includegraphics[scale=.3125]{fig6.pdf} \hspace{0.785cm} \includegraphics[scale=.3125]{fig7.pdf} \includegraphics[scale=.3125]{fig8.pdf} \hspace{0.5cm} \includegraphics[scale=.3125]{fig9.pdf} \includegraphics[scale=.3125]{fig10.pdf} \hspace{0.5cm} \includegraphics[scale=.3125]{fig11.pdf} \caption{Convergence plots of the linear variable-coefficient advection equation \eqref{LinearAdvection1D_NumericalExperiment} (left column) and the non-linear advection equation~\eqref{NonLinearAdvection1D_NumericalExperiment} (right column) using first-order upwind in space, for refinement in both space and time (first row), refinement only in space (second row), and refinement only in time (third row).
Abbreviations for the seven time-stepping methods (legends) are given in List~\ref{myListOfTimeSteppingMethodsForNumericalExperiments}.} \label{Figure_Numerical_Convergence_Plots_1} \end{figure} \begin{figure}[!htp] \centering \hspace{0.165cm} \includegraphics[scale=.3175]{fig12.pdf} \hspace{0.785cm} \includegraphics[scale=.3175]{fig13.pdf} \includegraphics[scale=.3175]{fig14.pdf} \hspace{0.5cm} \includegraphics[scale=.3175]{fig15.pdf} \includegraphics[scale=.3175]{fig16.pdf} \hspace{0.5cm} \includegraphics[scale=.3175]{fig17.pdf} \caption{Similar to Figure \ref{Figure_Numerical_Convergence_Plots_1} but using fourth-order accurate PPR in space, without the application of the slope limiter and monotonicity-preserving strategies.} \label{Figure_Numerical_Convergence_Plots_2} \end{figure} \begin{figure}[!htp] \centering \hspace{0.165cm} \includegraphics[scale=.3175]{fig18.pdf} \hspace{0.785cm} \includegraphics[scale=.3175]{fig19.pdf} \includegraphics[scale=.3175]{fig20.pdf} \hspace{0.5cm} \includegraphics[scale=.3175]{fig21.pdf} \includegraphics[scale=.3175]{fig22.pdf} \hspace{0.5cm} \includegraphics[scale=.3175]{fig23.pdf} \caption{Similar to Figure \ref{Figure_Numerical_Convergence_Plots_1} but using fourth-order accurate PPR in space, with the application of the slope limiter and monotonicity-preserving strategies.} \label{Figure_Numerical_Convergence_Plots_3} \end{figure} \begin{table}[!htp] \centering \caption{Coefficients $\left[\zeta_{\gamma},\zeta_{\gamma+1}\right]$ of the best fit polynomial $\zeta_{\gamma} \Delta \xi^{\gamma} + \zeta_{\gamma+1} \Delta \xi^{\gamma+1}$ to (a) the error of the linear variable-coefficient advection equation~\eqref{LinearAdvection1D_NumericalExperiment} with refinement in both space and time, and (b) the norm of the difference in the numerical solution of~\eqref{LinearAdvection1D_NumericalExperiment} for successive resolutions with refinement only in space or only in time. For refinement in both space and time and refinement only in space, $\Delta \xi = \Delta x$, and for refinement only in time, $\Delta \xi = \Delta t$. For the finite difference and the non-monotone finite volume methods, the resolutions have already reached the asymptotic regime, and $\gamma = \alpha$ for refinement only in space, $\gamma = \beta$ for refinement only in time, and $\gamma = \min(\alpha,\beta)$ for refinement in both space and time, with $\alpha$ and~$\beta$ representing the spatial and temporal orders of accuracy. For the monotone finite volume method, the same definitions hold, except $\alpha$ and $\beta$ are replaced by their reduced equivalents.
Since the norm of the difference in the numerical solution with only spatial refinement is non-convergent for the monotone finite volume method advanced with Forward Euler after the first 3 points, we fit the polynomial to only these points.} \setlength{\tabcolsep}{0.05em} \renewcommand{\arraystretch}{1.5} \begin{tabular}{cccccc} \toprule Refinement & Time & \multicolumn{3}{c}{$\left[\zeta_{\gamma},\zeta_{\gamma+1}\right]$} \\ Type & Integrator & First-Order & Fourth-Order PPR & Fourth-Order PPR \\ & & Upwind & (Non-Monotone) & (Monotone) \\ \midrule Space-Time & FE1 & $\left[+1.18 \times 10^{+01}, -4.46 \times 10^{+01}\right]$ & $\left[+1.62 \times 10^{-02}, +2.52 \times 10^{-02}\right]$ & $\left[+1.94 \times 10^{-02}, -5.03 \times 10^{-01}\right]$ \\ Space-Time & RK2 & $\left[+1.31 \times 10^{+01}, -6.28 \times 10^{+01}\right]$ & $\left[+1.07 \times 10^{-02}, -3.17 \times 10^{-01}\right]$ & $\left[+2.03 \times 10^{-01}, -3.01 \times 10^{-01}\right]$ \\ Space-Time & AB2 & $\left[+1.31 \times 10^{+01}, -6.31 \times 10^{+01}\right]$ & $\left[+1.29 \times 10^{-02}, -2.77 \times 10^{-01}\right]$ & $\left[+2.20 \times 10^{-01}, -8.45 \times 10^{+00}\right]$ \\ Space-Time & RK3 & $\left[+1.31 \times 10^{+01}, -6.29 \times 10^{+01}\right]$ & $\left[-2.72 \times 10^{-02}, +3.14 \times 10^{+01}\right]$ & $\left[+1.49 \times 10^{+02}, -3.91 \times 10^{+03}\right]$ \\ Space-Time & AB3 & $\left[+1.31 \times 10^{+01}, -6.28 \times 10^{+01}\right]$ & $\left[-3.31 \times 10^{-02}, +3.15 \times 10^{+01}\right]$ & $\left[+1.47 \times 10^{+02}, -3.83 \times 10^{+03}\right]$ \\ Space-Time & RK4 & $\left[+1.31 \times 10^{+01}, -6.29 \times 10^{+01}\right]$ & $\left[+2.84 \times 10^{+01}, +6.79 \times 10^{+01}\right]$ & $\left[+1.51 \times 10^{+02}, -3.99 \times 10^{+03}\right]$ \\ Space-Time & AB4 & $\left[+1.31 \times 10^{+01}, -6.29 \times 10^{+01}\right]$ & $\left[+2.84 \times 10^{+01}, +6.79 \times 10^{+01}\right]$ & $\left[+1.51 \times 10^{+02}, -4.03 \times 10^{+03}\right]$ \\ Space & FE1 & $\left[+1.31 \times 10^{+01}, -1.82 \times 10^{+02}\right]$ & $\left[+1.27 \times 10^{+02}, -2.41 \times 10^{+02}\right]$ & $\left[+1.23 \times 10^{+03}, -6.25 \times 10^{+04}\right]$ \\ Space & Rest & $\left[+1.31 \times 10^{+01}, -1.82 \times 10^{+02}\right]$ & $\left[+1.27 \times 10^{+02}, -2.43 \times 10^{+02}\right]$ & $\left[+1.13 \times 10^{+03}, -5.13 \times 10^{+04}\right]$ \\ Time & FE1 & $\left[+2.75 \times 10^{+00}, +1.36 \times 10^{+01}\right]$ & $\left[+1.37 \times 10^{-02}, +3.76 \times 10^{-02}\right]$ & $\left[+1.53 \times 10^{-02}, +8.88 \times 10^{+01}\right]$ \\ Time & RK2 & $\left[+8.25 \times 10^{+00}, +1.12 \times 10^{+01}\right]$ & $\left[+4.89 \times 10^{-02}, +7.43 \times 10^{-02}\right]$ & $\left[+1.62 \times 10^{+00}, +8.89 \times 10^{+01}\right]$ \\ Time & AB2 & $\left[+1.90 \times 10^{+01}, -4.48 \times 10^{+01}\right]$ & $\left[+1.03 \times 10^{-01}, -4.42 \times 10^{-01}\right]$ & $\left[+3.90 \times 10^{+00}, +8.41 \times 10^{+02}\right]$ \\ Time & RK3 & $\left[+2.04 \times 10^{+01}, +5.09 \times 10^{+01}\right]$ & $\left[+1.27 \times 10^{-01}, +8.75 \times 10^{-02}\right]$ & $\left[+1.34 \times 10^{+02}, -1.56 \times 10^{+04}\right]$ \\ Time & AB3 & $\left[+1.22 \times 10^{+02}, -6.83 \times 10^{+02}\right]$ & $\left[+7.11 \times 10^{-01}, -1.00 \times 10^{+01}\right]$ & $\left[+1.08 \times 10^{+03}, -1.56 \times 10^{+04}\right]$ \\ Time & RK4 & $\left[+4.21 \times 10^{+02}, +7.98 \times 10^{+02}\right]$ & $\left[+5.24 \times 10^{+00}, +1.45 \times 10^{+01}\right]$ & $\left[+4.13 
\times 10^{-02}, +4.26 \times 10^{+01}\right]$ \\ Time & AB4 & $\left[+6.96 \times 10^{+02}, -7.58 \times 10^{+03}\right]$ & $\left[+4.24 \times 10^{+00}, -6.05 \times 10^{+01}\right]$ & $\left[-2.17 \times 10^{-02}, +2.35 \times 10^{+02}\right]$ \\ \bottomrule \end{tabular} \label{Table_Linear_Numerical_Advection_Error_Coefficients} \end{table} \begin{table}[!htp] \centering \caption{Similar to Table \ref{Table_Linear_Numerical_Advection_Error_Coefficients} but for the non-linear advection equation~\eqref{NonLinearAdvection1D_NumericalExperiment}.} \setlength{\tabcolsep}{0.05em} \renewcommand{\arraystretch}{1.5} \begin{tabular}{cccccc} \toprule Refinement & Time & \multicolumn{3}{c}{$\left[\zeta_{\gamma},\zeta_{\gamma+1}\right]$} \\ Type & Integrator & First-Order & Fourth-Order PPR & Fourth-Order PPR \\ & & Upwind & (Non-Monotone) & (Monotone) \\ \midrule Space-Time & FE1 & $\left[+2.09 \times 10^{-01}, -1.28 \times 10^{+00}\right]$ & $\left[+2.23 \times 10^{-04}, +2.17 \times 10^{-04}\right]$ & $\left[+2.53 \times 10^{-04}, +1.13 \times 10^{-01}\right]$ \\ Space-Time & RK2 & $\left[+2.23 \times 10^{-01}, -1.75 \times 10^{+00}\right]$ & $\left[+1.66 \times 10^{-04}, -5.69 \times 10^{-03}\right]$ & $\left[+9.97 \times 10^{-04}, +1.95 \times 10^{-01}\right]$ \\ Space-Time & AB2 & $\left[+2.23 \times 10^{-01}, -1.75 \times 10^{+00}\right]$ & $\left[+1.43 \times 10^{-04}, -4.90 \times 10^{-03}\right]$ & $\left[+8.94 \times 10^{-04}, +2.44 \times 10^{-01}\right]$ \\ Space-Time & RK3 & $\left[+2.23 \times 10^{-01}, -1.76 \times 10^{+00}\right]$ & $\left[+2.45 \times 10^{-04}, +5.08 \times 10^{-01}\right]$ & $\left[+1.11 \times 10^{+00}, +1.23 \times 10^{+01}\right]$ \\ Space-Time & AB3 & $\left[+2.23 \times 10^{-01}, -1.76 \times 10^{+00}\right]$ & $\left[+2.20 \times 10^{-04}, +5.09 \times 10^{-01}\right]$ & $\left[+1.08 \times 10^{+00}, +1.33 \times 10^{+01}\right]$ \\ Space-Time & RK4 & $\left[+2.23 \times 10^{-01}, -1.76 \times 10^{+00}\right]$ & $\left[+5.31 \times 10^{-01}, -4.54 \times 10^{-01}\right]$ & $\left[+1.14 \times 10^{+00}, +1.12 \times 10^{+01}\right]$ \\ Space-Time & AB4 & $\left[+2.23 \times 10^{-01}, -1.76 \times 10^{+00}\right]$ & $\left[+5.30 \times 10^{-01}, -4.54 \times 10^{-01}\right]$ & $\left[+1.14 \times 10^{+00}, +1.06 \times 10^{+01}\right]$ \\ Space & FE1 & $\left[+2.22 \times 10^{-01}, -5.07 \times 10^{+00}\right]$ & $\left[+7.96 \times 10^{+00}, -1.43 \times 10^{+01}\right]$ & $\left[+1.03 \times 10^{+01}, +1.23 \times 10^{+01}\right]$ \\ Space & Rest & $\left[+2.22 \times 10^{-01}, -5.06 \times 10^{+00}\right]$ & $\left[+7.95 \times 10^{+00}, -1.43 \times 10^{+01}\right]$ & $\left[+9.39 \times 10^{+00}, +1.18 \times 10^{+02}\right]$ \\ Time & FE1 & $\left[+1.85 \times 10^{-02}, +1.80 \times 10^{-01}\right]$ & $\left[+1.40 \times 10^{-04}, +9.36 \times 10^{-04}\right]$ & $\left[+4.67 \times 10^{-04}, +6.50 \times 10^{-01}\right]$ \\ Time & RK2 & $\left[+7.42 \times 10^{-02}, +1.74 \times 10^{-01}\right]$ & $\left[+7.15 \times 10^{-04}, +4.26 \times 10^{-04}\right]$ & $\left[+3.37 \times 10^{-02}, +3.75 \times 10^{-01}\right]$ \\ Time & AB2 & $\left[+1.40 \times 10^{-01}, +8.77 \times 10^{-02}\right]$ & $\left[+1.10 \times 10^{-03}, -3.35 \times 10^{-03}\right]$ & $\left[+8.30 \times 10^{-02}, +3.37 \times 10^{+00}\right]$ \\ Time & RK3 & $\left[+2.15 \times 10^{-01}, +7.57 \times 10^{-01}\right]$ & $\left[+2.39 \times 10^{-03}, +1.35 \times 10^{-03}\right]$ & $\left[+1.54 \times 10^{+00}, -1.32 \times 10^{+02}\right]$ \\ Time & AB3 & $\left[+2.21 \times 
10^{+00}, -1.21 \times 10^{+01}\right]$ & $\left[+7.27 \times 10^{-03}, -7.97 \times 10^{-02}\right]$ & $\left[+1.27 \times 10^{+01}, -3.04 \times 10^{+02}\right]$ \\ Time & RK4 & $\left[+8.26 \times 10^{+00}, +2.32 \times 10^{+01}\right]$ & $\left[+7.20 \times 10^{-02}, +6.95 \times 10^{-02}\right]$ & $\left[+9.75 \times 10^{-04}, +5.37 \times 10^{-01}\right]$ \\ Time & AB4 & $\left[+3.62 \times 10^{+01}, -2.62 \times 10^{+02}\right]$ & $\left[+4.55 \times 10^{-02}, +2.45 \times 10^{-01}\right]$ & $\left[+5.04 \times 10^{-04}, +1.81 \times 10^{+00}\right]$ \\ \bottomrule \end{tabular} \label{Table_NonLinear_Numerical_Advection_Error_Coefficients} \end{table} Now, the nature of the convergence curve of the monotone finite volume method with refinement only in space and advanced with the Forward Euler method deserves a standalone explanation. Figure \ref{NonConvergentError} shows the variation of the actual error of the linear variable-coefficient advection equation \eqref{LinearAdvection1D_NumericalExperiment} (left) and the non-linear advection equation \eqref{NonLinearAdvection1D_NumericalExperiment} (right) with refinement only in space. The error decreases at third order for the first 4 points (as verified from Figure \ref{Figure_Numerical_Convergence_Plots_3}), after which it becomes non-convergent due to the application of the slope limiter, the monotonicity-preserving strategies, and the dissipative local Lax-Friedrichs Riemann solver. More specifically, the error passes through phases of local maxima and minima. As a result, when we plot the norm of the difference in the error (or the numerical solution) for successive spatial resolutions, the curve is non-convergent after the first 3 points and follows a similar pattern, as observed in Figure \ref{Figure_Numerical_Convergence_Plots_3}. To understand the role of the coefficients of the leading order terms of the error, we have fit the polynomial $\zeta_{\gamma} \Delta \xi^{\gamma} + \zeta_{\gamma+1} \Delta \xi^{\gamma+1}$ to (a) the error of the linear variable-coefficient advection equation~\eqref{LinearAdvection1D_NumericalExperiment} and the non-linear advection equation~\eqref{NonLinearAdvection1D_NumericalExperiment} with refinement in both space and time, and (b) the norm of the difference in the numerical solution of~\eqref{LinearAdvection1D_NumericalExperiment} and~\eqref{NonLinearAdvection1D_NumericalExperiment} with refinement only in space or only in time. For the advection equations~\eqref{LinearAdvection1D_NumericalExperiment} and~\eqref{NonLinearAdvection1D_NumericalExperiment}, Tables~\ref{Table_Linear_Numerical_Advection_Error_Coefficients} and~\ref{Table_NonLinear_Numerical_Advection_Error_Coefficients} list the coefficients $\left[\zeta_{\gamma},\zeta_{\gamma+1}\right]$. For refinement in both space and time and refinement only in space, $\Delta \xi = \Delta x$, and for refinement only in time, $\Delta \xi = \Delta t$. For the finite difference and the non-monotone finite volume methods, the resolutions have reached the asymptotic regime, and $\gamma = \alpha$ for refinement only in space, $\gamma = \beta$ for refinement only in time, and $\gamma = \min(\alpha,\beta)$ for refinement in both space and time, with $\alpha$ and~$\beta$ representing the spatial and temporal orders of accuracy.
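The coefficients $\left[\zeta_{\gamma},\zeta_{\gamma+1}\right]$ can be obtained by a linear least-squares fit; a minimal sketch, with an illustrative name, is:
\begin{verbatim}
import numpy as np

def fit_leading_coefficients(xi, E, gamma):
    # Least-squares fit of E to
    #   zeta_g * xi**gamma + zeta_{g+1} * xi**(gamma + 1),
    # returning [zeta_g, zeta_{g+1}]. Points outside the asymptotic
    # regime can be excluded by slicing xi and E beforehand.
    xi = np.asarray(xi, dtype=float)
    A = np.column_stack([xi**gamma, xi**(gamma + 1)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(E, float), rcond=None)
    return coeffs
\end{verbatim}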
The convergence curves of the advection equations discretized with the non-monotone finite volume method and advanced with the explicit midpoint method and the second-order Adams-Bashforth method reach the asymptotic regime after the first three points for refinement in both space and time. So, we do not include these points for determining the best fit polynomial. Even though $\zeta_{\gamma+1} \Delta \xi^{\gamma+1} \ll \zeta_{\gamma} \Delta \xi^{\gamma}$ in the asymptotic regime, if the slope of the convergence curve is $\gamma+1$ instead of $\gamma$, it is immediately clear that we have not reached the asymptotic regime, and we would expect $\zeta_{\gamma+1}$ to be at least a few orders of magnitude larger than $\zeta_{\gamma}$. This is what we observe for the convergence of the advection equations discretized with the non-monotone finite volume method and advanced with the third-order Runge-Kutta or Adams-Bashforth methods, with refinement in both space and time. For these cases, $\zeta_{4}$ is three orders of magnitude larger than $\zeta_{3}$. If, however, the resolutions have reached the asymptotic regime, and the slope of the convergence curve is $\gamma$, we observe $\zeta_{\gamma+1}$ to be (a) less than $\zeta_{\gamma}$, (b) of the same order of magnitude as $\zeta_{\gamma}$, or (c) at most one order of magnitude larger than $\zeta_{\gamma}$. For the monotone finite volume method, $\alpha$ and $\beta$ are replaced by their reduced equivalents, as observed in the convergence plots of Figure \ref{Figure_Numerical_Convergence_Plots_3}. If we observe order reduction in the convergence curves obtained with these monotone finite volume methods, and we use all points to obtain the best fit polynomial, $\zeta_{\gamma+1}$ can be a few orders of magnitude larger than $\zeta_{\gamma}$. Finally, the norm of the difference in the numerical solution with only spatial refinement is non-convergent for the monotone finite volume method advanced with Forward Euler after the first 3 points. So, we fit the polynomial to only these points. Summarizing, we obtain the expected orders of convergence with the finite difference method, but not with the finite volume method for some of the time-stepping methods. There are two main reasons for this. First, the resolutions have not reached the asymptotic regime for some of these time-stepping methods, for example, with the advection equations discretized with the non-monotone finite volume method and advanced with the third-order Runge-Kutta or Adams-Bashforth methods, for refinement in both space and time. As a result, the coefficients of the leading order terms in the truncation error predominate over the powers of $\Delta x$ and $\Delta t$. This trend is expected to reverse with spatial or temporal refinement as we approach the asymptotic regime. However, we were unable to refine further with the non-monotone finite volume method without the solution becoming unstable. Now, the extent of refinement of the discretization parameters $\Delta x$ and $\Delta t$ required to reach the asymptotic regime depends on the problem being solved. In our effort to reach the asymptotic regime, we may keep refining $\Delta x$ and $\Delta t$ until the numerical error eventually drops below machine precision. In practice, we may not reach the asymptotic regime, and the magnitude of the error is then dictated by the coefficients of the leading order terms in the truncation error, rather than by the powers of $\Delta x$ and $\Delta t$.
If, however, the coefficients of the leading order terms in the truncation error dominate for the largest values of $\Delta x$ and $\Delta t$, and we approach (or reach) the asymptotic regime before the machine precision error dominates, we expect to obtain a reduction in the convergence slope. This is observed with simultaneous refinement of the advection equations in space and time, when they are (a) discretized in space with the non-monotone finite volume method and advanced in time with the explicit midpoint and the second-order Adams-Bashforth methods, and (b) discretized in space with the monotone finite volume method and advanced in time with Forward Euler, the explicit midpoint, and the second-order Adams-Bashforth methods. The second reason why some of the convergence slopes obtained with the finite volume method do not match the theoretical predictions is the following. The slope limiter, the monotonicity-preserving strategies, and the dissipative Riemann solver, all of which enable us to employ a high-order finite volume method and increase the resolution while ensuring numerical stability, lower the order of accuracy. So, for verification purposes, we should adhere to the non-monotone finite volume methods. In other words, we should refrain from adopting any strategy that can reduce the order of accuracy. But despite the order reduction due to these strategies adopted in the monotone finite volume methods, we observe that the optimum order of convergence at constant ratio of time step to cell width is obtained by a time-stepping method of at least the same order of accuracy as that of the spatial discretization. \section{Conclusion} \label{sec:conclusion} We have derived expressions for the local truncation error of generic and specific hyperbolic PDEs, consisting of linear and non-linear advection equations, advanced in time with a variety of time-stepping methods belonging to the Method of Lines, e.g.~Forward Euler, predictor-corrector methods like Runge-Kutta, and multistep methods like Adams-Bashforth. We used first-, second-, and third-order upwind spatial discretization on a uniform mesh. The local truncation error assumes the form \begin{equation} \hat{\tau} = \Delta t \mathcal{O}\left({\Delta x}^{\alpha}\right) + {\Delta t}^2 \mathcal{O}\left({\Delta x}^{\alpha}\right) + {\Delta t}^3 \mathcal{O}\left({\Delta x}^{\alpha}\right) + \cdots + {\Delta t}^{\beta} \mathcal{O}\left({\Delta x}^{\alpha}\right) + \mathcal{O}\left({\Delta t}^{\beta+1}\right). \label{LocalTruncationErrorNumericalSolutionFinalForm_Conclusion} \end{equation} The form of the local truncation error does not depend on whether the advection equation is linear or non-linear, constant- or variable-coefficient, homogeneous or inhomogeneous, or on whether the time-stepping method is explicit or implicit, predictor-corrector or multistep. The leading order terms of the local truncation error depend only on the orders of the spatial and temporal discretizations. If the PDE is reduced to an ODE by specifying all spatial gradients to be zero, the local truncation error reduces to that of the ODE, thereby attesting to the robustness of our theory.
At a time horizon, the global truncation error is one order of $\Delta t$ less than its local counterpart, and assumes the form \begin{equation} \hat{\tau}_G = \mathcal{O}\left({\Delta x}^{\alpha}\right) + \Delta t \mathcal{O}\left({\Delta x}^{\alpha}\right) + {\Delta t}^2 \mathcal{O}\left({\Delta x}^{\alpha}\right) + \cdots + {\Delta t}^{\beta-1} \mathcal{O}\left({\Delta x}^{\alpha}\right) + \mathcal{O}\left({\Delta t}^{\beta}\right) \approx \mathcal{O}\left({\Delta x}^{\alpha}\right) + \mathcal{O}\left({\Delta t}^{\beta}\right) \text{ for $\Delta t \ll 1$}. \label{GlobalTruncationErrorNumericalSolutionFinalForm_Conclusion} \end{equation} When performing convergence tests of a hyperbolic PDE with the assumptions of \begin{enumerate}[label=(\alph*),noitemsep] \item a stable numerical scheme, \item the global solution error being of the same order of accuracy as the global truncation error, \item having reached the asymptotic regime, where the truncation error is dominated by the powers of the cell width and the time step rather than their coefficients, before machine precision error dominates, \end{enumerate} the following hold: \begin{enumerate}[label=(\roman*),noitemsep] \item The order of convergence at constant ratio of time step to cell width is determined by the minimum of the orders of the spatial and temporal discretizations. So, for a spatial discretization of a given order, a time-stepping method of at least the same order should be employed to attain the optimum order of convergence, with the most computationally efficient choice being a time-stepping method of exactly the order of the spatial discretization. \item Convergence of the error norm cannot be guaranteed under only spatial or temporal refinement. \item By plotting the difference in the numerical solution or the error between successive pairs of spatial (or temporal) resolutions, the convergence rates of the spatial (or temporal) discretization can be determined. \end{enumerate} We have conducted numerical experiments with linear and non-linear advection equations to demonstrate and underline our theoretical findings. We have employed finite difference and finite volume spatial discretizations, and a variety of time-stepping methods including Forward Euler, and Runge-Kutta and Adams-Bashforth methods from second up to fourth order. With the finite difference and the majority of the non-monotone finite volume methods, the spatial and temporal resolutions have reached the asymptotic regime and the convergence rates match our theoretical predictions. However, for the finite volume method with some of the time-stepping methods, the resolutions do not reach the asymptotic regime before machine precision error takes over. Under such circumstances, the coefficients of the leading order terms in the truncation error cannot be ignored, and consequently (i)--(iii) may not necessarily hold. Moreover, the slope limiter, the monotonicity-preserving strategies, and the dissipation provided by the Riemann solver in the monotone finite volume method lower the spatial and temporal orders of accuracy. These are practical aspects of numerical models, not covered by our theory, that we need to take into consideration. However, for a spatial discretization of a given order, we still observe that the optimum order of convergence is attained by a time-stepping method of at least the same order.
The points presented in this paper on how spatial and temporal convergence interact, specifically equations \eqref{LocalTruncationErrorNumericalSolutionFinalForm_Conclusion} and \eqref{GlobalTruncationErrorNumericalSolutionFinalForm_Conclusion}, are both straightforward and fundamental. Ongoing and future work includes extending our theory to parabolic PDEs and higher-order and spectral discretizations in space and time. \section{Acknowledgements} \label{sec:acknowledgements} Siddhartha Bishnu has been supported by the Scientific Discovery through Advanced Computing (SciDAC) projects LEAP (Launching an Exascale ACME Prototype) and CANGA (Coupling Approaches for Next Generation Architectures) under the U.S.~Department of Energy (DOE), Office of Science, Office of Biological and Environmental Research (BER). Mark Petersen was supported as part of the Energy Exascale Earth System Model (E3SM) project, also funded by the DOE BER. This research used resources provided by the Los Alamos National Laboratory Institutional Computing Program, which is supported by the U.S. Department of Energy National Nuclear Security Administration under Contract No.~89233218CNA000001. The authors thank Tomek Plewa, Darren Engwirda, Giacomo Capodaglio, and Pedro da Silva Peixoto for helpful discussions.
{ "timestamp": "2022-01-11T02:12:24", "yymm": "2105", "arxiv_id": "2105.01822", "language": "en", "url": "https://arxiv.org/abs/2105.01822" }
\section{Introduction} Acoustic scene classification (ASC), which classifies sound recordings into predefined classes such as recording environments, places, and daily activities, is one of the core research problems in environmental sound analysis \cite{Virtanen_Springer2018_01,Imoto_AST2018_01,Heittola_DCASE2020_01}. ASC has significant potential for various applications such as monitoring infants/elderly people \cite{Mesaros_DCASE2017c_01}, automatic surveillance \cite{Ntalampiras_ICASSP2009_01}, automatic life-logging \cite{Imoto_IEICE2016_01}, and media retrieval \cite{Jin_INTERSPEECH2012_01}. Many methods for ASC utilizing spectral information have been proposed. For instance, Eronen {\it et al.} \cite{Eronen_TASLP2006_01} and Mesaros {\it et al.} \cite{Mesaros_EUSIPCO2010_01} have proposed methods based on mel-frequency cepstral coefficients (MFCCs) and Gaussian mixture models (GMMs). Valenti {\it et al.} \cite{Valenti_IJCNN2017_01}, Han {\it et al.} \cite{Han_DCASE2017_01}, and Jallet {\it et al.} \cite{Jallet_DCASE2017_01} have proposed methods using mel-spectrograms and a convolutional neural network (CNN). Liping {\it et al.} \cite{Liping_DCASE2018_01}, Tanabe {\it et al.} \cite{Tanabe_DCASE2018_01}, and Raveh and Amar \cite{Raveh_DCASE2018_01} have proposed Xception-based, VGG-based, and ResNet-based ASC methods, respectively. More recently, environmental sound analysis utilizing spatial information, which is extracted from time differences or sound power ratios between channels, has also been studied \cite{Kwon_ISCS2009_01,Giannoulis_EUSIPCO2015_01,Imoto_TASLP2017_01,Nakadai_DCASE2018_01,Tanabe_DCASE2018_01,Imoto_IEICE2020_01}. Conventional microphone array processing requires that the microphones be synchronized between channels and/or that the microphone locations or array geometry be known. However, spatial information based on accurate time differences or sound power ratios between channels cannot be extracted using a combination of unsynchronized distributed microphones such as smartphones, IoT devices, and surveillance cameras. To utilize unsynchronized distributed microphones with unknown locations or array geometry for multichannel ASC, K\"{u}rby \textit{et al.} \cite{Kurby_DCASE2016_01} have proposed a method based on the late fusion of scene classification results obtained with each microphone. Many conventional methods for multichannel ASC also apply this strategy \cite{Inoue_DCASE2018_01,Liu_DCASE2018_01}. Imoto \textit{et al.} have proposed ASC methods using the spatial cepstrum and graph cepstrum that can be applied under unsynchronized conditions \cite{Imoto_TASLP2017_01,Imoto_IEICE2020_01}. On the other hand, sounds recorded with smartphones or IoT devices often have missing parts caused by microphone failure or packet loss in data transmission over the network, or unreliable observations caused by clipping and wind noise. To analyze acoustic scenes from intermittently missing observations with a single-channel microphone, Imoto and Ono have proposed a method of simultaneously analyzing acoustic scenes and estimating missing observations \cite{Imoto_TASLP2019_01}. However, that method addresses only single-channel recordings, and the impact of partially missing channels on ASC performance with multichannel audio recordings has not been investigated in previous works. In this paper, we thus investigate the impact of partially missing channels on the performance of multichannel ASC, especially for distributed microphone arrays.
In machine-learning-based multichannel ASC, missing channels cause not only losses of time-frequency and spatial information on sound sources but also a mismatch between a trained model and evaluation data. Therefore, to realize a robust ASC system, it is important to investigate how a missing channel affects the ASC performance. We then apply simple data augmentation methods for multichannel ASC with partially missing channels and evaluate the resulting scene classification performance. The remainder of this paper is organized as follows. In section 2, we discuss conventional acoustic scene classification using multichannel observations. In section 3, we introduce three simple data augmentation methods for multichannel ASC with missing channels. In section 4, we report the results of experiments carried out to evaluate the performance of ASC with partially missing channels and the impact of missing channels on the ASC performance. Finally, we conclude this paper in section 5. \vspace{5pt} \section{Conventional Methods for Scene Classification} \label{sec:conventional} Let us consider a model $f$ and model parameter ${\bi \theta}$. The purpose of ASC is to estimate an acoustic scene label $\hat{z}$ in an evaluated sound as \vspace{-2pt} \begin{align} \hat{z} = \mathop{\rm arg~max}\limits_{z} \, f({\bf X}, {\bi \theta}), \end{align} \vspace{-6pt} \noindent where $z$ and ${\bf X} \hspace{0.7pt} ( \hspace{0.7pt} \in \hspace{-3pt} \mathbb{R}^{F \times T \times C})$ are the acoustic scene class and acoustic feature, respectively. $F$, $T$, and $C$ are the numbers of frequency bins, time frames, and channels, respectively. The model parameter ${\bi \theta}$ is preliminarily determined using the training dataset $\mathcal D = \{ ({\bf X}_{1}, z_{1}), ..., ({\bf X}_{l}, z_{l}), ..., $ $({\bf X}_{L}, z_{L}) \}$. Here, ${\bf X}_{l}$ is the acoustic feature of the $l$th sound clip and $z_{l}$ is its acoustic scene label. For the acoustic feature ${\bf X}_{l}$, the mel-band energy and MFCCs are often used. As the model $f$, a GMM, a CNN, or a ResNet- or VGG-based network is often applied. In the neural-network-based methods, the model parameter ${\bi \theta}$ is estimated using the softmax cross-entropy loss function and the backpropagation technique. Most conventional methods assume that there is no missing channel in a multichannel observation. However, in the scenario of a distributed microphone array, we may have partially missing channels caused by microphone failure, in which some acoustic feature ${\bf X}_{c}$ in the $c$\hspace{0.6pt}th channel cannot be utilized in the evaluation data. \vspace{1pt} \section{Data Augmentation for Multichannel Scene Classification} \label{sec:augmentation} \vspace{1pt} In this work, we apply three data augmentation methods for multichannel scene classification with partially missing channels. These data augmentation methods are reasonably simple to implement and enable us to investigate how partially missing channels affect the ASC performance. \subsection{Channel Mask} \label{ssec:ChMask} Data missing in the evaluation stage causes a mismatch between the trained model and the evaluation data.
To avoid this mismatch, we apply simple binary masking throughout the input time-frequency features for randomly selected channels in the model training stage as follows: \vspace{-2pt} \begin{align} {\bf X}_{l,c} = O, \label{eq:ChMask} \end{align} \vspace{-6pt} \noindent where ${\bf X}_{l,c}$ is the acoustic feature of the $l$th sound clip in the $c$\hspace{0.6pt}th channel. $O$ is the zero matrix when the acoustic feature is the linear spectrum, whereas it is a matrix whose elements are negative infinity when the acoustic feature is the log spectrum. \subsection{Channel Overwrite and Random Copy} \label{ssec:ChOverwrite} When applying \textit{channel mask}, a large gap may remain between the unmasked channels in the training data and the missing channels in the evaluation data. To bridge this gap, we apply a data augmentation method using \textit{channel overwrite} in the model training stage and a random copy in the evaluation stage. \textit{Channel overwrite} forcibly overwrites the time-frequency features of one channel with those of another in model training as follows: \vspace{-2pt} \begin{align} \hspace{1pt} {\bf X}_{l,c} = {\bf X}_{l,c'}. \label{eq:ChOverwrite} \end{align} \vspace{-6pt} \noindent In the evaluation stage, we randomly copy the acoustic features from non-missing channels to missing channels. \begin{table}[!t] \small \begin{center} \renewcommand{\arraystretch}{1.02} \caption{Number of Recorded Audio Segments} \vspace{-5pt} \label{tab:audiosegments} \vspace{0pt} \begin{tabular}{cr} \wcline{1-2} \ \\[-8.0pt] \textbf{Acoustic scene} & \textbf{\# segments} \\[-0.4pt] \wcline{1-2} \ \\[-8.0pt] Absence&4,715\ \ \ \ \\[0pt] Cooking&1,281\ \ \ \ \\[0pt] Dishwashing&356\ \ \ \ \\[0pt] Eating&577\ \ \ \ \\[0pt] Other&515\ \ \ \ \\[0pt] Social activity&1,236\ \ \ \ \\[0pt] Vacuum cleaning&243\ \ \ \ \\[0pt] Watching TV&4,662\ \ \ \ \\[0pt] Working&4,661\ \ \ \ \\ \wcline{1-2} \ \\[-8.0pt] Total&18,246\ \ \ \ \\[-1pt] \wcline{1-2} \end{tabular} \end{center} \vspace{15pt} \vspace{-1pt} \footnotesize \caption{Detailed network architecture used for \protect\linebreak evaluation experiments} \vspace{-5pt} \label{tbl:parameter} \centering \begin{tabular}{ccc} \wcline{1-3} &\\[-7.5pt] \! \textbf{Layer} \!\!\!&\!\!\! \textbf{Input size} \!\!\!\!&\!\!\!\! \textbf{Output size} \!\!\!\\ \wcline{1-3} \!&\\[-7.2pt] \!Conv. (7$\times$1$\times$64) + BN + ReLU \!\!\!&\!\!\! 40$\times$501$\times$16 \!\!\!\!&\!\!\!\! 40$\times$501$\times$64 \!\!\!\!\\[0pt] \!Max pooling (4$\times$1) + Dropout (rate\hspace{1pt}=\hspace{1pt}0.2) \!\!\!&\!\!\! 40$\times$501$\times$64 \!\!\!\!&\!\!\!\! 10$\times$501$\times$64 \!\!\!\!\\[0pt] \!Conv. (10$\times$1$\times$128) + BN + ReLU \!\!\!&\!\!\! 10$\times$501$\times$64 \!\!\!\!&\!\!\!\! 10$\times$501$\times$128 \!\!\!\!\\[0pt] \!Conv. (1$\times$7$\times$256) + BN + ReLU \!\!\!&\!\!\! 10$\times$501$\times$128 \!\!\!\!&\!\!\!\! 10$\times$501$\times$256 \!\!\!\!\\[0pt] \!Global max pooling + Dropout (rate\hspace{1pt}=\hspace{1pt}0.5) \!\!\!&\!\!\! 10$\times$501$\times$256 \!\!\!\!&\!\!\!\! 256 \!\!\!\!\\[0pt] \!Dense \!\!\!&\!\!\! 256 \!\!\!\!&\!\!\!\! 128 \!\!\\[0pt] \!Softmax \!\!\!&\!\!\! 128 \!\!\!\!&\!\!\!\! 9 \!\!\\[0pt] \wcline{1-3} \end{tabular} \vspace{0pt} \vspace{15pt} \footnotesize \caption{Scene classification performance with missing channels in evaluation dataset} \vspace{-5pt} \label{tbl:degradation} \centering \begin{tabular}{ccccccc} \wcline{1-7} &\\[-7.5pt] \!\!\!\!&\!\!\!\! \textbf{w/o missing} \!\!\!\!&\!\!\! \textbf{1ch} \!\!\!&\!\!\!
\textbf{2ch} \!\!\!&\!\!\! \textbf{4ch} \!\!\!&\!\!\! \textbf{8ch} \!\!\!&\!\!\! \textbf{12ch}\!\!\\ \wcline{1-7} \!\!\!&\!\!\\[-7.2pt] \!\!\textbf{Micro-Fscore}\!\!\!\!\!&\!\!\!\!\!\!96.80\%\!\!\!\!\!&\!\!\!85.50\%\!\!\!&\!\!\!74.88\%\!\!\!&\!\!\!60.93\%\!\!\!&\!\!\!38.63\%\!\!\!&\!\!\!17.67\%\!\!\\[0pt] \!\!\textbf{Macro-Fscore}\!\!\!\!\!&\!\!\!\!\!\!93.60\%\!\!\!\!\!&\!\!\!65.90\%\!\!\!&\!\!\!45.66\%\!\!\!&\!\!\!31.44\%\!\!\!&\!\!\!13.34\%\!\!\!&\!\!\!13.98\%\!\!\\[0pt] \wcline{1-7} \end{tabular} \vspace{19pt} \footnotesize \caption{Scene classification performance with same missing channels in training and evaluation datasets} \vspace{-5pt} \label{tbl:same} \centering \begin{tabular}{cccccc} \wcline{1-6} &\\[-7.5pt] & \textbf{1ch} & \textbf{2ch} & \textbf{4ch} & \textbf{8ch} & \textbf{12ch} \\ \wcline{1-6} &\\[-7.2pt] \textbf{Micro-Fscore}&95.67\%&95.05\%&95.23\%&93.55\%&90.35\%\\[0pt] \textbf{Macro-Fscore}&91.26\%&89.43\%&89.51\%&85.58\%&79.53\%\\[0pt] \wcline{1-6} \end{tabular} \vspace{10pt} \end{table} \begin{table*}[t] \vspace{1pt} \small \caption{Micro-Fscore for classification performance with data augmentation. \protect\linebreak ``$n$ch missing'' denotes the number of missing channels in the evaluation dataset.} \vspace{-5pt} \label{tbl:microFscore_aug} \centering \begin{tabular}{lcccccc} \wcline{1-7} &\\[-7.5pt] & \textbf{w/o missing} & \textbf{1ch missing} & \textbf{2ch missing} & \textbf{4ch missing} & \textbf{8ch missing} & \textbf{12ch missing} \\ \wcline{1-7} &\\[-7.2pt] w/o augmentation & \textbf{96.80\%} & 85.50\% & 74.88\% & 60.93\% & 38.63\% & 17.67\%\\[1pt] \textit{Channel mask} & 95.54\% & 95.59\% & 95.45\% & 94.85\% & 92.47\% & 76.01\%\\[1pt] \textit{Channel overwrite} + Random copy & 95.81\% & 95.78\% & 95.68\% & 95.36\%& 93.91\% & 91.43\%\\[1pt] \textit{Channel swap} + Random copy & 95.85\% & \textbf{95.82\%} & \textbf{95.72\%} & \textbf{95.39\%} & \textbf{94.06\%} & \textbf{91.46\%}\\[0pt] \wcline{1-7} \end{tabular} \vspace{15pt} \caption{Macro Fscore for classification performance with data augmentation \protect\linebreak ``$n$ch missing'' denotes the number of missing channels in the evaluation dataset.} \vspace{-5pt} \label{tbl:macroFscore_aug} \centering \begin{tabular}{lcccccc} \wcline{1-7} &\\[-7.5pt] & \textbf{w/o missing} & \textbf{1ch missing} & \textbf{2ch missing} & \textbf{4ch missing} & \textbf{8ch missing} & \textbf{12ch missing} \\ \wcline{1-7} &\\[-7.2pt] w/o augmentation & \textbf{93.60\%} & 65.90\% & 45.66\% & 31.44\% & 13.34\% & 13.98\%\\[1pt] \textit{Channel mask} & 90.63\% & 90.76\% & 90.42\% & 88.81\% & 84.28\% & 51.99\%\\[1pt] \textit{Channel overwrite} + Random copy & 90.75\% & 90.75\% & 90.53\% & 90.06\%& 88.21\% & 83.98\%\\[1pt] \textit{Channel swap} + Random copy & 90.74\% & \textbf{90.75\%} & \textbf{90.54\%} & \textbf{90.13\%} & \textbf{88.27\%} & \textbf{84.07\%}\\[0pt] \wcline{1-7} \end{tabular} \vspace{10pt} \end{table*} \subsection{Channel Swap and Random Copy} \label{ssec:ChSwap} \textit{Channel mask} and \textit{channel overwrite} lose time-frequency information since we discard time-frequency features in the training stage. To train the scene classification model without wasting time-frequency information, we apply a data augmentation method using \textit{channel swap}. \textit{Channel swap} simply swaps the time-frequency features between channels in model training as follows: \vspace{-3pt} \begin{align} \begin{cases} \hspace{1pt} {\bf X}_{l,c} = {\bf X}_{l,c'}\\ \hspace{1pt} {\bf X}_{l,c'} = {\bf X}_{l,c}. 
\end{cases} \label{eq:ChSwap} \end{align} \vspace{-1pt} \noindent In the evaluation stage, we randomly copy the acoustic features from non-missing channels to missing channels, as with the random copy used with \textit{channel overwrite}. \vspace{2pt} \section{Experiments} \label{sec:experiments} \vspace{1pt} \subsection{Experimental Conditions} \label{ssec:conditions} We evaluate the impact of missing channels on the performance of scene classification using various data augmentation methods. To evaluate the performance, we use the development dataset of DCASE2018 Challenge Task 5 \cite{Dekkers_DCASE2018_01}, which is a derivative of the SINS dataset \cite{Dekkers_DCASE2017_01}. We construct each 10 s audio segment with sounds recorded by four microphone arrays, each of which consists of four linearly arranged microphones; that is, each audio segment contains 16 channels. As shown in Table~\ref{tab:audiosegments}, the dataset contains 18,246 audio segments, and we split them into the same 4-fold cross-validation setup as in DCASE2018 Challenge Task 5. For the acoustic features, we use the 40-dimensional log mel-band energy, with a frame length of 40 ms and a hop size of 20 ms. In this paper, we regard the missing channels as silent, i.e., filled with zeros in the time domain. As the classification model, we apply the same network proposed by Inoue {\it et al.} \cite{Inoue_DCASE2018_01}, which achieved the best score in DCASE2018 Challenge Task 5, except for the input channel size of the network. The detailed network structure is shown in Table~\ref{tbl:parameter}. We utilize the RAdam optimizer \cite{Liu_ICLR2020_01} with a learning rate of 0.001. For each method, we conduct the evaluation experiment 16 (random combinations of missing channels) $\times$ 4 (folds) times. \begin{figure*}[t] \vspace{15pt} \begin{tabular}{ccc} \begin{minipage}[t]{0.31\hsize} \centering \includegraphics[height=1.09\textwidth]{nomiss_00.eps} \vspace{-14pt} \caption{Example of scene classification result without missing channels (recall, \%)} \label{fig:nomiss} \end{minipage} \hspace*{9pt} \begin{minipage}[t]{0.31\hsize} \centering \hspace*{12pt} \includegraphics[height=1.09\textwidth]{missing_01.eps} \vspace{-4pt} \caption{Example of scene classification result with four missing channels in evaluation data (recall, \%)} \label{fig:missing} \end{minipage} \hspace*{7pt} \begin{minipage}[t]{0.31\hsize} \centering \hspace*{-3pt} \includegraphics[height=1.09\textwidth]{ChSwap_01.eps} \vspace{-4pt} \caption{Example of scene classification result based on \textit{channel swap} with four missing channels in evaluation data (recall, \%)} \label{fig:ChSwap} \end{minipage} \end{tabular} \vspace{10pt} \end{figure*} \subsection{Experimental Results} \label{ssec:results} \subsubsection{Impact of Missing Channel on Classification Performance} We evaluate the performance degradation caused by missing channels in the evaluation data. Table~\ref{tbl:degradation} shows the scene classification performance in terms of micro- and macro-Fscore. The result shows that the missing channels cause severe performance degradation in multichannel ASC. To investigate how the missing channels affect the ASC performance, we also evaluate the ASC performance with the same channels missing in the training and evaluation datasets. Table~\ref{tbl:same} shows the scene classification performance in terms of micro- and macro-Fscore.
The results show that the performance degradation is significantly smaller than the corresponding results in Table~\ref{tbl:degradation}, even though some channels are missing in the training dataset. This indicates that, in multichannel ASC, the mismatch between the trained model and evaluation data is a much more severe problem than missing spectral and spatial information. Thus, the mismatch between the trained model and evaluation data is the problem that must be addressed first in multichannel ASC with partially missing channels. \vspace{8pt} \subsubsection{Evaluation of Data Augmentation Technique for Multichannel ASC} \vspace{2pt} We next evaluate the ASC performance with the proposed data augmentation methods. In this experiment, we randomly select between 0 and 8 channels for data augmentation in each iteration of model training. Tables~\ref{tbl:microFscore_aug} and \ref{tbl:macroFscore_aug} show the scene classification performance in terms of micro- and macro-Fscore with the proposed data augmentation methods. The results show that the three data augmentation methods achieve reasonable performance. In particular, \textit{channel overwrite} and \textit{channel swap} achieve ASC performance comparable to the result without missing channels. Comparing these results with Table~\ref{tbl:same} indicates that \textit{channel overwrite} and \textit{channel swap} can almost completely avoid the mismatch between the trained model and evaluation data. \vspace{8pt} \subsubsection{Detailed Scene Classification Performance} \vspace{2pt} Figs.~\ref{fig:nomiss}--\ref{fig:ChSwap} show detailed scene classification results: without missing channels, with four missing channels and no data augmentation, and with four missing channels using \textit{channel swap}. Without data augmentation, most of the audio segments are predicted as ``other.'' On the other hand, the classification result using \textit{channel swap} achieves comparable performance to that without missing channels. From these results, we conclude that, in multichannel ASC, partially missing channels may cause a severe degradation of the ASC performance, and avoiding the mismatch between the trained model and evaluation data is important to achieve a robust ASC system in a realistic situation. \section{Conclusion} \label{sec:conclusion} In this paper, we investigated the impact of partially missing channels on the ASC performance using multichannel audio recordings obtained with a distributed microphone array. We also proposed three data augmentation methods for multichannel ASC: \textit{channel mask}, \textit{channel overwrite}, and \textit{channel swap}. The experimental results showed that, in multichannel ASC, the mismatch between the trained model and evaluation data is a much more severe problem than missing spectral and spatial information. To avoid this negative impact on the ASC performance, the data augmentation based on \textit{channel overwrite} and \textit{channel swap} is effective and can avoid the performance degradation caused by the model mismatch. \section*{Acknowledgment} This work was supported by JSPS KAKENHI Grant Numbers JP20H00613 and JP19K20304, and by the KDDI Foundation. \vspace{3pt} \bibliographystyle{IEEEtran}
{ "timestamp": "2021-05-06T02:08:20", "yymm": "2105", "arxiv_id": "2105.01836", "language": "en", "url": "https://arxiv.org/abs/2105.01836" }
\section{Introduction} In this work we consider the complexity of ``approximating'' ``ordering constraint satisfaction problems (OCSPs)'' in the ``streaming setting''. We introduce these notions below before describing our results. \subsection{Orderings and Constraint Satisfaction Problems}\label{ssec:intro-notation} In this work we consider optimization problems where the solution space is all possible orderings of $n$ variables. The Travelling Salesperson Problem and most forms of scheduling fit this framework, though our work considers a more restricted class of problems, namely \emph{ordering constraint satisfaction problems (OCSPs)}. OCSPs as a class were first defined by Guruswami, H{\aa}stad, Manokaran, Raghavendra, and Charikar~\cite{GHM+11}. To describe them here, we first set up some notation and terminology, and then give some examples. We let $[n]$ denote the set $\{0,\ldots,n-1\}$ and $\mathsf{S}_n$ denote the set of permutations on $[n]$, i.e., the set of bijections $\boldsymbol{\sigma}: [n] \to [n]$. We sometimes use $[\boldsymbol{\sigma}(0)\: \boldsymbol{\sigma}(1)\cdots \boldsymbol{\sigma}(n-1)]$ to denote $\boldsymbol{\sigma}:[n]\to[n]$. The solution space of ordering problems is $\mathsf{S}_n$, i.e., an {\em assignment} to $n$ variables is given by $\boldsymbol{\sigma} \in \mathsf{S}_n$. Given $k$ distinct integers $a_0,\ldots,a_{k-1}$ we define $\textsf{ord}(a_0,\ldots,a_{k-1})$ to be the unique permutation in $\mathsf{S}_k$ which sorts $a_0,\ldots,a_{k-1}$. In other words, $\textsf{ord}(a_0,\ldots,a_{k-1})$ is the unique permutation $\boldsymbol{\pi}\in\mathsf{S}_k$ such that $a_{\boldsymbol{\pi}(0)} < \cdots < a_{\boldsymbol{\pi}(k-1)}$. A {\em $k$-ary ordering constraint function} is given by a predicate $\Pi:\mathsf{S}_k \to \{0,1\}$. An {\em ordering constraint application} on $n$ variables is given by a constraint function $\Pi$ and a $k$-tuple $\mathbf{j}=(j_0,j_1,\dots,j_{k-1})\in[n]^k$ where the $j_i$'s are distinct. In the interest of brevity we will often skip the term ``ordering'' below and further refer to constraint functions as ``functions'' and constraint applications as ``constraints''. A constraint $(\Pi,\mathbf{j})$ is {\em satisfied} by an assignment $\boldsymbol{\sigma}\in \mathsf{S}_n$ if $\Pi(\textsf{ord}(\boldsymbol{\sigma}|_{\mathbf{j}}))=1$, where $\boldsymbol{\sigma}|_{\mathbf{j}}$ is the $k$-tuple $(\boldsymbol{\sigma}(j_0),\ldots,\boldsymbol{\sigma}(j_{k-1})) \in [n]^k$. A \emph{maximum ordering constraint satisfaction problem}, $\textsf{Max-OCSP}(\Pi)$, is specified by a single ordering constraint function $\Pi: \mathsf{S}_k \rightarrow \{0,1\}$, for some positive integer arity $k$. An {\em instance} of $\textsf{Max-OCSP}(\Pi)$ on $n$ variables is given by $m$ constraints $C_0,\ldots,C_{m-1}$ where $C_i = (\Pi,\mathbf{j}(i))$, i.e., the application of the function $\Pi$ to the variables $\mathbf{j}(i) = (j(i)_0,\ldots,j(i)_{k-1})$. (We omit $\Pi$ from the description of a constraint $C_i$ when clear from context.) The \emph{value} of an ordering $\boldsymbol{\sigma} \in \mathsf{S}_n$ on an instance $\Psi = (C_0,\ldots,C_{m-1})$, denoted $\textsf{val}_\Psi(\boldsymbol{\sigma})$, is the fraction of constraints satisfied by $\boldsymbol{\sigma}$, i.e., $\textsf{val}_\Psi(\boldsymbol{\sigma})=\tfrac{1}{m}\sum_{i\in[m]} \Pi(\textsf{ord}(\boldsymbol{\sigma}|_{\mathbf{j}(i)}))$. The optimal value of $\Psi$ is defined as $\textsf{val}_\Psi = \max_{\boldsymbol{\sigma} \in \mathsf{S}_n}\{\textsf{val}_\Psi(\boldsymbol{\sigma})\}$. 
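To make these definitions concrete, here is a minimal brute-force sketch in Python (our illustration only, not part of the formal development; the helper names \texttt{ord\_perm}, \texttt{val}, and \texttt{opt\_val} are ours):
\begin{verbatim}
# Brute-force evaluation of Max-OCSP(Pi) instances (illustration only).
from itertools import permutations

def ord_perm(a):
    # ord(a_0, ..., a_{k-1}): the unique pi in S_k with
    # a[pi(0)] < a[pi(1)] < ... < a[pi(k-1)].
    return tuple(sorted(range(len(a)), key=lambda i: a[i]))

def val(instance, Pi, sigma):
    # Fraction of constraints j = (j_0, ..., j_{k-1})
    # satisfied by sigma, i.e. with Pi(ord(sigma|_j)) = 1.
    return sum(Pi(ord_perm([sigma[j] for j in js]))
               for js in instance) / len(instance)

def opt_val(instance, Pi, n):
    # val_Psi: maximize over all of S_n (exponential time; tiny n only).
    return max(val(instance, Pi, s) for s in permutations(range(n)))

# Example: the "forward edge" predicate on S_2 and the directed 3-cycle;
# every ordering leaves exactly one back edge, so the optimum is 2/3.
Pi_fwd = lambda pi: int(pi == (0, 1))
assert opt_val([(0, 1), (1, 2), (2, 0)], Pi_fwd, 3) == 2 / 3
\end{verbatim}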
The simplest, and arguably most interesting, problem which fits the $\textsf{Max-OCSP}$ framework is the \emph{maximum acyclic subgraph} ($\textsf{MAS}$) problem. In this problem, the input is a directed graph on $n$ vertices, and the goal is to find an ordering of the vertices which maximizes the number of forward edges. A simple depth-first search algorithm can decide whether a given graph $G$ has a \emph{perfect} ordering (i.e., one which has \emph{no} back edges); however, Karp~\cite{Kar72}, in his famous list of 21 $\NP$-complete problems, proved the $\NP$-completeness of deciding whether, given a graph $G$ and a parameter $k$, there exists an ordering of the vertices such that at least $k$ edges are forward. For our purposes, $\textsf{MAS}$ can be viewed as a 2-ary $\textsf{Max-OCSP}$ problem, by defining the ordering constraint predicate $\Pi_{\textsf{MAS}}: \mathsf{S}_2 \rightarrow \{0,1\}$ given by $\Pi_{\textsf{MAS}}([0\;1]) = 1$ and $\Pi_{\textsf{MAS}}([1\;0]) = 0$, and associating vertices with variables and edges with constraints. Indeed, an edge/constraint $(u,v)$ (where $u,v \in [n]$ are distinct variables/vertices) will be satisfied by an assignment/ordering $\boldsymbol{\sigma} \in \mathsf{S}_n$ iff $\Pi_{\textsf{MAS}}(\textsf{ord}(\boldsymbol{\sigma}|_{(u,v)})) = 1$, or equivalently, iff $\boldsymbol{\sigma}(u) < \boldsymbol{\sigma}(v)$. A second natural $\textsf{Max-OCSP}$ problem is the \emph{maximum betweenness} ($\mathsf{Max}\textsf{Btwn}$) problem. This is a 3-ary OCSP in which an ordering $\boldsymbol{\sigma}$ satisfies a constraint $(u,v,w)$ iff $\boldsymbol{\sigma}(v)$ is between $\boldsymbol{\sigma}(u)$ and $\boldsymbol{\sigma}(w)$, i.e., iff $\boldsymbol{\sigma}(u) < \boldsymbol{\sigma}(v) < \boldsymbol{\sigma}(w)$ \emph{or} $\boldsymbol{\sigma}(u) > \boldsymbol{\sigma}(v) > \boldsymbol{\sigma}(w)$, and the goal is again to find an ordering satisfying the maximum number of constraints. This problem is given by the ordering constraint function $\Pi_{\textsf{Btwn}}:\mathsf{S}_3 \rightarrow \{0,1\}$ with $\Pi_\textsf{Btwn}([0\;1\;2]) = 1, \Pi_\textsf{Btwn}([2\;1\;0]) = 1$, and $\Pi_\textsf{Btwn}(\boldsymbol{\pi}) = 0$ for all other $\boldsymbol{\pi} \in \mathsf{S}_3$. The complexity of maximizing betweenness was originally studied by Opatrny~\cite{Opa79}, who proved that even deciding whether a set of betweenness constraints is perfectly satisfiable is $\mathsf{NP}$-complete. \subsection{Approximability} In this work we consider the \emph{approximability} of ordering constraint satisfaction problems. We say that a (randomized) algorithm $A$ is an {\em $\alpha$-approximation algorithm} for $\textsf{Max-OCSP}(\Pi)$ if for every instance $\Psi$, $\alpha \cdot \textsf{val}_\Psi \leq A(\Psi) \leq \textsf{val}_\Psi$ with probability at least 2/3 over the internal coin tosses of $A$. Thus our approximation factors $\alpha$ are numbers in the interval $[0,1]$. Given $\Pi:\mathsf{S}_k \to \{0,1\}$ let $\rho(\Pi) = \frac{|\{\boldsymbol{\pi} \in \mathsf{S}_k | \Pi(\boldsymbol{\pi}) = 1\}|}{k!}$ denote the probability that $\Pi$ is satisfied by a random ordering. Every instance $\Psi$ of $\textsf{Max-OCSP}(\Pi)$ satisfies $\textsf{val}_\Psi \geq \rho(\Pi)$ (since a uniformly random ordering satisfies a $\rho(\Pi)$ fraction of the constraints in expectation), and thus the trivial algorithm that always outputs $\rho(\Pi)$ is a $\rho(\Pi)$-approximation algorithm for $\textsf{Max-OCSP}(\Pi)$. Under what conditions it is possible to beat this trivial approximation is a major open question. For $\textsf{Max}\textsf{Btwn}$, the trivial algorithm is a $\frac13$-approximation.
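For concreteness, this $\frac13$ threshold, and the $\frac12$ threshold for $\textsf{MAS}$ discussed next, follow directly from the definition of $\rho$:
\[
\rho(\Pi_{\textsf{MAS}}) = \frac{|\{[0\;1]\}|}{2!} = \frac12, \qquad
\rho(\Pi_{\textsf{Btwn}}) = \frac{|\{[0\;1\;2],\,[2\;1\;0]\}|}{3!} = \frac26 = \frac13.
\]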
Chor and Sudan~\cite{CS98} showed that $(\frac{47}{48}+\epsilon)$-approximating $\mathsf{Max}\textsf{Btwn}$ is $\NP$-hard, for every $\epsilon > 0$. The $\frac{47}{48}$ factor was improved to $\frac12$ by Austrin, Manokaran, and Wenner~\cite{AMW15}. For $\textsf{MAS}$, the trivial algorithm is a $\frac12$-approximation. Newman~\cite{New00} showed that $(\frac{65}{66}+\epsilon)$-approximating $\textsf{MAS}$ is $\NP$-hard, for every $\epsilon > 0$. \cite{AMW15} improved the $\frac{65}{66}$ to $\frac{14}{15}$, and Bhangale and Khot~\cite{BK19} further improved the factor to $\frac23$. We could hope that for every nontrivial $\textsf{Max-OCSP}(\Pi)$, it is $\NP$-hard to even $(\rho(\Pi)+\epsilon)$-approximate $\textsf{Max-OCSP}(\Pi)$ for any constant $\epsilon > 0$. This property is called \emph{approximation resistance} (and we define it more carefully in the setting of streaming algorithms below). Approximation resistance based on $\NP$-hardness is known for certain constraint satisfaction problems which do not fall under the $\textsf{Max-OCSP}$ framework; this includes the seminal result of H{\aa}stad~\cite{Has01} that it is $\NP$-hard to $(\frac78+\epsilon)$-approximate $\mathsf{Max3AND}$ for any $\epsilon > 0$. But as far as we know, such results are lacking for \emph{any} $\textsf{Max-OCSP}$ problem. Given this state of affairs, Guruswami, H{\aa}stad, Manokaran, Raghavendra, and Charikar~\cite{GHM+11} proved the ``next best thing'': assuming the unique games conjecture (UGC) of Khot~\cite{Kho02}, every $\textsf{Max-OCSP}(\Pi)$ is approximation-resistant. But the question of proving approximation resistance for polynomial-time algorithms without relying on unproven assumptions such as UGC or $\P \neq \NP$ remains unsolved. Towards this goal, in this work, we consider the approximability of $\textsf{Max-OCSP}$'s in the \emph{(single-pass) streaming model}, which we define below.
Most of these papers studied approximability, not of \emph{ordering} CSPs, but of ``non-ordering CSPs'' where the variables can take values in a finite alphabet. (\cite{GVV17} and \cite{GT19} are the exceptions, and we will discuss them below.) While single-pass streaming algorithms are a weaker model than general polynomial-time algorithms, we do remark that nontrivial approximations for many problems are possible in the streaming setting. In particular, the $\mathsf{Max2AND}$ problem is (roughly) $\frac49$-approximable in the streaming setting (whereas the trivial approximation is a $\frac14$-approximation) \cite{CGV20}. \subsection{Main result and comparison to prior and related works} \begin{restatable}[Main theorem]{theorem}{maintheorem}\label{main_thm} For every $k \in \mathbb{N}$ and every $\Pi : \mathsf{S}_k \to \{0,1\}$, $\textsf{Max-OCSP}(\Pi)$ is approximation resistant in the (single-pass) streaming setting. In particular for every $\epsilon > 0$, every $(\rho(\Pi) + \epsilon)$-approximation algorithm $A$ for $\textsf{Max-OCSP}(\Pi)$ requires $\Omega(n)$ space. \end{restatable} In particular, our theorem implies that, for every $\epsilon > 0$, $\textsf{MAS}$ is not $(1/2+\epsilon)$-approximable in $o(n)$ space and $\mathsf{Max}\textsf{Btwn}$ is not $(1/3+\epsilon)$-approximable in $o(n)$ space. \autoref{main_thm} is restated in \autoref{sec:lower-bound} and proved there. \autoref{main_thm} parallels the classical result of \cite{GHM+11}, who prove that $\textsf{Max-OCSP}(\Pi)$ is approximation resistant with respect to \emph{polynomial-time} algorithms, for every $\Pi$, assuming the unique games conjecture. In our setting of streaming algorithms, the only problem that seems to have been previously explored in the literature was $\textsf{MAS}$, and even in this case a tight approximability result was not known. In the case of $\textsf{MAS}$, Guruswami, Velingker, and Velusamy~\cite{GVV17} proved that for every $\epsilon > 0$, $\textsf{MAS}$ is not $(\frac78+\epsilon)$-approximable in $o(\sqrt n)$ space, using a gadget reduction from the Boolean hidden matching problem \cite{GKK+08}. A stronger $o(\sqrt n)$-space, $3/4$-approximation hardness for $\textsf{MAS}$ is indicated in the work of Guruswami and Tao~\cite{GT19}, who prove streaming bounds for unique games, a ``non-ordering'' CSP problem, and suggest a reduction from unique games to $\textsf{MAS}$. As far as we know, our result is the first tight approximability result for $\textsf{Max-OCSP}(\Pi)$, for \emph{any} non-constant $\Pi$, against algorithms using $O(n^\delta)$ space (for any $\delta > 0$), and it yields tight approximability results for \emph{every} $\Pi$ against \emph{linear}-space algorithms. We remark that this linear space bound is also optimal (up to logarithmic factors); similarly to the observation in~\cite{CGS+21} for non-ordering CSPs, $\textsf{Max-OCSP}(\Pi)$ values can be approximated arbitrarily well in $\tilde{O}(n)$ space by subsampling $O(n)$ constraints from the input instance and then solving the $\textsf{Max-OCSP}(\Pi)$ problem on this subinstance exactly.\footnote{This assumes a definition of streaming complexity which makes no restriction on time complexity. Of course, if we restrict to polynomial time, then assuming the unique games conjecture, no nontrivial approximation will be possible.} Chakrabarti, Ghosh, McGregor, and Vorotnikova~\cite{CGMV20} recently also studied directed graph ordering problems (e.g., acyclicity testing, $(s,t)$-connectivity, topological sorting) in the streaming setting.
For the problems considered in \cite{CGMV20}, their work gives {\em super-linear} space lower bounds even for multi-pass streaming algorithms. Note that for our problems an $\tilde{O}(n)$ upper bound holds, suggesting that their problems are not OCSPs. Indeed this is true, but one of the problems considered is close enough to $\textsf{MAS}$ to allow a more detailed comparison. The specific problem is the \emph{minimum feedback arc set} ($\textsf{MFAS}$) problem, the goal of which is to output the fractional size of the smallest set of edges whose removal produces an acyclic graph. In other words, the sum of the $\textsf{MFAS}$ value and the $\textsf{MAS}$ value of a graph is exactly one. \cite{CGMV20} proved that for every $\kappa > 1$, $\kappa$-approximating\footnote{For minimization problems a $\kappa$-approximation is one whose value is at least the minimum value and at most $\kappa$ times the minimum. Thus approximation factors are larger than $1$.} the $\textsf{MFAS}$ value requires $\Omega(n^2)$ space in the streaming setting (for a single pass, and more generally $\Omega(n^{1+\Omega(1/p)}/p^{O(1)})$ space for $p$ passes). Note that such lower bounds are obtained using instances with optimum $\textsf{MFAS}$ values that are $o(1)$. Thus the $\textsf{MAS}$ values of the same graphs are $1 - o(1)$ (even in the \textbf{NO}\ instances), and so these results do not imply any hardness of approximation for $\textsf{MAS}$. \subsection{Techniques} Our general approach is to start with a hardness result for CSPs over alphabets of size $q$ (i.e., constraint satisfaction problems where the variables take values in $[q]$), and then to reduce these CSPs to the OCSP at hand. While this general approach is not new, the optimality of our results seems to come from the fact that we choose the CSP problem carefully, and are able to get optimal hardness results for problems of our choice thanks to a general result of Chou, Golovnev, Sudan, Velingker and Velusamy~\cite{CGS+21}. Thus, whereas previous approaches towards proving hardness of $\textsf{MAS}$, for example, were unable to get optimal hardness results for $\textsf{MAS}$ despite starting with optimal hardness results for the source problem (unique games), by choosing our source problem more carefully we manage to get optimal hardness results. In the remainder of this section, we describe and motivate this approach towards proving the approximation-resistance of $\textsf{Max-OCSP}$'s. \subsubsection{Special case: The intuition for $\textsf{MAS}$} We start by describing our proof technique for the special case of the $\textsf{MAS}$ problem. In this section, for readability, we (mostly) use the language of graphs, edges, and vertices instead of instances, constraints, and variables.
Similarly to earlier work in the setting of streaming approximability (e.g., \cite{KKS15}), we prove inapproximability of $\textsf{MAS}$ by exhibiting a pair of distributions, which we denote $\mathcal{G}^Y$ and $\mathcal{G}^N$, satisfying the following two properties: \begin{enumerate} \item $\mathcal{G}^Y$ and $\mathcal{G}^N$ are ``indistinguishable'' to streaming algorithms (to be defined formally below) \item (With high probability) $\mathcal{G}^Y$ has high $\textsf{MAS}$ values ($\approx 1$) and $\mathcal{G}^N$ has low $\textsf{MAS}$ values ($\approx \frac12$) \end{enumerate} The existence of such distributions would suffice to establish the theorem: there cannot be any streaming approximation for $\textsf{MAS}$, since any such algorithm would be able to distinguish these distributions. But how are we to actually construct distributions $\mathcal{G}^Y$ and $\mathcal{G}^N$ satisfying these properties? The strategy which has proved successful in past work for proving streaming approximation resistance of other varieties of CSPs was roughly to let the $\mathcal{G}^N$ graphs be completely random, while $\mathcal{G}^Y$ graphs are sampled with ``hidden structure'', which is essentially a very good assignment. Then, one would show that streaming algorithms cannot detect the existence of such hidden structure, via a reduction to a communication game (typically a variant of Boolean hidden matching \cite{GKK+08,VY11}). In our setting, we might hope that the hidden structure could simply be an ordering; that is, we could hope to define $\mathcal{G}^Y$ by first sampling a random ordering of the vertices, then sampling edges which go forward with respect to this ordering, and then perhaps adding some noise. But unfortunately, we lack the techniques to prove communication lower bounds when orderings are the hidden structure. Hence, instead of seeking a direct proof of an indistinguishability result, in this paper, we turn back to earlier indistinguishability results proven in the context of \emph{non-ordering CSPs}. In this setting, variables take on values in an alphabet $[q]$, and constraints specify allowed values of subsets of the variables. In particular, two distinct variables may take on the same value in $[q]$, whereas in the ordering setting, every variable in $[n]$ must get a distinct value in $[n]$. (See \autoref{sec:nonordering-csps} for a formal definition.) We will set $q$ to be a large constant, carefully design a non-ordering CSP function, employ past results (i.e., \cite{CGS+21}) to characterize its streaming inapproximability, examine the $\mathcal{G}^Y$ and $\mathcal{G}^N$ graphs created in the reduction, and then show that $\mathcal{G}^N$ graphs have low $\textsf{MAS}$ values while the hidden structure in the $\mathcal{G}^Y$ graphs --- even if it isn't an ordering per se --- guarantees high $\textsf{MAS}$ values. Why would we expect such an idea to work out, and how do we properly choose the non-ordering CSP constraint function? To begin, this constraint function will be a 2-ary function $f : [q]^2 \to \{0,1\}$. Let $\textsf{Max-CSP}(f)$ denote the non-ordering CSP problem of maximizing the number of $f$ constraints satisfied by an assignment $\mathbf{b} \in [q]^n$. We will view an input graph $G$ simultaneously as an instance of $\textsf{MAS}$ and as an instance of $\textsf{Max-CSP}(f)$, with the same underlying set of edges/constraints. For a graph $G$, let $\textsf{val}_G$ denote its $\textsf{MAS}$ value and $\overline{\textsf{val}}_G$ its value in $\textsf{Max-CSP}(f)$. 
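To fix ideas before we specify $f$, the following minimal brute-force sketch (our illustration; \texttt{mas\_value} and \texttt{csp\_value} are hypothetical helper names) computes both quantities for a toy graph:
\begin{verbatim}
# val_G (MAS value) and the Max-CSP(f) value, both by brute force, for a
# tiny directed graph given as a list of edges (u, v). Illustration only.
from itertools import permutations, product

def mas_value(n, edges):
    # Best fraction of edges (u, v) with sigma(u) < sigma(v).
    return max(sum(s[u] < s[v] for u, v in edges) / len(edges)
               for s in permutations(range(n)))

def csp_value(n, edges, f, q):
    # Best fraction of edges (u, v) with f(b_u, b_v) = 1, over b in [q]^n.
    return max(sum(f(b[u], b[v]) for u, v in edges) / len(edges)
               for b in product(range(q), repeat=n))

# On the directed 3-cycle with q = 3 and the "less than" predicate
# (a preview of the f chosen below), both values equal 2/3.
tri = [(0, 1), (1, 2), (2, 0)]
assert mas_value(3, tri) == csp_value(3, tri, lambda x, y: x < y, 3) == 2 / 3
\end{verbatim}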
We will choose $f$ so that the indistinguishable hard distributions $\mathcal{G}^Y$ and $\mathcal{G}^N$ (originating from the reduction of \cite{CGS+21}) have the following four properties: \begin{enumerate} \item With high probability over $G \sim \mathcal{G}^Y$, $\overline{\textsf{val}}_G \approx 1$.\label{item:nonordering-y} \item With high probability over ${G \sim \mathcal{G}^N}$, $\overline{\textsf{val}}_G \approx \frac12$.\label{item:nonordering-n} \item For all $G$, $\textsf{val}_G \geq \overline{\textsf{val}}_G$.\label{item:gap-y} \item With high probability over ${G \sim \mathcal{G}^N}$, $\textsf{val}_G$ is not much larger than $\overline{\textsf{val}}_G$.\label{item:gap-n} \end{enumerate} Together, these items will suffice to prove the theorem, since \autoref{item:nonordering-n} and \autoref{item:gap-n} together imply that with high probability over $G \sim \mathcal{G}^N$, $\textsf{val}_G \approx \frac12$, while \autoref{item:nonordering-y} and \autoref{item:gap-y} together imply that with high probability over $G \sim \mathcal{G}^Y$, $\textsf{val}_G \approx 1$. Concretely, we set up the non-ordering CSP function as follows. Recall that $\Pi_{\textsf{MAS}}([0\;1])= 1$ while $\Pi_{\textsf{MAS}}([1\; 0])=0$. We define the constraint function $f^q_{\textsf{MAS}} : [q]^2 \to \{0,1\}$ by $f^q_{\textsf{MAS}}(x,y) = 1$ iff $x < y$. Note that $f^q_{\textsf{MAS}}$ is supported on $\frac{q(q-1)}2$ of the $q^2$ pairs in $[q]^2$, i.e., on roughly a $\frac12$ fraction of them. We first show that \cite{CGS+21}'s results imply that $\textsf{Max-CSP}(f^q_{\textsf{MAS}})$ is approximation-resistant, and pick $\mathcal{G}^Y$ and $\mathcal{G}^N$ as the $\textbf{YES}$ and $\textbf{NO}$ distributions witnessing this result. This immediately yields \autoref{item:nonordering-y} and \autoref{item:nonordering-n} above. It remains to prove \autoref{item:gap-n} and \autoref{item:gap-y}. In the remainder of this subsection, we sketch the proofs; see \autoref{fig:mas} for a visual depiction, and \autoref{sec:bounds} for the formal proofs. Towards \autoref{item:gap-y}, we take advantage of the fact that $\textsf{Max-CSP}(f^q_{\textsf{MAS}})$ captures a ``$q$-coarsening'' of $\textsf{MAS}$. We consider an arbitrary $\textsf{Max-CSP}(f^q_{\textsf{MAS}})$-assignment $\mathbf{b} \in [q]^n$ for a graph $G$, which assigns to the $i$-th vertex a value $b_i \in [q]$. We construct an ordering of $G$'s vertices by first placing the ``block'' of vertices assigned value $0$, then the block of vertices assigned $1$, etc., finally placing the vertices assigned value $q-1$. (Within any particular block, the vertices may be ordered arbitrarily.) Now whenever an edge $(u,v)$ is satisfied by $\mathbf{b}$ when viewing $G$ as an instance of $\textsf{Max-CSP}(f^q_{\textsf{MAS}})$ --- that is, whenever $b_v > b_u$ --- the same edge will be satisfied by our constructed ordering when viewing $G$ as an instance of $\textsf{MAS}$. Hence $\textsf{val}_G \geq \overline{\textsf{val}}_G$. Towards \autoref{item:gap-n}, we can no longer use the results of \cite{CGS+21} as a black box. Instead, we show that the graphs drawn from $\mathcal{G}^N$ are ``small partition expanders'' in a specific sense: for any partition of the constraint graph into $q$ roughly equal-sized blocks, only an $o(1)$ fraction of the edges lie \emph{within} the blocks.
Now, we think of an ordering $\boldsymbol{\sigma} \in \mathsf{S}_n$ of the variables as dividing the $n$ variables into $q$ blocks, with the variables mapped by $\boldsymbol{\sigma}$ to positions $0,\ldots,n/q-1$ forming the first block, those mapped to positions $n/q,\ldots,2n/q-1$ forming the second block, and so on. Whenever an edge $(u,v)$ is satisfied by $\boldsymbol{\sigma}$ when viewing $G$ as an instance of $\textsf{MAS}$, it will also be satisfied by the induced assignment (which assigns to each variable the index of its block) when viewing $G$ as an instance of $\textsf{Max-CSP}(f^q_{\textsf{MAS}})$, \emph{unless} $u$ and $v$ end up in the same block; but by the small partition expansion condition, this happens only for an $o(1)$ fraction of the edges. Hence $\textsf{val}_G \leq \overline{\textsf{val}}_G + o(1)$. We remark in passing that our notion of coarsening is somewhat similar to, but not the same as, that used in previous works, notably~\cite{GHM+11}. In particular the techniques used to compare the OCSP value (before coarsening) with the non-ordering CSP value (after coarsening) are somewhat different: their analysis involves more sophisticated tools such as the influence of variables and Gaussian noise stability. The proof of \autoref{item:gap-n} in our setting, in contrast, uses a more elementary analysis of the type common with random graphs. Finally, we remark that in the rest of the paper, in the interest of self-containedness, our construction will ``forget'' about $\textsf{Max-CSP}(f^q_{\textsf{MAS}})$, define the distributions $\mathcal{G}^Y$ and $\mathcal{G}^N$ explicitly, and treat $\overline{\textsf{val}}_G$ simply as an artifact of the analysis which calculates the $\textsf{MAS}$ values of $\mathcal{G}^Y$ and $\mathcal{G}^N$; but we hope that this discussion has motivated the construction. \subsubsection{Extending to general ordering CSPs} Extending the idea to other OCSPs involves two additional steps. Given the constraint function $\Pi$ (of arity $k$) and a positive integer $q$, we define $f^q_\Pi$ analogously to $f^q_\textsf{MAS}$. We then explicitly describe the $\textbf{YES}$ and $\textbf{NO}$ distributions of $\textsf{Max-CSP}(f^q_{\Pi})$ which the general theorem of \cite{CGS+21} shows are indistinguishable to $o(n)$-space algorithms. Crucial to this application is the observation that $f^q_{\Pi}$ is a ``$\left(1-\frac{k-1}{q}\right)$-wide'' function, where $f^q_\Pi$ is \emph{$\omega$-wide} if there exists a vector $\mathbf{v} = (v_0,\ldots,v_{k-1}) \in [q]^k$ such that for an $\omega$-fraction of $a \in [q]$, we have $f^q_{\Pi}(v_0+a,\ldots,v_{k-1}+a) = 1$. This would allow us to conclude that $\textsf{Max-CSP}(f^q_{\Pi})$ is hard to approximate to within a factor of roughly $\rho/\omega$, though as in the special case of $\textsf{MAS}$ we do not use this result explicitly.\footnote{Indeed, the ``width'' observation is involved in the proof of \autoref{item:nonordering-y} and \autoref{item:nonordering-n} even in the $\textsf{MAS}$ case (with $k=2$).} Instead, the second step of our proof replicates \autoref{item:gap-n} above. We give an analysis of the partition expansion in the $\textbf{NO}$ instances arising from the construction in~\cite{CGS+21}. Specifically, we show that the constraint hypergraph is now a ``small partition hypergraph expander'', in the sense that for any partition into $q$ roughly equal-sized blocks, very few hyperedges contain even two vertices from the same block.
With these two additional ingredients in place, and following the same template as in the hardness for $\textsf{MAS}$, we immediately get the approximation resistance of $\textsf{Max-OCSP}(\Pi)$ for general $\Pi$. \paragraph{This version.} The current version of this paper improves on a previous version~\cite{SSV21-early-version}, which gave only $\Omega(\sqrt{n})$ space lower bounds for all OCSPs. Our improvement to $\Omega(n)$ space lower bounds comes by invoking the more recent results of \cite{CGS+21}, whereas our previous version used the strongest lower bounds for CSPs that were available at the time, from an earlier work of Chou, Golovnev, Sudan, and Velusamy~\cite{CGSV21a}. The results of \cite{CGSV21a} are quantitatively weaker for the problems considered in \cite{CGS+21}, though they apply to a broader collection of problems. Interestingly, for our application, which covers {\em all} OCSPs, the narrower set of problems considered in \cite{CGS+21} suffices. We also note that the proof in this version of our paper is more streamlined thanks to the notion of ``wide'' constraints introduced and used in \cite{CGS+21}. \paragraph{Organization of the rest of the paper.} In \autoref{sec:prelim} we introduce notation and background material. In \autoref{sec:lower-bound} we prove our main theorem, \autoref{main_thm}. In this section we also introduce two distributions on $\textsf{Max-OCSP}(\Pi)$ instances, the \textbf{YES}\ distribution and the \textbf{NO}\ distribution, and state lemmas asserting that these distributions are concentrated on instances with high, and respectively low, OCSP value, and that these distributions are indistinguishable to single-pass small-space streaming algorithms. We prove the lemmas on the OCSP values in \autoref{sec:bounds}, and prove the indistinguishability lemma in \autoref{sec:streaming}. \section{Preliminaries and definitions}\label{sec:prelim} \subsection{Basic notation} Some of the notation we use is already introduced in \autoref{ssec:intro-notation}; here we introduce some additional notation. The \emph{support} of an ordering constraint function $\Pi : \mathsf{S}_k \to \{0,1\}$ is the set $\textsf{supp}(\Pi) = \{\boldsymbol{\pi} \in \mathsf{S}_k | \Pi(\boldsymbol{\pi}) =1\}$. Addition of elements in $[q]$ is implicitly taken modulo $q$. Throughout this paper we will be working with $k$-uniform {\em ordered} hypergraphs, or simply $k$-hypergraphs, defined in the sequel. Given a finite set $V$, an (ordered, self-loop-free) {\em $k$-hyperedge} $\mathbf{e} = (v_1,\ldots,v_k)$ is a sequence of $k$ distinct elements $v_1,\ldots,v_k \in V$. We stress that the ordering of vertices within an edge is important to us. An (ordered, self-loop-free, multi-) \emph{$k$-hypergraph} $G = (V,E)$ is given by a set of vertices $V$ and a multiset $E = E(G) \subseteq V^k$ of \emph{$k$-hyperedges}. A $k$-hyperedge $\mathbf{e}$ is \emph{incident} on a vertex $v$ if $v$ appears in $\mathbf{e}$. Let $\Gamma(\mathbf{e}) \subseteq V$ denote the set of vertices to which a $k$-hyperedge $\mathbf{e}$ is incident, and let $m = m(G)$ denote the number of $k$-hyperedges in $G$. A $k$-hypergraph is a \emph{$k$-hypermatching} if it has the property that no pair of (distinct) $k$-hyperedges is incident on the same vertex. For $\alpha \leq \frac1k$, an \emph{$\alpha$-partial $k$-hypermatching} is a $k$-hypermatching on a vertex set of size $n$ which contains $\alpha n$ $k$-hyperedges.
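A uniformly random such hypermatching can be sampled by chunking a uniformly random permutation of the vertices into consecutive $k$-tuples; a minimal sketch (our illustration; the function name is hypothetical):
\begin{verbatim}
# Sample an alpha-partial k-hypermatching on [n] (illustration only):
# alpha*n ordered k-hyperedges on pairwise disjoint sets of vertices.
import random

def sample_partial_hypermatching(n, k, alpha):
    m = int(alpha * n)
    assert m * k <= n, "need alpha <= 1/k"
    verts = list(range(n))
    random.shuffle(verts)  # a uniformly random order on the vertices
    return [tuple(verts[i * k:(i + 1) * k]) for i in range(m)]
\end{verbatim}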
We let $\mathcal{H}_{k,n,\alpha}$ denote the uniform distribution over all $\alpha$-partial $k$-hypermatchings on $[n]$. A vector $\mathbf{b} = (b_0,\ldots,b_{n-1}) \in [q]^n$ may be viewed as a \emph{$q$-partition} of $[n]$ into \emph{blocks} $\mathbf{b}^{-1}(0),\ldots,\mathbf{b}^{-1}(q-1)$, where the $i$-th block $\mathbf{b}^{-1}(i)$ is defined as the set of indices $\{j \in [n] : b_j = i\}$. Given $\mathbf{b} = (b_0,\ldots,b_{n-1}) \in [q]^n$ and an indexing vector $\mathbf{j} = (j_0,\ldots,j_{k-1}) \in [n]^k$, we define $\mathbf{b}|_{\mathbf{j}} = (b_{j_0},\ldots,b_{j_{k-1}})$. Given an instance $\Psi$ of $\textsf{Max-OCSP}(\Pi)$ on $n$ variables, we define the \emph{constraint hypergraph} $G(\Psi)$ to be the $k$-hypergraph on $[n]$, where each $k$-hyperedge corresponds to a constraint (given by the exact same $k$-tuple). We also let $m(\Psi)$ denote the number of constraints in $\Psi$ (equiv., the number of $k$-hyperedges in $G(\Psi)$). \subsection{Concentration bound} We also require \emph{Azuma's inequality}, a concentration inequality for submartingales. For us, the following form for Boolean-valued random variables with bounded conditional expectations, taken from Kapralov and Krachun~\cite{KK19}, is particularly convenient. \begin{lemma}[{{\cite[Lemma 2.5]{KK19}}}]\label{lemma:azuma} Let $X_0,\ldots,X_{m-1}$ be (not necessarily independent) $\{0,1\}$-valued random variables, such that for some $p \in (0,1)$, $\E[X_i\mid X_0,\ldots,X_{i-1}] \leq p$ for every $i \in [m]$. Then if $\mu := pm$, for every $\nu > 0$, \[ \Pr[X_0 + \cdots + X_{m-1} \geq \mu + \nu] \leq \exp\left(-\frac12\cdot\frac{\nu^2}{\mu+\nu}\right). \] \end{lemma} \section{The streaming space lower bound}\label{sec:lower-bound} In this section we prove our main theorem, modulo some lemmas that we prove in later sections. We restate the theorem below for convenience. \maintheorem* Our lower bound is proved, as is usual for such statements, by showing that no small-space algorithm can ``distinguish'' $\textbf{YES}$ instances with OCSP value at least $1-\varepsilon/2$ from $\textbf{NO}$ instances with OCSP value at most $\rho(\Pi)+\varepsilon/2$. Such a statement is in turn proved by exhibiting two families of distributions, the $\textbf{YES}$ distributions and the $\textbf{NO}$ distributions, and showing that these are indistinguishable. Specifically, we choose parameters $q, T, \alpha$ and a permutation $\boldsymbol{\pi} \in \mathsf{S}_k$ carefully and define two distributions $\mathcal{G}^Y = \mathcal{G}^{Y,\boldsymbol{\pi}}_{q,n,\alpha,T}(\Pi)$ and $\mathcal{G}^N = \mathcal{G}^N_{q,n,\alpha,T}(\Pi)$. We claim that for our choice of parameters $\mathcal{G}^Y$ is supported on instances with value at least $1 - \varepsilon/2$ --- this is asserted in \autoref{lemma:g_y-lower-bound}. Similarly we claim that $\mathcal{G}^N$ is mostly supported (with probability $1-o(1)$) on instances with value at most $\rho(\Pi)+\epsilon/2$ (see \autoref{lemma:g_n-upper-bound}). Finally we assert in \autoref{lem:our-indist} that any algorithm that distinguishes $\mathcal{G}^Y$ from $\mathcal{G}^N$ with ``advantage'' at least $1/8$ (i.e., accepts $\Psi\sim\mathcal{G}^Y$ with probability $1/8$ more than $\Psi \sim \mathcal{G}^N$) requires $\Omega(n)$ space. Assuming \autoref{lemma:g_y-lower-bound}, \autoref{lemma:g_n-upper-bound}, and \autoref{lem:our-indist}, the proof of \autoref{main_thm} is straightforward and is given at the end of this section.
Proofs of \autoref{lemma:g_y-lower-bound} and \autoref{lemma:g_n-upper-bound} are in \autoref{sec:bounds}, and the proof of \autoref{lem:our-indist} is in \autoref{sec:streaming}. \subsection{Distribution of hard instances}\label{sec:hard_distributions} For $\ell \in [q]$, define the $k$-tuple of ``contiguous'' values $\mathbf{v}_q^{(\ell)} = (\ell, \ldots, \ell+k-1) \in [q]^k$. Crucially, since the addition here is taken modulo $q$, we may have $\ell+k-1 < \ell$ and in particular $\textsf{ord}(\mathbf{v}_q^{(\ell)})$ may not be the identity. For a $k$-tuple $\mathbf{a} = (a_0,\ldots,a_{k-1})$ and a permutation $\boldsymbol{\pi} \in \mathsf{S}_k$, define the \emph{permuted} $k$-tuple $\mathbf{a}_{\boldsymbol{\pi}}$ as $(a_{\boldsymbol{\pi}^{-1}(0)},\ldots,a_{\boldsymbol{\pi}^{-1}(k-1)})$. In particular, we have $(\mathbf{v}_q^{(\ell)})_{\boldsymbol{\pi}} = (\boldsymbol{\pi}^{-1}(0) + \ell ,\ldots,\boldsymbol{\pi}^{-1}(k-1)+\ell)$. We define $\mathbf{a}_{\boldsymbol{\pi}}$ in this way because: \begin{proposition}\label{prop:order-permuted} If $\mathbf{a}$ is a $k$-tuple of distinct integers, then $\textsf{ord}(\mathbf{a}_{\boldsymbol{\pi}}) = \textsf{ord}(\mathbf{a}) \circ \boldsymbol{\pi}$ (where $\circ$ denotes composition of permutations). \end{proposition} \begin{proof} Recall that $\textsf{ord}(\mathbf{a})$ is the unique permutation $\boldsymbol{\tau}$ such that $a_{\boldsymbol{\tau}(0)} < \cdots < a_{\boldsymbol{\tau}(k-1)}$. Let $\boldsymbol{\tau} = \textsf{ord}(\mathbf{a})$, and let $\boldsymbol{\sigma} = \textsf{ord}(\mathbf{a}_{\boldsymbol{\pi}})$, so that $\boldsymbol{\sigma}$ is the unique permutation such that $a_{\boldsymbol{\sigma}(\boldsymbol{\pi}^{-1}(0))} < \cdots < a_{\boldsymbol{\sigma}(\boldsymbol{\pi}^{-1}(k-1))}$. Then $\boldsymbol{\tau} = \boldsymbol{\sigma} \circ \boldsymbol{\pi}^{-1}$. Hence $\boldsymbol{\tau} \circ \boldsymbol{\pi} = \boldsymbol{\sigma}$, as desired. \end{proof} We now formally define our $\textbf{YES}$ and $\textbf{NO}$ distributions for $\textsf{Max-OCSP}(\Pi)$. \begin{definition}[$\mathcal{G}_{q,n,\alpha,T}^{Y,\boldsymbol{\pi}}(\Pi)$ and $\mathcal{G}_{q,n,\alpha,T}^{N}(\Pi)$]\label{def:YES_NO_MaxOCSP_dist} For $k \in \mathbb{N}$ and $\Pi : \mathsf{S}_k \to \{0,1\}$, let $q,n,T \in \mathbb{N}$, $\alpha > 0$, and let $B = N$ or $B = (Y,\boldsymbol{\pi})$ for some $\boldsymbol{\pi} \in \textsf{supp}(\Pi)$. We define the distribution $\mathcal{G}^B_{q,n,\alpha,T}$, over $n$-variable $\textsf{Max-OCSP}(\Pi)$ instances, as follows: \begin{enumerate} \item Sample a uniformly random $q$-partition $\mathbf{b} = (b_0,\ldots,b_{n-1}) \in [q]^n$. \item Sample $T$ hypermatchings $\tilde G_0,\ldots,\tilde G_{T-1} \sim \mathcal{H}_{k,n,\alpha}$ independently. \item For each $t \in [T]$, do the following: \begin{itemize} \item Let $G_t$ be an empty $k$-hypergraph on $[n]$. \item For each $k$-hyperedge $\tilde{\mathbf{e}} = (j_0,\ldots,j_{k-1}) \in E(\tilde G_t)$: \begin{itemize} \item ($\textbf{YES}$) If $B = (Y,\boldsymbol{\pi})$, and there exists $\ell \in [q]$ such that $\mathbf{b}|_{\tilde{\mathbf{e}}} = (\mathbf{v}_q^{(\ell)})_{\boldsymbol{\pi}}$, add $\tilde{\mathbf{e}}$ to $G_t$ with probability $\frac1q$. \item ($\textbf{NO}$) If $B = N$, add $\tilde{\mathbf{e}}$ to $G_t$ with probability $\frac1{q^k}$. \end{itemize} \end{itemize} \item Let $G := G_0 \cup \cdots \cup G_{T-1}$. \item Return the $\textsf{Max-OCSP}(\Pi)$ instance $\Psi$ on $n$ variables given by the constraint hypergraph $G$.
\end{enumerate} We say that an algorithm $\mathbf{ALG}$ achieves advantage $\delta$ in distinguishing $\mathcal{G}_{q,n,\alpha,T}^{Y,\boldsymbol{\pi}}(\Pi)$ from $\mathcal{G}_{q,n,\alpha,T}^{N}(\Pi)$ if there exists an $n_0$ such that for all $n \geq n_0$, we have $$\left\lvert\Pr_{\Psi \sim \mathcal{G}_{q,n,\alpha,T}^{Y,\boldsymbol{\pi}}(\Pi)}[\mathbf{ALG}(\Psi)=1] - \Pr_{\Psi \sim \mathcal{G}_{q,n,\alpha,T}^{N}(\Pi)}[\mathbf{ALG}(\Psi)=1]\right\rvert \geq \delta.$$ \end{definition} We make several remarks on this definition. Firstly, note that the constraints within $\mathcal{G}^{Y,\boldsymbol{\pi}}_{q,n,\alpha,T}(\Pi)$ and $\mathcal{G}^{N}_{q,n,\alpha,T}(\Pi)$ do not directly depend on $\Pi$. We still parameterize the distributions by $\Pi$, since they are formally distributions over $\textsf{Max-OCSP}(\Pi)$ instances; $\Pi$ also determines the set of allowed permutations $\boldsymbol{\pi}$ in the $\textbf{YES}$ case as well as the underlying arity $k$. However, we will omit the parameterization $(\Pi)$ when clear from context. Secondly, we note that when sampling an instance from $\mathcal{G}^N_{q,n,\alpha,T}$, the partition $\mathbf{b}$ has no effect, and so $\mathcal{G}^N_{q,n,\alpha,T}$ is completely random. Hence these instances fit into the standard paradigm for streaming lower bounds of ``random graphs vs. random graphs with hidden structure''. Finally, we observe that the number of constraints in both distributions is distributed as a sum of $m = n \alpha T$ independent Bernoulli$(\frac1{q^k})$ random variables. In the following section we state lemmas which highlight the main properties of the distributions above. See \autoref{fig:mas} for a visual interpretation of the distributions in the case of $\textsf{MAS}$. \input{mas_picture} \subsection{Statement of key lemmas} Our first lemma shows that $\mathcal{G}^Y$ is supported on instances of high value. \begin{restatable}[$\mathcal{G}^Y$ has high $\textsf{Max-OCSP}(\Pi)$ values]{lemma}{gylowerbound}\label{lemma:g_y-lower-bound} For every ordering constraint satisfaction function $\Pi$, every $\boldsymbol{\pi} \in \textsf{supp}(\Pi)$ and $\Psi\sim \mathcal{G}^{Y,\boldsymbol{\pi}}_{q,n,\alpha,T}$, we have $\textsf{val}_\Psi \geq 1-\frac{k-1}{q}$ (i.e., this occurs with probability 1). \end{restatable} We prove \autoref{lemma:g_y-lower-bound} in \autoref{sec:g_y-lower-bound}. Next we assert that $\mathcal{G}^N$ is supported mostly on instances of low value. \begin{restatable}[$\mathcal{G}^N$ has low $\textsf{Max-OCSP}(\Pi)$ values]{lemma}{gnupperbound}\label{lemma:g_n-upper-bound} For every $k$-ary ordering constraint function $\Pi : \mathsf{S}_k \to \{0,1\}$, and every $\epsilon >0$, there exists $q_0 \in \mathbb{N}$ and $\alpha_0 \geq 0$ such that for all $q \geq q_0$ and $\alpha \leq \alpha_0$, there exists $T_0 \in \mathbb{N}$ such that for all $T \geq T_0$, for sufficiently large $n$, we have \[ \Pr_{\Psi \sim \mathcal{G}^N_{q,n,\alpha,T}}\left[\textsf{val}_\Psi \geq \rho(\Pi) + \frac{\epsilon}2\right] \leq 0.01. \] \end{restatable} We prove \autoref{lemma:g_n-upper-bound} in \autoref{sec:g_n-upper-bound}. We note that this lemma is more technically involved than \autoref{lemma:g_y-lower-bound} and this is the proof that needs the notion of ``small partition expanders''. Finally the following lemma asserts the indistinguishability of $\mathcal{G}^Y$ and $\mathcal{G}^N$ to small space streaming algorithms. We remark that this lemma follows directly from the work of~\cite{CGS+21}. 
\begin{restatable}[]{lemma}{ourindist}\label{lem:our-indist} For every $q, k \in \mathbb{N}$ there exists $\alpha_0(k)>0$ such that for every $T\in\mathbb{N}$, $\alpha\in(0,\alpha_0(k)]$ the following holds: For every $\Pi :\mathsf{S}_k \to \{0,1\}$ and $\boldsymbol{\pi}\in\textsf{supp}(\Pi)$, every streaming algorithm $\mathbf{ALG}$ distinguishing $\mathcal{G}_{q,n,\alpha,T}^{Y,\boldsymbol{\pi}}$ from $\mathcal{G}_{q,n,\alpha,T}^{N}$ with advantage $1/8$ for all lengths $n$ uses space $\Omega(n)$. \end{restatable} \subsection{Proof of Theorem~\ref{main_thm}} We now prove \autoref{main_thm}. \begin{proof}[Proof of \autoref{main_thm}] Let $A$ be a $(\rho(\Pi)+\epsilon)$-approximation algorithm for $\textsf{Max-OCSP}(\Pi)$ that uses space $s$. Fix $\boldsymbol{\pi} \in \textsf{supp}(\Pi)$. Consider the algorithm $\mathbf{ALG}$ defined as follows: on input $\Psi$, an instance of $\textsf{Max-OCSP}(\Pi)$, if $A(\Psi)\ge \rho(\Pi)+\frac{\epsilon}{2}$, then $\mathbf{ALG}$ outputs $1$; otherwise, it outputs $0$. Observe that $\mathbf{ALG}$ uses $O(s)$ space. Set $q_0\ge\frac{2(k-1)}{\epsilon}$ such that the condition of \autoref{lemma:g_n-upper-bound} holds, and set $\alpha_0\in (0,\alpha_0(k)]$, where $\alpha_0(k)$ is as in \autoref{lem:our-indist}, such that the condition of \autoref{lemma:g_n-upper-bound} also holds. Consider any $q\ge q_0$ and $\alpha\le \alpha_0$, and let $T_0$ be set as in \autoref{lemma:g_n-upper-bound}. Consider any $T\ge T_0$: since $q\ge \frac{2(k-1)}{\epsilon}$, it follows from \autoref{lemma:g_y-lower-bound} that for $\Psi \sim \mathcal{G}^{Y,\boldsymbol{\pi}}_{q,n,\alpha,T}$, we have $\textsf{val}_{\Psi} \ge 1-\frac{\epsilon}{2}$, and hence with probability at least $2/3$, $A(\Psi)\ge \rho(\Pi)+\frac{\epsilon}{2}$. Therefore, $\Pr_{\Psi \sim \mathcal{G}_{q,n,\alpha,T}^{Y,\boldsymbol{\pi}}}[\mathbf{ALG}(\Psi)=1]\ge 2/3$. Similarly, by the choice of $q_0,\alpha_0,T_0$, it follows from \autoref{lemma:g_n-upper-bound} that \[ \Pr_{\Psi \sim \mathcal{G}^N_{q,n,\alpha,T}}\left[\textsf{val}_\Psi \geq \rho(\Pi) + \frac{\epsilon}2\right] \leq 0.01, \] and hence $\Pr_{\Psi \sim \mathcal{G}_{q,n,\alpha,T}^{N}}[\mathbf{ALG}(\Psi)=1] \le \frac{1}{3} + 0.01$. Therefore, since $\frac23 - (\frac13 + 0.01) \ge \frac18$, $\mathbf{ALG}$ distinguishes $\mathcal{G}_{q,n,\alpha,T}^{Y,\boldsymbol{\pi}}$ from $\mathcal{G}_{q,n,\alpha,T}^{N}$ with advantage $1/8$. By applying \autoref{lem:our-indist}, we conclude that the space complexity of $A$ is $\Omega(n)$. \end{proof} \section{Bounds on $\textsf{Max-OCSP}(\Pi)$ values of $\mathcal{G}^Y$ and $\mathcal{G}^N$}\label{sec:bounds} The goal of this section is to prove our technical lemmas which lower bound the $\textsf{Max-OCSP}(\Pi)$ values of $\mathcal{G}^{Y,\boldsymbol{\pi}}_{q,n,\alpha,T}$ (\autoref{lemma:g_y-lower-bound}) and upper bound the $\textsf{Max-OCSP}(\Pi)$ values of $\mathcal{G}^N_{q,n,\alpha,T}$ (\autoref{lemma:g_n-upper-bound}). \subsection{CSPs and coarsening}\label{sec:nonordering-csps} In preparation for proving the lemmas, we recall the definition of (non-ordering) \emph{constraint satisfaction problems (CSPs)}, whose solution spaces are $[q]^n$ (as opposed to $\mathsf{S}_n$), and define an operation called \emph{$q$-coarsening} on $\textsf{Max-OCSP}$'s, which restricts the solution space from $\mathsf{S}_n$ to $[q]^n$. A \emph{maximum constraint satisfaction problem}, $\textsf{Max-CSP}(f)$, is specified by a single constraint function $f : [q]^k \rightarrow \{0,1\}$, for some positive integer $k$.
An {\em instance} of $\textsf{Max-CSP}(f)$ on $n$ variables is given by $m$ constraints $C_0,\ldots,C_{m-1}$ where $C_i = (f,\mathbf{j}(i))$, i.e., the application of the function $f$ to the variables $\mathbf{j}(i) = (j(i)_0,\ldots,j(i)_{k-1})$. (Again, $f$ is omitted when clear from context.) The \emph{value} of an assignment $\mathbf{b} \in [q]^n$ on an instance $\Phi = (C_0,\ldots,C_{m-1})$, denoted $\overline{\textsf{val}}_\Phi^q(\mathbf{b})$, is the fraction of constraints satisfied by $\mathbf{b}$, i.e., $\overline{\textsf{val}}_\Phi^q(\mathbf{b})=\tfrac{1}{m}\sum_{i\in[m]} f(\mathbf{b}|_{\mathbf{j}(i)})$, where (recall) $\mathbf{b}|_{\mathbf{j}} = (b_{j_0},\ldots,b_{j_{k-1}})$ for $\mathbf{b} = (b_0,\ldots,b_{n-1}), \mathbf{j} = (j_0,\ldots,j_{k-1})$. The optimal value of $\Phi$ is defined as $\overline{\textsf{val}}_\Phi^q = \max_{\mathbf{b} \in [q]^n}\{\overline{\textsf{val}}_\Phi^q(\mathbf{b})\}$. \begin{definition}[$q$-coarsening]\label{defn:coarsening} Let $\Pi$ be a $k$-ary $\textsf{Max-OCSP}$ and let $q \in \mathbb{N}$. The \emph{$q$-coarsening} of $\Pi$ is the $k$-ary $\textsf{Max-CSP}$ problem $\textsf{Max-CSP}(f_{\Pi}^q)$ where we define $f_{\Pi}^q : [q]^k \to \{0,1\}$ as follows: For $\mathbf{a} \in [q]^k$, $f_{\Pi}^q(\mathbf{a}) = 1$ iff the entries in $\mathbf{a}$ are all distinct and $\Pi(\textsf{ord}(\mathbf{a})) = 1$. The \emph{$q$-coarsening of an instance} $\Psi$ of $\textsf{Max-OCSP}(\Pi)$ is the instance $\Phi$ of $\textsf{Max-CSP}(f_\Pi^q)$ given by the identical collection of constraints. \end{definition} The following lemma captures the idea that coarsening restricts the space of possible solutions; compare to \autoref{lemma:sphe-gap-bound} below. \begin{lemma}\label{lemma:coarsening-monotonicity} If $q \in \mathbb{N}$, $\Psi$ is an instance of $\textsf{Max-OCSP}(\Pi)$, and $\Phi$ is the $q$-coarsening of $\Psi$, then $\textsf{val}_\Psi \geq \overline{\textsf{val}}_\Phi^q$. \end{lemma} \begin{proof} We will show that for every assignment $\mathbf{b} \in [q]^n$ to $\Phi$, we can construct an assignment $\boldsymbol{\sigma} \in \mathsf{S}_n$ to $\Psi$ such that $\textsf{val}_\Psi(\boldsymbol{\sigma}) \geq \overline{\textsf{val}}_\Phi^q(\mathbf{b})$. Consider an assignment $\mathbf{b} \in [q]^n$. Let $\boldsymbol{\sigma}$ be the ordering on $[n]$ given by placing the blocks $\mathbf{b}^{-1}(0),\ldots,\mathbf{b}^{-1}(q-1)$ in order (within each block, we enumerate the indices arbitrarily). Consider any constraint $C = \mathbf{j} = (j_0,\ldots,j_{k-1})$ in $\Phi$ which is satisfied by $\mathbf{b}$ in $\Phi$. Since $f_\Pi^q(\mathbf{b}|_{\mathbf{j}}) = 1$, by definition of $f_\Pi^q$ we have that $\Pi(\textsf{ord}(\mathbf{b}|_\mathbf{j})) =1$ and $b_{j_0},\ldots,b_{j_{k-1}}$ are distinct. The latter implies, by construction of $\boldsymbol{\sigma}$, that $\textsf{ord}(\mathbf{b}|_\mathbf{j}) = \textsf{ord}(\boldsymbol{\sigma}|_\mathbf{j})$. Hence $\Pi(\textsf{ord}(\boldsymbol{\sigma}|_\mathbf{j})) = 1$, so $\boldsymbol{\sigma}$ satisfies $C$ in $\Psi$. Hence $\textsf{val}_\Psi(\boldsymbol{\sigma}) \geq \overline{\textsf{val}}_\Phi^q(\mathbf{b})$. \end{proof} \subsection{$\mathcal{G}^Y$ has high $\textsf{Max-OCSP}(\Pi)$ values}\label{sec:g_y-lower-bound} In this section, we prove \autoref{lemma:g_y-lower-bound}, which states that the $\textsf{Max-OCSP}(\Pi)$ values of instances $\Psi$ drawn from $\mathcal{G}^{Y,\boldsymbol{\pi}}_{q,n,\alpha,T}$ are large. 
Note that we prove a bound for \emph{every} instance $\Psi$ in the support of $\mathcal{G}^{Y,\boldsymbol{\pi}}_{q,n,\alpha,T}$, although it would suffice for our application to prove that such a bound holds with high probability over the choice of $\Psi$. To prove \autoref{lemma:g_y-lower-bound}, if $\Phi$ is the $q$-coarsening of $\Psi$, by \autoref{lemma:coarsening-monotonicity}, it suffices to show that $\overline{\textsf{val}}^q_\Phi \geq 1-\frac{k-1}q$. One natural approach is to consider the $q$-partition $\mathbf{b} = (b_0,\ldots,b_{n-1}) \in [q]^n$ sampled when sampling $\Psi$ and view $\mathbf{b}$ as an assignment to $\Phi$. Consider any constraint $C = \mathbf{j} = (j_0,\ldots,j_{k-1})$ in $\Psi$; by the definition of $\mathcal{G}^{Y,\boldsymbol{\pi}}$ (\autoref{def:YES_NO_MaxOCSP_dist}), we have $\mathbf{b}|_{\mathbf{j}} = (\mathbf{v}^{(\ell)}_q)_{\boldsymbol{\pi}}$ for some (unique) $\ell \in [q]$, which we term the \emph{identifier} of $C$ (recall, we defined $\mathbf{v}^{(\ell)}_q$ as the $k$-tuple $(\ell,\ldots,\ell+k-1) \in [q]^k$). Hence, $C$ is satisfied by $\mathbf{b}$ iff $\Pi(\textsf{ord}((\mathbf{v}^{(\ell)}_q)_{\boldsymbol{\pi}})) = 1$. By \autoref{prop:order-permuted} above, $\textsf{ord}((\mathbf{v}^{(\ell)}_q)_{\boldsymbol{\pi}}) = \textsf{ord}(\mathbf{v}^{(\ell)}_q) \circ \boldsymbol{\pi}$. Hence a sufficient condition for $\mathbf{b}$ to satisfy $C$ (which is in fact necessary in the case $|\textsf{supp}(\Pi)|=1$) is that $\textsf{ord}(\mathbf{v}^{(\ell)}_q) = [0 \; \cdots \; k-1]$ (since then $\textsf{ord}((\mathbf{v}^{(\ell)}_q)_{\boldsymbol{\pi}}) = \boldsymbol{\pi}$); this happens iff $C$'s identifier satisfies $\ell \in \{0,\ldots,q-k\}$. Unfortunately, when sampling the constraints $C$, we might get ``unlucky'' and get a sample which over-represents the constraints $C$ with identifier $\ell \in \{q-k+1,\ldots,q-1\}$. We can resolve this issue using ``shifted'' versions of $\mathbf{b}$.\footnote{Alternatively, in expectation, $\overline{\textsf{val}}_\Phi^q(\mathbf{b}) = 1-\frac{k-1}q$. Hence with probability at least $\frac{99}{100}$, $\overline{\textsf{val}}_\Phi^q(\mathbf{b}) \geq 1-\frac{100(k-1)}q$ by Markov's inequality; this suffices for a ``with-high-probability'' statement.} The proof is as follows: \begin{proof}[Proof of \autoref{lemma:g_y-lower-bound}] For $t \in [q]$, define the assignment $\mathbf{b}^{(t)} = (b^{(t)}_0,\ldots,b^{(t)}_{n-1})$ to $\Phi$ via $b^{(t)}_i = b_i+t$ for $i \in [n]$. Fix $t \in [q]$. Then we claim that $\mathbf{b}^{(t)}$ satisfies any constraint $C$ with identifier $\ell$ such that $\ell+t \in \{0,\ldots,q-k\}$. Indeed, if $C = \mathbf{j}$ is a constraint with identifier $\ell$, then since $\mathbf{b}|_\mathbf{j} = (\mathbf{v}^{(\ell)}_q)_{\boldsymbol{\pi}}$, we have $\mathbf{b}^{(t)}|_{\mathbf{j}} = (\mathbf{v}^{(\ell+t)}_q)_{\boldsymbol{\pi}}$; as long as $\ell+t \in \{0,\ldots,q-k\}$, then $\textsf{ord}(\mathbf{v}_q^{(\ell+t)}) = [0 \; \cdots \; k-1]$, and so by \autoref{prop:order-permuted}, $\textsf{ord}((\mathbf{v}_q^{(\ell+t)})_{\boldsymbol{\pi}}) = \boldsymbol{\pi}$. Thus, $\Pi(\textsf{ord}((\mathbf{v}_q^{(\ell+t)})_{\boldsymbol{\pi}})) = 1$. Now (no longer fixing $t$), for each $\ell \in [q]$, let $w^{(\ell)}$ be the fraction of constraints in $\Psi$ with identifier $\ell$. By the above claim, for each $t \in [q]$, we have $\overline{\textsf{val}}_\Phi^q(\mathbf{b}^{(t)}) \geq \sum_{\ell:\ell+t \in \{0,\ldots,q-k\}} w^{(\ell)}$.
On the other hand, $\sum_{\ell=0}^{q-1} w^{(\ell)} = 1$ (since every constraint has some (unique) identifier). Hence \[ \sum_{t=0}^{q-1} \overline{\textsf{val}}_\Phi^q(\mathbf{b}^{(t)}) \geq \sum_{t=0}^{q-1} \left(\sum_{\ell : \ell+t \in \{0,\ldots,q-k\}} w^{(\ell)} \right) = q-(k-1), \] since each term $w^{(\ell)}$ appears exactly $q-(k-1)$ times in the expanded sum. Hence by averaging, there exists some $t \in [q]$ such that $\overline{\textsf{val}}_\Phi^q(\mathbf{b}^{(t)}) \geq 1-\frac{k-1}q$, and so $\overline{\textsf{val}}_\Phi^q \geq 1-\frac{k-1}q$, as desired. \end{proof} \subsection{$\mathcal{G}^N$ has low $\textsf{Max-OCSP}(\Pi)$ values}\label{sec:g_n-upper-bound} In this section, we prove \autoref{lemma:g_n-upper-bound}, which states that the $\textsf{Max-OCSP}(\Pi)$ value of an instance drawn from $\mathcal{G}^N$ does not significantly exceed the random ordering threshold $\rho(\Pi)$, with high probability. Using concentration bounds (i.e., \autoref{lemma:azuma}), one could show that a fixed solution $\boldsymbol{\sigma} \in \mathsf{S}_n$ satisfies more than a $(\rho(\Pi) + \frac1q)$-fraction of the constraints with probability which is exponentially small in $n$. However, taking a union bound over all $n!$ permutations $\boldsymbol{\sigma}$ would cause an unacceptable blowup in the probability. Instead, to prove \autoref{lemma:g_n-upper-bound}, we take an indirect approach, involving bounding the $\textsf{Max-CSP}$ value of the $q$-coarsening of a random instance and bounding the gap between the $\textsf{Max-OCSP}$ value and the $q$-coarsened $\textsf{Max-CSP}$ value. To do this, we define the following notions of small set expansion for $k$-hypergraphs: \begin{definition}[Lying on a set] Let $G = (V,E)$ be a $k$-hypergraph. Given a set $S \subseteq V$, a $k$-hyperedge $\mathbf{e} \in E$ \emph{lies on} $S$ if it is incident on two (distinct) vertices in $S$ (i.e., if $|\Gamma(\mathbf{e}) \cap S| \geq 2$). \end{definition} \begin{definition}[Congregating on a partition] Let $G = (V,E)$ be a $k$-hypergraph. Given $q \in \mathbb{N}$ and a $q$-partition $\mathbf{b} \in [q]^{|V|}$, a $k$-hyperedge $\mathbf{e} \in E$ \emph{congregates} on $\mathbf{b}$ if it lies on one of the blocks $\mathbf{b}^{-1}(i)$. \end{definition} We denote by $N(G,S)$ the number of $k$-hyperedges of $G$ which lie on $S$. \begin{definition}[Small set hypergraph expansion (SSHE) property] A $k$-hypergraph $G = (V,E)$ is a \emph{$(\gamma,\delta)$-small set hypergraph expander (SSHE)} if it has the following property: For every subset $S \subseteq V$ of size at most $\gamma |V|$, $N(G,S) \leq \delta |E|$ (i.e., the number of $k$-hyperedges in $E$ which lie on $S$ is at most $\delta |E|$). \end{definition} \begin{definition}[Small partition hypergraph expansion (SPHE) property] A $k$-hypergraph $G = (V,E)$ is a \emph{$(\gamma,\delta)$-small partition hypergraph expander (SPHE)} if it has the following property: For every $q \in \mathbb{N}$ and every partition $\mathbf{b} \in [q]^{|V|}$ where each block $\mathbf{b}^{-1}(i)$ has size at most $\gamma |V|$, the number of $k$-hyperedges in $E$ which congregate on $\mathbf{b}$ is at most $\delta |E|$. \end{definition} In the context of \autoref{fig:mas}, the SPHE property says that for \emph{any} partition with small blocks, there cannot be too many ``orange'' edges. Having defined the SSHE and SPHE properties, we now sketch the proof of \autoref{lemma:g_n-upper-bound}. It will be proved formally later in this section.
\begin{proof}[Proof sketch of \autoref{lemma:g_n-upper-bound}] For sufficiently large $q$, with high probability, the $\textsf{Max-CSP}$ value of the $q$-coarsening of a random $\textsf{Max-OCSP}(\Pi)$ instance drawn from $\mathcal{G}^N_q$ is not much larger than $\rho(\Pi)$ (\autoref{lemma:g_n-satisfying-coarse} below). The constraint hypergraph for a random $\textsf{Max-OCSP}(\Pi)$ instance drawn from $\mathcal{G}^N_q$ is a good SSHE with high probability (\autoref{lemma:g_n-sshe} below). Hypergraphs which are good SSHEs are also (slightly worse) SPHEs (\autoref{lemma:sshe-to-sphe} below). Finally, if the constraint hypergraph of a $\textsf{Max-OCSP}(\Pi)$ instance is a good SPHE, its $\textsf{Max-OCSP}(\Pi)$ value cannot be much larger than its $q$-coarsened $\textsf{Max-CSP}$ value (\autoref{lemma:sphe-gap-bound} below); intuitively, this is because if we ``coarsen'' an optimal ordering $\boldsymbol{\sigma}$ for the $\textsf{Max-OCSP}$ by lumping vertices together in small groups to get an assignment $\mathbf{b}$ for the coarsened $\textsf{Max-CSP}$, we can view this assignment $\mathbf{b}$ as a partition on $V$, and for every $k$-hyperedge in $G(\Psi)$ which does not congregate on this partition, the corresponding constraint in $\Psi$ is satisfied. \end{proof} We remark that the bounds on $\textsf{Max-CSP}$ values of coarsened random instances (\autoref{lemma:g_n-satisfying-coarse} below) and on SSHE in random instances (\autoref{lemma:g_n-sshe} below) both use concentration inequalities (i.e., \autoref{lemma:azuma}) and a union bound over a space of size only $(O_\epsilon(1))^n$ (the space of all solutions to the coarsened $\textsf{Max-CSP}$ and the space of all small subsets of $[n]$, respectively); this lets us avoid the issue of union-bounding over the entire space $\mathsf{S}_n$ directly. In the remainder of this section, we prove the necessary lemmas and then give a formal proof of \autoref{lemma:g_n-upper-bound}. We begin with several short lemmas. \begin{lemma}[Good SSHEs are good SPHEs]\label{lemma:sshe-to-sphe} For every $\gamma,\delta > 0$, if a $k$-hypergraph $G = (V,E)$ is a $(\gamma,\delta)$-SSHE, then it is a $\left(\gamma,\delta (\frac2\gamma+1) \right)$-SPHE. \end{lemma} \begin{proof} Let $n = |V|$ and $m = |E|$. Consider any partition $\mathbf{b} \in [q]^n$ of $V$ where each block has size at most $\gamma n$, and let $\ell$ denote the number of nonempty blocks of $\mathbf{b}$. WLOG, all but one nonempty block $\mathbf{b}^{-1}(i)$ has size at least $\frac{\gamma n}2$ (if not, merge blocks until this happens, only increasing the number of $k$-hyperedges which congregate on $\mathbf{b}$). Hence $\ell \leq \frac2\gamma+1$.\footnote{We include the $+1$ to account for the extra block which may have arbitrarily small size. Excluding this block, there are at most $\frac{n}{\lceil \gamma n /2\rceil} \leq \frac{n}{\gamma n / 2}$ blocks remaining.} By the SSHE property, at most $\delta m$ $k$-hyperedges lie on each block; hence at most $\delta (\frac2\gamma+1) m$ $k$-hyperedges congregate on $\mathbf{b}$. \end{proof} \begin{lemma}[Coarsening roughly preserves value in SPHEs]\label{lemma:sphe-gap-bound} Let $\Psi$ be a $\textsf{Max-OCSP}(\Pi)$ instance on $n$ variables. Suppose that the constraint hypergraph of $\Psi$ is a $(\gamma,\delta)$-SPHE. Let $\Phi$ be the $q$-coarsening of $\Psi$. Then for sufficiently large $n$, if $q \geq \frac{2}{\gamma}$, \[ \textsf{val}_\Psi \leq \overline{\textsf{val}}_\Phi^q + \delta.
\] \end{lemma} \begin{proof} We will show that for every assignment $\boldsymbol{\sigma} \in \mathsf{S}_n$ to $\Psi$, we can construct an assignment $\mathbf{b} = (b_0,\ldots,b_{n-1}) \in [q]^n$ to $\Phi$ such that $\textsf{val}_\Psi(\boldsymbol{\sigma}) \leq \overline{\textsf{val}}_\Phi^q(\mathbf{b}) + \delta$. Fix $\boldsymbol{\sigma} \in \mathsf{S}_n$. Define $\mathbf{b} \in [q]^n$ by $b_i = \lfloor \boldsymbol{\sigma}(i) / \lfloor \gamma n \rfloor \rfloor$ for each $i \in [n]$. Observe that since $\boldsymbol{\sigma}(i) \leq n-1$, for sufficiently large $n$ we have $b_i \leq \lfloor (n-1) / \lfloor \gamma n \rfloor \rfloor < q$ (using $q \geq \frac{2}{\gamma}$), hence $\mathbf{b}$ is a valid assignment to $\Phi$. Also, $\mathbf{b}$ has the property that for every $i,j \in [n]$, if $\boldsymbol{\sigma}(i) < \boldsymbol{\sigma}(j)$ then $b_i \leq b_j$; we call this \emph{monotonicity} of $\mathbf{b}$. View $\mathbf{b}$ as a $q$-partition and consider the constraint hypergraph of $\Psi$ (which is the same as the constraint hypergraph of $\Phi$). Call a constraint $C = (j_0,\ldots,j_{k-1})$ \emph{good} if it is both satisfied by $\boldsymbol{\sigma}$, \emph{and} the $k$-hyperedge corresponding to it does not congregate on $\mathbf{b}$. If $C$ is good, then $b_{j_0},\ldots,b_{j_{k-1}}$ are all distinct; together with monotonicity of $\mathbf{b}$, we conclude that if $C$ is good, then $\textsf{ord}(\mathbf{b}|_\mathbf{j}) = \textsf{ord}(\boldsymbol{\sigma}(j_0),\ldots,\boldsymbol{\sigma}(j_{k-1}))$, and hence $f^q_\Pi(\mathbf{b}|_\mathbf{j}) = \Pi(\textsf{ord}(\mathbf{b}|_\mathbf{j})) = \Pi(\textsf{ord}(\boldsymbol{\sigma}|_\mathbf{j})) = 1$; i.e., every good constraint is satisfied by $\mathbf{b}$ in $\Phi$. Finally, we note that each block in $\mathbf{b}$ has size at most $\gamma n$ by definition; hence by the SPHE property of the constraint hypergraph of $\Psi$, at most $\delta$-fraction of the constraints of $\Psi$ correspond to $k$-hyperedges which congregate on $\mathbf{b}$. Since $\textsf{val}_\Psi(\boldsymbol{\sigma})$ fraction of the constraints of $\Psi$ are satisfied by $\boldsymbol{\sigma}$, at least $(\textsf{val}_\Psi(\boldsymbol{\sigma}) - \delta)$-fraction of the constraints of $\Psi$ are good, and hence $\mathbf{b}$ satisfies at least $(\textsf{val}_\Psi(\boldsymbol{\sigma}) - \delta)$-fraction of the constraints of $\Phi$, as desired. \end{proof} The construction in this lemma was called \emph{coarsening} the assignment $\boldsymbol{\sigma}$ by~\cite{GHM+11} (cf.~\cite[Definition 4.1]{GHM+11}). We also include the following helpful lemma, which lets us restrict to the case where our sampled $\textsf{Max-OCSP}(\Pi)$ instance has many constraints. \begin{lemma}[Most instances in $\mathcal{G}^N$ have many constraints]\label{lemma:g_n-edge-count} For every $n$, $\alpha > 0$, and $q,T \in \mathbb{N}$, \[ \Pr_{\Psi \sim \mathcal{G}^N_{q,n,\alpha,T}}\left[m(\Psi) \leq \frac{n\alpha T}{2q^k}\right] \leq \exp\left(-\frac{n\alpha T}{8q^k}\right). \] \end{lemma} \begin{proof} The number of constraints in $\Psi$ is distributed as the sum of $n\alpha T$ independent Bernoulli$(1/q^k)$ random variables, whose mean is $\mu = \frac{n\alpha T}{q^k}$. The desired bound follows from the multiplicative Chernoff bound $\Pr[X \leq \mu/2] \leq \exp(-\mu/8)$. \end{proof} \subsubsection{$\mathcal{G}^N$ is a good SSHE with high probability} Recall that for a $k$-hypergraph $G = (V,E)$ and $S\subseteq V(G)$, we define $N(G,S)$ to be the number of $k$-hyperedges in $G$ that lie on $S$, and for a $k$-hyperedge $\mathbf{e} \in E$, we define $\Gamma(\mathbf{e}) \subseteq V$ as the set of vertices incident on $\mathbf{e}$.
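To fix intuition for the two expansion notions, here is a small, purely illustrative example (the specific hypergraph is ours and is not drawn from the distributions above). Let $G$ be the $3$-hypergraph on $V = [6]$ with hyperedges $\mathbf{e}_0 = (0,1,2)$ and $\mathbf{e}_1 = (3,4,5)$. For $S = \{1,2\}$ we have $|\Gamma(\mathbf{e}_0) \cap S| = 2$, so $\mathbf{e}_0$ lies on $S$ while $\mathbf{e}_1$ does not, and $N(G,S) = 1$. For the $3$-partition $\mathbf{b} = (0,0,1,0,1,2) \in [3]^6$, the hyperedge $\mathbf{e}_0$ congregates on $\mathbf{b}$ (its vertices $0$ and $1$ both lie in the block $\mathbf{b}^{-1}(0) = \{0,1,3\}$), whereas $\mathbf{e}_1$ does not, since its vertices $3,4,5$ fall into three distinct blocks. In general, a $k$-hyperedge $\mathbf{j}$ congregates on a partition $\mathbf{b}$ if and only if the entries of $\mathbf{b}|_{\mathbf{j}}$ are not all distinct; so, viewing $\mathbf{b}$ as an assignment to a $q$-coarsened instance, the constraints which $\mathbf{b}$ can possibly satisfy are exactly those whose hyperedges do not congregate on $\mathbf{b}$.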
\begin{lemma}[Random hypermatchings barely lie on small sets]\label{lemma:random-hypermatching-lying-on} For every $n$ and $\alpha, \gamma > 0$ with $\alpha \leq \frac1{2k}$, and every subset $S \subseteq [n]$ of at most $\gamma n$ vertices, we have \[ \Pr_{G \sim \mathcal{H}_{k,n,\alpha}} [N(G,S)\ge 8k^2\gamma^2\alpha n] \leq\exp\left(-\gamma^2 \alpha n\right). \] \end{lemma} \begin{proof} Label the hyperedges of $G$ as $\mathbf{e}_0,\ldots,\mathbf{e}_{\alpha n - 1}$. For $i \in [\alpha n]$, let $X_i$ be the indicator for the event that $\mathbf{e}_i$ lies on $S$. We have $N(G,S) = X_0 + \cdots+X_{\alpha n-1} $. We first bound $\mathop{\mathbb{E}}[X_i \mid X_0,\ldots,X_{i-1}]$ for each $i$. Conditioned on $\mathbf{e}_0,\ldots,\mathbf{e}_{i-1}$, the $k$-hyperedge $\mathbf{e}_{i}$ is uniformly distributed over the set of all $k$-hyperedges on $[n] \setminus (\Gamma(\mathbf{e}_0) \cup \cdots \cup \Gamma(\mathbf{e}_{i-1}))$. It suffices to union-bound, over pairs $\{j_1,j_2\} \in \binom{[k]}2$ with $j_1 < j_2$, the probability that the $j_1$-st and $j_2$-nd vertices of $\mathbf{e}_i$ are both in $S$ (conditioned on $X_0,\ldots,X_{i-1}$). We can sample the $j_1$-st and $j_2$-nd vertices of $\mathbf{e}_i$ first (uniformly over the vertices not used by $\mathbf{e}_0,\ldots,\mathbf{e}_{i-1}$) and then sample the remaining vertices of $\mathbf{e}_i$. Hence we have the upper-bound \begin{align*} \mathop{\mathbb{E}}[X_i \mid X_0,\ldots,X_{i-1}] & \leq \binom{k}2 \cdot \frac{|S|(|S|-1)}{(n-ki)(n-ki-1)}\\ &\leq \binom{k}2 \cdot \left(\frac{|S|}{n-ki}\right)^2 \\ &\leq \binom{k}2 \cdot \left(\frac{|S|}{n-k\alpha n}\right)^2 \leq 4k^2\gamma^2 \, , \end{align*} since $\alpha \leq \frac1{2k}$. Now, we apply the concentration bound in \autoref{lemma:azuma} to conclude that: \[ \Pr_{G \sim \mathcal{H}_{k,n,\alpha}}\left[X_0 + \cdots+X_{\alpha n-1} \geq 8 k^2 \gamma^2 \alpha n \right] \leq \exp\left(-2k^2 \gamma^2 \alpha n\right) \leq \exp(-\gamma^2 \alpha n). \] \end{proof} \begin{lemma}\label{lemma:g_n-sshe} For every $n$, $\alpha, \gamma > 0$, and $q,T \in \mathbb{N}$ with $\alpha \leq \frac1{2k}$, \[ \Pr_{\Psi \sim \mathcal{G}^N_{q,n,\alpha,T}}\left[G(\Psi)\text{ is not a }(\gamma,8k^2\gamma^2)\text{-SSHE} \;\middle|\; m(\Psi) \geq \frac{n\alpha T}{2q^k} \right] \leq \exp\left(-{\left(\frac{\gamma^2 \alpha T}{2q^k}-\ln 2\right)}n\right). \] \end{lemma} \begin{proof} Let $\alpha_0, \ldots, \alpha_{T-1} \geq 0$ be such that $\frac{\alpha T}{2q^k} \leq \alpha_0 + \cdots + \alpha_{T-1} \leq \alpha T$. It suffices to prove the bound, for every such sequence $\alpha_0,\ldots,\alpha_{T-1}$, conditioned on the event that for every $i \in [T]$, $m(G_i) = \alpha_i n$ (where $G_i$ is defined as in \autoref{def:YES_NO_MaxOCSP_dist}). This is equivalent to simply sampling each $G_i \sim \mathcal{H}_{k,n,\alpha_i}$ independently. Fix any set $S \subseteq [n]$ of size at most $\gamma n$. Since the hypermatchings $G_0,\ldots,G_{T-1}$ are sampled independently, the conditional-expectation bound from the proof of \autoref{lemma:random-hypermatching-lying-on} applies to every hyperedge in the concatenated sequence of the hyperedges of $G_0,\ldots,G_{T-1}$, and the same concentration argument (\autoref{lemma:azuma}) yields \begin{align*} & \Pr_{\Psi \sim \mathcal{G}^N_{q,n,\alpha,T}} \left[N(G(\Psi),S)\ge 8k^2\gamma^2 (\alpha_0+\cdots+\alpha_{T-1}) n\;\middle|\; \forall i \in [T], m(G_i) = \alpha_i n\right] \\ \le &\exp\left(-\gamma^2 (\alpha_0+\cdots+\alpha_{T-1}) n\right) \\ \le &\exp\left(-{\frac{\gamma^2 \alpha Tn}{2q^k}}\right) \, . \end{align*} Since the total number of $k$-hyperedges in $G(\Psi)$ is $(\alpha_0+\cdots+\alpha_{T-1})n$, outside this event the fraction of $k$-hyperedges in $G(\Psi)$ which lie on $S$ is at most $8k^2\gamma^2$.
Taking the union bound over the $\leq 2^n$ possible subsets $S \subseteq [n]$ gives the desired bound. \end{proof} \subsubsection{$\mathcal{G}^N$ has low coarsened $\textsf{Max-CSP}(f_\Pi^q)$ values with high probability} For $G\sim \mathcal{H}_{k,n,\alpha}$, we define an instance $\Phi(G)$ of $\textsf{Max-CSP}(f_\Pi^q)$ on $n$ variables $x_0,\dots,x_{n-1}$ naturally as follows: for each $k$-hyperedge $\mathbf{j} = (j_0,\ldots,j_{k-1}) \in E(G) \subseteq [n]^k$, we add the constraint $\mathbf{j}$ to $\Phi(G)$. \begin{lemma}[Satisfiability of random instances of $\textsf{Max-CSP}(f_\Pi^q)$]\label{lemma:random-hypermatching-satisfying-coarse} For every $n$, $\alpha, \eta >0$, and $\mathbf{b} \in [q]^n$, \[ \Pr_{G \sim \mathcal{H}_{k,n,\alpha}} [\overline{\textsf{val}}_{\Phi(G)}^q(\mathbf{b}) \geq \rho(\Pi)+\eta] \leq \exp\left(-{\left(\frac{\eta^2\alpha}{2(\rho(\Pi)+\eta)}\right)}n\right). \] \end{lemma} \begin{proof} Let the $k$-hyperedges of $G$ be labelled as $\mathbf{e}_0,\ldots,\mathbf{e}_{\alpha n -1}$ and the corresponding constraints of $\Phi(G)$ be denoted by $\mathbf{j}(0),\ldots,\mathbf{j}(\alpha n - 1)$. For $i \in [\alpha n]$, let $X_i$ be the indicator for the event that the constraint $\mathbf{j}(i)$ is satisfied by $\mathbf{b}$, i.e., $f^q_\Pi(\mathbf{b}|_{\mathbf{j}(i)}) = 1$. Again, as in the proof of \autoref{lemma:random-hypermatching-lying-on}, we bound $\mathop{\mathbb{E}}[X_i \mid X_0,\ldots,X_{i-1}]$, for each $i$. Conditioned on $\mathbf{e}_0,\ldots,\mathbf{e}_{i-1}$, the $k$-hyperedge $\mathbf{e}_{i}$ is uniformly distributed over the set of all $k$-hyperedges on $[n] \setminus (\Gamma(\mathbf{e}_0) \cup \cdots \cup \Gamma(\mathbf{e}_{i-1}))$. Hence, $\mathop{\mathbb{E}}[X_i \mid X_0,\ldots,X_{i-1}] \leq \rho(\Pi)$. Indeed, the set of possible $k$-hyperedges on $[n] \setminus (\Gamma(\mathbf{e}_0) \cup \cdots \cup \Gamma(\mathbf{e}_{i-1}))$ may be partitioned into blocks of size $k!$ by mapping each $k$-hyperedge to the set of vertices on which it is incident. For each subset $J = \{j_0,\ldots,j_{k-1}\} \subseteq [n]$ with corresponding $k$-tuple $\mathbf{j} = (j_0,\ldots,j_{k-1})$, if $b_{j_0},\ldots,b_{j_{k-1}}$ are not all distinct, then for every $\boldsymbol{\pi} \in \mathsf{S}_k$, the constraint corresponding to the permuted $k$-tuple $\mathbf{j}_{\boldsymbol{\pi}}$ is not satisfied by $\mathbf{b}$. On the other hand, if $b_{j_0},\ldots,b_{j_{k-1}}$ are all distinct, then \[ \left|\{ \boldsymbol{\pi} \in \mathsf{S}_k : f^q_\Pi (\mathbf{b}|_{\mathbf{j}_{\boldsymbol{\pi}}}) =1 \}\right| = |\textsf{supp}(\Pi)| = \rho(\Pi) \cdot k! \, . \] Finally, we again apply the concentration bound in \autoref{lemma:azuma} to conclude that: \[ \Pr_{G \sim \mathcal{H}_{k,n,\alpha}}\left[X_0 + \cdots+X_{\alpha n-1} \geq (\rho(\Pi) + \eta) \alpha n \right] \leq\exp\left(-{\left(\frac{\eta^2\alpha}{2(\rho(\Pi)+\eta)}\right)}n\right), \] as desired. \end{proof} \begin{lemma}\label{lemma:g_n-satisfying-coarse} For every $n$, $\alpha, \eta > 0$, and $q,T \in \mathbb{N}$, \begin{multline*} \Pr_{\Psi \sim \mathcal{G}^N_{q,n,\alpha,T}} \left[\overline{\textsf{val}}_\Phi^q \geq \rho(\Pi)+\eta \text{, where }\Phi\text{ is the }q\text{-coarsening of }\Psi \;\middle|\; m(\Psi) \geq \frac{n\alpha T}{2q^k}\right] \\ \leq \exp\left(-{\left(\frac{\eta^2\alpha T}{4(\rho(\Pi)+\eta)q^k} - \ln q\right)}n\right).
\end{multline*} \end{lemma} \begin{proof} Identical to the proof of \autoref{lemma:g_n-sshe} (using \autoref{lemma:random-hypermatching-satisfying-coarse} instead of \autoref{lemma:random-hypermatching-lying-on}), but now union-bounding over a set of size $q^n$ (i.e., the set of possible assignments $\mathbf{b} \in [q]^n$ for $\Phi$). \end{proof} We finally give the proof of \autoref{lemma:g_n-upper-bound}. \begin{proof}[Proof of \autoref{lemma:g_n-upper-bound}] Let $q_0 := \left\lceil \frac{192k^2}{\epsilon} \right\rceil$ and let $\alpha_0 := \frac1{2k}$. Suppose $\alpha \leq \alpha_0$ and $q \geq q_0$. Then let $\gamma := \frac{\epsilon}{96k^2}$ and $\eta := \frac{\epsilon}4$, and let \[ T_0 := \max\left\{\frac{4(\ln 2)q^k}{\gamma^2 \alpha},\frac{8(\rho(\Pi)+\eta)q^k (\ln q)}{\eta^2 \alpha}\right\}. \] Consider any $T \geq T_0$; we will prove the desired bound. Let $\delta := 8k^2\gamma^2$. Then the multiplicative factors in the exponents of the error terms in \autoref{lemma:g_n-edge-count}, \autoref{lemma:g_n-sshe}, and \autoref{lemma:g_n-satisfying-coarse} are all positive (the latter two lemmas may be applied since $\alpha \leq \alpha_0 = \frac1{2k}$); taking a union bound (and then conditioning on $m(\Psi) \geq \frac{n\alpha T}{2q^k}$), for sufficiently large $n$, we can conclude that with probability at least $0.99$ over $\Psi \sim \mathcal{G}^N_{q,n,\alpha,T}$, we have $\overline{\textsf{val}}^q_\Phi \leq \rho(\Pi) + \eta$ (where $\Phi$ is the $q$-coarsening of $\Psi$) and $G(\Psi)$ is a $(\gamma,\delta)$-SSHE. If $G(\Psi)$ is a $(\gamma,\delta)$-SSHE, by \autoref{lemma:sshe-to-sphe} it is also a $(\gamma,\delta')$-SPHE, where $\delta' := \frac{3\delta}{\gamma} \geq \delta(\frac2{\gamma}+1)$. Note that $\delta' = 24k^2\gamma = \frac{\epsilon}4$. Now since $q \geq q_0 \geq \frac2{\gamma}$, we can apply \autoref{lemma:sphe-gap-bound}, and conclude that for sufficiently large $n$, with probability $\geq 0.99$ over the choice of $\Psi \sim \mathcal{G}^N_{q,n,\alpha,T}$, we have \[ \textsf{val}_\Psi \leq \overline{\textsf{val}}^q_\Phi + \delta' \leq \rho(\Pi) + \eta + \delta' = \rho(\Pi) + \frac{\epsilon}2, \] as desired. \end{proof} \section{Streaming indistinguishability of $\mathcal{G}^Y$ and $\mathcal{G}^N$}\label{sec:streaming} In this section we prove \autoref{lem:our-indist}. This indistinguishability follows directly from the work of \cite{CGS+21}, who introduce a $T$-player communication problem called \emph{implicit randomized mask detection (IRMD)}. Once we properly situate our instances $\mathcal{G}^Y$ and $\mathcal{G}^N$ within the framework of \cite{CGS+21}, \autoref{lem:our-indist} follows immediately. We first recall their definition of the IRMD problem, and state their lower bound. The following definition is based on \cite[Definition 3.1]{CGS+21}. In \cite{CGS+21} the IRMD game is parametrized by two distributions $\mathcal{D}_Y$ and $\mathcal{D}_N$, but hardness is proved for a specific pair of distributions which suffices for our purpose; these distributions will thus be ``hardcoded'' into the definition we give. \begin{definition}[Implicit randomized mask detection (IRMD) problem\label{def:irmd}] Let $q,k,n,T \in \mathbb{N},\alpha \in (0,1/k)$ be parameters. In the $\mathsf{IRMD}_{\alpha,T}$ game, there are $T$ players, indexed from $0$ to $T-1$, and a hidden partition encoded by a random $\mathbf{b} \in [q]^n$. The $t$-th player has two inputs: (a.)
$M_t \in \{0,1\}^{\alpha k n \times n}$, the hypermatching matrix corresponding to a uniform $\alpha$-partial $k$-hypermatching on $n$ vertices (i.e., drawn from $\mathcal{H}_{k,n,\alpha}$), and (b.) a vector $\mathbf{z}_t \in [q]^{\alpha k n}$ that can be generated from one of two different distributions: \begin{itemize} \item ($\textbf{YES}$) $\mathbf{z}_t = M_t \mathbf{b} + \mathbf{y}_t \pmod{q}$ where $\mathbf{y}_t \in [q]^{\alpha k n}$ is of the form $\mathbf{y}_t = (\mathbf{y}_{t,0},\ldots,\mathbf{y}_{t,\alpha n-1})$ and each $\mathbf{y}_{t,i} \in [q]^k$ is sampled as $(a,\ldots,a)$ where $a$ is sampled uniformly from $[q]$. \item ($\textbf{NO}$) $\mathbf{z}_t = M_t \mathbf{b} + \mathbf{y}_t \pmod{q}$ where $\mathbf{y}_t \in [q]^{\alpha k n}$ is of the form $\mathbf{y}_t = (\mathbf{y}_{t,0},\ldots,\mathbf{y}_{t,\alpha n-1})$ and each $\mathbf{y}_{t,i} \in [q]^k$ is sampled as $(a_0,\ldots,a_{k-1})$ where each $a_j$ is sampled uniformly and independently from $[q]$. \end{itemize} This is a one-way game where the $t$-th player can send a private message to the $(t+1)$-st player after receiving a message from the previous player. The goal is for the $(T-1)$-st player to decide whether the $\{\mathbf{z}_t\}$ have been chosen from the $\textbf{YES}$ or $\textbf{NO}$ distribution, and the advantage of a protocol is defined as \[ \left\lvert\Pr_{\textbf{YES}\text{ case}}[\text{the }(T-1)\text{-st player outputs 1}]-\Pr_{\textbf{NO}\text{ case}}[\text{the }(T-1)\text{-st player outputs 1}]\right\rvert. \] \end{definition} Note that the definition of the IRMD problem does not depend on an underlying family of constraints. Nevertheless, we will be able to leverage its hardness to prove \autoref{lem:our-indist} (and indeed, all hardness results in \cite{CGS+21} itself stem from hardness for the IRMD problem). The following theorem from \cite{CGS+21} gives a lower bound on the communication complexity of the IRMD problem: \begin{theorem}[{\cite[Theorem~3.2]{CGS+21}}]\label{thm:distinguishing_distributions} For every $q,k \in \mathbb{N}$ and $\delta \in (0,1/2)$, $\alpha \in (0,1/k)$, $T \in \mathbb{N}$ there exists $n_0 \in \mathbb{N}$ and $\tau \in (0,1)$ such that the following holds. For all $n \geq n_0$, every protocol for $\mathsf{IRMD}_{\alpha,T}$ on $n$ vertices with advantage $\delta$ requires $\tau n$ bits of communication. \end{theorem} Now, we use this hardness result to prove \autoref{lem:our-indist}. The following proof is based on the proof of \cite[Theorem 4.3]{CGS+21}, which introduces a notion called the \emph{width} of a constraint family, which we briefly discuss. For our purposes, it suffices to define the width $\omega(f) \in [0,1]$ of a single constraint $f : [q]^k \to \{0,1\}$ as \[ \omega(f) = \max_{\mathbf{b} \in [q]^k} \left\{ \Pr_{\ell \in [q]} [f(\mathbf{b}+\ell)=1]\right\}, \] where $\mathbf{b} + \ell$ denotes adding $\ell$ to each component of $\mathbf{b}$. \cite[Theorem 4.3]{CGS+21} states that for every $f$ and $\epsilon > 0$, $\textsf{Max-CSP}(f)$ cannot be $(\rho(f)/\omega(f)+\epsilon)$-approximated by a sublinear-space single-pass streaming algorithm, where $\rho(f) = \Pr_{\mathbf{b} \in [q]^k}[f(\mathbf{b})=1]$ is the random assignment value for $f$. In other words, whenever $\omega(f)$ is close to $1$, $\textsf{Max-CSP}(f)$ is difficult to approximate.
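For a concrete (and purely illustrative) instance of this definition, consider again the coarsened MAS constraint $f(a_0,a_1) = 1 \iff a_0 < a_1$ on $[q]^2$ from the example in \autoref{sec:g_y-lower-bound}. A short count shows that a pair $\mathbf{b} = (b_0,b_1)$ with gap $d = b_1 - b_0 \bmod q \neq 0$ satisfies $f(\mathbf{b}+\ell) = 1$ for exactly $q-d$ of the shifts $\ell \in [q]$, so the maximum is attained at $d = 1$ (e.g., $\mathbf{b} = (0,1)$, for which only the wrap-around shift $\ell = q-1$ fails), giving
\[ \omega(f) = 1-\frac1q, \qquad \rho(f) = \frac{q-1}{2q}, \qquad \frac{\rho(f)}{\omega(f)} = \frac12. \]
Thus, for the coarsened MAS constraint, the inapproximability threshold $\rho(f)/\omega(f)$ coincides with the random ordering threshold $\rho(\Pi_{\mathsf{MAS}}) = \frac12$; the general version of this phenomenon is described next.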
In our setting, we have $\omega(f_\Pi^q) \geq 1 - \frac{k-1}q$; indeed, simply take $\mathbf{b} = (\boldsymbol{\pi}^{-1}(0),\ldots,\boldsymbol{\pi}^{-1}(k-1))$, and then for any $\ell \in \{0,\ldots,q-k\}$, we have $f_\Pi^q(\mathbf{b}+\ell) = 1$ (by the same reasoning as in \autoref{sec:g_y-lower-bound}). The fact that $\omega(f_\Pi^q) \approx 1$ for large $q$ is precisely what enables us to apply \cite{CGS+21}'s lower bounds to get optimal lower bounds in our setting. However, \cite[Theorem 4.3]{CGS+21} as written contains both the streaming-to-communication reduction and an analysis of the CSP values of $\textbf{YES}$ and $\textbf{NO}$ instances; in the following, we reprove only the former (and adapt the language to our setting). \begin{proof}[Proof of \autoref{lem:our-indist}] We prove the lemma for the same $\alpha_0$ as in \autoref{thm:distinguishing_distributions}. Suppose $\mathbf{ALG}$ is an $o(n)$-space streaming algorithm which distinguishes $\mathcal{G}_{q,n,\alpha,T}^{Y,\boldsymbol{\pi}}$ from $\mathcal{G}_{q,n,\alpha,T}^{N}$ with advantage $1/8$ for all lengths $n$. We now show how to use $\mathbf{ALG}$ to construct a protocol $\mathbf{ALG}'$ solving $\mathsf{IRMD}_{\alpha,T}$ with advantage $1/8$ for $n \geq n_0$, which uses only $o(n)$ bits of communication; for sufficiently large $n$, this contradicts \autoref{thm:distinguishing_distributions}. As is standard, this reduction will involve the players collectively running the streaming algorithm $\mathbf{ALG}$. That is, $\mathbf{ALG}'$ is defined as follows: For $t = 0,\ldots,T-1$, the $t$-th player $P_t$ will add some constraints to the stream and then send the state of $\mathbf{ALG}$ on to the next player. Finally, the last player $P_{T-1}$ terminates the streaming algorithm and outputs the output of $\mathbf{ALG}$. Which constraints does $P_t$ add to the stream in $\mathbf{ALG}'$? $P_t$'s input is $(M_t, \mathbf{z}_t)$, with $\mathbf{z}_t = (\mathbf{z}_{t,0},\ldots,\mathbf{z}_{t,\alpha n - 1})$, and each $\mathbf{z}_{t,i} \in [q]^k$. Above, we defined $\mathbf{v}_q^{(\ell)} = (\ell,\ldots,\ell+k-1 \pmod q) \in [q]^k$, so that $(\mathbf{v}_q^{(0)})_{\boldsymbol{\pi}} = (\boldsymbol{\pi}^{-1}(0),\ldots,\boldsymbol{\pi}^{-1}(k-1))$ (see \autoref{sec:hard_distributions}). $P_t$ simply examines each $i \in [\alpha n]$ and the corresponding hyperedge $\tilde{\mathbf{e}}_i$ in $M_t$. If $\mathbf{z}_{t,i} = (\mathbf{v}_q^{(0)})_{\boldsymbol{\pi}}$, $P_t$ adds the constraint corresponding to $\tilde{\mathbf{e}}_i$ to the stream. Let $\cS^{Y,\boldsymbol{\pi}}_{q,n,\alpha,T}$ and $\cS^N_{q,n,\alpha,T}$ denote the distributions of $\textsf{Max-OCSP}(\Pi)$ instances constructed by $\mathbf{ALG}'$ in the $\textbf{YES}$ and $\textbf{NO}$ cases, respectively. The crucial claim is that $\cS^{Y,\boldsymbol{\pi}}_{q,n,\alpha,T}$ and $\mathcal{G}^{Y,\boldsymbol{\pi}}_{q,n,\alpha,T}$ are identical distributions, and similarly with $\cS^{N}_{q,n,\alpha,T}$ and $\mathcal{G}^{N}_{q,n,\alpha,T}$. This claim suffices to prove the lemma, since the constructed stream of constraints is fed into $\mathbf{ALG}$, which is an $o(n)$-space streaming algorithm distinguishing $\mathcal{G}^{Y,\boldsymbol{\pi}}_{q,n,\alpha,T}$ from $\mathcal{G}^N_{q,n,\alpha,T}$; hence we can conclude that $\mathbf{ALG}'$ is a protocol for $\mathsf{IRMD}$ using $o(n)$ bits of communication. It remains to prove the claim. We first consider the $\textbf{NO}$ case.
$\cS^{N}_{q,n,\alpha,T}$ and $\mathcal{G}^{N}_{q,n,\alpha,T}$ are both sampled by independently sampling $T$ hypermatchings from $\mathcal{H}_{k,n,\alpha}$ and then (independently) selecting some subset of hyperedges from each hypermatching to add as constraints. It suffices by independence to prove equivalence of how the subset of each hypermatching is sampled in each case. In the $t$-th hypermatching in $\cS^{N}_{q,n,\alpha,T}$, $P_t$ adds a hyperedge $\tilde{\mathbf{e}}_i$ iff $\mathbf{z}_{t,i} = (\mathbf{v}_q^{(0)})_{\boldsymbol{\pi}}$. But (even conditioned on all other $\mathbf{z}_{t,i'}$'s and on $\tilde{\mathbf{e}}_i$ itself), $\mathbf{z}_{t,i}$ is a uniform value in $[q]^k$, and hence $\tilde{\mathbf{e}}_i$ is added to the instance with probability $\frac1{q^k}$ (independently of every other hyperedge). This is exactly how we defined $\mathcal{G}^N_{q,n,\alpha,T}$ to sample constraints. Similarly, we consider the $\textbf{YES}$ case, analyzing the $t$-th hypermatching in $\cS^{Y,\boldsymbol{\pi}}_{q,n,\alpha,T}$. Consider the sampled $q$-partition $\mathbf{b} = (b_0,\ldots,b_{n-1}) \in [q]^n$. Again consider a hyperedge $\tilde{\mathbf{e}}_i = (j_0,\ldots,j_{k-1})$. In this case, $\mathbf{z}_{t,i}$ is a \emph{uniform translation} of $\mathbf{b}|_{\mathbf{j}}$, i.e., it equals $\mathbf{b}|_{\mathbf{j}} + \ell'$ where $\ell' \in [q]$ is uniform and the sum denotes adding $\ell'$ to each component of $\mathbf{b}|_{\mathbf{j}}$. Hence $P_t$ will add $\tilde{\mathbf{e}}_i$ iff (1) $\mathbf{b}|_{\mathbf{j}} = (\mathbf{v}_q^{(\ell)})_{\boldsymbol{\pi}}$ for some $\ell \in [q]$ \emph{and} (2) $\ell + \ell' \equiv 0 \pmod q$. The latter event occurs with probability $\frac1q$, even conditioned on all other $\mathbf{z}_{t,i'}$'s and on $\tilde{\mathbf{e}}_i$. Hence $\tilde{\mathbf{e}}_i$ is added to the instance with probability $\frac1q$, as long as $\mathbf{b}|_{\mathbf{j}} = (\mathbf{v}_q^{(\ell)})_{\boldsymbol{\pi}}$ for some $\ell \in [q]$. This is exactly how we defined $\mathcal{G}^Y_{q,n,\alpha,T}$ to sample constraints. \end{proof}
{ "timestamp": "2021-07-09T02:01:38", "yymm": "2105", "arxiv_id": "2105.01782", "language": "en", "url": "https://arxiv.org/abs/2105.01782" }
\section{Introduction} \label{sec:Intro} When making dynamic decisions, the decision criteria of an agent at different times may not align with each other, leading to time-inconsistent behavior: an action that is optimal under the decision criterion today may no longer be optimal under the decision criterion at a certain future time. A variety of preference models can lead to time-inconsistent behaviors, such as those involving present-bias, mean-variance criterion, and probability weighting. In his seminal paper, \cite{Strotz1955:MyopiaInconsistency} describes three types of agents when facing time inconsistency. Type 1, a ``spendthrift" (or a naivet\'e as in the more recent literature), does not recognize the time-inconsistency and at any given time seeks an optimal solution from the vantage point of that moment only. As a result, his strategies are always myopic and change all the time. The next two types are aware of the time inconsistency but act differently. Type 2 is a ``pre-committer" who solves the optimization problem only once at time 0 and then commits to the resulting strategy throughout, even though she knows that the original solution may no longer be optimal at later times. Type 3 is a ``thrift" (or a sophisticated agent) who is unable to precommit and realizes that her future selves may disobey whatever plans she makes now. Her resolution is to compromise and choose {\it consistent planning} in the sense that she optimizes taking the future disobedience as a {\it constraint}. In this resolution, the agent's selves at different times are considered to be the players of a game, and a consistent plan chosen by the agent becomes an equilibrium of the game from which no selves are willing to deviate. Such a plan or strategy is referred to as an {\em intra-personal equilibrium}. To illustrate the above three types of behavior under time inconsistency, consider an agent who has a planning horizon with a finite end date $T$ and makes decisions at {\em discrete} times $t\in \{0,1,\dots, T-1\}$. The agent's decision drives a Markov state process and the agent's decision criterion at time $t$ is to maximize an objective function $J(t,x;\mathbf{u})$, where $x$ stands for the Markovian state at that time and $\mathbf{u}$ represents the agent's strategy. The agent considers Markovian strategies, so $\mathbf{u}$ is a function of time $s\in\{0,1,\dots, T-1\}$ and the Markovian state at that time. If the agent, at a certain time $t$ with state $x$, is a ``pre-committer", she is committed to implementing throughout the remaining horizon the strategy $\mathbf{u}^{\mathrm{pc}}_{(t,x)}=\{\mathbf{u}^{\mathrm{pc}}_{(t,x)}(s,\cdot)|s=t,t+1,\dots, T-1\}$ that maximizes $J(t,x;\mathbf{u})$, and this strategy is referred to as the {\em pre-committed strategy} of the agent at time $t$ with state $x$. If the agent is a ``spendthrift", at {\em every} time $t$ with state $x$, she is able to implement the pre-committed strategy at that moment only and will change it at the next moment; so the strategy that is actually implemented by the agent throughout the horizon is $\mathbf{u}^{\mathrm{n}}=\{\mathbf{u}^{\mathrm{pc}}_{(s,X^{\mathbf{u}^{\mathrm{n}}}(s))}(s,X^{\mathbf{u}^{\mathrm{n}}}(s))|s=0,1,\dots, T-1\}$, where $X^{\mathbf{u}^{\mathrm{n}}}$ denotes the state process under $\mathbf{u}^{\mathrm{n}}$. This strategy is referred to as the {\em na\"ive strategy}.
If the agent is a ``thrift", she chooses an intra-personal equilibrium strategy $\hat{\mathbf{u}}$: At any time $t\in \{0,1,\dots, T-1\}$ with any state $x$ at that time, $\hat{\mathbf{u}}(t,x)$ is the optimal action of the agent given that her future selves follow $\hat{\mathbf{u}}$; i.e., \begin{align}\label{eq:EquilibriumDiscreteTime} \hat{\mathbf{u}}(t,x) \in \mathrm{arg}\max_{u}J(t,x; {\mathbf{u}}_{t,u}), \end{align} where ${\mathbf{u}}_{t,u}(t,x):=u$ and ${\mathbf{u}}_{t,u}(s,\cdot):=\hat{\mathbf{u}}(s,\cdot)$ for $s=t+1,\dots, T-1$. All three types of behavior are important from an economic perspective. First, field and experimental studies reveal the popularity of commitment devices to help individuals fulfill plans that would otherwise be difficult to implement due to lack of self control; see for instance \citet{BryanEtal2010:CommitmentDevices}. The demand for commitment devices implies that some individuals seek pre-committed strategies in the presence of time inconsistency. Second, empirically observed decision-making behavior implies that some individuals are naivet\'es. For example, \citet{Barberis2012:Casino} shows that a na\"ive agent would take on a series of independent, unfavorable bets and adopt a gain-exit strategy, and this gambling behavior is commonly observed in casinos. Finally, when an agent foresees the time-inconsistency and a commitment device is either unhelpful or unavailable, the intra-personal equilibrium strategy becomes a rational choice of the agent. It is important to note that it is hard or perhaps not meaningful to determine which type is superior to the others, simply because there are no uniform criteria to evaluate and compare them. So a na\"ive strategy, despite its name, is not necessarily inferior to an intra-personal equilibrium in terms of an agent's long-run utility. Indeed, \citet{ODonogheRabin:1999DoingItNowOrLater} show that in an optimal stopping problem with an immediate reward and present-biased preferences, a sophisticated agent has a larger tendency to preproperate than a naivet\'e and thus ends up with a lower long-run utility. In this sense, studying the different behaviors under time inconsistency sometimes falls into the realm of being ``descriptive" as in behavioral science, rather than being ``normative" as in classical decision-making theory. In this survey article, we focus on reviewing the studies on intra-personal equilibrium of a sophisticated agent in {\em continuous time}.\footnote{Hence the title of this article. Note that there is no grammatical error in the phrase ``who {\it are} I": the word ``I" here is {\it plural}; it refers to many different selves over time, which is the premise of this article.} Intra-personal equilibrium for time-inconsistent problems in discrete time, which is defined through the equilibrium condition \eqref{eq:EquilibriumDiscreteTime}, has been extensively studied in the literature and has generated various economic implications. The extension to the continuous-time setting, however, is nontrivial because in this setting, taking a different action from a given strategy at only one time instant does not change the state process and thus has no impact on the objective function value. As a result, it becomes meaningless to examine whether the agent is willing to deviate from a given strategy at a particular moment by just comparing the objective function values before and after the deviation.
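To see this formally, here is a short verification (ours, with notation anticipating the framework of Section \ref{sec:ExHJB}). Suppose the state follows a controlled stochastic differential equation driven by a Brownian motion $W$, and suppose two Markov strategies $\mathbf{u}$ and $\tilde{\mathbf{u}}$ agree everywhere except at a single time instant $s_0$, i.e., $\mathbf{u}(s,\cdot) = \tilde{\mathbf{u}}(s,\cdot)$ for all $s \neq s_0$. Writing the state equation in integral form,
\begin{align*}
X^{\mathbf{u}}(s) = x + \int_t^s \mu\big(r, X^{\mathbf{u}}(r), \mathbf{u}(r, X^{\mathbf{u}}(r))\big)dr + \int_t^s \sigma\big(r, X^{\mathbf{u}}(r), \mathbf{u}(r, X^{\mathbf{u}}(r))\big)dW(r),
\end{align*}
we see that the integrands under $\mathbf{u}$ and under $\tilde{\mathbf{u}}$ differ only on the time set $\{s_0\}$, which is null for both the Lebesgue integral and the stochastic integral; hence $X^{\mathbf{u}} = X^{\tilde{\mathbf{u}}}$ almost surely, and the two strategies yield identical objective function values at every initial time and state. A definition of equilibrium that compares objective values before and after a deviation at a single instant would therefore be satisfied vacuously by every strategy.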
To address this issue and to formalize the idea of \cite{Strotz1955:MyopiaInconsistency}, \cite{EkelandPirvu2008:InvestConsptnWithoutCommit}, \cite{EkelandLazrak2006:BeingSeriousAboutNonCommitment}, and \cite{BjordTMurgoci:08u} assume that the agent's self at each time can implement her strategy in an infinitesimally small, but positive, time period; consequently, her action has an impact on the state process and thus on the objective function. In Section \ref{sec:ExHJB} below, we follow the framework of \cite{BjorkMurgoci2014:TheoryMarkovianTimeInconsistent} to define intra-personal equilibria, show a sufficient and necessary condition for an equilibrium, and present the so-called {\em extended HJB equation} that characterizes the intra-personal equilibrium strategy and the value under this strategy. In Section \ref{sec:Discussions}, we further discuss various issues related to intra-personal equilibria. A closed-loop strategy for a control system is a mapping from the historical path of the system state and control to the space of controls. So at each time, the control taken by the agent is obtained by plugging the historical path into this mapping. For example, a Markovian strategy for a Markovian control system is a closed-loop strategy. An open-loop strategy is a collection of controls across time (and across scenarios in case of stochastic control), and at each time the control in this collection is taken, regardless of the historical path of the system state and control. For a classical, time-consistent controlled Markov decision problem, the optimal closed-loop strategy and the optimal open-loop strategy yield the same state-control path. For time-inconsistent problems, however, closed-loop and open-loop intra-personal equilibria can be vastly different. In Section \ref{sec:OpenLoop}, we review the study of open-loop intra-personal equilibrium and discuss its connection with closed-loop intra-personal equilibrium. Optimal stopping problems can be viewed as a special case of control problems, so intra-personal equilibria can be defined similarly for time-inconsistent stopping problems. These problems, however, have very special structures, and by exploiting these structures new notions of intra-personal equilibria have been proposed in the literature. We discuss these in Section \ref{sec:Stopping}. If we discretize a continuous horizon of time and assume that the agent has full self control in each subperiod under the discretization, we can define and derive intra-personal equilibria as in the discrete-time setting. The limits of the intra-personal equilibria as the discretization becomes finer are used by some authors to define intra-personal equilibria for continuous-time problems. In Section \ref{sec:Discretization}, we review this thread of research. Time-inconsistency arises in various economic problems, and for many of them, intra-personal equilibria have been studied and their implications discussed in the literature. In Section \ref{sec:Applications}, we review this literature. Finally, in Section \ref{sec:DynamicConsistent}, we review the studies on dynamically consistent preferences. In these studies, starting from a preference model for an agent at a certain initial time, the authors attempt to find certain preference models for the agent's future selves such that the pre-committed strategy for the agent at the initial time is also optimal for the agent at any future time and thus can be implemented consistently over time.
\section{Extended HJB Equation} \label{sec:ExHJB} \citet{Strotz1955:MyopiaInconsistency} is the first to study the behavior of a sophisticated agent in the presence of time-inconsistency in a continuous-time model. Without formally defining the notion of intra-personal equilibrium, the author derives a consistent plan of the sophisticated agent. \citet{Barro1999:Ramsey} and \citet{LuttmerMariotti2003:SubjectiveDiscounting} also investigate, for certain continuous-time models, consistent plans of sophisticated agents, again without their formal definitions. In a series of papers, \citet{EkelandLazrak2006:BeingSeriousAboutNonCommitment}, \citet{EkelandILazrakA:07ti}, and \citet{EkelandLazrak2010:GoldenRule} study the classical Ramsey model with a nonexponential discount function and propose for the first time a formal notion of intra-personal equilibrium for deterministic control problems in continuous time. Such a notion is proposed in a stochastic context by \citet{BjordTMurgoci:08u}, which is later split into two papers, \cite{BjorkMurgoci2014:TheoryMarkovianTimeInconsistent} and \citet{Bjork2017:TimeInconsistent}, discussing the discrete-time and continuous-time settings, respectively. In this section, we follow the framework of \citet{Bjork2017:TimeInconsistent} to define an intra-personal equilibrium strategy and present a sufficient and necessary condition for such a strategy. \subsection{Notations} We first introduce some notations. By convention, $x\in\mathbb{R}^n$ is always a column vector. When a vector $x$ is a row vector, we write it as $x\in \mathbb{R}^{1\times n}$. Denote by $A^\top $ the transpose of a matrix $A$, and by $\mathrm{tr}(A)$ the trace of a square matrix $A$. For a differentiable function $\xi$ that maps $x\in \mathbb{R}^m$ to $\xi(x)\in \mathbb{R}^n$, its derivative, denoted as $\xi_x(x)$, is an $n\times m$ matrix with the entry in the $i$-th row and $j$-th column denoting the derivative of the $i$-th component of $\xi$ with respect to the $j$-th component of $x$. In particular, for a mapping $\xi$ from $\mathbb{R}^m$ to $\mathbb{R}$, $\xi_x(x)$ is an $m$-dimensional row vector, and we further denote by $\xi_{xx}$ the Hessian matrix. Consider $\xi$ that maps $(z,x)\in \mathbb{Z}\times {\mathbb X}$ to $\xi(z,x)\in{\mathbb R}^l$, where $\mathbb{Z}$ is a certain set and ${\mathbb X}$, which represents the state space throughout, is either ${\mathbb R}^n$ or $(0,+\infty)$. $\xi$ is {\em locally Lipschitz in $x\in {\mathbb X}$, uniformly in $z\in\mathbb{Z}$} if there exists a sequence of compact sets $\{{\mathbb X}_k\}_{k\ge 1}$ with $\cup_{k\ge 1}{\mathbb X}_k={\mathbb X}$ and a sequence of positive numbers $\{L_k\}_{k\ge 1}$ such that for any $k\ge 1$, $\|\xi(z,x)-\xi(z,x') \|\leq L_k \|x-x'\|,\forall z\in \mathbb{Z}, x,x'\in {\mathbb X}_k$. $\xi$ is {\em globally Lipschitz in $x\in {\mathbb X}$, uniformly in $z\in\mathbb{Z}$} if there exists a constant $L>0$ such that $\|\xi(z,x)-\xi(z,x') \|\leq L \|x-x'\|,\forall z\in \mathbb{Z}, x,x'\in {\mathbb X}$. In the case ${\mathbb X}={\mathbb R}^n$, $\xi$ is of {\em linear growth in $x\in {\mathbb X}$, uniformly in $z\in\mathbb{Z}$} if there exists $L>0$ such that $\|\xi(z,x)\|\le L(1+\|x\|),\forall z\in \mathbb{Z}, x\in {\mathbb X}$. In the case ${\mathbb X}=(0,+\infty)$, $\xi$ {\em has a bounded norm in $x\in{\mathbb X}$, uniformly in $z\in\mathbb{Z}$}, if there exists $L>0$ such that $\|\xi(z,x)\|\le Lx,\forall z\in \mathbb{Z}, x\in {\mathbb X}$.
$\xi$ is of {\em polynomial growth in $x\in\mathbb{X}$, uniformly in $z\in\mathbb{Z}$} if there exist $L>0$ and an integer $\gamma\ge 1$ such that $\|\xi(z,x)\|\le L\left(1+\varphi_{2\gamma}(x)\right), \forall z\in \mathbb{Z}, x\in {\mathbb X}$, where $\varphi_{2\gamma}(x)=\|x\|^{2\gamma}$ when ${\mathbb X}={\mathbb R}^n$ and $\varphi_{2\gamma}(x) = x^{2\gamma}+x^{-2\gamma}$ when ${\mathbb X}=(0,+\infty)$. Fix integers $r\geq 0$, $q\geq 2 r$, and real numbers $a<b$. Consider ${\xi}$ that maps $(t,x)\in [a,b]\times {\mathbb X}$ to ${\xi}(t,x) \in {\mathbb R}^l $. We say $\xi\in \mathfrak{C}^{r,q}([a, b]\times {\mathbb X} )$ if for any derivative index $\alpha$ with $|\alpha|\le q-2j$ and $j=0,\dots, r$, the partial derivative $\frac{\partial^{j+\alpha} \xi(t,x) }{\partial t^j \partial x^\alpha}:=\frac{\partial^{j+\alpha_1+\dots+\alpha_n} \xi(t,x) }{\partial t^j \partial x_1^{\alpha_1}\dots \partial x_n^{\alpha_n}} $ exists for any $(t, x)\in ( a,b) \times {\mathbb X}$ and can be extended to, and is continuous on, $[a, b]\times {\mathbb X}$. We say ${\xi}\in \mathfrak{\bar C}^{r,q}( [a, b] \times {\mathbb X})$ if ${\xi}\in \mathfrak{C}^{r,q}( [a, b] \times {\mathbb X})$ and $\frac{\partial^{j+\alpha} \xi(t,x) }{\partial t^j \partial x^\alpha}$ is of polynomial growth in $x\in {\mathbb X}$, uniformly in $t\in [a, b]$, for any derivative index $\alpha$ with $|\alpha|\le q-2j$ and $j=0,\dots,r$. \subsection{Time-Inconsistent Stochastic Control Problems}\label{subse:framework} Let $(\Omega,{\cal F},\mathbb{P})$ be a probability space supporting a standard $d$-dimensional Brownian motion $W(t):=\big(W_1(t),...,W_d(t)\big)^\top$, $t\ge 0$, along with the filtration $({\cal F}_t)_{t\ge 0}$ generated by the Brownian motion and augmented by the $\mathbb{P}$-null sets. Consider an agent who makes dynamic decisions in a given period $[0,T]$, and for any $(t,x)\in[0,T)\times {\mathbb X}$, the agent faces the following stochastic control problem: \begin{align}\label{eq:ControlProblem} \left\{ \begin{array}{c l} \underset{\mathbf{u}}{\max} &J(t,x;\mathbf{u} ) \\ \text{subject to} &dX^{\mathbf{u}}(s)=\mu(s, X^{\mathbf{u}}(s), \mathbf{u}(s, X^{\mathbf{u}}(s)) )ds\\ & \;\;\;\; +\sigma(s,X^{\mathbf{u}}(s), \mathbf{u}(s,X^{\mathbf{u}}(s)) )dW(s),\; s\in[t,T]\\ & X^{\mathbf{u}}(t)=x. \end{array}\right. \end{align} The agent's dynamic decisions are represented by a Markov strategy $\mathbf{u}$, which maps $(s,y)\in [0,T)\times {\mathbb X}$ to $\mathbf{u}(s,y)\in \mathbb{U}\subset{\mathbb R}^m$. The controlled diffusion process $X^{\mathbf{u}}$ under $\mathbf{u}$ takes values in ${\mathbb X}$, which as aforementioned is set to be either $(0,+\infty)$ or ${\mathbb R}^n$. $\mu$ and $\sigma$ are measurable mappings from $[0,T]\times {\mathbb X}\times \mathbb{U}$ to ${\mathbb R}^n$ and to ${\mathbb R}^{n\times d}$, respectively, where $n$ stands for the dimension of ${\mathbb X}$. The agent's goal at $(t,x)\in [0,T]\times {\mathbb X}$ is to maximize the following objective function: \begin{align}\label{eq:ObjFun} J(t,x;\mathbf{u} ) &=\mathbb{E}_{t,x}\left[\int_t^T C\big(t,x, s, X^{\mathbf{u}}(s), {\mathbf{u}}(s, X^{\mathbf{u}}(s) ) \big)ds+F\big(t, x, X^{{\mathbf{u}}}(T) \big)\right]\notag \\ &\quad +G\big(t, x, \mathbb{E}_{t,x} [ X^{{\mathbf{u}}}(T) ] \big), \end{align} where $C$ is a measurable mapping from $[0,T)\times {\mathbb X}\times [0,T]\times {\mathbb X}\times \mathbb{U}$ to $\mathbb{R}$, and $F$ and $G$ are measurable mappings from $[0,T)\times {\mathbb X}\times {\mathbb X}$ to ${\mathbb R}$.
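Before proceeding, we record two canonical special cases of \eqref{eq:ObjFun}; both are standard in this literature and are stated here only for illustration. First, for non-exponential discounting, take $G \equiv 0$, $C(t,x,s,y,u) = h(s-t)\tilde{C}(s,y,u)$, and $F(t,x,y) = h(T-t)\tilde{F}(y)$ for some discount function $h$ with $h(0)=1$. When $h(s) = e^{-\delta s}$, we have $h(s-t) = e^{\delta t}e^{-\delta s}$, so the dependence on $t$ enters only through the positive factor $e^{\delta t}$, which does not affect the maximizers of $J(t,x;\cdot)$, and time consistency is retained; for other discount functions, such as the generalized hyperbolic $h(s) = (1+\kappa s)^{-\beta/\kappa}$, the dependence of $C$ and $F$ on $t$ generates time inconsistency. Second, for the mean-variance criterion, take $C \equiv 0$, $F(t,x,y) = y - \frac{\gamma}{2}y^2$, and $G(t,x,z) = \frac{\gamma}{2}z^2$, so that
\[ J(t,x;\mathbf{u} ) = \mathbb{E}_{t,x}\big[X^{\mathbf{u}}(T)\big] - \frac{\gamma}{2}\mathrm{Var}_{t,x}\big(X^{\mathbf{u}}(T)\big), \]
where $\mathrm{Var}_{t,x}$ denotes the variance conditional on $X^{\mathbf{u}}(t)=x$; here $G$ is nonlinear in its last argument, which, as discussed next, generates time inconsistency.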
Here and hereafter, $\mathbb{E}_{t,x}[Z]$ denotes the expectation of $Z$ conditional on $X^{\mathbf{u}}(t)=x$. If $C$, $F$, and $G$ are independent of $(t,x)$ and $G\big(t, x, \mathbb{E}_{t,x} [ X^{{\mathbf{u}}}(T) ]\big)$ is linear in $\mathbb{E}_{t,x} [ X^{{\mathbf{u}}}(T) ]$, then $J(t,x;\mathbf{u} )$ becomes a standard objective function in classical stochastic control where time consistency holds. Thus, with objective function \eqref{eq:ObjFun}, time inconsistency arises from the dependence of $C$, $F$, and $G$ on $(t,x)$ as well as from the nonlinearity of $G\big(t, x, \mathbb{E}_{t,x} [ X^{{\mathbf{u}}}(T) ]\big)$ in $\mathbb{E}_{t,x} [ X^{{\mathbf{u}}}(T) ]$. For any feedback strategy $\mathbf{u}$, denote \begin{align*} &\mu^{\mathbf{u}}(t, x):=\mu(t, x, \mathbf{u}(t,x)), \; \sigma^{\mathbf{u}}(t, x):=\sigma(t, x, \mathbf{u}(t,x)), \; \nonumber\\ &\Upsilon^{\mathbf{u}}(t, x):=\sigma(t, x, \mathbf{u}(t,x)) \sigma(t, x, \mathbf{u}(t,x))^\top , \; C^{\tau, y, \mathbf{u}}(t,x ):=C\big(\tau, y,t, x, \mathbf{u}(t, x ) \big). \end{align*} With a slight abuse of notation, $u \in \mathbb{U}$ also denotes the feedback strategy $\mathbf{u}$ such that $\mathbf{u}(t,x)=u,\forall (t,x)\in[0,T]\times {\mathbb X}$; so $\mathbb{U}$ also stands for the set of all {\em constant} strategies when no ambiguity arises. We need to impose conditions on a strategy $\mathbf{u}$ to ensure the existence and uniqueness of the solution to the SDE in \eqref{eq:ControlProblem} and the well-posedness of the objective function $J(t,x;\mathbf{u} )$. This consideration leads to the following definition of feasibility: \begin{definition}\label{de:Feasibility} A feedback strategy $\mathbf{u}$ is {\em feasible} if the following hold: \begin{enumerate} \item[(i)] $\mu^{\mathbf{u}}$, $\sigma^{\mathbf{u}}$ are locally Lipschitz in $x\in{\mathbb X}$, uniformly in $t\in[0,T]$. \item[(ii)] $\mu^{\mathbf{u}}$ and $\sigma^{\mathbf{u}}$ are of linear growth in $x\in{\mathbb X}$, uniformly in $t\in[0,T]$, when ${\mathbb X}={\mathbb R}^n$ and have bounded norm in $x\in{\mathbb X}$, uniformly in $t\in[0,T]$, when ${\mathbb X}=(0,+\infty)$. \item[(iii)] For each fixed $(\tau,y) \in [0, T)\times {\mathbb X}$, $C^{\tau,y,\mathbf{u}}(t,x)$ and $F(\tau, y, x)$ are of polynomial growth in $x\in {\mathbb X}$, uniformly in $t\in [0,T]$. \item[(iv)] For each fixed $(\tau,y) \in [0, T)\times {\mathbb X}$ and $x\in{\mathbb X}$, $\mu^{\mathbf{u}}(t,x)$ and $\sigma^{\mathbf{u}}(t,x)$ are right-continuous in $t\in[0,T)$ and $\lim_{t'\ge t,(t',x')\rightarrow (t,x)}C^{\tau, y, \mathbf{u}}(t',x') = C^{\tau, y, \mathbf{u}}(t,x)$ for any $t\in[0,T)$. \end{enumerate} Denote the set of feasible strategies as $\mathbf{U}$. \end{definition} We impose the following assumption: \begin{assumption}\label{as:ModelParameters} Any $u\in \mathbb{U}$ is feasible. \end{assumption} \subsection{Intra-Personal Equilibrium}\label{subse:IntraPersonal} Here and hereafter, $\hat{\mathbf{u}} \in \mathbf{U}$ denotes a given strategy and we examine whether it is an equilibrium strategy. For given $t\in[0,T)$, $\epsilon\in (0,T-t)$ and $\mathbf{a} \in \mathbf{U} $, define \begin{align}\label{eq:PerturbatedPolicyFeedback} {\mathbf{u}}_{t,\epsilon,\mathbf{a}}(s,y) := \begin{cases} \mathbf{a}(s,y), & s\in[t,t+\epsilon), y\in {\mathbb X} \\ \hat{\mathbf{u}}(s,y),&s\notin [t,t+\epsilon), y\in {\mathbb X}. \end{cases} \end{align} Imagine that the agent at time $t$ chooses strategy $\mathbf{a}$ and is able to commit herself to this strategy in the period $[t,t+\epsilon)$.
The agent, however, is unable to control her future selves beyond this small time period, namely in the period $[t+\epsilon,T)$, and believes that her future selves will take strategy $\hat{\mathbf{u}}$. Then, ${\mathbf{u}}_{t,\epsilon,\mathbf{a}}$ is the strategy that the agent at time $t$ expects herself to implement throughout the entire horizon. Note that $ {\mathbf{u}}_{t,\epsilon,\mathbf{a}}$ is feasible because both $\hat{\mathbf{u}}$ and $\mathbf{a}$ are feasible. \begin{definition}[Intra-Personal Equilibrium]\label{de:EquilibriumFirstOrd} $\hat{\mathbf{u}} \in \mathbf{U}$ is an {\em intra-personal equilibrium} if for any $x\in {\mathbb X}$, $t\in[0,T)$, and $\mathbf{a} \in \mathbf{U} $, we have \begin{align}\label{new requirement} \limsup_{\epsilon\downarrow 0}\frac{ J(t,x;{\mathbf{u}}_{t,\epsilon, \mathbf{a}} ) -J(t,x;\hat{\mathbf{u}} ) }{\epsilon}\leq 0. \end{align} \end{definition} For each positive $\epsilon$, ${\mathbf{u}}_{t,\epsilon, \mathbf{a}}$ leads to a possibly different state process, and thus to a possibly different objective function value, from those of $\hat{\mathbf{u}}$, so it is meaningful to compare the objective function values of ${\mathbf{u}}_{t,\epsilon, \mathbf{a}}$ and $\hat{\mathbf{u}}$ to examine whether the agent is willing to deviate from $\hat{\mathbf{u}}$ to $\mathbf{a}$ in the period of time $[t,t+\epsilon)$. Due to the continuous-time nature, the length of the period, $\epsilon$, during which the agent at $t$ exerts full self control, must be set to be infinitesimally small. Then, $J(t,x;{\mathbf{u}}_{t,\epsilon, \mathbf{a}} )$ and $J(t,x;\hat{\mathbf{u}} )$ become arbitrarily close to each other; so instead of evaluating their difference, we consider the {\em rate of increment} in the objective function value, i.e., the limit on the left-hand side of \eqref{new requirement}. Thus, under Definition \ref{de:EquilibriumFirstOrd}, a strategy $\hat{\mathbf{u}}$ is an intra-personal equilibrium if at any given time and state, the rate of increment in the objective value when the agent deviates from $\hat{\mathbf{u}}$ to any alternative strategy is nonpositive. As a result, the agent has little incentive to deviate from $\hat{\mathbf{u}}$. \subsection{Sufficient and Necessary Condition} We first introduce the generator of the controlled state process. Given $\mathbf{u} \in \mathbf{U}$ and an interval $[a,b] \subseteq [0, T]$, consider $\xi$ that maps $(t,x)\in [a,b] \times {\mathbb X}$ to $ \xi(t,x)\in {\mathbb R}$. Suppose $\xi \in \mathfrak{C}^{1,2}([a,b]\times {\mathbb X})$, and denote by $\xi_t$, $\xi_x$, and $\xi_{xx}$ respectively its first-order partial derivative in $t$, first-order partial derivative in $x$, and second-order partial derivative in $x$. Define the following generator: \begin{align}\label{eq:Generator} \mathcal{A}^{\mathbf{u}}\xi(t,x)=\xi_t(t,x)+\xi_x(t,x)\mu^{\mathbf{u}}(t, x)+\frac{1}{2}\mathrm{tr}\left(\xi_{xx}(t,x)^\top \Upsilon^{\mathbf{u}}(t, x)\right),\notag \\ t\in [a,b],x\in{\mathbb X}. \end{align} For each fixed $(\tau,y)\in [0,T)\times {\mathbb X}$, denote \begin{align} &{f}^{\tau, y}(t,x):=\mathbb{E}_{t,x}[F(\tau, y, X^{\hat{\mathbf{u}}}(T) ) ],\label{eq:fFunction} \\ &{g}(t,x):=\mathbb{E}_{t,x}[ X^{\hat{\mathbf{u}}}(T) ],\; t\in[0,T],x\in{\mathbb X}.\label{eq:gFunction} \end{align} In addition, for fixed $(\tau,y) \in [0, T)\times {\mathbb X}$ and $s\in [0, T]$, denote \begin{align}\label{eq:cFunction} &{c}^{\tau,y, s}(t,x):=\mathbb{E}_{t,x}[ C^{\tau, y, \hat{\mathbf{u}}}(s, X^{\hat{\mathbf{u}}}(s))],\; t\in[0,s],x\in{\mathbb X}.
\end{align} In the following, $\mathcal{A}^{\mathbf{u}}f^{\tau,y}$ denotes the function that is obtained by applying the operator $\mathcal{A}^{\mathbf{u}}$ to $f^{\tau,y}(t,x)$ as a function of $(t,x)$ while fixing $(\tau,y)$. Then, $\mathcal{A}^{\mathbf{u}}f^{t,x}(t,x)$ denotes the value of $\mathcal{A}^{\mathbf{u}}f^{\tau,y}$ at $(t,x)$ while $(\tau,y)$ is also set at $(t,x)$. The above notations also apply to $C^{\tau,y,\mathbf{u}}$ and $c^{\tau,y,s}$. To illustrate how to evaluate $J(t,x;{\mathbf{u}}_{t,\epsilon, \mathbf{a}} ) -J(t,x;\hat{\mathbf{u}} )$ and thus the rate of increment, let us consider the second term in the objective function (\ref{eq:ObjFun}). An informal calculation yields \begin{align*} &\mathbb{E}_{t,x}\left[F(t, x, X^{{\mathbf{u}}_{t,\epsilon,\mathbf{a}}}(T) )\right] - \mathbb{E}_{t,x}\left[F(t, x, X^{\hat{\mathbf{u}}}(T) )\right]\\ & = \mathbb{E}_{t,x}\left[\mathbb{E}_{ t+\epsilon,X^{{\mathbf{u}}_{t,\epsilon,\mathbf{a}}}(t+\epsilon)}\left[F(t, x, X^{{\mathbf{u}}_{t,\epsilon,\mathbf{a}}}(T) )\right]\right] - \mathbb{E}_{t,x}\left[F(t, x, X^{\hat{\mathbf{u}}}(T) )\right]\\ &= \mathbb{E}_{t,x}\left[f^{t,x}(t+\epsilon, X^{\mathbf{a}}(t+\epsilon) )\right] - f^{t,x}(t,x)\\ & \approx \mathcal{A}^{\mathbf{a}}f^{t,x}(t,x)\epsilon, \end{align*} where the second equality holds because ${\mathbf{u}}_{t,\epsilon,\mathbf{a}}(s,\cdot)=\mathbf{a}(s,\cdot)$ for $s\in [t,t+\epsilon)$ and ${\mathbf{u}}_{t,\epsilon,\mathbf{a}}(s,\cdot) = \hat{\mathbf{u}}(s,\cdot)$ for $s\in [t+\epsilon,T)$ in addition to the definition of $f^{\tau,y}$ in \eqref{eq:fFunction}. The change of the other terms in the objective function when the agent deviates from $\hat{\mathbf{u}}$ to $\mathbf{a}$ in the period $[t,t+\epsilon)$ can be evaluated similarly. As a result, we can derive the rate of increment in the objective value, namely the limit on the left-hand side of \eqref{new requirement}, which in turn enables us to derive a sufficient and necessary condition for $\hat{\mathbf{u}}$ to be an intra-personal equilibrium. To formalize the above heuristic argument, we need to impose the following assumption: \begin{assumption}\label{as:FirstOrderSmoothness} For any fixed $(\tau,y) \in [0, T)\times {\mathbb X}$ and $t\in[0,T)$, there exists $\tilde t\in (t,T]$ such that (i) ${f}^{\tau, y},g\in \mathfrak{\bar C}^{1,2}([t,\tilde t]\times{\mathbb X})$; (ii) ${c}^{\tau, y, s}\in\mathfrak{C}^{1,2}([t,\tilde t\wedge s]\times {\mathbb X})$ for each fixed $s\in (t,T]$ and $\frac{\partial^{j+\alpha} {c}^{\tau, y, s}(t',x') }{\partial t^j \partial x^\alpha}$ is of polynomial growth in $x'\in {\mathbb X}$, uniformly in $t'\in[t,\tilde t\wedge s]$ and $s\in (t,T]$, for any $\alpha$ with $|\alpha|\le 2-2j$ and $j=0,1$; and (iii) $G(\tau,y,z)$ is continuously differentiable in $z$, with the partial derivative denoted as $G_z(\tau,y,z)$. \end{assumption} \begin{theorem}\label{Theorem:first order derivative} Suppose Assumptions \ref{as:ModelParameters} and \ref{as:FirstOrderSmoothness} hold.
Then, for any $(t,x)\in [0,T)\times {\mathbb X}$ and $\mathbf{a} \in \mathbf{U}$, we have \begin{align}\label{first order expansion} &\lim_{\epsilon\downarrow 0}\frac{J(t,x;{\mathbf{u}}_{t,\epsilon,\mathbf{a}} )-J(t,x;\hat{\mathbf{u}} )}{\epsilon}= \Gamma^{t,x,\hat{\mathbf{u}}}(t,x; \mathbf{a}), \end{align} where for any $(\tau,y)\in [0,T)\times {\mathbb X}$, \begin{align}\label{eq:FirstOrderDerivative} \Gamma^{\tau, y,\hat{\mathbf{u}}}(t,x; \mathbf{a}):&=C^{\tau, y, \mathbf{a}}(t,x )-{C}^{\tau, y, \hat{\mathbf{u}}}(t,x) +\int_t^T \mathcal{A}^{\mathbf{a}} {c}^{\tau,y,s}(t, x )ds\notag \\ &+ \mathcal{A}^{\mathbf{a} } f^{\tau,y}(t,x) +G_z(\tau,y, g(t,x))\mathcal{A}^{\mathbf{a} } g(t,x). \end{align} Moreover, $\Gamma^{\tau,y,\hat{\mathbf{u}}}(t,x; \mathbf{a})=\Gamma^{\tau,y,\hat{\mathbf{u}}}(t,x; \tilde{\mathbf{a}})$ for any $\mathbf{a}, \tilde{\mathbf{a}} \in \mathbf{U}$ with $\mathbf{a}(t,x)=\tilde{\mathbf{a}}(t,x)$, and $\Gamma^{\tau,y,\hat{\mathbf{u}}}(t,x; \mathbf{a})=0$ if $\mathbf{a}(t,x)=\hat{\mathbf{u}}(t,x)$. Consequently, $\hat{\mathbf{u}}$ is an intra-personal equilibrium if and only if \begin{align}\label{weak equilibrium condition} \Gamma^{t,x,\hat{\mathbf{u}}}(t,x; u)\leq 0, \; \forall u\in \mathbb{U}, x\in {\mathbb X}, t\in [0, T). \end{align} \end{theorem} Theorem \ref{Theorem:first order derivative} presents a sufficient and necessary condition \eqref{weak equilibrium condition} for an intra-personal equilibrium $\hat{\mathbf{u}}$. Because $\Gamma^{\tau, y,\hat{\mathbf{u}}}(t,x; \hat{\mathbf{u}})=0$ by the last assertion of the theorem, we have \begin{align*} \Gamma^{\tau, y,\hat{\mathbf{u}}}(t,x; \mathbf{a}) = \Pi^{\tau,y}(t,x;\mathbf{a})-\Pi^{\tau,y}(t,x;\hat{\mathbf{u}}), \end{align*} where \begin{align} \Pi^{\tau,y}(t,x;\mathbf{a}):&=C^{\tau, y, \mathbf{a}}(t,x ) + \int_t^T \mathcal{A}^{\mathbf{a}} {c}^{\tau,y,s}(t, x )ds+ \mathcal{A}^{\mathbf{a} } f^{\tau,y}(t,x)\notag\\ &+ G_z(\tau,y, g(t,x))\mathcal{A}^{\mathbf{a} } g(t,x).\label{eq:Pifun} \end{align} As a result, condition \eqref{weak equilibrium condition} is equivalent to \begin{align}\label{eq:EquiConditionForm2} \max_{u\in \mathbb{U}}\Gamma^{t,x,\hat{\mathbf{u}}}(t,x; u)=0, x\in {\mathbb X}, t\in [0, T) \end{align} or \begin{align}\label{eq:EquiConditionForm3} \hat{\mathbf{u}}(t,x)\in\mathrm{arg}\max_{u\in \mathbb{U}}\Pi^{t,x}(t,x;u), x\in {\mathbb X}, t\in [0, T). \end{align} This can be regarded as a time-inconsistent version of the verification theorem in (classical) stochastic control. The proof of Theorem \ref{Theorem:first order derivative} can be found in \citet{Bjork2017:TimeInconsistent} and \citet{HeJiang2019:OnEquilibriumStrategies}. Assumption \ref{as:ModelParameters} is easy to verify because it involves only the model parameters, i.e., $\mu$, $\sigma$, $C$, $F$, and $G$. Assumption \ref{as:FirstOrderSmoothness} imposes some regularity conditions on $\hat{\mathbf{u}}$, which usually require $\hat{\mathbf{u}}$ to be smooth to a certain degree; see \citet{HeJiang2019:OnEquilibriumStrategies} for a sufficient condition for this assumption. As a result, the sufficient and necessary condition \eqref{weak equilibrium condition} cannot tell us whether there exists any intra-personal equilibrium among the strategies that do not satisfy Assumption \ref{as:FirstOrderSmoothness}. This condition, however, is still very useful for us to find intra-personal equilibria for specific problems.
Indeed, in most time-inconsistent problems in the literature, intra-personal equilibrium can be found and verified using \eqref{weak equilibrium condition}; see Section \ref{sec:Applications}. \subsection{Extended HJB}\label{subse:ExtendedHJB} Define the {\em continuation value} of a strategy $\hat{\mathbf{u}}$, denoted as $V^{\hat{\mathbf{u}}}(t,x),(t,x)\in [0,T]\times {\mathbb X}$, to be the objective value over time and state under this strategy, i.e., \begin{align} V^{\hat{\mathbf{u}}}(t,x):&=J(t,x;\hat{\mathbf{u}} ) = H^{t,x}(t,x) + G\big(t, x,g(t,x) \big),\label{eq:ContinuationValue} \end{align} where \begin{align} H^{\tau,y}(t,x):&=\mathbb{E}_{t,x}\left[\int_t^T C^{\tau, y, \hat{\mathbf{u}}}(s, X^{\hat{\mathbf{u}}}(s))ds + F(\tau,y,X^{\hat{\mathbf{u}}}(T))\right]\notag\\ & = \int_t^Tc^{\tau,y,s}(t,x)ds + f^{\tau,y}(t,x).\label{eq:Hfunction} \end{align} Assuming certain regularity conditions and applying the operator ${\cal A}^{u}$ to $V^{\hat{\mathbf{u}}}(t,x)$, we derive \begin{align*} {\cal A}^{u} V^{\hat{\mathbf{u}}}(t,x) &= -C^{t,x,\hat{\mathbf{u}}}(t,x) + \int_t^T{\cal A}^{u} c^{t,x,s}(t,x)ds + {\cal A}^{u}f^{t,x}(t,x)\\ & + G_{z}\big(t,x,g(t,x)\big){\cal A}^{u} g(t,x) + {\cal A}^{u}_{\tau, y} H^{t,x}(t,x)+{\cal A}^{u}_{\tau, y}G(t,x,g(t,x))\\ & + \mathrm{tr}\left(\left(H^{t,x}_{xy}(t,x)+ G_{zy}(t,x,g(t,x))^\top g_x(t,x)\right)^\top \Upsilon^{u}(t, x) \right)\\ & +\frac{1}{2} G_{zz}\big(t, x,g(t,x) \big)\mathrm{tr}\left(g_x(t,x)g_x(t,x)^\top \Upsilon^{u}(t, x)\right), \end{align*} where $H^{\tau,y}_{xy}(t,x)$ denotes the cross partial derivative of $H^{\tau,y}(t,x)$ in $x$ and $y$, $G_{zy}(\tau,y,z)$ the cross partial derivative of $G(\tau,y,z)$ in $z$ and $y$, and $G_{zz}(\tau,y,z)$ the second-order derivative of $G(\tau,y,z)$ in $z$. For each {\em fixed} $(t,x)$, ${\cal A}^{u}_{\tau, y}H^{\tau,y}(t,x)$ denotes the operator ${\cal A}^{u}$ applied to $H^{\tau,y}(t,x)$ as a function of $(\tau,y)$, i.e., ${\cal A}^{u}_{\tau, y}H^{\tau,y}(t,x):={\cal A}^u\ell (\tau,y)$, where $\ell(\tau,y):=H^{\tau,y}(t,x),(\tau,y)\in [0,T)\times {\mathbb X}$, and ${\cal A}^{u}_{\tau, y}G(\tau,y,g(t,x))$ is defined similarly. Now, suppose $\hat {\mathbf{u}}$ is an intra-personal equilibrium.
Recalling \eqref{eq:FirstOrderDerivative} and the sufficient and necessary condition \eqref{eq:EquiConditionForm2}, we derive the following equation satisfied by the continuation value of an intra-personal equilibrium $\hat{\mathbf{u}}$: \begin{align} &\max_{u\in\mathbb{U}} \Big[{\cal A}^{u} V^{\hat{\mathbf{u}}}(t,x) + C^{t,x,u}(t,x)-\big({\cal A}^{u}_{\tau, y} H^{t,x}(t,x)+{\cal A}^{u}_{\tau, y}G(t,x,g(t,x))\big)\notag \\ &-\mathrm{tr}\left(\left(H^{t,x}_{xy}(t,x)+ G_{zy}(t,x,g(t,x))^\top g_x(t,x)\right)^\top \Upsilon^{u}(t, x) \right)\notag \\ & -\frac{1}{2} G_{zz}\big(t, x,g(t,x) \big)\mathrm{tr}\left(g_x(t,x)g_x(t,x)^\top \Upsilon^{u}(t, x)\right)\Big]=0, \quad (t,x)\in [0,T)\times {\mathbb X},\notag \\ &V^{\hat{\mathbf{u}}}(T,x) = F(T,x,x) + G(T,x,x),\; x\in {\mathbb X}.\label{eq:EHJBEqMain} \end{align} By \eqref{eq:Hfunction}, the definitions of $c^{\tau,y,s}(t,x)$ and $f^{\tau,y}(t,x)$, and the Feynman--Kac formula, we derive the following equation for $H^{\tau,y}(t,x)$: \begin{align} &{\cal A}^{\hat{\mathbf{u}}}H^{\tau,y}(t,x) + {C}^{\tau, y, \hat{\mathbf{u}}}(t,x) = 0, \quad (t,x)\in [0,T)\times {\mathbb X},(\tau,y)\in [0,T)\times {\mathbb X},\notag\\ & H^{\tau,y}(T,x) = F(\tau,y,x),\quad x\in {\mathbb X},(\tau,y)\in [0,T)\times {\mathbb X}.\label{eq:EHJBEqExpt} \end{align} Similarly, we derive the following equation for $g$: \begin{align} &{\cal A}^{\hat{\mathbf{u}}}g(t,x) = 0, \quad (t,x)\in [0,T)\times {\mathbb X},\notag\\ & g(T,x) = x,\quad x\in {\mathbb X}.\label{eq:EHJBEqMean} \end{align} Some remarks are in order. First, instead of a single equation for the value function of a time-consistent problem, the intra-personal equilibrium and its continuation value satisfy a system of equations \eqref{eq:EHJBEqMain}--\eqref{eq:EHJBEqMean}, which is referred to as the {\em extended HJB equation} by \citet{Bjork2017:TimeInconsistent}. Second, compared to the HJB equation for a time-consistent problem, which takes the form $\max_{u\in\mathbb{U}} \big[{\cal A}^{u} V^{\hat{\mathbf{u}}}(t,x) + C^{u}(t,x)\big]=0$, equation \eqref{eq:EHJBEqMain} has three additional terms in the first, second, and third lines of the equation, respectively. Here and hereafter, when $C^{\tau,y,u}(t,x)$ does not depend on $(\tau,y)$, we simply drop the superscript $(\tau,y)$. Similar notations apply to $H^{\tau,y}(t,x)$ and to the case when there is no dependence on $y$. Now, recall that for the objective function \eqref{eq:ObjFun}, time inconsistency arises from (i) the dependence of $C$, $F$, and $G$ on $(t,x)$ and (ii) the nonlinear dependence of $G\big(t, x, \mathbb{E}_{t,x} [ X^{{\mathbf{u}}}(T) ]\big)$ on $\mathbb{E}_{t,x} [ X^{{\mathbf{u}}}(T) ]$. If source (i) of time inconsistency is absent, the first and second additional terms in \eqref{eq:EHJBEqMain} will vanish. If source (ii) of time inconsistency is absent, the third additional term in \eqref{eq:EHJBEqMain} will disappear. In particular, without time inconsistency, the extended HJB equation \eqref{eq:EHJBEqMain} reduces to the classical HJB equation. Third, consider the case in which $C$, $F$, and $G$ do not depend on $x$ and $G\big(t, x, \mathbb{E}_{t,x} [ X^{{\mathbf{u}}}(T) ]\big)$ is linear in $\mathbb{E}_{t,x} [ X^{{\mathbf{u}}}(T) ]$. In this case, the second and third lines of \eqref{eq:EHJBEqMain} vanish and we can assume $G\equiv 0$ without loss of generality because $G$ can be combined with $F$.
As a result, the extended HJB equation \eqref{eq:EHJBEqMain} specializes to \begin{align} &\max_{u\in\mathbb{U}} \Big[{\cal A}^{u} V^{\hat{\mathbf{u}}}(t,x) + C^{t,u}(t,x)\Big]=h^{t}(t,x), \quad (t,x)\in [0,T)\times {\mathbb X},\notag \\ &V^{\hat{\mathbf{u}}}(T,x) = F(T,x),\; x\in {\mathbb X},\label{eq:EHJBEqSimplified} \end{align} where $h^{\tau}(t,x):=H^{\tau}_\tau(t,x)$ (with the subscript $\tau$ denoting the partial derivative with respect to $\tau$), which thus satisfies \begin{align} &{\cal A}^{\hat{\mathbf{u}}}h^{\tau}(t,x) + {C}^{\tau,\hat{\mathbf{u}}}_\tau(t,x) = 0, \quad (t,x)\in [0,T)\times {\mathbb X},\tau \in[0,T),\notag\\ & h^{\tau}(T,x) = F_\tau(\tau,x),\quad x\in {\mathbb X},\tau\in [0,T).\label{eq:EHJBEqExptDer} \end{align} \section{Discussions}\label{sec:Discussions} \subsection{Intra-Personal Equilibria with Fixed Initial Data} Consider an agent at time 0 with a fixed state $x_0$ who correctly anticipates that her self at each future time $t$ faces the problem \eqref{eq:ControlProblem} and who has no control over her future selves at any time. A strategy $\hat{\mathbf{u}}$ can be consistently implemented by the agent throughout the entire horizon $[0,T]$ if the agent has no incentive to deviate from it at any time {\em along the state path}. Actions that the agent might be taking were she not on the state path are irrelevant. To be more precise, for any fixed initial data $(0,x_0)$, we define $\hat{\mathbf{u}}$ to be an {\em intra-personal equilibrium starting from $(0,x_0)$} if \eqref{new requirement} holds for any $\mathbf{a}\in \mathbf{U}$, $t\in [0,T)$, and $x\in \mathbb{X}^{0,x_0,\hat{\mathbf{u}}}_t$, where $\mathbb{X}^{0,x_0,\hat{\mathbf{u}}}_t$ denotes the set of all possible states at time $t$ along the state path starting from $x_0$ at the initial time and under the strategy $\hat{\mathbf{u}}$. It is evident that the intra-personal equilibrium defined in Definition \ref{de:EquilibriumFirstOrd} is {\em universal} in that it is an equilibrium starting from {\em any} initial data $(0,x_0)$. On the other hand, starting from a fixed state $x_0$ at time 0, the state process in the future might not be able to visit the whole state space; so an equilibrium starting from $(0,x_0)$ is not necessarily universal, i.e., it is not necessarily an equilibrium when the agent starts from other initial data. For example, \citet{HeEtal2019:MedianMaximization} consider a continuous-time portfolio selection problem in which an agent maximizes the median of her terminal wealth. With a fixed initial wealth of the agent, the authors derive a set of intra-personal equilibrium strategies starting from this particular initial wealth level. They show that these strategies are no longer equilibria if the agent starts from some other initial wealth levels, and are in particular not universal equilibria in the sense of Definition \ref{de:EquilibriumFirstOrd}. The first study of intra-personal equilibria starting from fixed initial data dates back to \cite{PelegYaari1973:OnExistenceConsistent}. In a discrete-time setting, the authors propose that a strategy $(s^*_0,s^*_1,\dots)$, where $s^*_t$ stands for the agent's closed-loop strategy at time $t$, is an equilibrium strategy if for any $t$, $(s_0^*,\dots, s_{t-1}^*,s_t,s_{t+1}^*,\dots )$ is dominated by $(s_0^*,\dots, s_{t-1}^*,s_t^*,s_{t+1}^*,\dots )$ for any $s_t$.
They argue that the above definition is more desirable than the following one, which is based on a model in \cite{Pollak1968:ConsistentPlanning}: $(s^*_0,s^*_1,\dots)$ is an equilibrium strategy if for any time $t$, $(s_0,\dots, s_{t-1},s_t,s_{t+1}^*,\dots )$ is dominated by $(s_0,\dots, s_{t-1},s_t^*,s_{t+1}^*,\dots )$ for {\em any} $(s_0,\dots, s_t)$. It is clear that the equilibrium strategies considered by \cite{PelegYaari1973:OnExistenceConsistent} are the ones starting from fixed initial data, while those studied by \cite{Pollak1968:ConsistentPlanning} are universal. Recently, \citet{HeJiang2019:OnEquilibriumStrategies}, \citet{HanWong2020:TimeInconsistency}, and \citet{HernandezPossami2020:MyMyself} also consider intra-personal equilibria with fixed initial data. Moreover, \citet{HeJiang2019:OnEquilibriumStrategies} propose a formal definition of $\mathbb{X}^{0,x_0,\hat{\mathbf{u}}}_t$, calling it the set of reachable states. Finally, let us comment that the sufficient and necessary condition in Theorem \ref{Theorem:first order derivative} is still valid for intra-personal equilibria starting from fixed initial data $(0,x_0)$, provided that we replace ${\mathbb X}$ in this condition with the set of reachable states $\mathbb{X}^{0,x_0,\hat{\mathbf{u}}}_t$; see \citet{HeJiang2019:OnEquilibriumStrategies} for details. The extended HJB equation in Section \ref{subse:ExtendedHJB} can be revised and applied similarly. \subsection{Set of Alternative Strategies} In Definition \ref{de:EquilibriumFirstOrd}, the set of strategies that the agent can choose at time $t$ to implement for the period $[t,t+\epsilon)$, denoted as $\mathbf{D}$, is set to be the entire set of feasible strategies $\mathbf{U}$. This definition is used in \citet{Bjork2017:TimeInconsistent}, \cite{EkelandPirvu2008:InvestConsptnWithoutCommit}, and \cite{EkelandEtal2012:TimeConsistent}. In some other works, however, $\mathbf{D}$ is set to be the set of constant strategies $\mathbb{U}$; see for instance \cite{EkelandLazrak2006:BeingSeriousAboutNonCommitment,EkelandILazrakA:07ti,EkelandLazrak2010:GoldenRule}, \cite{BjordTMurgoci:08u}, and \cite{BasakSChabakauri2010:DynamicMeanVariance}. \citet{HeJiang2019:OnEquilibriumStrategies} show that the choice of $\mathbf{D}$ is irrelevant as long as it contains at least $\mathbb{U}$. Indeed, this can be seen from the observation in Theorem \ref{Theorem:first order derivative} that $\Gamma^{\tau,y,\hat{\mathbf{u}}}(t,x; \mathbf{a})=\Gamma^{\tau,y,\hat{\mathbf{u}}}(t,x; \mathbf{a}(t,x))$ for any $\mathbf{a} \in \mathbf{U}$. \citet{HeJiang2019:OnEquilibriumStrategies} also show that for strong intra-personal equilibrium, which will be introduced momentarily, the choice of $\mathbf{D}$ is relevant. \subsection{Regular and Strong Intra-Personal Equilibrium} As noted in Remark 3.5 of \cite{Bjork2017:TimeInconsistent}, condition \eqref{new requirement} does not necessarily imply that $J(t,x;{\mathbf{u}}_{t,\epsilon, \mathbf{a}} )$ is less than or equal to $J(t,x;\hat{\mathbf{u}} )$ for all sufficiently small $\epsilon>0$, and thus does not necessarily disincentivize the agent from deviating from $\hat{\mathbf{u}}$. For example, if $J(t,x;{\mathbf{u}}_{t,\epsilon, \mathbf{a}} )-J(t,x;\hat{\mathbf{u}} )=\epsilon^2$, then \eqref{new requirement} holds because $\lim_{\epsilon\downarrow 0}\epsilon^2/\epsilon=0$, but the agent can achieve a strictly larger objective value if she deviates from $\hat{\mathbf{u}}$ to $\mathbf{a}$ and thus is willing to do so.
To address the above issue, \cite{HuangZhou2018:StrongWeakEquilibria} and \citet{HeJiang2019:OnEquilibriumStrategies} propose the notion of {\em strong intra-personal equilibrium}: \begin{definition}[Strong Intra-personal Equilibrium]\label{de:EquilibriumExtd} $\hat{\mathbf{u}} \in \mathbf{U}$ is a {\em strong intra-personal equilibrium strategy} if for any $x\in {\mathbb X}$, $t\in[0,T)$, and $\mathbf{a} \in \mathbf{D}$, there exists $\epsilon_0 \in (0,T-t)$ such that \begin{align}\label{new requirementExtd} J(t,x;{\mathbf{u}}_{t,\epsilon,\mathbf{a}} ) -J(t,x;\hat{\mathbf{u}} ) \leq 0,\quad \forall \epsilon\in(0,\epsilon_0]. \end{align} \end{definition} It is straightforward to see that a strong intra-personal equilibrium is also an intra-personal equilibrium in the sense of Definition \ref{de:EquilibriumFirstOrd}, which we refer to as a {\em weak intra-personal equilibrium} in this subsection. \cite{HuangZhou2018:StrongWeakEquilibria} consider a stochastic control problem in which an agent can control the generator of a time-homogeneous, continuous-time, finite-state Markov chain at each time to maximize expected running reward in an infinite time horizon. Assuming that at each time the agent can implement a time-homogeneous strategy only, the authors provide a characterization of a strong intra-personal equilibrium and prove its existence under certain conditions. \citet{HeJiang2019:OnEquilibriumStrategies} follow the framework in Section \ref{sec:ExHJB} and derive two necessary conditions for a strategy to be a strong intra-personal equilibrium. Using these conditions, the authors show that a strong intra-personal equilibrium does not exist for the portfolio selection and consumption problems studied in \citet{EkelandPirvu2008:InvestConsptnWithoutCommit}, \citet{BasakSChabakauri2010:DynamicMeanVariance}, and \citet{BjorkEtal2011:MeanVariancewithStateDependentRiskAversion}. Motivated by this non-existence result, the authors propose the so-called {\em regular intra-personal equilibrium} and show that it exists for the above three problems and is stronger than the weak intra-personal equilibrium and weaker than the strong intra-personal equilibrium in general. \subsection{Existence and Uniqueness} In most studies on time-inconsistent problems in the literature, a {\em closed-form} strategy is constructed and verified to satisfy the sufficient and necessary condition \eqref{weak equilibrium condition} or the extended HJB equation \eqref{eq:EHJBEqMain}--\eqref{eq:EHJBEqMean}. The existence of intra-personal equilibrium in general is difficult to prove because it essentially relies on a fixed-point argument: For each guess of intra-personal equilibrium $\hat{\mathbf{u}}$, we first calculate $\Gamma^{\tau,y,\hat{\mathbf{u}}}$ in \eqref{weak equilibrium condition} and $H^{\tau,y}(t,x)$ and $g$ in \eqref{eq:EHJBEqExpt} and \eqref{eq:EHJBEqMean}, respectively, and then derive an updated intra-personal equilibrium, denoted as $\mathbb{T} \hat{\mathbf{u}}$, from the condition \eqref{weak equilibrium condition} or from the equation \eqref{eq:EHJBEqMain}. The existence of an intra-personal equilibrium then boils down to the existence of a fixed point of $\mathbb{T}$. The mapping $\mathbb{T}$ is highly nonlinear; so the existence of its fixed point is hard to establish. Additional difficulty is caused by the regularity conditions that we need to impose on $\hat{\mathbf{u}}$ to validate the sufficient and necessary condition \eqref{weak equilibrium condition} or the extended HJB equation \eqref{eq:EHJBEqMain}--\eqref{eq:EHJBEqMean}.
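The fixed-point iteration behind $\mathbb{T}$ can nonetheless be carried out numerically in simple discretized settings. The following is a minimal sketch (in Python) of the iteration $\hat{\mathbf{u}}_{k+1}=\mathbb{T}\hat{\mathbf{u}}_k$, implemented as iterated best responses for a discrete-time consumption problem with quasi-hyperbolic ($\beta$--$\delta$) discounting; the problem specification, the grids, and all names are our own illustrative choices and are not taken from any of the papers cited here.
\begin{verbatim}
import numpy as np

# Sketch: iterated best response u_{k+1} = T(u_k) for a beta-delta
# consumption problem. State: wealth x; action: consumption fraction a.
# Self t maximizes u(a*x) + beta * sum_{s>t} delta^(s-t) u(c_s),
# with future selves following the current policy guess.
T_, beta, delta, R = 20, 0.7, 0.95, 1.02
grid = np.linspace(0.1, 10.0, 200)           # wealth grid
acts = np.linspace(0.05, 1.0, 40)            # consumption fractions
u = np.log                                   # utility function

def continuation(pol):
    # U[t] = sum_{s >= t} delta^(s-t) u(c_s), following policy pol from t
    U = np.zeros((T_ + 1, len(grid)))
    U[T_] = u(grid)                          # consume everything at the end
    for t in range(T_ - 1, -1, -1):
        c = pol[t] * grid
        xn = np.clip((grid - c) * R, grid[0], grid[-1])
        U[t] = u(c) + delta * np.interp(xn, grid, U[t + 1])
    return U

pol = np.full((T_, len(grid)), 0.5)          # initial policy guess
for _ in range(200):
    U = continuation(pol)
    new = np.empty_like(pol)
    for t in range(T_):
        # best response of self t against future selves playing pol
        vals = [u(a * grid) + beta * delta *
                np.interp(np.clip((grid - a * grid) * R, grid[0], grid[-1]),
                          grid, U[t + 1]) for a in acts]
        new[t] = acts[np.argmax(vals, axis=0)]
    if np.array_equal(new, pol):             # fixed point of T reached
        break
    pol = new
\end{verbatim}
In the notation of this subsection, each pass of the loop plays the role of computing $H^{\tau,y}$ and $g$ under the current guess and then maximizing $\Pi$ pointwise; whether (and to what) such a loop converges is precisely the delicate existence question discussed above.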
We are aware of only a few works on the existence of intra-personal equilibria in continuous time. \citet{Yong2012:TimeInconsistent} proposes an alternative approach to defining the strategy of a sophisticated agent, which will be discussed in detail in Section \ref{sec:Discretization}. Assuming that $G\equiv 0$, that $C$ and $F$ are independent of $x$ in the objective function \eqref{eq:ObjFun}, and that $\sigma(t,x,u)$ in the controlled diffusion process \eqref{eq:ControlProblem} is independent of the control $u$ and nondegenerate, \citet{Yong2012:TimeInconsistent} proves the existence of the sophisticated agent's strategy, which in turn implies the existence of an intra-personal equilibrium under Definition \ref{de:EquilibriumFirstOrd}. \citet{WeiEtal2017:TimeInconsistent} and \citet{WangYong2019:TimeInconsistent} extend the result of \citet{Yong2012:TimeInconsistent} by generalizing the objective function; however, for the existence of intra-personal equilibria, they still need to assume that the volatility $\sigma$ is independent of the control and nondegenerate. \citet{HernandezPossami2020:MyMyself} study intra-personal equilibria in a non-Markovian setting, where they consider a non-Markovian version of the objective function in \citet{Yong2012:TimeInconsistent} and assume the drift $\mu$ of the controlled process to be in the range of the volatility matrix at each time. The authors prove the existence of intra-personal equilibria when the volatility $\sigma$ is independent of the control. Intra-personal equilibria can be non-unique; see \citet{EkelandLazrak2010:GoldenRule}, \citet{CaoWerning2016:DynamicSavingsDisagreements}, and \citet{HeEtal2019:MedianMaximization}. For some problems, however, uniqueness has been established in the literature. Indeed, \citet{Yong2012:TimeInconsistent}, \citet{WeiEtal2017:TimeInconsistent}, \citet{WangYong2019:TimeInconsistent}, and \citet{HernandezPossami2020:MyMyself} prove the uniqueness in various settings with the common assumption that the volatility $\sigma$ is independent of the control. \subsection{Non-Markovian Strategies} In most studies on time-inconsistent problems, where the controlled state processes are Markovian, the search for intra-personal equilibrium is restricted to the set of Markovian strategies, i.e., strategies that are functions of time $t$ and the {\it current} state value $x$. Motivated by some practical problems such as rough volatility models and principal--agent problems, \citet{HanWong2020:TimeInconsistency} and \citet{HernandezPossami2020:MyMyself} define and search for intra-personal equilibria in the class of non-Markovian or path-dependent strategies, i.e., ones that depend on time $t$ and the whole path of the controlled state up to $t$. \section{Closed-Loop versus Open-Loop Intra-Personal Equilibria}\label{sec:OpenLoop} A {\em closed-loop} or {\it feedback} control strategy is a function $\mathbf{u}$ that maps time $t$ and the controlled state path $(x_s)_{s\le t}$ up to $t$ to the space of actions. As a result, the action taken by an agent under such a strategy is $\mathbf{u}(t, (x_s)_{s\le t})$. An {\em open-loop} control is a collection of actions over time and states of nature, $(u(t,\omega))_{t\ge 0}$, where $u(t,\omega)$ is the action to be taken at time $t$ and in scenario $\omega$, {\em regardless} of the state path $(x_s)_{s\le t}$.
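In this notation, a closed-loop strategy $\mathbf{u}$ induces, for each initial pair $(t,x)$, an open-loop control process by substituting the controlled state path into the feedback map: \begin{align*} u(s,\omega) := \mathbf{u}\big(s, (X^{\mathbf{u}}(r,\omega))_{r\le s}\big),\quad s\in[t,T]. \end{align*} Keeping this substitution in mind is helpful for the comparison that follows.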
For classical time-consistent control problems and under some technical assumptions, the state-control paths under the optimal open-loop control and under the optimal closed-loop control strategy are the same if the controlled system starts from the same initial time and state; see for instance \citet{YongJZhouXY:99sc}. In Section \ref{sec:ExHJB}, intra-personal equilibrium is defined for closed-loop control strategies, which is also the approach taken by most studies on time-inconsistent problems in the literature. In some other works, intra-personal equilibrium is defined for open-loop controls; see for instance \citet{HuEtal2012:TimeInconsistentStochasticLQ}, \citet{HuEtal2017:TimeInconsisten}, \citet{LiEtal2019:EquilibriumStrategies}, and \citet{HuEtal2020:ConsistentInvestmentRDU}. Formally, under the same probabilistic framework as in Section \ref{subse:framework}, we represent an open-loop strategy by a progressively measurable process $(u(t))_{t\ge 0}$ that takes values in $\mathbb{U}$. The controlled state process $X^{u}$ takes the form \begin{align*} dX^{u}(s)=\mu(s, X^{u}(s),u(s) )ds +\sigma(s,X^{u}(s), u(s) )dW(s),\; s\in[t,T];\; X^{u}(t)=x. \end{align*} Denote by ${\cal U}$ the set of feasible open-loop controls, i.e., the set of progressively measurable processes on $[0,T]$ satisfying certain integrability conditions. At time $t$ with state $x$, suppose the agent's objective is to maximize $J(t,x;u(\cdot))$ by choosing $u(\cdot)\in {\cal U}$. Given $\hat u(\cdot)\in {\cal U}$, for any $t\in [0,T)$, $x\in {\mathbb X}$, $\epsilon \in (0,T-t)$, and $a(\cdot)\in {\cal U}$, define \begin{align}\label{eq:PerturbatedPolicyOpenLoop} u_{t,\epsilon,a}(s) := \begin{cases} a(s), & s\in[t,t+\epsilon) \\ \hat u(s),&s\notin [t,t+\epsilon). \end{cases} \end{align} Suppose that at time $t$ with state $x$, the agent chooses an open-loop control $a(\cdot)$, but is only able to implement it in the period $[t,t+\epsilon)$. Anticipating that her future selves will take the given control $\hat u(\cdot)$, the agent expects herself to follow $u_{t,\epsilon,a}$ in the period $[t,T]$. \begin{definition}[Open-Loop Intra-Personal Equilibrium]\label{de:EquilibriumOpenLoop} $\hat u(\cdot)\in {\cal U}$ is an {\em open-loop intra-personal equilibrium} if for any $x\in {\mathbb X}$, $t\in[0,T)$, and $a \in {\cal U}$ that is constant in a small period after $t$, we have \begin{align}\label{eq:OpenLoopEquilibriumCond} \limsup_{\epsilon\downarrow 0}\frac{ J(t,x;u_{t,\epsilon,a}(\cdot) ) -J(t,x;\hat u(\cdot) ) }{\epsilon}\leq 0. \end{align} \end{definition} The above is analogous to the definition of an intra-personal equilibrium for closed-loop strategies. However, there is a subtle yet crucial difference between the two definitions. In the definition for open-loop controls, the perturbed control $u_{t,\epsilon,a}(s)$ defined by (\ref{eq:PerturbatedPolicyOpenLoop}) and the original control $\hat u$ are identical on $[t+\epsilon,T]$ as two stochastic processes. In other words, the perturbation in the small time period $[t,t+\epsilon)$ will not affect the control process beyond this period. This is not the case for the closed-loop counterpart, because the perturbation (\ref{eq:PerturbatedPolicyFeedback}) on $[t,t+\epsilon)$ changes the control in this period, which will alter the state process in $[t,t+\epsilon)$ and in particular the state at time $t+\epsilon$. This in turn will change the control {\it process} on $[t+\epsilon,T]$ upon substituting the state process into the feedback strategy.
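The last point can be made explicit: under the closed-loop perturbation \eqref{eq:PerturbatedPolicyFeedback}, the control processes realized on $[t+\epsilon,T]$ by the perturbed and the original strategies are \begin{align*} \hat{\mathbf{u}}\big(s, X^{{\mathbf{u}}_{t,\epsilon,\mathbf{a}}}(s)\big) \quad\text{and}\quad \hat{\mathbf{u}}\big(s, X^{\hat{\mathbf{u}}}(s)\big),\quad s\in[t+\epsilon,T], \end{align*} which differ in general because the two state processes already differ at time $t+\epsilon$; under the open-loop perturbation \eqref{eq:PerturbatedPolicyOpenLoop}, by contrast, the two control processes coincide pathwise on $[t+\epsilon,T]$.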
To characterize open-loop intra-personal equilibria, we only need to compute the limit on the left-hand side of \eqref{eq:OpenLoopEquilibriumCond}. This limit can be evaluated by applying the spike variation technique that is used to derive Pontryagin's maximum principle for time-consistent control problems in continuous time \citep{YongJZhouXY:99sc}. As a result, open-loop intra-personal equilibria can be characterized by a flow of forward-backward stochastic differential equations (FBSDEs); see \citet{HuEtal2012:TimeInconsistentStochasticLQ} for more details. In contrast, the spike variation technique no longer works for closed-loop equilibria because the perturbed control process is different from the original one beyond the small time period for perturbation, as discussed above. This discussion suggests that closed-loop and open-loop equilibria are likely different. This is confirmed by \citet{HuEtal2012:TimeInconsistentStochasticLQ}. The authors consider a mean-variance portfolio selection problem, where an agent decides the dollar amount invested in a stock at each time, and derive an open-loop equilibrium; see Section 5.4.1 therein. They then compare this equilibrium with the closed-loop equilibrium derived by \citet{BjorkEtal2011:MeanVariancewithStateDependentRiskAversion} for the same portfolio selection problem, and find that the state--control paths under these two equilibria are different. It can be argued that closed-loop strategies are preferred to open-loop ones for three reasons. First, in many problems, agents' actions naturally depend on some state variables. For example, in a consumption problem, an agent's consumption at any time is more likely to depend directly on her wealth at that time. If her wealth suddenly increases, she would probably consume more. Second, closed-loop intra-personal equilibrium is invariant to the choice of control variables while open-loop intra-personal equilibrium might not be. For example, in a portfolio selection problem where an agent decides the allocation of her wealth between a risk-free asset and a risky stock, the decision variable can be set to be the dollar amount invested in the stock or the percentage of wealth invested in the stock. Suppose $\hat {\mathbf{u}}$ is a closed-loop intra-personal equilibrium representing the percentage of wealth invested in the stock. Then, we have \begin{align}\label{eq:IntraPEquiClosedLoopCond} \limsup_{\epsilon\downarrow 0}\frac{ J(t,x;{\mathbf{u}}_{t,\epsilon, \mathbf{a}} ) -J(t,x;\hat{\mathbf{u}} ) }{\epsilon}\leq 0 \end{align} for all $t\in [0,T)$, $x\in {\mathbb X}$, and $\mathbf{a}\in \mathbf{U}$, where the state variable $x$ represents the agent's wealth. Now, suppose we represent the agent's decision by the dollar amount invested in the risky stock, and denote a control strategy as $\boldsymbol{\pi}$. Then, the agent's objective function is $\tilde J(t,x;\boldsymbol{\pi}) = J(t,x;\mathbf{u})$ with $\mathbf{u}(s,y) = \boldsymbol{\pi}(s,y)/y$. Condition \eqref{eq:IntraPEquiClosedLoopCond} implies that \begin{align*} \limsup_{\epsilon\downarrow 0}\frac{ \tilde J(t,x;\boldsymbol{\pi}_{t,\epsilon, \tilde{\mathbf{a}}} ) -\tilde J(t,x;\hat{\boldsymbol{\pi}} ) }{\epsilon}\leq 0, \end{align*} for any $t\in [0,T)$, $x\in {\mathbb X}$, and strategy $\tilde{\mathbf{a}}$ that represents the dollar amount invested in the stock, where $\hat{\boldsymbol{\pi}}(s,y):=y\hat{\mathbf{u}}(s,y)$ and $\boldsymbol{\pi}_{t,\epsilon, \tilde{\mathbf{a}}}$ is defined similarly to ${\mathbf{u}}_{t,\epsilon, \mathbf{a}}$.
Thus, $\hat{\boldsymbol{\pi}}$, which is the dollar amount investment strategy corresponding to the percentage investment strategy $\hat {\mathbf{u}}$, is also an intra-personal equilibrium. By contrast, for the mean-variance portfolio selection problem studied by \citet{HuEtal2012:TimeInconsistentStochasticLQ}, where the agent's decision is the dollar amount invested in the stock, the open-loop intra-personal equilibrium yields a different control-state path from the one yielded by its closed-loop counterpart derived by \citet{BjorkEtal2011:MeanVariancewithStateDependentRiskAversion}. If we change the agent's decision variable to the percentage of wealth invested in the stock, the open-loop intra-personal equilibrium and the closed-loop intra-personal equilibrium in \citet{BjorkEtal2011:MeanVariancewithStateDependentRiskAversion} yield the same control-state path. This implies that open-loop equilibria depend on the choice of control variables. Third, open-loop intra-personal equilibrium may not be well-posed for some problems. Consider the discrete-time version of the consumption problem studied in \citet{Strotz1955:MyopiaInconsistency}: An agent decides the amount of consumption $C_t$ at each time $t=0,1,\dots, T$ with the total budget $x_0$, i.e., $\sum_{t=0}^T C_t =x_0$. For this problem, any consumption plan $(\hat C_t)_{t\ge 0}$ is an open-loop intra-personal equilibrium. Indeed, at each time $t$, anticipating that her future selves will consume $\hat C_s,s=t+1,\dots, T$, the only amount of consumption $C_t$ that the agent can choose at time $t$ is $\hat C_t$ due to the budget constraint $(\sum_{s=0}^{t-1}\hat C_s) + C_t + (\sum_{s=t+1}^T\hat C_s)=x_0$. This leads to a trivial definition of intra-personal equilibrium. The above issue can be rectified if we use closed-loop strategies. To see this, we set $x_t$ to be the agent's remaining budget at time $t$ before the consumption at that time. For closed-loop intra-personal equilibrium, we consider a mapping from time $t$ and the remaining budget $x_t$ to the consumption amount. As a result, if the agent consumes more at time $t$, her future selves will consume less because the remaining budget in the future becomes smaller; consequently, the budget constraint is still satisfied. To elaborate, suppose the agent's future selves' strategies are to consume a fraction $\hat k_s$ of wealth at time $s$, $s=t+1,\dots, T$, with $\hat k_s\in [0,1],s=t+1,\dots, T-1$ and $\hat k_T=1$. Then, given that the agent at time $t$ consumes any amount $C_t\in [0,x_t]$, the agent's consumption in the future is $C_s = \hat k_s x_s$, $s=t+1,\dots, T$, where $x_s = x_{s-1}-C_{s-1}$, $s=t+1,\dots, T$. As a result, the aggregate consumption from time $t$ to the end is $\sum_{s=t}^TC_s = x_t$. Recall that the aggregate consumption strictly prior to time $t$ is $x_0-x_t$; so the aggregate consumption throughout the entire horizon is $x_0$, satisfying the budget constraint. Thus, at each time $t$, the agent can consume any amount up to her wealth level at that time, and her future selves will adjust their consumption according to a given strategy so that the budget constraint is still satisfied. Finally, we establish a connection between closed-loop and open-loop intra-personal equilibria. If a closed-loop equilibrium $\hat {\mathbf{u}}$ is independent of the state variable $x$, then it follows from the definition that it is also an open-loop equilibrium.
For a general closed-loop equilibrium $\hat {\mathbf{u}}$, we can consider the following controlled state process: \begin{align*} d\hat X^{v}(s)=\hat \mu(s, \hat X^{v}(s), v(s) )ds+\hat \sigma(s,\hat X^{v}(s), v(s) )dW(s),\; s\in[t,T];\; \hat X^{v}(t)=x, \end{align*} where $\hat \mu(s,y,v):=\mu(s,y,\hat{\mathbf{u}}(s,y)+v)$, $\hat \sigma(s,y,v) : =\sigma(s,y,\hat{\mathbf{u}}(s,y)+v)$, and $v(\cdot)$ is a progressively measurable control process. We further consider the following objective function: \begin{align*} \hat J(t,x;v(\cdot)):=\mathbb{E}_{t,x}\left[\int_t^T \hat C\big(t,x, s, \hat X^{v}(s), v(s)\big)ds+F\big(t, x, \hat X^{v}(T) \big)\right]\notag \\ +G\big(t, x, \mathbb{E}_{t,x} [ \hat X^{v}(T) ] \big), \end{align*} where $\hat C(t,x,s,y,v):=C(t,x,s,y,\hat{\mathbf{u}}(s,y)+v)$. Then, by definition, $\hat {\mathbf{u}}$ is a closed-loop equilibrium if and only if $\hat v(\cdot)\equiv 0$ is an open-loop equilibrium for the problem of maximizing $\hat J(t,x;v(\cdot))$ in $v(\cdot)$ with the controlled state process $\hat X^{v}$. In particular, we can characterize $\hat {\mathbf{u}}$ by a flow of forward-backward SDEs by applying the spike variation technique. In order to apply this technique, however, we need $\hat \mu(s,y,v)$ and $\hat \sigma(s,y,v)$ to be twice differentiable in $y$, which in turn requires $\hat {\mathbf{u}}$ to be twice differentiable; see \citet{YongJZhouXY:99sc} for the detailed regularity conditions needed for the spike variation technique. Thus, the spike variation technique does not seem to be advantageous over the approaches reviewed in Section \ref{sec:ExHJB}. \section{Optimal Stopping}\label{sec:Stopping} An optimal stopping problem is one of searching for an optimal random time $\tau$ at which to stop a given, {\it uncontrollable} process $(X_t)_{t\ge 0}$ (taking values in a state space ${\mathbb X}$), within the set of stopping times with respect to the filtration generated by the process. It is well known that if the objective function of the optimal stopping problem depends on the path of $(X_t)_{t\ge 0}$ up to the stopping time only, this problem can be ``embedded" into a general control problem with (i) a closed-loop control strategy $\mathbf{u}$ taking binary values 0 and 1, representing the action of stopping and not stopping $(X_t)_{t\ge 0}$, respectively; and (ii) a controlled state process $(\tilde X^{\mathbf{u}}_t)_{t\ge 0}$ that is set to be $(X_t)_{t\ge 0}$ until the first time the control path under $\mathbf{u}$ takes value 0 and is set to be an {\em absorbing state} afterwards; see for instance Section 3.4 of \citet{Bertsekas2017:DynamicProgrammingOptimalControl}. We call the control strategy $\mathbf{u}$ associated with a stopping time $\tau$ in the above embedding a {\em stopping rule}, which maps each pair of time $t$ and a path of the process $X$ up to time $t$ to $\{0,1\}$. A stopping time $\tau$ is {\em Markovian} if the associated stopping rule is Markovian, i.e., it is a mapping from the time--state space to $\{0,1\}$. With a Markovian stopping time, at each time $t$, given that the process has not yet been stopped, whether to stop at $t$ depends on the value of the process at $t$ only. In view of the above embedding, intra-personal equilibrium stopping rules can be defined naturally for time-inconsistent stopping problems; see for instance \citet{TWZ}, \citet{Christensen2018finding}, \citet{EbertEtal2017:Discounting}, and \citet{ChristensenLindensjo2020:OnTimeInconsistentStopping}.
In particular, \citet{TWZ} show that the smooth pasting principle, which is the main approach used to construct explicit solutions for classical time-consistent optimal stopping, may fail to find an equilibrium when one merely changes the exponential discounting to a non-exponential one while keeping everything else the same. The authors also construct an explicit example in which no equilibrium exists. These results caution against blindly extending the classical approach for time-consistent stopping to its time-inconsistent counterpart. By exploiting special structures of stopping problems in continuous time, \citet{HuangNguyenHuu2018:TimeConsistent} propose an alternative approach to defining the optimal stopping rule for a sophisticated agent; see also applications of this approach in \citet{HuangEtal2017:StoppingBehaviors}, \citet{EbertStrack2016:NeverEverGettingStarted}, and \citet{HuangYu2021:OptimalStopping}. Precisely, consider a Markov state process \begin{align*} dX_t = \mu(t,X_t)dt + \sigma(t,X_t)dW_t \end{align*} in $\mathbb{R}^n$, where $(W_t)_{t\ge 0}$ is a $d$-dimensional standard Brownian motion and $\mu$ and $\sigma$ are functions of time $t$ and state $x$ taking values in $\mathbb{R}^n$ and $\mathbb{R}^{n\times d}$, respectively. Following the settings in the above papers, we consider Markovian stopping times only in the following presentation, but the case of non-Markovian stopping times can be investigated similarly. At each time $t$ with state $x$, given that the state process has not been stopped, the agent's goal is to choose a Markovian stopping time $\tau\in [t,T]$ to maximize an objective value $J(t,x;\tau)$. Here, $J(t,x;\tau)$ can be of the form $\mathbb{E}_{t,x}\left[\int_t^\tau g(t,x,s,X_s)ds + h(t,x,\tau, X_\tau)\right]$ for some functions $g$ and $h$, or be a functional of the distribution of $X_\tau$ conditional on $X_t= x$. Recall the embedding of optimal stopping problems into a general control framework and the stopping rule associated with each stopping time, as discussed at the beginning of the present section. With a slight abuse of notation, we use $\tau$ to denote both a stopping time and a stopping rule. Let us now consider a given stopping rule $\tau$ and the current time--state pair $(t,x)$. If the agent decides to stop, then she has the immediate reward $J(t,x;t)$. If the agent decides not to stop at $t$ but expects her future selves to still follow the original rule $\tau$, then she will stop at time ${\cal L}^*\tau$, the first time $s>t$ at which $\tau$ would stop the process. In this case the objective value is $J(t,x;{\cal L}^*\tau)$. Then, the optimal action of the agent at time $t$ with state $x$ is to stop if $J(t,x;t)>J(t,x;{\cal L}^*\tau)$, to continue if $J(t,x;t)<J(t,x;{\cal L}^*\tau)$, and to follow the originally assigned stopping rule $\tau$ in the break-even case $J(t,x;t)=J(t,x;{\cal L}^*\tau)$. The above plan across all times $t$ and states $x$ constitutes a {\it new} stopping rule, denoted as $\Theta \tau$, which can be proved to be feasible in the sense that it can generate stopping times; see \citet{HuangNguyenHuu2018:TimeConsistent} and \citet{HuangEtal2017:StoppingBehaviors}. The above game-theoretic thinking shows that for any arbitrarily given stopping rule $\tau$, at any time $t$ with any state $x$, the agent finds $\Theta\tau$ to be always no worse than $\tau$, {\it assuming} that her future selves will follow $\tau$.
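The operator $\Theta$ is directly computable once time and state are discretized, which also makes the iteration $\Theta^n\tau$ easy to experiment with. The following minimal sketch (in Python) iterates $\Theta$ for stopping a symmetric random walk under a non-exponential discount function; the discretization, the payoff specification, and all names are our own illustrative choices rather than an implementation of the papers cited above.
\begin{verbatim}
import numpy as np

# Sketch: iterate Theta on stopping rules for a symmetric random walk
# X_{s+1} = X_s +/- 1 with objective J(t,x;tau) = E[h(tau - t) u(X_tau)],
# where h is a non-exponential discount function (source of inconsistency).
T_ = 10
h = lambda s: 1.0 / (1.0 + 0.5 * s)            # hyperbolic-type discount
u = lambda x: np.maximum(x, 0.0)               # reward of stopping at x
states = np.arange(-T_, T_ + 1)                # state x <-> index x + T_

def J_continue(stop, t, x):
    # Value at (t, x) of continuing now and then following rule `stop`,
    # i.e., stopping at L*tau, from the perspective of self t.
    W = h(T_ - t) * u(states.astype(float))    # forced stop at horizon T_
    for s in range(T_ - 1, t, -1):             # backward over s = T_-1,...,t+1
        cont = W.copy()
        cont[1:-1] = 0.5 * (W[2:] + W[:-2])    # E[W(X_{s+1}) | X_s]
        W = np.where(stop[s], h(s - t) * u(states), cont)
    i = x + T_
    return 0.5 * (W[i + 1] + W[i - 1])

def Theta(stop):
    new = stop.copy()
    for t in range(T_):
        for x in range(-t, t + 1):             # states reachable from x0 = 0
            jc = J_continue(stop, t, x)
            js = u(float(x))                   # J(t, x; t) = h(0) u(x)
            if js > jc:   new[t, x + T_] = True
            elif js < jc: new[t, x + T_] = False
            # break-even case: keep the original prescription of `stop`
    return new

stop = np.ones((T_, len(states)), dtype=bool)  # initial rule: always stop
for _ in range(T_ + 1):
    nxt = Theta(stop)
    if np.array_equal(nxt, stop):              # Theta(tau) = tau
        break
    stop = nxt
\end{verbatim}
Each pass of the outer loop applies $\Theta$ once, and the loop terminates as soon as a fixed point of $\Theta$ is reached.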
An equilibrium stopping rule $\tau$ can thus be defined as one that cannot be strictly improved by applying $\Theta$. Following \citet{BayraktarEtal2019:OnTheNotions}, we refer to it as a {\em mild intra-personal equilibrium} stopping rule: \begin{definition}[Mild Intra-Personal Equilibrium]\label{de:EquiStoppingIter} A stopping rule $\tau$ is a mild intra-personal equilibrium if $\Theta \tau=\tau$. \end{definition} So a mild intra-personal equilibrium is a fixed point of the operator $\Theta$. If $\tau$ stops the process at every time and every state, then it is straightforward to see that ${\cal L}^*\tau=\tau$. Consequently, by definition $\Theta \tau=\tau$ and thus $\tau$ is a mild intra-personal equilibrium. In other words, following Definition \ref{de:EquiStoppingIter}, immediate stop is {\it automatically} a (trivial) mild intra-personal equilibrium. For a general stopping rule $\tau$, consider any time $t$ and state $x$ in the interior of the stopping region of $\tau$, where the stopping region refers to the set of time-state pairs at which the stopping rule $\tau$ would stop the process. Then, it is also easy to see that ${\cal L}^*\tau = \tau$ at time $t$ and state $x$, so one should immediately stop under $\Theta\tau$ as well. As a result, the stopping region of $\Theta\tau$ is at least as large as that of $\tau$, if we ignore the time-state pairs that are on the boundary of the stopping region of $\tau$. Therefore, we expect the iterative sequence $\Theta^n \tau$ to converge as $n\rightarrow \infty$, and the limit $\tau^*$ satisfies $\tau^* = \Theta \tau^*$ and thus is a mild intra-personal equilibrium. It is, however, mathematically challenging to formalize the above heuristic derivation. Rigorous proofs have been established in various settings by \citet{HuangNguyenHuu2018:TimeConsistent}, \citet{HuangEtal2017:StoppingBehaviors}, and \citet{HuangYu2021:OptimalStopping}. The above iterative algorithm, which generates a sequence $\Theta^n\tau,\; n=0,1,\dots$, not only yields a mild intra-personal equilibrium as the limit of the sequence, but also has a clear economic interpretation: each application of $\Theta$ corresponds to an additional level of strategic reasoning; see \citet{HuangNguyenHuu2018:TimeConsistent} and \citet{HuangEtal2017:StoppingBehaviors} for elaborations. As discussed above, immediate stop is always a mild equilibrium; so it is expected that there exist multiple mild intra-personal equilibrium stopping rules; see \citet{HuangNguyenHuu2018:TimeConsistent} and \citet{HuangEtal2017:StoppingBehaviors}. To address the issue of multiplicity, \citet{HuangZhou2019:OptimalEquilibria} and \citet{HuangWang2020:OptimalEquilibria} consider, in the setting of infinite-horizon, continuous-time optimal stopping under non-exponential discounting, the ``optimal" mild intra-personal equilibrium stopping rule $\tau^*$, which achieves the maximum of $J(t,x;\tau)$ over $\tau\in {\cal E}$ for all $t\in [0,T)$, $x\in {\mathbb X}$, where ${\cal E}$ is the set of all mild intra-personal equilibrium stopping rules. \citet{BayraktarEtal2019:OnTheNotions} compare mild intra-personal equilibrium stopping rules with weak (respectively strong) intra-personal equilibrium stopping rules obtained by embedding optimal stopping into stochastic control and then applying Definition \ref{de:EquilibriumFirstOrd} (respectively Definition \ref{de:EquilibriumExtd}).
Assuming that the objective function is the product of a discount function and a payoff function of a Markov process taking values in a finite or countably infinite state space, the authors prove that the optimal mild intra-personal equilibrium is a strong intra-personal equilibrium. \section{Discretization Approach}\label{sec:Discretization} In the discrete-time setting, an intra-personal equilibrium strategy of a sophisticated agent can be easily defined and derived in a backward manner starting from the last period. Thus, for a continuous-time problem, it is natural to discretize time and then pass to the limit. Specifically, one partitions the continuous-time period $[0,T]$ into a finite number of subperiods, assumes the agent is able to commit in each subperiod but not beyond it, and computes the strategy chosen by the agent. Sending the length of the longest subperiod in the partition to zero, the limit of the above strategy, if it exists, can be regarded as the strategy of a sophisticated agent for the continuous-time problem. This idea was first employed by \cite{Pollak1968:ConsistentPlanning} to study the consumption problem of \citet{Strotz1955:MyopiaInconsistency} and has recently been revisited and extensively studied by a series of papers; see for instance \cite{Yong2012:TimeInconsistent}, \cite{WeiEtal2017:TimeInconsistent}, \cite{MeiYong2019:EquilibriumStrategies}, and \cite{WangYong2019:TimeInconsistent}. To be precise, consider the control problem in Section \ref{sec:ExHJB} and assume that in the objective function \eqref{eq:ObjFun}, $C$ and $F$ do not depend on $x$ and $G\equiv 0$. For a partition $\Pi$ of $[0,T]$: $0=t_0<t_1<\dots <t_{N-1}<t_N=T$, we denote $\|\Pi\|:=\max_{k=1,\dots, N}|t_{k}-t_{k-1}|$. A control strategy $\hat{\mathbf{u}}^\Pi$ is an intra-personal equilibrium with respect to the partition $\Pi$ if \begin{align}\label{eq:EquiDiscretizationCond} J(t_k,x_k;\hat{\mathbf{u}}^\Pi)\ge J(t_k,x_k;\mathbf{u}^\Pi_{k,\mathbf{a}}) \end{align} for any $k=0,1,\dots, N-1$, any reachable state $x_k$ at time $t_k$ under $\hat{\mathbf{u}}^{\Pi}$, and any strategy $\mathbf{a}$, where $\mathbf{u}^\Pi_{k,\mathbf{a}}(s,\cdot) := \mathbf{a}(s,\cdot)$ for $s\in [t_k,t_{k+1})$ and $\mathbf{u}^\Pi_{k,\mathbf{a}}(s,\cdot) = \hat{\mathbf{u}}^\Pi(s,\cdot)$ for $s\in [t_{k+1},T)$. In other words, $\hat{\mathbf{u}}^\Pi(s,\cdot),s\in[t_k,t_{k+1})$, is optimal for an agent who can commit in the period $[t_k,t_{k+1})$ and anticipates that her future selves will take the strategy $\hat{\mathbf{u}}^\Pi$ beyond time $t_{k+1}$. In the aforementioned literature, the authors define a strategy $\hat{\mathbf{u}}$ to be a {\em limiting intra-personal equilibrium} if there exists a sequence of partitions $(\Pi_m)_{m\in \mathbb{N}}$ with $\lim_{m\rightarrow \infty}\|\Pi_m\|=0$ such that the state process, control process, and continuation value process under a certain intra-personal equilibrium with respect to $\Pi_m$ converge to those under $\hat{\mathbf{u}}$, respectively, as $m\rightarrow \infty$. Assuming that the diffusion coefficient of the controlled state process is independent of the control and nondegenerate and that some other conditions hold, \citet{WeiEtal2017:TimeInconsistent} prove the above convergence for any sequence of partitions with mesh size going to zero, and the limit of the continuation value function satisfies a flow of PDEs. Moreover, this flow of PDEs admits a unique solution, so the limiting intra-personal equilibrium uniquely exists.
Furthermore, the limiting equilibrium is also an equilibrium under Definition \ref{de:EquilibriumFirstOrd}. Whether the equilibrium with respect to $\Pi$ converges when $\|\Pi\|\rightarrow 0$ for a general time-inconsistent problem, however, is still unknown. Moreover, the definition of this equilibrium relies on the assumptions that $C$ and $F$ do not depend on $x$ and $G\equiv 0$. Otherwise, for a given partition $\Pi$, the optimal strategy that the agent at time $t_k$ implements in the subperiod $[t_k,t_{k+1})$ is {\em semi-Markovian}: the agent's action at time $s\in [t_k,t_{k+1})$ is a function of $s$, the state at $s$, and the state at $t_k$. As a result, the intra-personal equilibrium with respect to $\Pi$ is non-Markovian; so we cannot restrict limiting equilibria to be Markov strategies. \section{Applications}\label{sec:Applications} \subsection{Present-bias Preferences} Present-biased preferences, often modeled by hyperbolic discounting, refer to the following observation in intertemporal choice: when considering time preferences between two moments, individuals become more impatient when the two moments are closer to the present time. \citet{Thaler1981:SomeEmpiricalEvidence} provides an illustrative example of present-biased preferences: some people may prefer an apple today to two apples tomorrow, but very few people would prefer an apple in a year to two apples in a year plus one day. Noted as early as in \citet{Strotz1955:MyopiaInconsistency}, present-biased preferences lead to time inconsistency. For example, consider an agent whose time preferences for having apples are as described in the above illustrative example by \citet{Thaler1981:SomeEmpiricalEvidence}. At time $0$, faced with Option A of having one apple at time $t=365$ (days) and Option B of having two apples at time $s=366$ (days), the agent chooses Option B. When time $t=365$ arrives, however, if the agent gets to choose again, she would choose Option A. This shows that the agent in the future will change her actions planned today; hence time inconsistency is present. For a review of the literature on present-biased preferences, see \citet{FrederickEtal2002:TimePreferences}. In a time-separable discounted utility model, present-biased preferences can be modeled by a non-exponential discount function. For example, consider an intertemporal consumption model in continuous time for an agent. The agent's preference value of a random consumption stream $(C_s)_{s\in [t,T]}$ can be represented as \begin{align}\label{eq:ConsumptionHBD} \mathbb{E}_t\left[\int_t^Th(s-t)u(C_s)ds\right], \end{align} where $u$ is the agent's utility function, $h$ is the agent's discount function, and $\mathbb{E}_t$ denotes the expectation conditional on all the information available at time $t$. To model present-biased preferences, we assume $h(s+\Delta)/h(s)$ to be {\it strictly} increasing in $s\ge 0$ for any fixed $\Delta >0$; this assumption excludes the standard exponential discount function. An example is the generalized hyperbolic discount function proposed by \citet{LoewensteinPrelec1992:Anomalies}: $h(s)= (1+\alpha s)^{-\beta/\alpha},s\ge 0$, where $\alpha>0$ and $\beta>0$ are two parameters. \citet{EbertEtal2017:Discounting} introduce a class of weighted discount functions that is broad enough to include most commonly used non-exponential discount functions in finance and economics.
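To see why the above assumption captures present bias and why it excludes exponential discounting, one can compute the ratio explicitly for the generalized hyperbolic function: \begin{align*} \frac{h(s+\Delta)}{h(s)} = \left(\frac{1+\alpha s}{1+\alpha s+\alpha\Delta}\right)^{\beta/\alpha}, \end{align*} which is strictly increasing in $s$ and tends to 1 as $s\rightarrow\infty$: the relative penalty for waiting an extra $\Delta$ is heaviest near the present. For the exponential function $h(s)=e^{-\delta s}$, by contrast, $h(s+\Delta)/h(s)=e^{-\delta\Delta}$ does not depend on $s$, which is exactly why exponential discounting does not generate this form of time inconsistency.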
In various continuous-time settings, \citet{Barro1999:Ramsey}, \citet{EkelandLazrak2006:BeingSeriousAboutNonCommitment}, \citet{EkelandILazrakA:07ti}, \citet{EkelandLazrak2010:GoldenRule}, \citet{EkelandPirvu2008:InvestConsptnWithoutCommit}, \citet{MarinSolanoNavas2010:ConsumptionPortfolio}, and \citet{EkelandEtal2012:TimeConsistent} study intra-personal equilibria for portfolio selection and consumption problems with present-biased preferences. \citet{EbertEtal2017:Discounting} and \citet{TWZ} study real option problems for agents with general weighted discount functions and derive equilibrium investment strategies. \citet{HarrisLaibson2013:InstantaneousGratification} and \citet{GrenadierWang2007:Investment} apply a stochastic, piecewise step discount function to a consumption problem and a real option problem, respectively, and derive intra-personal equilibrium strategies. Asset pricing for sophisticated agents with present-biased preferences and without commitment has been studied by \citet{LuttmerMariotti2003:SubjectiveDiscounting} and \citet{Bjork2017:TimeInconsistent}. \subsection{Mean-Variance} A popular decision criterion in finance is mean--variance, with which an agent minimizes the variance and maximizes the mean of a certain random quantity, e.g., the wealth of a portfolio at the end of a period. Any mean--variance model is inherently time inconsistent due to the variance part. To see this, consider a two-period decision problem with dates 0, 1, and 2 for an agent. The agent is offered various options at time 1 that will yield certain payoffs at time 2. The set of options offered to the agent at time 1 depends on the outcome of a fair coin that is tossed between time 0 and 1. If the toss yields a head, the agent is offered two options at time 1: Option H1 that yields \$0 and \$200 with equal probabilities and Option H2 that yields \$50 and \$150 with equal probabilities. If the toss yields a tail, the agent is offered another two options at time 1: Option T1 that yields \$0 and \$200 with equal probabilities and Option T2 that yields \$1050 and \$1150 with equal probabilities. Suppose that at both times 0 and 1, the agent's decision criterion is to minimize the variance of the terminal payoff at time 2. At time 0, the agent has not yet observed the outcome of the toss; so she will need to make choices contingent on this outcome, i.e., she chooses between the following four plans: (H1,T1), (H1,T2), (H2,T1), and (H2,T2), where the first and second components of each of the above four plans stand for the agent's planned choice when the toss yields a head and a tail, respectively. Straightforward calculation shows that the plan (H2,T1) yields the smallest variance of the terminal payoff (the variances under (H1,T1), (H1,T2), (H2,T1), and (H2,T2) are $10{,}000$, $256{,}250$, $6{,}250$, and $252{,}500$, respectively); so at time 0 the agent plans to choose H2 when the toss yields a head and choose T1 when the toss yields a tail. At time 1, after having observed the outcome of the toss, if the agent can choose again with the objective of minimizing the variance of the terminal payoff, she would choose H2 if the outcome is a head and T2 if the outcome is a tail. Consequently, what the agent plans at time 0 is different from what is optimal for the agent at time 1, resulting in time inconsistency. The reason for the time inconsistency above can be seen from the following conditional variance formula: $\mathrm{var}(X) = \mathbb{E}[\mathrm{var}(X|Y)] + \mathrm{var}(\mathbb{E}[X|Y])$, where $X$ stands for the terminal payoff and $Y$ denotes the outcome of the coin toss.
At time 0, the agent's objective is to minimize $\mathrm{var}(X)$ and at time 1, her objective is to minimize $\mathrm{var}(X|Y)$. Although the plan (H2,T2) yields small variance of $X$ given the outcome of the toss $Y$ and thus a small value of the average conditional variance $\mathbb{E}[\mathrm{var}(X|Y)]$ ($2{,}500$, as opposed to $6{,}250$ under (H2,T1)), it yields very different expected payoffs conditional on having a head and on having a tail, leading to a large value of $\mathrm{var}(\mathbb{E}[X|Y])$ ($250{,}000$, as opposed to $0$ under (H2,T1)). Consequently, $\mathrm{var}(X)$ under plan (H2,T2) is larger than under plan (H2,T1), which yields a larger value of $\mathbb{E}[\mathrm{var}(X|Y)]$ than the former but a much smaller value of $\mathrm{var}(\mathbb{E}[X|Y])$. Hence, (H2,T1) is preferred to (H2,T2) for the agent at time 0. Many recent works study intra-personal equilibrium investment strategies for agents with mean-variance preferences. For continuous-time models, see for instance \citet{BasakSChabakauri2010:DynamicMeanVariance}, \citet{BjorkEtal2011:MeanVariancewithStateDependentRiskAversion}, \citet{Pun2018:TimeConsistentMV}, \citet{BensoussanEtal2014:TimeConsistent}, \citet{CuiEtal2016:ContinuousTime}, \citet{SunEtal2016:Precommitment}, \citet{Landriault2018:EquilibriumStrategies}, \citet{BensoussanEtal2019:AParadox}, \citet{KrygerEtal2020:OptimalControl}, and \citet{HanEtal2021:RobustMV}. In all these works, the mean-variance criterion is formulated as a weighted average of the mean and variance of wealth at a terminal time, i.e., at each time $t$, the agent's objective is to maximize $\mathbb{E}_t[X]-\frac{\gamma_t}{2}\mathrm{var}_t(X)$, where $\mathbb{E}_t$ and $\mathrm{var}_t$ stand for the conditional mean and variance of the terminal wealth $X$, respectively, and $\gamma_t$ is a risk aversion parameter. Alternatively, \citet{HeJiang2017:DynamicMeanRisk} and \citet{HeJiang2020:DynamicMVFractionalKelly} study intra-personal equilibria for mean-variance investors in a constrained formulation: at each time, an investor minimizes the variance of terminal wealth subject to a target constraint on the expected terminal wealth. \citet{DaiEtal2017:RoboAdvising} consider a mean-variance model for log returns. \citet{HuEtal2012:TimeInconsistentStochasticLQ}, \citet{HuEtal2017:TimeInconsisten}, \citet{Czichowsky2013:TimeConsistent}, and \citet{YanWong2020:OpenLoop} investigate open-loop intra-personal equilibria for mean-variance portfolio selection problems. For equilibrium mean-variance insurance strategies, see for instance \citet{ZengLi2011:OptimalTimeConsistent}, \citet{LiEtal2012:OptimalTimeConsistent}, \citet{ZengEtal2013:TimeConsistentInvestment}, \citet{LiangSong2015:TimeConsistent}, and \citet{BiCai2019:OptimalInvestment}. \subsection{Non-EUT Preferences} There is abundant empirical and experimental evidence showing that when making choices under uncertainty, individuals do not maximize expected utility (EU); see for instance the survey by \citet{StarmerC:00neut}. Various alternatives to the EU model, which are generally referred to as {\em non-EU} models, have been proposed in the literature. Some of these models employ probability weighting functions to describe the tendency of overweighing extreme outcomes that occur with small probabilities, examples being prospect theory (PT) \citep{KahnemanDTverskyA:79pt,TverskyKahneman1992:CPT} and rank-dependent utility (RDU) theory \citep{QuigginJ:82rd}.
It has been noted that when applied to dynamic choice problems, non-EU models can lead to time inconsistency; see \citet{Machina1989:DynamicConsistency} for a review of early works discussing this issue. For illustration, consider a casino gambling problem studied by \citet{Barberis2012:Casino}: a gambler is offered 10 independent bets with equal probabilities of winning and losing \$1, plays these bets sequentially, and decides when to stop playing. Suppose at each time, the gambler's objective is to maximize the preference value of the payoff at the end of the game and the preferences are represented by a non-EU model involving a probability weighting function. We represent the cumulative payoff of playing the bets by a binomial tree with up and down movements standing for winning and losing, respectively. At time 0, the topmost state (TMS) of the tree at $t=10$ represents the largest possible payoff achievable, and the probability of reaching this state is extremely small ($2^{-10}$). The gambler overweighs this state due to probability weighting and aspires to reach it. Hence, at time 0, her plan is to play the 10-th bet if and when she has won all the previous 9 bets. Now, suppose she has played and indeed won the first 9 bets. If she has a chance to reconsider her decision of whether to play the 10-th bet at {\it that} time, she may find it no longer favorable to play because the probability of reaching the TMS at time 10 is 1/2 and thus this state is not overweighed. Consequently, when deciding whether to play the 10-th bet conditional on having won the first 9 bets, the gambler may choose differently when she is at time 0 and when she is at time 9, showing time inconsistency. In a continuous-time, complete market, \citet{HuEtal2020:ConsistentInvestmentRDU} study a portfolio selection problem in which an agent maximizes the following RDU of her wealth $X$ at a terminal time: \begin{align}\label{eq:RDU} \int_{\mathbb{R}} u(x)\, \mathrm{d}\big[-w\big(1-F_X(x)\big)\big], \end{align} where $u$ is a utility function, $w$ is a probability weighting function, and $F_X$ is the cumulative distribution function of $X$. The authors derive an open-loop intra-personal equilibrium and show that it is of the same form as in the classical Merton model but with a properly scaled market price of risk. \citet{HeEtal2019:MedianMaximization} consider median and quantile maximization for portfolio selection, where the objective function, namely the quantile of the terminal wealth, can be regarded as a special case of RDU with a particular probability weighting function $w$. The authors study closed-loop intra-personal equilibria and find that an affine trading strategy is an equilibrium if and only if it is a portfolio insurance strategy. \citet{EbertStrack2016:NeverEverGettingStarted} consider the optimal time to stop a diffusion process with the objective of maximizing the value of the process at the stopping time under a PT model. Using the notion of mild intra-personal equilibrium discussed in Section \ref{sec:Stopping}, the authors show that under reasonable assumptions on the probability weighting functions, the only equilibrium among all two-threshold stopping rules is to immediately stop. \citet{HuangEtal2017:StoppingBehaviors} study mild intra-personal equilibrium stopping rules for an agent who wants to stop a geometric Brownian motion with the objective of maximizing the RDU value at the stopping time.
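The magnitude of the overweighting in the casino example above is easy to quantify. The following snippet (ours, purely illustrative) evaluates the weighting function of \citet{TverskyKahneman1992:CPT} at the two relevant probabilities; the parameter value is a common estimate for gains and is not taken from \citet{Barberis2012:Casino}, which works with a fully specified cumulative prospect theory model.
\begin{verbatim}
# Illustration: an inverse-S probability weighting function overweights
# the rare topmost state at time 0 but not the 50/50 state at time 9.
def w(p, gamma=0.61):   # Tversky-Kahneman (1992) weighting function
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

p_top = 2.0**-10        # reach the TMS: win all 10 bets, seen from time 0
print(w(p_top))         # ~0.014, about 15 times the probability 0.00098
print(w(0.5))           # ~0.42, no overweighting of the 50/50 state
\end{verbatim}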
Risk measures, such as value-at-risk (VaR) and conditional value-at-risk (CVaR), can also be considered to be non-EU models leading to time inconsistency. There are, however, few studies on intra-personal equilibria for mean-risk models in continuous time. For relevant studies in discrete-time settings, see for instance \citet{CuiEtal2019:TimeConsistent}. Models with Knightian uncertainty or ambiguity can also result in time inconsistency. For example, the $\alpha$-maxmin model proposed by \citet{GhirardatoEtal2004:DifferentiatingAmbiguity} is dynamically inconsistent in general; see for instance \citet{BeissnerEtal2016:DynamicallyConsistent}. \citet{LiEtal2019:EquilibriumStrategies} find an open-loop intra-personal equilibrium investment strategy for an agent with $\alpha$-maxmin preferences. \citet{HuangYu2021:OptimalStopping} consider a problem of stopping a one-dimensional diffusion process with preferences represented by the $\alpha$-maxmin model and study the mild intra-personal equilibrium stopping rule for the problem. \section{Dynamically Consistent Preferences}\label{sec:DynamicConsistent} \citet{Machina1989:DynamicConsistency} notes that, in many discussions of time inconsistency in the literature, a hidden assumption is {\em consequentialism}: at any intermediate time $t$ of a dynamic decision process, the agent employs the {\em same} preference model as used at the initial time to evaluate the choices in the {\em continuation} of the dynamic decision process from time $t$, conditional on the circumstances at time $t$. For example, consider a dynamic consumption problem for an agent with present-bias preferences and suppose that at the initial time 0, the agent's preference value for a consumption stream $(C_s)_{s\ge 0}$ is represented by $\mathbb{E}[\int_0^\infty h(s)u(C_s)ds]$, where the discount function $h$ models the agent's time preferences at the initial time 0 and $u$ is the agent's utility function. The consequentialism assumption implies that at any intermediate time $t$, the agent's preferences for the continuation of the consumption stream, i.e., $(C_s)_{s\ge t}$, are represented by the same preference model as at the initial time 0, conditional on the situations at time $t$, i.e., by $\mathbb{E}_t[\int_t^\infty h(s-t)u(C_s)ds]$, where the discount function $h$ and the utility function $u$ are the same as in the preference model at the initial time 0. Similarly, for a dynamic choice problem with RDU preferences for the payoff at a terminal time, the consequentialism assumption stipulates that the agent uses the same utility function $u$ and probability weighting function $w$ at all intermediate times $t$ when evaluating the terminal payoff at those times. The consequentialism assumption, however, has not been broadly validated because there are few experimental or empirical studies on how individuals dynamically update their preferences. \citet{Machina1989:DynamicConsistency} considers a class of non-EU maximizers, referred to as $\gamma$-people, who adjust their preferences dynamically over time so as to remain time consistent. The idea in \citet{Machina1989:DynamicConsistency} was further developed by \citet{KarnamEtal2016:DynamicApproaches}, who propose the notion of {\em time-consistent dynamic preference models}.
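The role of exponential discounting here can be made precise by a short calculation, a classical observation recalled for completeness. Under consequentialism, the trade-off that the agent at time $t$ perceives between consumption at two future dates $s_1<s_2$ is
\begin{align*}
\frac{h(s_2-t)}{h(s_1-t)},
\end{align*}
and time consistency requires this ratio to be independent of $t$. Writing $g=h/h(0)$, this forces $g(s+\Delta)=g(s)g(\Delta)$ for all $s,\Delta\ge 0$, whose only continuous solutions are $g(s)=e^{-\rho s}$; hence any non-exponential discount function, such as a present-biased hyperbolic one, necessarily generates time inconsistency under consequentialism.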
The idea of considering time-consistent dynamic preferences is also central in the theory of forward performance criteria proposed and developed by \citet{MusielaZariphopoulou06,MusielaZariphopoulou08, MusielaZariphopoulou09,MusielaZariphopoulou10a, MusielaZariphopoulou10b,MusielaZariphopoulou11}; see also \citet{HeEtal2019:ForwardRDU} for a related discussion. Formally, consider a dynamic choice problem in a period $[0,T)$. A preference model at time 0 is specified for an agent, denoted as $J_0(u(\cdot))$, where $(u(s))_{s\in [0,T)}$ denotes the agent's dynamic choice. A family of dynamic preference models $J_t,\;t\in (0,T)$, are called time-consistent for the initial model $J_0$ if the optimal strategy under $J_0$, namely, the pre-committed strategy for the agent at time 0, is also optimal under $J_t$ for the agent at any future time $t\in (0,T)$. Note that given the pre-committed strategy at time 0, we can always find preference models at $t>0$ such that this strategy remains optimal. Thus, a more interesting question is whether we can find a family of time-consistent dynamic preference models that are of the same type as the initial preference model. \citet{HeEtal2019:ForwardRDU} study portfolio selection in the Black-Scholes market for an agent whose initial preference model for wealth at a terminal time is represented by RDU. The authors show that there exists a family of time-consistent dynamic RDU models if and only if (i) the probability weighting function in the initial model belongs to a parametric class of functions proposed by \citet{Wang1996:PremiumCalculation}; and (ii) the parameter of the probability weighting function, the absolute risk aversion index of the utility function, and the market price of risk must be coordinated with each other over time in a specific way. \citet{CuiEtAl2012:Better}, \citet{KarnamEtal2016:DynamicApproaches}, and \citet{HeJiang2020:DynamicMVFractionalKelly} find that mean-variance models become time consistent if the dynamic trade-off between the mean and variance over time is set properly. For mean-CVaR models, in which an agent maximizes the mean and minimizes the CVaR at a certain confidence level, \citet{PflugPichler2016:TimeInconsistent} and \citet{StrubEtal2017:DiscreteTimeMeanCVaR} note, in discrete-time settings, that time consistency is retained as long as the trade-off between the mean and CVaR and the confidence level evolve dynamically in a certain way. The problem of intra-personal equilibria and that of dynamically consistent preferences can be considered primal--dual to each other: the former finds equilibrium strategies given the time-inconsistent preferences, whereas the latter identifies preferences under which the problem becomes time consistent. Diving deeper into this relationship may call for innovative mathematical analysis and result in profound economic insights. \begin{acknowledgement} Xue Dong He gratefully acknowledges financial support through the General Research Fund of the Research Grants Council of Hong Kong SAR (Project No. 14200917). Xun Yu Zhou gratefully acknowledges financial support through a start-up grant at Columbia University and through the Nie Center for Intelligent Asset Management. This author also thanks the hospitality of The Chinese University of Hong Kong during the summer of 2020 when the present project started. \end{acknowledgement}
{ "timestamp": "2021-05-06T02:08:05", "yymm": "2105", "arxiv_id": "2105.01829", "language": "en", "url": "https://arxiv.org/abs/2105.01829" }
\section{Introduction} The theory of Hom-algebras was initiated in \cite{HartwigLarSil:defLiesigmaderiv, LarssonSilvJA2005:QuasiHomLieCentExt2cocyid,LarssonSilv:quasiLiealg}, motivated by quasi-deformations of Lie algebras of vector fields, in particular $q$-deformations of Witt and Virasoro algebras. Hom-Lie algebras and more general quasi-Hom-Lie algebras were first introduced by Hartwig, Larsson and Silvestrov in \cite{HartwigLarSil:defLiesigmaderiv}, where a general approach to discretization of Lie algebras of vector fields using general twisted derivations ($\sigma$-deriva\-tions) and a general method for construction of deformations of Witt and Virasoro type algebras based on twisted derivations have been developed. The general quasi-Lie algebras, containing the quasi-Hom-Lie algebras and Hom-Lie algebras as subclasses, as well as their graded color generalizations, the color quasi-Lie algebras, including color quasi-Hom-Lie algebras, color Hom-Lie algebras and their special subclasses the quasi-Hom-Lie superalgebras and Hom-Lie superalgebras, were first introduced in \cite{HartwigLarSil:defLiesigmaderiv,LarssonSilvJA2005:QuasiHomLieCentExt2cocyid,LarssonSilv:quasiLiealg,LSGradedquasiLiealg,LarssonSilv:quasidefsl2,SigSilv:CzechJP2006:GradedquasiLiealgWitt}. Subsequently, various classes of Hom-Lie admissible algebras have been considered in \cite{ms:homstructure}. In particular, in \cite{ms:homstructure}, Hom-associative algebras have been introduced and shown to be Hom-Lie admissible, that is, to lead to Hom-Lie algebras when the commutator is taken as a new product, and in this sense they constitute a natural generalization of associative algebras as Lie admissible algebras leading to Lie algebras via the commutator. Furthermore, in \cite{ms:homstructure}, more general $G$-Hom-associative algebras including Hom-associative algebras, Hom-Vinberg algebras (Hom-left symmetric algebras), Hom-pre-Lie algebras (Hom-right symmetric algebras), and some other Hom-algebra structures, generalizing $G$-associative algebras, Vinberg and pre-Lie algebras respectively, have been introduced and shown to be Hom-Lie admissible, meaning that for these classes of Hom-algebras the operation of taking commutator leads to Hom-Lie algebras as well. Also, flexible Hom-algebras have been introduced, connections to Hom-algebra generalizations of derivations and of adjoint maps have been noticed, and some low-dimensional Hom-Lie algebras have been described. In Hom-algebra structures, the defining algebra identities are twisted by linear maps. Since the pioneering works \cite{HartwigLarSil:defLiesigmaderiv,LarssonSilvJA2005:QuasiHomLieCentExt2cocyid, LarssonSilv:quasiLiealg,LSGradedquasiLiealg,LarssonSilv:quasidefsl2,ms:homstructure}, Hom-algebra structures have developed into a broad and popular area with an increasing number of publications in various directions.
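For orientation, we recall the prototypical example of a twisted derivation behind these constructions (a standard example from this literature, not a new result): on the polynomial algebra $A=\mathbb{K}[t]$, let $\sigma$ be the algebra endomorphism determined by $\sigma(t)=qt$ for some $q\in\mathbb{K}\setminus\{0,1\}$. The Jackson $q$-derivative
\begin{align*}
(D_q f)(t)=\frac{f(qt)-f(t)}{(q-1)t}
\end{align*}
is a $\sigma$-derivation, satisfying the twisted Leibniz rule $D_q(fg)=D_q(f)\,g+\sigma(f)\,D_q(g)$, and it is from such $\sigma$-derivations that the $q$-deformations of Witt and Virasoro algebras mentioned above are constructed.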
Hom-algebra structures include their classical counterparts as special cases and open new broad possibilities for deformations. Hom-algebra extensions of representations, homology, cohomology and formal deformations, Hom-modules and Hom-bimodules, Hom-Lie admissible Hom-coalgebras, Hom-coalgebras, Hom-bialgebras, Hom-Hopf algebras, $L$-modules, $L$-comodules and Hom-Lie quasi-bialgebras, $n$-ary generalizations of BiHom-Lie algebras and BiHom-associative algebras and generalized derivations, Rota-Baxter operators, Hom-dendriform color algebras, Rota-Baxter bisystems and covariant bialgebras, Rota-Baxter cosystems, coquasitriangular mixed bialgebras, coassociative Yang-Baxter pairs, the coassociative Yang-Baxter equation and generalizations of Rota-Baxter systems and algebras, curved $\mathcal{O}$-operator systems and their connections with tridendriform systems and pre-Lie algebras, BiHom-algebras, BiHom-Frobenius algebras and double constructions, infinitesimal BiHom-bialgebras and Hom-dendriform $D$-bialgebras have been considered in \cite{AbdaouiMabroukMakhlouf, AmmarEjbehiMakhlouf:homdeformation, AttanLaraiedh:2020ConstrBihomalternBihomJordan, Bakayoko:LaplacehomLiequasibialg, Bakayoko:LmodcomodhomLiequasibialg, BakBan:bimodrotbaxt, BakyokoSilvestrov:HomleftsymHomdendicolorYauTwi, BakyokoSilvestrov:MultiplicnHomLiecoloralg, BenMakh:Hombiliform, BenAbdeljElhamdKaygorMakhl201920GenDernBiHomLiealg, CaenGoyv:MonHomHopf, ChtiouiMabroukMakhlouf1, ChtiouiMabroukMakhlouf2, DassoundoSilvestrov2021:NearlyHomass, EbrahimiFardGuo08, GrMakMenPan:Bihom1, HassanzadehShapiroSutlu:CyclichomolHomasal, HounkonnouDassoundo:centersymalgbialg, HounkonnouHoundedjiSilvestrov:DoubleconstrbiHomFrobalg, HounkonnouDassoundo:homcensymalgbialg, kms:narygenBiHomLieBiHomassalgebras2020, Laraiedh1:2021:BimodmtchdprsBihomprepois, LarssonSigSilvJGLTA2008, LarssonSilvJA2005:QuasiHomLieCentExt2cocyid, LarssonSilv:quasidefsl2, LarssonSigSilvJGLTA2008:QuasiLiedefFttN, LarssonSilvestrovGLTMPBSpr2009:GenNComplTwistDer, MaMakhSil:CurvedOoperatorSyst, MaMakhSil:RotaBaxbisyscovbialg, MaMakhSil:RotaBaxCosyCoquasitriMixBial, MaZheng:RotaBaxtMonoidalHomAlg, MabroukNcibSilvestrov2020:GenDerRotaBaxterOpsnaryHomNambuSuperalgs, Makhl:HomaltHomJord, Makhlouf2010:ParadigmnonassHomalgHomsuper, MakhloufHomdemdoformRotaBaxterHomalg2011, MakhSil:HomHopf, MakhSilv:HomDeform, MakhSilv:HomAlgHomCoalg, MakYau:RotaBaxterHomLieadmis, RichardSilvestrovJA2008, RichardSilvestrovGLTbnd2009, SaadaouSilvestrov:lmgderivationsBiHomLiealgebras, ShengBai:homLiebialg, Sheng:homrep, SigSilv:GLTbdSpringer2009, SilvestrovParadigmQLieQhomLie2007, SilvestrovZardeh2021:HNNextinvolmultHomLiealg, QSunHomPrealtBialg, Yau:ModuleHomalg, Yau:HomEnv, Yau:HomHom, Yau:HombialgcomoduleHomalg, Yau:HomYangBaHomLiequasitribial, YauHomMalcevHomalternHomJord}. In this paper, we introduce and give some results on constructions of BiHom-X algebras using Yau's twisting, Rota-Baxter operators and elements of centroids. The bimodules of BiHom-left symmetric dialgebras, BiHom-associative dialgebras and BiHom-tridendriform algebras are defined, and it is shown that sequences of such bimodules can be constructed. Their matched pairs are also introduced and related relevant properties are given. In Section \ref{sec:Yaustwistinggeneralization}, we provide some results on constructions of BiHom-X algebras.
Section \ref{sec:homleftsymcolordialg} contains definitions and some key results about bimodules of BiHom-associative algebras and BiHom-left-symmetric algebras, and matched pairs of BiHom-left symmetric and BiHom-associative dialgebras. In Section \ref{sec:homtridendriformcoloralgebras}, devoted to bimodules of BiHom-tridendriform algebras, definitions and some constructions of BiHom-dendriform and BiHom-tridendriform algebras are given, and the concepts of bimodules and matched pairs of BiHom-tridendriform algebras are investigated. \section{Constructions of BiHom-X algebras} \label{sec:Yaustwistinggeneralization} Throughout this paper, all vector spaces are assumed to be over a field $\mathbb{K}$ of characteristic different from $2$. In this section, we provide some results on constructions of BiHom-X algebras. \begin{defn} A BiHom-algebra is an $(n+3)$-tuple $(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ in which $A$ is a linear space, $\mu_i : A\otimes A \rightarrow A$ $(i=1, \dots, n)$ are bilinear maps, and $\alpha,\beta : A\rightarrow A$ are linear maps, called the twisting maps. If in addition, $$\alpha\circ\mu_i=\mu_i\circ(\alpha\otimes \alpha),~\beta\circ\mu_i=\mu_i\circ(\beta\otimes \beta), \quad (i=1, \dots, n),$$ the BiHom-algebra $(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ is said to be multiplicative. \end{defn} \begin{defn} Let $(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ be a BiHom-algebra. Then \begin{enumerate} \item A BiHom-subalgebra of $(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ is a linear subspace $H$ of $A$ which is closed under the multiplications $\mu_i$ $(i=1, \dots, n)$ and invariant under $\alpha$ and $\beta$, that is, $\mu_i(x,y)\in H,~\alpha(x)\in H$ and $\beta(x)\in H$ for all $x,y\in H$. If furthermore $\mu_i(x,y)\in H$ and $\mu_i(y,x)\in H$ for all $(x,y)\in A\times H,$ then $H$ is called a two-sided BiHom-ideal of $A$. \item $(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ is said to be regular if $\alpha$ and $\beta$ are algebra automorphisms. \item $(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ is said to be involutive if $\alpha$ and $\beta$ are two involutions, that is $\alpha^2=\beta^2=id$. \end{enumerate} \end{defn} \begin{defn} Let $(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ and $(A', \mu'_1, \dots, \mu'_n,\alpha',\beta')$ be two BiHom-algebras. Then a linear map $f:A\longrightarrow A^{'}$ is said to be a morphism of BiHom-algebras if the following conditions hold: $$\begin{array}{llll}&&f \circ \mu_i= \mu_i^{'} \circ(f \otimes f),~\forall i=1,\dots,n,\\&&f\circ \alpha=\alpha^{'} \circ f,\\&&f\circ \beta=\beta^{'}\circ f,\end{array} $$ as illustrated, respectively, by the following commutative diagrams: $$ \xymatrix{ A\otimes A \ar[d]_{f\otimes f }\ar[rr]^{\mu_i} && A \ar[d]^{f} \\ A'\otimes A' \ar[rr]^{\mu'_i} && A' },\quad \xymatrix{ A \ar[d]_{ f }\ar[rr]^{\alpha} && A \ar[d]^{f} \\ A' \ar[rr]^{\alpha'} && A' },\quad \xymatrix{ A \ar[d]_{ f }\ar[rr]^{\beta} && A \ar[d]^{f} \\ A' \ar[rr]^{\beta'} && A', } $$ for all $i=1,\dots,n$. \end{defn} Denote by $\Gamma_{f}=\{x+f(x);~~x\in A\}\subset A\oplus A'$ the graph of a linear map $f:A\longrightarrow A^{'}$. \begin{defn} A BiHom-algebra $(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ is called a BiHom-X algebra if the axioms defining the structure of $X$ are linear combinations of terms of the form $\mu_j\circ(\mu_i\otimes \beta)$ or $\mu_j\circ(\alpha\otimes\mu_i)$. \end{defn} \begin{prop}\label{bylv} Let $(A,\mu_1^{A}, \dots, \mu_n^{A},\alpha_{1},\beta_{1})$ and $(B,\mu_1^{B}, \dots, \mu_n^{B},\alpha_{2},\beta_{2})$ be BiHom-X algebras.
Then, there is a BiHom-X algebra $(A\oplus B, \mu_1^{A\oplus B},\dots, \mu_n^{A\oplus B}, \alpha,\beta),$ where for all $i=1,\dots,n,$ the bilinear maps $\mu_{i}^{A\oplus B}:(A\oplus B)^{\times 2}\longrightarrow (A\oplus B)$ are given by $$ \mu_{i}^{A\oplus B}(a_1+b_1,a_2+b_2):=\mu_{i}^A(a_1,a_2)+\mu_{i}^B (b_1,b_2), \quad \forall \ a_1,a_2\in A,\ \forall \ b_1,b_2\in B, $$ and the linear maps $\alpha$ and $\beta$ are given, for all $(a,b)\in A\times B$, by $$ \begin{array}{lll} \alpha(a+b)&:=& \alpha_{1}(a)+\alpha_{2}(b),\\ \beta(a+b)&:=& \beta_{1}(a)+\beta_{2}(b). \end{array}$$ \end{prop} \begin{proof} For any $a_1,b_1,c_1\in A$, $a_2,b_2,c_2\in B$ and $1\leq i,j \leq n$, \begin{multline*} \mu_{i}^{A\oplus B}(\mu_{j}^{A\oplus B}(a_1+a_2,b_1+b_2),\beta(c_1+c_2))= \\ \mu_{i}^{A\oplus B}(\mu_{j}^{A}(a_1,b_1)+\mu_{j}^{B}(a_2,b_2),\beta_1(c_1)+\beta_2(c_2))\\ =\mu_{i}^{A}(\mu_{j}^{A}(a_1,b_1),\beta_1(c_1))+\mu_{i}^{B}(\mu_{j}^{B}(a_2,b_2),\beta_2(c_2)). \end{multline*} Similarly, \begin{align*} \mu_{i}^{A\oplus B}(\alpha(a_1+a_2),&\mu_{j}^{A\oplus B}(b_1+b_2,c_1+c_2))= \\ &\mu_{i}^{A}(\alpha_1(a_1),\mu_{j}^{A}(b_1,c_1))+\mu_{i}^{B}(\alpha_2(a_2),\mu_{j}^{B}(b_2,c_2)). \qedhere \end{align*} \end{proof} \begin{prop} Let $(A,\mu_{1}^A,\dots,\mu_{n}^A,\alpha_{1},\beta_{1})$ and $(B,\mu_{1}^{B},\dots,\mu_{n}^{B},\alpha_{2},\beta_{2})$ be BiHom-X algebras. Then a linear map $\varphi: A\rightarrow B$ is a morphism from the BiHom-X algebra $(A,\mu_{1}^A,\dots,\mu_{n}^A,\alpha_{1},\beta_{1})$ to the BiHom-X algebra $(B,\mu_{1}^{B},\dots,\mu_{n}^{B},\alpha_{2},\beta_{2})$ if and only if its graph $\Gamma_{\varphi}$ is a BiHom-X subalgebra of $(A\oplus B, \mu_{1}^{A\oplus B},\dots,\mu_{n}^{A\oplus B}, \alpha_{1}\oplus\alpha_{2},\beta_{1}\oplus\beta_{2}).$ \end{prop} \begin{proof} Let $\varphi: (A,\mu_{1}^A,\dots,\mu_{n}^A,\alpha_{1},\beta_{1})\longrightarrow (B,\mu_{1}^{B},\dots,\mu_{n}^{B},\alpha_{2},\beta_{2})$ be a morphism of BiHom-X algebras. Then for all $u, v\in A$ and $1\leq i \leq n$, $$ \mu_{i}^{A\oplus B}(u+\varphi(u),v+\varphi(v))=\mu_{i}^A(u,v)+\mu_{i}^B(\varphi(u),\varphi(v))=\mu_{i}^A(u,v)+\varphi(\mu_{i}^A(u,v)).$$ Thus the graph $\Gamma_{\varphi}$ is closed under the multiplication $\mu_{i}^{A\oplus B}.$ Furthermore, $\varphi\circ\alpha_{1}=\alpha_{2}\circ\varphi$ yields $(\alpha_{1}\oplus\alpha_{2})(u+\varphi(u)) = \alpha_{1}(u)+ \alpha_{2}\circ\varphi(u) = \alpha_{1}(u)+ \varphi\circ\alpha_{1}(u).$ In the same way, $(\beta_{1}\oplus\beta_{2})(u+\varphi(u)) = \beta_{1}(u)+ \beta_{2}\circ\varphi(u) = \beta_{1}(u)+ \varphi\circ\beta_{1}(u),$ which implies that $\Gamma_{\varphi}$ is closed under $\alpha_{1}\oplus\alpha_{2}$ and $\beta_{1}\oplus\beta_{2}$.
Thus $\Gamma_{\varphi}$ is a BiHom-X subalgebra of $(A\oplus B, \mu_{1}^{A\oplus B},\dots,\mu_{n}^{A\oplus B}, \alpha_{1}\oplus\alpha_{2},\beta_{1}\oplus\beta_{2}).$ Conversely, if the graph $\Gamma_{\varphi}\subset A\oplus B$ is a BiHom-X subalgebra of $$(A\oplus B, \mu_{1}^{A\oplus B},\dots,\mu_{n}^{A\oplus B}, \alpha_{1}\oplus\alpha_{2},\beta_{1}\oplus\beta_{2}),$$ then for all $1\leq i \leq n$, $$\mu_{i}^{A\oplus B}((u+ \varphi(u)), (v+ \varphi(v)))=(\mu_{i}^A (u, v) + \mu_{i}^ B(\varphi(u), \varphi(v)) )\in\Gamma_{\varphi},$$ which implies that $$\mu_{i}^ B(\varphi(u), \varphi(v))=\varphi(\mu_{i}^ A(u, v)).$$ Furthermore, $(\alpha_{1}\oplus\alpha_{2})(\Gamma_{\varphi})\subset\Gamma_{\varphi},~(\beta_{1}\oplus\beta_{2})(\Gamma_{\varphi})\subset\Gamma_{\varphi}$ implies $$(\alpha_{1}\oplus\alpha_{2})(u+\varphi(u))=(\alpha_{1}(u)+ \alpha_{2}\circ\varphi(u)) \in\Gamma_{\varphi},$$ $$(\beta_{1}\oplus\beta_{2})(u+ \varphi(u))=(\beta_{1}(u)+ \beta_{2}\circ\varphi(u)) \in\Gamma_{\varphi},$$ equivalent to the conditions $\alpha_2\circ\varphi(u)=\varphi\circ\alpha_1(u)$ and $\beta_2\circ\varphi(u)=\varphi\circ\beta_1(u)$ for all $u\in A$, that is, $\alpha_2\circ\varphi=\varphi\circ\alpha_1$ and $\beta_2\circ\varphi=\varphi\circ\beta_1.$ Therefore, $\varphi$ is a morphism of BiHom-X algebras. \end{proof} \begin{thm}\label{bk8} Let $(A_1,\mu_{1}^{A_1},\dots,\mu_{n}^{A_1},\alpha_{1},\beta_{1})$ and $(A_2,\mu_{1}^{A_2},\dots,\mu_{n}^{A_2},\alpha_{2},\beta_{2})$ be BiHom-X algebras. Then $A=A_1\otimes A_2$ is endowed with a BiHom-X algebra structure with the twisting maps $\alpha,\beta: A\rightarrow A$ and the products $\ast_i : A\otimes A\rightarrow A$ defined for any $a_1,b_1\in A_1$, $a_2,b_2\in A_2$ and $1\leq i\leq n$ by \begin{eqnarray*} \alpha(a_1\otimes a_2)&:=&\alpha_1(a_1)\otimes\alpha_2(a_2),\\ \beta(a_1\otimes a_2)&:=&\beta_1(a_1)\otimes\beta_2(a_2),\\ (a_1\otimes a_2)\ast_{i}(b_1\otimes b_2)&:=&\mu_i^{A_1}(a_1,b_1)\otimes \mu_i^{A_2}(a_2, b_2). \end{eqnarray*} \end{thm} \begin{proof} For any $a_1,b_1,c_1\in A_1$, $a_2,b_2,c_2\in A_2$ and $1\leq i,j \leq n$, \begin{multline*} ((a_1\otimes a_2)\ast_i(b_1\otimes b_2))\ast_j\beta(c_1\otimes c_2)=\\ (\mu_{i}^{A_1}(a_1,b_1)\otimes\mu_{i}^{A_{2}}(a_2,b_2))\ast_{j}(\beta_1(c_1)\otimes \beta_{2}(c_2))\\ =\mu_{j}^{A_{1}}(\mu_{i}^{A_1}(a_1,b_1),\beta_1(c_1))\otimes\mu_{j}^{A_2}(\mu_{i}^{A_{2}}(a_2,b_2),\beta_{2}(c_2)). \end{multline*} Similarly, \begin{align*} \alpha(a_1\otimes a_2)\ast_j((b_1\otimes b_2)\ast_i & (c_1\otimes c_2))= \\ &\mu_{j}^{A_{1}}(\alpha_1(a_1),\mu_{i}^{A_1}(b_1,c_1))\otimes\mu_{j}^{A_2} (\alpha_2(a_2),\mu_{i}^{A_2}(b_2,c_2)). \qedhere \end{align*} \end{proof} \begin{defn} Let $(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ be a BiHom-algebra and $k\in\mathbb{N}^*$. \begin{enumerate} \item [1)] The $k$th derived BiHom-algebra of type $1$ of $A$ is defined by \begin{eqnarray} A_1^k=(A, \mu^{(k)}_1=\mu_1\circ(\alpha^k\otimes\beta^{k}), \dots, \mu^{(k)}_n=\mu_n\circ(\alpha^k\otimes\beta^{k}),\alpha^{k+1},\beta^{k+1}). \end{eqnarray} \item [2)] The $k$th derived BiHom-algebra of type $2$ of $A$ is defined by \begin{equation} \begin{array}{rl} & A_2^k =(A, \mu^{(2^k-1)}_1=\mu_1\circ(\alpha^{2^k-1}\otimes\beta^{2^k-1}), \dots, \\ &\quad \quad \quad \quad \quad \mu^{(2^k-1)}_n =\mu_n\circ(\alpha^{2^k-1}\otimes\beta^{2^k-1}), \alpha^{2^k},\beta^{2^{k}}). \end{array} \end{equation} \end{enumerate} Note that $A_1^0=A_2^0=(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ and $A^1_1=A_2^1=(A, \mu_1\circ(\alpha\otimes \beta), \dots,\mu_n\circ(\alpha\otimes \beta), \alpha^{2},\beta^{2})$.
\end{defn} \begin{defn} A BiHom-algebra $(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ endowed with a linear map $R : A\rightarrow A$ such that $\alpha\circ R=R\circ\alpha,~~\beta\circ R=R\circ \beta$ and \begin{eqnarray} \mu_i(R(x), R(y))=R\Big(\mu_i(R(x), y)+\mu_i(x, R(y))+\lambda\mu_i(x, y)\Big),\;\; i=1, \dots, n, \label{rb} \end{eqnarray} with $\lambda\in\mathbb{K}$ and $x, y\in A$, is called a Rota-Baxter BiHom-algebra, and $R$ is called a Rota-Baxter operator of weight $\lambda$ on $A$. \end{defn} The result below allows one to obtain BiHom-X algebras from either a BiHom-X algebra or an $X$-algebra. \begin{thm}\label{gftp1} Let $(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ be a Rota-Baxter BiHom-X algebra and let $\alpha',\beta' : A\rightarrow A$ be two endomorphisms of $A$ such that any two of the maps $\alpha,\beta,\alpha',\beta'$ commute. Then, for any nonnegative integer $p$, $$A_{\alpha',\beta'}=(A, \mu_{\alpha',\beta'}^1=\mu_1\circ(\alpha'^{p}\otimes\beta'^{p}), \dots, \mu_{\alpha',\beta'}^n=\mu_n\circ(\alpha'^{p}\otimes\beta'^{p}), \alpha'^{p}\circ\alpha,\beta'^{p}\circ\beta)$$ is a Rota-Baxter BiHom-X algebra. Moreover, let $(A', \mu'_1, \dots, \mu'_n,\gamma,\delta)$ be another BiHom-X algebra and $\gamma',\delta' : A'\rightarrow A'$ be two endomorphisms such that any two of the maps $\gamma,\delta,\gamma',\delta'$ commute. If $f : A\rightarrow A'$ is a morphism of BiHom-X algebras that satisfies $f\circ\alpha'=\gamma'\circ f,~f\circ\beta'=\delta'\circ f$, then $f : A_{\alpha',\beta'}\rightarrow A'_{\gamma',\delta'}$ is also a morphism of BiHom-X algebras. \end{thm} \begin{proof} The proof of the first part follows from the following facts. For any $x, y, z\in A, \; 1\leq i, j\leq n$, \begin{eqnarray*} \mu_{\alpha',\beta'}^i(\mu_{\alpha',\beta'}^j(x, y), (\beta'^{p}\circ\beta)(z)) &=&\mu_{\alpha',\beta'}^i(\mu_{\alpha',\beta'}^j(x, y), \beta'^{p}(\beta(z)))\\ &=&\mu_i\Big(\alpha'^{p}\mu_j(\alpha'^{p}(x),\beta'^{p}( y)), \beta'^{2p}(\beta(z))\Big) \\ &=&\mu_i(\mu_j(\alpha'^{2p}(x), \alpha'^{p}\beta'^{p}(y)), \beta(\beta'^{2p}(z))) \\ &=&\mu_i(\mu_j(X, Y), \beta(Z)),\\ \mu_{\alpha',\beta'}^i(\alpha'^{p}\circ\alpha(x), \mu_{\alpha',\beta'}^j(y, z)) &=& \mu_{\alpha',\beta'}^i(\alpha(\alpha'^{p}(x)), \mu_{\alpha',\beta'}^j(y, z))\\ &=& \mu_i(\alpha'^{2p}\circ\alpha(x), \beta'^{p}(\mu_j(\alpha'^{p}(y), \beta'^{p}(z))))\\ &=& \mu_i(\alpha(\alpha'^{2p}(x)), \beta'^{p}(\mu_j(\alpha'^{p}(y), \beta'^{p}(z))))\\ &=& \mu_i(\alpha(\alpha'^{2p}(x)), \mu_j(\alpha'^{p}\beta'^{p}(y), \beta'^{2p}(z)))\\ &=& \mu_i(\alpha(X), \mu_j(Y, Z)), \end{eqnarray*} where $X=\alpha'^{2p}(x),~Y=\alpha'^{p}\beta'^{p}(y)$ and $Z=\beta'^{2p}(z)$. The Rota-Baxter identity (\ref{rb}) for $\mu_{\alpha',\beta'}^i$ is proved by \begin{multline*} \mu_{\alpha',\beta'}^i(R(x), R(y))=\mu_i(\alpha'^{p}(R(x)), \beta'^{p}(R(y))) =\mu_i(R(\alpha'^{p}(x)), R(\beta'^{p}(y)))\\ =R\Big(\mu_i(R(\alpha'^{p}(x)), \beta'^{p}(y))+\mu_i(\alpha'^{p}(x), R(\beta'^{p}(y)))+\lambda\mu_i(\alpha'^{p}(x), \beta'^{p}(y))\Big)\\ =R\Big(\mu_{\alpha',\beta'}^i(R(x), y)+\mu_{\alpha',\beta'}^i(x, R(y))+\lambda\mu_{\alpha',\beta'}^i(x, y)\Big). \end{multline*} The second assertion follows from \begin{align*} f(\mu_{\alpha',\beta'}^i(x, y))&=f(\mu_i(\alpha'^{p}(x), \beta'^{p}(y)))=\mu'_i(f(\alpha'^{p}(x)), f(\beta'^{p}(y)))\\ &=\mu'_i(\gamma'^{p}(f(x)), \delta'^{p}(f(y)))=\mu_{\gamma',\delta'}^i(f(x), f(y)). \qedhere \end{align*} \end{proof} We have the following series of consequences of Theorem \ref{gftp1}.
\begin{cor} Let $(A, \mu_1, \dots, \mu_n)$ be an $X$-algebra and $\alpha,\beta : A\rightarrow A$ be two endomorphisms of $A$. Then $A_{\alpha,\beta}= (A, \mu_1\circ(\alpha\otimes\beta), \dots, \mu_n\circ(\alpha\otimes\beta),\alpha,\beta)$ is a multiplicative BiHom-X algebra. \end{cor} \begin{proof} Apply Theorem \ref{gftp1} to the BiHom-X algebra $(A, \mu_1, \dots, \mu_n,Id,Id)$ with $\alpha'=\alpha,~\beta'=\beta$ and $p=1$. \end{proof} \begin{cor} Let $(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ be a BiHom-X algebra. Then the $k$th derived BiHom-algebra of type $1$ and the $k$th derived BiHom-algebra of type $2$ are BiHom-X algebras. \end{cor} \begin{proof} It is sufficient to take $\alpha'=\alpha,~\beta'=\beta$, and $p={k}$ and $p={2^k-1}$ respectively in Theorem \ref{gftp1}. \end{proof} Now we introduce the notion of centroids for BiHom-X algebras. \begin{defn} A centroid of a BiHom-algebra $(A, \mu_1, \dots, \mu_n,\alpha,\beta)$ is a linear map $\gamma : A\rightarrow A$ such that $\gamma\circ\alpha=\alpha\circ\gamma$, $\gamma\circ\beta=\beta\circ\gamma$ and for any $1\leq i\leq n$ and $x, y\in A,$ $$\gamma(\mu_i(x, y))=\mu_i(\gamma(x), y)=\mu_i(x, \gamma(y)).$$ \end{defn} \begin{thm} Let $(A, \mu_1, \dots, \mu_n,R, \alpha,\beta)$ be a Rota-Baxter BiHom-X algebra and $\gamma_1, \gamma_2 :A\rightarrow A$ be a pair of commuting elements of the centroid such that $$\gamma_i\circ R=R\circ\gamma_i, \quad i=1,2.$$ Define bilinear maps $\mu^i_{\gamma} : A\times A\rightarrow A, i=1, \dots, n$ for any $x, y\in A$ by $$\mu^i_{\gamma}(x, y):=\mu_i(\gamma_2\gamma_1(x), y).$$ Then $A_{\gamma_1, \gamma_2}=(A, \mu^1_\gamma, \dots, \mu^n_\gamma,R, \gamma_1\alpha,\gamma_2\beta)$ is also a Rota-Baxter BiHom-X algebra. \end{thm} \begin{proof} For any $x, y, z\in A$ and $1\leq i,j\leq n$, \begin{align*} \mu^i_\gamma(\mu^j_{\gamma}(x, y), \gamma_2\beta(z)) &= \mu^i_\gamma(\mu_j(\gamma_1\gamma_2(x),y), \gamma_2\beta(z)) \\ &=\mu_i(\gamma_1\gamma_2\mu_j(\gamma_1\gamma_2(x),y), \gamma_2\beta(z)) \\ &=\gamma^{2}_1\gamma^{3}_2\mu_i(\mu_j(x,y),\beta(z)), \\ \mu^i_\gamma(\gamma_1\alpha(x),\mu^j_{\gamma}(y, z)) &= \mu^i_{\gamma}(\gamma_1\alpha(x),\mu_j(\gamma_1\gamma_2(y),z)) \\ &=\mu_i(\gamma_1^{2}\gamma_2\alpha(x),\mu_j(\gamma_1\gamma_2(y),z)) \\ &=\gamma^{2}_1\gamma^{3}_2\mu_i(\alpha(x),\mu_j(y,z)), \\ \mu^i_\gamma(R(x), R(y)) &= \mu_i(\gamma_1\gamma_2(R(x)), R(y))=\gamma_1\gamma_2\mu_i(R(x), R(y)) \\ &=\gamma_1\gamma_2R\Big(\mu_i(R(x), y)+\mu_i(x, R(y))+\lambda\mu_i(x, y)\Big) \\ &= R\Big(\gamma_1\gamma_2\mu_i(R(x), y)+\gamma_1\gamma_2\mu_i(x, R(y))+\lambda\gamma_1\gamma_2\mu_i(x, y)\Big) \\ &= R\Big(\mu_i(\gamma_1\gamma_2R(x), y)+\mu_i(\gamma_1\gamma_2(x), R(y))+\lambda\mu_i(\gamma_1\gamma_2(x), y)\Big) \\ &= R\Big(\mu^i_\gamma(R(x), y)+\mu^i_\gamma(x, R(y))+\lambda\mu^i_\gamma(x, y)\Big) . \qedhere \end{align*} \end{proof} \section{Bimodules and matched pairs of BiHom-left symmetric and BiHom-associative dialgebras} \label{sec:homleftsymcolordialg} In this section, we recall definitions and some key results about bimodules of BiHom-associative algebras \cite{GRAZIANI} and BiHom-left-symmetric algebras \cite{BenHassineChtiouiMabroukNcib19:CohomLiedeformBiHomleftsym}. Next, we introduce the notions of BiHom-left-symmetric dialgebra and BiHom-associative dialgebra and we give some related relevant properties. Here and below, for a vector space $V$, we write $gl(V)$ for the space of all linear maps from $V$ to itself.
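Before the formal definitions, we give a minimal illustration of the kind of data involved (our example, purely for illustration): on $V=\mathbb{K}^2$, the linear maps given in a fixed basis by the diagonal matrices
\begin{align*}
\alpha_V=\begin{pmatrix}1&0\\0&2\end{pmatrix},\qquad \beta_V=\begin{pmatrix}3&0\\0&1\end{pmatrix}
\end{align*}
commute, so $(V,\alpha_V,\beta_V)$ is a BiHom-module in the sense of the definition below.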
\begin{defn} A BiHom-module is a triple $(V,\alpha_V,\beta_V)$ consisting of a $\mathbb{K}$-vector space $V$ and two linear maps $\alpha_V, \beta_V: V\longrightarrow V$ such that $\alpha_V\beta_V=\beta_V\alpha_V.$ A morphism $f: (V,\alpha_V, \beta_V)\rightarrow (W,\alpha_W,\beta_W)$ of BiHom-modules is a linear map $f: V\longrightarrow W$ such that $f\alpha_V=\alpha_W f$ and $f\beta_V=\beta_W f.$ \end{defn} \begin{defn} A BiHom-associative algebra is a quadruple $(A,\mu,\alpha,\beta)$ consisting of a vector space $A$, a bilinear operation $\mu: A\otimes A\rightarrow A$ and linear maps $\alpha, \beta: A\rightarrow A$ satisfying, for any $x, y, z \in A ,$ \begin{eqnarray} \alpha\circ\beta &=& \beta\circ\alpha,\label{aca0}\\ \alpha\circ\mu(x, y)&=&\mu(\alpha(x),\alpha(y)),\label{aca0.0}\\ \beta\circ\mu(x, y)&=&\mu(\beta(x),\beta(y)),\label{aca0.00}\\ \mu(\alpha(x), \mu(y, z))&=&\mu(\mu(x, y), \beta(z)). \label{aca} \end{eqnarray} \end{defn} \begin{rems} Clearly, there are the following connections between Hom-associative, BiHom-associative and BiHom-X algebra structures. \begin{enumerate} \item A Hom-associative algebra $(A,\mu,\alpha)$ can be regarded as a BiHom-associative algebra $(A,\mu,\alpha,\alpha)$. \item A BiHom-associative algebra is a BiHom-X algebra. \end{enumerate} \end{rems} \begin{ex} Let $\{e_1,e_2\}$ be a basis of a $2$-dimensional vector space $A$ over $\mathbb{K}$. The following multiplication $\mu$ and maps $\alpha,\beta$ on $A$ define a BiHom-associative algebra: \begin{alignat*}{4} \alpha(e_1)&=2e_1, & \qquad \alpha(e_2)&=-2ae_1+(1-a)e_2,\\ \beta(e_1)&=2e_1, & \qquad \beta(e_2)&=-ae_1+(1-a) e_2, \\ \mu(e_1,e_1)&=2e_1, & \qquad \mu(e_1,e_2)&= -ae_1+(1-a)e_2, \\ \mu(e_2,e_1)&=-2ae_1+(a-1)e_2, & \qquad \mu(e_2,e_2)&=2a^2 e_1+ae_2, \end{alignat*} where $a\in\mathbb{K}\backslash\{0\}$. \end{ex} \begin{defn} Let $(A, \cdot, \alpha_{1}, \alpha_{2})$ be a BiHom-associative algebra, and let $(V, \beta_{1}, \beta_{2})$ be a BiHom-module. Let $ l, r: A \rightarrow gl(V) $ be two linear maps. The quintuple $(l, r, \beta_{1}, \beta_{2}, V)$ is called a bimodule of $A$ if, for all $ x, y \in A, v \in V $, $$\begin{array}{llllllll} l(x\cdot y)\beta_{2}(v)&=& l(\alpha_{1}(x))l(y)v,&& r(x\cdot y)\beta_{1}(v)&=& r(\alpha_{2}(y))r(x)v,\\ l(\alpha_{1}(x))r(y)v &=& r(\alpha_{2}(y))l(x)v,&& \beta_{1}(l(x)v)&=& l(\alpha_{1}(x))\beta_{1}(v),\\ \beta_{1}(r(x)v)&=& r(\alpha_{1}(x))\beta_{1}(v),&& \beta_{2}(l(x)v) &=& l(\alpha_{2}(x))\beta_{2}(v),\\ \beta_{2}(r(x)v)&=& r(\alpha_{2}(x))\beta_{2}(v). \end{array}$$ \end{defn} \begin{prop} Let $(l, r, \beta_{1}, \beta_{2}, V)$ be a bimodule of a BiHom-associative algebra $(A, \cdot, \alpha_{1}, \alpha_{2})$. Then, the direct sum $A \oplus V$ of vector spaces is a BiHom-associative algebra with multiplication in $A\oplus V $, defined for all $ x_{1}, x_{2} \in A, v_{1}, v_{2} \in V$, by \begin{eqnarray*} (x_{1} + v_{1}) \ast (x_{2} + v_{2})&:=& x_{1} \cdot x_{2} + (l(x_{1})v_{2} + r(x_{2})v_{1}),\cr (\alpha_{1}\oplus\beta_{1})(x_{1} + v_{1})&:=& \alpha_{1}(x_{1}) + \beta_{1}(v_{1}),\cr (\alpha_{2}\oplus\beta_{2})(x_{1} + v_{1})&:=& \alpha_{2}(x_{1}) + \beta_{2}(v_{1}). \end{eqnarray*} \end{prop} We denote such a BiHom-associative algebra by $(A \oplus V, \ast, \alpha_{1} + \beta_{1}, \alpha_{2} + \beta_{2}),$ or $A \times_{l, r, \alpha_{1}, \alpha_{2}, \beta_{1}, \beta_{2}} V.$ \begin{ex} Let $(A,\cdot,\alpha,\beta)$ be a BiHom-associative algebra.
Then $(L,0,\alpha,\beta,A)$, $(0,R,\alpha,\beta,A)$ and $(L,R,\alpha,\beta,A)$ are bimodules of $(A,\cdot,\alpha,\beta)$, where $L(a)b=a\cdot b$ and $R(a)b=b\cdot a$ for all $a,b\in A$. \end{ex} \begin{thm}[\cite{HounkonnouHoundedjiSilvestrov:DoubleconstrbiHomFrobalg}] \label{matched ass} Let $(A,\cdot_A,\alpha_1,\alpha_2)$ and $(B,\cdot_B,\beta_1,\beta_2)$ be two BiHom-associative algebras. Suppose that there are linear maps $l_A,r_A:A\rightarrow gl(B)$ and $l_B,r_B:B\rightarrow gl(A)$ such that $(l_A,r_A,\beta_1,\beta_2,B) \ \mbox{is a bimodule of}\ A, (l_B,r_B,\alpha_1,\alpha_2,A) \ \mbox{is a bimodule of}\ B,$ and for any $x,y\in A,~a,b\in B$, \begin{align} \label{3} l_A(\alpha_1(x))(a\cdot_B b)&=l_A(r_B(a)x)\beta_2(b)+(l_A(x)a)\cdot_B\beta_2(b), \\ \label{4} r_A(\alpha_2(x))(a\cdot_B b)&=r_A(l_B(b)x)\beta_1(a)+\beta_1(a)\cdot_B(r_A(x)b), \\ \label{5} l_A(l_B(a)x)\beta_2(b)&+(r_A(x)a)\cdot_B\beta_2(b)\nonumber \\ &-r_A(r_B(b)x)\beta_1(a) -\beta_1(a)\cdot_B(l_A(x)b)=0, \\ \label{6} l_B(\beta_1(a))(x\cdot_A y)&=l_B(r_A(x)a)\alpha_2(y)+(l_B(a)x)\cdot_A\alpha_2(y), \\ \label{7} r_B(\beta_2(a))(x\cdot_A y)&=r_B(l_A(y)a)\alpha_1(x)+\alpha_1(x)\cdot_A(r_B(a)y), \\ \label{8} l_B(l_A(x)a)\alpha_2(y)&+(r_B(a)x)\cdot_A\alpha_2(y)\nonumber \\ &-r_B(r_A(y)a)\alpha_1(x)-\alpha_1(x)\cdot_A(l_B(a)y)=0. \end{align} Then $(A,B,l_A,r_A,\beta_1,\beta_2,l_B,r_B,\alpha_1,\alpha_2)$ is called a matched pair of BiHom-associative algebras. In this case, there is a BiHom-associative algebra structure on the direct sum $A\oplus B$ of the underlying vector spaces of $A$ and $B$ given by $$\begin{array}{llllll} (x + a) \cdot (y + b)&:=&x \cdot_A y + (l_A(x)b + r_A(y)a)+a \cdot_B b + (l_B(a)y + r_B(b)x),\cr (\alpha_{1}\oplus\beta_{1})(x + a)&:=&\alpha_{1}(x) + \beta_{1}(a),\cr (\alpha_{2}\oplus\beta_{2})(x + a)&:=&\alpha_{2}(x) + \beta_{2}(a). \end{array}$$ \end{thm} \begin{proof} For any $x,y,z\in A$ and $a,b,c\in B$, \begin{align*} &(\alpha_1+\beta_1)(x+a)\cdot((y+b)\cdot(z+c))\\ &\quad =(\alpha_1(x)+\beta_1(a))\cdot\big(y\cdot_A z+l_B(b)z+r_B(c)y+b\cdot_B c+l_A(y)c+r_A(z)b\big)\\ &\quad =\alpha_1(x)\cdot_A(y\cdot_A z)+\alpha_1(x)\cdot_A l_B(b)z+\alpha_1(x)\cdot_A r_B(c)y+l_B(\beta_1(a))(y\cdot_A z)\\ &\quad \quad +l_B(\beta_1(a))l_B(b)z+l_B(\beta_1(a))r_B(c)y+r_B(b\cdot_B c)\alpha_1(x)+r_B(l_A(y)c)\alpha_1(x)\\ &\quad \quad +r_B(r_A(z)b)\alpha_1(x)+\beta_1(a)\cdot_B(b\cdot_B c)+\beta_1(a)\cdot_B l_A(y)c+\beta_1(a)\cdot_B r_A(z)b\\ &\quad \quad +l_A(\alpha_1(x))(b\cdot_B c)+l_A(\alpha_1(x))l_A(y)c+l_A(\alpha_1(x))r_A(z)b\\ &\quad \quad +r_A(y\cdot_A z)\beta_1(a)+r_A(l_B(b)z)\beta_1(a)+r_A(r_B(c)y)\beta_1(a), \\ &((x+a)\cdot(y+b))\cdot(\alpha_2+\beta_2)(z+c)\\ &\quad=(x\cdot_A y+l_B(a)y+r_B(b)x+a\cdot_B b+l_A(x)b+r_A(y)a)\cdot(\alpha_2(z)+\beta_2(c))\\ &\quad=(x\cdot_A y)\cdot_A\alpha_2(z)+l_B(a)y\cdot_A\alpha_2(z)+r_B(b)x\cdot_A\alpha_2(z)+l_B(a\cdot_B b)\alpha_2(z)\\ &\quad\quad+l_B(l_A(x)b)\alpha_2(z)+l_B(r_A(y)a)\alpha_2(z)+r_B(\beta_2(c))(x\cdot_A y)+r_B(\beta_2(c))l_B(a)y\\ &\quad\quad +r_B(\beta_2(c))r_B(b)x+(a\cdot_B b) \cdot_B\beta_2(c)+(l_A(x)b)\cdot_B\beta_2(c)+(r_A(y)a)\cdot_B\beta_2(c)\\ &\quad\quad+r_A(\alpha_2(z))(a\cdot_B b)+r_A(\alpha_2(z))(l_A(x)b) +r_A(\alpha_2(z))(r_A(y)a)+l_A(x\cdot_A y)\beta_2(c)\\ &\quad\quad+l_A(l_B(a)y)\beta_2(c)+l_A(r_B(b)x)\beta_2(c). \end{align*} Then \eqref{aca} and \eqref{3}-\eqref{8} yield \begin{equation*} (\alpha_1+\beta_1)(x+a)\cdot((y+b)\cdot(z+c))=((x+a)\cdot(y+b))\cdot(\alpha_2+\beta_2)(z+c).
\qedhere \end{equation*} \end{proof} We denote this BiHom-associative algebra by $A\bowtie^{l_A,r_A,\beta_1,\beta_2}_{l_B,r_B,\alpha_1,\alpha_2}B$. \begin{defn} A BiHom-left-symmetric algebra is a quadruple $(S, \ast, \alpha, \beta)$ consisting of a vector space $S$, a bilinear operation $\ast: S\otimes S\rightarrow S$ and linear maps $\alpha, \beta: S\rightarrow S$ satisfying, for all $x, y, z\in S$, \begin{align} \alpha\circ\beta &=\beta\circ\alpha,\label{clsa0}\\ \alpha(x\ast y) &=\alpha(x)\ast\alpha(y),\label{clsa0.0}\\ \beta(x\ast y) &=\beta(x)\ast\beta(y),\label{clsa0.00}\\ (\beta(x)\ast \alpha(y))\ast\beta(z)&-\alpha\beta(x)\ast(\alpha(y)\ast z) \nonumber \\ &=(\beta(y)\ast \alpha(x))\ast\beta(z)-\alpha\beta(y)\ast(\alpha(x)\ast z). \label{clsa} \end{align} \end{defn} \begin{defn} Let $(S, \ast, \alpha_{1}, \alpha_{2})$ be a BiHom-left-symmetric algebra, and $(V, \beta_{1}, \beta_{2})$ be a BiHom-module. Let $ l, r: S \rightarrow gl(V) $ be two linear maps. The quintuple $(l, r, \beta_{1}, \beta_{2}, V)$ is called a bimodule of $S$ if for all $ x, y \in S, v \in V $, \begin{align*} l(\alpha_{2}(x)\ast\alpha_{1}(y))\beta_{2}(v)&-l(\alpha_{1}\alpha_{2}(x))l(\alpha_{1}(y))v \\ &=l(\alpha_{2}(y)\ast\alpha_{1}(x))\beta_{2}(v)-l(\alpha_{1}\alpha_{2}(y))l(\alpha_{1}(x))v,\\ r(\alpha_{2}(x))r(\alpha_{1}(y))\beta_{2}(v)&-r(\alpha_{1}(y)\ast x)\beta_{1}\beta_{2}(v)\\ &=r(\alpha_{2}(x))l(\alpha_{2}(y))\beta_{1}(v)-l(\alpha_{1}\alpha_{2}(y))r(x)\beta_{1}(v),\\ \beta_{1}(l(x)v)= l(\alpha_{1}(x))\beta_{1}(v),& \quad \beta_{1}(r(x)v)= r(\alpha_{1}(x))\beta_{1}(v),\\ \beta_{2}(l(x)v)= l(\alpha_{2}(x))\beta_{2}(v),& \quad \beta_{2}(r(x)v)= r(\alpha_{2}(x))\beta_{2}(v). \end{align*} \end{defn} \begin{prop} Let $(l, r, \beta_{1}, \beta_{2}, V)$ be a bimodule of a BiHom-left-symmetric algebra $(S, \ast, \alpha_{1}, \alpha_{2})$. Then, the direct sum $S \oplus V$ of vector spaces is turned into a BiHom-left-symmetric algebra by defining multiplication in $S \oplus V $ by \begin{eqnarray*} &&(x_{1} + v_{1}) \ast' (x_{2} + v_{2}):=x_{1} \ast x_{2} + (l(x_{1})v_{2} + r(x_{2})v_{1}),\cr &&(\alpha_{1}\oplus\beta_{1})(x_{1} + v_{1}):=\alpha_{1}(x_{1}) + \beta_{1}(v_{1}),\cr&&(\alpha_{2}\oplus\beta_{2})(x_{1} + v_{1}):=\alpha_{2}(x_{1}) + \beta_{2}(v_{1}), \end{eqnarray*} for all $ x_{1}, x_{2} \in S, v_{1}, v_{2} \in V$. \end{prop} We denote such a BiHom-left-symmetric algebra by $(S\oplus V, \ast', \alpha_{1} + \beta_{1}, \alpha_{2} + \beta_{2}),$ or by $S\times_{l, r, \alpha_{1}, \alpha_{2}, \beta_{1}, \beta_{2}} V.$ \begin{ex} Let $(S,\ast,\alpha,\beta)$ be a BiHom-left-symmetric algebra. Then $(L,0,\alpha,\beta,S)$, $(0,R,\alpha,\beta,S)$ and $(L,R,\alpha,\beta,S)$ are bimodules of $(S,\ast,\alpha,\beta)$, where $L(a)b=a\ast b$ and $R(a)b=b\ast a$ for all $a,b\in S$. \end{ex} \begin{thm}[\cite{Laraiedh1:2021:BimodmtchdprsBihomprepois}] Let $(A,\ast_A,\alpha_1,\alpha_2)$ and $(B,\ast_{B},\beta_1,\beta_2)$ be two BiHom-left-symmetric algebras.
Suppose that there are linear maps $l_{\ast_A},r_{\ast_A}:A\rightarrow gl(B)$ and $l_{\ast_B},r_{\ast_B}:B\rightarrow gl(A)$ such that \begin{align*} &(l_{\ast_A},r_{\ast_A},\beta_1,\beta_2,B) \ \mbox{is a bimodule of}\ A, \\ &(l_{\ast_B},r_{\ast_B},\alpha_1,\alpha_2,A) \ \mbox{is a bimodule of} \ B, \end{align*} and, for any $x,y\in A,~a,b\in B$, with \begin{align*} \{\alpha_2(x),\alpha_1(y)\}_A&=\alpha_2(x)\ast_A\alpha_1(y)-\alpha_2(y)\ast_A\alpha_1(x),\\ (\rho_A\circ\alpha_2)\beta_1&=(l_{\ast_A}\circ\alpha_2)\beta_1-(r_{\ast_A}\circ\alpha_1)\beta_2,\\ \{\beta_2(a),\beta_1(b)\}_B&=\beta_2(a)\ast_B\beta_1(b)-\beta_2(b)\ast_B\beta_1(a), \\ (\rho_B\circ\beta_2)\alpha_1&=(l_{\ast_B}\circ\beta_2)\alpha_1-(r_{\ast_B}\circ\beta_1)\alpha_2, \end{align*} the following equalities hold: \begin{align} &r_{\ast_A}(\alpha_2(x))\{\beta_2(a),\beta_1(b)\}_B=r_{\ast_A}(l_{\ast_B}(\beta_1(b))x)\beta_1\beta_2(a)\nonumber\\ &\quad -r_{\ast_A}(l_{\ast_B}(\beta_1(a))x)\beta_1\beta_2(b)+\beta_1\beta_2(a)\ast_Br_{\ast_A}(x)\beta_1(b)\nonumber\\ &\quad -\beta_1\beta_2(b)\ast_B r_{\ast_A}(x)\beta_1(a),\label{Lie11}\\ &l_{\ast_A}(\alpha_1\alpha_2(x))(\beta_1(a)\ast_B b) =(\rho_A(\alpha_2(x))\beta_1(a))\ast_B\beta_{2}(b)\nonumber\\ & \quad-l_{\ast_A}(\rho_B(\beta_2(a))\alpha_1(x))\beta_2(b) +\beta_1\beta_2(a)\ast_B(l_{\ast_A}(\alpha_1(x))b)\nonumber\\ &\quad+r_{\ast_A}(r_{\ast_B}(b)\alpha_1(x))\beta_1\beta_2(a),\label{Lie12}\\ &r_{\ast_B}(\beta_2(a))\{\alpha_2(x),\alpha_1(y)\}_A= r_{\ast_B}(l_{\ast_A}(\alpha_1(y))a)\alpha_1\alpha_2(x)\nonumber\\ &\quad-r_{\ast_B}(l_{\ast_A}(\alpha_1(x))a)\alpha_1\alpha_2(y)+\alpha_1\alpha_2(x)\ast_A r_{\ast_B}(a)\alpha_1(y)\nonumber\\ &\quad-\alpha_1\alpha_2(y)\ast_A r_{\ast_B}(a)\alpha_1(x),\label{Lie13}\\ &l_{\ast_B}(\beta_1\beta_2(a))(\alpha_1(x)\ast_A y) =(\rho_B(\beta_2(a))\alpha_1(x))\ast_A\alpha_{2}(y)\nonumber\\ &\quad-l_{\ast_B}(\rho_A(\alpha_2(x))\beta_1(a))\alpha_2(y)+\alpha_1\alpha_2(x)\ast_A(l_{\ast_B}(\beta_1(a))y)\nonumber\\ &\quad+r_{\ast_B}(r_{\ast_A}(y)\beta_1(a))\alpha_1\alpha_2(x).\label{Lie14} \end{align} Then for $(A,B,l_{\ast_A},r_{\ast_A},\beta_1,\beta_2,l_{\ast_B},r_{\ast_B},\alpha_1,\alpha_2)$, called a matched pair of BiHom-left-symmetric algebras, there exists a BiHom-left-symmetric algebra structure on the direct sum $A\oplus B$ of the underlying vector spaces of $A$ and $B$, given by $$\begin{array}{llllll} (x + a) \ast (y + b)&:=&x \ast_A y + (l_{\ast_A}(x)b + r_{\ast_A}(y)a)+a \ast_B b + (l_{\ast_B}(a)y + r_{\ast_B}(b)x),\cr (\alpha_{1}\oplus\beta_{1})(x + a)&:=&\alpha_{1}(x) + \beta_{1}(a),\cr (\alpha_{2}\oplus\beta_{2})(x + a)&:=&\alpha_{2}(x) + \beta_{2}(a). \end{array}$$ \end{thm} \begin{proof} The proof is obtained in a similar way as for Theorem \ref{matched ass}.
\end{proof} We denote this BiHom-left-symmetric algebra by $A\bowtie^{l_{\ast_A},r_{\ast_A},\beta_1,\beta_2}_{l_{\ast_B},r_{\ast_B},\alpha_1,\alpha_2}B$. \subsection{BiHom-left-symmetric dialgebra} \begin{defn}\label{gls} A BiHom-left-symmetric dialgebra is a linear space $S$ equipped with two bilinear products $\dashv,\vdash : S\times S\rightarrow S$ and two linear maps $\alpha,\beta: S\rightarrow S$ satisfying, for all $x, y, z\in S$, \begin{align} \alpha\circ\beta &=\beta\circ\alpha,\label{als0}\\ \alpha(x\dashv y) &=\alpha(x)\dashv\alpha(y), \quad \alpha(x\vdash y)=\alpha(x)\vdash\alpha(y),\label{als0.0}\\ \beta(x\dashv y) &=\beta(x)\dashv\beta(y), \quad \beta(x\vdash y)=\beta(x)\vdash\beta(y),\label{als0.00}\\ \alpha(x)\dashv(y\dashv z) &=\alpha(x)\dashv(y\vdash z),\label{als1}\\ (x\vdash y)\vdash\beta(z) &=(x\dashv y)\vdash\beta(z),\label{als2}\\ \alpha\beta(x)\dashv(\alpha(y)\dashv z)&-(\beta(x)\dashv \alpha(y))\dashv \beta(z) \nonumber\\ &=\alpha\beta(y)\vdash(\alpha(x)\dashv z)-(\beta(y)\vdash \alpha(x))\dashv \beta(z), \label{als3}\\ \alpha\beta(x)\vdash(\alpha(y)\vdash z)&-(\beta(x)\vdash \alpha(y))\vdash \beta(z) \nonumber\\ &=\alpha\beta(y)\vdash(\alpha(x)\vdash z)-(\beta(y)\vdash \alpha(x))\vdash \beta(z). \label{als4} \end{align} \end{defn} \begin{ex} Any BiHom-associative algebra $(A,\mu,\alpha,\beta)$ is a BiHom-left-symmetric dialgebra with $\dashv=\vdash=\mu$. \end{ex} \begin{rmk} Relation \eqref{als4} means that $(S, \vdash, \alpha,\beta)$ is a BiHom-left-symmetric algebra. Thus, any BiHom-left-symmetric dialgebra $(S,\dashv,\vdash,\alpha,\beta)$ gives rise to a BiHom-left-symmetric algebra $(S,\vdash,\alpha,\beta)$. \end{rmk} \begin{thm}\label{isma} Given two BiHom-left-symmetric dialgebras $(S_1,\dashv_1, \vdash_1,\alpha_1,\beta_{1})$ and $(S_2,\dashv_2, \vdash_2,\alpha_{2},\beta_{2}),$ there is a BiHom-left-symmetric dialgebra $(S_1\oplus S_2,\dashv=\dashv_1+\dashv_2, \vdash=\vdash_1+\vdash_2, \alpha_{1}+\alpha_{2},\beta_{1}+\beta_{2}),$ where the bilinear maps $\dashv,\vdash:(S_1\oplus S_2)^{\times 2}\longrightarrow (S_1\oplus S_2)$ are given for all $a_1,a_2\in S_1$ and $b_1,b_2\in S_2$ by \begin{align*} (a_1+b_1)\dashv(a_2+b_2) & :=a_1\dashv_1a_2+b_1\dashv_2 b_2,\\ (a_1+b_1)\vdash(a_2+b_2) & :=a_1\vdash_1a_2+b_1\vdash_2 b_2, \end{align*} and the linear maps $\alpha_{1}+\alpha_{2},~\beta_{1}+\beta_{2}: (S_1\oplus S_2)\longrightarrow (S_1\oplus S_2)$ are given for all $(a,b)\in S_1\times S_2$ by \begin{align*} (\alpha_1+\alpha_2)(a+b)&:= \alpha_1(a)+\alpha_2(b),\\ (\beta_1+\beta_2)(a+b)&:= \beta_1(a)+\beta_2(b). \end{align*} \end{thm} \begin{proof} We prove only the axiom \eqref{als1}, as the others are proved similarly. For any $a_{1},b_{1},c_{1}\in S_1$ and $a_2, b_2, c_2\in S_2$, \begin{align*} (\alpha_1+\alpha_2)(a_1+a_2) &\dashv((b_1+b_2)\vdash(c_1+c_2)) \\ &=(\alpha_1(a_1)+\alpha_2(a_2))\dashv((b_1+b_2)\vdash(c_1+c_2))\\ &=\alpha_1(a_1)\dashv_1(b_1\vdash_1 c_1)+\alpha_2(a_2)\dashv_2(b_2\vdash_2 c_2)\\ &=\alpha_1(a_1)\dashv_1(b_1\dashv_1 c_1)+\alpha_2(a_2)\dashv_2(b_2\dashv_2 c_2)\\ &=(\alpha_1(a_1)+\alpha_2(a_2))\dashv((b_1+b_2)\dashv(c_1+c_2))\\ &=(\alpha_1+\alpha_2)(a_1+a_2)\dashv((b_1+b_2)\dashv(c_1+c_2)). \qedhere \end{align*} \end{proof} \begin{prop} Let $(S, \dashv, \vdash, \alpha, \beta)$ be a BiHom-left-symmetric dialgebra, and suppose that $\alpha^{2}=\beta^{2}=id$ and $\alpha\circ\beta=\beta\circ\alpha.$ Then, $(S, \dashv, \vdash, \alpha, \beta)\cong(S,\dashv,\vdash, \beta, \alpha).$ \end{prop} \begin{proof} We prove only one axiom, as the others are proved similarly.
For any $x, y, z\in S,$ \begin{eqnarray*} \alpha(x)\dashv(y\dashv z)&=&\alpha(x)\dashv(y\vdash z)\Leftrightarrow\\ \alpha(\alpha\beta(x))\dashv (y\dashv z)&=&\alpha(\alpha\beta(x))\dashv(y\vdash z)\Leftrightarrow\\ \alpha^{2}\beta(x)\dashv(y\dashv z)&=&\alpha^{2}\beta(x)\dashv(y\vdash z)\Leftrightarrow\\ \beta(x)\dashv(y\dashv z)&=&\beta(x)\dashv(y\vdash z). \end{eqnarray*} Then $(S, \dashv,\vdash, \alpha, \beta)\cong(S, \dashv,\vdash, \beta, \alpha).$ \end{proof} \begin{thm}\label{th1.1} Let $(S, \dashv, \vdash,\alpha,\beta)$ be a BiHom-left-symmetric dialgebra and $\alpha',\beta' : S\rightarrow S$ be two endomorphisms of $S$ such that any two of the maps $\alpha,\beta,\alpha',\beta'$ commute. Then, $S_{\alpha',\beta'}=(S, \dashv_{\alpha',\beta'}=\dashv\circ(\alpha'\otimes\beta'), \vdash_{\alpha',\beta'}=\vdash\circ(\alpha'\otimes\beta'), \alpha'\circ\alpha,\beta'\circ\beta)$ is a BiHom-left-symmetric dialgebra. Moreover, suppose that $(S', \dashv', \vdash',\gamma,\delta)$ is another BiHom-left-symmetric dialgebra and let $\gamma',\delta' : S'\rightarrow S'$ be two endomorphisms of $S'$ such that any two of the maps $\gamma,\delta,\gamma',\delta'$ commute. If $f : S\rightarrow S'$ is a morphism of BiHom-left-symmetric dialgebras such that $f\circ\alpha'=\gamma'\circ f,~f\circ\beta'=\delta'\circ f$, then $ f : S_{\alpha',\beta'}\rightarrow S'_{\gamma',\delta'}$ is a morphism of BiHom-left-symmetric dialgebras. \end{thm} \begin{proof} We only prove \eqref{als1} in $S_{\alpha',\beta'}$, since the other axioms are proved analogously. For any $x, y, z\in S$, $$\begin{array}{lllllll}\alpha\alpha'(x)\dashv_{\alpha',\beta'}(y\dashv_{\alpha',\beta'} z)&=&\alpha\alpha'(x)\dashv_{\alpha',\beta'}(\alpha'(y)\dashv\beta'(z))\\ &=&\alpha\alpha'^{2}(x)\dashv(\alpha'\beta'(y)\dashv\beta'^{2}(z)) \\ &=&\alpha\alpha'^{2}(x)\dashv(\alpha'\beta'(y)\vdash\beta'^{2}(z))\quad (\mbox{by \eqref{als1} in } S)\\ &=&\alpha\alpha'(x)\dashv_{\alpha',\beta'}(\alpha'(y)\vdash\beta'(z))\\ &=&\alpha\alpha'(x)\dashv_{\alpha',\beta'}(y\vdash_{\alpha',\beta'} z).\end{array}$$ For the second assertion, for any $x,y\in S$, \begin{eqnarray*} f(x\dashv_{\alpha',\beta'} y)&=&f(\alpha'(x)\dashv\beta'(y))=f(\alpha'(x))\dashv' f(\beta'(y))\\ &=&\gamma'(f(x))\dashv'\delta'(f(y))=f(x)\dashv'_{\gamma',\delta'} f(y). \end{eqnarray*} Similarly, $f(x\vdash_{\alpha',\beta'} y)=f(x)\vdash'_{\gamma',\delta'} f(y)$. \end{proof} \begin{defn} Let $(S, \dashv, \vdash, \alpha_{1}, \alpha_{2})$ be a BiHom-left-symmetric dialgebra, and $V$ be a vector space. Let $l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash} : S \rightarrow gl(V)$ and $\beta_{1}, \beta_{2}: V \rightarrow V$ be linear maps.
Then, $ (l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash}, \beta_{1}, \beta_{2}, V)$ is called a bimodule of $ S $ if the following equations hold for any $ x, y \in S $ and $v\in V$: $$\begin{array}{rlllllllllll} l_{\dashv}(\alpha_{1}(x))l_{\dashv}(y)v&=& l_{\dashv}(\alpha_{1}(x))l_{\vdash}(y)v,\cr r_{\dashv}(x\dashv y)\beta_{1}(v)&=&r_{\dashv}(x\vdash y)\beta_{1}(v),\cr l_{\dashv}(\alpha_{1}(x))r_{\dashv}(y)v&=&l_{\dashv}(\alpha_{1}(x))r_{\vdash}(y)v,\cr l_{\vdash}(x \vdash y)\beta_{2}(v) &=& l_{\vdash}(x \dashv y)\beta_{2}(v),\cr r_{\vdash}(\alpha_{2}(x))r_{\vdash}(y)v &=& r_{\vdash}(\alpha_{2}(x))r_{\dashv}(y)v, \cr r_{\dashv}(\alpha_{2}(x))l_{\vdash}(y)v &=& r_{\vdash}(\alpha_{2}(x))l_{\dashv}(y)v,\cr l_{\dashv}(\alpha_{1}\alpha_{2}(x))l_{\dashv}(\alpha_{1}(y))v&-& l_{\dashv}(\alpha_{2}(x)\dashv\alpha_{1}(y))\beta_{2}(v)\cr &=& l_{\vdash}(\alpha_{1}\alpha_{2}(y))l_{\dashv}(\alpha_{1}(x))v-l_{\dashv}(\alpha_{2}(y)\vdash\alpha_{1}(x))\beta_{2}(v),\cr r_{\dashv}(\alpha_{1}(x)\dashv y)\beta_{1}\beta_{2}(v)&-&r_{\dashv}(\alpha_{2}(y))r_{\dashv}(\alpha_{1}(x))\beta_{2}(v)\cr &=& l_{\vdash}(\alpha_{1}\alpha_{2}(x))r_{\dashv}(y)\beta_{1}(v)- r_{\dashv}(\alpha_{2}(x))l_{\vdash}(\alpha_{2}(y))\beta_{1}(v),\cr l_{\vdash}(\alpha_{1}\alpha_{2}(x))l_{\vdash}(\alpha_{1}(y))v&-&l_{\vdash}(\alpha_{2}(x)\vdash\alpha_{1}(y))\beta_{2}(v)\cr &=& l_{\vdash}(\alpha_{1}\alpha_{2}(y))l_{\vdash}(\alpha_{1}(x))v-l_{\vdash}(\alpha_{2}(y)\vdash\alpha_{1}(x))\beta_{2}(v),\cr r_{\vdash}(\alpha_{1}(x)\vdash y)\beta_{1}\beta_{2}(v)&-&r_{\vdash}(\alpha_{2}(y))r_{\vdash}(\alpha_{1}(x))\beta_{2}(v)\cr &=& l_{\vdash}(\alpha_{1}\alpha_{2}(x))r_{\vdash}(y)\beta_{1}(v)- r_{\vdash}(\alpha_{2}(x))l_{\vdash}(\alpha_{2}(y))\beta_{1}(v),\cr \beta_{1}(l_{\vdash}(x)v)= l_{\vdash}(\alpha_{1}(x))\beta_{1}(v),&&\beta_{1}(r_{\vdash}(x)v)= r_{\vdash}(\alpha_{1}(x))\beta_{1}(v),\cr \beta_{2}(l_{\vdash}(x)v) = l_{\vdash}(\alpha_{2}(x))\beta_{2}(v),&&\beta_{2}(r_{\vdash}(x)v)= r_{\vdash}(\alpha_{2}(x))\beta_{2}(v),\cr \beta_{1}(l_{\dashv}(x)v)= l_{\dashv}(\alpha_{1}(x))\beta_{1}(v),&& \beta_{1}(r_{\dashv}(x)v)= r_{\dashv}(\alpha_{1}(x))\beta_{1}(v),\cr \beta_{2}(l_{\dashv}(x)v) = l_{\dashv}(\alpha_{2}(x))\beta_{2}(v),&&\beta_{2}(r_{\dashv}(x)v)=r_{\dashv}(\alpha_{2}(x))\beta_{2}(v). \end{array}$$ \end{defn} \begin{prop} If $(l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash}, \beta_{1}, \beta_{2}, V)$ is a bimodule of a BiHom-left-symmetric dialgebra $(S,\dashv, \vdash, \alpha_{1}, \alpha_{2}),$ then there exists a BiHom-left-symmetric dialgebra structure on the direct sum $S\oplus V $ of the underlying vector spaces of $ S $ and $V$ given for all $ x, y \in S, u, v \in V $ by \begin{eqnarray*} (x + u) \dashv' (y + v) &:=& x \dashv y + l_{\dashv}(x)v + r_{\dashv}(y)u,\cr (x + u) \vdash' (y + v) &:=& x \vdash y + l_{\vdash}(x)v + r_{\vdash}(y)u, \cr (\alpha_1+\beta_1)(x+u)&:=&\alpha_1(x)+\beta_1(u),\cr (\alpha_2+\beta_2)(x+u)&:=&\alpha_2(x)+\beta_2(u). \end{eqnarray*} We denote this structure by $ S \times_{l_{\dashv},r_{\dashv}, l_{\vdash}, r_{\vdash}, \alpha_{1}, \alpha_{2}, \beta_{1}, \beta_{2}} V$. \end{prop} \begin{proof} We prove only the axiom \eqref{als1}, as the others are proved similarly.
For any $x_{1},x_{2},x_{3}\in S$ and $v_1, v_2, v_3\in V$, \begin{align*} &(\alpha_{1}+\beta_{1})(x_{1}+v_{1})\dashv'((x_2+v_2)\dashv'(x_3+v_3))\\ &\quad=(\alpha_1(x_1)+\beta_1(v_1))\dashv'(x_2\dashv x_3+l_\dashv(x_2)v_3+r_\dashv(x_3)v_2)\\ &\quad =\alpha_1(x_1)\dashv(x_2\dashv x_3)+l_\dashv(\alpha_1(x_1))l_\dashv(x_2)v_3 \\ & \quad\quad +l_\dashv(\alpha_1(x_1))r_\dashv(x_3)v_2+r_\dashv(x_2\dashv x_3)\beta_1(v_1)\\ & \quad=\alpha_1(x_1)\dashv(x_2\vdash x_3)+l_{\dashv}(\alpha_1(x_1))l_\vdash(x_2)v_3\\ &\quad \quad +l_\dashv(\alpha_1(x_1))r_\vdash(x_3)v_2+r_\dashv(x_2\vdash x_3)\beta_1(v_1)\\ &\quad =(\alpha_{1}+\beta_{1})(x_{1}+v_{1})\dashv'((x_2+v_2)\vdash'(x_3+v_3)). \qedhere \end{align*} \end{proof} \begin{exes}\begin{enumerate} \item Let $(S,\dashv, \vdash,\alpha,\beta)$ be a BiHom-left-symmetric dialgebra. Then $(L_{\dashv},R_{\dashv},L_{\vdash},R_{\vdash},\alpha,\beta,S)$ and $(L_{\dashv},0,0,R_{\vdash},\alpha,\beta,S)$ are bimodules of $(S,\dashv, \vdash,\alpha,\beta)$, where $L_{\dashv}(a)b=a\dashv b,~R_{\dashv}(a)b=b\dashv a,~L_{\vdash}(a)b=a\vdash b$ and $R_{\vdash}(a)b=b\vdash a$ for all $(a,b)\in S^{2}$. More generally, if $B$ is a two-sided BiHom-ideal of $(S,\dashv, \vdash,\alpha,\beta)$, then $(L_{\dashv},R_{\dashv},L_{\vdash},R_{\vdash},\alpha,\beta,B)$ is a bimodule of $S$, where for all $x\in B$ and $a\in S$, $$L_{\dashv}(a)x=a\dashv x,~~R_{\dashv}(a)x=x\dashv a, \quad L_{\vdash}(a)x=a\vdash x,~~R_{\vdash}(a)x=x\vdash a.$$ \item If $(S,\dashv, \vdash)$ is a left-symmetric dialgebra and $(l_\dashv,r_\dashv,l_\vdash,r_\vdash,V)$ is a bimodule of $S$ in the usual sense, then $(l_\dashv,r_\dashv,l_\vdash,r_\vdash,Id_{V},Id_{V},V)$ is a bimodule of $\mathbb{S}$, where $\mathbb{S}=(S,\dashv, \vdash,Id_{S}, Id_{S})$ is a BiHom-left-symmetric dialgebra.\end{enumerate} \end{exes} \begin{prop}\label{viaf} If $f:(S,\dashv_1, \vdash_1,\alpha_{1},\alpha_2)\longrightarrow(S',\dashv_2, \vdash_2,\beta_{1},\beta_{2})$ is a morphism of BiHom-left-symmetric dialgebras, then $(l_{\dashv_1},r_{\dashv_1},l_{\vdash_1},r_{\vdash_1},\beta_1,\beta_{2},S')$ becomes a bimodule of $S$ via $f$, that is, $l_{\dashv_1}(a)b=f(a)\dashv_2 b,~r_{\dashv_1}(a)b=b \dashv_2 f(a),~l_{\vdash_1}(a)b=f(a)\vdash_2 b$ and $r_{\vdash_1}(a)b=b \vdash_2 f(a)$ for all $(a,b)\in S\times S'$. \end{prop} \begin{proof} We prove only the first axiom, as the other axioms are proved similarly. For any $x,y\in S, z\in S'$, \begin{align*} l_{\dashv_1}(\alpha_1(x))l_{\vdash_1}(y)z&=f(\alpha_1(x))\dashv_2(f(y)\vdash_2 z)\\ &=\beta_1f(x)\dashv_2( f(y)\vdash_2 z)\\ &=\beta_1f(x)\dashv_2( f(y)\dashv_2 z) \quad \textrm{(by \eqref{als1})}\\ &=f(\alpha_1(x))\dashv_2(f(y)\dashv_2 z)\\ &=l_{\dashv_1}(\alpha_1(x))l_{\dashv_1}(y)z. \qedhere \end{align*}\end{proof} \begin{defn} An abelian extension of BiHom-left-symmetric dialgebras is a short exact sequence of BiHom-left-symmetric dialgebras $$0\longrightarrow (V,\alpha_{V},\beta_{V})\stackrel{\mbox{i}} \longrightarrow(A,\dashv_{A},\vdash_A,\alpha_{A},\beta_{A})\stackrel{\mbox{$\pi$}}\longrightarrow (B,\dashv_{B},\vdash_B,\alpha_{B},\beta_{B})\longrightarrow 0 ,$$ where $(V,\alpha_{V},\beta_{V})$ is a trivial BiHom-left-symmetric dialgebra, and $i$ and $\pi$ are morphisms of BiHom-left-symmetric dialgebras. Furthermore, if there exists a morphism $s:(B,\dashv_{B},\vdash_B,\alpha_{B},\beta_{B}) \longrightarrow (A,\dashv_{A},\vdash_A,\alpha_{A},\beta_{A})$ such that $\pi\circ s=id_{B}$ then the abelian extension is said to be split and $s$ is called a section of $\pi$.
\end{defn} \begin{rmk} Consider the split null extension $S\oplus V$ determined by the bimodule $(l_\dashv,r_\dashv,l_\vdash,r_\vdash,\alpha_{V},\beta_{V},V)$ of the BiHom-left-symmetric dialgebra $(S,\dashv,\vdash,\alpha,\beta)$ in the previous proposition. Write elements $a+v$ of $S\oplus V$ as $(a,v).$ Then there is an injective homomorphism of BiHom-modules $i :V\rightarrow S\oplus V $ given by $i(v)=(0,v)$ and a surjective homomorphism of BiHom-modules $\pi : S\oplus V\rightarrow S$ given by $\pi(a,v)=a.$ Moreover, $i(V)$ is a two-sided BiHom-ideal of $S\oplus V$ such that $S\oplus V/i(V)\cong S$. On the other hand, there is a morphism of BiHom-left-symmetric dialgebras $\sigma: S\rightarrow S\oplus V$ given by $\sigma(a)=(a,0)$ which is clearly a section of $\pi.$ Hence, we obtain the abelian split exact sequence of BiHom-left-symmetric dialgebras, and $(l_\dashv,r_\dashv,l_\vdash,r_\vdash,\alpha_{V},\beta_{V},V)$ is a bimodule for $S$ via $\pi.$ \end{rmk} \begin{thm}\label{mamm} Let $(S,\dashv, \vdash,\alpha_1,\alpha_2)$ be a BiHom-left-symmetric dialgebra, and let $(l_{\dashv},r_{\dashv},l_{\vdash},r_{\vdash},\beta_1,\beta_2,V)$ be a bimodule of $S$. Let $\alpha'_1,\alpha'_2$ be endomorphisms of $S$ such that any two of the maps $\alpha_1,\alpha'_1,\alpha_2,\alpha'_2$ commute and $\beta'_1,~\beta'_2$ be linear self-maps of $V$ such that any two of the maps $\beta_1,\beta'_1,\beta_2,\beta'_2$ commute. Suppose furthermore that $$\left\{ \begin{array}{lllllll} \beta'_1\circ l_\dashv=(l_\dashv\circ\alpha'_1)\beta'_1,~~ \beta'_2\circ l_\dashv=(l_\dashv\circ\alpha'_2)\beta'_2,& \\ \beta'_1\circ l_\vdash=(l_\vdash\circ\alpha'_1)\beta'_1,~~ \beta'_2\circ l_\vdash=(l_\vdash\circ\alpha'_2)\beta'_2,& \end{array} \right.$$ $$\left\{ \begin{array}{lllllll} \beta'_1\circ r_\dashv=(r_\dashv\circ\alpha'_1)\beta'_1,~~ \beta'_2\circ r_\dashv=(r_\dashv\circ\alpha'_2)\beta'_2,& \\ \beta'_1\circ r_\vdash=(r_\vdash\circ\alpha'_1)\beta'_1,~~ \beta'_2\circ r_\vdash=(r_\vdash\circ\alpha'_2)\beta'_2,& \end{array} \right.$$ and let $S_{\alpha'_1,\alpha'_2}$ be the BiHom-left-symmetric dialgebra $(S,\dashv_{\alpha'_1,\alpha'_2}, \vdash_{\alpha'_1,\alpha'_2},\alpha_1\alpha'_1,\alpha_2\alpha'_2)$ and $V_{\beta'_1,\beta'_2}=(\widetilde{l}_{\dashv},\widetilde{r}_{\dashv},\widetilde{l}_{\vdash}, \widetilde{r}_{\vdash},\beta_1\beta'_1,\beta_2\beta'_2,V)$ where \begin{equation} \begin{array}{l} \widetilde{l}_{\dashv}=(l_{\dashv}\circ\alpha'_1)\beta'_2, \quad \widetilde{r}_{\dashv}=(r_{\dashv}\circ\alpha'_2)\beta'_1, \\ \widetilde{l}_{\vdash}=(l_{\vdash}\circ\alpha'_1)\beta'_2, \quad \widetilde{r}_{\vdash}=(r_{\vdash}\circ\alpha'_2)\beta'_1. \end{array} \end{equation} Then $V_{\beta'_1,\beta'_2}$ is a bimodule of $S_{\alpha'_1,\alpha'_2}$. \end{thm} \begin{proof} We prove only one axiom, as the others are proved similarly. For any $x,y\in S, v\in V$, \begin{align*} \widetilde{l}_{\dashv}(\alpha_1\alpha'_1(x))\widetilde{l}_{\dashv}(y)v &=l_{\dashv}(\alpha_1\alpha'^{2}_1(x))\beta'_2\big(l_{\dashv}(\alpha'_{1}(y))\beta'_2(v)\big)\\ &=l_{\dashv}(\alpha_1\alpha'^{2}_1(x))l_{\dashv}(\alpha'_1\alpha'_2(y))\beta'^{2}_2(v)\\ &=l_{\dashv}(\alpha_1\alpha'^{2}_1(x))l_{\vdash}(\alpha'_1\alpha'_2(y))\beta'^{2}_2(v)\\ &=\widetilde{l}_{\dashv}(\alpha_1\alpha'_1(x))\widetilde{l}_{\vdash}(y)v. \qedhere \end{align*} \end{proof} \begin{cor} Let $(S,\dashv, \vdash,\alpha_1,\alpha_2)$ be a BiHom-left-symmetric dialgebra, and let $(l_{\dashv},r_{\dashv},l_{\vdash},r_{\vdash},\beta_1,\beta_2,V)$ be a bimodule of $S$.
Then $V_{\beta^{q}_1,\beta^{q}_2}$ is a bimodule of $S_{\alpha^{p}_1,\alpha^{p}_2}$ for any nonnegative integers $p$ and $q$. \end{cor} \begin{proof} Apply Theorem \ref{mamm} with $\alpha'_1=\alpha_1^{p},~\alpha'_2=\alpha_{2}^{p}$ and $\beta'_1=\beta_1^{q},~\beta'_2=\beta_{2}^{q}$. \end{proof} Let $( l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash}, \beta_{1}, \beta_{2}, V)$ be a bimodule of a BiHom-left-symmetric dialgebra $(S, \dashv, \vdash, \alpha_{1}, \alpha_{2})$, and let $l_{\dashv}^{\ast}, r_{\dashv}^{\ast}, l_{\vdash}^{\ast}, r_{\vdash}^{\ast}:S\rightarrow gl(V^{\ast}).$ Let $\alpha_1^{\ast},\alpha_{2}^{\ast}:S^{\ast}\rightarrow S^{\ast},$ and $\beta_{1}^{\ast},\beta_{2}^{\ast}:V^{\ast}\rightarrow V^{\ast}$ be the dual maps of respectively $\alpha_1,\alpha_2,\beta_1$ and $\beta_2$ such that $$\begin{array}{llllllll} \langle l_{\dashv}^{\ast}(x)u^{\ast},v\rangle =\langle u^{\ast},l_{\dashv}(x)v\rangle,&& \langle r^{\ast}_{\dashv}(x)u^{\ast},v\rangle =\langle u^{\ast},r_{\dashv}(x)v\rangle,\\ \langle l_{\vdash}^{\ast}(x)u^{\ast},v\rangle =\langle u^{\ast},l_{\vdash}(x)v\rangle,&& \langle r^{\ast}_{\vdash}(x)u^{\ast},v\rangle =\langle u^{\ast},r_{\vdash}(x)v\rangle,\\ \alpha_{1}^{\ast}(x^{\ast}(y))=x^{\ast}(\alpha_{1}(y)),&& \alpha_{2}^{\ast}(x^{\ast}(y))=x^{\ast}(\alpha_{2}(y)),\\ \beta_{1}^{\ast}(u^{\ast}(v))=u^{\ast}(\beta_{1}(v)),&& \beta_{2}^{\ast}(u^{\ast}(v))=u^{\ast}(\beta_{2}(v)). \end{array}$$ The following proposition holds. \begin{prop} If $( l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash}, \beta_{1}, \beta_{2}, V)$ is a bimodule of a BiHom-left-symmetric dialgebra $(S, \dashv, \vdash, \alpha_{1}, \alpha_{2})$, then $( l_{\dashv}^{\ast}, r_{\dashv}^{\ast}, l_{\vdash}^{\ast}, r_{\vdash}^{\ast}, \beta_{1}^{\ast}, \beta_{2}^{\ast}, V^{\ast})$ is a bimodule of $(S, \dashv, \vdash,\alpha_{1}, \alpha_{2})$ provided that for all $x,y\in S$ and $u\in V$, \begin{align*} l_{\dashv}(y)l_{\dashv}(\alpha_{1}(x))u&= l_{\vdash}(y)l_{\dashv}(\alpha_{1}(x))u,\cr\beta_{1}(r_{\dashv}(\alpha_{1}(x)\dashv y))u&=\beta_{1}(r_{\dashv}(x\vdash y))u,\cr r_{\dashv}(y)l_{\dashv}(\alpha_{1}(x))u&=r_{\vdash}(y)l_{\dashv}(\alpha_{1}(x))u,\cr \beta_{2}(l_{\vdash}(x \vdash y))u &= \beta_{2}(l_{\vdash}(x \dashv y))u,\cr r_{\vdash}(y)r_{\vdash}(\alpha_{2}(x))u &=r_{\dashv}(y) r_{\vdash}(\alpha_{2}(x))u, \cr l_{\vdash}(y)r_{\dashv}(\alpha_{2}(x))u &= l_{\dashv}(y)r_{\vdash}(\alpha_{2}(x))u,\cr l_{\dashv}(\alpha_{1}(y))l_{\dashv}(\alpha_{1}\alpha_{2}(x))u&-\beta_{2}(l_{\dashv}(\alpha_{2}(x)\dashv\alpha_{1}(y)))u\cr &= l_{\dashv}(\alpha_{1}(x))l_{\vdash}(\alpha_{1}\alpha_{2}(y))u-\beta_{2}(l_{\dashv}(\alpha_{2}(y)\vdash\alpha_{1}(x)))u,\cr \beta_{2}\beta_{1}(r_{\dashv}(\alpha_{1}(x)\dashv y))u&-\beta_{2}r_{\dashv}(\alpha_{1}(x))r_{\dashv}(\alpha_{2}(y))u\cr &=r_{\dashv}(y)\beta_{1}l_{\vdash}(\alpha_{1}\alpha_{2}(x))u- \beta_{1}l_{\vdash}(\alpha_{2}(y))r_{\dashv}(\alpha_{2}(y))u,\cr l_{\vdash}(\alpha_{1}(y))l_{\vdash}(\alpha_{1}\alpha_{2}(x))u&-\beta_{2}(l_{\vdash}(\alpha_{2}(x)\vdash\alpha_{1}(y)))u\cr &= l_{\vdash}(\alpha_{1}(x))l_{\vdash}(\alpha_{1}\alpha_{2}(y))u-\beta_{2}(l_{\vdash}(\alpha_{2}(y)\vdash\alpha_{1}(x)))u,\cr \beta_{2}\beta_{1}(r_{\vdash}(\alpha_{1}(x)\vdash y))u&-\beta_2 r_{\vdash}(\alpha_{1}(x))r_{\vdash}(\alpha_{2}(y))u\cr &=\beta_{1}r_{\vdash}(y)l_{\vdash}(\alpha_{1}\alpha_{2}(x))u- \beta_{1}l_{\vdash}(\alpha_{2}(y)) r_{\vdash}(\alpha_{2}(y))u. \end{align*} \end{prop} The following theorem is proved in a similar way to Theorem \ref{matched ass}.
\begin{thm} Let $(A,\dashv_A,\vdash_A,\alpha_1,\alpha_2)$ and $(B,\dashv_B,\vdash_{B},\beta_1,\beta_2)$ be two BiHom-left-symmetric dialgebras. Suppose that there are linear maps $l_{\dashv_A},r_{\dashv_A},l_{\vdash_A},r_{\vdash_A}:A\rightarrow gl(B)$ and $l_{\dashv_B},r_{\dashv_B},l_{\vdash_B},r_{\vdash_B}:B\rightarrow gl(A)$ such that $( l_{\dashv_{A}}, r_{\dashv_{A}}, l_{\vdash_{A}}, r_{\vdash_{A}}, \beta_{1}, \beta_{2}, B)$ is a bimodule of $A,$ and $(l_{\dashv_{B}}, r_{\dashv_{B}}, l_{\vdash_{B}}, r_{\vdash_{B}}, \alpha_{1}, \alpha_{2}, A)$ is a bimodule of $B,$ and for all $x,y\in A,~a,b\in B$, the following equalities hold: \begin{align} \label{bieq35A} & \begin{array}{l} l_{\dashv_A}(\alpha_1(x))(a\dashv_B b)=l_{\dashv_A}(\alpha_1(x))(a\vdash_B b), \end{array} \\ \label{bieq36A} & \begin{array}{l} r_{\dashv_A}(r_{\dashv_B}(b)x)\beta_1(a)+\beta_1(a)\dashv_B(l_{\dashv_A}(x)b)= \\ \quad r_{\dashv_A}(r_{\vdash_B}(b)x)\beta_1(a)+\beta_1(a)\dashv_B(l_{\vdash_A}(x)b), \end{array} \\ \label{bieq37A} & \begin{array}{l} r_{\dashv_A}(l_{\dashv_B}(b)x)\beta_1(a)+\beta_1(a)\dashv_B(r_{\dashv_A}(x)b)=\\ \quad r_{\dashv_A}(l_{\vdash_B}(b)x)\beta_1(a)+\beta_1(a)\dashv_B(r_{\vdash_A}(x)b), \end{array} \\ \label{bieq38A} & \begin{array}{l} r_{\vdash_A}(\alpha_2(x))(a\vdash_B b)=r_{\dashv_A}(\alpha_2(x))(a\dashv_B b), \end{array} \\ \label{bieq39A} & \begin{array}{l} l_{\vdash_A}(r_{\vdash_B}(a)x)\beta_2(b)+(l_{\vdash_A}(x)a)\vdash_B\beta_2(b)=\\ \quad l_{\vdash_A}(r_{\dashv_B}(a)x)\beta_2(b)+(l_{\dashv_A}(x)a)\vdash_B\beta_2(b), \end{array}\\ \label{bieq40A} &\begin{array}{l} l_{\vdash_A}(l_{\dashv_B}(a)x)\beta_2(b)+(r_{\vdash_A}(x)a)\vdash_B\beta_2(b)= \\ \quad l_{\vdash_A}(l_{\dashv_B}(a)x)\beta_2(b)+(r_{\dashv_A}(x)a)\vdash_B\beta_2(b), \end{array}\\ \label{bieq41A} &\begin{array}{l} l_{\dashv_A}(\alpha_1\alpha_2(x))(\beta_1(a)\dashv_B b) -l_{\dashv_A}(r_{\dashv_B}(\beta_1(a))\alpha_2(x))\beta_2(b)\\ \quad -(l_{\dashv_A}(\alpha_1(x))\beta_1(a))\dashv_B\beta_2(b)=\\ \beta_1\beta_2(a)\vdash_B(l_{\dashv_A}(\alpha_1(x))b)+r_{\vdash_A}(r_{\dashv_B}(b)\alpha_1(x))\beta_1\beta_2(a)\\ \quad-(r_{\vdash_A}(\alpha_1(x))\beta_2(a))\dashv_B \beta_2(b) -l_{\dashv_A}(l_{\vdash_B}(\beta_2(a))\alpha_1(x))\beta_2(b), \end{array} \\ \label{bieq42A} &\begin{array}{l} \beta_1\beta_2(a)\dashv_B(l_{\dashv_A}(\alpha_1(x))b)+r_{\dashv_A}(r_{\dashv_B}(b)\alpha_1(x))\beta_1\beta_2(a)\\ \quad -(r_{\dashv_A}(\alpha_1(x))\beta_2(a))\dashv_B\beta_2(b)=\\ l_{\vdash_A}(\alpha_1\alpha_2(x))(\beta_2(a)\dashv_B b)-(l_{\vdash_A}(\alpha_2(x))\beta_2(a))\dashv_B\beta_2(b)\\ \quad -l_{\dashv_A}(l_{\vdash_B}(\alpha_2(x))\beta_2(a))\beta_2(b), \end{array}\\ \label{bieq43A} &\begin{array}{l} \beta_1\beta_2(a)\dashv_B(r_{\dashv_A}(x)\beta_2(b))+r_{\dashv_A}(l_{\dashv_B}(\beta_2(b))x)\beta_1\beta_2(a)\\ \quad -r_{\dashv_A}(\alpha_2(x))(\beta_2(a)\dashv_B\beta_1(b))=\\ \beta_1\beta_2(b)\vdash_B(r_{\dashv_A}(x)\beta_1(a))+r_{\vdash_A}(l_{\dashv_B}(\beta_1(a))x)\beta_1\beta_2(b) \\ \quad -r_{\dashv_A}(\alpha_2(x))(\beta_2(b)\vdash_B\beta_1(a)), \end{array} \\ \label{bieq44A} & \begin{array}{l} l_{\vdash_A}(\alpha_1\alpha_2(x))(\beta_1(a)\vdash_B b)-l_{\vdash_A}(r_{\vdash_B}(\beta_1(a))\alpha_2(x))\beta_2(b) \\ \quad -(l_{\vdash_A}(\alpha_2(x))\beta_1(a))\vdash_B\beta_2(b)=\\ \beta_1\beta_2(a)\vdash_B(l_{\vdash_A}(\alpha_1(x))b)+r_{\vdash_A}(r_{\vdash_B}(b)\alpha_1(x))\beta_1\beta_2(a)\\ \quad -(r_{\vdash_A}(\alpha_1(x))\beta_2(a))\vdash_B \beta_2(b) -l_{\vdash_A}(l_{\vdash_B}(\beta_2(a))\alpha_1(x))\beta_2(b), \end{array} \\ \label{bieq45A} &
\begin{array}{l} \beta_1\beta_2(a)\vdash_B(l_{\vdash_A}(\alpha_1(x))b)+r_{\vdash_A}(r_{\vdash_B}(b)\alpha_1(x))\beta_1\beta_2(a) \\ \quad -(r_{\vdash_A}(\alpha_1(x))\beta_2(a))\vdash_B\beta_2(b)=\\ \quad l_{\vdash_A}(\alpha_1\alpha_2(x))(\beta_2(a)\vdash_B b) -(l_{\vdash_A}(\alpha_2(x))\beta_2(a))\vdash_B\beta_2(b)\\ \quad -l_{\vdash_A}(l_{\vdash_B}(\alpha_2(x))\beta_2(a))\beta_2(b), \end{array}\\ \label{bieq46A} & \begin{array}{l} \beta_1\beta_2(a)\vdash_B(r_{\vdash_A}(x)\beta_2(b))+r_{\vdash_A}(l_{\vdash_B}(\beta_2(b))x)\beta_1\beta_2(a)\\ \quad -r_{\vdash_A}(\alpha_2(x))(\beta_2(a)\vdash_B\beta_1(b))=\\ \beta_1\beta_2(b)\vdash_B(r_{\vdash_A}(x)\beta_1(a))+ r_{\vdash_A}(l_{\vdash_B}(\beta_1(a))x)\beta_1\beta_2(b)\\ \quad -r_{\vdash_A}(\alpha_2(x))(\beta_2(b)\vdash_B\beta_1(a)), \end{array}\\ \label{bieq35B} & \begin{array}{l} l_{\dashv_B}(\beta_1(a))(x\dashv_A y)=l_{\dashv_B}(\beta_1(a))(x\vdash_A y), \end{array}\\ \label{bieq36B} &\begin{array}{l} r_{\dashv_B}(r_{\dashv_A}(y)a)\alpha_1(x)+\alpha_1(x)\dashv_A(l_{\dashv_B}(a)y)=\\ \quad r_{\dashv_B}(r_{\vdash_A}(y)a)\alpha_1(x)+\alpha_1(x)\dashv_A(l_{\vdash_B}(a)y), \end{array}\\ \label{bieq37B} &\begin{array}{l} r_{\dashv_B}(l_{\dashv_A}(y)a)\alpha_1(x)+\alpha_1(x)\dashv_A(r_{\dashv_B}(a)y)=\\ \quad r_{\dashv_B}(l_{\vdash_A}(y)a)\alpha_1(x)+\alpha_1(x)\dashv_A(r_{\vdash_B}(a)y), \end{array}\\ \label{bieq38B} & \begin{array}{l} r_{\vdash_B}(\beta_2(a))(x\vdash_A y)=r_{\dashv_B}(\beta_2(a))(x\dashv_A y), \end{array} \\ \label{bieq39B} & \begin{array}{l} l_{\vdash_B}(r_{\vdash_A}(x)a)\alpha_2(y)+(l_{\vdash_B}(a)x)\vdash_A\alpha_2(y)=\\ \quad l_{\vdash_B}(r_{\dashv_A}(x)a)\alpha_2(y)+(l_{\dashv_B}(a)x)\vdash_A\alpha_2(y), \end{array} \\ \label{bieq40B} & \begin{array}{l} l_{\vdash_B}(l_{\dashv_A}(x)a)\alpha_2(y)+(r_{\vdash_B}(a)x)\vdash_A\alpha_2(y)=\\ \quad l_{\vdash_B}(l_{\dashv_A}(x)a)\alpha_2(y)+(r_{\dashv_B}(a)x)\vdash_A\alpha_2(y), \end{array} \\ \label{bieq41B} &\begin{array}{l} l_{\dashv_B}(\beta_1\beta_2(a))(\alpha_1(x)\dashv_A y) -l_{\dashv_B}(r_{\dashv_A}(\alpha_1(x))\beta_2(a))\alpha_2(y) \\ \quad -(l_{\dashv_B}(\beta_1(a))\alpha_1(x))\dashv_A\alpha_2(y)=\\ \alpha_1\alpha_2(x)\vdash_A(l_{\dashv_B}(\beta_1(a))y)+r_{\vdash_B}(r_{\dashv_A}(y)\beta_1(a))\alpha_1\alpha_2(x) \\ \quad -(r_{\vdash_B}(\beta_1(a))\alpha_2(x))\dashv_A \alpha_2(y) -l_{\dashv_B}(l_{\vdash_A}(\alpha_2(x))\beta_1(a))\alpha_2(y), \end{array} \\ \label{bieq42B} &\begin{array}{l} \alpha_1\alpha_2(x)\dashv_A(l_{\dashv_B}(\beta_1(a))y)+r_{\dashv_B}(r_{\dashv_A}(y)\beta_1(a))\alpha_1\alpha_2(x) \\ \quad -(r_{\dashv_B}(\beta_1(a))\alpha_2(x))\dashv_A\alpha_2(y)=\\ l_{\vdash_B}(\beta_1\beta_2(a))(\alpha_2(x)\dashv_A y)-(l_{\vdash_B}(\beta_2(a))\alpha_2(x))\dashv_A\alpha_2(y) \\ \quad -l_{\dashv_B}(l_{\vdash_A}(\beta_2(a))\alpha_2(x))\alpha_2(y), \end{array}\\ \label{bieq43B} &\begin{array}{l} \alpha_1\alpha_2(x)\dashv_A(r_{\dashv_B}(a)\alpha_2(y)) +r_{\dashv_B}(l_{\dashv_A}(\alpha_2(y))a)\alpha_1\alpha_2(x) \\ \quad -r_{\dashv_B}(\beta_2(a))(\alpha_2(x)\dashv_A\alpha_1(y))=\\ \alpha_1\alpha_2(y)\vdash_A(r_{\dashv_B}(a)\alpha_1(x))+ r_{\vdash_B}(l_{\dashv_A}(\alpha_1(x))a)\alpha_1\alpha_2(y) \\ \quad -r_{\dashv_B}(\beta_2(a))(\alpha_2(y)\vdash_A\alpha_1(x)), \end{array}\\ \label{bieq44B} & \begin{array}{l} l_{\vdash_B}(\beta_1\beta_2(a))(\alpha_1(x)\vdash_A y) -l_{\vdash_B}(r_{\vdash_A}(\alpha_1(x))\beta_2(a))\alpha_2(y) \\ \quad -(l_{\vdash_B}(\beta_2(a))\alpha_1(x))\vdash_A\alpha_2(y)= \\ \alpha_1\alpha_2(x)\vdash_A(l_{\vdash_B}(\beta_1(a))y)+
r_{\vdash_B}(r_{\vdash_A}(y)\beta_1(a))\alpha_1\alpha_2(x) \\ \quad -(r_{\vdash_B}(\beta_1(a))\alpha_2(x))\vdash_A \alpha_2(y) -l_{\vdash_B}(l_{\vdash_A}(\alpha_2(x))\beta_1(a))\alpha_2(y), \end{array} \\ \label{bieq45B} & \begin{array}{l} \alpha_1\alpha_2(x)\vdash_A(l_{\vdash_B}(\beta_1(a))y)+ r_{\vdash_B}(r_{\vdash_A}(y)\beta_1(a))\alpha_1\alpha_2(x)\\ \quad -(r_{\vdash_B}(\beta_1(a))\alpha_2(x))\vdash_A\alpha_2(y)= \\ l_{\vdash_B}(\beta_1\beta_2(a))(\alpha_2(x)\vdash_A y)-(l_{\vdash_B}(\beta_2(a))\alpha_2(x))\vdash_A\alpha_2(y)\\ \quad -l_{\vdash_B}(l_{\vdash_A}(\beta_2(a))\alpha_2(x))\alpha_2(y), \end{array} \\ \label{bieq46B} &\begin{array}{l} \alpha_1\alpha_2(x)\vdash_A(r_{\vdash_B}(a)\alpha_2(y))+r_{\vdash_B}(l_{\vdash_A}(\alpha_2(y))a) \alpha_1\alpha_2(x) \\ \quad -r_{\vdash_B}(\beta_2(a))(\alpha_2(x)\vdash_A\alpha_1(y))= \\ \alpha_1\alpha_2(y)\vdash_A(r_{\vdash_B}(a)\alpha_1(x))+ r_{\vdash_B}(l_{\vdash_A}(\alpha_1(x))a)\alpha_1\alpha_2(y) \\ \quad -r_{\vdash_B}(\beta_2(a))(\alpha_2(y)\vdash_A\alpha_1(x)). \end{array} \end{align} Then $(A,B,l_{\dashv_A},r_{\dashv_A},l_{\vdash_A},r_{\vdash_A},\beta_1,\beta_2,l_{\dashv_B},r_{\dashv_B},l_{\vdash_B},r_{\vdash_B},\alpha_1,\alpha_2)$ is called a matched pair of BiHom-left-symmetric dialgebras. In this case, there exists a BiHom-left-symmetric dialgebra structure on the direct sum $A\oplus B$ of the underlying vector spaces of $A$ and $B$ given by \begin{align*} (x + a) \dashv(y + b)&:=x \dashv_A y + (l_{\dashv_A}(x)b + r_{\dashv_A}(y)a)+a \dashv_B b + (l_{\dashv_B}(a)y + r_{\dashv_B}(b)x),\cr (x + a) \vdash (y + b)&:=x \vdash_A y + (l_{\vdash_A}(x)b + r_{\vdash_A}(y)a)+a \vdash_B b + (l_{\vdash_B}(a)y + r_{\vdash_B}(b)x),\cr (\alpha_{1}\oplus\beta_{1})(x + a)&:=\alpha_{1}(x) + \beta_{1}(a),\cr (\alpha_{2}\oplus\beta_{2})(x + a)&:=\alpha_{2}(x) + \beta_{2}(a). \end{align*} \end{thm} We denote this BiHom-left-symmetric dialgebra by $A\bowtie^{l_{\dashv_A},r_{\dashv_A},l_{\vdash_A},r_{\vdash_A},\beta_1,\beta_2}_{l_{\dashv_B},r_{\dashv_B},l_{\vdash_B},r_{\vdash_B},\alpha_1,\alpha_2}B$.\\ \subsection{BiHom-associative dialgebra} \begin{defn}\label{dia} A BiHom-associative dialgebra is a quintuple $(D, \dashv, \vdash, \alpha, \beta)$ consisting of a vector space $D$ on which the operations $\dashv, \vdash: D\otimes D\rightarrow D$ and $\alpha, \beta: D\rightarrow D$ are linear maps satisfying, for $x, y, z\in D$, \begin{eqnarray} \alpha\circ\beta &=&\beta\circ\alpha,\label{dia0}\\ \alpha(x\dashv y)&=&\alpha(x)\dashv\alpha(y), \alpha(x\vdash y)=\alpha(x)\vdash\alpha(y),\label{dia0.0}\\ \beta(x\dashv y)&=&\beta(x)\dashv\beta(y), \beta(x\vdash y)=\beta(x)\vdash\beta(y),\label{dia0.00}\\ (x\vdash y)\dashv\beta(z)&=&\alpha(x)\vdash(y\dashv z), \label{dia1}\\ \alpha(x)\dashv (y\dashv z)&=&(x\dashv y)\dashv\beta(z),\label{dia2}\\ (x\dashv y)\dashv\beta(z)&=&\alpha(x)\dashv(y\vdash z),\label{dia3}\\ (x\vdash y)\vdash\beta(z)&=&\alpha(x)\vdash(y\vdash z),\label{dia4}\\ \alpha(x)\vdash(y\vdash z)&=&(x\dashv y)\vdash\beta(z).\label{dia5} \end{eqnarray} \end{defn} \begin{rmk}\label{bk4} The following connections between the considered structures hold. \begin{enumerate} \item If $(D, \dashv, \vdash, \alpha,\beta)$ is a BiHom-associative dialgebra and $\dashv=\vdash=:\mu$, then $(D, \mu,\alpha,\beta)$ is a BiHom-associative algebra. Any BiHom-associative algebra $(A,\mu,\alpha,\beta)$ is a BiHom-associative dialgebra with $\dashv:=\mu=:\vdash$. \item A BiHom-associative dialgebra is a BiHom-X algebra.
\end{enumerate} \end{rmk} \begin{prop} All BiHom-associative dialgebras are BiHom-left symmetric dialgebras. \end{prop} \begin{proof} Let $(D,\dashv,\vdash, \alpha,\beta)$ be a BiHom-associative dialgebra; then \eqref{als1} and \eqref{als2} are satisfied. Since both products $\dashv$ and $\vdash$ are BiHom-associative and satisfy the condition \eqref{dia1}, the equalities \eqref{als3} and \eqref{als4} follow. \end{proof} \begin{rmk} Any BiHom-left symmetric algebra is a BiHom-left symmetric dialgebra in which $\dashv=\vdash$. A nonassociative BiHom-left symmetric algebra is not a BiHom-left symmetric dialgebra. \end{rmk} \begin{prop}\label{iibb} A BiHom-left-symmetric dialgebra $S$ is a BiHom-associative dialgebra if and only if both products of $S$ are BiHom-associative. \end{prop} \begin{proof} If a BiHom-left-symmetric dialgebra $S$ is a BiHom-associative dialgebra, then both products $\dashv$ and $\vdash$ defined over $S$ are BiHom-associative according to Definition \ref{dia}. Conversely, if each product of a BiHom-left-symmetric dialgebra is BiHom-associative, then by Definition \ref{gls}, $S$ is a BiHom-associative dialgebra. \end{proof} \begin{defn} An averaging operator over a BiHom-associative algebra $(A, \mu, \alpha,\beta)$ is a linear map $\gamma : A\rightarrow A$ such that $\alpha\circ\gamma=\gamma\circ\alpha$ and $\beta\circ\gamma=\gamma\circ\beta$, and for all $x, y\in A$, \begin{equation} \gamma(\mu(\gamma(x), y))=\mu(\gamma(x), \gamma(y))=\gamma(\mu(x, \gamma(y))). \label{avo1} \end{equation} \end{defn} \begin{thm}\label{bk2} Let $(A,\cdot)$ be an associative algebra and $\alpha,\beta : A\rightarrow A$ two averaging operators such that $(A, \cdot,\alpha,\beta)$ is a BiHom-associative algebra. For any $x, y\in A$, define new operations on $A$ by $$x\vdash y:=\alpha(x)\cdot \beta(y)\quad\mbox{and}\quad x\dashv y:=\beta(x)\cdot\alpha(y).$$ Then $(A, \dashv, \vdash, \alpha,\beta)$ is a BiHom-associative dialgebra. \end{thm} \begin{proof} We prove only one axiom, as others are proved similarly. For any $x, y, z\in A$, \begin{align*} \alpha(x)\dashv(y\dashv z)&-(x\dashv y)\dashv\beta(z) =\alpha\beta(x)\cdot\alpha(\beta(y)\cdot\alpha(z))-\beta(\beta(x)\cdot\alpha(y))\cdot\alpha\beta(z)\nonumber\\ &= \alpha\beta(x)\cdot(\alpha\beta(y)\cdot\alpha(z))-(\beta(x)\cdot\alpha\beta(y))\cdot\alpha\beta(z) \quad\quad\;(\mbox{by}\;\eqref{avo1})\nonumber\\ &= \alpha\beta(x)\cdot(\alpha\beta(y)\cdot\alpha(z))-\alpha\beta(x)\cdot(\alpha\beta(y)\cdot\alpha(z))=0. \;\;(\mbox{by}\;\eqref{aca})\nonumber \end{align*} This proves the second axiom in Definition \ref{dia}. \end{proof}
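Theorem \ref{bk2} is a BiHom analogue of the classical observation that an averaging operator splits an associative product into dialgebra products. As a minimal illustration (a sketch in the untwisted case, which is not part of the theorem above: here $\gamma$ denotes a single averaging operator on an associative algebra $(A,\cdot)$ and the twist maps are the identity), the operations $x\dashv y:=x\cdot\gamma(y)$ and $x\vdash y:=\gamma(x)\cdot y$ satisfy the untwisted analogue of \eqref{dia2}: \begin{align*} x\dashv(y\dashv z)=x\cdot\gamma(y\cdot\gamma(z))=x\cdot(\gamma(y)\cdot\gamma(z))=(x\cdot\gamma(y))\cdot\gamma(z)=(x\dashv y)\dashv z, \end{align*} where the second equality uses \eqref{avo1} and the third uses the associativity of $\cdot$; the remaining dialgebra axioms are checked in the same way.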
\begin{defn} Let $(D, \dashv, \vdash, \alpha_{1}, \alpha_{2})$ be a BiHom-associative dialgebra, and $V$ be a vector space. Let $l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash} : D \rightarrow gl(V),$ and $\beta_{1}, \beta_{2}: V \rightarrow V$ be six linear maps. Then, $(l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash}, \beta_{1}, \beta_{2}, V)$ is called a bimodule of $D$ if for any $x, y \in D $ and $v\in V$: $$\begin{array}{lllllllllll} l_{\dashv}(x\vdash y)\beta_{2}(v)&=&l_{\vdash}(\alpha_{1}(x))l_{\dashv}(y)v,&r_{\dashv}(\alpha_{2}(x))l_{\vdash}(y)v&=&l_{\vdash}(\alpha_{1}(y))r_{\dashv}(x)v,\\r_{\dashv}(\alpha_{2}(x)) r_{\vdash}(y)v&=&r_{\vdash}(y\dashv x)\beta_{1}(v),& l_{\dashv}(x\dashv y)\beta_{2}(v)&=&l_{\dashv}(\alpha_{1}(x))l_{\dashv}(y)v,\\ r_{\dashv}(\alpha_{2}(x))l_{\dashv}(y)v&=&l_{\dashv}(\alpha_{1}(y))r_{\dashv}(x)v,&r_{\dashv}(\alpha_{2}(x)) r_{\dashv}(y)v&=&r_{\dashv}(y\dashv x)\beta_{1}(v),\\ l_{\dashv}(x\dashv y)\beta_{2}(v)&=&l_{\dashv}(\alpha_{1}(x))l_{\vdash}(y)v,&r_{\vdash}(\alpha_{2}(x))l_{\vdash}(y)v&=&l_{\vdash}(\alpha_{1}(y))r_{\vdash}(x)v,\\r_{\vdash}(\alpha_{2}(x)) r_{\vdash}(y)v&=&r_{\vdash}(y\vdash x)\beta_{1}(v),& l_{\vdash}(x\vdash y)\beta_{2}(v)&=&l_{\vdash}(\alpha_{1}(x))l_{\vdash}(y)v,\\ r_{\dashv}(\alpha_{2}(x))l_{\vdash}(y)v&=&l_{\vdash}(\alpha_{1}(y))r_{\dashv}(x)v, &r_{\dashv}(\alpha_{2}(x)) r_{\vdash}(y)v&=&r_{\vdash}(y\dashv x)\beta_{1}(v),\\ l_{\dashv}(x\dashv y)\beta_{2}(v)&=&l_{\vdash}(\alpha_{1}(x))l_{\dashv}(y)v, &r_{\vdash}(\alpha_{2}(x))l_{\dashv}(y)v&=&l_{\vdash}(\alpha_{1}(y))r_{\vdash}(x)v,\\r_{\vdash}(\alpha_{2}(x)) r_{\dashv}(y)v&=&r_{\vdash}(y\vdash x)\beta_{1}(v), & \beta_{1}(l_{\vdash}(x)v)&=& l_{\vdash}(\alpha_{1}(x))\beta_{1}(v),\\ \beta_{1}(r_{\vdash}(x)v)&=& r_{\vdash}(\alpha_{1}(x))\beta_{1}(v),& \beta_{2}(l_{\vdash}(x)v) &=& l_{\vdash}(\alpha_{2}(x))\beta_{2}(v),\cr\beta_{2}(r_{\vdash}(x)v)&=& r_{\vdash}(\alpha_{2}(x))\beta_{2}(v),& \beta_{1}(l_{\dashv}(x)v)&=& l_{\dashv}(\alpha_{1}(x))\beta_{1}(v),\cr \beta_{1}(r_{\dashv}(x)v)&=& r_{\dashv}(\alpha_{1}(x))\beta_{1}(v),& \beta_{2}(l_{\dashv}(x)v) &=& l_{\dashv}(\alpha_{2}(x))\beta_{2}(v),\\\beta_{2}(r_{\dashv}(x)v)&=&r_{\dashv}(\alpha_{2}(x))\beta_{2}(v). \end{array}$$ \end{defn} \begin{prop} Let $(l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash}, \beta_{1}, \beta_{2}, V)$ be a bimodule of a BiHom-associative dialgebra $(D,\dashv, \vdash, \alpha_{1}, \alpha_{2}).$ Then, there exists a BiHom-associative dialgebra structure on the direct sum $D\oplus V $ of the underlying vector spaces of $ D $ and $V$ given for all $ x, y \in D, u, v \in V $ by \begin{eqnarray*} (x + u) \dashv' (y + v) &:=& x \dashv y + l_{\dashv}(x)v + r_{\dashv}(y)u, \cr (x + u) \vdash' (y + v) &:=& x \vdash y + l_{\vdash}(x)v + r_{\vdash}(y)u, \cr (\alpha_{1}+\beta_{1})(x+u)&:=&\alpha_1(x)+\beta_1(u), \cr (\alpha_{2}+\beta_{2})(x+u)&:=&\alpha_2(x)+\beta_2(u). \end{eqnarray*} We denote such a BiHom-associative dialgebra by $D \times_{l_{\dashv},r_{\dashv}, l_{\vdash}, r_{\vdash}, \alpha_{1}, \alpha_{2}, \beta_{1}, \beta_{2}} V$. \end{prop} \begin{proof} We prove only one axiom, as others are proved similarly. For any $x_{1},x_{2},x_{3}\in D$ and $v_1, v_2, v_3\in V$, \begin{align*} &((x_1+v_1)\vdash'(x_2+v_2))\dashv'(\alpha_2+\beta_{2})(x_3+v_3)\\ &\quad=(x_1\vdash x_2+l_{\vdash}(x_1)v_2+r_{\vdash}(x_2)v_1)\dashv'(\alpha_2(x_3)+\beta_2(v_3))\\ &\quad=(x_1\vdash x_2)\dashv\alpha_2(x_3)+l_{\dashv}(x_1\vdash x_2)\beta_2(v_3)\\ &\quad \quad +r_\dashv(\alpha_2(x_3))l_\vdash(x_1)v_2+r_{\dashv}(\alpha_{2}(x_3))r_{\vdash}(x_2)v_1.
\\ &(\alpha_1+\beta_1)(x_1+v_1)\vdash'((x_{2}+v_{2})\dashv'(x_3+v_3))\\ &\quad =(\alpha_1(x_1)+\beta_1(v_1))\vdash'(x_2\dashv x_3+l_{\dashv}(x_{2})v_3+r_{\dashv}(x_3)v_2)\\ &\quad =\alpha_1(x_1)\vdash(x_2\dashv x_3)+l_{\vdash}(\alpha_1(x_1))l_{\dashv}(x_2)v_3\\ &\quad \quad +l_{\vdash}(\alpha_1(x_1))r_{\dashv}(x_{3})v_2+r_\vdash(x_2\dashv x_3)\beta_1(v_1), \end{align*} which implies that \begin{align*} ((x_1+v_1)\vdash'(x_2+v_2))\dashv' & (\alpha_2+\beta_{2})(x_3+v_3)= \\ &(\alpha_1+\beta_1)(x_1+v_1)\vdash'((x_{2}+v_{2})\dashv'(x_3+v_3)). \qedhere \end{align*} \end{proof} \begin{exes} Some examples can be obtained as follows. \\ 1) Let $(D,\dashv, \vdash,\alpha,\beta)$ be a BiHom-associative dialgebra. Then $(L_{\dashv},R_{\dashv},L_{\vdash},R_{\vdash},\alpha,\beta,D)$ is a bimodule of $D$, where $L_{\dashv}(a)b=a\dashv b,~R_{\dashv}(a)b=b\dashv a,~L_{\vdash}(a)b=a\vdash b$ and $R_{\vdash}(a)b=b\vdash a$ for all $(a,b)\in D^{2}$. More generally, if $B$ is a two-sided BiHom-ideal of $(D,\dashv, \vdash,\alpha,\beta)$, then $(L_{\dashv},R_{\dashv},L_{\vdash},R_{\vdash},\alpha,\beta,B)$ is a bimodule of $D$, where the structure maps are $L_{\dashv}(a)x=a\dashv x=x\dashv a=R_{\dashv}(a)x$ and $L_{\vdash}(a)x=a\vdash x=x\vdash a=R_{\vdash}(a)x$ for all $x\in B$ and $(a,b)\in D^{2}$. \\ 2) If $(D,\dashv, \vdash)$ is an associative dialgebra and $(l_{\dashv},r_{\dashv},l_{\vdash},r_{\vdash},V)$ is a bimodule of $D$ in the usual sense, then $(l_{\dashv},r_{\dashv},l_{\vdash},r_{\vdash},Id_V,Id_V,V)$ is a bimodule of $\mathbb{D}$ where $\mathbb{D}=(D,\dashv, \vdash,Id_{D}, Id_{D})$ is a BiHom-associative dialgebra. \end{exes} \begin{prop} If $f:(D_1,\dashv_1, \vdash_1,\alpha_1,\alpha_2)\longrightarrow(D_2,\dashv_2, \vdash_2,\beta_1,\beta_{2})$ is a morphism of BiHom-associative dialgebras, then $(l_{\dashv_1},r_{\dashv_1},l_{\vdash_1},r_{\vdash_1},\beta_1,\beta_{2},D_2)$ becomes a bimodule of $D_1$ via $f$, that is, the structure maps are defined as $l_{\dashv_1}(a)b=f(a)\dashv_2 b,~r_{\dashv_1}(a)b=b \dashv_2 f(a),~l_{\vdash_1}(a)b=f(a)\vdash_2 b$ and $r_{\vdash_1}(a)b=b \vdash_2 f(a)$ for all $(a,b)\in D_1\times D_2$. \end{prop} \begin{proof} It is obtained in a similar way as for Proposition \ref{viaf}. \end{proof} \begin{thm} Let $(A, \dashv_{A}, \vdash_{A}, \alpha_{1}, \alpha_{2})$ and $(B, \dashv_{B}, \vdash_{B}, \beta_{1}, \beta_{2})$ be BiHom-associative dialgebras.
Suppose that there are linear maps $ l_{\dashv_{A}}, r_{\dashv_{A}}, l_{\vdash_{A}}, r_{\vdash_{A}} : A \rightarrow gl(B),$ and $ l_{\dashv_{B}}, r_{\dashv_{B}}, l_{\vdash_{B}}, r_{\vdash_{B}} : B \rightarrow gl(A)$ such that \begin{align*} & ( l_{\dashv_{A}}, r_{\dashv_{A}}, l_{\vdash_{A}}, r_{\vdash_{A}}, \beta_{1}, \beta_{2}, B) \ \mbox{is a bimodule of} \ A, \\ & (l_{\dashv_{B}}, r_{\dashv_{B}}, l_{\vdash_{B}}, r_{\vdash_{B}}, \alpha_{1}, \alpha_{2}, A) \ \mbox{is a bimodule of} \ B, \end{align*} and for any $ x, y \in A, ~a, b \in B $, \begin{eqnarray} \label{bieq101} r_{\dashv_{A}}(\alpha_{2}(x))(a \vdash_{B} b) = r_{\vdash_{A}}(l_{\dashv_{B}}(b)x)\beta_{1}(a) + \beta_{1}(a)\vdash_{B} (r_{\dashv_{A}}(x)b), \\ \label{bieq102} \begin{array}{ll} l_{\dashv_{A}}(l_{\vdash_{B}}(a)x)\beta_{2}(b) & + (r_{\vdash_{A}}(x)a) \dashv_{B}\beta_{2}(b)=\cr & \beta_{1}(a)\vdash_{B} (l_{\dashv_{A}}(x)b) + r_{\vdash_{A}}(r_{\dashv_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq103} l_{\dashv_{A}}(\alpha_{1}(x))(a \dashv_{B} b) = ( l_{\vdash_{A}}(x)a) \dashv_{B}\beta_{2}(b) + l_{\dashv_{A}}(r_{\vdash_{B}}(a)x)\beta_{2}(b), \\ \label{bieq104} r_{\dashv_{A}}(\alpha_{2}(x))(a \dashv_{B} b) = r_{\dashv_{A}}(l_{\dashv_{B}}(b)x)\beta_{1}(a) + \beta_{1}(a)\dashv_{B} (r_{\dashv_{A}}(x)b), \\ \label{bieq105} \begin{array}{ll} l_{\dashv_{A}}(l_{\dashv_{B}}(a)x)\beta_{2}(b) & + (r_{\dashv_{A}}(x)a) \dashv_{B}\beta_{2}(b)=\cr & \beta_{1}(a)\dashv_{B} (l_{\dashv_{A}}(x)b) + r_{\dashv_{A}}(r_{\dashv_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq106} l_{\dashv_{A}}(\alpha_{1}(x))(a \dashv_{B} b) = ( l_{\dashv_{A}}(x)a) \dashv_{B}\beta_{2}(b) + l_{\dashv_{A}}(r_{\dashv_{B}}(a)x)\beta_{2}(b), \\ \label{bieq107} r_{\dashv_{A}}(\alpha_{2}(x))(a \dashv_{B} b) = r_{\dashv_{A}}(l_{\vdash_{B}}(b)x)\beta_{1}(a) + \beta_{1}(a)\dashv_{B} (r_{\vdash_{A}}(x)b), \\ \label{bieq108} \begin{array}{ll} l_{\dashv_{A}}(l_{\dashv_{B}}(a)x)\beta_{2}(b) & + (r_{\dashv_{A}}(x)a) \dashv_{B}\beta_{2}(b)=\cr & \beta_{1}(a)\dashv_{B} (l_{\vdash_{A}}(x)b) + r_{\dashv_{A}}(r_{\vdash_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq109} l_{\dashv_{A}}(\alpha_{1}(x))(a \vdash_{B} b) = ( l_{\dashv_{A}}(x)a) \dashv_{B}\beta_{2}(b) + l_{\dashv_{A}}(r_{\dashv_{B}}(a)x)\beta_{2}(b), \\ \label{bieq110} r_{\vdash_{A}}(\alpha_{2}(x))(a \vdash_{B} b) = r_{\vdash_{A}}(l_{\vdash_{B}}(b)x)\beta_{1}(a) + \beta_{1}(a)\vdash_{B} (r_{\vdash_{A}}(x)b), \\ \label{bieq111} \begin{array}{ll} l_{\vdash_{A}}(l_{\vdash_{B}}(a)x)\beta_{2}(b) & + (r_{\vdash_{A}}(x)a) \vdash_{B}\beta_{2}(b)=\cr & \beta_{1}(a)\vdash_{B} (l_{\vdash_{A}}(x)b) + r_{\vdash_{A}}(r_{\vdash_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq112} l_{\vdash_{A}}(\alpha_{1}(x))(a \vdash_{B} b) = ( l_{\vdash_{A}}(x)a) \vdash_{B}\beta_{2}(b) + l_{\vdash_{A}}(r_{\vdash_{B}}(a)x)\beta_{2}(b), \\ \label{bieq113} r_{\vdash_{A}}(\alpha_{2}(x))(a \dashv_{B} b) = r_{\vdash_{A}}(l_{\vdash_{B}}(b)x)\beta_{1}(a) + \beta_{1}(a)\vdash_{B} (r_{\vdash_{A}}(x)b), \\ \label{bieq114} \begin{array}{ll} l_{\vdash_{A}}(l_{\dashv_{B}}(a)x)\beta_{2}(b) & + (r_{\dashv_{A}}(x)a) \vdash_{B}\beta_{2}(b)=\cr & \beta_{1}(a)\vdash_{B} (l_{\vdash_{A}}(x)b) + r_{\vdash_{A}}(r_{\vdash_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq115} l_{\vdash_{A}}(\alpha_{1}(x))(a \vdash_{B} b) = ( l_{\dashv_{A}}(x)a) \vdash_{B}\beta_{2}(b) + l_{\vdash_{A}}(r_{\dashv_{B}}(a)x)\beta_{2}(b), \\ \label{bieq116} r_{\dashv_{B}}(\beta_{2}(a))(x \vdash_{A} y) = r_{\vdash_{B}}(l_{\dashv_{A}}(y)a)\alpha_{1}(x) + \alpha_{1}(x)\vdash_{A}
(r_{\dashv_{B}}(a)y), \\ \label{bieq117} \begin{array}{ll} l_{\dashv_{B}}(l_{\vdash_{A}}(x)a)\alpha_{2}(y) & + (r_{\vdash_{B}}(a)x) \dashv_{A}\alpha_{2}(y)=\cr & \alpha_{1}(x)\vdash_{A} (l_{\dashv_{B}}(a)y) + r_{\vdash_{B}}(r_{\dashv_{A}}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq118} l_{\dashv_{B}}(\beta_{1}(a))(x \dashv_{A} y) = ( l_{\vdash_{B}}(a)x) \dashv_{A}\alpha_{2}(y) + l_{\dashv_{B}}(r_{\vdash_{A}}(x)a)\alpha_{2}(y), \\ \label{bieq119} r_{\dashv_{B}}(\beta_{2}(a))(x \dashv_{A} y) = r_{\dashv_{B}}(l_{\dashv_{A}}(y)a)\alpha_{1}(x) + \alpha_{1}(x)\dashv_{A} (r_{\dashv_{B}}(a)y), \\ \label{bieq120} \begin{array}{ll} l_{\dashv_{B}}(l_{\dashv_{A}}(x)a)\alpha_{2}(y) & + (r_{\dashv_{B}}(a)x) \dashv_{A}\alpha_{2}(y)=\cr & \alpha_{1}(x)\dashv_{A} (l_{\dashv_{B}}(a)y) + r_{\dashv_{B}}(r_{\dashv_{A}}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq121} l_{\dashv_{B}}(\beta_{1}(a))(x \dashv_{A} y) = ( l_{\dashv_{B}}(a)x) \dashv_{A}\alpha_{2}(y) + l_{\dashv_{B}}(r_{\dashv_{A}}(x)a)\alpha_{2}(y), \\ \label{bieq122} r_{\dashv_{B}}(\beta_{2}(a))(x \dashv_{A} y) = r_{\dashv_{B}}(l_{\vdash_{A}}(y)a)\alpha_{1}(x) + \alpha_{1}(x)\dashv_{A} (r_{\vdash_{B}}(a)y), \\ \label{bieq123} \begin{array}{ll} l_{\dashv_{B}}(l_{\dashv_{A}}(x)a)\alpha_{2}(y) & + (r_{\dashv_{B}}(a)x) \dashv_{A}\alpha_{2}(y)=\cr & \alpha_{1}(x)\dashv_{A} (l_{\vdash_{B}}(a)y) + r_{\dashv_{B}}(r_{\vdash_{A}}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq124} l_{\dashv_{B}}(\beta_{1}(a))(x \vdash_{A} y) = ( l_{\dashv_{B}}(a)x) \dashv_{A}\alpha_{2}(y) + l_{\dashv_{B}}(r_{\dashv_{A}}(x)a)\alpha_{2}(y), \\ \label{bieq125} r_{\vdash_{B}}(\beta_{2}(a))(x \vdash_{A} y) = r_{\vdash_{B}}(l_{\vdash_{A}}(y)a)\alpha_{1}(x) + \alpha_{1}(x)\vdash_{A} (r_{\vdash_{B}}(a)y), \\ \label{bieq126} \begin{array}{ll} l_{\vdash_{B}}(l_{\vdash_{A}}(x)a)\alpha_{2}(y) & + (r_{\vdash_{B}}(a)x) \vdash_{A}\alpha_{2}(y)=\cr & \alpha_{1}(x)\vdash_{A} (l_{\vdash_{B}}(a)y) + r_{\vdash_{B}}(r_{\vdash_{A}}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq127} l_{\vdash_{B}}(\beta_{1}(a))(x \vdash_{A} y) = ( l_{\vdash_{B}}(a)x) \vdash_{A}\alpha_{2}(y) + l_{\vdash_{B}}(r_{\vdash_{A}}(x)a)\alpha_{2}(y), \\ \label{bieq128} r_{\vdash_{B}}(\beta_{2}(a))(x \dashv_{A} y) = r_{\vdash_{B}}(l_{\vdash_{A}}(y)a)\alpha_{1}(x) + \alpha_{1}(x)\vdash_{A} (r_{\vdash_{B}}(a)y), \\ \label{bieq129} \begin{array}{ll} l_{\vdash_{B}}(l_{\dashv_{A}}(x)a)\alpha_{2}(y) & + (r_{\dashv_{B}}(a)x) \vdash_{A}\alpha_{2}(y)=\cr & \alpha_{1}(x)\vdash_{A} (l_{\vdash_{B}}(a)y) + r_{\vdash_{B}}(r_{\vdash_{A}}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq130} l_{\vdash_{B}}(\beta_{1}(a))(x \vdash_{A} y) = ( l_{\dashv_{B}}(a)x) \vdash_{A}\alpha_{2}(y) + l_{\vdash_{B}}(r_{\dashv_{A}}(x)a)\alpha_{2}(y). \end{eqnarray} Then, there is a BiHom-associative dialgebra structure on the direct sum $ A \oplus B $ of the underlying vector spaces of $ A $ and $ B $ given by \begin{eqnarray*} (x + a) \dashv ( y + b ) &:=& (x \dashv_{A} y + r_{\dashv_{B}}(b)x + l_{\dashv_{B}}(a)y)\cr &&\quad +(l_{\dashv_{A}}(x)b + r_{\dashv_{A}}(y)a + a \dashv_{B} b ), \cr (x + a) \vdash ( y + b ) &:=& (x \vdash_{A} y + r_{\vdash_{B}}(b)x + l_{\vdash_{B}}(a)y)\cr &&\quad + (l_{\vdash_{A}}(x)b + r_{\vdash_{A}}(y)a + a \vdash_{B} b ),\cr (\alpha_1+\beta_1)(x+a)&:=&\alpha_1(x)+\beta_1(a),\cr (\alpha_2+\beta_2)(x+a)&:=&\alpha_2(x)+\beta_2(a). \end{eqnarray*} \end{thm} \begin{proof} The proof is obtained in a similar way to Theorem \ref{matched ass}.
\end{proof} Let $ A \bowtie^{l_{\dashv_{A}}, r_{\dashv_{A}}, l_{\vdash_{A}}, r_{\vdash_{A}}, \beta_{1}, \beta_{2}}_{l_{\dashv_{B}}, r_{\dashv_{B}}, l_{\vdash_{B}}, r_{\vdash_{B}}, \alpha_{1}, \alpha_{2}} B $ denote this BiHom-associative dialgebra. \section{Bimodules and matched pairs of BiHom-tridendriform algebras} \label{sec:homtridendriformcoloralgebras} In this section, we recall the definitions of BiHom-dendriform and BiHom-tridendriform algebras given in \cite{LiuMakhMenPan:Rota-BaxteropsBiHomassalg}. Next, we study the concepts of bimodules and matched pairs of BiHom-tridendriform algebras and give some related properties. \begin{defn} A BiHom-dendriform algebra is a quintuple $(A, \dashv, \vdash, \alpha, \beta)$ consisting of a vector space $A$ on which the operations $\dashv, \vdash: A\otimes A\rightarrow A$ and $\alpha, \beta: A\rightarrow A$ are linear maps satisfying \begin{eqnarray} &&\alpha\circ\beta=\beta\circ\alpha,\\ &&\alpha(x\dashv y)=\alpha(x)\dashv\alpha(y), \alpha(x\vdash y)=\alpha(x)\vdash\alpha(y),\\ &&\beta(x\dashv y)=\beta(x)\dashv\beta(y), \beta(x\vdash y)=\beta(x)\vdash\beta(y),\\ &&(x \dashv y)\dashv \beta(z) = \alpha(x)\dashv (y \dashv z + y \vdash z), \\ &&(x\vdash y)\dashv\beta(z)=\alpha(x)\vdash(y \dashv z), \\ &&\alpha(x)\vdash (y \vdash z ) = (x \dashv y + x \vdash y)\vdash\beta(z). \end{eqnarray} \end{defn} \begin{rmk} BiHom-dendriform algebras are BiHom-X algebras. \end{rmk} \begin{prop} If $(A, \dashv, \vdash, \alpha, \beta)$ is a BiHom-dendriform algebra, then $(A, \ast, \alpha, \beta)$ is a multiplicative BiHom-associative algebra, where $ x \ast y = x \dashv y + x \vdash y $. \end{prop} \begin{proof} For all $x, y, z\in A$, \begin{align*} (x\ast y)\ast\beta(z)&=(x\dashv y)\dashv\beta(z) + (x\dashv y)\vdash\beta(z) + (x\vdash y)\dashv\beta(z) + (x\vdash y)\vdash\beta(z)\cr &=(x\dashv y)\dashv\beta(z) + (x\vdash y)\dashv\beta(z) + (x\dashv y)\vdash\beta(z) + (x\vdash y)\vdash\beta(z)\cr &= (x\dashv y)\dashv\beta(z) + (x\vdash y)\dashv\beta(z) + (x\ast y)\vdash\beta(z)\cr &= \alpha(x)\dashv(y\ast z) + \alpha(x)\vdash(y\dashv z) + \alpha(x)\vdash(y\vdash z)\cr &= \alpha(x)\dashv(y\ast z) + \alpha(x)\vdash(y\ast z)= \alpha(x)\ast(y\ast z), \cr \alpha(x\ast y)&=\alpha(x\vdash y) + \alpha(x\dashv y)= \alpha(x)\vdash\alpha(y) + \alpha(x)\dashv\alpha(y)= \alpha(x)\ast\alpha(y),\\ \beta(x\ast y)&=\beta(x\vdash y) + \beta(x\dashv y)= \beta(x)\vdash\beta(y) + \beta(x)\dashv\beta(y)= \beta(x)\ast\beta(y). \qedhere \end{align*} \end{proof}
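As an illustration of the associated product $\ast$ (a sketch under the following assumptions, which are not stated in the cited sources in this form: $R$ is a Rota--Baxter operator of weight $0$ on a BiHom-associative algebra $(A,\cdot,\alpha,\beta)$ commuting with $\alpha$ and $\beta$, so that, by the $\lambda=0$ case of Theorem \ref{car1} below together with the remark following Definition \ref{LMMP}, the operations $x\dashv y:=x\cdot R(y)$ and $x\vdash y:=R(x)\cdot y$ define a BiHom-dendriform structure), one gets $$x\ast y=x\cdot R(y)+R(x)\cdot y, \qquad R(x\ast y)=R(x)\cdot R(y),$$ the second identity being exactly the weight-$0$ Rota--Baxter identity; thus $R$ becomes a morphism from $(A,\ast,\alpha,\beta)$ to $(A,\cdot,\alpha,\beta)$.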
\begin{defn}[\cite{HounkonnouHoundedjiSilvestrov:DoubleconstrbiHomFrobalg}] Let $(A, \dashv, \vdash, \alpha_{1}, \alpha_{2})$ be a BiHom-dendriform algebra, and $V$ be a vector space. Let $l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash} : A \rightarrow gl(V),$ and $\beta_{1}, \beta_{2}: V \rightarrow V$ be six linear maps. Then $(l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash}, \beta_{1}, \beta_{2}, V)$ is called a bimodule of $ A $ if for any $ x, y \in A, v\in V$ and $ x \ast y = x \dashv y + x \vdash y,$ $l_{\ast} = l_{\dashv} + l_{\vdash},$ $r_{\ast} = r_{\dashv} + r_{\vdash}, $ the following equalities hold: $$ \begin{array}{llllllllll} l_{\dashv}(x \dashv y)\beta_{2}(v)&=& l_{\dashv}(\alpha_{1}(x))l_{\ast}(y)v,& r_{\dashv}(\alpha_{2}(x))l_{\dashv}(y)v&=&l_{\dashv}(\alpha_{1}(y))r_{\ast}(x)v,\cr r_{\dashv}(\alpha_{2}(y))r_{\dashv}(x)v &=& r_{\dashv}(x\ast y)\beta_{1}(v),& l_{\dashv}(x \vdash y)\beta_{2}(v) &=& l_{\vdash}(\alpha_{1}(x))l_{\dashv}(y)v,\cr r_{\dashv}(\alpha_{2}(x))l_{\vdash}(y)v &=& l_{\vdash}(\alpha_{1}(y))r_{\dashv}(x)v,& r_{\dashv}(\alpha_{2}(x))r_{\vdash}(y)v &=& r_{\vdash}(y\dashv x)\beta_{1}(v),\cr l_{\vdash}(x\ast y)\beta_{2}(v) &=& l_{\vdash}(\alpha_{1}(x))l_{\vdash}(y)v,& r_{\vdash}(\alpha_{2}(x))l_{\ast}(y)v&=& l_{\vdash}(\alpha_{1}(y))r_{\vdash}(x)v,\cr r_{\vdash}(\alpha_{2}(x))r_{\ast}(y)v &=& r_{\vdash}(y \vdash x)\beta_{1}(v),& \beta_{1}(l_{\vdash}(x)v)&=& l_{\vdash}(\alpha_{1}(x))\beta_{1}(v),\\ \beta_{1}(r_{\vdash}(x)v)&=& r_{\vdash}(\alpha_{1}(x))\beta_{1}(v),& \beta_{2}(l_{\vdash}(x)v) &=& l_{\vdash}(\alpha_{2}(x))\beta_{2}(v),\cr\beta_{2}(r_{\vdash}(x)v)&=& r_{\vdash}(\alpha_{2}(x))\beta_{2}(v),& \beta_{1}(l_{\dashv}(x)v)&=& l_{\dashv}(\alpha_{1}(x))\beta_{1}(v),\cr \beta_{1}(r_{\dashv}(x)v)&=& r_{\dashv}(\alpha_{1}(x))\beta_{1}(v),& \beta_{2}(l_{\dashv}(x)v) &=& l_{\dashv}(\alpha_{2}(x))\beta_{2}(v),\\ \beta_{2}(r_{\dashv}(x)v)&=&r_{\dashv}(\alpha_{2}(x))\beta_{2}(v). \end{array}$$ \end{defn} \begin{prop}[\cite{HounkonnouHoundedjiSilvestrov:DoubleconstrbiHomFrobalg}] Let $(l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash}, \beta_{1}, \beta_{2}, V)$ be a bimodule of a BiHom-dendri\-form algebra $(A,\dashv, \vdash, \alpha_{1}, \alpha_{2}).$ Then, on the direct sum $A\oplus V $ of the underlying vector spaces of $ A $ and $V$, there exists a BiHom-dendriform algebra structure given for all $ x, y \in A, u, v \in V $ by \begin{align*} (x + u) \dashv' (y + v) &:= x \dashv y + l_{\dashv}(x)v + r_{\dashv}(y)u, \cr (x + u) \vdash' (y + v) &:= x \vdash y + l_{\vdash}(x)v + r_{\vdash}(y)u, \cr (\alpha_1+\beta_1)(x+u)&:=\alpha_1(x)+\beta_1(u), \cr (\alpha_2+\beta_2)(x+u)&:=\alpha_2(x)+\beta_2(u). \end{align*} We denote it by $ A \times_{l_{\dashv},r_{\dashv}, l_{\vdash}, r_{\vdash}, \alpha_{1}, \alpha_{2}, \beta_{1}, \beta_{2}} V$. \end{prop} \begin{thm}[\cite{HounkonnouHoundedjiSilvestrov:DoubleconstrbiHomFrobalg}] Let $(A,\dashv_A,\vdash_A,\alpha_1,\alpha_2)$ and $(B,\dashv_B,\vdash_{B},\beta_1,\beta_2)$ be two BiHom-dendriform algebras.
Suppose that there are linear maps $l_{\dashv_A},r_{\dashv_A},l_{\vdash_A},r_{\vdash_A}:A\rightarrow gl(B)$ and $l_{\dashv_B},r_{\dashv_B},l_{\vdash_B},r_{\vdash_B}:B\rightarrow gl(A)$ such that, for all $x,y\in A,~a,b\in B$ and with the notation \begin{align*}x\ast_A y=x\dashv_A y + x \vdash_A y,~~~l_{A} = l_{\dashv_A} + l_{\vdash_A},~~~r_{A} = r_{\dashv_A} + r_{\vdash_A},\\ a\ast_B b= a\dashv_B b + a \vdash_B b,~~~l_{B} = l_{\dashv_B} + l_{\vdash_B},~~~r_{B} = r_{\dashv_B} + r_{\vdash_B}, \end{align*} the following equalities hold: \begin{align} \label{bieq35Adend} &r_{\dashv_{A}}(\alpha_{2}(x))(a \dashv_{B} b) = \beta_{1}(a)\dashv_{B}( r_{A}(x)b) + r_{\dashv_{A}}(l_{B}(b)x)\beta_{1}(a), \\ \label{bieq36Adend} &\begin{array}{lll} l_{\dashv_{A}}(l_{\dashv_{B}}(a)x)\beta_{2}(b) &+& (r_{\dashv_{A}}(x)a) \dashv_{B}\beta_{2}(b)= \cr &&\beta_{1}(a) \dashv_{B} (l_{\dashv_{A}}(x)b) + r_{\dashv_{A}}(r_{\dashv_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq37Adend} &l_{\dashv_{A}}(\alpha_{1}(x))(a \ast_{B} b) = (l_{\dashv_{A}}(x)a) \ast_{B} \beta_{2}(b) + l_{\dashv_{A}}(r_{\dashv_{B}}(a)x)\beta_{2}(b), \\ \label{bieq38Adend} &r_{\dashv_{A}}(\alpha_{2}(x))(a \vdash_{B} b) = r_{\vdash_{A}}(l_{\dashv_{B}}(b)x)\beta_{1}(a) + \beta_{1}(a)\vdash_{B} (r_{\dashv_{A}}(x)b), \\ \label{bieq39Adend} &\begin{array}{lll} l_{\dashv_{A}}(l_{\vdash_{B}}(a)x)\beta_{2}(b) &+& (r_{\vdash_{A}}(x)a) \dashv_{B}\beta_{2}(b)=\cr && \beta_{1}(a)\vdash_{B} (l_{\dashv_{A}}(x)b) + r_{\vdash_{A}}(r_{\dashv_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq40Adend} &l_{\vdash_{A}}(\alpha_{1}(x))(a \dashv_{B} b) = ( l_{\vdash_{A}}(x)a) \dashv_{B}\beta_{2}(b) + l_{\dashv_{A}}(r_{\vdash_{B}}(a)x)\beta_{2}(b), \\ \label{bieq41Adend} &r_{\vdash_{A}}(\alpha_{2}(x))(a \ast_{B} b)= \beta_{1}(a)\vdash_{B} (r_{\vdash_{A}}(x)b) + r_{\vdash_{A}}(l_{\vdash_{B}}(b)x)\beta_{1}(a), \\ \label{bieq42Adend} &\begin{array}{lll} \beta_{1}(a)\vdash_{B} (l_{\vdash_{A}}(x)b) &+& r_{\vdash_{A}}(r_{\vdash_{B}}(b)x)\beta_{1}(a)=\cr &&l_{\vdash_{A}}(l_{B}(a)x)\beta_{2}(b) + (r_{A}(x)a) \vdash_{B}\beta_{2}(b), \end{array} \\ \label{bieq43Adend} &l_{\vdash_{A}}(\alpha_{1}(x))(a \vdash_{B} b) = (l_{A}(x)a) \vdash_{B}\beta_{2}(b) + l_{\vdash_{A}}(r_{B}(a)x)\beta_{2}(b), \\ \label{bieq44Adend} &r_{\dashv_{B}}(\beta_{2}(a))(x \dashv_{A} y) = \alpha_{1}(x)\dashv_{A} (r_{B}(a)y) + r_{\dashv_{B}}(l_{A}(y)a)\alpha_{1}(x), \\ \label{bieq45Adend} &\begin{array}{lll} l_{\dashv_{B}}(l_{\dashv_{A}}(x)a)\alpha_{2}(y) &+& (r_{\dashv_{B}}(a)x) \dashv_{A}\alpha_{2}(y)=\cr &&\alpha_{1}(x)\dashv_{A} (l_{B}(a)y) + r_{\dashv_{B}}(r_{A}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq46Adend} &l_{\dashv_{B}}(\beta_{1}(a))(x \ast_{A} y) = (l_{\dashv_{B}}(a)x) \dashv_{A}\alpha_{2}(y) + l_{\dashv_{B}}(r_{\dashv_{A}}(x)a)\alpha_{2}(y), \\ \label{bieq47Adend} &r_{\dashv_{B}}(\beta_{2}(a))(x \vdash_{A} y) = r_{\vdash_{B}}(l_{\dashv_{A}}(y)a)\alpha_{1}(x) + \alpha_{1}(x)\vdash_{A} (r_{\dashv_{B}}(a)y), \\ \label{bieq48Adend} &\begin{array}{lll} l_{\dashv_{B}}(l_{\vdash_{A}}(x)a)\alpha_{2}(y) &+& (r_{\vdash_{B}}(a)x) \dashv_{A}\alpha_{2}(y)=\cr &&\alpha_{1}(x)\vdash_{A} (l_{\dashv_{B}}(a)y) + r_{\vdash_{B}}(r_{\dashv_{A}}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq49Adend} &l_{\vdash_{B}}(\beta_{1}(a))(x \dashv_{A} y) = (l_{\vdash_{B}}(a)x) \dashv_{A}\alpha_{2}(y) + l_{\dashv_{B}}(r_{\vdash_{A}}(x)a)\alpha_{2}(y), \\ \label{bieq50Adend} &r_{\vdash_{B}}(\beta_{2}(a))(x \ast_{A} y)= \alpha_{1}(x)\vdash_{A} (r_{\vdash_{B}}(a)y) + r_{\vdash_{B}}(l_{\vdash_{A}}(y)a)\alpha_{1}(x), \\ \label{bieq51Adend}
&\begin{array}{lll} \alpha_{1}(x)\vdash_{A} (l_{\vdash_{B}}(a)y) &+& r_{\vdash_{B}}(r_{\vdash_{A}}(y)a)\alpha_{1}(x)=\cr && l_{\vdash_{B}}(l_{A}(x)a)\alpha_{2}(y) + (r_{B}(a)x) \vdash_{A}\alpha_{2}(y), \end{array} \\ \label{bieq52Adend} &l_{\vdash_{B}}(\beta_{1}(a))(x \vdash_{A} y) = (l_{B}(a)x) \vdash_{A}\alpha_{2}(y) + l_{\vdash_{B}}(r_{A}(x)a)\alpha_{2}(y). \end{align} Then $(A,B,l_{\dashv_A},r_{\dashv_A},l_{\vdash_A},r_{\vdash_A},\beta_1,\beta_2,l_{\dashv_B},r_{\dashv_B},l_{\vdash_B},r_{\vdash_B},\alpha_1,\alpha_2)$ is called a matched pair of BiHom-dendriform algebras. In this case, on the direct sum $A\oplus B$ of the underlying vector spaces of $A$ and $B$, there exists a BiHom-dendriform algebra structure given by \begin{align*} (x + a) \dashv(y + b)&:=x \dashv_A y + (l_{\dashv_A}(x)b + r_{\dashv_A}(y)a)+a \dashv_B b + (l_{\dashv_B}(a)y + r_{\dashv_B}(b)x),\cr (x + a) \vdash (y + b)&:=x \vdash_A y + (l_{\vdash_A}(x)b + r_{\vdash_A}(y)a)+a \vdash_B b + (l_{\vdash_B}(a)y + r_{\vdash_B}(b)x),\cr (\alpha_{1}\oplus\beta_{1})(x + a)&:=\alpha_{1}(x) + \beta_{1}(a),\cr (\alpha_{2}\oplus\beta_{2})(x + a)&:=\alpha_{2}(x) + \beta_{2}(a). \end{align*} \end{thm} We denote this BiHom-dendriform algebra by $A\bowtie^{l_A,r_A,\beta_1,\beta_2}_{l_B,r_B,\alpha_1,\alpha_2}B$. \begin{defn}\label{LMMP} A BiHom-tridendriform algebra is a sextuple $(T, \dashv, \vdash, \cdot,\alpha,\beta)$ consisting of a linear space $T$, three bilinear maps $\dashv, \vdash, \cdot : T\otimes T\rightarrow T$, and two linear maps $\alpha,\beta : T\rightarrow T$ satisfying for any $x, y, z\in T$, \begin{eqnarray} \alpha\circ\beta &=& \beta\circ\alpha,\label{t0}\\ \alpha(x\dashv y) &=& \alpha(x)\dashv\alpha(y), \alpha(x\vdash y)=\alpha(x)\vdash\alpha(y),\label{t0.0}\\ \beta(x\dashv y) &=& \beta(x)\dashv\beta(y), \beta(x\vdash y)=\beta(x)\vdash\beta(y),\label{t0.00}\\ \alpha(x\cdot y)&=& \alpha(x)\cdot\alpha(y), \beta(x\cdot y)=\beta(x)\cdot\beta(y),\label{t0.000}\\ (x\dashv y)\dashv\beta(z)&=& \alpha(x)\dashv(y\dashv z+y\vdash z+y\cdot z),\label{t1}\\ (x\vdash y)\dashv\beta(z)&=& \alpha(x)\vdash(y\dashv z),\label{1.2}\\ \alpha(x)\vdash(y\vdash z)&=& (x\dashv y+x\vdash y+x\cdot y)\vdash\beta(z),\\ (x\dashv y)\cdot\beta(z)&=& \alpha(x)\cdot(y\vdash z),\\ (x\vdash y)\cdot\beta(z)&=& \alpha(x)\vdash(y\cdot z),\\ (x\cdot y)\dashv\beta(z)&=&\alpha(x)\cdot(y\dashv z),\label{1.6}\\ (x\cdot y)\cdot\beta(z)&=&\alpha(x)\cdot(y\cdot z). \end{eqnarray} \end{defn} \begin{rmk} BiHom-tridendriform algebras are BiHom-X algebras. Also, when the BiHom-associative product ``$\cdot$'' is identically zero, we get a BiHom-dendriform algebra. \end{rmk} \begin{prop} Let $(T, \dashv, \vdash, \cdot,\alpha,\beta)$ be a BiHom-tridendriform algebra. \\ Then $(T, \dashv, \vdash',\alpha,\beta)$ is a BiHom-dendriform algebra, where for any $x, y\in T$, $$x\vdash' y:=x\vdash y+x\cdot y.$$ \end{prop} \begin{proof} We prove only one axiom, as others are proved similarly. For any $x, y, z\in T$, \begin{align*} (x\vdash'y)\dashv\beta(z)&=(x\vdash y+x\cdot y)\dashv\beta(z)\\ &=\alpha(x)\vdash(y\dashv z)+\alpha(x)\cdot(y\dashv z) \quad \mbox{(by~\eqref{1.2}~and~\eqref{1.6})}\\ &=\alpha(x)\vdash'(y\dashv z).
\qedhere \end{align*} \end{proof} \begin{thm}\label{car1} Let $(A, \cdot,\alpha, \beta,R)$ be a Rota-Baxter BiHom-associative algebra of weight $\lambda$, and define three new operations $\dashv, \vdash$ and $\ast$ on $A$ by $$x\dashv y:=x\cdot R(y), \quad x\vdash y:=R(x)\cdot y, \quad x\ast y:=\lambda x\cdot y.$$ Then $(A, \dashv, \vdash, \ast, \alpha,\beta)$ is a BiHom-tridendriform algebra. \end{thm} \begin{proof} We prove only one axiom, as others are proved similarly. For any $x, y, z\in A$, \begin{align*} (x\vdash y)\dashv\beta(z)&=(R(x)\cdot y)\dashv\beta(z)\\ &=(R(x)\cdot y)\cdot R(\beta(z))=\alpha(R(x))\cdot (y\cdot R(z))\\ &=\alpha(R(x))\cdot (y\dashv z)=\alpha(x)\vdash (y\dashv z). \qedhere \end{align*} \end{proof} In Theorem \ref{tp}, we associate a BiHom-associative algebra to any BiHom-tridendri\-form algebra. \begin{thm}\label{tp} If $(T, \dashv, \vdash, \cdot,\alpha,\beta)$ is a BiHom-tridendriform algebra, and $$x\ast y=x\vdash y+x\dashv y+ x\cdot y,$$ then $(T, \ast,\alpha,\beta)$ is a BiHom-associative algebra. \end{thm} \begin{proof} For any $x, y, z\in T$, \begin{multline*} (x\ast y)\ast \beta(z)-\alpha(x)\ast(y\ast z)= (x\vdash y)\vdash\beta(z)+(x\dashv y)\vdash\beta(z)\\ +(x\cdot y)\vdash\beta(z) +(x\vdash y)\dashv\beta(z) +(x\dashv y)\dashv\beta(z) +(x\cdot y)\dashv\beta(z)\\ +(x\vdash y)\cdot\beta(z) +(x\dashv y)\cdot\beta(z) +(x\cdot y)\cdot\beta(z)\\ -\alpha(x)\vdash(y\vdash z)-\alpha(x)\vdash(y\dashv z)-\alpha(x)\vdash(y\cdot z) \\ -\alpha(x)\dashv(y\vdash z) -\alpha(x)\dashv(y\dashv z) -\alpha(x)\dashv(y\cdot z) \\ -\alpha(x)\cdot(y\vdash z)-\alpha(x)\cdot(y\dashv z) -\alpha(x)\cdot(y\cdot z). \end{multline*} The right-hand side vanishes by the axioms in Definition \ref{LMMP}. This proves that $(T, \ast, \alpha,\beta)$ is a BiHom-associative algebra. \end{proof}
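Combining Theorems \ref{car1} and \ref{tp} (a sketch of the standard consequence, assuming, as in Theorem \ref{car1}, that $R$ commutes with $\alpha$ and $\beta$): if $(A,\cdot,\alpha,\beta,R)$ is a Rota-Baxter BiHom-associative algebra of weight $\lambda$, the associated BiHom-associative product of Theorem \ref{tp} is $$x\ast y=x\cdot R(y)+R(x)\cdot y+\lambda\, x\cdot y, \qquad R(x\ast y)=R(x)\cdot R(y),$$ where the second identity is the weight-$\lambda$ Rota--Baxter identity, so that $R$ is a morphism from $(A,\ast,\alpha,\beta)$ to $(A,\cdot,\alpha,\beta)$. As a classical untwisted example (with $\alpha=\beta=Id$), on $A=C([0,1])$ with the pointwise product, the integration operator $R(f)(s)=\int_{0}^{s}f(t)\,dt$ is a Rota--Baxter operator of weight $0$, by integration by parts.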
Now, we introduce the notion of a bimodule of a BiHom-tridendriform algebra. \begin{defn} Let $(T, \dashv, \vdash,\cdot, \alpha_{1}, \alpha_{2})$ be a BiHom-tridendriform algebra, and $V$ be a vector space. Let $l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash},l_{\cdot}, r_{\cdot} : T \rightarrow gl(V),$ and $\beta_{1}, \beta_{2}: V \rightarrow V$ be eight linear maps. Then $(l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash},l_{\cdot}, r_{\cdot}, \beta_{1}, \beta_{2}, V)$ is called a bimodule of $T$ if the following equations hold for any $ x, y \in T $ and $v\in V$: $$\begin{array}{llllllll} l_{\dashv}(x \dashv y)\beta_{2}(v)&=& l_{\dashv}(\alpha_{1}(x))l_{\ast}(y)v,&& r_{\dashv}(\alpha_{2}(x))l_{\dashv}(y)v&=&l_{\dashv}(\alpha_{1}(y))r_{\ast}(x)v,\\ r_{\dashv}(\alpha_{2}(y))r_{\dashv}(x)v &=& r_{\dashv}(x\ast y)\beta_{1}(v),&& l_{\dashv}(x \vdash y)\beta_{2}(v) &=& l_{\vdash}(\alpha_{1}(x))l_{\dashv}(y)v,\\ r_{\dashv}(\alpha_{2}(x))l_{\vdash}(y)v &=& l_{\vdash}(\alpha_{1}(y))r_{\dashv}(x)v,&& r_{\dashv}(\alpha_{2}(x))r_{\vdash}(y)v &=& r_{\vdash}(y\dashv x)\beta_{1}(v),\\ l_{\vdash}(x\ast y)\beta_{2}(v) &=& l_{\vdash}(\alpha_{1}(x))l_{\vdash}(y)v,&& r_{\vdash}(\alpha_{2}(x))l_{\ast}(y)v&=& l_{\vdash}(\alpha_{1}(y))r_{\vdash}(x)v,\\ r_{\vdash}(\alpha_{2}(x))r_{\ast}(y)v &=& r_{\vdash}(y \vdash x)\beta_{1}(v),&& l_{\cdot}(x\dashv y)\beta_{2}(v)&=&l_{\cdot}(\alpha_{1}(x))l_{\vdash}(y)v,\\ r_{\cdot}(\alpha_{2}(x))l_{\dashv}(y)v&=&l_{\cdot}(\alpha_{1}(y))r_{\vdash}(x)v,&& r_{\cdot}(\alpha_{2}(x))r_{\dashv}(y)v&=&r_{\cdot}(y\vdash x)\beta_{1}(v),\\ l_{\cdot}(x\vdash y)\beta_{2}(v)&=&l_{\vdash}(\alpha_{1}(x))l_{\cdot}(y)v,&& r_{\cdot}(\alpha_{2}(x))l_{\vdash}(y)v&=&l_{\vdash}(\alpha_{1}(y))r_{\cdot}(x)v,\\ r_{\cdot}(\alpha_{2}(x))r_{\vdash}(y)v&=&r_{\vdash}(y\cdot x)\beta_{1}(v),&& l_{\dashv}(x\cdot y)\beta_{2}(v)&=&l_{\cdot}(\alpha_{1}(x))l_{\dashv}(y)v, \\ r_{\dashv}(\alpha_{2}(x))l_{\cdot}(y)v&=&l_{\cdot}(\alpha_{1}(y))r_{\dashv}(x)v,&& r_{\dashv}(\alpha_{2}(x))r_{\cdot}(y)v&=&r_{\cdot}(y\dashv x)\beta_{1}(v),\\ l_{\cdot}(x\cdot y)\beta_{2}(v)&=&l_{\cdot}(\alpha_{1}(x))l_{\cdot}(y)v,&& r_{\cdot}(\alpha_{2}(x))l_{\cdot}(y)v&=&l_{\cdot}(\alpha_{1}(y))r_{\cdot}(x)v, \\ r_{\cdot}(\alpha_{2}(x))r_{\cdot}(y)v&=&r_{\cdot}(y\cdot x)\beta_{1}(v),&& \beta_{1}(l_{\vdash}(x)v)&=& l_{\vdash}(\alpha_{1}(x))\beta_{1}(v),\\ \beta_{1}(r_{\vdash}(x)v)&=& r_{\vdash}(\alpha_{1}(x))\beta_{1}(v),&& \beta_{2}(l_{\vdash}(x)v) &=& l_{\vdash}(\alpha_{2}(x))\beta_{2}(v),\cr\beta_{2}(r_{\vdash}(x)v)&=& r_{\vdash}(\alpha_{2}(x))\beta_{2}(v),&& \beta_{1}(l_{\dashv}(x)v)&=& l_{\dashv}(\alpha_{1}(x))\beta_{1}(v),\cr \beta_{1}(r_{\dashv}(x)v)&=& r_{\dashv}(\alpha_{1}(x))\beta_{1}(v),&& \beta_{2}(l_{\dashv}(x)v) &=& l_{\dashv}(\alpha_{2}(x))\beta_{2}(v),\\\beta_{2}(r_{\dashv}(x)v)&=&r_{\dashv}(\alpha_{2}(x))\beta_{2}(v), \end{array}$$ where $ x \ast y = x \dashv y + x \vdash y+x\cdot y, l_{\ast} = l_{\dashv} + l_{\vdash}+l_{\cdot}, r_{\ast} = r_{\dashv} + r_{\vdash}+r_{\cdot} $. \end{defn} \begin{prop} Let $(l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash},l_{\cdot}, r_{\cdot}, \beta_{1}, \beta_{2}, V)$ be a bimodule of a BiHom-tri\-dendriform algebra $(T,\dashv, \vdash,\cdot, \alpha_{1}, \alpha_{2}).$ Then, on the direct sum $T\oplus V $ of the underlying vector spaces of $T$ and $V$, there exists a BiHom-tridendriform algebra structure given, for all $ x, y \in T, u, v \in V $, by \begin{eqnarray*} (x + u) \dashv' (y + v) &:=& x \dashv y + l_{\dashv}(x)v + r_{\dashv}(y)u, \cr (x + u) \vdash' (y + v) &:=& x \vdash y + l_{\vdash}(x)v + r_{\vdash}(y)u,\cr (x + u) \cdot (y + v) &:=& x \cdot y + l_{\cdot}(x)v + r_{\cdot}(y)u. \end{eqnarray*} We denote it by $ T\times_{l_{\dashv},r_{\dashv}, l_{\vdash}, r_{\vdash}, l_{\cdot}, r_{\cdot},\alpha_{1}, \alpha_{2}, \beta_{1}, \beta_{2}} V$. \end{prop} \begin{proof} We prove only one axiom, as others are proved similarly.
For any $x_{1},x_{2},x_{3}\in T$ and $v_1, v_2, v_3\in V$, \begin{align*} &((x_1+v_1)\vdash'(x_2+v_2))\dashv'(\alpha_2+\beta_{2})(x_3+v_3)\\ &\quad =(x_1\vdash x_2+l_{\vdash}(x_1)v_2+r_{\vdash}(x_2)v_1)\dashv'(\alpha_2(x_3)+\beta_2(v_3))\\ &\quad =(x_1\vdash x_2)\dashv\alpha_2(x_3)+l_{\dashv}(x_1\vdash x_2)\beta_2(v_3)\\ &\quad\quad+r_\dashv(\alpha_2(x_3))l_\vdash(x_1)v_2+r_{\dashv}(\alpha_{2}(x_3))r_{\vdash}(x_2)v_1. \\ &(\alpha_1+\beta_1)(x_1+v_1)\vdash'((x_{2}+v_{2})\dashv'(x_3+v_3))\\ &\quad =(\alpha_1(x_1)+\beta_1(v_1))\vdash'(x_2\dashv x_3+l_{\dashv}(x_{2})v_3+r_{\dashv}(x_3)v_2)\\ &\quad =\alpha_1(x_1)\vdash(x_2\dashv x_3)+l_{\vdash}(\alpha_1(x_1))l_{\dashv}(x_2)v_3\\ &\quad \quad+l_{\vdash}(\alpha_1(x_1))r_{\dashv}(x_{3})v_2+r_\vdash(x_2\dashv x_3)\beta_1(v_1). \end{align*} We deduce that $((x_1+v_1)\vdash'(x_2+v_2))\dashv'(\alpha_2+\beta_{2})(x_3+v_3)=(\alpha_1+\beta_1)(x_1+v_1)\vdash'((x_{2}+v_{2})\dashv'(x_3+v_3))$. This ends the proof. \end{proof} \begin{exes} Some examples of bimodules of BiHom-tridendriform algebras can be constructed as follows. \\ 1) Let $(T,\dashv, \vdash,\cdot,\alpha,\beta)$ be a BiHom-tridendriform algebra. Then $$(L_{\dashv},R_{\dashv},L_{\vdash},R_{\vdash},L_\cdot,R_\cdot,\alpha,\beta,T) \ \mbox{is a bimodule of}\ T,$$ where for all $(a,b)\in T^{\times 2}$, \begin{alignat*}{4} L_{\dashv}(a)b&=a\dashv b, & \quad R_{\dashv}(a)b&=b\dashv a, \\ L_{\vdash}(a)b&=a\vdash b, & \quad R_{\vdash}(a)b&=b\vdash a, \\ L_{\cdot}(a)b&=a\cdot b, & \quad R_{\cdot}(a)b&=b\cdot a. \end{alignat*} More generally, if $B$ is a two-sided BiHom-ideal of $(T,\dashv, \vdash,\cdot,\alpha,\beta)$, then \\ $$(L_{\dashv},R_{\dashv},L_{\vdash},R_{\vdash},L_\cdot,R_\cdot,\alpha,\beta,B) \ \mbox{is a bimodule of}\ T,$$ where for all $x\in B$ and $(a,b)\in T^{\times 2}$, \begin{align*} L_{\dashv}(a)x &= a\dashv x=x\dashv a=R_{\dashv}(a)x, \\ L_{\vdash}(a)x &= a\vdash x=x\vdash a=R_{\vdash}(a)x, \\ L_{\cdot}(a)x &= a\cdot x=x\cdot a=R_{\cdot}(a)x. \end{align*} 2) If $(l_{\dashv},r_{\dashv},l_{\vdash},r_{\vdash},l_\cdot,r_\cdot,V)$ is a bimodule of a tridendriform algebra $(T,\dashv, \vdash,\cdot)$ in the usual sense, then $(l_{\dashv},r_{\dashv},l_{\vdash},r_{\vdash},l_\cdot,r_\cdot,Id_{V},Id_{V},V)$ is a bimodule of $\mathbb{T}$, where $\mathbb{T}=(T,\dashv, \vdash,\cdot,Id_{T}, Id_{T})$ is a BiHom-tridendriform algebra. \end{exes} \begin{prop} If $f:(T,\dashv_1, \vdash_1,\cdot_1,\alpha_{1},\alpha_2)\longrightarrow(T',\dashv_2, \vdash_2,\cdot_2,\beta_{1},\beta_{2})$ is a morphism of BiHom-tridendriform algebras, then $(l_{\dashv_1},r_{\dashv_1},l_{\vdash_1},r_{\vdash_1},l_{\cdot_1},r_{\cdot_1},\beta_1,\beta_{2},T')$ becomes a bimodule of $T$ via $f$, that is, for all $(a,b)\in T\times T'$, \begin{alignat*}{4} l_{\dashv_1}(a)b &=f(a)\dashv_2 b, &\quad r_{\dashv_1}(a)b &=b \dashv_2 f(a),\\ l_{\vdash_1}(a)b &=f(a)\vdash_2 b, &\quad r_{\vdash_1}(a)b &=b \vdash_2 f(a),\\ l_{\cdot_1}(a)b &=f(a)\cdot_2 b, &\quad r_{\cdot_1}(a)b &=b \cdot_2 f(a). \end{alignat*} \end{prop} \begin{proof} We prove only one axiom, since other axioms are proved similarly. For any $x,y\in T$ and $z\in T'$, \begin{align*} l_{\dashv_1}(x\dashv_1 y)\beta_2(z)&=f(x\dashv_1 y)\dashv_2\beta_2(z)\\ &=(f(x)\dashv_2 f(y))\dashv_2 \beta_2(z)=\beta_1 f(x)\dashv_2(f(y)\ast_{2} z)\\ &=f(\alpha_1(x))\dashv_2 l_{\ast_1}(y)z=l_{\dashv_1}(\alpha_{1}(x))l_{\ast_1}(y)z.
\qedhere \end{align*} \end{proof} \begin{defn} An abelian extension of BiHom-tridendriform algebras is a short exact sequence of BiHom-tridendriform algebras $$0\longrightarrow (V,\alpha_{V},\beta_{V})\stackrel{\mbox{i}} \longrightarrow(T,\dashv_T, \vdash_T,\cdot_T,\alpha_{T},\beta_{T})\stackrel{\mbox{$\pi$}}\longrightarrow (T',\dashv_{T'}, \vdash_{T'},\cdot_{T'},\alpha_{T'},\beta_{T'})\longrightarrow 0 ,$$ where $(V,\alpha_{V},\beta_{V})$ is a trivial BiHom-tridendriform algebra, and $i$ and $\pi$ are morphisms of BiHom-tridendriform algebras. Furthermore, if there is a morphism $s:(T',\dashv_{T'}, \vdash_{T'},\cdot_{T'},\alpha_{T'},\beta_{T'}) \longrightarrow (T,\dashv_T, \vdash_T,\cdot_T,\alpha_{T},\beta_{T})$ such that $\pi\circ s=id_{T'}$, then the abelian extension is said to be split and $s$ is called a section of $\pi$. \end{defn} \begin{rmk} Consider the split null extension $T\oplus V$ determined by the bimodule\\ $(l_{\dashv},r_{\dashv},l_{\vdash},r_{\vdash},l_\cdot,r_\cdot,\alpha_V,\beta_V,V)$ for the BiHom-tridendriform algebra $(T,\dashv_T, \vdash_T,\cdot_T,\alpha,\beta)$ in the previous proposition. Write elements $a+v$ of $T\oplus V$ as $(a,v).$ Then there is an injective homomorphism of BiHom-modules $i :V\rightarrow T\oplus V $ given by $i(v)=(0,v)$ and a surjective homomorphism of BiHom-modules $\pi : T\oplus V\rightarrow T$ given by $\pi(a,v)=a.$ Moreover, $i(V)$ is a two-sided BiHom-ideal of $T\oplus V$ such that $T\oplus V/i(V)\cong T$. On the other hand, there is a morphism of BiHom-tridendriform algebras $\sigma: T\rightarrow T\oplus V$ given by $\sigma(a)=(a,0)$ which is clearly a section of $\pi.$ Hence, we obtain an abelian split exact sequence of BiHom-tridendriform algebras, and $(l_{\dashv},r_{\dashv},l_{\vdash},r_{\vdash},l_\cdot,r_\cdot, \alpha_V,\beta_{V},V)$ is a bimodule for $T$ via $\pi.$ \end{rmk} \begin{prop}\label{propa} Let ($ l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash},l_{\cdot}, r_{\cdot}, \beta_{1}, \beta_{2}, V$) be a bimodule of a BiHom-triden\-driform algebra $(T, \dashv, \vdash,\cdot, \alpha_{1}, \alpha_{2})$. Let $(T, \ast, \alpha_{1}, \alpha_{2})$ be the associated BiHom-associative algebra. Then, $( l_{\dashv}+l_{\vdash}+l_{\cdot}, r_{\dashv}+r_{\vdash}+r_{\cdot},\beta_{1}, \beta_{2}, V)$ is a bimodule of $(T, \ast, \alpha_{1}, \alpha_{2})$. \end{prop} \begin{proof} We prove only one axiom. The other axioms are proved similarly. For any $x, y \in T, v \in V$, \begin{align*} &l_\ast(x\ast y)\beta_2(v)=(l_\dashv+l_\vdash+l_\cdot)(x\ast y)\beta_2(v) =(l_\dashv+l_\vdash+l_\cdot)(x\dashv y+x\vdash y+x\cdot y)\beta_2(v)\\ &\quad=l_\dashv(x\dashv y)\beta_2(v)+l_\dashv(x\vdash y)\beta_2(v)+l_\dashv(x\cdot y)\beta_2(v)+l_\vdash(x\ast y)\beta_2(v)\\ &\quad\quad +l_\cdot(x\dashv y)\beta_2(v)+l_\cdot(x\vdash y)\beta_2(v)+l_\cdot(x\cdot y)\beta_2(v)\\ &\quad =l_\dashv(\alpha_1(x))l_\ast(y)v+l_\vdash(\alpha_1(x))l_\dashv(y)v+l_\cdot(\alpha_1(x))l_\dashv(y)v+l_\vdash(\alpha_1(x))l_\vdash(y)v\\ &\quad\quad+l_\cdot(\alpha_1(x))l_\vdash(y)v+l_\vdash(\alpha_1(x))l_\cdot(y)v+l_\cdot(\alpha_1(x))l_\cdot(y)v\\ &\quad=(l_\dashv+l_\vdash+l_\cdot)(\alpha_1(x))(l_\dashv+l_\vdash+l_\cdot)(y)v=l_\ast(\alpha_1(x))l_\ast(y)v. \qedhere \end{align*} \end{proof}
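For instance (an immediate check, recorded here as an illustration), applying Proposition \ref{propa} to the regular bimodule $(L_{\dashv},R_{\dashv},L_{\vdash},R_{\vdash},L_{\cdot},R_{\cdot},\alpha_1,\alpha_2,T)$ from the examples above recovers the regular bimodule of the associated BiHom-associative algebra $(T,\ast,\alpha_1,\alpha_2)$, since $$(L_{\dashv}+L_{\vdash}+L_{\cdot})(x)y=x\dashv y+x\vdash y+x\cdot y=x\ast y=L_{\ast}(x)y$$ for all $x,y\in T$, and similarly $R_{\dashv}+R_{\vdash}+R_{\cdot}=R_{\ast}$.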
\begin{thm}\label{mamm1} Let $(T,\dashv, \vdash,\cdot,\alpha_1,\alpha_2)$ be a BiHom-tridendriform algebra, and $V_{\beta_1,\beta_2}=(l_{\dashv},r_{\dashv},l_{\vdash},r_{\vdash},l_{\cdot},r_{\cdot},\beta_1,\beta_2,V)$ be a bimodule of $T$. Let $\alpha'_1,\alpha'_2$ be two endomorphisms of $T$ such that any two of the maps $\alpha_1,\alpha'_1,\alpha_2,\alpha'_2$ commute and $\beta'_1,~\beta'_2$ be linear self-maps of $V$ such that any two of the maps $\beta_1,\beta'_1,\beta_2,\beta'_2$ commute. Suppose furthermore that $$\left\{ \begin{array}{lllllll} \beta'_1\circ l_\dashv=(l_\dashv\circ\alpha'_1)\beta'_1,~~ \beta'_2\circ l_\dashv=(l_\dashv\circ\alpha'_2)\beta'_2,& \\ \beta'_1\circ l_\vdash=(l_\vdash\circ\alpha'_1)\beta'_1,~~ \beta'_2\circ l_\vdash=(l_\vdash\circ\alpha'_2)\beta'_2,&\\ \beta'_1\circ l_\cdot=(l_\cdot\circ\alpha'_1)\beta'_1,~~ \beta'_2\circ l_\cdot=(l_\cdot\circ\alpha'_2)\beta'_2,& \\ \end{array} \right.$$ $$\left\{ \begin{array}{lllllll} \beta'_1\circ r_\dashv=(r_\dashv\circ\alpha'_1)\beta'_1,~~ \beta'_2\circ r_\dashv=(r_\dashv\circ\alpha'_2)\beta'_2,& \\ \beta'_1\circ r_\vdash=(r_\vdash\circ\alpha'_1)\beta'_1,~~ \beta'_2\circ r_\vdash=(r_\vdash\circ\alpha'_2)\beta'_2,&\\ \beta'_1\circ r_\cdot=(r_\cdot\circ\alpha'_1)\beta'_1,~~ \beta'_2\circ r_\cdot=(r_\cdot\circ\alpha'_2)\beta'_2,& \end{array} \right.$$ and write $T_{\alpha'_1,\alpha'_2}=(T,\dashv_{\alpha'_1,\alpha'_2}, \vdash_{\alpha'_1,\alpha'_2},\cdot_{\alpha'_1,\alpha'_2},\alpha_1\alpha'_1,\alpha_2\alpha'_2)$ for the BiHom-tridendriform algebra, and $V_{\beta'_1,\beta'_2}=(\widetilde{l}_{\dashv},\widetilde{r}_{\dashv},\widetilde{l}_{\vdash}, \widetilde{r}_{\vdash},\widetilde{l}_{\cdot},\widetilde{r}_{\cdot},\beta_1\beta'_1,\beta_2\beta'_2,V)$, where \begin{equation} \begin{array}{llll} \widetilde{l}_{\dashv}&=(l_{\dashv}\circ\alpha'_1)\beta'_2, &\widetilde{r}_{\dashv}&=(r_{\dashv}\circ\alpha'_2)\beta'_1, \\ \widetilde{l}_{\vdash}&=(l_{\vdash}\circ\alpha'_1)\beta'_2, &\widetilde{r}_{\vdash}&=(r_{\vdash}\circ\alpha'_2)\beta'_1,\\ \widetilde{l}_{\cdot}&=(l_{\cdot}\circ\alpha'_1)\beta'_2, &\widetilde{r}_{\cdot}&=(r_{\cdot}\circ\alpha'_2)\beta'_1. \end{array} \end{equation} This gives the BiHom-module $V_{\beta'_1,\beta'_2}$ the structure of a $T_{\alpha'_1,\alpha'_2}$-bimodule. \end{thm} \begin{proof} We prove only one axiom, since other axioms are proved similarly. For any $x,y\in T$ and $v\in V$, \begin{align*} \widetilde{l}_{\dashv}(x\dashv_{\alpha'_1,\alpha'_2}y)\beta_2\beta'_2(v) &=\widetilde{l}_{\dashv}(\alpha'_1(x)\dashv\alpha'_2(y))\beta_2\beta'_2(v)\\ &=l_{\dashv}(\alpha'^{2}_{1}(x)\dashv\alpha'_1\alpha'_2(y))\beta_2\beta'^{2}_{2}(v)\\ &=l_{\dashv}(\alpha_1\alpha'^{2}_{1}(x))l_{\ast}(\alpha'_1\alpha'_2(y))\beta'^{2}_{2}(v)\\ &=\widetilde{l}_{\dashv}(\alpha_1\alpha'_1(x))l_{\ast}(\alpha'_1(y))\beta'_2(v)\\ &=\widetilde{l}_{\dashv}(\alpha_1\alpha'_1(x))\widetilde{l}_{\ast}(y)v.
\qedhere \end{align*} \end{proof} Let ($ l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash},l_{\cdot}, r_{\cdot}, \beta_{1}, \beta_{2}, V$) be a bimodule of a BiHom-tridendriform algebra $(T, \dashv, \vdash,\cdot, \alpha_{1}, \alpha_{2})$ and $l_{\dashv}^{\ast}, r_{\dashv}^{\ast}, l_{\vdash}^{\ast}, r_{\vdash}^{\ast},l_{\cdot}^{\ast}, r_{\cdot}^{\ast}:T\rightarrow gl(V^{\ast}).$ Let $\alpha_1^{\ast},\alpha_{2}^{\ast}:T^{\ast}\rightarrow T^{\ast},~~\beta_{1}^{\ast},\beta_{2}^{\ast}:V^{\ast}\rightarrow V^{\ast}$ be the dual maps of $\alpha_1,\alpha_2,\beta_1$ and $\beta_2$, respectively, such that $$\begin{array}{llllllll} \langle l_{\dashv}^{\ast}(x)u^{\ast},v\rangle =\langle u^{\ast},l_{\dashv}(x)v\rangle,&& \langle r^{\ast}_{\dashv}(x)u^{\ast},v\rangle =\langle u^{\ast},r_{\dashv}(x)v\rangle,\\ \langle l_{\vdash}^{\ast}(x)u^{\ast},v\rangle =\langle u^{\ast},l_{\vdash}(x)v\rangle,&& \langle r^{\ast}_{\vdash}(x)u^{\ast},v\rangle =\langle u^{\ast},r_{\vdash}(x)v\rangle,\\ \langle l_{\cdot}^{\ast}(x)u^{\ast},v\rangle =\langle u^{\ast},l_{\cdot}(x)v\rangle,&& \langle r^{\ast}_{\cdot}(x)u^{\ast},v\rangle =\langle u^{\ast},r_{\cdot}(x)v\rangle,\\ \alpha_{1}^{\ast}(x^{\ast})(y)=x^{\ast}(\alpha_{1}(y)),&& \alpha_{2}^{\ast}(x^{\ast})(y)=x^{\ast}(\alpha_{2}(y)),\\ \beta_{1}^{\ast}(u^{\ast})(v)=u^{\ast}(\beta_{1}(v)),&& \beta_{2}^{\ast}(u^{\ast})(v)=u^{\ast}(\beta_{2}(v)). \end{array}$$ \begin{prop} Let ($ l_{\dashv}, r_{\dashv}, l_{\vdash}, r_{\vdash},l_{\cdot}, r_{\cdot}, \beta_{1}, \beta_{2}, V$) be a bimodule of a BiHom-tri\-dendriform algebra $(T, \dashv, \vdash,\cdot, \alpha_{1}, \alpha_{2})$. Then ($ l_{\dashv}^{\ast}, r_{\dashv}^{\ast}, l_{\vdash}^{\ast}, r_{\vdash}^{\ast},l_{\cdot}^{\ast}, r_{\cdot}^{\ast}, \beta_{1}^{\ast}, \beta_{2}^{\ast}, V^{\ast}$) is a bimodule of $(T, \dashv, \vdash,\cdot, \alpha_{1}, \alpha_{2})$ provided that $$\begin{array}{llllllll} \beta_{2}(l_{\dashv}(x \dashv y))u&=& l_{\ast}(y)l_{\dashv}(\alpha_{1}(x))u,&& l_{\dashv}(y)r_{\dashv}(\alpha_{2}(x))u&=&r_{\ast}(x)l_{\dashv}(\alpha_{1}(y))u,\\ r_{\dashv}(y)r_{\dashv}(\alpha_{2}(x))u &=& \beta_{1}(r_{\dashv}(x\ast y))u,&& \beta_{2}(l_{\dashv}(x \vdash y))u &=&l_{\dashv}(y) l_{\vdash}(\alpha_{1}(x))u,\\ l_{\vdash}(y)r_{\dashv}(\alpha_{2}(x))u &=& r_{\dashv}(x)l_{\vdash}(\alpha_{1}(y))u,&& r_{\vdash}(y)r_{\dashv}(\alpha_{2}(x))u &=& \beta_{1}(r_{\vdash}(y\dashv x))u,\\ \beta_{2}(l_{\vdash}(x\ast y))u &=& l_{\vdash}(y)l_{\vdash}(\alpha_{1}(x))u,&& l_{\ast}(y)r_{\vdash}(\alpha_{2}(x))u&=& r_{\vdash}(x)l_{\vdash}(\alpha_{1}(y))u,\\ r_{\ast}(y)r_{\vdash}(\alpha_{2}(x))u &=& \beta_{1}(r_{\vdash}(y \vdash x))u,&& \beta_{2}(l_{\cdot}(x\dashv y))u&=&l_{\vdash}(y)l_{\cdot}(\alpha_{1}(x))u,\\ l_{\dashv}(y)r_{\cdot}(\alpha_{2}(x))u&=&r_{\vdash}(x)l_{\cdot}(\alpha_{1}(y))u,&&r_{\dashv}(y) r_{\cdot}(\alpha_{2}(x))u&=&\beta_{1}(r_{\cdot}(y\vdash x))u,\\ \beta_{2}(l_{\cdot}(x\vdash y))u&=&l_{\cdot}(y)l_{\vdash}(\alpha_{1}(x))u,&& l_{\vdash}(y)r_{\cdot}(\alpha_{2}(x))u&=&r_{\cdot}(x)l_{\vdash}(\alpha_{1}(y))u,\\ r_{\vdash}(y) r_{\cdot}(\alpha_{2}(x))u&=&\beta_{1}(r_{\vdash}(y\cdot x))u,&& \beta_{2}(l_{\dashv}(x\cdot y))u&=&l_{\dashv}(y)l_{\cdot}(\alpha_{1}(x))u, \\ l_{\cdot}(y)r_{\dashv}(\alpha_{2}(x))u&=&r_{\dashv}(x)l_{\cdot}(\alpha_{1}(y))u,&&r_{\cdot}(y) r_{\dashv}(\alpha_{2}(x))u&=&\beta_{1}(r_{\cdot}(y\dashv x))u,\\ \beta_{2}(l_{\cdot}(x\cdot y))u&=&l_{\cdot}(y)l_{\cdot}(\alpha_{1}(x))u,&& l_{\cdot}(y)r_{\cdot}(\alpha_{2}(x))u&=&r_{\cdot}(x)l_{\cdot}(\alpha_{1}(y))u, \\ r_{\cdot}(y)r_{\cdot}(\alpha_{2}(x))u&=&\beta_{1}(r_{\cdot}(y\cdot 
x))u. \end{array}$$ where $ x \ast y = x \dashv y + x \vdash y+x\cdot y, l_{\ast} = l_{\dashv} + l_{\vdash}+l_{\cdot}, r_{\ast} = r_{\dashv} + r_{\vdash}+r_{\cdot}, $ for all $x,y\in T$ and $u\in V$. \end{prop} \begin{thm} Let $(A, \dashv_{A}, \vdash_{A},\cdot_A, \alpha_{1}, \alpha_{2})$ and $(B, \dashv_{B}, \vdash_{B}, \cdot_B,\beta_{1}, \beta_{2})$ be two BiHom-tridendriform algebras. Suppose that there are linear maps \begin{align*} & l_{\dashv_{A}}, r_{\dashv_{A}}, l_{\vdash_{A}}, r_{\vdash_{A}},l_{\cdot_A},r_{\cdot_A} : A \rightarrow gl(B), \\ & l_{\dashv_{B}}, r_{\dashv_{B}}, l_{\vdash_{B}}, r_{\vdash_{B}},l_{\cdot_B},r_{\cdot_B} : B \rightarrow gl(A), \end{align*} such that \begin{eqnarray*} (l_{\dashv_{A}}, r_{\dashv_{A}}, l_{\vdash_{A}}, r_{\vdash_{A}},l_{\cdot_A},r_{\cdot_A}, \beta_{1}, \beta_{2}, B) \ \mbox{is a bimodule of}\ A, \\ (l_{\dashv_{B}}, r_{\dashv_{B}}, l_{\vdash_{B}}, r_{\vdash_{B}},l_{\cdot_B},r_{\cdot_B}, \alpha_{1}, \alpha_{2}, A) \ \mbox{is a bimodule of}\ B, \end{eqnarray*} where \begin{align*} &x\ast_A y=x\dashv_A y+x\vdash_A y+x\cdot_A y,~l_{A} = l_{\dashv_{A}} + l_{\vdash_{A}}+l_{\cdot_{A}}, r_{A} = r_{\dashv_{A}} + r_{\vdash_{A}}+r_{\cdot_{A}},\\ &a\ast_B b=a\dashv_B b+a\vdash_B b+a\cdot_B b,~l_{B} = l_{\dashv_{B}} + l_{\vdash_{B}}+l_{\cdot_{B}} , r_{B} = r_{\dashv_{B}} + r_{\vdash_{B}}+ r_{\cdot_{B}}.\end{align*} Suppose moreover that, for any $ x, y \in A,~ a, b \in B $, \begin{eqnarray} \label{bieq201} r_{\dashv_{A}}(\alpha_{2}(x))(a \dashv_{B} b) = r_{A}(l_{B}(b)x)\beta_{1}(a) + \beta_{1}(a)\dashv_{B} (r_{\dashv_{A}}(x)b), \\ \label{bieq202} \begin{array}{ll} l_{\dashv_{A}}(l_{\dashv_{B}}(a)x)\beta_{2}(b) & + (r_{\dashv_{A}}(x)a) \dashv_{B}\beta_{2}(b)=\cr & \beta_{1}(a)\dashv_{B} (l_{\dashv_{A}}(x)b) + r_{\dashv_{A}}(r_{\dashv_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq203} l_{\dashv_{A}}(\alpha_{1}(x))(a \ast_{B} b) = ( l_{\dashv_{A}}(x)a) \ast_{B}\beta_{2}(b) + l_{\dashv_{A}}(r_{\dashv_{B}}(a)x)\beta_{2}(b), \\ \label{bieq204} r_{\dashv_{A}}(\alpha_{2}(x))(a \vdash_{B} b) = r_{\vdash_{A}}(l_{\dashv_{B}}(b)x)\beta_{1}(a) + \beta_{1}(a)\vdash_{B} (r_{\dashv_{A}}(x)b), \\ \label{bieq205} \begin{array}{ll} l_{\dashv_{A}}(l_{\vdash_{B}}(a)x)\beta_{2}(b) & + (r_{\vdash_{A}}(x)a) \dashv_{B}\beta_{2}(b)=\cr & \beta_{1}(a)\vdash_{B} (l_{\dashv_{A}}(x)b) + r_{\vdash_{A}}(r_{\dashv_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq206} l_{\dashv_{A}}(\alpha_{1}(x))(a \dashv_{B} b) = ( l_{\vdash_{A}}(x)a) \dashv_{B}\beta_{2}(b) + l_{\dashv_{A}}(r_{\vdash_{B}}(a)x)\beta_{2}(b), \\ \label{bieq207} r_{\vdash_{A}}(\alpha_{2}(x))(a \ast_{B} b) = r_{\vdash_{A}}(l_{\vdash_{B}}(b)x)\beta_{1}(a) + \beta_{1}(a)\vdash_{B} (r_{\vdash_{A}}(x)b), \\ \label{bieq208} \begin{array}{ll} l_{\vdash_{A}}(l_{B}(a)x)\beta_{2}(b) & + (r_{A}(x)a) \vdash_{B}\beta_{2}(b)=\cr & \beta_{1}(a)\vdash_{B} (l_{\vdash_{A}}(x)b) + r_{\vdash_{A}}(r_{\vdash_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq209} l_{\vdash_{A}}(\alpha_{1}(x))(a \vdash_{B} b) = ( l_{\vdash_{A}}(x)a) \vdash_{B}\beta_{2}(b) + l_{A}(r_{B}(a)x)\beta_{2}(b), \\ \label{bieq210} r_{\cdot_{A}}(\alpha_{2}(x))(a \dashv_{B} b) = r_{\cdot_{A}}(l_{\vdash_{B}}(b)x)\beta_{1}(a) + \beta_{1}(a)\cdot_{B} (r_{\vdash_{A}}(x)b), \\ \label{bieq211} \begin{array}{ll} l_{\cdot_{A}}(l_{\dashv_{B}}(a)x)\beta_{2}(b) & + (r_{\dashv_{A}}(x)a) \cdot_{B}\beta_{2}(b)=\cr & \beta_{1}(a)\cdot_{B} (l_{\vdash_{A}}(x)b) + r_{\cdot_{A}}(r_{\vdash_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq212} l_{\cdot_{A}}(\alpha_{1}(x))(a \vdash_{B} b) = ( l_{\dashv_{A}}(x)a) 
\cdot_{B}\beta_{2}(b) + l_{\cdot_{A}}(r_{\dashv_{B}}(a)x)\beta_{2}(b), \\ \label{bieq213} r_{\cdot_{A}}(\alpha_{2}(x))(a \vdash_{B} b) = r_{\vdash_{A}}(l_{\cdot_{B}}(b)x)\beta_{1}(a) + \beta_{1}(a)\vdash_{B} (r_{\cdot_{A}}(x)b), \\ \label{bieq214} \begin{array}{ll} l_{\cdot_{A}}(l_{\vdash_{B}}(a)x)\beta_{2}(b) & + (r_{\vdash_{A}}(x)a) \cdot_{B}\beta_{2}(b)=\cr & \beta_{1}(a)\vdash_{B} (l_{\cdot_{A}}(x)b) + r_{\vdash_{A}}(r_{\cdot_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq215} l_{\vdash_{A}}(\alpha_{1}(x))(a \cdot_{B} b) = ( l_{\vdash_{A}}(x)a) \cdot_{B}\beta_{2}(b) + l_{\cdot_{A}}(r_{\vdash_{B}}(a)x)\beta_{2}(b), \\ \label{bieq216} r_{\dashv_{A}}(\alpha_{2}(x))(a \cdot_{B} b) = r_{\cdot_{A}}(l_{\dashv_{B}}(b)x)\beta_{1}(a) + \beta_{1}(a)\cdot_{B} (r_{\dashv_{A}}(x)b), \\ \label{bieq217} \begin{array}{ll} l_{\dashv_{A}}(l_{\cdot_{B}}(a)x)\beta_{2}(b) & + (r_{\cdot_{A}}(x)a) \dashv_{B}\beta_{2}(b)=\cr & \beta_{1}(a)\cdot_{B} (l_{\dashv_{A}}(x)b) + r_{\cdot_{A}}(r_{\dashv_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq218} l_{\cdot_{A}}(\alpha_{1}(x))(a \dashv_{B} b) = ( l_{\cdot_{A}}(x)a) \dashv_{B}\beta_{2}(b) + l_{\dashv_{A}}(r_{\cdot_{B}}(a)x)\beta_{2}(b), \\ \label{bieq219} r_{\cdot_{A}}(\alpha_{2}(x))(a \cdot_{B} b) = r_{\cdot_{A}}(l_{\cdot_{B}}(b)x)\beta_{1}(a) + \beta_{1}(a)\cdot_{B} (r_{\cdot_{A}}(x)b), \\ \label{bieq220} \begin{array}{ll} l_{\cdot_{A}}(l_{\cdot_{B}}(a)x)\beta_{2}(b) & + (r_{\cdot_{A}}(x)a) \cdot_{B}\beta_{2}(b)=\cr & \beta_{1}(a)\cdot_{B} (l_{\cdot_{A}}(x)b) + r_{\cdot_{A}}(r_{\cdot_{B}}(b)x)\beta_{1}(a), \end{array} \\ \label{bieq221} l_{\cdot_{A}}(\alpha_{1}(x))(a \cdot_{B} b) = ( l_{\cdot_{A}}(x)a) \cdot_{B}\beta_{2}(b) + l_{\cdot_{A}}(r_{\cdot_{B}}(a)x)\beta_{2}(b),\\ \label{bieq222} r_{\dashv_{B}}(\beta_{2}(a))(x \dashv_{A} y) = r_{B}(l_{A}(y)a)\alpha_{1}(x) + \alpha_{1}(x)\dashv_{A} (r_{\dashv_{B}}(a)y), \\ \label{bieq223} \begin{array}{ll} l_{\dashv_{B}}(l_{\dashv_{A}}(x)a)\alpha_{2}(y) & + (r_{\dashv_{B}}(a)x) \dashv_{A}\alpha_{2}(y)=\cr & \alpha_{1}(x)\dashv_{A} (l_{\dashv_{B}}(a)y) + r_{\dashv_{B}}(r_{\dashv_{A}}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq224} l_{\dashv_{B}}(\beta_{1}(a))(x \ast_{A} y) = ( l_{\dashv_{B}}(a)x) \ast_{A}\alpha_{2}(y) + l_{\dashv_{B}}(r_{\dashv_{A}}(x)a)\alpha_{2}(y),\\ \label{bieq225} r_{\dashv_{B}}(\beta_{2}(a))(x \vdash_{A} y) = r_{\vdash_{B}}(l_{\dashv_{A}}(y)a)\alpha_{1}(x) + \alpha_{1}(x)\vdash_{A} (r_{\dashv_{B}}(a)y), \\ \label{bieq226} \begin{array}{ll} l_{\dashv_{B}}(l_{\vdash_{A}}(x)a)\alpha_{2}(y) & + (r_{\vdash_{B}}(a)x) \dashv_{A}\alpha_{2}(y)=\cr & \alpha_{1}(x)\vdash_{A} (l_{\dashv_{B}}(a)y) + r_{\vdash_{B}}(r_{\dashv_{A}}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq227} l_{\dashv_{B}}(\beta_{1}(a))(x \dashv_{A} y) = ( l_{\vdash_{B}}(a)x) \dashv_{A}\alpha_{2}(y) + l_{\dashv_{B}}(r_{\vdash_{A}}(x)a)\alpha_{2}(y),\\ \label{bieq228} r_{\vdash_{B}}(\beta_{2}(a))(x \ast_{A} y) = r_{\vdash_{B}}(l_{\vdash_{A}}(y)a)\alpha_{1}(x) + \alpha_{1}(x)\vdash_{A} (r_{\vdash_{B}}(a)y), \\ \label{bieq229} \begin{array}{ll} l_{\vdash_{B}}(l_{A}(x)a)\alpha_{2}(y) & + (r_{B}(a)x) \vdash_{A}\alpha_{2}(y)=\cr & \alpha_{1}(x)\vdash_{A} (l_{\vdash_{B}}(a)y) + r_{\vdash_{B}}(r_{\vdash_{A}}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq230} l_{\vdash_{B}}(\beta_{1}(a))(x \vdash_{A} y) = ( l_{\vdash_{B}}(a)x) \vdash_{A}\alpha_{2}(y) + l_{B}(r_{A}(x)a)\alpha_{2}(y),\\ \label{bieq231} r_{\cdot_{B}}(\beta_{2}(a))(x \dashv_{A} y) = r_{\cdot_{B}}(l_{\vdash_{A}}(y)a)\alpha_{1}(x) + \alpha_{1}(x)\cdot_{A} (r_{\vdash_{B}}(a)y), \\ 
\label{bieq232} \begin{array}{ll} l_{\cdot_{B}}(l_{\dashv_{A}}(x)a)\alpha_{2}(y) & + (r_{\dashv_{B}}(a)x) \cdot_{A}\alpha_{2}(y)=\cr & \alpha_{1}(x)\cdot_{A} (l_{\vdash_{B}}(a)y) + r_{\cdot_{B}}(r_{\vdash_{A}}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq233} l_{\cdot_{B}}(\beta_{1}(a))(x \vdash_{A} y) = ( l_{\dashv_{B}}(a)x) \cdot_{A}\alpha_{2}(y) + l_{\cdot_{B}}(r_{\dashv_{A}}(x)a)\alpha_{2}(y),\\ \label{bieq234} r_{\cdot_{B}}(\beta_{2}(a))(x \vdash_{A} y) = r_{\vdash_{B}}(l_{\cdot_{A}}(y)a)\alpha_{1}(x) + \alpha_{1}(x)\vdash_{A} (r_{\cdot_{B}}(a)y), \\ \label{bieq235} \begin{array}{ll} l_{\cdot_{B}}(l_{\vdash_{A}}(x)a)\alpha_{2}(y) & + (r_{\vdash_{B}}(a)x) \cdot_{A}\alpha_{2}(y)=\cr & \alpha_{1}(x)\vdash_{A} (l_{\cdot_{B}}(a)y) + r_{\vdash_{B}}(r_{\cdot_{A}}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq236} l_{\vdash_{B}}(\beta_{1}(a))(x \cdot_{A} y) = ( l_{\vdash_{B}}(a)x) \cdot_{A}\alpha_{2}(y) + l_{\cdot_{B}}(r_{\vdash_{A}}(x)a)\alpha_{2}(y),\\ \label{bieq237} r_{\dashv_{B}}(\beta_{2}(a))(x \cdot_{A} y) = r_{\cdot_{B}}(l_{\dashv_{A}}(y)a)\alpha_{1}(x) + \alpha_{1}(x)\cdot_{A} (r_{\dashv_{B}}(a)y), \\ \label{bieq238} \begin{array}{ll} l_{\dashv_{B}}(l_{\cdot_{A}}(x)a)\alpha_{2}(y) & + (r_{\cdot_{B}}(a)x) \dashv_{A}\alpha_{2}(y)=\cr & \alpha_{1}(x)\cdot_{A} (l_{\dashv_{B}}(a)y) + r_{\cdot_{B}}(r_{\dashv_{A}}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq239} l_{\cdot_{B}}(\beta_{1}(a))(x \dashv_{A} y) = ( l_{\cdot_{B}}(a)x) \dashv_{A}\alpha_{2}(y) + l_{\dashv_{B}}(r_{\cdot_{A}}(x)a)\alpha_{2}(y),\\ \label{bieq240} r_{\cdot_{B}}(\beta_{2}(a))(x \cdot_{A} y) = r_{\cdot_{B}}(l_{\cdot_{A}}(y)a)\alpha_{1}(x) + \alpha_{1}(x)\cdot_{A} (r_{\cdot_{B}}(a)y), \\ \label{bieq241} \begin{array}{ll} l_{\cdot_{B}}(l_{\cdot_{A}}(x)a)\alpha_{2}(y) & + (r_{\cdot_{B}}(a)x) \cdot_{A}\alpha_{2}(y)=\cr & \alpha_{1}(x)\cdot_{A} (l_{\cdot_{B}}(a)y) + r_{\cdot_{B}}(r_{\cdot_{A}}(y)a)\alpha_{1}(x), \end{array} \\ \label{bieq242} l_{\cdot_{B}}(\beta_{1}(a))(x \cdot_{A} y) = ( l_{\cdot_{B}}(a)x) \cdot_{A}\alpha_{2}(y) + l_{\cdot_{B}}(r_{\cdot_{A}}(x)a)\alpha_{2}(y). \end{eqnarray} Then, there is a BiHom-tridendriform algebra structure on the direct sum $ A \oplus B $ of the underlying vector spaces of $ A $ and $ B $ given for any $ x, y \in A, a, b \in B $ by \begin{eqnarray*} (x + a) \dashv ( y + b ) &:=& (x \dashv_{A} y + r_{\dashv_{B}}(b)x + l_{\dashv_{B}}(a)y)\cr &+&(l_{\dashv_{A}}(x)b + r_{\dashv_{A}}(y)a + a \dashv_{B} b ), \cr (x + a) \vdash ( y + b ) &:=& (x \vdash_{A} y + r_{\vdash_{B}}(b)x + l_{\vdash_{B}}(a)y)\cr &+& (l_{\vdash_{A}}(x)b + r_{\vdash_{A}}(y)a + a \vdash_{B} b ),\cr (x + a) \cdot ( y + b ) &:=& (x \cdot_{A} y + r_{\cdot_{B}}(b)x + l_{\cdot_{B}}(a)y)\cr &+&(l_{\cdot_{A}}(x)b + r_{\cdot_{A}}(y)a + a \cdot_{B} b ). \end{eqnarray*} \end{thm} \begin{proof} The proof is similar to that of Theorem \ref{matched ass}. \end{proof} Let $ A \bowtie^{l_{\dashv_{A}}, r_{\dashv_{A}}, l_{\vdash_{A}}, r_{\vdash_{A}},l_{\cdot_{A}}, r_{\cdot_{A}}, \beta_{1}, \beta_{2}}_{l_{\dashv_{B}}, r_{\dashv_{B}}, l_{\vdash_{B}}, r_{\vdash_{B}},l_{\cdot_{B}}, r_{\cdot_{B}}, \alpha_{1}, \alpha_{2}} B $ denote this BiHom-tridendriform algebra. \begin{defn} Let $ (A, \dashv_{A}, \vdash_{A},\cdot_{A}, \alpha_{1}, \alpha_{2}) $ and $ (B, \dashv_{B}, \vdash_{B},\cdot_{B}, \beta_{1}, \beta_{2}) $ be two BiHom-tridendriform algebras. 
Suppose there exist linear maps $$ l_{\dashv_{A}}, r_{\dashv_{A}}, l_{\vdash_{A}}, r_{\vdash_{A}}, l_{\cdot_{A}}, r_{\cdot_{A}} : A \rightarrow gl(B),$$ $$ l_{\dashv_{B}}, r_{\dashv_{B}}, l_{\vdash_{B}}, r_{\vdash_{B}},l_{\cdot_{B}}, r_{\cdot_{B}} : B \rightarrow gl(A) $$ such that $(l_{\dashv_{A}}, r_{\dashv_{A}}, l_{\vdash_{A}}, r_{\vdash_{A}},l_{\cdot_{A}}, r_{\cdot_{A}}, \beta_{1}, \beta_{2}, B)$ is a bimodule of $ A,$ and $$(l_{\dashv_{B}}, r_{\dashv_{B}}, l_{\vdash_{B}}, r_{\vdash_{B}},l_{\cdot_{B}}, r_{\cdot_{B}}, \alpha_{1}, \alpha_{2}, A) \ \mbox{is a bimodule of}\ B.$$ If \eqref{bieq201} - \eqref{bieq242} are satisfied, then $$(A, B, l_{\dashv_{A}}, r_{\dashv_{A}}, l_{\vdash_{A}}, r_{\vdash_{A}},l_{\cdot_{A}}, r_{\cdot_{A}}, \beta_{1}, \beta_{2}, l_{\dashv_{B}}, r_{\dashv_{B}}, l_{\vdash_{B}}, r_{\vdash_{B}}, l_{\cdot_{B}}, r_{\cdot_{B}},\alpha_{1}, \alpha_{2})$$ is called a matched pair of BiHom-tridendriform algebras. \end{defn} \begin{cor} Let $$(A, B, l_{\dashv_{A}}, r_{\dashv_{A}}, l_{\vdash_{A}}, r_{\vdash_{A}},l_{\cdot_{A}}, r_{\cdot_{A}}, \beta_{1}, \beta_{2}, l_{\dashv_{B}}, r_{\dashv_{B}}, l_{\vdash_{B}}, r_{\vdash_{B}},l_{\cdot_{B}}, r_{\cdot_{B}}, \alpha_{1}, \alpha_{2}) $$ be a matched pair of BiHom-tridendriform algebras. Then, $$(A, B, l_{\dashv_{A}} + l_{\vdash_{A}}+l_{\cdot_{A}}, r_{\dashv_{A}} + r_{\vdash_{A}}+r_{\cdot_{A}},\beta_{1}, \beta_{2}, l_{\dashv_{B}} + l_{\vdash_{B}}+l_{\cdot_{B}}, r_{\dashv_{B}} + r_{\vdash_{B}}+ r_{\cdot_{B}}, \alpha_{1}, \alpha_{2})$$ is a matched pair of the associated BiHom-associative algebras $(A, \ast_{A}, \alpha_{1}, \alpha_{2})$ and $(B, \ast_{B}, \beta_{1}, \beta_{2})$. \end{cor} \begin{proof} Let $(A, B, l_{\dashv_{A}}, r_{\dashv_{A}}, l_{\vdash_{A}}, r_{\vdash_{A}},l_{\cdot_{A}}, r_{\cdot_{A}}, \beta_1,\beta_2, l_{\dashv_{B}}, r_{\dashv_{B}}, l_{\vdash_{B}}, r_{\vdash_{B}}, l_{\cdot_{B}}, r_{\cdot_{B}},\alpha_1,\alpha_2)$ be a matched pair of the BiHom-tridendriform algebras $$(A, \dashv_{A}, \vdash_{A}, \cdot_A,\alpha_1,\alpha_2)\quad \mbox{and}\quad (B, \dashv_{B}, \vdash_{B},\cdot_B,\beta_1,\beta_2).$$ In view of Proposition \ref{propa}, the linear maps $l_{\dashv_{A}} + l_{\vdash_{A}}+l_{\cdot_A}, r_{\dashv_{A}} + r_{\vdash_{A}}+r_{\cdot_A}:A\rightarrow gl(B)$ and $l_{\dashv_{B}} + l_{\vdash_{B}}+l_{\cdot_B}, r_{\dashv_{B}} + r_{\vdash_{B}}+r_{\cdot_B}:B\rightarrow gl(A)$ define bimodules over the underlying BiHom-associative algebras $(A,\ast_A, \alpha_1,\alpha_2)$ and $(B,\ast_B,\beta_1,\beta_2)$, respectively. Thus, \eqref{3}-\eqref{5} are equivalent to \eqref{bieq201}-\eqref{bieq221}, and \eqref{6}-\eqref{8} are equivalent to \eqref{bieq222}-\eqref{bieq242}. \end{proof}
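As a consistency check, let us record (as a sketch of ours, not a statement taken from the references) the special case of the preceding construction in which $B=V$ carries the zero products and acts trivially on $A=T$, that is, $l_{\dashv_{B}}=r_{\dashv_{B}}=l_{\vdash_{B}}=r_{\vdash_{B}}=l_{\cdot_{B}}=r_{\cdot_{B}}=0$. The direct sum products then reduce to \begin{align*} (x+u)\dashv(y+v)&=x\dashv_T y+l_{\dashv}(x)v+r_{\dashv}(y)u,\\ (x+u)\vdash(y+v)&=x\vdash_T y+l_{\vdash}(x)v+r_{\vdash}(y)u,\\ (x+u)\cdot(y+v)&=x\cdot_T y+l_{\cdot}(x)v+r_{\cdot}(y)u, \end{align*} for $x,y\in T$ and $u,v\in V$, which is precisely the split null extension considered in the Remark above.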
{ "timestamp": "2021-05-06T02:07:25", "yymm": "2105", "arxiv_id": "2105.01812", "language": "en", "url": "https://arxiv.org/abs/2105.01812" }
\section{Introduction} We divide this work into two main tasks. The first task is to present a formula to compute the local Euler obstruction of an isolated determinantal singularity in terms of Newton polyhedra. We strongly use the results introduced by Esterov \cite{Esterov}. In his work, Esterov presents a formula to compute the Euler characteristic of the Milnor fiber of a function defined on an isolated determinantal singularity in terms of the Newton polyhedra of each column of the matrix, as a particular case of his results on resultantal singularities. In Section 2, we introduce the Newton polyhedron of a matrix and use it, together with the equivalence of matrices, to simplify Esterov's multiplicity formula. More precisely, we prove the following equality. \begin{theorem} \label{multiplicity} Let $A=(a_{ij}): (\mathbb{C}^m,0)\rightarrow (M_{n,k},0)$ be a germ of a matrix with polynomial entries whose Newton polyhedron $\Delta_A$ has bounded complement in $\mathbb{R}^m_{+}$. If the leading coefficients of $A$ are in good general position, then $A$ defines the germ of a determinantal singularity $X_A^n$, whose multiplicity is $$ m(X_A^n,0)=\displaystyle\binom{k}{k-n+1} \cdot m! \cdot \tilde{\Delta}_A^{k-n+1}L^{m-k+n-1},$$ where $L$ denotes the standard $m$-dimensional simplex and $\tilde{\Delta}_A=\mathbb{R}^m_{+} \setminus \Delta_A$. \end{theorem} In Section 3, we also apply these concepts to adapt Esterov's formula for the Euler characteristic of the Milnor fiber of a function defined on an isolated determinantal singularity, obtaining the next result. \begin{theorem} \label{fibramilnortheorem} Let $X_A^n$ be an isolated determinantal singularity given by the matrix $A=(a_{ij}): (\mathbb{C}^m,0)\rightarrow (M_{n,k},0)$, where $A$ has holomorphic entries and Newton polyhedron $\Delta_A$. Suppose that $\tilde{\Delta}_A$ is bounded. \begin{enumerate} \item[i)] If the leading coefficients of $A$ are in strong general position, then $X_A^n$ is smooth outside the origin. \item[ii)] If the Newton polyhedron $\Delta_f \subset \mathbb{R}^m_{+}$ of a germ $f:(\mathbb{C}^m,0)\rightarrow (\mathbb{C},0)$ intersects all coordinate axes, and the leading coefficients of $f$ are in general position with respect to $A$, then the Euler characteristic of the Milnor fiber of $f|_{X_A^n}$ is given by \begin{equation*} \begin{aligned} \chi (F_0)=\displaystyle \sum_{q=k-n+1}^k \sum_{\stackrel{ I\subset \{1,\dots ,m\}}{ |I|\geq q+1 }} \sum_{a=1}^{|I|-q} (-1)^{|I|+k-n} &\binom{|I|+q-a-2}{n+q-k-1}\\ &\times\binom{|I|-a-1}{q-1} \binom{k}{q} \cdot |I|!\cdot (\tilde{\Delta}_f^I)^a (\tilde{\Delta}_A^I)^{|I|-a}. \end{aligned} \end{equation*} \end{enumerate} \end{theorem} As a corollary of Theorem \ref{fibramilnortheorem}, we obtain the following formula for the local Euler obstruction of an isolated determinantal singularity. \begin{corollary} \label{CorollaryObstruction} Let $X_A^n$ be an isolated determinantal singularity defined by the germ $A:(\mathbb{C}^m,0)\rightarrow (M_{n,k},0)$, where $A$ has holomorphic entries and Newton polyhedron $\Delta_A$. 
Suppose that $\tilde{\Delta}_A$ is bounded. If there exists a generic linear form $l: (\mathbb{C}^m,0) \rightarrow (\mathbb{C},0)$ such that its Newton polyhedron intersects all coordinate axes and the leading coefficients of $dl$ are in general position with respect to $A$, then \\ \begin{equation*} \begin{aligned} \Eu_{X_A^n}(0)= \sum_{q=k-n+1}^k \sum_{\stackrel{ I\subset \{1,\dots ,m\}}{ |I|\geq q+1 }} \sum_{a=1}^{|I|-q} (-1)^{|I|+k-n} &\binom{|I|+q-a-2}{n+q-k-1}\\ &\times \binom{|I|-a-1}{q-1}\binom{k}{q} \cdot |I|!\cdot (L^I)^a (\tilde{\Delta}_A^I)^{|I|-a}, \end{aligned} \end{equation*} where $L$ is the standard $m$-dimensional simplex. \end{corollary} The second task of this work is to provide conditions in terms of Newton polyhedra which guarantee the Whitney equisingularity of a family of isolated determinantal singularities. For this purpose, in Section 4, we apply \cite[Theorem 5.3]{BBT4}, which states that a family of $d$-dimensional isolated determinantal singularities is Whitney equisingular if, and only if, all the polar multiplicities from $m_0$ to $m_d$ are constant in this family. Then, we have the following theorem. \begin{theorem} \label{equi} Let $\left\{(X_{A_t}^n, 0) \right\}_{ t\in D }$ be a $d$-dimensional family of determinantal singularities, defined by the germs of matrices $A_t:(\mathbb{C}^m,0)\rightarrow (M_{n,k},0)$ with holomorphic entries. Suppose that, for all $t \in D$, the following three conditions are satisfied: \begin{enumerate} \item $X_{A_t}^n$ has isolated singularity at $0 \in \mathbb{C}^m$; \item the Newton polyhedron $\Delta_{A_t}$ of $A_t$ is independent of $t$ and $\tilde{\Delta}_{A_t}$ is bounded; \item the leading coefficients of $A_t$ are in strong general position. \end{enumerate} Then the family $\left\{(X_{A_t}^n, 0) \right\}_{ t\in D}$ is Whitney equisingular. \end{theorem} In Section 5, we introduce some applications of the results presented in this work, such as Whitney equisingularity for a family of functions defined on isolated determinantal singularities and a L\^{e}-Greuel type formula for the vanishing Euler characteristic of an isolated determinantal singularity. \section{Equivalent matrices and Newton polyhedra} In this section we define the Newton polyhedron of a germ of a matrix and use equivalence of matrices in order to extend and simplify some of the formulas presented by Esterov in \cite{Esterov}. \subsection{Determinantal singularities} Determinantal varieties have been wi\-dely studied by researchers in Commutative Algebra and Algebraic Geometry (see \cite{Bruns2,Bruns1}). In Singularity Theory there are countless articles with the purpose of studying those varieties; we can quote, for instance, Ebeling and Gusein-Zade \cite{EGZ}, Frühbis-Krüger and Neumer \cite{AFG2}, Nuño-Ballesteros, Oréfice-Okamoto and Tomazella \cite{BBT}, Pereira and Ruas \cite{MC} and Zach \cite{Zach}. Consider $M_{n,k}$, the set of complex matrices of size $n\times k$, and $M_{n,k}^s$, the subset consisting of the matrices of rank less than $s$, where $0< s\leq n \leq k$ are integers. The set $M_{n,k}^s$ is an irreducible subvariety of $M_{n,k}$ with co-dimension $(n-s+1)(k-s+1)$, which is called the {\it generic determinantal variety}, and $M_{n,k}^{s-1}$ is its singular set. Let $A:(\mathbb{C}^m,0) \rightarrow (M_{n,k},0)$ be a holomorphic map germ defined by $A(x)=(a_{ij}(x))$ with $a_{ij} \in \mathcal{O}_m$, where $\mathcal{O}_m$ is the ring of holomorphic functions in $\mathbb{C}^m$, for $1 \leq i \leq n$ and $1 \leq j \leq k$. 
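For a worked instance of the co-dimension formula in the type that recurs in our examples: for $(n,k;s)=(2,3;2)$ one gets ${\rm codim}\, M_{2,3}^{2}=(2-2+1)(3-2+1)=2$, so a determinantal variety $X_A^2$ of type $(2,3;2)$ in $(\mathbb{C}^4,0)$ has dimension $4-2=2$.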
\begin{definition} Let $(X_A^s,0) \subset (\mathbb{C}^m,0)$ be the germ of the variety defined by $X_A^s=A^{-1}(M_{n,k}^s)$. We say that $X_A^s$ is a {\it determinantal variety} of type $(n,k;s)$ in $(\mathbb{C}^m,0)$ if its dimension is equal to $$ m- (n-s+1)(k-s+1).$$ \end{definition} The analytical structure of $X_A^s$ is the one defined by $A$ and $M_{n,k}^s$, \ie given by the minors of size $s$ of $A(x)$. When $s=1$, the determinantal variety $X_A^s$ is a complete intersection variety. \begin{definition} Let $(X_A^s,0) \subset (\mathbb{C}^m,0)$ be a determinantal variety of type $(n,k;s)$ satisfying the condition $$s=1 \textrm{ or } m<(n-s+2)(k-s+2).$$ The variety $(X_A^s,0)$ is said to be an {\it isolated determinantal singularity} (IDS) if $X_A^s$ is smooth at $x$ and $\textrm{rank } A(x)=s-1$ for all $x\neq 0$ in a neighbourhood of the origin. \end{definition} \begin{remark} Isolated determinantal singularities are $\mathcal{G}$-determined (see \cite{MP}), hence they have a polynomial representative. \end{remark} \begin{definition} The germs of matrices $A,\tilde{A}: (\mathbb{C}^m,0)\to (M_{n,k},0)$ are said to be {\it equivalent} if they belong to the same equivalence class of the following relation: $$A\sim \tilde{A} \Leftrightarrow \exists P \in \rm{GL}(n;\mathbb{C}), \exists Q \in \rm{GL}(k;\mathbb{C}): \tilde{A}= P\cdot A\cdot Q,$$ where $\rm{GL}(p,\mathbb{C})$ is the group of invertible matrices in $M_{p,p}(\mathbb{C})$. \end{definition} Given a matrix $A \in M_{n,k}$, let $I_s(A)$ be the ideal generated by the $s$ size minors of $A$. It is well known that for the matrices $P \in \rm{GL}(n;\mathbb{C})$ and $Q \in \rm{GL}(k;\mathbb{C})$, we have $I_s(A)= I_s( P\cdot A\cdot Q)$ (see \cite[Chapter XVI, Sections 7-8]{BML}). The reason is that $A$ represents a $\mathbb{C}$-linear map $\mathbb{C}^k \to \mathbb{C}^n$ and the entries of the matrix of $\wedge^s A:\wedge^s(\mathbb{C}^k) \to \wedge^s(\mathbb{C}^n)$ are the $s$ size minors of $A$, where $\wedge^s$ is the exterior product of vectors taking $s$ at a time. Since $\wedge^s(PA)=\wedge^s (P)\wedge^s(A)$, we have $I_s(PA)\subseteq I_s(P) I_s(A)\subseteq I_s(A)$. Therefore, $I_s(A)=I_s(P^{-1}(PA))\subseteq I_s(PA)$ and, consequently, $I_s(PA)=I_s(A)$. Similarly, $I_s(AQ)=I_s(A)$. Therefore, if the germs $A,\tilde{A}:(\mathbb{C}^m,0) \rightarrow (M_{n,k},0)$ are equivalent, then $X_A^s=X_{\tilde{A}}^s$. Furthermore, the action of $\rm{GL}(n;\mathbb{C})\times \rm{GL}(k;\mathbb{C})$ on the space of matrices $M_{n,k}(\mathcal{O}_m)$ corresponds to a subgroup of the $\mathcal{G}$-action (see \cite{Damon,MP}). Since invariants such as the polar multiplicities, the local Euler obstruction and the vanishing Euler characteristic depend only on the $\mathcal{G}$-equivalence class, equivalence of matrices does not alter them. \subsection{Newton polyhedra} The Newton polyhedra of polynomial functions are important objects which can be very useful to compute invariants such as the Milnor number, the local Euler obstruction and multiplicities, among others. The monomial $x_1^{a_1} \cdots x_m^{a_m}$ is denoted by $x^a$, where $a=(a_1, \dots , a_m) \in \mathbb{Z}^m$. We denote by $\mathbb{R}^m_{+}$ the positive orthant of $\mathbb{R}^m$. A subset $\Delta \subset \mathbb{R}^m_{+}$ is called a {\it Newton polyhedron} when there exists some $P\subset \mathbb{Z}^m_{+}$ such that $\Delta$ is the convex hull of the set $\{p+v: p\in P \textrm{ and } v \in \mathbb{R}^m_{+}\}$. In this case, $\Delta$ is said to be the Newton polyhedron determined by $P$. 
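Whether the complement $\mathbb{R}^m_{+}\setminus \Delta$ is bounded can be tested directly on a generating set $P$: the complement is bounded exactly when, for every coordinate axis, some point of $P$ lies on that axis. (This elementary criterion is our own observation, recorded here only for convenience; it is not taken from the references.) A minimal sketch:

\begin{verbatim}
# Sketch: is the complement of the Newton polyhedron determined by P
# bounded in R^m_+ ?  Criterion (elementary, see the remark above):
# for every i there must be some p in P supported only on the i-th axis.
def has_bounded_complement(P, m):
    for i in range(m):
        if not any(all(p[j] == 0 for j in range(m) if j != i) for p in P):
            return False
    return True

# Generators x, x^3, y, y^2, z (cf. the function f_A of Example exE6L
# below): every axis is met, so the complement is bounded.
print(has_bounded_complement([(1,0,0), (3,0,0), (0,1,0), (0,2,0), (0,0,1)], 3))
# A single entry such as w of the matrix in Example exelementary below
# meets only one axis: unbounded complement.
print(has_bounded_complement([(0,0,0,1)], 4))
\end{verbatim}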
\begin{definition} If $f\in \mathcal{O}_m$ is a germ of a polynomial function $f(x)=\displaystyle \sum_{p \in \mathbb{Z}^m_{+}} c_p x^p$, then the {\it support} of $f$ is $\rm{supp}(f):=\setdef{p\in \mathbb{Z}^m_{+}}{ c_p \neq 0}$. The {\it Newton polyhedron} $\Delta_f$ of $f$ is the Newton polyhedron determined by $\rm{supp}(f)$. If $A(x)=(a_{ij}(x))$ is a germ of an $n\times k$ matrix with polynomial entries, we denote by $$\rm{supp}(A):=\displaystyle\bigcup_{\begin{array}{c} i\in \{1,\dots ,n\} \\ j\in \{1,\dots ,k\} \end{array} } {\rm{supp}}(a_{ij}).$$ The {\it Newton polyhedron} $\Delta_A$ of $A$ is the Newton polyhedron determined by $\rm{supp}(A)$. The coefficient $c_p$ is said to be a {\it leading coefficient} of $a_{ij}(x)=\displaystyle\sum_{p\in \mathbb{Z}^m_{+}} c_p x^p$ if $p$ is contained in a bounded face of $\Delta_A$. The set of leading coefficients of a matrix $A=(a_{ij})$ is the union of the sets of leading coefficients of its entries $a_{ij}$. \end{definition} \begin{remark} Given a germ of a matrix $A$, there is always a germ $\tilde{A}$ equivalent to $A$ such that the Newton polyhedron of each entry of $\tilde{A}$ is equal to $\Delta_A$. Since both matrices $A$ and $\tilde{A}$ define the same determinantal singularity, whenever we need the Newton polyhedron of each entry of a matrix $A$, we can replace them by $\Delta_A$. This will come in handy especially because, in order to compute some invariants, we need the Newton polyhedra to have bounded complement in $\mathbb{R}^m_{+}$. \end{remark} \begin{definition} Let $A$ be a germ of a matrix with polynomial entries. We say that $\tilde{A}$ is $\Delta_A$-equivalent to $A$ if $\tilde{A}$ is equivalent to $A$ and the Newton polyhedron of each entry of $\tilde{A}$ is equal to $\Delta_A$. \end{definition} \begin{example} \label{exelementary} Consider the simple determinantal singularity $\Lambda_{1,1}$, from \cite[pg. 9]{AFG2}, which is given by the germ $A:(\mathbb{C}^4,0)\to (M_{2,3},0)$, where $$A=\left[ \begin{array}{ccc} w & y & x \\ z & w & y \end{array} \right]. $$ None of the Newton polyhedra $\Delta_{i,j}$ of the entries of $A$ has bounded complement. Now, consider the matrices $$P=\left[ \begin{array}{cc} 1 & 1 \\ 1 & 2 \end{array} \right] \textrm{ and } Q=\left[ \begin{array}{ccc} 1 & 1 & 1\\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{array} \right].$$ We obtain the germ $\tilde{A}=P\cdot A\cdot Q$, given by $$\tilde{A}=\left[ \begin{array}{ccc} x+2y+z+2w & x+3y+z+3w & 2x+3y+z+2w\\ x+3y+2z+3w & x+4y+2z+5w & 2x+5y+2z+3w \end{array} \right].$$ The Newton polyhedron $\Delta_{i,j}$ of each entry of $\tilde{A}$ is equal to $\Delta_A$ and has complement in $\mathbb{R}^4_{+}$ equal to the 4-dimensional standard simplex, which is bounded. \end{example} We denote by $f^{\lambda}$ the lowest order non-zero $\lambda$-quasi-homogeneous component of $f$, where ${\lambda}=({\lambda}_1, \dots ,{\lambda}_m)$ is a collection of positive weights, assigned to the variables $x_1, \dots ,x_m$. Given a germ of a matrix $A$, we are interested in applying Esterov's results to the variety $X_{\tilde{A}}^s$, where $\tilde{A}$ is $\Delta_A$-equivalent to $A$. For that, we need to clarify some concepts. 
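The equality $I_s(A)=I_s(P\cdot A\cdot Q)$ of the previous subsection can also be checked symbolically on Example \ref{exelementary}. The sketch below (assuming a standard SymPy installation; the computation is only illustrative) compares the reduced Gr\"{o}bner bases of the ideals of $2\times 2$ minors, which coincide exactly when the ideals do:

\begin{verbatim}
# Sketch: verify I_2(A) = I_2(P*A*Q) for Example exelementary.
import sympy as sp

x, y, z, w = sp.symbols('x y z w')
A = sp.Matrix([[w, y, x], [z, w, y]])
P = sp.Matrix([[1, 1], [1, 2]])
Q = sp.Matrix([[1, 1, 1], [1, 2, 1], [1, 1, 2]])
At = P * A * Q   # the germ \tilde{A}

def minors2(M):
    # the three 2x2 minors of a 2x3 matrix
    return [M.extract([0, 1], list(c)).det()
            for c in ([0, 1], [0, 2], [1, 2])]

G  = sp.groebner(minors2(A),  x, y, z, w, order='lex')
Gt = sp.groebner(minors2(At), x, y, z, w, order='lex')
print(set(G.exprs) == set(Gt.exprs))   # True: the ideals coincide
\end{verbatim}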
Given a germ of a matrix $A(x)= (a_{ij}(x))$, we denote by $f_A$ the function $$f_A(x)=\displaystyle \sum_{p\in supp(A)} x^p.$$ For each entry $$a_{ij}(x)=\displaystyle \sum_{p \in \mathbb{Z}^m_{+}} c_px^p,$$ we denote by $$a^{\lambda}_{ij}(x)=\displaystyle \sum_{p\in supp(f_A^{\lambda})} c_px^p.$$ \begin{example} \label{exE6L} Consider the simple determinantal singularity $E_{6}\vee L$, from \cite[pg. 48]{AFG2}. This singularity is defined by the germ $A:(\mathbb{C}^3,0)\to (M_{2,3},0)$, where $$A=\left[ \begin{array}{ccc} z & -y^2 & -x^3\\ 0 & x & y \end{array} \right]. $$ In this case, $f_A=x+x^3+y+y^2+z$ and for ${\lambda}=(1,1,1)$, the lowest order non-zero ${\lambda}$-quasi-homogeneous component of $f_A$ is $f_A^{\lambda}=x+y+z$. Then $supp(f_A^{\lambda})=\{(1,0,0),(0,1,0),(0,0,1)\}$ and $$(a^{\lambda}_{ij})=\left[ \begin{array}{ccc} z & 0 & 0\\ 0 & x & y \end{array} \right]. $$ \end{example} \begin{definition}\label{defgeneralposition} Let $A=(a_{ij}): (\mathbb{C}^m,0) \rightarrow (M_{n,k},0)$ be a germ of a matrix with polynomial entries. The leading coefficients of $A$ are said to be {\it in general position} if, for every collection ${\lambda}$ of positive weights and every subset $\mathcal{I}\subset \{1, \dots , n\}$, the set of all points $x \in (\mathbb{C} \setminus 0)^m$, such that the matrix $$\displaystyle (a^{\lambda}_{ij}(x))_{ i \in \mathcal{I}, \; j \in \{1, \dots ,k\}}$$ is degenerate, has the maximal possible co-dimension $k - |\mathcal{I}| +1$. \end{definition} We are interested in using these concepts, together with the equivalence of matrices, to present some invariants of determinantal varieties in terms of the Newton polyhedron of the matrix which defines the germ. To do so, we first look at the effects of the equivalence of matrices on the definition of general position. We note that there are some germs of matrices whose leading coefficients may be brought into general position using equivalence and there are also germs which may not, as we can see in the following example. \begin{example}\label{Ex-1} Consider the simple determinantal singularity $\Lambda_{1,1}$ from Example \ref{exelementary}. We have $f_A=x+y+z+w$ and, since it is linear, all coefficients of each entry of $A$ are leading. Therefore, for every collection of positive weights ${\lambda}$, the lowest order ${\lambda}$-quasi-homogeneous component of $f_A$ is $f^{\lambda}_A=x+y+z+w$. Thus, $$(a_{ij}^{\lambda})=A=\left[ \begin{array}{ccc} w & y & x \\ z & w & y \end{array} \right]. $$ We have three possibilities for the set $\mathcal{I}\subset\{1,2\}$: $\mathcal{I}_1=\{1\}$, $\mathcal{I}_2=\{2\}$ and $\mathcal{I}_{12}=\{1,2\}$. 
This gives us the following matrices $$A^{\lambda}_1 (x)=\displaystyle (a^{\lambda}_{ij}(x))_{ i \in \mathcal{I}_1, \; j \in \{1, 2 ,3\}}=\left[ \begin{array}{ccc} w & y & x \end{array} \right],$$ $$A^{\lambda}_2(x)= \displaystyle (a^{\lambda}_{ij}(x))_{ i \in \mathcal{I}_2, \; j \in \{1, 2,3\}}=\left[ \begin{array}{ccc} z & w & y \end{array} \right],$$ $$A^{\lambda}_{12}(x)=\displaystyle (a^{\lambda}_{ij}(x))_{ i \in \mathcal{I}_{12}, \; j \in \{1, 2,3\}}=\left[ \begin{array}{ccc} w & y & x\\ z & w & y \end{array} \right].$$ Since $$\begin{array}{ccc} X_{A^{\lambda}_1}^1\cap (\mathbb{C}\setminus 0)^4= \emptyset & \Rightarrow & \dim (X_{A^{\lambda}_1}^1\cap (\mathbb{C}\setminus 0)^4)\neq 1, \\ X_{A^{\lambda}_2}^1\cap (\mathbb{C}\setminus 0)^4= \emptyset & \Rightarrow & \dim (X_{A^{\lambda}_2}^1\cap (\mathbb{C}\setminus 0)^4)\neq 1, \end{array}$$ the leading coefficients of $A$ are not in general position.\\ However, as in Example \ref{exelementary}, the germ $\tilde{A}$ is $\Delta_A$-equivalent to $A$. With analogous computations, we can see that the leading coefficients of $\tilde{A}$ are indeed in general position.\\ On the other hand, if we consider the simple determinantal singularity $E_6\vee L$ from Example \ref{exE6L}, we have $$A^{\lambda}_{12}(x)=\displaystyle (a^{\lambda}_{ij}(x))_{ i \in \mathcal{I}_{12}, \; j \in \{1, 2,3\}}=\left[ \begin{array}{ccc} z & 0 & 0\\ 0 & x & y \end{array} \right] $$ and $X_{A^{\lambda}_{12}}^2\cap (\mathbb{C}\setminus 0)^3=\emptyset$. Since $X_{A^{\lambda}_{12}}^2=X_{\tilde{A}^{\lambda}_{12}}^2$ for every germ $\tilde{A}$ equivalent to $A$, the leading coefficients of $A$ cannot be brought into general position using equivalence of matrices. \end{example} Motivated by this example, we present the following definition. \begin{definition} Let $A=(a_{ij}): (\mathbb{C}^m,0) \rightarrow (M_{n,k},0)$ be a germ of a matrix with polynomial entries. The leading coefficients of $A$ are said to be {\it in good general position} if there is a germ $\tilde{A}$, $\Delta_A$-equivalent to $A$, such that the leading coefficients of $\tilde{A}$ are in general position. \end{definition} The set of all convex bounded polyhedra in $\mathbb{R}^m$ is denoted by $\mathcal{M}$. The {\it mixed volume} is defined in \cite[Definition $2.6$]{GKZ} as the unique symmetric multi-linear function $$\displaystyle \textrm{MV} : \underbrace{\mathcal{M}\times \cdots \times \mathcal{M}}_m \rightarrow \frac{1}{m!}\mathbb{Z},$$ which satisfies $\textrm{MV}(\Gamma ,\dots ,\Gamma)=\rm{Vol}(\Gamma)$ for every polyhedron $\Gamma\in \mathcal{M}$, where $\rm{Vol}(\Gamma)$ is the usual Euclidean volume of $\Gamma$. More explicitly, we have $$ \textrm{MV}(\Gamma_1, \dots , \Gamma_m) = \displaystyle \frac{1}{m!} \sum_{r=1}^m (-1)^{m-r} \sum_{1\leq i_1 < \cdots < i_r\leq m} \rm{Vol}(\Gamma_{i_1} + \cdots + \Gamma_{i_r}),$$ where $\Gamma_{i_1} + \cdots + \Gamma_{i_r}$ is the Minkowski sum of the sets $\Gamma_{i_1}, \dots , \Gamma_{i_r}$ . For brevity, we denote the mixed volume $$\textrm{MV}(\underbrace{\Gamma_1, \dots , \Gamma_1}_{a_1}, \dots ,\underbrace{\Gamma_r, \dots , \Gamma_r}_{a_r})=\Gamma_1^{a_1}\cdots \Gamma_r^{a_r}.$$ In \cite{Esterov}, Esterov presented a formula to compute the multiplicity of a determinantal variety $X_A^n$ such that $\Delta_j$ has bounded complement, for all $j=1,\dots ,k$, where $\Delta_{j}$ is the Newton polyhedron of $a_{ij}$. 
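The inclusion-exclusion formula above can be implemented directly for full-dimensional polytopes given by their vertex lists. The following sketch (a naive implementation assuming SciPy is available; for serious computations one would rather use dedicated software such as Polymake, which we employ later) recovers, for instance, $\textrm{MV}(L,\dots,L)=\rm{Vol}(L)$:

\begin{verbatim}
# Sketch: mixed volume via the inclusion-exclusion formula displayed
# above, for full-dimensional polytopes given as vertex arrays.
from itertools import combinations, product
from math import factorial
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(P, Q):
    # vertex set (with redundancies) of the Minkowski sum of P and Q
    return np.array([p + q for p, q in product(P, Q)])

def mixed_volume(polys):
    m = len(polys)
    total = 0.0
    for r in range(1, m + 1):
        for idx in combinations(range(m), r):
            S = polys[idx[0]]
            for i in idx[1:]:
                S = minkowski_sum(S, polys[i])
            total += (-1) ** (m - r) * ConvexHull(S).volume
    return total / factorial(m)

# Sanity check: for the standard 2-simplex L, MV(L, L) = Vol(L) = 1/2.
L = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(mixed_volume([L, L]))   # 0.5
\end{verbatim}

In $\mathbb{R}^4$ the same computation gives $4!\cdot L^4=1$, consistent with the computation $m(X_A^2,0)=\binom{3}{2}\cdot 4!\cdot L^4=3$ carried out in the example below.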
Using $\Delta_A$-equivalence of matrices, we improve Esterov's formula: Theorem \ref{multiplicity} depends on a single Newton polyhedron, and the range of matrices to which the formula may be applied is wider. \begin{proof}(of Theorem \ref{multiplicity}) Let $\tilde{A}$ be a germ of a matrix $\Delta_A$-equivalent to $A$. In \cite[Theorem 1.9]{Esterov}, it is proved that $X_{\tilde{A}}^n$ is a determinantal singularity and it has multiplicity \begin{equation} \label{mult} m(X_{\tilde{A}}^n,0)=\displaystyle \sum_{0<j_0<\cdots < j_{k-n}\leq k} m! \cdot \tilde{\Delta}_{j_0}^1 \cdots \tilde{\Delta}_{j_{k-n}}^1 L^{m-k+n-1}, \end{equation} \noindent where $\Delta_j$ is the Newton polyhedron of the function $\tilde{a}_{ij}: \mathbb{C}^m \to \mathbb{C}$. The condition of general position guarantees that the formula does not depend on the row where the Newton polyhedra are computed. Since $X_A^n=X_{\tilde{A}}^n$ and $\Delta_j=\Delta_A$, for all $j=1,\dots ,k$, by Equation \eqref{mult}, the multiplicity of $X_A^n=X_{\tilde{A}}^n$ is $$\begin{array}{ccccc} m(X_A^n,0) & = & m(X_{\tilde{A}}^n,0) & = & \displaystyle \sum_{0<j_0<\cdots < j_{k-n}\leq k} m! \cdot \tilde{\Delta}_A^{k-n+1}L^{m-k+n-1}\\ & & & = & \binom{k}{k-n+1} \cdot m!\cdot \tilde{\Delta}_A^{k-n+1}L^{m-k+n-1}. \end{array}$$ \end{proof} In the proof of the previous theorem the concept of equivalence of matrices is essential: it permits us to compute the multiplicity of $X_A^n$ using only one Newton polyhedron instead of $k$, which makes the computation much easier. Next we exemplify the importance of equivalent matrices to our results. \begin{example} Consider the germ $A$ given in Example \ref{exelementary}. The Newton polyhedron $\Delta_A$ has bounded complement, the leading coefficients of $A$ are in good general position and $\tilde{\Delta}_A= L$. By Theorem \ref{multiplicity}, $X_A^2$ is a determinantal singularity and its multiplicity is $$m(X_A^2,0)=\binom{3}{2} \cdot 4! \cdot L^4 = \binom{3}{2} =3. $$ If we denote by $\Delta_1$, $\Delta_2$ and $\Delta_3$ the Newton polyhedra of the entries of the first row of $A$, none of them has bounded complement in $\mathbb{R}^4_{+}$. Moreover, the leading coefficients of $A$ are not in general position; therefore, \cite[Theorem 1.9]{Esterov} cannot be applied. However, there is a germ $\tilde{A}$ which is $\Delta_A$-equivalent to $A$, and $\tilde{\Delta}_A$ is bounded. Not only can we apply Theorem \ref{multiplicity} to $\tilde{A}$, but the formula given in this theorem is much simpler than Equation \eqref{mult}. \end{example} \begin{corollary} Under the conditions of Theorem \ref{multiplicity}, if the germ $A: (\mathbb{C}^m,0) \rightarrow (M_{n,k},0) $ is such that $\tilde{\Delta}_A=L$, then the multiplicity of $X_A^n$ is given by $$ m(X_A^n,0)=\binom{k}{k-n+1}.$$ \end{corollary} In the next example, we apply Theorem \ref{multiplicity} to compute the local Euler obstruction of a determinantal variety. To do so, we compute the multiplicity of its polar varieties. Lê and Teissier \cite{LT} related the local Euler obstruction of $X$ to its polar multiplicities. More precisely, if $X$ is a germ at zero of a reduced analytic space in $\mathbb{C}^m$ of dimension $d$, then \begin{equation}\label{LeTemultiplicities} \Eu_{X}(0) = \sum_{k=0}^{d-1} (-1)^k m_k(X,0), \end{equation} where $m_k(X,0)$ is the $k^{{\rm{th}}}$ polar multiplicity of $X$ at zero. 
In order to apply Theorem \ref{multiplicity} together with \Eqref{LeTemultiplicities}, it is essential that the polar varieties be determinantal, which is not true in general. \begin{example} Consider the cusp deformation $V=\setdef{(x,y,z) \in \mathbb{C}^3}{ y^2-x^3-x^2z^2=0}$. \begin{figure}[ht!] \centering \includegraphics[scale=0.9]{cuspidedeformacao.pdf} \caption{The cusp deformation.}\label{figure2} \end{figure} \noindent We have $P_0(V)=V$ and, considering the generic linear projection $p(x,y,z)=(x,z)$, we obtain the polar variety $P_1(V)=\setdef{(x,y,z) \in \mathbb{C}^3}{ x+z^2=0 \textrm{ and } y=0}$. \\ Let $A: (\mathbb{C}^3,0) \rightarrow (M_{2,2},0)$ be the germ given by the matrix $$A=\left[\begin{array}{cc} y & x+z^2 \\ x^2 & y \end{array} \right]$$ and $B: (\mathbb{C}^3,0) \rightarrow (M_{1,2},0)$ be the germ given by the matrix $$B=\left[\begin{array}{cc} y & x+z^2 \end{array} \right].$$ Then $P_0(V)=X_A^2$ and $P_1(V)=X_B^1$. Consider the matrix $$Q=\left[\begin{array}{cc} 1 & 1\\ 1 & 2 \end{array} \right].$$ We have the germs $\tilde{A}=QAQ$ and $\tilde{B}=BQ$ given by $$\tilde{A}=\left[\begin{array}{cc} x+x^2+2y+z^2 & 2x+x^2+3y+2z^2 \\ x+2x^2+3y+z^2 & 2x+2x^2+5y+2z^2 \end{array} \right]$$ and $$\tilde{B}=\left[\begin{array}{cc} x+y+z^2 & 2x+y+2z^2 \end{array} \right].$$ Since the leading coefficients of both $\tilde{A}$ and $\tilde{B}$ are in general position, the leading coefficients of $A$ and $B$ are in good general position. Now, $\Delta_A=\Delta_B$, and $\tilde{\Delta}_A$ is bounded (see Figure \ref{figure1}). \begin{figure}[ht!] \centering \includegraphics[scale=0.9]{conedeformacao2.pdf} \caption{The Newton polyhedron of $A$ and $B$.}\label{figure1} \end{figure} \\ Then, by Theorem \ref{multiplicity} \begin{itemize} \item $m_0(V)=m(X_A^2,0)= \displaystyle\binom{2}{1} \cdot 3!\cdot \tilde{\Delta}_A^1L^2=2$; \item $m_1(V)=m(X_B^1,0)=\displaystyle\binom{2}{2} \cdot 3!\cdot \tilde{\Delta}_B^2L^1=1$. \end{itemize} Therefore, by \Eqref{LeTemultiplicities} $$\Eu_V(0)= m_0(V)- m_1(V)=2-1=1.$$ \end{example} \section{Local Euler obstruction of an IDS} Let $f:(\mathbb{C}^m,0) \rightarrow (\mathbb{C},0)$ be a polynomial function and $X_A^s$ be a germ of an IDS in $\mathbb{C}^m$. The Milnor fiber of $f$ restricted to $X_A^s$ is $F_0:=X_A^s\cap B_{\varepsilon} \cap f^{-1}(t_0)$, where $B_{\varepsilon}$ is a ball with center at $0\in \mathbb{C}^m$ and radius $\varepsilon >0$, and $t_0\neq 0$ is small enough. In \cite{Esterov} a formula is presented to compute the Euler characteristic of the Milnor fiber in terms of the Newton polyhedron of $f$ and the Newton polyhedra of the columns of $A$. Using the results from the previous section, we present a simpler formula for this invariant. First, we introduce the following concepts. \begin{definition} \label{definitionstrong} The leading coefficients of $A$ are said to be {\it in strong general position} if, for every collection ${\lambda}$ of positive weights, the polynomial matrix $(a^{\lambda}_{ij})$ defines a non-singular determinantal set in $(\mathbb{C}\setminus 0)^m$. In this case, the leading coefficients of a function $f:(\mathbb{C}^m,0)\to (\mathbb{C},0)$ are said to be {\it in general position with respect to} $A$ if, for every collection ${\lambda}$ of positive weights, the restriction of the 1-form $df^{\lambda}$ to the determinantal set, defined by the matrix $(a^{\lambda}_{ij}(x))$ in $(\mathbb{C}\setminus 0)^m$, has no zeros, which means that $f^{\lambda}$ has no critical points in this determinantal set. 
\end{definition} \begin{remark} Contrary to what happens with general position, equivalence of matrices does not alter strong general position or general position with respect to $A$, since it does not alter the determinantal set. \end{remark} The definitions of general position were introduced by Esterov \cite{Esterov2}; they are generalisations of the concepts of non-degenerate hypersurface and complete intersection singularity (see \cite{Kouch,Oka}). In \cite[pages 5-6]{Esterov}, Esterov discusses their relation with his definition of ``for almost all functions''. \begin{example} \label{exstrong} Consider the germ $A$ given in Example \ref{exelementary} $$A=\left[ \begin{array}{ccc} w & y & x \\ z & w & y \end{array} \right]. $$ Since the matrix $A$ has linear entries, all the coefficients are leading; hence, for every collection of positive weights $\lambda$, we have $(a_{ij}^{\lambda})=A$. Since the origin is the only singular point of $X_A^n$, the leading coefficients of $A$ are in strong general position. Now, consider the function $f:(\mathbb{C}^4,0)\to (\mathbb{C},0)$ given by $f(x,y,z,w)=3x+4y-z+w$. Since $f$ is linear, we have $f^{\lambda}=f$ for every collection of positive weights $\lambda$. Furthermore, the 1-form $df$ only vanishes at the origin of $X_A^n$; hence, the leading coefficients of $df$ are in general position with respect to $A$. \end{example} For a set $I \subset \{1, \dots , m\}$, let $\mathbb{R}^I$ be the coordinate plane given by the equations $x_i=0$, for $i \notin I$. Given a polyhedron $\Delta \subset \mathbb{R}^m_{+}$, we denote by $\tilde{\Delta}^I$ the intersection $\mathbb{R}^I \cap \tilde{\Delta}$, where $\tilde{\Delta}=\mathbb{R}^m_{+}\setminus \Delta$. \begin{proof}(of Theorem \ref{fibramilnortheorem}) Let $\tilde{A}$ be a germ $\Delta_A$-equivalent to $A$. Esterov \cite[Theorem 1.12]{Esterov} shows that the determinantal singularity $X_A^n=X_{\tilde{A}}^n$ is smooth outside the origin and \[ \begin{aligned} \chi (F_0)= \sum_{ \stackrel{a \in \mathbb{N}, \; I\subset \{1,\dots ,m\}}{ \{j_1, \dots , j_q\} \subset \{1, \dots, k\}}} (-1)^{|I|+k-n} & \binom{|I|+q-a-2}{n+q-k-1} \\ &\times \sum_{\stackrel{a_{j_1}, \dots ,a_{j_q} \in \mathbb{N}}{a_{j_1}+ \cdots + a_{j_q}=|I|-a }} |I|!\cdot (\tilde{\Delta}_f^I)^{a} (\tilde{\Delta}^I_{j_1})^{a_{j_1}} \cdots (\tilde{\Delta}^I_{j_q})^{a_{j_q}}, \end{aligned} \] where $\Delta_j$ is the Newton polyhedron of $\tilde{a}_{ij}(x)$. Since $\Delta_A =\Delta_{j}$, for all $j\in\{1,\dots, k\}$, we have \[ \begin{aligned} \chi (F_0)= \sum_{ \stackrel{a \in \mathbb{N}, \; I\subset \{1,\dots ,m\}}{ \{j_1, \dots , j_q\} \subset \{1, \dots, k\}}} (-1)^{|I|+k-n} & \binom{|I|+q-a-2}{n+q-k-1} \\ &\times \sum_{\stackrel{a_{j_1}, \dots ,a_{j_q} \in \mathbb{N}}{a_{j_1}+ \cdots + a_{j_q}=|I|-a }} |I|!\cdot (\tilde{\Delta}_f^I)^{a} (\tilde{\Delta}_A^I)^{a_{j_1}} \cdots (\tilde{\Delta}_A^I)^{a_{j_q}}. 
\end{aligned} \] Furthermore, the number of combinations for the sum $a_{j_1}+ \cdots + a_{j_q}=|I|-a$ is $\displaystyle\binom{|I|-a-1}{q-1}$; hence, we have \noindent $\chi (F_0)=\displaystyle \sum_{ \stackrel{a \in \mathbb{N}, \; I\subset \{1,\dots ,m\}}{ \{j_1, \dots , j_q\} \subset \{1, \dots, k\} }} (-1)^{|I|+k-n} \binom{|I|+q-a-2}{n+q-k-1}$ \begin{flushright} $\displaystyle \times \binom{|I|-a-1}{q-1} \binom{k}{q}\cdot |I|!\cdot (\tilde{\Delta}_f^I)^{a}(\tilde{\Delta}_A^I)^{|I|-a}.$ \end{flushright} We assume $\binom{n}{k}=0$ for $k \notin \{0,\dots ,n\}$; then, all terms in this sum are equal to zero, except the terms with $|I|-a\geq q>k-n$. Hence, \noindent $\displaystyle \chi (F_0)=\sum_{q=k-n+1}^k \sum_{\stackrel{ I\subset \{1,\dots ,m\}}{ |I|\geq q+1 }} \sum_{a=1}^{|I|-q} (-1)^{|I|+k-n} \binom{|I|+q-a-2}{n+q-k-1} $ \begin{flushright} $\displaystyle \binom{|I|-a-1}{q-1} \binom{k}{q}\cdot |I|!\cdot (\tilde{\Delta}_f^I)^a (\tilde{\Delta}_A^I)^{|I|-a}.$ \end{flushright} \end{proof} As a corollary of Theorem \ref{fibramilnortheorem}, we can compute the local Euler obstruction of a determinantal variety with isolated singularity. Let $(X, 0) \subset (\mathbb{C}^m, 0)$ be a $d$-dimensional germ of a reduced equidimensional analytic variety on an open set $U \subset \mathbb{C}^m$. Consider $\mathcal{V} = \left\{V_i \right\}_{i=0}^q$ a Whitney stratification of $U$ adapted to $X$ (i.e. $X$ is a union of strata) and assume that $V_0=\{0\}$ is a stratum. We choose a small enough representative $X$ of $(X, 0)$ such that $0$ belongs to the closure of all strata. We also assume that the strata $V_0,\dots, V_q$ are connected, the analytic sets $\overline{V_1},\ldots,\overline{V_q}$ are reduced and $V_q=X_{\rm reg}$. The Euler obstruction at a point $x \in X$, denoted by ${\rm Eu}_{X}(x)$, was defined by MacPherson \cite{Mac}, using $1$-forms and the Nash modification. An equivalent definition of the local Euler obstruction was given by Brasselet and Schwartz in the context of indices of vector fields \cite{BS}. Brasselet, Lê and Seade \cite{BLS} presented the following Lefschetz type formula for ${\rm Eu}_{X}(0)$, which is very useful to compute the Euler obstruction. \begin{theorem}[\cite{BLS}]\label{BLS} Let $(X,0)$ be an equidimensional reduced algebraic variety and $\mathcal{V} = \left\{V_i\right\}_{i=0}^{q}$ a Whitney stratification of $X$. Then for each generic linear form $l$, there exists $\varepsilon_0$ such that for any $\varepsilon$ with $0<\varepsilon<\varepsilon_0$ and $t_0\neq0$ sufficiently small, the Euler obstruction of $(X,0)$ is equal to \[ {\rm Eu}_X(0)=\sum^{q}_{i=0}\chi(V_i\cap B_{\varepsilon}\cap l^{-1}(t_0)) \cdot {\rm Eu}_{X}(V_i), \] where $B_{\varepsilon}$ is a ball with center at $0$ and radius $\varepsilon$, $\chi (\cdot)$ is the Euler characteristic, ${\rm Eu}_{X}(V_i)$ is the Euler obstruction of $X$ at a point of $V_i, \ i=0,\ldots,q$ and $0<|t_0|\ll\varepsilon\ll1$. \end{theorem} Let $X_A^s \subset \mathbb{C}^m$ be an IDS. Since $X_A^s$ has an isolated singularity at the origin, the partition $$\mathcal{V}=\{\{0\},X_A^s \setminus \{0\}\}$$ is a Whitney stratification of $X_A^s$. Thus, if $l:(\mathbb{C}^m,0)\rightarrow (\mathbb{C},0)$ is a generic linear form, by Theorem \ref{BLS} \[ \begin{aligned} \Eu_{X_A^s }(0)&=\chi (\{0\}\cap B_{\varepsilon} \cap l^{-1}(t_0)) \cdot \Eu_{X_A^s }(\{0\}) \\ &+ \chi ((X_A^s \setminus \{0\})\cap B_{\varepsilon} \cap l^{-1}(t_0)) \cdot \Eu_{X_A^s}(X_A^s \setminus \{0\}). 
\end{aligned} \] On the other hand, as $t_0\neq 0$, we have $\{0\}\cap B_{\varepsilon} \cap l^{-1}(t_0)= \emptyset $. Therefore, $$\chi (\{0\}\cap B_{\varepsilon} \cap l^{-1}(t_0))=0.$$ Moreover, the stratum $X_A^s\setminus \{0\}$ is the smooth part of $X_A^s$; hence $\Eu_{X_A^s}(X_A^s\setminus \{0\})=1$. Consequently, $$ \Eu_{X_A^s}(0)= \chi ((X_A^s\setminus \{0\})\cap B_{\varepsilon} \cap l^{-1}(t_0)).$$ \begin{proof}(of Corollary \ref{CorollaryObstruction}) Hence, Corollary \ref{CorollaryObstruction} follows from Theorem \ref{fibramilnortheorem}. \end{proof} As an application of Corollary \ref{CorollaryObstruction}, in the following example, we compute the local Euler obstruction of an ICIS in terms of a single Newton polyhedron. \begin{example} \label{EuICIS} Let $(X,0) \subset (\mathbb{C}^m,0)$ be an ICIS defined by the polynomial functions $f_1, \dots ,f_k$, where $f_i:(\mathbb{C}^m,0)\rightarrow (\mathbb{C},0)$ for $i=1,\dots ,k$. Since $X$ is an ICIS, $X$ is also a determinantal singularity given by $X=X_A^1$, where $A=[\begin{array}{ccc} f_1 &\cdots & f_k \end{array}]$. Let $l: (\mathbb{C}^m,0) \rightarrow (\mathbb{C},0)$ be a generic linear form. If $\tilde{\Delta}_A$ is bounded, the Newton polyhedron of $l$ intersects all coordinate axes and the leading coefficients of $l$ are in general position with respect to $A$, then the local Euler obstruction of $X$ is $$\displaystyle \Eu_{X}(0)= \sum_{\stackrel{I\subset \{1,\dots ,m\}}{ |I|\geq k+1 }} \sum_{a=1}^{|I|-k} (-1)^{|I|+k-1} \cdot \binom{|I|-a-1}{k-1} \cdot |I|!\cdot (L^I)^a (\tilde{\Delta}_A^I)^{|I|-a}.$$ \end{example} Next, we compute an explicit example of the local Euler obstruction of a non simple ICIS. \begin{example} Consider the hypersurface $X_A^1$ given by the germ $A:(\mathbb{C}^3,0)\to (M_{1,1},0)$, where $$A(x,y,z)=\left[ \begin{array}{c} x^2+x^3+y^6-z^2\end{array}\right].$$ The function $l:(\mathbb{C}^3,0)\to (\mathbb{C},0)$ given by $l(x,y,z)=x-30y-z$ is a generic linear form and the leading coefficients of $l$ are in general position with respect to $A$. Then, by Example \ref{EuICIS} $\begin{array}{lll} \Eu_{X_A^1}(0) & = & \displaystyle \sum_{\stackrel{I\subset \{1,2 ,3\}}{ |I|\geq 2 }} \sum_{a=1}^{|I|-1} (-1)^{|I|} |I|!\cdot (L^I)^a (\tilde{\Delta}_A^I)^{|I|-a} \\ & = & 2!(L^{I_{12}})^1(\tilde{\Delta}_A^{I_{12}})^1+2!(L^{I_{13}})^1(\tilde{\Delta}_A^{I_{13}})^1+2!(L^{I_{23}})^1(\tilde{\Delta}_A^{I_{23}})^1\\ & - & (3!(L)^1(\tilde{\Delta}_A)^2+3!(L)^2(\tilde{\Delta}_A)^1)\\ & = & 2 +2 +2 -4-2=0\\ \end{array}$. \begin{figure}[h!] \centering \includegraphics[scale=0.75]{polyhiper.pdf} \caption{The Newton polyhedron of $A$.}\label{figure3} \end{figure} \end{example} In the following we present a class of IDS for which the local Euler obstruction is given just as a sum of binomial coefficients, which can be easily computed with a computer program. \begin{corollary} \label{corosimplex} When a germ $A$ satisfies the conditions of Corollary \ref{CorollaryObstruction} and $\tilde{\Delta}_A=L$, we have the following formula for the local Euler obstruction of $X_A^n$: \noindent $\Eu_{X_A^n}(0)= \displaystyle\sum_{q=k-n+1}^k \sum_{|I|=q+1}^m \sum_{a=1}^{|I|-q} (-1)^{|I|+k-n} \binom{|I|+q-a-2}{n+q-k-1}$ \begin{flushright} $\displaystyle\times\binom{|I|-a-1}{q-1} \binom{k}{q}\binom{m}{|I|}.$ \end{flushright} \end{corollary} Note that, if the germ $A$ has linear entries, then $\tilde{\Delta}_A$ is the standard simplex. \begin{example} Consider the germ $A$ given in Example \ref{exelementary}. 
Since the matrix $A$ has linear entries, its Newton polyhedron has complement equal to the 4-dimensional standard simplex. Now, consider the generic linear form $l:(\mathbb{C}^4,0)\to (\mathbb{C},0)$ given by $l(x,y,z,w)= 3x+4y-z+w$. As we saw in Example \ref{exstrong}, the leading coefficients of $l$ are in general position with respect to $A$. Then, by Corollary \ref{corosimplex},\\ $\begin{array}{lll} \Eu_{X_A^2}(0) & = & \displaystyle \sum_{q=2}^3 \sum_{|I|=q+1}^4 \sum_{a=1}^{|I|-q} (-1)^{|I|+1} \binom{|I|+q-a-2}{q-2} \binom{|I|-a-1}{q-1} \binom{3}{q}\binom{4}{|I|}\\ & = & -1. \end{array}$ \end{example} \begin{example} Let $X_A^2$ be the simple determinantal singularity $\Omega_k$, from \cite[pg. 25]{AFG2}, defined by the germ $A:(\mathbb{C}^6,0)\to (M_{2,3},0)$, where $$A=\left[ \begin{array}{ccc} x & y & v \\ z & w & x+u^k \end{array} \right]. $$ Then, we have $\Eu_{X_A^2}(0)=\displaystyle \sum_{q=2}^3 \sum_{|I|=q+1}^4 \sum_{a=1}^{|I|-q} (-1)^{|I|+1} \binom{|I|+q-a-2}{q-2}$ \begin{flushright} $\displaystyle \binom{|I|-a-1}{q-1} \binom{3}{q} \cdot |I|!\cdot (L^I)^a (\tilde{\Delta}_A^I)^{|I|-a}=2.$ \end{flushright} Here, we used the mathematical software Polymake to compute the mixed volumes of polyhedra. \end{example} \section{Whitney equisingularity of families of IDS} In this section, we use the information given by Newton polyhedra and strong general position to present conditions which guarantee the Whitney equisingularity of a family of isolated determinantal singularities. The relation between Whitney equisingularity and Newton polyhedra was studied in depth by Brian\c con (in unpublished notes) for a family of non-degenerate isolated hypersurface singularities, and by Eyral and Oka for families of non-degenerate non-isolated hypersurface and complete intersection singularities (\cite{EyralOka,EyralOka2}). Although Theorem \ref{equi} uses most of the same elements (adapted to IDS) as the results presented by these authors, its proof follows a different approach: the constancy of polar multiplicities on a family of IDS. The concept of Whitney equisingularity is strongly related to the polar multiplicities for many important classes of spaces. For instance, in \cite{Gaffney, Gaffney2}, Gaffney showed that a family $\left\{ (X_t,0) \right\}$ of $d$-dimensional isolated complete intersection singularities (ICIS), for any dimension $d$, is Whitney equisingular if, and only if, the polar multiplicities $m_i(X_t, 0)$, $i = 0,\dots, d$ are constant on the family. The polar multiplicities $m_i(X,0)$, for $i = 0,\dots, d-1$, are defined for any variety $(X, 0)$ (see \cite{LT}); however, $m_d(X, 0)$ was defined, initially, only for ICIS in \cite{Gaffney}. Using topological and geometric information from generic linear projections, Pereira and Ruas \cite{MC} and Nu\~no-Ballesteros, Or\'efice-Okamoto and Tomazella \cite{BBT} defined the top polar multiplicity $m_d(X_A^s, 0)$ for an IDS $X_A^s$. In \cite{BBT4}, the authors proved that a family of IDS $\left\{ (X_{A_t}^s,0) \right\}$ is Whitney equisingular if, and only if, the multiplicities $m_i(X_{A_t}^s, 0)$, $i = 0,\dots, d$, do not depend on $t$. Given an IDS $(X_A^s,0)$, the top polar multiplicity $m_d(X_A^s, 0)$ is related to the local Euler obstruction of $X_A^s$. In fact, in \cite[pg. 486]{BBT}, the authors proved $$m_d(X_A^s, 0) = \nu(X_A^s, 0) + (-1)^{d-1} {\rm{Eu}}_{X_A^s}(0) + 1,$$ where $\nu(X_A^s,0)$ is the vanishing Euler characteristic of $X_A^s$. In the following we present the definition of the vanishing Euler characteristic of an IDS. 
\begin{definition} The {\it vanishing Euler characteristic} of an IDS $X_A^s$, denoted by $\nu(X_A^s,0)$, is defined as $$\nu(X_A^s, 0) = (-1)^d(\chi(\tilde{X_{A}^s}) - 1),$$ where $\tilde{X_{A}^s}$ is the generic fiber of the determinantal smoothing\footnote{For the definition see \cite[Definition 3.3]{BBT}.} of $X_A^s$, $\chi( \cdot )$ denotes the Euler characteristic and $d=\dim X_A^s$. \end{definition} For more details on the vanishing Euler characteristic see \cite[Definition 3.2]{BBT}. When $X_A^s$ has co-dimension $2$, the vanishing Euler characteristic coincides with the Milnor number defined by Pereira and Ruas in \cite{MC}. Moreover, the vanishing Euler characteristic satisfies a Lê-Greuel type formula expressing the top polar multiplicity in terms of $\nu(X_A^s,0)$ and the vanishing Euler characteristic of a generic section (see \cite[Theorem 4.3 and Definition 4.4]{BBT}), \ie the following equality holds \begin{equation} \label{toppolar} m_d (X_A^s,0) = \nu(X_A^s, 0) + \nu(X_A^s\cap p^{-1}(0),0), \end{equation} where $p: \mathbb{C}^m \to \mathbb{C}$ is a generic linear function and $d = \dim X_A^s$. Before we prove Theorem \ref{equi}, we recall some notions about determinantal deformations. Consider a map germ $\mathcal{A}: (\mathbb{C}^m \times \mathbb{C}, 0) \to (M_{n,k},0)$ such that $\mathcal{A}(x, 0) = A(x)$ for all $x \in \mathbb{C}^m$. When $X_{A}^s$ is a determinantal variety, the projection $$\begin{array}{cccc} \pi: &(X_{\mathcal{A}}^s,0) & \to & (\mathbb{C},0) \\ & (x,t) & \mapsto & t \end{array}$$ is called a {\it determinantal deformation of} $X_{A}^s$. If we fix a small enough representative $A : B_{\varepsilon} \to (M_{n,k},0)$, where $B_{\varepsilon}$ is the open ball centered at the origin with radius $\varepsilon > 0$, then we set $A_t(x) := \mathcal{A}(x, t)$ and $X_{A_t}^s = A^{-1}_t (M^{s}_{n,k})$. If $X_{A_t}^s$ is a determinantal deformation of $(X_{A_0}^s, 0)$ as above, we say that: \begin{enumerate} \item $X_{A_t}^s$ is origin preserving if $0 \in S(X_{A_t}^s)$, for all $t$ in $D$, where $S(X_{A_t}^s)$ denotes the singular set of $X_{A_t}^s$ and $D \subset \mathbb{C}$ is a disc around the origin. Then $\left\{(X_{A_t}^s, 0) \right\}_{ t\in D }$ is called a {\it $1$-parameter family} of IDS; \item $\left\{(X_{A_t}^s, 0) \right\}_{ t\in D }$ is a good family if there exists $\varepsilon > 0$ with $S(X_{A_t}^s) = \left\{0\right\}$ on $B_{\varepsilon}$, for all $t$ in $D$; \item $\left\{(X_{A_t}^s, 0) \right\}_{ t\in D }$ is Whitney equisingular if it is a good family and $\left\{X_{\mathcal{A}}^s \setminus T, T \right\}$ satisfies the Whitney conditions, where $T = \left\{0\right\} \times D$. \end{enumerate} In \cite[Theorem 5.3]{BBT4}, the authors proved that a good family $\{(X_{A_t}^s, 0)\}$ of IDS of dimension $d$ is Whitney equisingular if, and only if, all the polar multiplicities $m_i(X_{A_t}^s, 0), i = 0,\dots ,d$ are constant on the family. \begin{proof}(of Theorem \ref{equi}) Firstly, since $\tilde{\Delta}_{A_t}$ is bounded and the leading coefficients of $A_t$ are in strong general position, Theorem \ref{fibramilnortheorem} implies that $X_{A_t}^n$ is smooth outside the origin; consequently, $\left\{(X_{A_t}^n, 0) \right\}_{ t\in D }$ is a good family. Moreover, since $X_{A_t}^n$ is smooth outside the origin, it is contractible and hence $\chi (X_{A_t}^n)=1$. By \cite[Corollary 4.3]{BBT4}, $ \nu (X_{A_t}^n,0)$ is constant on $t$, \ie \begin{equation} \label{vanishingconstant} \nu (X_{A_t}^n,0)= \nu (X_{A_0}^n,0), \end{equation} for all $t \in D$. 
To compute the top polar multiplicity $m_d(X_{A_t}^n,0)$, we use Equation \eqref{toppolar} to obtain $$m_d(X_{A_t}^n,0)=\nu (X_{A_t}^n,0) + \nu (X_{A_t}^n\cap H_t,0),$$ where $H_t\subset \mathbb{C}^{m}$ is a generic hyperplane for each $t\in D$. At this point, it is important to notice that $H_t$ denotes a generic hyperplane with respect to $X_{A_t}^n$, which is not necessarily a perturbation of $H_0$. Furthermore, as $H_t$ is a generic hyperplane and $X_{A_t}^n$ is smooth outside the origin, $X_{A_t}^n\cap H_t$ is a $(d-1)$-dimensional determinantal singularity which is smooth outside the origin as well; thus, by the same arguments used for Equation \eqref{vanishingconstant}, we have $\nu (X_{A_t}^n\cap H_t,0) =\nu (X_{A_0}^n\cap H_0,0)$. Hence, \begin{equation}\label{toppolar1} \begin{array}{lll} m_d(X_{A_t}^n,0) & = & \nu (X_{A_t}^n,0) + \nu (X_{A_t}^n\cap H_t,0) \\ & = & \nu (X_{A_0}^n,0) + \nu (X_{A_0}^n\cap H_0,0) \\ &= & m_d(X_{A_0}^n,0). \end{array} \end{equation} On the other hand, the polar multiplicities $m_j(X_{A_t}^n,0)$, $j=0,\dots , d-2$, can be described using generic hyperplanes. In fact, from \cite[Lemma $2.6$]{GGR}, \begin{equation}\label{pmcortes} m_{d-l} (X_{A_t}^n\cap H_t^1\cap \cdots \cap H_t^l,0)=m_{d-l}(X_{A_t}^n,0), \end{equation} where $H_t^1, \dots , H_t^l$ are generic hyperplanes for each $t \in D$. Since $$\dim_{\mathbb{C}^{m-l}} X_{A_t}^n\cap H_t^1\cap \cdots \cap H_t^l=d-l,$$ by \Eqref{toppolar1} and \eqref{pmcortes}, \begin{equation*} \begin{array}{lcl} m_{d-l} (X_{A_t}^n,0) & = & m_{d-l}(X_{A_t}^n\cap H_t^1\cap \cdots \cap H_t^l,0) \\ & = & m_{d-l}(X_{A_0}^n\cap H_0^1\cap \cdots \cap H_0^l,0) \\ & = & m_{d-l} (X_{A_0}^n,0), \end{array} \end{equation*} for all $l=2,\dots , d$. Lastly, for the polar multiplicity $m_{d-1}(X_{A_t}^n,0)$ we use the following formula of Nuño-Ballesteros, Oréfice-Okamoto and Tomazella \cite[pg. 486]{BBT}: $$\Eu_{X_{A_t}^n}(0) = (-1)^{d-1} \nu (X_{A_t}^n\cap H_t,0) +1.$$ Since $\nu (X_{A_t}^n\cap H_t,0) = \nu (X_{A_0}^n\cap H_0,0)$, we have $$\Eu_{X_{A_t}^n}(0)=\Eu_{X_{A_0}^n}(0).$$ Moreover, by Equation \eqref{LeTemultiplicities}, $$\sum_{k=0}^{d-1} (-1)^k m_k (X_{A_t}^n,0)=\Eu_{X_{A_t}^n}(0)=\Eu_{X_{A_0}^n}(0)=\sum_{k=0}^{d-1} (-1)^k m_k (X_{A_0}^n,0).$$ In addition, $m_j(X_{A_t}^n,0)=m_j(X_{A_0}^n,0)$ for $j=0,1,\dots , d-2$, hence $$m_{d-1}(X_{A_t}^n, 0)= m_{d-1}(X_{A_0}^n, 0).$$ Therefore, $m_j(X_{A_t}^n,0)=m_j(X_{A_0}^n,0)$ for all $0\leq j\leq d$. Hence, by \cite[Theorem 5.3]{BBT4}, $\left\{(X_{A_t}^n, 0) \right\}_{ t\in D }$ is Whitney equisingular. \end{proof} Using Theorem \ref{equi}, Example \ref{exelementary} and the elements from the previous sections, we present an example of a Whitney equisingular family. \begin{example} Let $\left\{(X_{A_t}^2, 0) \right\}_{ t\in D }$ be the family of 2-dimensional determinantal singularities defined by the germ $A_t: (\mathbb{C}^4,0) \rightarrow (M_{2,3},0)$, with $$A_t=\left[ \begin{array}{ccc} w+tw^2 & y & x\\ z & w &y \end{array} \right].$$ We have $f_{A_t}=x+y+z+w+tw^2$ and, for every collection of weights $\lambda$, $f^{\lambda}_{A_t}=x+y+z+w$. Therefore, for all $t\in D$, $$((a_{ij}^{\lambda})_t)=A_0=\left[ \begin{array}{ccc} w & y & x \\ z & w &y \end{array} \right].$$ Thus, the leading coefficients of $A_t$ are in strong general position for all $t\in D$. Furthermore, $\Delta_{A_t}=\Delta_{A_0}$ for all $t \in D$ and $\tilde{\Delta}_{A_t}$ is the 4-dimensional standard simplex, which is bounded.
Then, by Theorem \ref{equi}, $\left\{(X_{A_t}^2, 0) \right\}_{ t\in D}$ is Whitney equisingular. \end{example} \section{Applications} Let $X_{A}^s$ be a determinantal singularity defined by the matrix $A:(\mathbb{C}^m,0) \rightarrow (M_{n,k},0)$ and let $f:(X_{A}^s,0) \rightarrow (\mathbb{C},0)$ be a function with isolated singularity at the origin. Since $f^{-1}(0)$ is a hypersurface, $\dim X_{A}^s\cap f^{-1}(0)=\dim X_{A}^s-1$. In general, $X_{A}^s\cap f^{-1}(0)$ is not determinantal, since this variety is not always defined by a matrix germ $A^f:(\mathbb{C}^{m-1},0)\to (M_{n,k},0)$. When such a matrix exists, $X_{A^f}^s = X_{A}^s\cap f^{-1}(0) \subset \mathbb{C}^{m-1}$ is said to be a {\it determinantal fiber}; in particular, a determinantal fiber $X_{A^f}^s\subset \mathbb{C}^{m-1}$ is itself a determinantal variety. \begin{remark} \label{remarkAf} Let $f(x)=\displaystyle \sum_{p\in \mathbb{Z}^m_{+}} c_px^p$ be a polynomial function and suppose that there exists $p=(0,\dots,0,p_i,0,\dots,0)\in supp(f)$ such that every $r\neq p$ in $supp(f)$ has the form $r=(r_1,\dots, r_{i-1},0,r_{i+1}, \dots, r_m)$, \ie the variable $x_i$ appears in $f$ only through the monomial $x_i^{p_i}$. Consider a matrix $A:(\mathbb{C}^m,0) \rightarrow (M_{n,k},0)$ such that for all $q=(q_1,\dots, q_{i-1},q_i,q_{i+1}, \dots, q_m)\in supp(A)$ we have $q_i=\lambda_q p_i$, for some $\lambda_q\in \mathbb{Z}_{+}$. If at least one $\lambda_q\neq 0$, then $X_{A^f}^s = X_{A}^s\cap f^{-1}(0) \subset \mathbb{C}^{m-1}$ is a determinantal fiber defined by a matrix $A^f:(\mathbb{C}^{m-1},0) \rightarrow (M_{n,k},0)$. \end{remark} We can write the support of $A^f$ in terms of the supports of $A$ and $f$; however, this is not an easy task in general. In the following, we present a case where this can be done in a simpler manner: given a point $p=(p_1,\dots , p_m)\in \mathbb{Z}^m_{+}$, we denote $\hat{p_i}=(p_1,\dots ,p_{i-1},p_{i+1},\dots ,p_m)\in \mathbb{Z}^{m-1}_{+}$. Now, let $f(x)=c_{p_{i}}x_{i}^{p_{i}} + \displaystyle \sum_{\hat{p_{i}}\in \mathbb{Z}^{m-1}_{+}} c_{\hat{p_{i}}} x^{\hat{p_{i}}}$ and let $A$ be a germ of a matrix such that $$supp(A)=\{(0,\dots ,0,p_i,0,\dots ,0), (q_1,\dots ,q_{i-1},0, q_{i+1},\dots , q_m): q_j\in \mathbb{Z}_{+} \}.$$ Then $supp(A^f)=\{\hat{k_i}: k\in supp(A)\cup supp(f)\}$ and $\Delta_{A^f}$ is the Newton polyhedron determined by $supp(A^f)$. Note that, whenever $f:(\mathbb{C}^m,0) \rightarrow (\mathbb{C},0)$ is a regular function at the origin, we can make a change of coordinates and assume that $f(x_1,\dots ,x_m)=x_m$; therefore, we are in a particular case of Remark \ref{remarkAf}, and $X_{A}^s\cap f^{-1}(0)$ is a determinantal fiber. In this case, $$supp(A^f)=\{\hat{p_m}: p=(p_1,\dots ,p_m)\in supp(A)\}$$ and $\Delta_{A^f}=\Delta_A\cap \mathbb{R}_{+}^{m-1}$. However, as we can see in Example \ref{exfiber}, regular functions are not the only functions for which $X_{A}^s\cap f^{-1}(0)$ is a determinantal fiber. \begin{example} \label{exfiber} Let $X_A^2$ be the simple determinantal singularity $\Omega_k$, from \cite[pg. 25]{AFG2}, defined by the germ $A:(\mathbb{C}^6,0)\to (M_{2,3},0)$, where $$A=\left[ \begin{array}{ccc} x & y & v \\ z & w & x+u^k \end{array} \right],$$ and let $f:(\mathbb{C}^6,0)\to (\mathbb{C},0)$ be defined by $f(x,y,z,w,v,u)=x^2+y^2+z^2+w^2+v^2-u^k$, for $k\geq 2$. Then, the variety $X_{A}^2\cap f^{-1}(0)\subset \mathbb{C}^5$ is given by the matrix $$A^f=\left[ \begin{array}{ccc} x & y & v \\ z & w & x+x^2+y^2+z^2+w^2+v^2 \end{array} \right].$$ Therefore, $X_{A}^2\cap f^{-1}(0)=X_{A^f}^2$ is a determinantal fiber.
\end{example} \subsection{Whitney equisingularity for families of functions} Let $\{f_t:(X_{A_t}^s,0) \rightarrow (\mathbb{C},0)\}_{t\in D}$ be a good family of functions defined on a family $\{(X_{A_t}^s, 0)\}$ of IDS such that $X_{A_t^{f_t}}^s$ is a determinantal fiber for all $t \in D$. In \cite{CBBT}, the authors proved that this family is Whitney equisingular if, and only if, all the polar multiplicities $m_i(X_{A_t}^s, 0)$, $i = 0,\dots ,d$, and $m_i(X_{A_t^{f_t}}^s, 0)$, $i = 0,\dots ,d-1$, are constant on the family. Let $\mathcal{A}: (\mathbb{C}^m\times \mathbb{C},0) \rightarrow (M_{n,k},0)$ be a determinantal deformation of $X_{A}^s$ and let $$\begin{array}{cccc} F: & (X_{\mathcal{A}}^s,0) & \rightarrow & (\mathbb{C}\times \mathbb{C},0) \\ & (x,t) & \mapsto & F(x,t):=(f_t(x),t) \end{array}$$ be an unfolding of $f$. Then, we say that: \begin{enumerate} \item $F$ is {\it origin preserving} if $0 \in X_{A_t}^s$ and $f_t(0)=0$ for all $t$ small enough. In this case, $F$ defines a 1-parameter family of map germs $\{f_t:(X_{A_t}^s,0) \rightarrow (\mathbb{C},0)\}_{t\in D}$, where $D$ is an open disc around the origin; \item $\{f_t:(X_{A_t}^s,0) \rightarrow (\mathbb{C},0)\}_{t\in D}$ is a {\it good family} if there is a representative defined in $D \times U$ such that $X_{A_t}^s\setminus 0$ is smooth and $f_t$ is regular on $X_{A_t}^s\setminus 0$ for any $t \in D$, where $D$ and $U$ are neighbourhoods of the origin in $\mathbb{C}$ and $\mathbb{C}^m$, respectively; \item $\{f_t:(X_{A_t}^s,0) \rightarrow (\mathbb{C},0)\}_{t\in D}$ is {\it Whitney equisingular} if it is a good family and there is a representative as in item $(ii)$ which admits a regular stratification\footnote{By a regular stratification, we mean a Whitney stratification where $F$ satisfies the Thom condition (see \cite{Massey}).} given by $\mathcal{V}=\{X_{\mathcal{A}}^s\setminus F^{-1}(T), F^{-1}(T)\setminus S, S\}$ in the source and $\mathcal{V}'=\{(\mathbb{C}\times \mathbb{C})\setminus T, T\}$ in the target, where $S=D\times 0\subset \mathbb{C}\times \mathbb{C}^m$ and $T=D\times 0 \subset \mathbb{C}\times \mathbb{C}$. \end{enumerate} Consider the unfoldings $\mathcal{A}:(\mathbb{C}^m\times \mathbb{C},0)\to(M_{n,k},0)$ and $F:(\mathbb{C}^m\times \mathbb{C},0)\to(\mathbb{C}\times\mathbb{C},0)$ of $A:(\mathbb{C}^m,0)\to(M_{n,k},0)$ and $f:(\mathbb{C}^m,0)\to(\mathbb{C},0)$, respectively. Suppose that both $X_{A_t}^s$ and $F$ are origin preserving, that both $\{(X_{A_t}^s,0)\}_{t\in D}$ and $\{f_t:(X_{A_t}^s,0) \rightarrow (\mathbb{C},0)\}_{t\in D}$ are good families, and that $X_{A_t}^s\cap f_t^{-1}(0)$ is a determinantal fiber for all $t\in D$. Then $X_{A_t^{f_t}}^s$ is a determinantal deformation of $X_{A_0^{f_0}}^s$ and \begin{enumerate} \item $X_{A_t^{f_t}}^s$ is origin preserving and $\{(X_{A_t^{f_t}}^s,0)\}_{t\in D}$ is a 1-parameter family of IDS; \item $\{(X_{A_t^{f_t}}^s,0)\}_{t\in D}$ is a good family. \end{enumerate} \begin{example} Let $X_{A_t}^2$ be defined by the matrix $$A_t=\left[ \begin{array}{ccc} w & y+ty^2 & x \\ z & w & y \end{array} \right]$$ and $f_t(x,y,z,w)= x+y+ty^3+z-w$, for $t$ sufficiently small.
For each $t$, $X_{A_t}^2\cap f_t^{-1}(0)=X_{A_t^{f_t}}^2\subset \mathbb{C}^3$ is a determinantal fiber, so we obtain a family of determinantal fibers given by $$A^{f_t}_t=\left[ \begin{array}{ccc} x+y+ty^3+z & y+ty^2 & x \\ z & x+y+ty^3+z & y \end{array} \right].$$ Also, we have $$supp(A_t)=\{(1,0,0,0),(0,1,0,0),(0,2,0,0),(0,0,1,0),(0,0,0,1)\},$$ $$supp(f_t)=\{(1,0,0,0),(0,1,0,0),(0,3,0,0),(0,0,1,0),(0,0,0,1)\},$$ $$supp(A_t^{f_t})=\{(1,0,0),(0,1,0),(0,2,0),(0,3,0),(0,0,1)\}.$$ Furthermore, $\tilde{\Delta}_{A_t^{f_t}}$ is the 3-dimensional standard simplex. \end{example} Now, we assume that $f:(X_A^n,0)\to(\mathbb{C},0)$ is a function germ which is smooth outside the origin, and we suppose that $\tilde{\Delta}_A$ and $\tilde{\Delta}_f$ are bounded. Let $\left\{(X_{A_t}^n, 0) \right\}_{ t\in D }$ be a family of isolated determinantal singularities, with $A_0=A$, and let $F:(X_{\mathcal{A}}^n,0)\rightarrow (\mathbb{C}\times \mathbb{C},0)$ be an unfolding, with $f_0=f$. \begin{corollary} If $f_t$ is smooth outside the origin, the Newton polyhedra of $A_t$ and $f_t$ are independent of $t$, the leading coefficients of $A_t$ are in strong general position and $X_{A_t^{f_t}}^s$ is a determinantal fiber, for all $t\in D$, then the family $\{f_t:(X_{A_t}^n,0) \rightarrow (\mathbb{C},0)\}_{t\in D}$ is Whitney equisingular. \end{corollary} \begin{proof} Since $X_{A_t^{f_t}}^n$ is a determinantal fiber for all $t\in D$, by \cite[Theorem 5.3]{CBBT}, the family $\{f_t:(X_{A_t}^n,0) \rightarrow (\mathbb{C},0)\}_{t\in D}$ is Whitney equisingular if, and only if, the families $\left\{(X_{A_t}^n, 0) \right\}_{ t\in D }$ and $\left\{(X_{A_t^{f_t}}^n, 0) \right\}_{ t\in D }$ are Whitney equisingular. By Theorem \ref{equi}, the family $\left\{(X_{A_t}^n, 0) \right\}_{ t\in D}$ is Whitney equisingular. Furthermore, since $f_t$ is smooth outside the origin and the leading coefficients of $A_t$ are in strong general position, the leading coefficients of $A_t^{f_t}$ are also in strong general position. Lastly, since $\Delta_{f_t}$ and $\Delta_{A_t}$ are independent of $t$, $\Delta_{A^{f_t}}$ is independent of $t$. Moreover, since $\tilde{\Delta}_A$ and $\tilde{\Delta}_f$ are bounded, $\tilde{\Delta}_{A^{f_t}}$ is bounded for all $t\in D$. By Theorem \ref{equi}, the family $\left\{(X_{A_t^{f_t}}^n, 0) \right\}_{ t\in D }$ is Whitney equisingular. Consequently, the family $\{f_t:(X_{A_t}^n,0) \rightarrow (\mathbb{C},0)\}_{t\in D}$ is Whitney equisingular. \end{proof} \subsection{Constancy of Morse points} In \cite{ABOT}, a formula is presented which relates the local Euler obstruction of $f$ to the vanishing Euler characteristic of the fiber $X_{A^f}^s$, where $f : (X_{A}^s, 0) \to (\mathbb{C},0)$ is an analytic function germ with isolated singularity on an IDS. Using the results of \cite{ABOT,NN} and the hypothesis that $X_{A^f}^s$ is a determinantal fiber, we establish a Lê-Greuel type formula for germs of functions $f,g:X_{A}^n \rightarrow \mathbb{C}$ with stratified isolated singularity. To present the next result, we need the following definition. \begin{definition} Let $\mathcal{V}$ be a good stratification\footnote{The concept of good stratification may be found in \cite{Massey}.} of $X$ relative to $f$. We say that $g:(X,0)\rightarrow (\mathbb{C},0)$ is prepolar with respect to $\mathcal{V}$ at the origin if the origin is a stratified singularity of $g$. \end{definition} Let $f:(\mathbb{C}^m,0)\rightarrow (\mathbb{C},0)$ be a holomorphic function germ such that $f|_{X_A^s}: X_A^s \rightarrow \mathbb{C}$ has an isolated singularity at the origin.
The authors define in \cite{BBT} an invariant which provides geometrical and topological information about the Milnor fiber of $f$. In the following we present the definition of this invariant. \begin{definition} The {\it vanishing Euler characteristic} of the fiber $(X_A^s\cap f^{-1}(0),0)$ is defined by $$\nu (X_A^s\cap f^{-1}(0),0)=(-1)^{\dim X_A^s-1} (\chi (\tilde{X_{A}^s}\cap B_{\varepsilon}\cap \tilde{f}^{-1}(c)) -1),$$ where $\tilde{X_{A}^s}$ is the generic fiber of the determinantal smoothing of $X_{A}^s$, $\tilde{f}$ is a morsification of $f$ and $1\gg \varepsilon \gg |c|>0$, with $c$ sufficiently general. \end{definition} \begin{corollary} \label{levanishing} Let $X_{A}^n$ be a $d$-dimensional IDS, let $f:(X_{A}^n,0) \rightarrow (\mathbb{C},0)$ be a function with isolated stratified singularity at the origin and let $g:(X_{A}^n,0) \rightarrow (\mathbb{C},0)$ be a prepolar function with respect to a good stratification $\mathcal{V}$ of $X_{A}^n$ relative to $f$ at $0$. Suppose that $X_{A^g}^n$ is a determinantal fiber. Then $$ \nu (X_{A}^n\cap f^{-1}(0)) +\nu (X_{A^g}^n\cap f^{-1}(0))=n_{reg},$$ where $n_{reg}$ is the number of Morse points which appear in a stratified morsification of $f$ in a small neighbourhood of $0$. \end{corollary} \begin{proof} Since $X_{A}^n$ is an IDS, $X_{A^g}^n$ is a determinantal fiber and $g$ is a prepolar function with respect to a good stratification $\mathcal{V}$ of $X_{A}^n$ relative to $f$ at $0$, both $X_{A}^n$ and $X_{A^g}^n$ are IDS. Thus, by \cite[Proposition 3.7]{ABOT}, the following equations hold: \begin{equation} \label{notbrasselet} \nu (X_{A}^n\cap f^{-1}(0))=(-1)^{d-1}\big[\chi (X_{A}^n\cap f^{-1}(t_0)\cap B_{\varepsilon})-1\big], \end{equation} \begin{equation} \label{notbrasselet2} \nu (X_{A^g}^n\cap f^{-1}(0))=(-1)^{d-2}\big[\chi (X_{A^g}^n\cap f^{-1}(t_0)\cap B_{\varepsilon})-1\big]. \end{equation} Furthermore, adding \Eqref{notbrasselet} and \eqref{notbrasselet2} and applying \cite[Theorem 4.4]{NN}, $$(-1)^{d-1}\big[\chi (X_{A}^n\cap f^{-1}(t_0)\cap B_{\varepsilon})-\chi (X_{A^g}^n\cap f^{-1}(t_0)\cap B_{\varepsilon})\big]=n_{reg}.$$ Therefore, $$\nu (X_{A}^n\cap f^{-1}(0)) + \nu (X_{A^g}^n\cap f^{-1}(0)) = (-1)^{d-1}\big[\chi (X_{A}^n\cap f^{-1}(t_0)\cap B_{\varepsilon})-\chi (X_{A^g}^n\cap f^{-1}(t_0)\cap B_{\varepsilon})\big] = n_{reg}.$$ \end{proof} \begin{corollary}\label{morsepolyhedra} Under the assumptions of Corollary \ref{levanishing}, suppose that $\tilde{\Delta}_f$, $\tilde{\Delta}_A$ and $\tilde{\Delta}_{A^g}$ are bounded, and that the leading coefficients of $f$ are in general position with respect to $A$ and $A^g$, respectively. Then, $n_{reg}$ is given in terms of mixed volumes of $\tilde{\Delta}_f$, $\tilde{\Delta}_A$ and $\tilde{\Delta}_{A^g}$. \end{corollary} \begin{proof} It follows directly from Theorem \ref{fibramilnortheorem} and Corollary \ref{levanishing}. \end{proof} As a consequence of this result, it is possible to guarantee that $n_{reg}$ is constant on families. \begin{corollary} Under the assumptions of Corollaries \ref{levanishing} and \ref{morsepolyhedra}, suppose that for all $t \in D$, the leading coefficients of $f_t$ are in general position with respect to $A_t$ and $A^{g}_{t}$, respectively. If the Newton polyhedra of $f_{t}$, $g_{t}$ and $A_{t}$ are independent of $t$, where $\{f_t, g_t : (X_{A_t}^n,0) \to (\mathbb{C},0)\}_{t \in D}$ and $\{(X_{A_t}^n,0)\}_{t \in D}$ are families of functions and IDS, respectively, then $n_{reg}$ is constant for $t \in D$.
\end{corollary} \vspace{0.7cm} \noindent \textbf{Acknowledgements.} The authors are grateful to Bruna Oréfice Okamoto from DM-UFSCar for helpful conversations about determinantal varieties and for her help with some examples. We would also like to thank Anne Frühbis-Krüger and Matthias Zach for their help in solving some examples and especially with the simplification of the formula from Theorem \ref{fibramilnortheorem}. \bibliographystyle{amsalpha-lmp}
{ "timestamp": "2022-03-22T01:27:23", "yymm": "2105", "arxiv_id": "2105.01805", "language": "en", "url": "https://arxiv.org/abs/2105.01805" }
{ "timestamp": "2021-05-06T02:07:10", "yymm": "2105", "arxiv_id": "2105.01808", "language": "en", "url": "https://arxiv.org/abs/2105.01808" }
\section*{Acknowledgment} {\color{black}The authors would like to thank the shepherd and anonymous reviewers for the constructive and insightful guidance and comments. This work was supported in part by the National Science Foundation under Grants CNS-1828593, OAC-1829771, EEC-1840458, and CNS-1950704, the Office of Naval Research under Grant N00014-20-1-2065, and the Commonwealth Cyber Initiative, an investment in the advancement of cyber R\&D, innovation and workforce development. For more information about CCI, visit cyberinitiative.org.} \bibliographystyle{IEEEtranS} \section{Conclusion and Further Discussions}\label{conclusion} This paper has focused on a deep optimization of the HE-based linear computation in privacy-preserved neural networks. It aims to minimize the Perm operations, thus substantially reducing the overall computation time. To this end, we have proposed {\em GALA: \underline{G}reedy comput\underline{A}tion for \underline{L}inear \underline{A}lgebra}, which views the HE-based linear computation as a series of Homomorphic Add, Mult and Perm operations and chooses the least expensive operation in each linear computation step to reduce the overall cost. GALA has made the following contributions: (1) It has introduced a row-wise weight matrix encoding with combined share generation (i.e., row-encoding-share-RaS (Rotated and Sum)) to reduce the Perm operations for the dot product. (2) It has designed a first-Add-second-Perm approach (named {\em kernel grouping}) to reduce the Perm operations for convolution. {\color{black}As such, GALA efficiently reduces the cost of the HE-based linear computation, which is a critical building block in almost all of the recent frameworks for privacy-preserved neural networks, including GAZELLE, DELPHI, and CrypTFlow2. With its deep optimization of the HE-based linear computation, GALA can be a plug-and-play module integrated into these systems to further boost their efficiency. Our experiments show that GALA achieves a significant speedup of up to 700$\times$ for the dot product and 14$\times$ for the convolution computation under different data dimensions.} Meanwhile, GALA demonstrates an encouraging runtime boost of 2.5$\times$, 2.7$\times$, 3.2$\times$, 8.3$\times$, 7.7$\times$, and 7.5$\times$ over GAZELLE and 6.5$\times$, 6$\times$, 5.7$\times$, 4.5$\times$, 4.2$\times$, and 4.1$\times$ over CrypTFlow2, on AlexNet, VGG, ResNet-18, ResNet-50, ResNet-101, and ResNet-152, respectively. It is worth pointing out that, even with the significant progress toward privacy-preserved machine learning in recent years (including this work), there still exists a large performance gap between plaintext systems (generally below a second) and privacy-preserved systems (ranging from seconds to hundreds of seconds). Nevertheless, the long-term goal of a practical implementation of privacy-preserved machine learning remains promising. First, privacy-preserved machine learning systems are to be deployed on clouds with abundant computation power. Hence, even though such a system takes significantly more time than the plaintext system on the same local hardware, running it on clouds with parallel computing infrastructure can significantly reduce the gap. Second, research efforts on the in-depth optimization of privacy-preserved computation further help to close the runtime gap.
Altogether, the combination of advanced algorithms and cloud computation resources may enable privacy-preserved systems to achieve a response time well suited for practical applications in the near future. \section{Evaluation}\label{evaluation} {\color{black} We conduct the experiments in both LAN and WAN settings. The LAN setting is implemented on a Gigabit Ethernet in our lab between two workstations acting as the client and server, respectively. Both machines run Ubuntu and have an Intel i7-8700 3.2GHz CPU with 12 threads and 16 GB RAM. The WAN setting is based on a connection between a local PC and an Amazon AWS server with an average bandwidth of 200 Mbps and a round-trip time of around 13 ms. We have downloaded the codes released by GAZELLE\footnote{Available at https://github.com/chiraag/gazelle\_mpc}, DELPHI\footnote{\color{black}Available at https://github.com/mc2-project/delphi} and CrypTFlow2\footnote{\color{black}Available at https://github.com/mpc-msri/EzPC/tree/master/SCI}, and run all experiments under the same hardware and network settings. We conduct a series of experiments under various neural network architectures. In each experiment, we first run the baseline algorithm (i.e., GAZELLE, DELPHI or CrypTFlow2) to obtain the baseline total runtime (including online runtime and offline runtime), and then replace the linear computation of the baseline algorithm by GALA to get a new total runtime, which is then used to compute the speedup. While the codes for GAZELLE, DELPHI and CrypTFlow2 are implemented in different ways (for example, GAZELLE is based on its own crypto platform while DELPHI and CrypTFlow2 are based on the Microsoft SEAL library), we focus on the speedup of GALA on top of each of them.} We also set the cryptographic parameters in line with GAZELLE: 1) Parameters for both the HE and GC schemes are selected for a 128-bit security level. 2) A plaintext modulus $p$ of 20 bits is enough to store all the intermediate values in the network computation. 3) The ciphertext modulus $q$ is chosen to be a 60-bit pseudo-Mersenne prime that is slightly smaller than the native machine word on a 64-bit machine to enable lazy modular reductions. 4) The number of slots is selected as the smallest power of two that allows for 128-bit security, which in our case is $n = 2048$. We refer readers to~\cite{juvekar2018gazelle} for more details about the parameter selection. \subsection{Microbenchmarks} In this section, we benchmark and compare the runtime of GALA's linear optimization (i.e., matrix-vector multiplication and convolution computation) with state-of-the-art approaches. GALA has the same communication cost and inference accuracy as GAZELLE while achieving improved computation efficiency. \noindent\textbf{1) Matrix-Vector Multiplication:} Table~\ref{mv_performance} compares the computation complexity of GALA's matrix-vector optimization with GAZELLE and two other optimization schemes (i.e., a diagonal method (Diagonal)~\cite{halevi2014algorithms} and an extended method (Extended)~\cite{chen2019efficient}). We can see that GALA reduces the number of expensive Perm operations (including HstPerm) to zero in our cases, while GAZELLE needs up to 11 Perm operations and Extended~\cite{chen2019efficient} needs up to 520 (including HstPerm).
On the other hand, GALA also maintains a light overhead for HE multiplication/addition, i.e., only one multiplication, compared with the other three optimizations; e.g., Diagonal~\cite{halevi2014algorithms} and Extended~\cite{chen2019efficient} involve up to 2048 multiplications/additions. {\color{black}The runtime results for matrix-vector multiplication are summarized in Table~\ref{mv_performance_time}, which includes the original runtime of GAZELLE, DELPHI and CrypTFlow2, and the speedup of GALA on top of each.} We take the share-RaS calculation cost (see the plaintext computation of the final share at the client in step (d) of Figure~\ref{weight_sum_diagram3}) as part of the runtime cost of GALA for a fair comparison. Meanwhile, as multiple copies are packed in one ciphertext, the HstPerm operation includes a common DecPerm to enable the hoisting optimization for rotations (see the details in~\cite{juvekar2018gazelle}). As can be seen from Table~\ref{mv_performance_time}, GALA's optimization gains a large speedup due to the row-encoding-share-RaS module, which eliminates the costly Perm, Mult, and Add operations needed for a series of RaS calculations. Specifically, GALA achieves speedups of 1795$\times$, 208$\times$ and 57$\times$ over the Diagonal method~\cite{halevi2014algorithms} under different matrix dimensions in the LAN setting. This benefit stems from the fact that the computation complexity of the Diagonal method grows with the input dimension $n_i$, which is always large in state-of-the-art neural networks such as AlexNet~\cite{krizhevsky2012imagenet}, VGG~\cite{simonyan2014very} and ResNet~\cite{he2016deep}. For a similar reason, GALA significantly outperforms the Extended method~\cite{chen2019efficient}. Meanwhile, GALA achieves speedups of 59$\times$, 13$\times$ and 19$\times$ over GAZELLE under different matrix dimensions in the LAN setting. This computation gain comes from the HstPerm-free scheme (i.e., row encoding) and the elimination of the RaS computation (i.e., the share-RaS scheme) compared to GAZELLE, which is particularly effective for a large $\frac{n_i}{n_o}$ ratio and a large number of ciphertext slots (see the superior performance for the $1\times2048$ weight matrix). These features suit current convolutional neural networks well, since such networks feed tens of thousands of values into the fully connected layers~\cite{simonyan2014very, he2016deep}. {\color{black}Compared with DELPHI and CrypTFlow2, GALA achieves a speedup for weight matrix multiplication of up to 700$\times$ in the LAN setting. This is largely due to GALA's deep optimization of the HE computation. We also notice that GALA's speedup decreases in the WAN setting, due to the communication rounds needed for conversions between HE and GC: the total round time is significant compared with the light HE computation overhead. For example, the round-trip time is around 13 milliseconds while GALA's optimized HE cost is within one millisecond.
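To make the contrast concrete, the following plain-Python sketch simulates packed (SIMD) ciphertext slots with ordinary lists, with \texttt{rot()} standing in for the homomorphic Perm operation. It is only a conceptual sketch under our own naming, not GAZELLE's or GALA's actual code: it contrasts a baseline rotate-and-sum (RaS) dot product, which needs $\log_2 n$ Perms, with a share-RaS-style flow in which the server performs only the SIMD Mult and additive masking, and the client finishes the summation in plaintext.
\begin{verbatim}
def rot(v, k):
    # Cyclic rotation of the slot vector v by k positions (one "Perm").
    return v[k:] + v[:k]

def baseline_rotate_and_sum(w_row, x):
    # Baseline dot product: one SIMD Mult, then log2(n) Perm+Add
    # rounds to fold the n slot-wise products into a single sum.
    acc = [wi * xi for wi, xi in zip(w_row, x)]   # 1 homomorphic Mult
    perms, k = 0, len(x) // 2
    while k >= 1:
        acc = [a + b for a, b in zip(acc, rot(acc, k))]  # Perm + Add
        perms, k = perms + 1, k // 2
    return acc[0], perms          # the sum ends up in every slot

def share_ras(w_row, x, r):
    # Share-RaS flavour (conceptual): the server performs one SIMD Mult
    # and adds a random mask r inside the ciphertext (0 Perms); the
    # client decrypts and sums its slots in plaintext. The slot sum and
    # -sum(r) form additive shares of the dot product.
    masked = [wi * xi + ri for wi, xi, ri in zip(w_row, x, r)]
    client_share = sum(masked)    # plaintext summation at the client
    server_share = -sum(r)
    return client_share + server_share, 0

x = [1, 2, 3, 4, 5, 6, 7, 8]
w = [2, 0, 1, 3, 1, 0, 2, 1]
r = [7, 1, 4, 2, 9, 3, 5, 8]
print(baseline_rotate_and_sum(w, x))  # (44, 3): 3 Perms for n = 8
print(share_ras(w, x, r))             # (44, 0): no server-side Perms
\end{verbatim}
Counting calls to \texttt{rot()} mimics counting Perms; this is consistent with GAZELLE's Perm counts in Table~\ref{mv_performance} (11, 10 and 7, matching $\log_2 n_i$), whereas GALA's stay at zero.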
} \begin{table}[!tbp] \footnotesize \centering \caption{Computation complexity of matrix-vector multiplication.} \begin{tabular}{c|c|c|c|c} \hline \hline \multicolumn{5}{c}{Dimension ($n_o\times{n_i}$): 1$\times$2048}\\ \hline Metric & Diagonal\cite{halevi2014algorithms} & GAZELLE & Extended\cite{chen2019efficient} & GALA\\ \hline \# Perm & 0 & 11 & 0& 0\\ \# HstPerm$^{\natural}$ & 2047 & 0 & 2047& 0\\ \# ScMult & 2048 & 1 & 2048& 1\\ \# Add & 2047 & 11 & 2047& 0\\ \hline \hline \multicolumn{5}{c}{Dimension ($n_o\times{n_i}$): 2$\times$1024}\\ \hline Metric & Diagonal\cite{halevi2014algorithms} & GAZELLE & Extended\cite{chen2019efficient} & GALA\\ \hline \# Perm & 0 & 10 & 9& 0\\ \# HstPerm$^{\natural}$ & 1023 & 0 & 511& 0\\ \# ScMult & 1024 & 1 & 512& 1\\ \# Add & 1023 & 10 & 520& 0\\ \hline \hline \multicolumn{5}{c}{Dimension ($n_o\times{n_i}$): 16$\times$128}\\ \hline Metric & Diagonal\cite{halevi2014algorithms} & GAZELLE & Extended\cite{chen2019efficient} & GALA\\ \hline \# Perm & 0 & 7 & 4& 0\\ \# HstPerm$^{\natural}$ & 127 & 0 & 7& 0\\ \# ScMult & 128 & 1 & 8& 1\\ \# Add & 127 & 7 & 11& 0\\ \hline \hline \multicolumn{5}{l}{$^{\natural}$Rotations of the input with a common DecPerm} \end{tabular}\label{mv_performance} \end{table} \begin{table}[!tbp] \footnotesize \centering \caption{Runtime cost of matrix-vector multiplication.} \begin{tabular}{c|c|c|c|c|c} \hline \hline \multicolumn{6}{c}{Dimension ($n_o\times{n_i}$): 1$\times$2048}\\ \hline \multirow{2}{*}{Approach} & \color{black}Comm. & \multicolumn{2}{c|}{LAN (ms)} & \multicolumn{2}{c}{\color{black}WAN (ms)}\\ & \color{black}(MB) & Time & \textbf{Speedup} & \color{black}Time & \color{black}\textbf{Speedup} \\ \hline Diagonal\cite{halevi2014algorithms} & \color{black}0.03 &57 & \textbf{1795$\times$}&\color{black}75 & \color{black}\textbf{4$\times$}\\ Extended\cite{chen2019efficient} & \color{black}0.03 &57.5 & \textbf{1796$\times$}& \color{black}77 &\color{black}\textbf{4$\times$} \\ GAZELLE\cite{juvekar2018gazelle} & \color{black}0.03 & 1.9& \textbf{59$\times$}& \color{black}19.3& \color{black}{1$\times$}\\ \color{black}DELPHI\cite{mishra2020delphi} & \color{black}0.14&\color{black}28 &\color{black}\textbf{700$\times$} & \color{black}59.5& \color{black}\textbf{3.2$\times$}\\ \color{black}CrypTFlow2\cite{rathee2020cryptflow2} & \color{black}0.13 & \color{black}28&\color{black}\textbf{700$\times$} & \color{black}46.2 & \color{black}\textbf{2.5$\times$}\\ \hline \hline \multicolumn{6}{c}{Dimension ($n_o\times{n_i}$): 2$\times$1024}\\ \hline Diagonal\cite{halevi2014algorithms} & \color{black}0.03 &28 & \textbf{208$\times$}&\color{black}47 & \color{black}\textbf{2.5$\times$}\\ Extended\cite{chen2019efficient} & \color{black}0.03 &16 & \textbf{116$\times$}& \color{black}36 &\color{black}\textbf{1.9$\times$} \\ GAZELLE\cite{juvekar2018gazelle} & \color{black}0.03 & 1.8& \textbf{13$\times$}& \color{black}19& \color{black}{1$\times$}\\ \color{black}DELPHI\cite{mishra2020delphi} & \color{black}0.13&\color{black}26.5 &\color{black}\textbf{176$\times$} & \color{black}57.8& \color{black}\textbf{3.1$\times$}\\ \color{black}CrypTFlow2\cite{rathee2020cryptflow2} & \color{black}0.13 & \color{black}26.5&\color{black}\textbf{176$\times$} & \color{black}44.7 & \color{black}\textbf{2.4$\times$}\\ \hline \hline \multicolumn{6}{c}{Dimension ($n_o\times{n_i}$): 16$\times$128}\\ \hline Diagonal\cite{halevi2014algorithms} & \color{black}0.03 &3.7 & \textbf{57$\times$}&\color{black}21 & \color{black}{1$\times$}\\ Extended\cite{chen2019efficient} & 
\color{black}0.03 &1 & \textbf{16$\times$}& \color{black}20.4 &\color{black}{1$\times$} \\ GAZELLE\cite{juvekar2018gazelle} & \color{black}0.03 & 1.2& \textbf{19$\times$}& \color{black}21& \color{black}{1$\times$}\\ \color{black}DELPHI\cite{mishra2020delphi} & \color{black}0.13&\color{black}20.5 &\color{black}\textbf{292$\times$} & \color{black}51.7& \color{black}\textbf{2.8$\times$}\\ \color{black}CrypTFlow2\cite{rathee2020cryptflow2} & \color{black}0.13 & \color{black}20.5&\color{black}\textbf{292$\times$} & \color{black}38.7 &\color{black} \textbf{2.1$\times$}\\ \hline \hline \end{tabular}\label{mv_performance_time} \end{table} \noindent\textbf{2) Convolution Computation:} {\color{black} We benchmark and compare the computation complexity and runtime of GALA with GAZELLE, DELPHI and CrypTFlow2 for the convolution calculation. The details are illustrated in Tables~\ref{cov_performance} and~\ref{conv_performance_time}.} As for the computation complexity, we compare GALA with GAZELLE, whose privacy-preserved convolution calculation over HE is one of the most optimized methods in the current literature. While introducing no extra HE multiplications/additions, GALA reduces the most expensive Perm operations, i.e., DecPerm and HstPerm, by up to 59$\times$ for an input of size 16$\times$16\textit{@}2048 with a kernel of size 1$\times$1\textit{@}512. Such blocks, with many channels and small kernels, are featured in state-of-the-art neural networks such as ResNets~\cite{he2016deep}, which makes GALA well suited to boosting modern networks. As for the runtime comparison shown in Table~\ref{conv_performance_time}, GALA demonstrates speedups of 9$\times$, 14$\times$ and 2.6$\times$ over GAZELLE under different input and kernel dimensions in the LAN setting. As analyzed in Sec.~\ref{system:conv}, due to the fundamental complexity reduction achieved by GALA's kernel grouping approach, GALA reduces the expensive Perm operations by a factor of $\frac{c_i}{c_n}$. As mentioned above, the large speedup is achieved for many input channels and small kernels, so the proposed approach fits very well with state-of-the-art networks such as ResNets~\cite{he2016deep}, where the feature maps always have many channels (which results in a large $c_i$ while $c_n$ is fixed) and the kernels are small (usually 1$\times$1, 3$\times$3 and at most 5$\times$5, which also keeps the HE multiplications/additions small). {\color{black} Meanwhile, the speedup over DELPHI and CrypTFlow2 is up to 7.4$\times$ in the LAN setting. The speedup of GALA in the WAN setting is also decent: up to 8.7$\times$, 6.3$\times$ and 6.5$\times$ over GAZELLE, DELPHI and CrypTFlow2, respectively. This is because, compared with the case of matrix-vector multiplication, the computation cost of convolution is much higher relative to the communication cost.
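The operation counts above trace back to one algebraic fact: homomorphic rotation commutes with addition, so partial results that require the same rotation can be added first and rotated once. The following plain-Python sketch (lists again standing in for ciphertext slots; the setup and variable names are ours, purely for illustration, not GALA's implementation) shows this first-Add-second-Perm reordering on a toy group of partial sums.
\begin{verbatim}
def rot(v, k):
    # Cyclic rotation by k positions (one "Perm").
    return v[k:] + v[:k]

def vec_add(a, b):
    # Slot-wise homomorphic addition.
    return [x + y for x, y in zip(a, b)]

# Suppose several partial convolution results all require the same
# rotation amount k before they are accumulated.
partials = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
k = 1

# Perm-first order: rotate each ciphertext, then add (3 Perms here).
acc1 = [0, 0, 0, 0]
for p in partials:
    acc1 = vec_add(acc1, rot(p, k))

# Add-first order: add the group, then rotate once (1 Perm).
acc2 = rot([sum(col) for col in zip(*partials)], k)

assert acc1 == acc2  # rotation is linear, so the two orders agree
\end{verbatim}
Grouping the partial sums in this way, once per block of output channels, is what yields the $\frac{c_i}{c_n}$ reduction in Perm operations discussed above.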
} \begin{table}[!tbp] \footnotesize \centering \caption{Computation complexity of convolution.} \begin{tabular}{c|c|c|c|c} \hline \hline Input $^{\dag}$ & Kernel $^{\ddag}$ & Metric & GAZELLE\cite{juvekar2018gazelle}& GALA\\ \hline \multirow{4}{*}{16$\times$16$\textit{@}$128} & \multirow{4}{*}{1$\times$1$\textit{@}$128} & \# DecPerm & 1792& 112\\ & & \# HstPerm & 1792& 112\\ & & \# ScMult & 2048& 2048\\ & & \# Add & 2032& 2032\\ \hline \hline \multirow{4}{*}{16$\times$16$\textit{@}$2048} & \multirow{4}{*}{1$\times$1$\textit{@}$512} & \# DecPerm & 114944 & 2048\\ & & \# HstPerm & 114688 & 1792\\ & & \# ScMult & 131072 & 131072\\ & & \# Add & 130944 & 130944\\ \hline \hline \multirow{4}{*}{16$\times$16$\textit{@}$128} & \multirow{4}{*}{3$\times$3$\textit{@}$128} & \# DecPerm & 1808& 128\\ & & \# HstPerm & 1920& 240\\ & & \# ScMult & 18432& 18432\\ & & \# Add & 18416& 18416\\ \hline \hline \multirow{4}{*}{\color{black}16$\times$16$\textit{@}$2048} & \multirow{4}{*}{\color{black}5$\times$5$\textit{@}$64} & \color{black}\# DecPerm & \color{black}14592 & \color{black}312\\ & & \color{black}\# HstPerm &\color{black} 20480 & \color{black}6200\\ & & \color{black}\# ScMult & \color{black}409600& \color{black}409600\\ & & \color{black}\# Add & \color{black}409592& \color{black}409592\\ \hline \hline \multicolumn{5}{l}{$^{\dag}$Dim. is in the form of $u_w\times{u_h}\textit{@}c_i$}\\ \multicolumn{5}{l}{$^{\ddag}$Dim. is in the form of $k_w\times{k_h}\textit{@}c_o$ with $c_i$ channels per kernel} \end{tabular}\label{cov_performance} \end{table} \begin{table}[!t] \footnotesize \centering \caption{Runtime cost of convolution.} \begin{tabular}{c|c|c|c|c|c} \hline \hline \multicolumn{6}{c}{Dimension (Input Dim.$^{\dag}$, Kernel Dim.$^{\ddag}$): 16$\times$16$\textit{@}$128, 1$\times$1$\textit{@}$128}\\ \hline \multirow{2}{*}{Approach} & \color{black}Comm. 
& \multicolumn{2}{c|}{LAN (ms)} & \multicolumn{2}{c}{\color{black}WAN (ms)}\\ & \color{black}(MB) & Time & \textbf{Speedup} & \color{black}Time &\color{black} \textbf{Speedup} \\ \hline GAZELLE & \color{black}0.5 & 321& \textbf{9$\times$}& \color{black}408& \color{black}\textbf{3.2$\times$}\\ \color{black}DELPHI & \color{black}2.1&\color{black}391 &\color{black}\textbf{3.1$\times$} &\color{black} 502& \color{black}\textbf{2.3$\times$}\\ \color{black}CrypTFlow2 & \color{black}2 & \color{black}389&\color{black}\textbf{3.1$\times$} &\color{black} \color{black}482 & \color{black}\textbf{2.2$\times$}\\ \hline \hline \multicolumn{6}{c}{Dimension (Input Dim.$^{\dag}$, Kernel Dim.$^{\ddag}$): 16$\times$16$\textit{@}$2048 , 1$\times$1$\textit{@}$512}\\ \hline GAZELLE & \color{black}8 & 20583.5& \textbf{14$\times$}& \color{black}21784& \color{black}\textbf{8.7$\times$}\\ \color{black}DELPHI & \color{black}31&\color{black}17939 &\color{black}\textbf{4.4$\times$} & \color{black}19205& \color{black}\textbf{3.7$\times$}\\ \color{black}CrypTFlow2 & \color{black}29 & \color{black}17928&\color{black}\textbf{4.4$\times$} & \color{black}19101 & \color{black}\textbf{3.6$\times$}\\ \hline \hline \multicolumn{6}{c}{Dimension (Input Dim.$^{\dag}$, Kernel Dim.$^{\ddag}$): 16$\times$16$\textit{@}$128, 3$\times$3$\textit{@}$128}\\ \hline GAZELLE & \color{black}0.5 & 457& \textbf{2.6$\times$}& \color{black}547& \color{black}\textbf{2.1$\times$}\\ \color{black}DELPHI & \color{black}2&\color{black}2563.6 &\color{black}\textbf{5.8$\times$} & \color{black}2671& \color{black}\textbf{5$\times$}\\ \color{black}CrypTFlow2 & \color{black}1.9 & \color{black}2559&\color{black}\textbf{5.8$\times$} & \color{black}2648 & \color{black}\textbf{5$\times$}\\ \hline \hline \multicolumn{6}{c}{\color{black}Dimension (Input Dim.$^{\dag}$, Kernel Dim.$^{\ddag}$): 16$\times$16$\textit{@}$2048, 5$\times$5$\textit{@}$64}\\ \hline \color{black}GAZELLE & \color{black}8 & \color{black}5875.2& \color{black}\textbf{ 1.7$\times$}& \color{black}7073& \color{black}\textbf{ 1.5$\times$}\\ \color{black}DELPHI & \color{black}31&\color{black}56499 &\color{black}\textbf{ 7.4$\times$} & \color{black}57765& \color{black}\textbf{ 6.3$\times$}\\ \color{black}CrypTFlow2 & \color{black}29 & \color{black}56409&\color{black}\textbf{ 7.4$\times$} & \color{black}57582 & \color{black}\textbf{ 6.5$\times$}\\ \hline \hline \multicolumn{6}{l}{$^{\dag}$Dim. is in the form of $u_w\times{u_h}\textit{@}c_i$}\\ \multicolumn{6}{l}{$^{\ddag}$Dim. is in the form of $k_w\times{k_h}\textit{@}c_o$ with $c_i$ channels per kernel} \end{tabular}\label{conv_performance_time} \end{table} \subsection{Performance with Classic Networks} \begin{table}[!t] \footnotesize \centering \caption{Computation complexity of state-of-the-art neural network models.} \begin{tabular}{c|c|c|c} \hline \hline Net. 
Frameworks & Metric & GAZELLE\cite{juvekar2018gazelle} & GALA\\ \hline \multirow{3}{*}{\color{black}MLP}& \color{black}\# Perm & \color{black}70 & \color{black}55\\ &\color{black}\# ScMult & \color{black}56 & \color{black}56\\ &\color{black}\# Add & \color{black}70 & \color{black}55\\ \hline \hline \multirow{5}{*}{AlexNet}& \# Perm & 40399 & 1157\\ &\# DecPerm & 143 & 142\\ &\# HstPerm & 1493 & 1492\\ &\# ScMult & 481298 & 481298\\ &\# Add & 481096 & 481089\\ \hline \hline \multirow{5}{*}{VGG} & \# Perm & 66055 & 2115\\ & \# DecPerm & 161 & 160\\ &\# HstPerm & 1283 & 1280\\ &\# ScMult & 663556 & 663556\\ &\# Add & 663370 & 663363\\ \hline \hline \multirow{5}{*}{ResNet-18}& \# Perm & 180375 & 5921\\ & \# DecPerm & 483 & 482\\ &\# HstPerm & 3467 & 3464\\ &\# ScMult & 1399363 & 1399363\\ &\# Add & 1398778 & 1398771\\ \hline \hline \multirow{5}{*}{ResNet-50}& \# Perm & 1464119 & 30615\\ &\# DecPerm & 2819 & 2818\\ &\# HstPerm & 3863 & 3848\\ &\# ScMult & 2935408 & 2935408\\ &\# Add & 2931734 & 2931727\\ \hline \hline \multirow{5}{*}{ResNet-101} & \# Perm & 2560823 & 64887\\ & \# DecPerm & 6083 & 6082\\ &\# HstPerm & 8215 & 8200\\ &\# ScMult & 5302896 & 5302896\\ &\# Add & 5294326 & 5294319\\ \hline \hline \multirow{5}{*}{ResNet-152}& \# Perm & 3463991 & 95127\\ & \# DecPerm & 8963 & 8962\\ &\# HstPerm & 12055 & 12040\\ &\# ScMult & 7252592 & 7252592\\ &\# Add & 7239894 & 7239887\\ \hline \hline \end{tabular}\label{overall_nets_num} \end{table} \begin{table}[!t] \footnotesize \centering \caption{\color{black}Runtime cost of the classic MLP model.} {\scriptsize \begin{tabular}{c|c|c|c|c|c} \hline \hline \multicolumn{6}{c}{\color{black}Network Model: MLP}\\ \hline \multirow{2}{*}{\color{black}Approach} & \color{black}Comm. & \multicolumn{2}{c|}{\color{black}LAN (ms)} & \multicolumn{2}{c}{\color{black}WAN (ms)}\\ &\color{black} (MB) & \color{black}Time &\color{black} \textbf{Speedup} & \color{black}Time & \color{black}\textbf{Speedup} \\ \hline \color{black}SecureML & \color{black}0.21 & \color{black}31.9 & \color{black}{ \textbf{2.6}$\times$} &\color{black}79.3 & \color{black}\textbf{1.5$\times$}\\ \color{black}MiniONN & \color{black}4.4 & \color{black}14.1 & \color{black}{ 1$\times$} &\color{black}227.6 & \color{black}{1$\times$}\\ \color{black}GAZELLE & \color{black}0.21 & \color{black}15 &\color{black} { 1$\times$}& \color{black}84.9 &\color{black} { 1$\times$}\\ \color{black}DELPHI & \color{black}84 & \color{black}204.5 & \color{black}\textbf{ 3.1$\times$} &\color{black} 3658.3 & \color{black}{ 1$\times$}\\ \color{black}CrypTFlow2 & \color{black}12.4 & \color{black}246 &\color{black}\textbf{ 2.3$\times$} & \color{black}780.6 & \color{black} { 1.2$\times$}\\ \hline \hline \end{tabular}\label{mlp_performance_time} } \end{table} \begin{table}[!t] \footnotesize \centering \caption{Runtime cost of state-of-the-art models.} {\scriptsize \begin{tabular}{c|c|c|c|c|c} \hline \hline \multicolumn{6}{c}{Network Model: AlexNet}\\ \hline \multirow{2}{*}{Approach} & Comm.
& \multicolumn{2}{c|}{LAN (ms)} & \multicolumn{2}{c}{WAN (ms)}\\ & (MB) & Time & \textbf{Speedup} & Time & \textbf{Speedup} \\ \hline GAZELLE & \color{black}17.45 & 11,019.2 & \textbf{ 2.5$\times$}& \color{black}13,669.6 & \color{black}\textbf{ 1.9$\times$}\\ \color{black}DELPHI& \color{black}617 & \color{black}90,090.1&\color{black}\textbf{ 2.9$\times$} & \color{black}114,955 & \color{black}\textbf{ 2$\times$}\\ \color{black}CrypTFlow2 & \color{black}116.6 & \color{black}69,133.6 &\color{black}\textbf{ 6.5$\times$} & \color{black}73,876.8 & \color{black}\textbf{ 4.8$\times$}\\ \cline{1-1} \color{black}OT-based & \multirow{2}{*}{\color{black}2,108} & \multirow{2}{*}{\color{black}226,431.7} & \multirow{2}{*}{\color{black}\textbf{21$\times$}} & \multirow{2}{*}{\color{black}310,985.6} & \multirow{2}{*}{\color{black}\textbf{20$\times$}}\\ \color{black}CrypTFlow2 & & & & & \\ \hline \hline \multicolumn{6}{c}{Network Model: VGG}\\ \hline GAZELLE & \color{black}22.8 & 18,067.4 & \textbf{ 2.7$\times$}& \color{black}21,566.2 & \color{black}\textbf{ 2$\times$}\\ \color{black}DELPHI & \color{black}718.5 & \color{black}123,198.4 &\color{black}\textbf{ 2.9$\times$} & \color{black}152,176.4 & \color{black}\textbf{ 1.5$\times$}\\ \color{black}CrypTFlow2 & \color{black}150 &\color{black}97,038.9 &\color{black}\textbf{ 6$\times$} & \color{black}103,169.1 & \color{black}\textbf{ 4.6$\times$}\\ \cline{1-1} \color{black}OT-based & \multirow{2}{*}{\color{black}5,063.7 } & \multirow{2}{*}{\color{black}340,342.9 } & \multirow{2}{*}{\color{black}\textbf{ 21$\times$}} & \multirow{2}{*}{\color{black}543,242 } & \multirow{2}{*}{\color{black}\textbf{ 24$\times$}}\\ \color{black}CrypTFlow2 & & & & & \\ \hline \hline \multicolumn{6}{c}{Network Model: ResNet-18}\\ \hline GAZELLE & \color{black}54 & 42,748.3 & \textbf{ 3.2$\times$}& \color{black}51,032.7 & \color{black}\textbf{ 2.3$\times$}\\ \color{black}DELPHI & \color{black}2,033.9 & \color{black}250,618.4 &\color{black}\textbf{ 2.6$\times$} & \color{black}332,524.2 & \color{black}\textbf{ 1.9$\times$}\\ \color{black}CrypTFlow2 & \color{black}354 & \color{black}190,684.7&\color{black}\textbf{ 5.7$\times$} & \color{black}205,146.8 & \color{black}\textbf{ 4.3$\times$}\\ \cline{1-1} \color{black}OT-based & \multirow{2}{*}{\color{black}6,292.1 } & \multirow{2}{*}{\color{black}650,989.7 } & \multirow{2}{*}{\color{black}\textbf{ 19.5$\times$}} & \multirow{2}{*}{\color{black}903,492.6 } & \multirow{2}{*}{\color{black}\textbf{ 19$\times$}}\\ \color{black}CrypTFlow2 & & & & & \\ \hline \hline \multicolumn{6}{c}{Network Model: ResNet-50}\\ \hline GAZELLE & \color{black}297.1 & 276,886.8 & \textbf{ 8.3$\times$}& \color{black}321,600.2 & \color{black}\textbf{ 4$\times$}\\ \color{black}DELPHI & \color{black}10,489 & \color{black}746,568.8 &\color{black}\textbf{ 1.7$\times$} & \color{black}1,167,566.8 & \color{black}\textbf{ 1.4$\times$}\\ \color{black}CrypTFlow2 & \color{black}1,831 & \color{black}425,454.4 &\color{black}\textbf{ 4.5$\times$} & \color{black}499,429.6 & \color{black}\textbf{ 2.9$\times$}\\ \cline{1-1} \color{black}OT-based & \multirow{2}{*}{\color{black}13,104 } & \multirow{2}{*}{\color{black}1,364,463.2 } & \multirow{2}{*}{\color{black}\textbf{ 14.4$\times$}} & \multirow{2}{*}{\color{black}3,307,902.6 } & \multirow{2}{*}{\color{black}\textbf{ 19$\times$}}\\ \color{black}CrypTFlow2 & & & & & \\ \hline \hline \multicolumn{6}{c}{Network Model: ResNet-101}\\ \hline GAZELLE & \color{black}603.1 &486,745.2 & \textbf{ 7.7$\times$}& \color{black}577,454.9 & \color{black}\textbf{ 3.7$\times$}\\ \color{black}DELPHI & \color{black}22,199.4 & \color{black}1,411,383.8 &\color{black}\textbf{ 1.7$\times$} & \color{black}2,302,091.8 & \color{black}\textbf{ 1.3$\times$}\\ \color{black}CrypTFlow2 & \color{black}3,582.8 & \color{black}777,057.4 &\color{black}\textbf{ 4.2$\times$} & \color{black}921,735.6 & \color{black}\textbf{ 2.8$\times$}\\ \cline{1-1} \color{black}OT-based & \multirow{2}{*}{\color{black}23,857 } & \multirow{2}{*}{\color{black}2,467,606.1 } & \multirow{2}{*}{\color{black}\textbf{ 13.3$\times$}} & \multirow{2}{*}{\color{black}6,006,071.4 } & \multirow{2}{*}{\color{black}\textbf{ 18.2$\times$}}\\ \color{black}CrypTFlow2 & & & & & \\ \hline \hline \multicolumn{6}{c}{Network Model: ResNet-152}\\ \hline GAZELLE & \color{black}873.1 & 659,833.7 & \textbf{ 7.5$\times$}& \color{black}786,587 & \color{black}\textbf{ 3.6$\times$}\\ \color{black}DELPHI & \color{black}29,433 & \color{black}1,975,798.9 &\color{black}\textbf{ 1.6$\times$} & \color{black}3,157,176.8 & \color{black}\textbf{ 1.3$\times$}\\ \color{black}CrypTFlow2 & \color{black}5,141 & \color{black}1,065,103.4 &\color{black}\textbf{ 4.1$\times$} & \color{black}1,272,772.6& \color{black}\textbf{ 2.7$\times$}\\ \cline{1-1} \color{black}OT-based & \multirow{2}{*}{\color{black}32,804 } & \multirow{2}{*}{\color{black}3,379,188.7 } & \multirow{2}{*}{\color{black}\textbf{ 13$\times$}} & \multirow{2}{*}{\color{black}8,245,124.5 } & \multirow{2}{*}{\color{black}\textbf{ 17.5$\times$}}\\ \color{black}CrypTFlow2 & & & & & \\ \hline \hline \end{tabular}\label{all_performance_time} } \end{table} {\color{black} In this section, we benchmark the GALA performance on a 4-layer Multi-Layer Perceptron (MLP)\footnote{\color{black}The network structure is 784-128-128-10.}, which is adopted as a baseline network in other privacy-preserving frameworks including GAZELLE, SecureML~\cite{mohassel2017secureml} and MiniONN~\cite{liu2017oblivious}}, as well as on state-of-the-art neural network models including AlexNet~\cite{krizhevsky2012imagenet}, VGG~\cite{simonyan2014very}, ResNet-18~\cite{he2016deep}, ResNet-50~\cite{he2016deep}, ResNet-101~\cite{he2016deep}, and ResNet-152~\cite{he2016deep}. We use the MNIST dataset~\cite{mnist2020} for the MLP and the CIFAR-10 dataset~\cite{CIFAR2020} for the state-of-the-art networks. Table~\ref{overall_nets_num} shows the computation complexity of the proposed GALA compared with GAZELLE. We can see that GALA reduces GAZELLE's Perm operations by 34$\times$, 31$\times$, 30$\times$, 47$\times$, 39$\times$, and 36$\times$ for AlexNet, VGG, ResNet-18, ResNet-50, ResNet-101, and ResNet-152, respectively. The foundation of this speedup is GALA's deep optimization of the HE-based linear computation. {\color{black} We also notice that GALA achieves a limited reduction of Perm operations for the MLP (from 70 to 55). This is due to the small ratio $\frac{n}{n_o}$ between the number of slots in the ciphertext and the output dimension of each layer, which limits the performance gain. {\color{black} The limited gain is also observed in Table~\ref{mlp_performance_time}, which shows the system speedup of GALA over GAZELLE, CrypTFlow2, DELPHI, SecureML and MiniONN. Specifically, GALA boosts CrypTFlow2 by 2.3$\times$ and SecureML by 2.6$\times$ in the LAN setting. Meanwhile, GALA's performance is similar to GAZELLE's and MiniONN's. This is due to the relatively small network size and the noticeable communication overhead (i.e., the total round time is large compared with the computation cost).
Nevertheless, none of the competing schemes achieves better performance than GALA.} It is worth pointing out that the MLP network is not widely adopted in practical scenarios. On the other hand, since state-of-the-art deep neural networks utilize many channels and small kernels to capture data features while the feature maps are large, GALA is especially effective for accelerating such large state-of-the-art network models. Table~\ref{all_performance_time} shows the runtime of GAZELLE, DELPHI and CrypTFlow2, and the speedup of GALA on top of each. By reducing HE operations, especially Perm operations, GALA achieves a noticeable boost over the GAZELLE, DELPHI and CrypTFlow2 frameworks.} Specifically, the results show that GALA boosts GAZELLE by 2.5$\times$ (from 11s to 4.3s), 2.7$\times$ (from 18s to 6.5s), 3.2$\times$ (from 43s to 13s), 8.3$\times$ (from 276s to 33s), 7.7$\times$ (from 486s to 62s), and 7.5$\times$ (from 659s to 87s) in the LAN setting, on AlexNet, VGG, ResNet-18, ResNet-50, ResNet-101, and ResNet-152, respectively. CrypTFlow2 (CCS'20) is the latest framework for privacy-preserved neural networks. It optimizes the nonlinear operations of DELPHI and adopts an HE scheme similar to DELPHI's for the linear operations. GALA is an efficient plug-and-play module to optimize the linear operations of CrypTFlow2. As shown in Tables~\ref{mv_performance_time} and~\ref{conv_performance_time}, GALA's optimization of the linear operations can further boost CrypTFlow2 by 700$\times$ and 7.4$\times$ for matrix-vector multiplication and convolution in the LAN setting, respectively. This speedup stems from GALA's streamlined HE calculation compared with that of CrypTFlow2. A slow-down is observed in the WAN setting, but CrypTFlow2 still gains up to a 6.5$\times$ speedup for convolution, due to the computation-intensive nature of the large input channels and small kernel dimensions featured in state-of-the-art network models. As for the overall system speedup, GALA can boost CrypTFlow2 by 6.5$\times$, 6$\times$, 5.7$\times$, 4.5$\times$, 4.2$\times$, and 4.1$\times$ in LAN, and by 4.8$\times$, 4.6$\times$, 4.3$\times$, 2.9$\times$, 2.8$\times$, and 2.7$\times$ in WAN, based on the aforementioned network architectures. It might appear counter-intuitive that, while CrypTFlow2 is a more recent system than DELPHI, the speedup of GALA over DELPHI is smaller than its speedup over CrypTFlow2. This is because CrypTFlow2 has optimized the nonlinear part of DELPHI, significantly reducing its runtime. As a result, the runtime of the linear operations in CrypTFlow2 accounts for a very high percentage, as illustrated in Table~\ref{percent_nets}. Hence CrypTFlow2 benefits more from GALA's optimization of the linear computation, resulting in a higher speedup in terms of the overall runtime. It is worth pointing out that the ability to accelerate CrypTFlow2 is highly desirable since it is the latest privacy-preserving framework. {\color{black}Meanwhile, we also show GALA's speedup on top of the OT-based CrypTFlow2, which relies on OT to complete the linear computation. Since OT involves significant communication cost, including round cost, the overhead of the linear computation increases compared with the HE-based CrypTFlow2, especially in the WAN setting, which results in a greater speedup for GALA.
} \begin{table}[!tbp] \footnotesize \centering \caption{\color{black}Percentage of the runtime spent on linear computation (\%) in state-of-the-art neural network models.} \begin{tabular}{c|c|c|c|c} \hline \hline \color{black}Networks & \color{black}GAZELLE & \color{black}DELPHI&\color{black}CrypTFlow2 & \color{black}Plaintext\\ \hline \color{black}AlexNet & \color{black}97.7 & \color{black}76.9&\color{black}98.7 & \color{black}98.5\\ \color{black}VGG &\color{black}98.2 &\color{black}77.9 & \color{black}98.8& \color{black}98.1\\ \color{black}ResNet-18 & \color{black}98.3& \color{black}75.1& \color{black}98.6& \color{black}98.9\\ \color{black}ResNet-50 &\color{black}98.5 & \color{black}55.2& \color{black}96.8 & \color{black}97.9\\ \color{black}ResNet-101 &\color{black}98.4 & \color{black}53.2& \color{black}96.5& \color{black}98.3\\ \color{black}ResNet-152 &\color{black}98 & \color{black}52& \color{black}96.4& \color{black}98.4\\ \hline \hline \end{tabular}\label{percent_nets} \end{table} \begin{figure*}[!tbp] \centering \includegraphics[trim= {6.6cm 0.1cm 16cm 0cm}, clip, scale=0.40]{modern_nets.pdf} \caption{Layer-wise accumulated runtime and GALA speedup over GAZELLE on different networks: (a) AlexNet; (b) VGG; (c) ResNet-18; (d) ResNet-50; (e) ResNet-101; (f) ResNet-152. The bars with values on the left y-axis indicate the speedup, and the curve with values on the right y-axis indicates the accumulated runtime. The layers with a speedup of 1 are nonlinear layers.} \label{overall_nets} \end{figure*} Next, we examine the runtime breakdown across the layers of these six state-of-the-art networks, as shown in Fig.~\ref{overall_nets}, which allows for detailed observation. Note that the layer indexing here differs slightly from the original plaintext model for the sake of the HE operations; e.g., the nonlinear activation or pooling following a convolution operation is counted as a separate layer. The $x$-axis of each subfigure in Fig.~\ref{overall_nets} shows the layer index of the sequence of linear (convolution or matrix-vector multiplication) and nonlinear (activation or pooling) layers that constitute each network model. The $y$-axis plots the accumulated running time (in milliseconds) up to a layer, and the speedup of GALA over GAZELLE in each layer. For example, Fig.~\ref{overall_nets} (a) illustrates the result for AlexNet. The most time-consuming computations in GAZELLE are in layers ``6'', ``8'' and ``10'', which are all convolution computations. This is evidenced by the large jump in runtime from these layers to the next layer. GALA decreases the time for these linear computations by nearly 3$\times$. Meanwhile, the nonlinear layers (activation/pooling) have a speedup of 1, as GALA has the same computation cost as GAZELLE in those layers. Since the nonlinear computation contributes only a small portion of the total cost, it does not significantly affect the overall performance gain of GALA, which focuses on accelerating the linear computation. Note that GALA does not benefit much in the first layer of AlexNet, i.e., the first convolution, as the input has only three channels. However, the speedup for the following, more costly convolutions allows GALA to effectively reduce the overall cost. A similar observation can be made for VGG. As for the four ResNet models, the most significant performance gain stems from the convolutions with 1$\times$1 kernels.
As the ResNets repeat blocks with multiple 1$\times$1 convolution kernels, GALA effectively accelerates this type of convolution due to its deeply optimized linear computation mechanism (see details in Sec.~\ref{system:conv}), thus reducing the overall runtime. {\color{black}A similar trend is observed for DELPHI and CrypTFlow2}. {\color{black} It is also worth mentioning that GALA focuses on optimizing the HE-based linear operations only and can be integrated into a baseline model (such as GAZELLE, CrypTFlow2, or DELPHI). The proposed approach does not introduce any approximation; hence it does not result in any accuracy loss compared to the baseline privacy-preserved model. Furthermore, compared with the original plaintext model, the only possible accuracy loss in GALA comes from the quantization of floating-point numbers to fixed-point numbers in the HE operations. Such quantization is indispensable in all HE-based frameworks including CrypTFlow2. From our experiments, the model accuracy loss due to quantization is negligible, as shown in Table~\ref{nets_accuracy}. } \begin{table}[!tbp] \small \centering \caption{\color{black}Accuracy with floating-point and fixed-point computation in state-of-the-art neural network models. Top-1 accuracy: the prediction with the highest probability matches the true label; Top-5 accuracy: one of the five predictions with the highest probabilities matches the true label.} \begin{tabular}{c|c|c|c|c} \hline \hline \multirow{2}{*}{\color{black}Network Models} & \multicolumn{2}{c|}{\color{black}Floating-point } & \multicolumn{2}{c}{\color{black}Fixed-point }\\ \cline{2-5} & \color{black}Top1 & \color{black}Top5 & \color{black}Top1 & \color{black}Top5\\ \hline \color{black}AlexNet &\color{black}78.89\% &\color{black}97.32\% &\color{black}78.43\% &\color{black} 97.26\% \\ \color{black}VGG &\color{black}92.09\% &\color{black}99.72\% &\color{black}92.05\% &\color{black}99.68\% \\ \color{black}ResNet-18 &\color{black}93.33\% &\color{black}99.82\% &\color{black}93.21\% &\color{black} 99.81\%\\ \color{black}ResNet-50 &\color{black}93.86\% &\color{black}99.85\% &\color{black} 93.86\% &\color{black}99.84\% \\ \color{black}ResNet-101 &\color{black}94.16\% &\color{black}99.79\% &\color{black}94.12\% &\color{black}99.79\% \\ \color{black}ResNet-152 &\color{black}94.23\% &\color{black}99.81\% &\color{black}94.15\% &\color{black}99.79\% \\ \hline \hline \end{tabular}\label{nets_accuracy} \end{table} \section{Introduction} Deep Learning (DL) is becoming prevalent and pervasive, e.g., for pattern recognition~\cite{li2015convolutional}, medical diagnosis~\cite{fakoor2013using}, speech recognition~\cite{dahl2012context} and credit-risk assessment~\cite{fan2018denoising}. In particular, the Convolutional Neural Network (CNN) has demonstrated superior performance in computer vision tasks such as image classification~\cite{krizhevsky2012imagenet,simonyan2014very} and facial recognition~\cite{schroff2015facenet}. Since designing and training a deep neural network model requires intensive resources and DL expertise, cloud providers have begun to offer Machine Learning as a Service (MLaaS)~\cite{wang2018rafiki}, where a proprietary DL model is trained and hosted on a cloud. Clients can utilize the service by simply sending queries (inference) to the cloud and receiving results through a web portal. While this emerging cloud service is embraced as an important tool for efficiency and productivity, the interaction between clients and cloud servers leads to new vulnerabilities.
This work focuses on the development of privacy-preserved and computationally efficient MLaaS. Although communication can be readily secured from end to end, privacy still remains a fundamental challenge. On the one hand, the clients must submit their data to the cloud for inference, but they want the data privacy well protected, preventing a curious cloud provider or an attacker with access to the cloud from mining valuable information. In many domains such as health care~\cite{mozaffari2015systematic} and finance~\cite{sohangir2018big}, data are extremely sensitive. For example, when patients transmit their physiological data to the server for medical diagnosis, they do not want anyone (including the cloud provider) to see it. Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) \cite{act1996health} and the recent General Data Protection Regulation (GDPR) in Europe~\cite{file2012proposal} have been put in place to impose restrictions on sharing sensitive user information. On the other hand, cloud providers do not want users to be able to extract their proprietary model, which has been trained with significant resources and effort~\cite{tramer2016stealing}. Furthermore, the trained model contains private information about the training data set and can be exploited by malicious users~\cite{shokri2017membership,song2017machine,wang2018stealing}. To this end, there is an urgent need to develop effective and efficient schemes to ensure that, in MLaaS, a cloud server does not have access to users' data and a user cannot learn the server's model. A series of efforts have been made to enable privacy-preserved MLaaS by leveraging cryptographic techniques, as summarized below. The first is the {\em Homomorphic Encryption (HE)-based approach}. For example, in CryptoNets \cite{cryptonets}, Faster CryptoNets \cite{chou2018faster} and CryptoDL \cite{hesamifard2018privacy}, the client encrypts data using HE and sends the encrypted data to the server. The server performs polynomial computations (e.g., addition and multiplication) over the encrypted data to calculate an encrypted inference result. The client finally obtains the inference outcome after decryption. E2DM \cite{jiang2018secure} adopts a more efficient HE (i.e., packed HE \cite{brakerski2013packed}), which packs multiple messages into one ciphertext and thus improves the computation efficiency. The second approach is based on the {\em Garbled Circuit (GC)}~\cite{yao1986generate}. DeepSecure \cite{rouhani2018deepsecure} and XONN \cite{riazi2019xonn} binarize the computations in neural networks and employ GC to obliviously obtain the prediction without leaking sensitive client data. The third approach exploits {\em Secret Sharing (SS)}. SS is used in \cite{wan2007privacy} and \cite{wagh2019securenn} to split the client data into shares. The server only owns one share of the data. The computations are completed by interactive share exchanges. In addition, Differential Privacy (DP) \cite{shokri2015privacy,abadi2016deep,phan2016differential} and Secure Enclave (SE)~\cite{mckeen2013sgx,ohrimenko2016oblivious,bayerl2020offline,zheng2017opaque} are also explored to protect data security and privacy in neural networks.
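To make the secret-sharing idea concrete, the following toy sketch (plain Python, written solely for illustration; the power-of-two modulus is our own choice and the randomness is not cryptographically secure) shows additive share generation and reconstruction:
\begin{verbatim}
import random

MOD = 1 << 32  # toy modulus; a real protocol fixes this
               # according to the plaintext space

def share(m):
    # Additively share m so that m = (s0 + s1) mod MOD.
    r = random.randrange(MOD)
    return r, (m - r) % MOD   # one party keeps r, the
                              # other receives m - r

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

s0, s1 = share(42)
assert reconstruct(s0, s1) == 42
\end{verbatim}
Neither share alone reveals anything about the secret, since each is uniformly distributed over the modulus.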
{\color{black} In order to deal with the different properties of linearity (weighted-sum and convolution functions) and nonlinearity (activation and pooling functions) in neural network computations, several efforts have been made to orchestrate multiple cryptographic techniques to achieve better performance \cite{zhang2018gelu, li2018falcon, juvekar2018gazelle, mohassel2017secureml, riazi2018chameleon, liu2017oblivious, zheng2019helen, chen2009privacy, yuan2013privacy, mohassel2018aby, xu2019cryptonn, chandran2017ezpc,boemer2020mp2ml, kumar2020cryptflow,rathee2020cryptflow2, mishra2020delphi}.} {\color{black} Among them, the schemes with HE-based linear computations and GC-based nonlinear computations (called the HE-GC neural network framework hereafter) demonstrate superior performance \cite{li2018falcon, juvekar2018gazelle, liu2017oblivious, mishra2020delphi}. Specifically, the GAZELLE framework \cite{juvekar2018gazelle} represents the state-of-the-art design for the HE-based linear computation and achieves a speedup of three orders of magnitude over the classic CryptoNets inference system~\cite{cryptonets}. } Despite the rapid improvement, there is still a significant gap in computation speed, rendering the existing schemes infeasible for practical applications. {\color{black} For example, the time constraints in many real-time applications (such as speech recognition) are within a few seconds~\cite{Alex2020,Google2020}.} In contrast, our benchmark has shown that GAZELLE takes 43 seconds and 659 seconds to run the well-known deep neural networks ResNet-18 and ResNet-152~\cite{he2016deep} on an Intel i7-8700 3.2GHz CPU (see detailed experimental settings in Sec.~\ref{evaluation}), which renders it impractical in real-world applications. This performance gap motivates us to further improve the efficiency of the HE-GC neural network frameworks. In a deep neural network, both the fully-connected and convolutional layers are based on linear computation, while the activation functions perform nonlinear computation. The linear computation dominates the total computation time in state-of-the-art deep neural networks. For example, the runtime of the nonlinear computation in GAZELLE is merely 2.3\%, 1.8\%, 1.7\%, 1.5\%, 1.6\%, and 2\%, respectively, on AlexNet~\cite{krizhevsky2012imagenet}, VGG~\cite{simonyan2014very}, ResNet-18~\cite{he2016deep}, ResNet-50~\cite{he2016deep}, ResNet-101~\cite{he2016deep}, and ResNet-152~\cite{he2016deep}. The nonlinear cost in the original plaintext models is even lower (averaging 1.7\%). This indicates a great potential to speed up the overall system through optimizing linear computations. {\color{black} Although a few recent approaches, e.g., DELPHI~\cite{mishra2020delphi} and CrypTFlow2~\cite{rathee2020cryptflow2}, perform better than GAZELLE in terms of the overall system runtime, they all inherit the HE-based linear computation of GAZELLE. This work contributes a solid optimization of the HE-based linear computation (i.e., dot product and convolution), which can be integrated into those systems (including GAZELLE, DELPHI and CrypTFlow2) to further improve their overall system performance.} The HE-based computation consists of three basic operations: Homomorphic Addition (Add), Multiplication (Mult), and Permutation (Perm). Our investigation has shown that the most time-consuming part of the HE-based computation is a series of Perm operations that are imperative to enable dot product and convolution.
Our experiments show that Perm is 56 times slower than Add and 34 times slower than Mult. As shown in Table~\ref{cost_example}, in the dot product multiplying a 2$\times$2048 matrix with a length-2048 vector, the cost in GAZELLE is dominated by Perm, which takes about \emph{98\% of the computation time}. This observation motivates the proposed linear optimization, which aims to minimize the Perm operations, thus substantially reducing the overall computation time. With fewer Perm operations, the proposed approach demonstrates a 10$\times$ speedup in the above matrix-vector computation. \begin{table}[!htbp] \footnotesize \centering \caption{Cost of matrix-vector multiplication (time in milliseconds).} \begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Total (ms)} & \multicolumn{2}{c|}{Perm} & \multicolumn{2}{c|}{Mult} & \multicolumn{2}{c}{Add} \\ \cline{3-8} & & \# & time & \# & time & \# & time \\ \hline GAZELLE & 2 & 11 & 1.96 & 2 & 0.01 & 11 & 0.037 \\ \hline Proposed & 0.2 & 1 & 0.17 & 2 & 0.01 & 1 & 0.003 \\ \hline \hline \end{tabular}\label{cost_example} \end{table} This significant speedup stems from a simple and efficient idea: choosing the least expensive operation in each linear computation step to reduce the overall cost. We name the proposed approach {\em GALA: \underline{G}reedy comput\underline{A}tion for \underline{L}inear \underline{A}lgebra} in privacy-preserved neural networks. We view the HE-based linear computation as a series of Homomorphic Add, Mult and Perm operations. The two inputs are the encrypted vector (or channels) from the client and the plaintext weight matrix (or kernel) from the server. The output is the encrypted dot product (or convolution). The objective in each step is to choose the most efficient operations in the descending priorities of Add, Mult and Perm. To this end, we (1) design a row-wise weight matrix encoding with combined share generation\footnote{The resultant linear output will be shared between server and client as the input of GC-based nonlinear computation.} (i.e., a row-encoding-share-RaS (Rotate and Sum) approach) to reduce the number of Perm operations in the dot product by $\log_2{\frac{n}{n_o}}$, where $n$ is the number of slots in a ciphertext and $n_o$ is the output dimension of the dot product, and (2) propose a first-Add-second-Perm approach (named {\em kernel grouping}) to reduce the number of Perm operations of convolution by a factor of $\frac{c_i}{c_n}$, where $c_i$ and $c_n$ are respectively the number of channels in the input data and the number of channels that can be packed in a ciphertext. $n$ is always greater than $n_o$ and can be up to 8192 times larger, depending on the dimension of the dataset~\cite{iris2020} and the HE implementation~\cite{sealcrypto}. {\color{black} At the same time, $\frac{c_i}{c_n}$ is at least one and can be up to 256 for state-of-the-art neural network architectures such as ResNets~\cite{he2016deep}, where large channel counts, i.e., 1024 and 2048, and small kernel sizes, i.e., 1$\times$1 and 3$\times$3, are adopted. Larger input data from users results in a smaller $c_n$, which accordingly contributes to a higher speedup, especially in the state-of-the-art CNNs. As such, GALA efficiently boosts the performance of HE-based linear computation, which is a critical building block in almost all of the recent frameworks for privacy-preserved neural networks, e.g., GAZELLE, DELPHI, and CrypTFlow2.
Furthermore, GALA's deep optimization of the HE-based linear computation can be integrated as a plug-and-play module into these systems to further improve their overall efficiency. {\color{black}For example, GALA can serve as a computing module in the privacy-preserved DL platforms MP2ML~\cite{boemer2020mp2ml} and CrypTFlow~\cite{kumar2020cryptflow}, which are compatible with the user-friendly TensorFlow~\cite{abadi2016tensorflow} DL framework.} Our experiments show that GALA achieves a significant speedup of up to 700$\times$ for the dot product and 14$\times$ for the convolution computation under various data dimensions. Meanwhile, GALA demonstrates an encouraging runtime boost of 2.5$\times$, 2.7$\times$, 3.2$\times$, 8.3$\times$, 7.7$\times$, and 7.5$\times$ over GAZELLE and 6.5$\times$, 6$\times$, 5.7$\times$, 4.5$\times$, 4.2$\times$, and 4.1$\times$ over CrypTFlow2, on AlexNet, VGG, ResNet-18, ResNet-50, ResNet-101, and ResNet-152, respectively. More details are given in Sec. \ref{evaluation}. } The rest of the paper is organized as follows. Sec. \ref{background} introduces the primitives that GALA relies on. Sec. \ref{sec:proposed} describes the design details of GALA. The experimental results are illustrated and discussed in Sec. \ref{evaluation}. Finally, Sec. \ref{conclusion} concludes the work. \section{Preliminaries}\label{background} In this section, we introduce the overall system architecture and threat model, as well as the cryptographic tools used in GALA. \subsection{System Model} We consider an MLaaS system shown in Fig.~\ref{mlaas}. The client owns private data. The server is in the cloud and has a well-trained deep learning model to provide the inference service based on the received client's data. For example, a doctor sends an encrypted medical image (such as a chest X-ray) to the server, which runs the neural network model and returns the encrypted prediction to the doctor. The prediction is then decrypted into a plaintext result to assist diagnosis and health care planning. \begin{figure}[!b] \centering \includegraphics[trim= {0cm 0cm 2cm 0cm}, clip, scale=0.5]{mlaas.pdf} \caption{An overview of the MLaaS system.} \label{mlaas} \end{figure} While various deep learning techniques can be employed to enable MLaaS, we focus on the Convolutional Neural Network (CNN), which has achieved wide success and demonstrated superior performance in computer vision tasks such as image classification~\cite{krizhevsky2012imagenet,simonyan2014very} and face recognition~\cite{schroff2015facenet}. A CNN consists of a stack of layers that learn a complex relation among the input data, e.g., the relations between pixels of an input image. It operates on a sequence of linear and nonlinear transformations to infer a result, e.g., whether an input medical image indicates that the patient has tuberculosis. The linear transformations are in two typical forms: \emph{dot product} (i.e., matrix-vector multiplication) and \emph{convolution}. The nonlinear transformations leverage \emph{activations} such as the Rectified Linear Unit (\emph{ReLU}) to approximate complex functions~\cite{hornik1991approximation} and \emph{pooling} (e.g., max pooling and mean pooling) for dimensionality reduction. A CNN repeats the linear and nonlinear transformations recursively to reduce the high-dimensional input data to a low-dimensional feature vector for classification at the \emph{fully connected layer}.
Without loss of generality, we use image classification as an example in the following discussion, aiming to provide a lucid understanding of the CNN architecture as illustrated in Fig.~\ref{overview}. \begin{figure}[!b] \centering \includegraphics[width=1\columnwidth]{overview.pdf} \caption{An overview of the CNN model.} \label{overview} \end{figure} \emph{Convolution.} The input to a convolutional layer has the dimension of $u_w\times u_h\times c_i$, where $u_w$ and $u_h$ are the width and height of the input feature map and $c_i$ is the number of feature maps (or channels). For the first layer, the feature maps are simply the input image(s). Hereafter, we use the subscripts $i$ and $o$ to denote the input and output, respectively. The input is convolved with $c_o$ groups of kernels. The size of each group of kernels is $k_w\times k_h\times c_i$, in which $k_w$ and $k_h$ are the width and height of the kernel. The number of channels of each kernel group must match that of the input, i.e., $c_i$. The convolution produces the feature output, with a size of $w_o\times h_o\times c_o$. Specifically, the process of convolution can be visualized as placing the kernel at different locations of the input data. At each location, a sum of the element-wise product is computed between the kernel and the corresponding data values within the kernel window, as shown in Fig.~\ref{overview}. \emph{Dot Product.} The last convolutional layer is typically connected with the fully-connected layer, which computes the weighted sum, i.e., a dot product between the weight matrix $w$ of size $n_o\times n_i$ and a flattened feature vector of size $n_i\times 1$. The output is a vector with the size of $n_o\times 1$. Each element of the output vector is calculated as a sum of the element-wise product between one row of the weight matrix and the flattened feature vector, as shown in Fig.~\ref{overview}. \emph{Activation.} Nonlinear activation is applied to convolutional and weighted-sum outputs in an elementwise manner, as shown in Fig.~\ref{overview}. The commonly used activation functions include \textit{ReLU}, $f(x)=\max\{0,x\}$; \emph{sigmoid}, $f(x)=\frac{1}{1+e^{-x}}$; and \emph{tanh}, $f(x)=\frac{e^{2x}-1}{e^{2x}+1}$. The last layer uses the \emph{softmax} function $f(x_i)=\frac{e^{x_i}}{\sum_j e^{x_j}}$ to normalize the output into a probability vector. \emph{Pooling.} Pooling conducts downsampling to reduce dimensionality. In this work, we consider \emph{mean pooling}, which is implemented in CryptoNets and also commonly adopted in state-of-the-art CNNs. It splits a feature map into regions and averages the regional elements. Compared to max pooling (another pooling function, which selects the maximum value in each region), the authors of~\cite{zhou2016learning} have claimed that while the max and mean pooling functions are rather similar, the use of mean pooling encourages the network to identify the complete extent of the object, which builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. In HE-GC neural network frameworks, mean pooling is easily conducted on the shares of both client and server, without extra cost~\cite{liu2017oblivious,juvekar2018gazelle}. In this work, we mainly focus on privacy-preserved linear optimization (i.e., convolution and dot product). The privacy-preserved nonlinear optimizations (especially activations) are based on GC, as introduced in other HE-GC approaches such as GAZELLE \cite{juvekar2018gazelle}.
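As a plaintext reference for these transformations, the following numpy sketch (our own illustration, assuming stride 1, zero padding, an odd-sized kernel, and a single channel) spells out the convolution, dot product, ReLU, and mean pooling defined above:
\begin{verbatim}
import numpy as np

def conv2d(x, k):
    # Place the kernel center at every input location and
    # sum the element-wise products (zero padding, stride 1).
    kw, kh = k.shape
    xp = np.pad(x, ((kw // 2, kw // 2), (kh // 2, kh // 2)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kw, j:j + kh] * k)
    return out

def dot_product(w, x):
    # Fully-connected layer: (n_o x n_i) matrix times
    # a flattened length-n_i feature vector.
    return w @ x

def relu(v):
    return np.maximum(0, v)

def mean_pool(x, s=2):
    # Average non-overlapping s x s regions.
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(
        h // s, s, w // s, s).mean(axis=(1, 3))

x = np.arange(16.0).reshape(4, 4)
y = mean_pool(relu(conv2d(x, np.ones((3, 3)))))
\end{verbatim}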
\subsection{Threat Model} Similar to GAZELLE~\cite{juvekar2018gazelle} and other previous works, namely SecureML~\cite{mohassel2017secureml}, MiniONN~\cite{liu2017oblivious}, DeepSecure~\cite{rouhani2018deepsecure} and XONN~\cite{riazi2019xonn}, we adopt the semi-honest model, in which both parties try to learn additional information from the messages received (assuming they have a bounded computational capability). That is, the client $\mathcal{C}$ and server $\mathcal{S}$ will follow the protocol, but $\mathcal{C}$ wants to learn the model parameters and $\mathcal{S}$ attempts to learn the client's data. Note that many applications are built on well-known deep network structures such as AlexNet~\cite{krizhevsky2012imagenet}, VGG-16/19~\cite{simonyan2014very} and ResNet-50~\cite{he2016deep}. Hence, we do not intend to protect the structure (number of layers, kernel size, etc.), but focus on the protection of the model parameters. In the case that the implemented structure is proprietary and has to be protected, the server can introduce redundant layers and kernels to hide the real structure at a computational expense~\cite{liu2017oblivious, juvekar2018gazelle}. Hence, the overarching goal is to make the server oblivious of the private data from the client, and to prevent the client from learning the model parameters of the server. GAZELLE has demonstrated the security of the HE-GC neural network framework according to the cryptographic standard of ideal/real security~\cite{goldreich2019play, goldreich2009foundations, goldwasser1989knowledge}. The same security framework is adopted in this work. {\color{black} Note that, while the client can use the server's prediction service as a black-box oracle to extract the model~\cite{tramer2016stealing,wang2018stealing}, or even infer the training set~\cite{fredrikson2015model, nasr2018comprehensive, shokri2017membership}, GALA does not aim to protect against such black-box attacks. Instead, it focuses on protecting the input data and the model parameters during the inference process, which stays in line with the threat model of GAZELLE~\cite{juvekar2018gazelle}, SecureML~\cite{mohassel2017secureml}, DELPHI~\cite{mishra2020delphi}, CrypTFlow2~\cite{rathee2020cryptflow2}, etc. The output of the neural network model is returned to the client, which decrypts the result and obtains the plaintext prediction. } \subsection{Cryptographic Tools}\label{priliminary:crypto} The proposed privacy-preserved deep neural network framework, i.e., GALA, employs three fundamental cryptographic tools as outlined below. \emph{(1) Packed Homomorphic Encryption}. Homomorphic Encryption (HE) is a cryptographic primitive that supports meaningful computations on encrypted data without the decryption key, and it has found increasing applications in data communications, storage and computations~\cite{takabi2010security}. Traditional HE operates on individual ciphertext~\cite{paillier1999public}, while \emph{packed homomorphic encryption} (PHE) enables packing of multiple values into a single ciphertext and performs component-wise homomorphic computation in a Single Instruction Multiple Data (SIMD) manner~\cite{brakerski2013packed} to take advantage of parallelism.
Among various PHE techniques, our work builds on the Brakerski-Fan-Vercauteren (BFV) scheme \cite{fan2012somewhat}, which involves four parameters\footnote{The readers are referred to \cite{juvekar2018gazelle} for more details.}: 1) the ciphertext modulus $q$, 2) the plaintext modulus $p$, 3) the number of ciphertext slots $n$, and 4) a Gaussian noise with a standard deviation $\sigma$. The secure computation involves two parties, i.e., the client $\mathcal{C}$ and the server $\mathcal{S}$. In PHE, the encryption algorithm encrypts a plaintext message vector $x$ from $\mathbb{Z}^n$ into a ciphertext $[x]$ with $n$ slots. We denote $[x]_{\mathcal{C}}$ and $[x]_{\mathcal{S}}$ as the ciphertext encrypted by client $\mathcal{C}$ and server $\mathcal{S}$, respectively. The decryption algorithm returns the plaintext vector $x$ from the ciphertext $[x]$. Computation can be performed on the ciphertext. In a general sense, an evaluation algorithm takes as input several ciphertext $[x_1],[x_2],\cdots,$ and outputs a ciphertext $[x']=f([x_1],[x_2],\cdots)$. The function $f$ is constructed by homomorphic addition (Add), multiplication (Mult) and permutation (Perm). Specifically, Add($[x]$,$[y]$) outputs a ciphertext $[x+y]$ which encrypts the elementwise sum of $x$ and $y$. Mult($[x]$,$s$) outputs a ciphertext $[x\odot{s}]$ which encrypts the elementwise multiplication of $x$ and the plaintext $s$. It is worth pointing out that GALA is designed to require only scalar multiplication between a ciphertext and a plaintext, but not the much more expensive multiplication between two ciphertext. Hereafter, we use ScMult to denote the scalar multiplication involved in GALA. Perm($[x]$) permutes the $n$ elements in $[x]$ into another ciphertext $[x_\pi]$, where $x_\pi=(x({\pi{_0}}),x({\pi{_1}}),\cdots)$ and $\pi_i$ is a permutation of $\{0,1,\cdots,n-1\}$. Additionally, the computation cost for a series of Perm operations on the same ciphertext can be optimized by first conducting one Perm Decomposition (DecPerm) on the ciphertext and then doing the corresponding series of Hoisted Perm (HstPerm) operations~\cite{juvekar2018gazelle}. Since only one DecPerm is involved, it can amortize the total permutation time. The run-time of Add and ScMult is significantly lower than that of Perm. From our experiments, a Perm operation is 56 times slower than an Add operation and 34 times slower than a ScMult operation. This observation motivates the proposed linear optimization, which aims to minimize the number of Perm operations, thus substantially reducing the overall computation time. Meanwhile, PHE introduces noise in the ciphertext, which theoretically hides the original message \cite{juvekar2018gazelle, brakerski2012fully}. Assume the noises of $[x]$ and $[y]$ are $\eta_0$ and $\eta_1$, respectively; then the noise after an Add operation is approximately $\eta_0+\eta_1$. The noise after a ScMult operation is $\eta_{mult}\eta_0$, where $\eta_{mult}$ is the \textit{multiplicative noise growth of the SIMD scalar multiplication operation}~\cite{juvekar2018gazelle}. The noise after a Perm operation is $\eta_0+\eta_{rot}$, where $\eta_{rot}$ is the \textit{additive noise growth of a permutation operation}~\cite{juvekar2018gazelle}. Roughly, we have $\eta_{rot}>\eta_{mult}\gg\eta_0\gg1$. If the noise goes beyond a certain level, the decryption would fail. Thus, it is also important to maintain good noise management over the ciphertext.
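These operations and noise rules can be mimicked with a plaintext stand-in, where a ciphertext is modeled as a slot vector plus a scalar noise estimate (a toy sketch for intuition only: no actual encryption takes place, and the noise constants are placeholders of our own choosing rather than BFV parameters):
\begin{verbatim}
import numpy as np

ETA_MULT, ETA_ROT = 40.0, 60.0  # placeholders (eta_rot > eta_mult)

class Ct:
    # Toy packed "ciphertext": n slots plus a noise estimate.
    def __init__(self, slots, noise=1.0):
        self.slots = np.asarray(slots, dtype=float)
        self.noise = noise

def add(a, b):      # Add: elementwise sum; noise ~ eta_a + eta_b
    return Ct(a.slots + b.slots, a.noise + b.noise)

def sc_mult(a, s):  # ScMult: ciphertext-plaintext product;
                    # noise ~ eta_mult * eta_a
    return Ct(a.slots * np.asarray(s, dtype=float),
              ETA_MULT * a.noise)

def perm(a, j):     # Perm: rotate slots by j; noise ~ eta_a + eta_rot
    return Ct(np.roll(a.slots, -j), a.noise + ETA_ROT)

x = Ct(np.arange(8))
y = add(sc_mult(x, np.full(8, 2.0)), perm(x, 1))  # 2x + rot(x, 1)
\end{verbatim}
Tracking the noise field through a candidate sequence of operations gives a quick sanity check of whether a computation plan stays within the decryption budget.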
We will show in Sec.~\ref{noise_manage} that GALA has better noise control than GAZELLE, which further guarantees the overall correctness of the linear computations. \emph{(2) Secret Sharing}. In the secret sharing protocol, a value is shared between two parties, such that combining the two secrets yields the true value \cite{riazi2018chameleon}. In order to additively share a secret $m$, a random number, $r$, is selected and two shares are created as $\langle{m}\rangle_0=r$ and $\langle{m}\rangle_1=m-r$. Here, $m$ can be either plaintext or ciphertext. A party that wants to share a secret sends one of the shares to the other party. To reconstruct the secret, one only needs to add the two shares: $m=\langle{m}\rangle_0+\langle{m}\rangle_1$. While the overall idea of secret sharing (SS) is straightforward, creative designs are often required to enable its effective application in practice. Specifically, in the HE-GC neural network framework, the linear result from the dot product or convolution is encrypted at the server side and needs to be shared with the client to enable the following GC-based nonlinear computation. Assume $m$ is the resultant ciphertext of a linear computation at the server; GAZELLE then generates the share $\langle{m}\rangle_0=r$ and sends $\langle{m}\rangle_1=m-r$ to the client. The two shares act as the input of the GC-based nonlinear computation. Here the computation of $m$ involves a series of Perm operations, which is time-consuming. Instead of directly generating the share $\langle{m}\rangle_0=r$ for $m$, we develop a \textit{share-RaS (Rotate and Sum) computing} for the dot product, which lets the server generate an indirect share $r'$ for an incomplete version of $m$, denoted $m'$, such that the true $r$ can be easily derived from $r'$ and the true $\langle{m}\rangle_1=m-r$ can be easily derived from $m'-r'$. The computation of $m'$ eliminates a large number of Perm operations, thus reducing the computation complexity. Specifically, our results show that the proposed \textit{share-RaS computing} demonstrates a 19$\times$ speedup for the dot product multiplying a 16$\times$128 matrix with a length-128 vector (the detailed benchmarks are shown in Sec. \ref{evaluation}). {\color{black} \emph{(3) Oblivious Transfer}. In the 1-out-of-$k$ Oblivious Transfer (OT)~\cite{brassard1986all}, denoted as $(\myfrac{k}{1})$-OT$_\ell$, the sender's inputs are the $k$ strings, $m_0, m_1, \cdots, m_{k-1}\in{\{0, 1\}^{\ell}}$, and the receiver's input is a value $i\in{\{0, 1, \cdots, k-1\}}$. At the end of the OT execution, the receiver obtains $m_i$ from the functionality and the sender receives no output. Here, the OT protocol guarantees that 1) the receiver learns nothing about $m_{j,j\neq{i}}$, and 2) the sender learns nothing about $i$. An advancement in the practicality of OT protocols is the OT extension~\cite{ishai2003extending}, which has been further optimized, e.g., in~\cite{kolesnikov2013improved}. A special type of OT extension is the correlated OT extension (COT)~\cite{asharov2013more}. Particularly, the 1-out-of-2 COT, denoted as $(\myfrac{2}{1})$-COT$_\ell$, can be used for linear computation\footnote{We refer readers to~\cite{beaver1991efficient, demmler2015aby, mohassel2017secureml, rathee2020cryptflow2} for more details.}. In $(\myfrac{2}{1})$-COT$_\ell$, the sender's two inputs to each OT are not independent. Instead, the two inputs to each OT instance are a random value $s_0$ and a value $s_1 = f(s_0)$ for a correlation function $f$ of the sender's choice.
The receiver obtains either $s_0$ or $s_1$ as output, depending on its choice bit $b$. } \section{System Description}\label{sec:proposed} In this section, we introduce the proposed system, GALA, for streamlining the linear computations (i.e., matrix-vector multiplication and convolution) in privacy-preserved neural network models. The HE-based linear computation consists of three basic operations: Homomorphic Addition (Add), Multiplication (Mult), and Permutation (Perm). Our investigation has shown that the linear computation dominates the total computation cost and that the most time-consuming part of the HE-based linear computation is a series of Perm operations that are imperative to enable dot product and convolution. GALA aims to minimize the Perm operations, thus substantially reducing the overall computation time. We view the HE-based linear computation as a series of Add, Mult and Perm operations. The two inputs to the linear computation are the encrypted vector (or channels) from the client and the plaintext weight matrix (or kernel) from the server. The output is the encrypted dot product (or convolution). The objective in each step is to choose the most efficient operations in the descending priorities of Add, Mult and Perm. {\color{black} Therefore, the overhead for the HE-based linear computation can be efficiently reduced by GALA. The recent privacy-preserved neural network frameworks can integrate GALA as a plug-and-play module to further boost their efficiency.} We also analyze the (better) noise management and (guaranteed) system security of GALA. \subsection{Row-encoding-share-RaS Matrix-Vector Multiplication}\label{sys:matrix_vector} We first focus on the matrix-vector multiplication (dot product), which multiplies a plaintext matrix at the server with an encrypted vector from the client. We begin with a naive method, followed by the mechanism employed in the state-of-the-art framework (i.e., GAZELLE \cite{juvekar2018gazelle}), and then introduce the proposed optimization of GALA, which significantly improves the efficiency of matrix-vector multiplication. For a lucid presentation of the proposed GALA and a comparison with the state-of-the-art framework, we adopt the same system model used in \cite{juvekar2018gazelle}. More specifically, we consider a Fully Connected (FC) layer with $n_i$ inputs and $n_o$ outputs. The number of slots in one ciphertext is $n$. We also adopt the assumptions used in \cite{juvekar2018gazelle}: $n$, $n_i$ and $n_o$ are powers of two, and $n_o$ and $n_i$ are smaller than $n$. If they are larger than $n$, the original $n_o\times{n_i}$ matrix can be split into $n{\times}n$ sized blocks that are processed independently. \begin{figure}[!bp] \centering \includegraphics[scale=0.31]{linearcomp1.pdf} \caption{Naive matrix-vector multiplication.} \label{weight_sum_diagram1} \end{figure} \vspace*{0.05in}\noindent\textbf{1) Naive Method:} The naive calculation for matrix-vector multiplication is shown in Figure \ref{weight_sum_diagram1}, where $\bm{w}$ is the $n_o\times{n_i}$ plaintext matrix on the server and $[\bm{x}]_{\mathcal{C}}$ is the HE-encrypted vector provided by the client. The server encodes each row of $\bm{w}$ into a separate plaintext vector (see step (a) in Figure \ref{weight_sum_diagram1}). The length of each encoded vector is $n$ (including padded 0's if necessary). We denote these encoded plaintext vectors as $\bm{w}_0, \bm{w}_1, \cdots, \bm{w}_{(n_o-1)}$.
For example, the yellow and green rows in step (a) of Figure \ref{weight_sum_diagram1} are $\bm{w}_0$ and $\bm{w}_1$, respectively. The server intends to compute the dot product between $\bm{w}$ and $[\bm{x}]_{\mathcal{C}}$. To this end, it first uses ScMult to compute the elementwise multiplication between $\bm{w}_i$ and the encrypted input vector $[\bm{x}]_{\mathcal{C}}$ to get $[\bm{u}_i]_{\mathcal{C}} = [\bm{w}_i\odot\bm{x}]_{\mathcal{C}}$ (see step (b) in Figure \ref{weight_sum_diagram1}). The sum of all elements in $\bm{u}_i$ will be the $i$-th element of the desired dot product between $\bm{w}$ and $[\bm{x}]_{\mathcal{C}}$. However, as discussed in Sec. \ref{priliminary:crypto}, it is not straightforward to obtain the sum under the packed HE. A \emph{rotate-and-sum} (RaS) calculation must be used here, as illustrated in step (c) of Figure \ref{weight_sum_diagram1}. Specifically, the entries in $[\bm{u}_i]_{\mathcal{C}}$ are first rotated through Perm by $\frac{n_i}{2}$ positions such that the first $\frac{n_i}{2}$ entries of the rotated $[\bm{u}_i]_{\mathcal{C}}$ are actually the second $\frac{n_i}{2}$ entries of the original $[\bm{u}_i]_{\mathcal{C}}$. Then the server uses Add to conduct elementwise addition between the rotated $[\bm{u}_i]_{\mathcal{C}}$ and the original $[\bm{u}_i]_{\mathcal{C}}$, which results in a ciphertext whose first $\frac{n_i}{2}$ entries contain the elementwise sum of the first and second $\frac{n_i}{2}$ entries of $\bm{u}_i$. The server conducts this RaS process for $\log_2n_i$ iterations. Each iteration acts on the resultant ciphertext from the previous iteration and rotates by half as many positions as the previous one, as shown in step (c) of Figure \ref{weight_sum_diagram1}. Finally, the server gets a ciphertext whose first entry is the $i$-th element of $\bm{wx}$. By applying this procedure on each of the $n_o$ rows (i.e., $\bm{w}_0, \bm{w}_1, \cdots, \bm{w}_{(n_o-1)}$), the server obtains $n_o$ ciphertext. Altogether, the first entries of those ciphertext correspond to $\bm{wx}$. We now analyze the complexity of the above linear computation process, in terms of the number of operations and output ciphertext. We consider the process starting from the server's reception of $[\bm{x}]_{\mathcal{C}}$ (i.e., the encrypted input data from the client) until it obtains the to-be-shared ciphertext\footnote{In HE-GC neural network computing, the resultant ciphertext from the linear calculation are shared between client and server as the input of the GC-based nonlinear function.} (i.e., the $n_o$ ciphertext after RaS). There are a total of $n_o$ scalar multiplication (ScMult) operations, $n_o\log_2n_i$ Perm operations and $n_o\log_2 n_i$ Add operations. This yields $n_o$ output ciphertext, each of which contains one element of the linear result $\bm{wx}$. This inefficient use of the ciphertext space results in low efficiency for the linear computation. \begin{figure}[!tbp] \centering \includegraphics[scale=0.3]{linearcomp2.pdf} \caption{Hybrid matrix-vector multiplication.} \label{weight_sum_diagram2} \end{figure} \vspace*{0.05in}\noindent\textbf{2) Hybrid Calculation (GAZELLE):} In order to fully utilize the $n$ slots in a ciphertext and further reduce the complexity, the state-of-the-art scheme combines the diagonal encoding \cite{halevi2014algorithms} and RaS, by leveraging the fact that $n_o$ is usually much smaller than $n_i$ in FC layers.
This hybrid method shows that the number of expensive Perm operations is a function of $n_o$ rather than $n_i$, thus accelerating the computation of FC layers~\cite{juvekar2018gazelle}. The basic idea of the hybrid method is shown in Figure~\ref{weight_sum_diagram2}. Specifically, the server encodes $\bm{w}$ into $n_o$ plaintext vectors in a diagonal manner. For example, in step (a) of Figure~\ref{weight_sum_diagram2}, the first plaintext vector $\bm{w}_0$ consists of the yellow elements of matrix $\bm{w}$, (A1, B2, A3, B4), and the second plaintext vector $\bm{w}_1$ consists of the green elements (A2, B3, A4, B1). Note that the $\bm{w}_0$ in this method is different from the $\bm{w}_0$ in the naive method of Figure~\ref{weight_sum_diagram1}. So is $\bm{w}_1$. The server then rotates $[\bm{x}]_{\mathcal{C}}$ by $i$ positions, as shown in step (b), and uses ScMult to perform elementwise multiplication with $\bm{w}_i$. For example, in step (c) of Figure~\ref{weight_sum_diagram2}, $\bm{w}_0$ is multiplied with the encrypted data $[\bm{x}]_{\mathcal{C}}$ and $\bm{w}_1$ is multiplied with the input that is rotated by one position (i.e., $[\bm{x}']_{\mathcal{C}}$). As a result, the server gets $n_o$ multiplied ciphertext, $\{[\bm{u}_i]_{\mathcal{C}}\}$. The entries in each of $\{[\bm{u}_i]_{\mathcal{C}}\}$ are partial sums of the elements in the matrix-vector multiplication $\bm{wx}$. For example, as shown in step (c) of Figure \ref{weight_sum_diagram2}, the server obtains two multiplied ciphertext (i.e., $[\bm{u}_0]_{\mathcal{C}}$ and $[\bm{u}_1]_{\mathcal{C}}$) whose elements are partial sums of the first and second elements of $\bm{wx}$ (i.e., (A1M1 + A2M2 + A3M3 + A4M4) and (B1M1 + B2M2 + B3M3 + B4M4)). Then the server sums them up elementwise to form another ciphertext, which is the vector in the middle of step (d) in Figure~\ref{weight_sum_diagram2}. At this point, similar to the naive method, the server proceeds with $\log_2{\frac{n_i}{n_o}}$ RaS iterations and finally obtains a single ciphertext whose first $n_o$ entries are the corresponding $n_o$ elements of $\bm{wx}$ (see the first two elements of the vector after RaS in step (d)). Furthermore, as the number of slots $n$ in a ciphertext is always larger than the dimension of the input vector, $n_i$, the computation cost is further reduced by packing as many copies of the input $\bm{x}$ as possible to form $[\bm{x}_\textmd{pack}]_{\mathcal{C}}$. Thus $[\bm{x}_\textmd{pack}]_{\mathcal{C}}$ has $\frac{n}{n_i}$ copies of $\bm{x}$ and the server is able to multiply $\frac{n}{n_i}$ encoded vectors with $[\bm{x}_{\textmd{pack}}]_{\mathcal{C}}$ by one ScMult operation. Therefore, the server gets $\frac{n_in_o}{n}$ rather than $n_o$ multiplied ciphertext. The resultant single ciphertext now has $\frac{n}{n_o}$ rather than $\frac{n_i}{n_o}$ blocks. The server then applies $\log_2{\frac{n}{n_o}}$ RaS iterations to get the final ciphertext, whose first $n_o$ entries are the $n_o$ elements of $\bm{wx}$. The hybrid method requires $\frac{n_in_o}{n}$ scalar multiplications (ScMult), $\frac{n_in_o}{n}-1$ HstPerm rotations for $[\bm{x}_{\textmd{pack}}]_{\mathcal{C}}$, $\log_2\frac{n}{n_o}$ Perm rotations, and $\frac{n_in_o}{n}+\log_2\frac{n}{n_o}-1$ additions (Add). There is only one output ciphertext, which efficiently improves the slot utilization compared to the naive method. \vspace*{0.05in}\noindent\textbf{3) Row-encoding-share-RaS Multiplication (GALA):} The proposed GALA framework is motivated by two observations on the hybrid method.
First, the hybrid method essentially strikes a tradeoff between Perm and HstPerm operations, where the number of Perm operations (the most expensive HE operation) grows with the number of slots in a ciphertext. This is not desired, as we prefer a large $n$ to pack more data for efficient SIMD HE. GALA aims to make the number of Perm operations decrease, rather than grow, with the number of slots, and to eliminate all HstPerm operations on the input ciphertext. The second observation concerns the $\log_2\frac{n}{n_o}$ RaS operations. We discover that these are actually unnecessary. Specifically, the unique feature in the HE-GC neural network framework is that the resultant single ciphertext from the linear computing is shared between the client and server, to be the input for the nonlinear computing in the next phase. As the shares are in plaintext, we propose to transfer the final $\log_2\frac{n}{n_o}$ RaS operations in the HE domain to $\log_2\frac{n}{n_o}$ RaS operations in plaintext. This significantly reduces the expensive Perm operations. For example, multiplying a 16$\times$128 matrix with a length-128 vector by our proposed scheme shows about a 19$\times$ speedup compared with the hybrid method~\cite{juvekar2018gazelle} on a commodity machine (see detailed benchmarks in Sec. \ref{evaluation}). \begin{figure}[!tbp] \centering \includegraphics[trim= {0cm 0cm 0cm 0cm}, clip, scale=0.308]{linearcomp3.pdf} \caption{Row-encoding-share-RaS multiplication.} \label{weight_sum_diagram3} \end{figure} Figure \ref{weight_sum_diagram3} illustrates GALA's matrix-vector calculation. The server first conducts the row-wise weight matrix encoding, which encodes $\bm{w}$ into $n_o$ plaintext vectors in a diagonal manner, as shown in step (a) of Figure~\ref{weight_sum_diagram3}. Compared with the hybrid method, the row-wise weight matrix encoding of GALA enables the server to directly multiply $\bm{w}_i$ and $[\bm{x}]_{\mathcal{C}}$, eliminating the Perm operations on $[\bm{x}]_{\mathcal{C}}$ in step (b). Furthermore, the encoding also benefits the noise management in the resultant to-be-shared ciphertext, as analyzed in Sec. \ref{noise_manage}. As a result, the server gets $n_o$ multiplied ciphertext, $\{[\bm{u}_i]_{\mathcal{C}}\}$, such that the first entry of $[\bm{u}_i]_{\mathcal{C}}$ is a partial sum of the $i$-th element of the matrix-vector multiplication $\bm{wx}$. For example, in step (b) of Figure~\ref{weight_sum_diagram3}, the first element A1M1 in $[\bm{u}_0]_{\mathcal{C}}$ is a partial sum of the first element of $\bm{wx}$ (i.e., A1M1 + A2M2 + A3M3 + A4M4), and the first element in $[\bm{u}_1]_{\mathcal{C}}$ is a partial sum of the second element of $\bm{wx}$ (i.e., B1M1 + B2M2 + B3M3 + B4M4). Then, the server conducts rotations on each $[\bm{u}_i]_{\mathcal{C}}$, with a total of $(n_o-1)$ Perm operations (excluding the trivial rotation by zero), so that the first entry of each rotated $[\bm{u}_i]_{\mathcal{C}}$ becomes a partial sum of the first element of $\bm{wx}$. Next, the server adds all of the rotated $[\bm{u}_i]_{\mathcal{C}}$ to obtain a single ciphertext whose entries are partial sums of the elements of $\bm{wx}$, repeating in blocks of $n_o$. For example, in step (c) of Figure \ref{weight_sum_diagram3}, $[\bm{u}_1]_{\mathcal{C}}$ is rotated by one position and then added to $[\bm{u}_0]_{\mathcal{C}}$ to get one ciphertext, whose entries are partial sums of the first and second elements of $\bm{wx}$.
At this point, the natural next step would be to conduct $\log_2\frac{n_i}{n_o}$ RaS iterations to get a final ciphertext whose first $n_o$ entries are the $n_o$ elements of $\bm{wx}$, i.e., the approach used by the hybrid method~\cite{liu2017oblivious,juvekar2018gazelle}. With GALA, we propose to eliminate the $\log_2\frac{n_i}{n_o}$ time-consuming RaS iterations by integrating them with the generation of shares for the GC-based nonlinear computing. As introduced in the hybrid method~\cite{liu2017oblivious,juvekar2018gazelle}, in order to do the GC-based nonlinear computing, the encrypted linear output is shared as follows: (1) the server generates a random vector; (2) the server subtracts the random vector from the ciphertext (the encrypted linear output); (3) the subtracted ciphertext is sent to the client, which subsequently decrypts it and obtains its share. Here we let the server encode a similar random vector and subtract it from the ciphertext obtained in step (c) of Figure~\ref{weight_sum_diagram3}. The subtracted ciphertext is sent to the client, which decrypts the ciphertext and then applies $\log_2{\frac{n_i}{n_o}}$ RaS iterations on the plaintext, as illustrated in step (d) of Figure~\ref{weight_sum_diagram3}. Similarly, the server gets its share by $\log_2{\frac{n_i}{n_o}}$ plaintext RaS iterations on its encoded random vector. Hence, in GALA, the server replaces the ciphertext RaS operations with much faster plaintext RaS operations. This significantly improves the computation efficiency. Furthermore, in order to make use of all slots in a ciphertext, the client packs $\frac{n}{n_i}$ copies of the input $\bm{x}$ to form a packed vector $[\bm{x}_{\textmd{pack}}]_{\mathcal{C}}$. Then the server multiplies $\frac{n}{n_i}$ encoded weight vectors with $[\bm{x}_{\textmd{pack}}]_{\mathcal{C}}$ by one ScMult operation. As a result, the server obtains $\frac{n_in_o}{n}$ multiplied ciphertext, which are respectively rotated to enable the elementwise sum, finally producing a single ciphertext that has $\frac{n}{n_o}$ to-be-accumulated blocks. Without any further HE RaS iterations, the server then starts to encode the random vector for the share generation. The only extra computation is the plaintext RaS iteration(s) at both the client and server, which are much faster than their counterparts in the HE domain. As a result, GALA needs $\frac{n_in_o}{n}$ ScMult operations, $(\frac{n_in_o}{n}-1)$ Perm operations, and $(\frac{n_in_o}{n}-1)$ Add operations. It yields one output ciphertext and makes efficient utilization of the ciphertext slots. Table \ref{complexity_matrix_vector} compares the complexity among the naive method, the hybrid method (i.e., GAZELLE) and the proposed row-encoding-share-RaS matrix-vector multiplication (GALA). We can see that the proposed method completely eliminates the HstPerm operations and significantly reduces the Perm operations.
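The correctness of this share-first, RaS-later reordering can be checked end-to-end with a plaintext mock (a sketch under simplified assumptions: HE is replaced by plain integer vectors, the packed input fills the ciphertext exactly, and no modular wrap-around is modeled; the point is only that RaS commutes with additive sharing):
\begin{verbatim}
import numpy as np

def ras(v, n_o):
    # Rotate-and-sum: fold the len(v)/n_o blocks of v
    # onto the first block of n_o entries.
    step = len(v) // 2
    while step >= n_o:
        v = v + np.roll(v, -step)
        step //= 2
    return v[:n_o]

rng = np.random.default_rng(0)
n, n_o = 16, 4
u = rng.integers(0, 1000, n)    # mock of the ciphertext after step (c)

# GAZELLE-style: RaS in the HE domain first, then share.
r = rng.integers(0, 1000, n_o)          # server's share
client_share = ras(u, n_o) - r          # sent (encrypted) to the client
assert np.array_equal(r + client_share, ras(u, n_o))

# GALA-style: subtract a length-n random vector first,
# then both parties run RaS on plaintext.
r2 = rng.integers(0, 1000, n)
server_share = ras(r2, n_o)             # plaintext RaS at the server
client_side = ras(u - r2, n_o)          # client decrypts, then plaintext RaS
assert np.array_equal(server_share + client_side, ras(u, n_o))
\end{verbatim}
Both sharings reconstruct the same linear result, while the second avoids every ciphertext-domain rotation in the folding phase.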
\begin{table}[!h] \scriptsize \renewcommand\arraystretch{1.5} \centering \caption{Complexity comparison of three methods.} \begin{tabular}{ccccc} \hline \hline Method & \# Perm & \# HstPerm & \# ScMult & \# Add\\ \hline \hline Naive & $n_o\log_2{n_i}$ & $0$ & $n_o$ & $n_o\log_2{n_i}$ \\ \hline GAZELLE & $\log_2\frac{n}{n_o}$ & $\frac{n_in_o}{n}-1$ & $\frac{n_in_o}{n}$ & $\log_2\frac{n}{n_o}+\frac{n_in_o}{n}-1$ \\ \hline GALA & $\frac{n_in_o}{n}-1$ & $0$ & $\frac{n_in_o}{n}$ & $\frac{n_in_o}{n}-1$ \\ \hline \hline \end{tabular} \label{complexity_matrix_vector} \end{table} \subsection{Kernel Grouping Based Convolution}\label{system:conv} In this subsection, we introduce GALA's optimization for convolution. Similar to the discussion on the matrix-vector multiplication, we begin with the basic convolution for the Single Input Single Output (SISO) case, then go through the state-of-the-art scheme for the Multiple Input Multiple Output (MIMO) case (i.e., the GAZELLE framework~\cite{juvekar2018gazelle}). Finally, we elaborate on GALA's first-Add-second-Perm ({\em kernel grouping}) scheme, which achieves more efficient convolution computation. We assume the server has $c_o$ plaintext kernels with a size of $k_w\times{k_h}\times{c_i}$ and the client sends the server encrypted data of size $u_w\times{u_h}$ with $c_i$ channels. The server needs to homomorphically convolve the encrypted data from the client with its plaintext kernels to produce the encrypted output. \begin{figure}[!tbp] \centering \includegraphics[trim= {0.2cm 0cm 0cm 0cm}, clip, scale=0.30]{convolution1.pdf} \caption{SISO convolution.} \label{convolution_diagram1} \end{figure} \noindent\textbf{1) Basic SISO convolution:} SISO is a special case of MIMO where $c_i=c_o=1$. In this case, the encrypted data from the client has a size of $u_w\times{u_h}$ with one channel (i.e., a 2D image) and there is only one kernel with size $k_w\times{k_h}$ (i.e., a 2D filter) at the server. The SISO convolution is illustrated by an example in Figure~\ref{convolution_diagram1}, where $[\bm{x}]_{\mathcal{C}}$ is the encrypted data from the client and K is the plaintext kernel at the server. The process of convolution can be visualized as placing the kernel K at different locations of the input data $[\bm{x}]_{\mathcal{C}}$. At each location, a sum of the element-wise product between the kernel and the corresponding data values within the kernel window is computed. For example, in Figure~\ref{convolution_diagram1}, the first value of the convolution between $[\bm{x}]_{\mathcal{C}}$ and kernel K is (M1F5 + M2F6 + M4F8 + M5F9). It is obtained by first placing the center of K, i.e., F5, at M1 and then calculating the element-wise product between K and the part of $[\bm{x}]_{\mathcal{C}}$ that is within K's kernel window (i.e., M1, M2, M4 and M5). The final result is the sum of the element-wise product. The rest of the convolution values are calculated similarly by placing F5 at M2 to M9. We now elaborate on the convolution with an example where F5 is placed at M5 (i.e., the central element of $[\bm{x}]_{\mathcal{C}}$). In this example, the kernel size is $k_wk_h=9$. The convolution is derived by summing the element-wise product between the 9 values in K and the corresponding 9 values around M5. This can be achieved by rotating $[\bm{x}]_{\mathcal{C}}$ in a raster scan fashion~\cite{juvekar2018gazelle}. Specifically, $[\bm{x}]_{\mathcal{C}}$ is converted to a vector by concatenating all rows.
Then, it is rotated by $(k_wk_h-1)$ rounds, with half of them in the forward direction and the other half in the backward direction. We denote $\pi_j$ as the rotation by $j$ positions, where a positive sign of $j$ indicates the forward direction and a negative sign the backward direction, as shown in step (a) of Figure~\ref{convolution_diagram1}. The convolution is obtained by (1) forming the kernel coefficients according to the partial sum at the corresponding location, as shown in step (b) of Figure~\ref{convolution_diagram1}, (2) scaling the 9 rotated $\pi_j$ with the corresponding kernel coefficients, and (3) summing up all scaled $\pi_j$ (see step (c)). The rotation for $[\bm{x}]_{\mathcal{C}}$ is completed by HstPerm\footnote{With a common DecPerm operation.}. The scaling is done by ScMult and the summation is achieved by Add. Therefore, the SISO convolution requires a total of $(k_wk_h-1)$ HstPerm operations (excluding the trivial rotation by zero), $k_wk_h$ ScMult operations and $(k_wk_h-1)$ Add operations. The output is one ciphertext\footnote{We assume the input size $u_w{u_h}$ is smaller than the ciphertext size $n$.} which contains the convolution result. \begin{figure}[!tbp] \centering \includegraphics[scale=0.26]{convolution2.pdf} \caption{MIMO convolution.} \label{convolution_diagram2} \end{figure} \noindent\textbf{2) Output Rotation based MIMO convolution (GAZELLE):} We now consider the more general case, i.e., MIMO, where $c_i$ or $c_o$ is not one. The naive approach is to directly apply SISO convolution by first encrypting the $c_i$ input channels into $c_i$ ciphertext, $\{[\bm{x}_i]_{\mathcal{C}}\}$. Each of the $c_o$ kernels includes $c_i$ filters. Each $[\bm{x}_i]_{\mathcal{C}}$ is convolved with one of the $c_i$ filters by SISO and the final convolution is obtained by summing up all of the $c_i$ SISO convolutions. As a result, the naive approach requires $c_i(k_wk_h-1)$ HstPerm operations (for $c_i$ input channels), $c_ic_ok_wk_h$ ScMult operations and $c_o(c_ik_wk_h-1)$ Add operations. There are $c_o$ output ciphertext. Given that the number of slots $n$ in a ciphertext is usually larger than the channel size $u_w{u_h}$, the ciphertext utilization (i.e., the meaningful slots that output desired results) in the $c_o$ output ciphertext is low. In order to improve the ciphertext utilization and computation efficiency for MIMO convolution, the state-of-the-art method (i.e., the output rotation~\cite{juvekar2018gazelle}) first packs $c_n$ channels of input data into one ciphertext, which results in $\frac{c_i}{c_n}$ input ciphertext (see Figure~\ref{convolution_diagram2}, where the four input channels form two ciphertext, each of which includes two channels). Meanwhile, the $c_o$ kernels are viewed as a $c_o\times{c_i}$ kernel block and each row of the block includes $c_i$ 2D filters for one kernel. Then the MIMO convolution is viewed as a matrix-vector multiplication where the element-wise multiplication is replaced by convolution. As each ciphertext holds $c_n$ channels, the kernel block is divided into $\frac{c_oc_i}{c_n^2}$ blocks (see step (a) in Figure~\ref{convolution_diagram2}, where the kernel block is divided into K1 to K4). Next, each divided block is diagonally encoded into $c_n$ vectors such that the first filters in all vectors are in the first column of the kernel block (see the four groups of vectors in step (a) of Figure~\ref{convolution_diagram2}).
In this way, each input ciphertext can directly convolve with the vectors in each divided block by SISO, and the convolution for each divided block is obtained by rotating the $c_n$ convolved vectors to the same kernel order as the diagonal one and summing them up (see step (b)). Finally, the convolution for $c_n$ kernels is calculated by adding the convolutions of the $\frac{c_i}{c_n}$ blocks associated with the same kernels, as illustrated in step (b) of Figure~\ref{convolution_diagram2}. Clearly, there are $\frac{c_o}{c_n}$ output ciphertext, as expected. For each of the $\frac{c_oc_i}{c_n^2}$ blocks, there are in total $c_n$ SISO-like convolutions, requiring $c_nk_wk_h$ ScMult operations, $(c_n-1)$ Perm operations and $(c_nk_wk_h-1)$ Add operations. Next, there are $\frac{c_i}{c_n}$ block convolutions which are associated with the same kernel order. Thus, they are added up to obtain the final convolution result. Meanwhile, the rotation group for each input ciphertext is reused to convolve with different kernel blocks. Thus, there are in total $\frac{c_i(k_wk_h-1)}{c_n}$ HstPerm operations with $\frac{c_i}{c_n}$ common DecPerm operations. In all, the MIMO convolution needs a total of $\frac{c_ic_o}{c_n^2}(c_n-1)$ Perm, $\frac{c_i}{c_n}(k_wk_h-1)$ HstPerm, $k_wk_h\frac{c_ic_o}{c_n}$ ScMult and $\frac{c_o}{c_n}(c_ik_wk_h-1)$ Add operations. \begin{figure}[!tbp] \centering \includegraphics[scale=0.26]{convolution3.pdf} \caption{Kernel grouping based MIMO convolution.} \label{convolution_diagram3} \end{figure} \noindent\textbf{3) Kernel Grouping Based MIMO convolution (GALA):} One key observation on the above MIMO convolution is that each of the $\frac{c_oc_i}{c_n^2}$ blocks needs $(c_n-1)$ expensive Perm operations in order to get the convolution for that block. However, we actually do not need to get the convolution for each block. As our goal is to get the convolution for each kernel, the blocks that are associated with the same kernel are combined in our proposed first-Add-second-Perm approach ({\em kernel grouping}) to reduce the Perm cost. Specifically, in step (a) of Figure~\ref{convolution_diagram3}, the whole kernel block is divided into two blocks K1 and K2 such that each block is the combination of $\frac{c_i}{c_n}$ $c_n$-by-$c_n$ divided blocks, which correspond to the same kernels (i.e., the first and second kernels in K1 and the third and fourth kernels in K2). For each newly formed block, all of the vectors are first convolved with the corresponding input ciphertext by SISO-like convolution. Then the convolved vectors that are associated with the same kernel order are first added together (see the addition of convolved vectors before rotation in step (b) of Figure~\ref{convolution_diagram3}). Finally, these added vectors are rotated to the same kernel order and summed up to obtain the convolution result (see the rotation and final addition for each block in step (b) of Figure~\ref{convolution_diagram3}). This kernel grouping calculation results in $(c_n-1)$ Perm operations for each of the $\frac{c_o}{c_n}$ newly formed blocks, which reduces the Perm complexity by a factor of $\frac{c_i}{c_n}$ compared with GAZELLE's MIMO convolution. This reduction is nontrivial, especially for state-of-the-art neural networks such as ResNets \cite{he2016deep}, where $\frac{c_i}{c_n}$ can be 256. This is because these neural networks contain a large number of large-size feature maps in order to capture the complex input features~\cite{simonyan2014very, krizhevsky2012imagenet, he2016deep}.
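The savings can be made concrete with a small operation counter based on the Perm expressions above (a sketch; the layer dimensions below are chosen to mirror the 16$\times$16, 2048-channel benchmark discussed next, where a 2048-slot ciphertext packs $c_n=8$ channels):
\begin{verbatim}
def perm_counts(c_i, c_o, c_n):
    # Rotation (Perm) counts for MIMO convolution,
    # following the per-block analysis above.
    gazelle = (c_i * c_o // c_n**2) * (c_n - 1)  # per divided block
    gala = (c_o // c_n) * (c_n - 1)              # per kernel group
    return gazelle, gala

# ResNet-style 1x1 layer: 2048 input channels, 512 kernels,
# c_n = 8 channels per ciphertext (2048 slots / 16x16 maps).
g, a = perm_counts(2048, 512, 8)
print(g, a, g // a)   # 114688 448 256, i.e., a factor of c_i/c_n
\end{verbatim}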
Similar to the output rotation based MIMO convolution discussed above, there are $\frac{c_o}{c_n}$ output ciphertext in the proposed scheme. For each of the $\frac{c_o}{c_n}$ newly formed blocks, there are $c_i$ SISO-like convolutions. Then for each of the $c_n$ kernel orders, there are $\frac{c_i}{c_n}$ convolutions to be summed up, which results in $c_n$ added convolutions. These added convolutions are further rotated to the same kernel order and summed up to get the final convolution. Therefore, the proposed MIMO convolution requires a total of $\frac{c_o}{c_n}(c_n-1)$ Perm, $\frac{c_i}{c_n}(k_wk_h-1)$ HstPerm, $k_wk_h\frac{c_ic_o}{c_n}$ ScMult, and $\frac{c_o}{c_n}(c_ik_wk_h-1)$ Add operations. Table~\ref{complexity_conv} compares the overall complexity for convolution computations. GALA's kernel grouping approach reduces the expensive Perm operations by a factor of $\frac{c_i}{c_n}$ without increasing other operations compared with the output rotation based MIMO convolution (i.e., the GAZELLE framework). The reduction in Perm operations leads to a significant speedup. Specifically, GALA shows about a 14$\times$ speedup compared with GAZELLE in the convolution between input data with a size of 16$\times$16 with 2048 channels, and 512 kernels with a size of 1$\times$1$\textit{@}$2048 on a commodity machine (see detailed benchmarks in Sec. \ref{evaluation}). \begin{table}[!h] {\scriptsize \renewcommand\arraystretch{1.5} \centering \caption{Complexity comparison of convolution.} \begin{tabular}{ccccc} \hline \hline Method & \# Perm & \# HstPerm$^{\sharp}$ & \# ScMult & \# Add\\ \hline \hline GAZELLE & $\frac{c_ic_o(c_n-1)}{c_n^2}$ & $\frac{c_i(k_wk_h-1)}{c_n}$ & $\frac{c_ic_ok_wk_h}{c_n}$ & $\frac{c_o(c_ik_wk_h-1)}{c_n}$ \\ \hline GALA & $\frac{c_o(c_n-1)}{c_n}$ & $\frac{c_i(k_wk_h-1)}{c_n}$ & $\frac{c_ic_ok_wk_h}{c_n}$ & $\frac{c_o(c_ik_wk_h-1)}{c_n}$ \\ \hline \hline \multicolumn{5}{l}{$^{\sharp}$Rotations of the input with $\frac{c_i}{c_n}$ common DecPerm operations.} \end{tabular} \label{complexity_conv} } \end{table} \subsection{Noise Management}\label{noise_manage} The packed HE (e.g., the BFV scheme) introduces noise in the ciphertext, which theoretically hides the original message \cite{juvekar2018gazelle, brakerski2012fully}. However, noise management is critical to the correct decryption of the ciphertext after a series of HE operations. We will show that GALA has better noise management compared with GAZELLE. Based on the computation complexity of matrix-vector multiplication and convolution, along with the noise change for HE operations as described in Sec.~\ref{priliminary:crypto}, Table~\ref{noise_growth_complexity} shows the noise growth of the different schemes. As for the matrix-vector multiplication, GALA has a lower noise growth while keeping the number of output ciphertext as small as one\footnote{Note that the noise in Table~\ref{noise_growth_complexity} is calculated by assuming $(\frac{n_in_o}{n}-1)\geq0$. The noise of GALA is still lower than that of GAZELLE when $(\frac{n_in_o}{n}-1)<0$, as it means one ciphertext can hold data of size $n_o\times{n_i}$, which involves only one ScMult operation in GALA, while GAZELLE needs to subsequently conduct a series of RaS operations.}. As for the convolution computation, GALA reduces the noise term associated with rotation by a factor of $\frac{c_i}{c_n}$ compared to GAZELLE. This is nontrivial, especially for state-of-the-art neural networks such as ResNets~\cite{he2016deep}, where $\frac{c_i}{c_n}$ can be 256.
The number of output ciphertexts is also kept as small as $\frac{c_o}{c_n}$. Overall, GALA features lower noise growth and lower computation complexity compared with GAZELLE. \begin{table}[!htbp] \footnotesize \centering \caption{Comparison of noise management.} \begin{tabular}{c|c|c} \hline \hline \multicolumn{3}{c}{Matrix-vector Multiplication}\\ \hline Method & Noise after computation & \# Cipher\\ \hline Naive & $n_i\eta_0\eta_{mult}+(n_i-1)\eta_{rot}$ & $n_o$\\ GAZELLE & $n_i\eta_0\eta_{mult}+[\frac{n_in_o-n}{n_o}\eta_{mult}+\frac{n-n_o}{n_o}]\eta_{rot}$ & 1\\ GALA & $\frac{n_in_o}{n}\eta_0\eta_{mult}+(\frac{n_in_o}{n}-1)\eta_{rot}$ & 1\\ \hline \hline \multicolumn{3}{c}{Convolution Computation}\\ \hline Method & Noise after computation & \# Cipher\\ \hline GAZELLE & $c_i\eta_{\Delta}+\frac{c_i}{c_n}(c_n-1)\eta_{rot}$ & $\frac{c_o}{c_n}$\\ GALA & $c_i\eta_{\Delta}+(c_n-1)\eta_{rot}$ & $\frac{c_o}{c_n}$\\ \hline \hline \multicolumn{3}{l}{$\eta_{\Delta}=k_wk_h\eta_{mult}\eta_0+(k_wk_h-1)\eta_{rot}\eta_{mult}$} \end{tabular}\label{noise_growth_complexity} \end{table} \subsection{System Security} GALA is based on the same security framework as GAZELLE~\cite{juvekar2018gazelle}. The security of the linear computation in GALA is fully protected by the security of HE (e.g., the BFV scheme~\cite{brakerski2012fully, fan2012somewhat}). The nonlinear computation (which is not the focus of this paper) is protected by Garbled Circuits (GC)~\cite{yao1986generate} or its alternatives. {\color{black}The security of GC-based nonlinear computation has been proven in TASTY~\cite{henecka2010tasty} and MP2ML~\cite{boemer2020mp2ml}.}
{ "timestamp": "2021-05-06T02:07:57", "yymm": "2105", "arxiv_id": "2105.01827", "language": "en", "url": "https://arxiv.org/abs/2105.01827" }
\section{Introduction} Simplified microscopic models, such as classical particle chains in contact with heat baths, have proven useful to grasp the physics of thermal transport in low dimensions~\cite{fourier,ReviewLepriLiviPoliti2003,ReviewDhar2008,LepriBook2016}. The interest in one-dimensional models goes beyond the theoretical challenge to derive the laws of heat conduction from the microscopic dynamics, insofar as they can be useful for understanding the anomalies observed in real systems, such as carbon nanotubes~\cite{nano2008}, nanowires~\cite{YangZhangLi2010}, and molecular chains~\cite{chain1,chain2}. Moreover, these experiments and theories can lead to the development of new technologies for heat flow manipulation~\cite{e1}. An interesting example is the thermal diode, whose thermal conductivity along a given axis changes depending on the direction of the heat flux, yielding rectification in a preferential direction. This proposal, initially conceived through simplified theoretical modeling~\cite{casati-diode}, was soon realized in solid-state experiments~\cite{exp-diode}. Subsequently, several variants of microscopic models were proposed to determine the conditions to achieve efficient rectification, by analyzing, for instance, the effects of the range of the interactions or graded masses~\cite{efficiency1,efficiency2}, and the role of the interface~\cite{interface,pons2017,efficiency3,BaowenLi2005,BaowenLi2011}, among others. In the meantime, several efforts have been directed towards an analytical understanding of the diode effect, making evident the requirements of asymmetry and nonlinearity for rectification, for instance, by linearizing the equations of motion but, as a counterpart, making the parameters along a mass graded chain temperature dependent~\cite{Tdependence}. Closely related, in a very recent work, the diode effect was shown in the so-called temperature-gradient harmonic oscillator chains~\cite{harmonic}. In the same spirit, a minimalistic model of two harmonic oscillators with temperature dependency has been recently studied~\cite{Muga2021}. The two-segment chain of classical spins in contact with multiple heat baths has also been studied~\cite{spins}, as well as quantum systems, to show rectification of the heat flow between two thermal baths through a pair of interacting qubits~\cite{quantum-diode} or even quantum spin chains~\cite{perfectdiode}. Here, we investigate a minimalist model of only two interacting classical particles connected to heat baths, in order to understand the diode effect directly from the equations of motion. We solve these equations in the limit of small nonlinearity, from a perturbative approach. Then, we obtain expressions for the heat flow and rectification factor allowing us to directly grasp the impact of asymmetries and nonlinearities, as well as qualitative features of heat conduction and rectification, explicitly expressed in terms of the model parameters. The paper is organized as follows. The system is defined in Sec.~\ref{sec:system}. The perturbative solution and associated heat flow are described in Secs.~\ref{sec:solution} and \ref{sec:heatflow}, respectively, while the mathematical derivations can be found in the Appendix. The diode effect is discussed in Sec.~\ref{sec:rectification}, with final remarks in Sec.~\ref{sec:final}. \begin{figure}[b!] 
\centering \includegraphics[width = 0.35\textwidth]{figure1.pdf} \caption{Schematic representation of a minimalist thermal diode.} \label{fig:two-particle-model} \end{figure} \section{The system} \label{sec:system} We consider a one-dimensional system composed of two particles, with masses $m_j$ (with $j=A,B$), coordinates $x$ and $y$, subject to on-site potentials $V_j$ and interacting through a potential $V_I$ such that the complete Hamiltonian is \begin{eqnarray} \mathcal{H} = \frac{p_A^2}{2m_A} + V_A(x) +\frac{p_B^2}{2 m_B} + V_B(y) + V_I (x-y). \end{eqnarray} Moreover, each particle $j$ is put in contact with a Langevin thermal bath at temperature $T_j$. A pictorial representation of this kind of system is provided in Fig.~\ref{fig:two-particle-model}. Let us remark that this system is very similar to the pair of harmonic oscillators recently investigated~\cite{Muga2021}, but in our case we introduce nonlinear forces. Namely, we treat the case where the interaction potential is harmonic with stiffness $k_I$, while the on-site potential of particle $j$ is $V_j(z)= k_j z^2/2 + \epsilon V^{nl}_j(z)$, where $k_j$ is the harmonic stiffness and $f_j(z)= -d V^{nl}_j(z)/dz$ is an arbitrary nonlinear force, whose intensity is controlled by the dimensionless constant $\epsilon$. Explicitly, the equations of motion are \begin{eqnarray} \label{eq:motionx} m_A \ddot{x} + \gamma_A \dot x + k_A x + k_I (x - y) &=& \epsilon f_A(x) + \eta_A(t), \\ \label{eq:motiony} m_B \ddot{y} + \gamma_B \dot y + k_B y + k_I (y - x) &=& \epsilon f_B(y) + \eta_B(t) \, , \end{eqnarray} where $\gamma_j$ is the damping coefficient and $\eta_j$ the fluctuating force of the Langevin thermostat $j$ ($j=A,B$); $\eta_A$ and $\eta_B$ are independent zero-mean Gaussian-distributed white noises with \begin{eqnarray} \nonumber \langle \eta_A(t)\eta_A(t') \rangle &=& 2 \gamma_A T_A \delta(t-t'), \\ \nonumber \langle \eta_B(t)\eta_B(t') \rangle &=& 2 \gamma_B T_B \delta(t-t') \, , \end{eqnarray} where the temperature is in units of the Boltzmann constant. Although we might suppress some parameters by fixing space and time scales, we will keep them explicit to preserve the AB symmetry of the equations. \section{Perturbative solution} \label{sec:solution} Eqs.~(\ref{eq:motionx})-(\ref{eq:motiony}) cannot be solved exactly; however, if $\epsilon$ is small enough to ensure that the energy stored in the nonlinear mode is much smaller than in the harmonic one, we can expand the coordinates as \begin{eqnarray} x(t) &=& x_0(t) + \epsilon \, x_1(t) + \mathcal{O}(\epsilon^2), \label{eq:expansionx} \\ y(t) &=& y_0(t) + \epsilon \, y_1(t) + \mathcal{O}(\epsilon^2) , \label{eq:expansiony} \end{eqnarray} where the zeroth-order terms follow the equations \begin{eqnarray} m_A\ddot{x}_0 + \gamma_A \dot{x}_0 + k_A x_0 + k_I (x_0 - y_0) &=& \eta_A(t), \label{eq:x0} \\ m_B\ddot{y}_0 + \gamma_B \dot{y}_0 + k_B y_0 + k_I (y_0 - x_0) &=& \eta_B(t), \label{eq:y0} \end{eqnarray} which are linear equations that do not involve the nonlinear forces, and the first-order corrections $x_1$ and $y_1$ follow \begin{eqnarray} m_A \ddot{x}_1 + \gamma_A \dot{x}_1 + k_A x_1 + k_I (x_1 - y_1) &=& f_A(x_0) \,, \label{eq:x1} \\ m_B \ddot{y}_1 + \gamma_B \dot{y}_1 + k_B y_1 + k_I (y_1 - x_1) &=& f_B(y_0). 
\label{eq:y1} \end{eqnarray} Since we are interested in the long-time behavior, the initial conditions are not relevant; therefore, we will use the Fourier transform, defined as $ \tilde{z}(\omega) = \int_{-\infty}^{\infty} dt \, z(t) \, { e}^{-i \omega t}$, to solve the above stochastic differential equations. We start by expressing Eqs.~(\ref{eq:x0})-(\ref{eq:y0}) in Fourier space, namely, \begin{eqnarray} \label{eq:0x} \underbrace{\big( k_A + k_I - m_A\omega^2 + i \gamma_A \omega \big)}_{ \displaystyle a(\omega)} \tilde x_0 - k_I \tilde y_0 &=& \tilde \eta_A(\omega) \,, \\ \label{eq:0y} \underbrace{\big( k_B + k_I - m_B\omega^2 + i \gamma_B \omega \big)}_{\displaystyle b(\omega)} \tilde y_0 - k_I \tilde x_0 &=& \tilde \eta_B(\omega) \, , \end{eqnarray} whose solution in matrix form is \begin{eqnarray} \label{eq:xyw0} \begin{pmatrix} \tilde{x}_0(\omega) \\ \tilde{y}_0 (\omega) \end{pmatrix} =\frac{ \begin{pmatrix} b(\omega) & k_I\\ k_I & a(\omega) \end{pmatrix} \begin{pmatrix} \tilde{\eta}_A(\omega) \\ \tilde{\eta}_B (\omega) \end{pmatrix} }{ a(\omega)b(\omega)-k_I^2 }\,. \end{eqnarray} Similarly, solving Eqs.~(\ref{eq:x1}) and (\ref{eq:y1}) in Fourier space, we obtain (for more details see Appendix~\ref{app:xy}) \begin{eqnarray} \label{eq:xyw1} \begin{pmatrix} \tilde{x}_1(\omega) \\ \tilde{y}_1 (\omega) \end{pmatrix}=\frac{ \begin{pmatrix} b(\omega) & k_I\\ k_I & a(\omega) \end{pmatrix} \begin{pmatrix} \mathcal{F} \left\{ f_A({x}_0) \right\} (\omega) \\ \mathcal{F} \left\{ f_B({y}_0) \right\} (\omega) \end{pmatrix} }{ a(\omega)b(\omega)-k_I^2 }\,. \end{eqnarray} \section{Heat flow} \label{sec:heatflow} The heat flow $J$ along the system can be defined in several forms that are equivalent when the system reaches a stationary state (see for instance \cite{ReviewDhar2008}). The potential $V_I(x-y)$ represents the energy stored in the interaction between neighboring particles, and the energetic flow can be written as \begin{eqnarray} \frac{d}{dt} \big\langle V_I(x-y) \big\rangle &=& \big\langle V'_I(x-y) \dot{x} \big\rangle - \big\langle V'_I(x-y) \dot{y} \big\rangle , \nonumber \end{eqnarray} and, under stationarity, $\langle V'_I(x-y) \dot{x} \rangle = \langle V'_I(x-y) \dot{y} \rangle$. From this identity, we have equivalent definitions that in our case, where $V_I'(x-y) = k_I (x-y)$, read \begin{eqnarray} \label{eq:Jdef} J &=& \langle k_I(x-y) \dot{x} \rangle = \langle k_I(x-y) \dot{y} \rangle\,, \end{eqnarray} or still $J=\langle k_I(x-y) (\dot{x} + \dot{y})/2 \rangle$. The heat flow $J$ can also be expanded in a series of $\epsilon$, as \begin{eqnarray} \label{eq:J0eJ1} J &=& J_0 + \epsilon J_1 + O(\epsilon^2) \, . \end{eqnarray} In the linear regime, Eq.~(\ref{eq:Jdef}) yields (see Appendix \ref{app:heat}) \begin{eqnarray} \label{eq:Jx} J_0 &=& k_I \langle x_0 \dot{x}_0 \rangle - k_I \langle y_0 \dot{x}_0 \rangle \\ &=& - k_I \int \frac{d \omega d \omega'}{(2 \pi)^2} e^{i t (\omega + \omega')} \langle \tilde y_0 (\omega) \; i \omega' \;\tilde{x}_0(\omega') \rangle \, . \;\;\; \end{eqnarray} Then, we obtain \begin{eqnarray} \label{eq:J0} J_0 &=& \kappa_0 \big(T_A - T_B\big) \, , \end{eqnarray} where the zeroth-order thermal conductivity is \begin{eqnarray} \label{eq:kappa0} \kappa_0 &=& \int \frac{d \omega}{2 \pi} \frac{2 \gamma^2 k_I^2 \omega^2 }{|a(\omega)b(\omega) - k_I^2|^2} \,. 
\end{eqnarray} {\bf Note:} This expression shows that, regardless of the asymmetries that may be present, the linear system cannot be converted to a thermal diode, since $\kappa_0$ does not depend on the temperatures and is invariant under particle exchange; hence, the magnitude of the flow is the same in both directions. With regard to the dependence of $\kappa_0$ on the coupling strength $k_I$, Eq.~(\ref{eq:kappa0}) gives $\kappa_0 \sim k_I^2 + O(k_I^3)$, for small $k_I$. It is interesting to note that this is the scaling observed for two-segment chains with nonlinear forces of Frenkel-Kontorova (FK) type~\cite{casati-diode}. Now we proceed to calculate the first-order correction of the current $J$, which contains the information on the nonlinearities. From Eq.~(\ref{eq:Jdef}), we have \begin{eqnarray} \nonumber J &=& - k_I \langle y(t) \dot{x}(t) \rangle \\ \label{eq:J01} &=& \underbrace{- k_I \langle y_0 \dot{x}_0 \rangle}_{\displaystyle J_0} + \epsilon \underbrace{ (-k_I) \{ \langle y_1 \dot{x}_0 \rangle + \langle y_0 \dot{x}_1 \rangle \} }_{\displaystyle J_1} \, , \end{eqnarray} where the correlations in $J_1$ are calculated in Appendix~\ref{app:heat}, yielding $J_1=\kappa_1(T_A,T_B)\,\Delta T$, with \begin{eqnarray} \kappa_1(T_A,T_B) &=& \kappa_0 \sum_{j=A,B} \beta_j \, g_j\big(\sigma_{m j} T_m + \sigma_j T_j \big), \;\;\;\;\;\; \label{eq:kappa1} \end{eqnarray} where $T_m=(T_A+T_B)/2$ is the mean temperature, $\beta_j$, $\sigma_{m j}$ and $\sigma_j$ ($j=A,B$) are coefficients that do not depend on the temperature (derived in Appendix \ref{app:heat}, where explicit expressions are also given), and $g_j$ is derived from the Fourier transform of the force $f_j$ (for examples see Table I). \begin{table}[h!] \begin{tabular}{ |c|c| } \hline $ f_j(z) = \sum_{n\ge 0} c_{j,n} z^n$ & $ g_j(z)= \sum_{n\ge 1} c_{j,n}\, n!!\, z^{(n-1)/2}$ \\[3mm] \hline \hline $ -z^{2n-1}$, $n\in \mathbb{N}$ & $ -(2n-1)!!\,z^{n-1}$ \\[2mm] \hline $-\sin(Kz)$ & $ -K{ e}^{-K^2z/2}$ \\[2mm] \hline $-\sinh(Kz)$ & $ -K{ e}^{ K^2 z/2}$ \\[2mm] \hline \end{tabular} \caption{Nonlinear force $f_j$ and associated function $g_j$ (derived in Appendix~\ref{app:heat}). } \end{table} In Fig.~\ref{fig:num}, an illustrative example shows the good agreement between the first-order theoretical prediction, Eq.~(\ref{eq:J01}), and the numerical evaluation of Eq.~(\ref{eq:Jdef}), performed over trajectories obtained from the numerical integration of the equations of motion, using an eighth-order Runge-Kutta algorithm~\cite{RK8}. \begin{figure}[h!] \centering \includegraphics[width = 0.5\textwidth]{figure2.pdf} \caption{ Heat current difference $J-J_0$ vs. $\Delta T$. Solid lines correspond to the theoretical prediction given by Eq.~(\ref{eq:J01}) and symbols to the computation from numerical integration of the equations of motion, averaged over $10^5$ realizations. Here $V_A^{nl}(z)=z^4/4$ and $V_B^{nl}(z)=0$. Different values of $\epsilon$, indicated in the figure, were considered, with $k_I=0.5$; all other parameters are equal to 1. The inset shows the current $|J|$ as a function of $\epsilon$ for $\Delta T=\pm 1$ (i.e., $A\rightleftarrows B$). (Color online.) } \label{fig:num} \end{figure} For weak coupling, the scaling \begin{eqnarray} \kappa \sim k_I^2 + O(k_I^3,\epsilon k_I^2) \, , \end{eqnarray} obtained in the linear case still holds, while $\kappa$ tends to a constant value for large $k_I$. 
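As an illustration of Eq.~(\ref{eq:kappa0}), the following minimal numerical sketch (in Python) evaluates $\kappa_0$ by quadrature. The parameter values are our own arbitrary choices, and we specialize to symmetric damping $\gamma_A=\gamma_B=\gamma$, consistent with the factor $\gamma^2$ appearing in the integrand.
\begin{verbatim}
# Sketch: numerical evaluation of kappa_0 from Eq. (eq:kappa0).
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

kA = kB = mA = mB = gamma = 1.0   # symmetric case, gamma_A = gamma_B
kI = 0.5

def integrand(w):
    a = kA + kI - mA * w**2 + 1j * gamma * w
    b = kB + kI - mB * w**2 + 1j * gamma * w
    return 2 * gamma**2 * kI**2 * w**2 / abs(a * b - kI**2)**2 / (2 * np.pi)

kappa0, err = quad(integrand, -np.inf, np.inf)
print(kappa0)   # zeroth-order conductivity; J_0 = kappa0 * (T_A - T_B)
\end{verbatim}
The integrand decays as $\omega^{-6}$ at large frequency, so the improper integral converges quickly; one can verify numerically that $\kappa_0\sim k_I^2$ for small $k_I$, as stated above.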
Of course, if the interfacial interaction, which connects the two units of the system, vanishes, then $\kappa$ vanishes too, as expected due to the disruption of the channel for the energy flux. As a check of consistency, we verified that, if $f_j(z)$ were linear, then $g_j(z)$ would be a constant $\bar{g}_j<0$ ($n=0$ in Table I), in which case Eq.~(\ref{eq:kappa1}) becomes $\kappa_1 =(\beta_A \bar g_A+\beta_B \bar g_B)\,\kappa_0$, in accord with the zeroth-order expression for $\kappa_0$, Eq.~(\ref{eq:kappa0}), after substituting $k_A \to k_A - \epsilon \bar{g}_A$ and $k_B \to k_B - \epsilon \bar{g}_B$. Since the linear conductivity $\kappa_0$ does not depend on the bath temperatures, the heat flux will have the same magnitude in both directions. Therefore, for rectification, it is crucial that at least one of the two forces $f_j(z)$ be nonlinear. This nonlinearity acts by introducing a dependence of the conductivity on the temperatures, through the argument of $g_j(z)$ in Eq.~(\ref{eq:kappa1}), which originates from the correlations of the zeroth-order coordinates. A temperature dependence that is asymmetric under particle exchange is then responsible for thermal rectification, as discussed in the next section. Moreover, this is the basis of the diode effect in harmonic systems with imposed or natural temperature dependencies~\cite{Tdependence,harmonic,Muga2021}. \section{Diode effect} \label{sec:rectification} First, recall that $\beta_j$, $\sigma_{m j}$ and $\sigma_j$, which define $J_1$, are quantities that do not depend on the end temperatures. While $\sigma_{m j}$ (as well as $\kappa_0$) is always positive, $\beta_j$ and $\sigma_j$ do not have a definite sign in general; however, $\sigma_{m j}T_m+\sigma_j T_j$ must be positive. Moreover, in contrast to $\kappa_0$ and the $\sigma_{m j}$, whose expressions are invariant under AB-exchange, the coefficients $\beta_j$ or $\sigma_j$ may be non-symmetric in general, which ensures that, even if $g_A = g_B$, the conductivity can become dependent on the direction of the flux. Let us define the fluxes $J_{AB}$ and $J_{BA}$ for the positive temperature gradient ($T_A=T_{h} > T_{c} =T_B$) and the reversed one, as schematized in Fig.~\ref{fig:two-particle-model}. From the expression of $J$, we have \begin{eqnarray} \nonumber J_{AB}\equiv J_{A\to B} &=& \big[ \kappa_0 + \epsilon \underbrace{\kappa_1(T_{h},T_{c})}_{\displaystyle \kappa_1^{AB}} \big](T_h-T_c),\\ \nonumber J_{BA}\equiv J_{B\to A} &=& \big[ \kappa_0 + \epsilon \underbrace{\kappa_1(T_{c},T_{h})}_{\displaystyle \kappa_1^{BA}} \big](T_c-T_h). \\ \end{eqnarray} Rectification emerges when $\kappa_1^{AB} \neq \kappa_1^{BA}$, molded by the functions $g_j(z)$, associated with the nonlinear forces $f_j(z)$, which introduce the dependence of the conductivity on the bath temperatures. In what follows, to quantify the diode effect, we use the ratio \begin{eqnarray} \label{eq:xi} \chi \equiv\frac{ \big| |J_{AB}| - |J_{BA}| \big| }{\big( |J_{AB}| + |J_{BA}|\big)/2} = \epsilon \frac{ |\kappa_1^{AB} - \kappa_1^{BA}|}{\kappa_0} + O(\epsilon^2) \,. \end{eqnarray} This quantity coincides with the rectification factor~\cite{casati-diode} at first order in $\epsilon$ and is twice the diodicity~\cite{alexander}. Notice that the departure from the linear regime, signaled by $\epsilon\neq0$, together with asymmetry, is required to allow the diode effect ($\chi \neq 0$) at first order in $\epsilon$. However, the rectification $\chi$ is small, of order $\epsilon$. 
In the following sections, we will discuss the behavior of $\chi$ in some particular cases, in order to reduce the number of parameters. \subsection{Symmetric chain} Let us address the case where $k_A=k_B$, $m_A=m_B$ and $\gamma_A=\gamma_B$, so that the asymmetry required for rectification must reside in the nonlinear on-site forces. In this simple case, the coefficients obtained in Appendix~\ref{app:heat} reduce to \begin{eqnarray} \sigma_m &=& \frac{[ ( k+k_I)\bar{m}+1]k_I^2}{k(k+2k_I)( k+k_I +\bar{m} k_I^2)} \,, \\ \sigma_A&=&\sigma_B=\sigma=\frac{1}{k+k_I +\bar{m} k_I^2} \,,\\ \beta_A&=&\beta_B = \sigma/2. \end{eqnarray} Then, \begin{eqnarray} \label{eq:xik} \chi&=& \epsilon\frac{\big| g_A(T_+)-g_A(T_-) +g_B(T_-)-g_B(T_+) \big|}{2[ k+k_I + \bar{m} k_I^2]}\,, \end{eqnarray} where $T_\pm =\sigma_m T_m +\sigma T_{\substack{ h\\[-2pt]c}}= (\sigma_m+\sigma)T_m \pm \sigma (T_h-T_c)/2$. First, we notice that $\chi$ is finite in the limit $k_I \to 0$ and tends to zero in the opposite limit $k_I\to \infty$. Examples are given in Fig.~\ref{fig:xi1} for two different potentials. \begin{figure}[h!] \centering \includegraphics[width = 0.45\textwidth]{figure3.pdf} \caption{Scaled rectification factor as a function of the interfacial stiffness $k_I$, for the nonlinear on-site potentials $V_A^{nl}(z)=z^4/4$ (power-law, dashed lines) and $V_A^{nl}(z)=-\cos(z)$ (sinusoidal, solid lines), for $k=0.1$ (dark green) and 1 (light green), as indicated in the legend. In all cases $V_B^{nl}\equiv 0$, $\bar{m}=1$, $T_m=1$, $\Delta T =0.4$. (Color online.) } \label{fig:xi1} \end{figure} Rectification enhancement can be achieved by augmenting the temperature difference while fixing the average, since $\Delta g_j \equiv g_j(T_+)-g_j(T_-) \approx g_j'([\sigma_m+\sigma]T_m)\, \sigma\,\Delta T + {\cal O}([\Delta T]^3)$. This effect is illustrated in Fig.~\ref{fig:T1}. As a matter of fact, the increase of the rectification factor with the temperature difference has been observed in diverse models~\cite{efficiency2,efficiency3,bastida}. \begin{figure}[h!] \centering \includegraphics[width = 0.45\textwidth]{figure4.pdf} \caption{Scaled rectification factor as a function of the relative temperature difference $\Delta T/T_m$, for the nonlinear on-site potential $V_A^{nl}(z)=-\cos(z)$, $V_B^{nl}\equiv 0$, $k=k_I =0.1$, for different values of $T_m$, indicated in the legend. The inset displays the scaled rectification vs. $T_m$ for different values of $\Delta T$ indicated in the legend, showing that there is an optimal $T_m$, due to the loss of nonlinearity at the extremes of low and high temperatures for the chosen potential. (Color online.) } \label{fig:T1} \end{figure} The mass and the inverse square damping contribute through $\bar{m}=m/\gamma^2$ to spoil rectification if $k_I>0$. This suggests that the overdamped regime would perform rectification better. We can also understand how the preferential direction, in which the conductivity is larger for given bath temperatures, depends on the type of nonlinear forces. For instance, let us consider $g_B(z)=0$. If $g_A(z)$ is monotonically decreasing, like in the power-law case of Table I, then $\Delta g_A < 0$ for $T_A>T_B$, indicating that the preferential direction is from B to A (in general, from the smaller to the larger nonlinear force). However, if the potential is sinusoidal, so that $g_A(z)$ is an increasing function (Table I), then the preferential direction is inverted with respect to the previous case (i.e., it is from A to B), as observed for asymmetric FK chains~\cite{casati-diode}. 
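As a concrete illustration of Eq.~(\ref{eq:xik}), the following minimal sketch (in Python) evaluates $\chi$ for the symmetric chain using the closed-form coefficients above. The choice $V_A^{nl}(z)=z^4/4$ (so that $g_A(z)=-3z$, per Table I, with $g_B=0$) and the numerical parameter values are our own illustrative assumptions.
\begin{verbatim}
# Sketch: rectification factor for the symmetric chain, Eq. (eq:xik).
# The potential choice and all parameter values are illustrative.
def chi(eps, k, kI, mbar, Tm, dT, gA, gB=lambda z: 0.0):
    d  = k + kI + mbar * kI**2
    sm = ((k + kI) * mbar + 1) * kI**2 / (k * (k + 2 * kI) * d)
    s  = 1.0 / d
    Tp = (sm + s) * Tm + s * dT / 2    # T_plus
    Tn = (sm + s) * Tm - s * dT / 2    # T_minus
    return eps * abs(gA(Tp) - gA(Tn) + gB(Tn) - gB(Tp)) / (2 * d)

# quartic on-site potential V_A^nl = z^4/4  =>  g_A(z) = -3 z
print(chi(0.1, 1.0, 0.5, 1.0, 1.0, 0.4, gA=lambda z: -3 * z))
\end{verbatim}
For this monotonically decreasing $g_A$, the sketch reproduces the qualitative behavior discussed above: $\chi$ grows with $\Delta T$ at fixed $T_m$, and large $\bar{m}k_I^2$ suppresses the rectification.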
Let us take a closer look at the conductivity in the limit $k_I \to 0$, for some concrete potentials $V_A^{nl}$, while $V_B^{nl}=0$. Recall that the conductivity scales as $k_I^2$, so the fluxes vanish in the limit $k_I\to 0$; however, $\chi$ remains finite for very small but nonzero $k_I$. In that limit, we have $\sigma_m=0$, $\sigma=1/k$, hence the scaled mass $\bar{m}$ does not play a role in the rectification. For the power-law on-site potential $V_A^{nl}(z)= z^{2n}/(2n)$, $g_A(z)=-(2n-1)!! z^{n-1}$, with $n>1$, in the limit $k_I \to 0$, Eq.~(\ref{eq:xik}) becomes \begin{eqnarray} \label{eq:limxi0a} \chi= \epsilon \frac{ (2n-1)!! }{ 2 k }\left[ \left(\frac{T_h}{k}\right)^{n-1} - \left(\frac{T_c}{k}\right)^{n-1} \right] \,. \end{eqnarray} Equation~(\ref{eq:limxi0a}) predicts, for instance, that the ratio $\chi$ grows with the temperature difference $\Delta T= T_h-T_c$, with positive concavity for $n>2$, nearly linearly for small $\Delta T$. These effects persist for finite $k_I$ as shown in Fig.~\ref{fig:T1}. For the sinusoidal on-site potential $V_A^{nl}(z)=- \cos(K z)/K $ (as in the FK model), $g_A(z)=- K { e}^{-K^2 z/2}$ (see Table I). In this case the dependence of $\chi$ on $k_I$ can be nonmonotonic, with a finite optimal value, as shown in Fig.~\ref{fig:xi1}. In the limit $k_I\to 0$ we obtain \begin{eqnarray} \chi= \epsilon \frac{K}{2k} \left( e^{-K^2T_c/(2k)} - e^{-K^2T_h/(2k)}\right) \,. \end{eqnarray} The dependence on $\Delta T$ (for fixed mean temperature $T_m$) is also an increasing convex function. This behavior, which also holds for finite $k_I$, as exemplified in Fig.~\ref{fig:T1}, is qualitatively similar to that reported from simulations of diode models~\cite{efficiency2,efficiency3,bastida}. For the power-law potential the nonlinear correction is weak but in the same direction. \subsection{Small $k_I$ limit} In the previous section, we have seen that the limit of small $k_I$ is relevant, and it also allows us to simplify the analytical expressions significantly. Then, in this limit, we will analyze the effect of introducing the asymmetry alternatively in the stiffness ($k_A\neq k_B$), the mass ($m_A\neq m_B$) or the damping coefficient ($\gamma_A\neq \gamma_B$). As shown in Appendix~\ref{app:smallkI}, in this limit, \begin{eqnarray} \kappa_1 & \simeq & \kappa_0\Big[\beta_A\, g_A(\sigma_A T_A) + \beta_B \, g_B( \sigma_B T_B) \Big] \,, \end{eqnarray} where the explicit expressions for the coefficients are given in Appendix~\ref{app:smallkI} for each asymmetry. The results for the rectification factor $\chi$ are illustrated in Fig.~\ref{fig:all}. {\bf Note:} We observe that any of these asymmetries can produce rectification. In particular, notice that, even when the chain is homogeneous, distinct thermostats (characterized by different friction coefficients) can also produce a diode effect. \begin{figure}[h!] \centering \includegraphics[width = 0.45\textwidth]{figure5a.pdf} \includegraphics[width = 0.45\textwidth]{figure5b.pdf} \caption{Scaled rectification factor vs. the relative temperature difference $\Delta T/T_m$, with $T_m=1$, for nonlinear on-site potentials $V_A^{nl}(z)=V_B^{nl}(z)=$ (a) $-\cos(z)$ and (b) $z^4/4$, for: $m_A=5$, $k_A=k_B=m_B=\gamma_A=\gamma_B=1$ (green); $\gamma_A=5$, $\gamma_B=k_A=k_B=m_A=m_B=1$ (blue); $k_A=5$, $k_B=m_A=m_B=\gamma_A=\gamma_B=1$ (red). The insets show the dependence on the asymmetry factor $\lambda$ that for each parameter $p$ gives $p_A =\lambda p_B=\lambda$. (Color online.) 
} \label{fig:all} \end{figure} \section{Final remarks} \label{sec:final} We have presented analytical results starting from the microscopic classical dynamics of a two-particle system with nonlinear forces. Due to the nonlinearity of the equations, we tackled the solution from a perturbative approach valid for small nonlinear intensity $\epsilon$. It is noticeable that, despite the simplicity of the system, the conductivity $\kappa = \kappa_0 + \epsilon \kappa_1$ has an intricate dependence on the system parameters. Therefore, it might be hard to portray this complexity only through molecular dynamics simulations, which makes the present effort of obtaining analytical results from first principles worthwhile. Some previously known results can be revisited from this perspective. Particularly, one can see how the temperature dependence of the conductivity emerges from the nonlinearity of the forces, through the functions $g_j(z)$. The requirements of broken symmetry and of nonlinearity explicitly appear. The results also shed light on effects observed in chains, e.g., the scaling of the conductivity with the interfacial stiffness $k_I$, and the dependence of the rectification factor on $k_I$ and on the temperature difference. How nonlinearities determine the preferential direction has also been made explicit. The role of different asymmetries (in the mass, stiffness, on-site potential and even damping coefficient) was also shown. It is interesting to note that, from Appendix~\ref{app:heat}, one can show that the nonlinearity yields a temperature-dependent power spectrum (anharmonic phonons), which can be seen as the correction to the harmonic theory responsible for phonon scattering~\cite{hanggi}. The relationship between temperature and the overlapping phonon bands has already been analytically studied for asymmetric FK chains \cite{casati-diode} and chains with dissimilar anharmonic segments (FK and Fermi-Pasta-Ulam-Tsingou)~\cite{casati-diode,BaowenLi2005}. Our results are valid when the effect of the nonlinear forces can be treated as a perturbation to the predominantly linear solutions. Consequently, the predicted diode effect is very small. However, the results allow for a clear view of the mechanisms behind rectification and of the role of diverse asymmetries and nonlinearities. Possible extensions include baths of a different nature (correlated or non-Gaussian) and nonlinear interfacial interactions. {\bf Acknowledgments:} We are grateful to Alexandre Almeida for fruitful discussions. CA acknowledges the Brazilian agency CNPq (process 311435/2020-3) for partial financial support. CAPES (finance code 001) is also acknowledged. \bibliographystyle{plain}
{ "timestamp": "2021-06-17T02:03:06", "yymm": "2105", "arxiv_id": "2105.01849", "language": "en", "url": "https://arxiv.org/abs/2105.01849" }
\section{Introduction} In a recent paper \cite{NguyenCurvature}, we derived global formulas to compute the curvature of a manifold $\mathcal{M}$, embedded differentiably in a Euclidean space $\mathcal{E}$, with metric defined by an operator $\mathsf{g}$ from $\mathcal{M}$ to the space of positive-definite operators on $\mathcal{E}$. The formulas have similar forms to the classical formula for the curvature in local coordinates. While we have provided a few applications of those formulas in that paper, we would like to show the formulas could be used to compute the curvatures for a family of manifolds important in both theory and application. The purpose of this paper is to compute and analyze curvatures of a Stiefel manifold with the family of metrics defined in \cite{ExtCurveStiefel}. It turns out this family of metrics is the same family of metrics arising from the Cheeger deformation, which has been one of the main tools to construct non-negative curvature metrics~\cite{Cheeger1973,GZ2000,Ziller2007}. Thus, the curvatures could be computed in two ways: one is from our formula using Christoffel functions, which is very similar to the local-coordinate formula; the other is to use the relationship with the Cheeger deformation. In the second method, the Stiefel manifold is identified with a quotient manifold of the special orthogonal group with a left-invariant metric. Using a result of Michor \cite{Michor2007} and the Euler-Poisson-Arnold framework \cite{Arnold1966}, we compute the $(1,3)$-curvature tensor of the Cheeger deformation of a normal homogeneous space. The second approach provides independent confirmation of our curvature formulas. The first method probably requires a lengthier calculation; however, it is conceptually straightforward and could be implemented symbolically. Recall for two positive integers $p < n$, the real Stiefel manifold $\St{p}{n}$ consists of real orthogonal matrices $Y$ of size $n\times p$. If $\alpha_1$, $\alpha_0$ are two positive numbers, the metric in \cite{ExtCurveStiefel} could be reparameterized so that the inner product of two tangent vectors $\xi, \eta$ on $\St{p}{n}$ at $Y\in \St{p}{n}$ is given by $\alpha_0\Tr(\xi^{\ft}\eta) + (\alpha_1-\alpha_0)\Tr(\xi^{\ft}YY^{\ft}\eta)$. Set $\alpha = \alpha_1/\alpha_0$; up to scaling we can take $\alpha_0= 1$. This family of metrics contains both well-known metrics on Stiefel manifolds: the embedded metric ($\alpha=1$, where the metric is induced from the embedding in $\mathbb{R}^{n\times p}$) and the canonical metric ($\alpha=\frac{1}{2}$; $\St{p}{n}$ is normal homogeneous in this case). It will be shown in \cref{prop:stf_leftinv} that if $\SOO(n)$ is equipped with a Cheeger deformation metric with deformation parameter $2\alpha$ (reviewed in \cref{sec:deform}) from the right-multiplication action of $\SOO(p)$ embedded diagonally, then $\SOO(n)/\SOO(n-p)$ with the quotient metric could be identified with $\St{p}{n}$ with the metric just described. While a framework to compute curvatures for Cheeger deformation metrics is available, explicit formulas and detailed analysis are not yet known to the best of our knowledge (note \cite{RapcsakTamas} is an early paper dealing with the embedded metric). We provide formulas for the Riemannian, Ricci, scalar, and sectional curvatures for the Stiefel manifold equipped with this family of metrics. We show the sectional curvature range always contains a specific interval, which is likely to be the full curvature range for metrics in the family. 
The ends of the interval are piecewise smooth functions described in \cref{tab:sec_range}. In particular, except for some special cases, for the embedded metric on the Stiefel manifold, we show the curvature range contains the interval $[-\frac{1}{2}, 1]$; thus it could have negative curvatures, in contrast to the canonical metric, which has range $[0, \frac{5}{4}]$. Specifically, $\St{2}{3}$ has positive curvature for $\alpha < \frac{2}{3}$, non-negative curvature for $\alpha=\frac{2}{3}$, and both negative and positive curvature for $\alpha > \frac{2}{3}$. With $n > 3$, the Stiefel manifold $\St{2}{n}$ has non-negative curvature for $\alpha \leq \frac{2}{3}$ and both negative and positive curvature for $\alpha > \frac{2}{3}$, and we identify the exact sectional curvature range in this case. For $p \geq 3$, we show $\St{p}{n}$ has non-negative curvature for $\alpha \leq \frac{1}{2}$ and both negative and positive curvature otherwise. This agrees with \cite{GZ2000}, and we actually show the curvature range contains negative values in the indicated intervals. We also show the Stiefel manifold always has an Einstein metric, and when $p >2$, there are two metrics in the family (up to a scaling factor) that make the Stiefel manifold an Einstein manifold. We note this may be the same metric as in \cite{Sagle1970}. For notations, if $n$ and $m$ are two positive integers, by $\mathbb{R}^{n\times m}$ we denote the space of $n\times m$ matrices in $\mathbb{R}$, the field of real numbers. We denote by $\mathfrak{o}(p)$ the space of antisymmetric matrices in $\mathbb{R}^{p\times p}$. The transpose of a matrix or the adjoint of an operator is denoted by $\ft$. Working on a manifold, say $\mathcal{M}$, by $\rD_{\xi} F$ we denote the directional (Lie) derivative of a scalar\slash vector\slash operator-valued function $F$ on $\mathcal{M}$ in direction $\xi$ (either a tangent vector defined at a point $x\in\mathcal{M}$, or a vector field on $\mathcal{M}$). If $\mathcal{E}$ is a Euclidean space (an inner product space with a positive-definite inner product), the space of linear operators on $\mathcal{E}$ is denoted by $\mathfrak{L}(\mathcal{E}, \mathcal{E})$. Similarly, we denote by $\mathfrak{L}(\mathcal{E}\otimes \mathcal{E}, \mathcal{E})$ the space of bilinear forms on $\mathcal{E}$ with values in $\mathcal{E}$. For two positive integers $n$ and $p$, the Stiefel manifold $\St{p}{n}$ is the space of matrices $Y\in \mathbb{R}^{n\times p}$ satisfying $Y^{\ft}Y = \dI_p$. The Frobenius norm is denoted by $\|\cdot\|_F$. \section{Curvature formulas for embedded manifolds with metric operators}\label{sec:review} Let $\mathcal{M}\subset \mathcal{E}$ be a differentiable embedding, where $\mathcal{E}$ is a Euclidean space with a given inner product $\langle\cdot,\cdot\rangle_{\mathcal{E}}$ and $\mathcal{M}$ is a differentiable submanifold, and let $\mathsf{g}$ be an operator-valued function from $\mathcal{M}$ to $\mathfrak{L}(\mathcal{E}, \mathcal{E})$. If $\mathsf{g}$ is positive-definite, then $\mathsf{g}$ induces a Riemannian metric on $\mathcal{M}$, where the inner product of two tangent vectors $\xi,\eta$ at a point $x\in\mathcal{M}$ is defined by $\langle \xi, \mathsf{g}_x\eta\rangle_{\mathcal{E}}$. Here, each tangent space $T_x\mathcal{M}$ is identified with a subspace of $\mathcal{E}$ thanks to the embedding, so $\xi, \eta$ are considered as elements of $\mathcal{E}$, while $\mathsf{g}_x$ denotes the evaluation of the operator $\mathsf{g}$ at $x$. We call $(\mathcal{M}, \mathsf{g}, \mathcal{E})$ an embedded ambient structure. 
The embedding allows us to identify vector fields on $\mathcal{M}$ with $\mathcal{E}$-valued functions, thus we can take directional derivatives. A Christoffel function is a function $\Gamma$ from $\mathcal{M}$ with values in $\mathfrak{L}(\mathcal{E}\otimes\mathcal{E}, \mathcal{E})$, the space of $\mathcal{E}$-bilinear forms, such that for two vector fields $\mathtt{X}, \mathtt{Y}$ on $\mathcal{M}$, the Levi-Civita connection on $\mathcal{M}$ is given by $$\nabla_\mathtt{X}\mathtt{Y} = \rD_\mathtt{X}\mathtt{Y} + \Gamma(\mathtt{X}, \mathtt{Y})$$ In \cite{NguyenCurvature} we proved the following curvature formulas for three tangent vectors $\xi, \eta, \phi$ \begin{equation}\label{eq:rc1a} \begin{gathered} \RcM_{\xi,\eta}\phi = -(\rD_{\xi}\Gamma)(\eta, \phi) + (\rD_{\eta}\Gamma)(\xi, \phi)-\Gamma(\xi, \Gamma(\eta, \phi)) +\Gamma(\eta, \Gamma(\xi, \phi))\\ \RcM_{\xi,\eta}\phi = -(\rD_{\xi}\Gamma)(\eta, \phi) + (\rD_{\eta}\Gamma)(\xi, \phi)-\Gamma(\Gamma(\phi, \eta), \xi) +\Gamma(\Gamma(\phi, \xi), \eta) \end{gathered} \end{equation} where $\rD_{\xi}\Gamma$ denotes the directional derivative of $\Gamma$, considered as an operator-valued function, in the direction $\xi$, for example. The curvature for three vector fields $\mathtt{X}, \mathtt{Y}, \mathtt{Z}$ is defined in the convention $$\RcM_{\mathtt{X}\mathtt{Y} }\mathtt{Z} = \nabla_{[\mathtt{X}, \mathtt{Y}]} \mathtt{Z} - \nabla_{\mathtt{X}} \nabla_{\mathtt{Y}} \mathtt{Z} + \nabla_{\mathtt{Y}} \nabla_{\mathtt{X}} \mathtt{Z}$$ \section{Curvatures of the Stiefel manifold} \label{sec:stiefel} In the following, $p < n$ are two positive integers. In \cite{ExtCurveStiefel}, the authors introduced a family of metrics on the Stiefel manifold $\St{p}{n}$ of orthogonal matrices in $\mathbb{R}^{n\times p}$ (thus $Y^{\ft}Y = \dI_p$). We introduced a different parameterization in \cite{Nguyen2020a}. The metric depends on two positive real numbers $\alpha_0$, $\alpha_1$ with ratio $\alpha = \frac{\alpha_1}{\alpha_0}$. In the convention of \cref{sec:review}, we have $\mathcal{M} := \St{p}{n}\subset \mathcal{E} :=\mathbb{R}^{n\times p}$, where the base inner product on $\mathcal{E}$ is the Frobenius inner product; thus $\langle \omega_1, \omega_2\rangle_{\mathcal{E}} = \Tr(\omega_1\omega_2^{\ft})$ for $\omega_1,\omega_2\in\mathcal{E}$. Consider the metric operator $\mathsf{g}\omega = \mathsf{g}_Y\omega := \alpha_0\omega + (\alpha_1 - \alpha_0)YY^{\ft}\omega$, for $Y\in \St{p}{n}$, with inverse $\mathsf{g}^{-1}\omega = \alpha^{-1}_0\omega + (\alpha^{-1}_1 - \alpha^{-1}_0)YY^{\ft}\omega$. The inner product on $\mathcal{E}$ induced by $\mathsf{g}$ is $\langle\omega_1, \mathsf{g}_Y\omega_2\rangle_{\mathcal{E}} = \alpha_0\Tr\omega_1\omega_2^{\ft} + (\alpha_1 - \alpha_0)\Tr\omega_1^{\ft} YY^{\ft}\omega_2$, and this induces a Riemannian metric on $\St{p}{n}$. A geodesic equation for this metric was derived in \cite{ExtCurveStiefel}, and we provided a different derivation of a Christoffel function $\Gamma$ in \cite{Nguyen2020a}. We will give another derivation of $\Gamma$ in \cref{prop:stf_leftinv} to clarify the concepts and keep the material reasonably independent. 
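As a quick numerical illustration of the metric operator, the following minimal sketch (in Python) checks that the stated inverse indeed undoes $\mathsf{g}_Y$ and evaluates the induced inner product; the dimensions, the values of $\alpha_0, \alpha_1$, and the variable names are our own illustrative assumptions.
\begin{verbatim}
# Sketch: the metric operator g_Y and its inverse on E = R^{n x p}.
# Dimensions and alpha_0, alpha_1 are illustrative assumptions.
import numpy as np

n, p, a0, a1 = 5, 2, 1.0, 0.7
rng = np.random.default_rng(0)
Y, _ = np.linalg.qr(rng.standard_normal((n, p)))  # Y^T Y = I_p

g     = lambda w: a0 * w + (a1 - a0) * Y @ (Y.T @ w)
g_inv = lambda w: w / a0 + (1 / a1 - 1 / a0) * Y @ (Y.T @ w)

w1, w2 = rng.standard_normal((n, p)), rng.standard_normal((n, p))
print(np.allclose(g_inv(g(w1)), w1))  # True: g_inv inverts g
print(np.trace(w1.T @ g(w2)))         # induced inner product <w1, w2>_g
\end{verbatim}
Since $YY^{\ft}$ is the orthogonal projection onto the column span of $Y$, the operator $\mathsf{g}_Y$ rescales that subspace by $\alpha_1$ and its orthogonal complement by $\alpha_0$, which explains the stated form of the inverse.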
For an orthogonal matrix $Y\in\St{p}{n}$ and $\omega, \omega_1, \omega_2\in\mathbb{R}^{n\times p}$, a Christoffel function is \begin{equation}\begin{gathered}\label{stf_gamma} \Gamma(\omega_1, \omega_2) = \frac{1}{2}Y(\omega_1^{\ft}\omega_2+\omega_2^{\ft}\omega_1) +(1-\alpha)(\dI_n-YY^{\ft})(\omega_1\omega_2^{\ft}+\omega_2\omega_1^{\ft})Y \end{gathered} \end{equation} We can extend $Y$ to a full basis $(Y|Y_{\perp})$ of $\mathbb{R}^n$ by adding $Y_{\perp}$, an orthogonal complement of $Y$. Thus, $Y_{\perp}Y_{\perp}^{\ft} = \dI_n - YY^{\ft}$, $Y_{\perp}^{\ft}Y_{\perp} = \dI_{n-p}, Y^{\ft}Y_{\perp} = 0, Y_{\perp}^{\ft}Y = 0$. Any matrix $\omega\in\mathcal{E}=\mathbb{R}^{n\times p}$ could be represented in this basis as $\omega = YA +Y_{\perp} B$ with $A\in\mathbb{R}^{p\times p}$, $B\in\mathbb{R}^{(n-p)\times p}$, and $\omega$ is a tangent vector to $\St{p}{n}$ at $Y$ if and only if $A$ is antisymmetric, $A\in\mathfrak{o}(p)$, or equivalently $Y^{\ft}\omega +\omega^{\ft}Y = 0$. For two tangent vectors $\xi$ and $\eta$ at a point on the manifold, denote by $\langle\cdot,\cdot\rangle_{\mathsf{g}}$ and $\|\cdot\|_{\mathsf{g}}$ the inner product and the norm defined by a metric operator $\mathsf{g}$. We will denote the wedge, the sectional curvature numerator, and the sectional curvature by \begin{equation} \begin{gathered} ||\xi\wedge\eta||_{\mathsf{g}}^2 = ||\xi||_{\mathsf{g}}^2||\eta||_{\mathsf{g}}^2 -\langle\xi, \eta\rangle_{\mathsf{g}}^2\\ \hcK(\xi, \eta) = \langle\RcM_{\xi, \eta}\xi, \eta\rangle_{\mathsf{g}}\\ \mathcal{K}(\xi, \eta) = \frac{\hcK(\xi, \eta)}{||\xi\wedge\eta||_{\mathsf{g}}^2}\\ \end{gathered} \end{equation} \begin{theorem}\label{prop:stiefel_cur} Represent three tangent vectors $\xi, \eta, \phi\in \mathbb{R}^{n\times p}$ at $Y\in\St{p}{n}$ in an orthogonal basis $(Y|Y_{\perp})$ of $\mathbb{R}^n$ as $\xi= YA_1+Y_{\perp}B_1, \eta = YA_2+Y_{\perp} B_2, \phi=YA_3+Y_{\perp} B_3$, where $A_1, A_2, A_3\in\mathfrak{o}(p)$ and $B_1, B_2, B_3 \in \mathbb{R}^{(n-p)\times p}$. 
Then the Riemannian curvature tensor is $\RcM_{\xi\eta}\phi = YA_R + Y_{\perp} B_R$ with $A_R\in\mathfrak{o}(p), B_R\in\mathbb{R}^{(n-p)\times p}$ where \begin{equation}\label{eq:cur_ABCA} \begin{gathered} A_R = Y^{\ft}\RcM_{\xi\eta}\phi =\frac{1-2\alpha}{4} (A_{1} B_{3}^{\ft} B_{2} - A_{2} B_{3}^{\ft} B_{1} - B_{1}^{\ft} B_{3} A_{2} + B_{2}^{\ft} B_{3} A_{1}) +\\ \frac{1-\alpha}{2}(A_{3} B_{1}^{\ft} B_{2} - A_{3} B_{2}^{\ft} B_{1} - B_{1}^{\ft} B_{2}A_{3}+ B_{2}^{\ft} B_{1} A_{3}) +\\ \frac{1}{4}([[A_{1}, A_{2}], A_{3}] - A_{1} B_{2}^{\ft} B_{3} + A_{2} B_{1}^{\ft} B_{3} + B_{3}^{\ft} B_{1} A_{2} - B_{3}^{\ft} B_{2} A_{1}) \end{gathered} \end{equation} \begin{equation}\label{eq:cur_ABCB} \begin{gathered} B_R = Y_{\perp}^{\ft}\RcM_{\xi\eta}\phi = \frac{2\alpha^{2}-\alpha}{2} (B_{1} A_{3} A_{2} - B_{2} A_{3} A_{1}) +\\ (\alpha^{2}-\alpha) (B_{3} A_{1} A_{2} - B_{3} A_{2} A_{1}) +(1 - \alpha) (B_{3} B_{1}^{\ft} B_{2} - B_{3} B_{2}^{\ft} B_{1}) +\\ \frac{\alpha-2}{2} (B_{1} B_{2}^{\ft} B_{3} -B_{2} B_{1}^{\ft} B_{3}) + \frac{\alpha}{2}(B_{1} A_{2} A_{3} - B_{1} B_{3}^{\ft} B_{2} - B_{2} A_{1} A_{3} + B_{2} B_{3}^{\ft} B_{1}) \end{gathered} \end{equation} If $p > 1$, the Ricci and scalar curvatures are given by: \begin{equation}\label{eq:ricci} \begin{gathered} \textsc{Ric}(\xi, \eta)= (\frac{2-p}{4} + (p-n)\alpha^2)\Tr(A_1A_2) + [(1-p)\alpha + (n-2)]\Tr(B_1^{\ft}B_2) \end{gathered} \end{equation} \begin{equation} \begin{gathered} \textsc{Scl}(Y) = ((1-p)\alpha + n-2)(n-p)p + ((n-p)\alpha + \frac{p-2}{4\alpha})\frac{p(p-1)}{2} \end{gathered} \end{equation} The sectional curvature numerator $\hcK$ is computed from one of the following \begin{equation}\label{eq:sec_cur} \begin{gathered} \hcK = \Tr(\frac{2-3\alpha}{2}B_2^{\ft}B_1B_1^{\ft}B_2 + \frac{3\alpha-4}{2}B_2^{\ft}B_1B_2^{\ft}B_1 + B_2^{\ft}B_2B_1^{\ft}B_1-\frac{\alpha}{4}[A_1, A_2]^2) \\ +\alpha\Tr((4\alpha-3)A_1A_2B_2^{\ft}B_1 + (3-2\alpha)A_1A_2B_1^{\ft}B_2-\alpha A_2^2B_1^{\ft}B_1 -\alpha A_1^2B_2^{\ft}B_2) \end{gathered} \end{equation} \begin{equation}\label{eq:sec_sum_sq} \begin{gathered} \hcK = \frac{\alpha}{4}\|[A_1,A_2] +(3-4\alpha)(B_2^{\ft}B_1-B_1^{\ft}B_2 )\|_F^2 +\\ \alpha^2\|B_1A_2-B_2A_1\|_F^2 + \frac{1}{2}\|B_1B_2^{\ft}-B_2B_1^{\ft}\|_F^2 + \frac{(1-2\alpha)^3}{2}\|B_2^{\ft}B_1-B_1^{\ft}B_2\|_F^2 \end{gathered} \end{equation} In particular, if $\alpha \leq \frac{1}{2}$, the sectional curvature is non-negative. If $\xi$ and $\eta$ are orthogonal, the sectional curvature denominator is $(\alpha_1\Tr A_1A_1^{\ft} +\alpha_0\Tr B_1B_1^{\ft})(\alpha_1\Tr A_2A_2^{\ft} +\alpha_0\Tr B_2B_2^{\ft})$. \end{theorem} We also use the following expansion of \cref{eq:sec_sum_sq} when $A_1$ or $A_2$ is zero. \begin{equation}\label{eq:sec_sum_sq_2} \begin{gathered} \hcK = \frac{\alpha}{4}\|[A_1,A_2]\|_F^2 +\frac{\alpha(3-4\alpha)}{2}\Tr[A_1, A_2](B_2^{\ft}B_1-B_1^{\ft}B_2 )^{\ft} +\\ \frac{2-3\alpha}{4}\|B_2^{\ft}B_1-B_1^{\ft}B_2\|_F^2 + \alpha^2\|B_1A_2-B_2A_1\|_F^2 + \frac{1}{2}\|B_1B_2^{\ft}-B_2B_1^{\ft}\|_F^2 \end{gathered} \end{equation} \begin{proof} As noted, any $\omega\in \mathbb{R}^{n\times p}$ could be expressed as $\omega = YA +Y_{\perp} B$, however $A$ may not be antisymmetric. 
By direct substitution $(\dI_n-Y\bY^{\ft})(\eta\omega^{\ft}+\omega\eta^{\ft})Y = Y_{\perp}(B_2A^{\ft} - BA_2)$, hence $$\Gamma(\eta, \omega) = \frac{1}{2}Y(- A_{2} A + A^{\ft} A_{2} + B^{\ft} B_{2} + B_{2}^{\ft} B) +(1-\alpha)Y_{\perp}(B_{2} A^{\ft} - B A_{2})$$ In particular, $Y^{\ft}\Gamma(\eta, \omega) = \frac{1}{2}(- A_{2} A + A^{\ft} A_{2} + B^{\ft} B_{2} + B_{2}^{\ft} B)$, $Y_{\perp}^{\ft}\Gamma(\eta, \omega) = (1-\alpha)(B_2A^{\ft} - BA_2)$, and $$\begin{gathered}\rD_{\xi}\Gamma(\eta, \phi) = \frac{1}{2}\xi(\eta^{\ft}\phi+\phi^{\ft}\eta) + \\(1-\alpha)\{ (\dI_n-Y\bY^{\ft})(\eta\phi^{\ft}+\phi\eta^{\ft})\xi -(\xiY^{\ft} +Y\xi^{\ft})(\eta\phi^{\ft}+\phi\eta^{\ft})Y \}\end{gathered} $$ Expanding $\xi, \eta,\phi$ $$\begin{gathered} Y_{\perp}^{\ft}(\rD_{\xi}\Gamma)(\eta, \phi)= \frac{1}{2}B_1(-A_2A_3-A_3A_2+B_2^{\ft}B_3+B_3^{\ft}B_2) +\\ (1-\alpha)\{B_2(-A_3 Y^{\ft}+ B_3^{\ft}Y_{\perp}) + B_3(-A_2Y^{\ft} + B_2^{\ft}Y_{\perp})\}(YA_1+Y_{\perp} B_1) -\\ (1-\alpha)(B_1Y^{\ft}(-YA_2A_3-YA_3A_2) \\ = \frac{B_{1} B_{2}^{\ft} B_{3}}{2}+ \frac{B_{1} B_{3}^{\ft} B_{2}}{2} +(\frac{1}{2} - \alpha)(B_{1} A_{2} A_{3} + B_{1} A_{3} A_{2}) +\\ (1-\alpha)(- B_{2} A_{3} A_{1}+ B_{2} B_{3}^{\ft} B_{1} - B_{3} A_{2} A_{1}+ B_{3} B_{2}^{\ft} B_{1}) \end{gathered}$$ Simplify $Y^{\ft}(\xi Y^{\ft} + Y\xi^{\ft}) = A_1Y^{\ft}-A_1Y^{\ft}+B_1^{\ft}Y_{\perp}^{\ft}=B_1^{\ft}Y_{\perp}^{\ft} $ $$\begin{gathered} Y^{\ft}(\rD_{\xi}\Gamma)(\eta, \phi)= \frac{1}{2}A_1(-A_2A_3+B_2^{\ft}B_3-A_3A_2+B_3^{\ft}B_2) -\\(1-\alpha)(B_1^{\ft}Y_{\perp}^{\ft})(-Y_{\perp} B_2A_3Y^{\ft}-Y_{\perp} B_3A_2Y^{\ft})Y=\\ (1- \alpha)(B_{1}^{\ft} B_{2} A_{3} + B_{1}^{\ft} B_{3} A_{2}) + \frac{1}{2}(-A_{1} A_{2} A_{3} -A_{1} A_{3} A_{2} + A_{1} B_{2}^{\ft} B_{3} + A_{1} B_{3}^{\ft} B_{2}) \end{gathered}$$ Next, use the formula for $\Gamma(\xi, \omega)$ with $\omega = \Gamma(\eta, \phi)$ $$\begin{gathered} Y^{\ft}\Gamma(\xi, \Gamma(\eta, \phi)) = \frac{1}{2}(-A_1(\frac{1}{2}(- A_{2} A_3 - A_3 A_{2} + B_3^{\ft} B_{2} + B_{2}^{\ft} B_3) ) +\\ (\frac{1}{2}(- A_{2} A_3 - A_3 A_{2} + B_3^{\ft} B_{2} + B_{2}^{\ft} B_3))^{\ft}A_1 +B_1^{\ft}((1-\alpha)(-B_2A_3 - B_3A_2) ) +\\ ((1-\alpha)(-B_2A_3 - B_3A_2))^{\ft}B_1) = \\ \frac{1-\alpha}{2}( A_{2} B_{3}^{\ft} B_{1} + A_{3} B_{2}^{\ft} B_{1} - B_{1}^{\ft} B_{2} A_{3} - B_{1}^{\ft} B_{3} A_{2}) + \frac{1}{4}(A_{1} A_{2} A_{3} +\\ A_{1} A_{3} A_{2} -A_{1} B_{2}^{\ft} B_{3} - A_{1} B_{3}^{\ft} B_{2} - A_{2} A_{3} A_{1} - A_{3} A_{2} A_{1} + B_{2}^{\ft} B_{3} A_{1} + B_{3}^{\ft} B_{2} A_{1}) \end{gathered}$$ $$\begin{gathered}Y_{\perp}(\Gamma(\xi, \Gamma(\eta, \phi)) = (1-\alpha)\{B_1(\frac{1}{2}(- A_{2} A_3 - A_3 A_{2} + B_3^{\ft} B_{2} + B_{2}^{\ft} B_3)^{\ft} -\\ ((1-\alpha)(-B_2A_3 - B_3A_2))A_1)\}=\\ (\alpha-1)^{2} (B_{2} A_{3} A_{1} + B_{3} A_{2} A_{1}) + \frac{\alpha-1}{2}(B_{1} A_{2} A_{3} + B_{1} A_{3} A_{2} - B_{1} B_{2}^{\ft} B_{3} - B_{1} B_{3}^{\ft} B_{2}) \end{gathered}$$ Therefore: $$\begin{gathered}Y^{\ft}\RcM_{\xi\eta}\phi= -\{(1- \alpha)(B_{1}^{\ft} B_{2} A_{3} + B_{1}^{\ft} B_{3} A_{2}) +\\ \frac{1}{2}(-A_{1} A_{2} A_{3} -A_{1} A_{3} A_{2} + A_{1} B_{2}^{\ft} B_{3} + A_{1} B_{3}^{\ft} B_{2})\} +\\ \{(1- \alpha)(B_{2}^{\ft} B_{1} A_{3} + B_{2}^{\ft} B_{3} A_{1}) + \frac{1}{2}(-A_{2} A_{1} A_{3} -A_{2} A_{3} A_{1} + A_{2} B_{1}^{\ft} B_{3} + A_{2} B_{3}^{\ft} B_{1})\} -\\ \{\frac{1-\alpha}{2}( A_{2} B_{3}^{\ft} B_{1} + A_{3} B_{2}^{\ft} B_{1} - B_{1}^{\ft} B_{2} A_{3} - B_{1}^{\ft} B_{3} A_{2}) + \frac{1}{4}(A_{1} A_{2} A_{3} +\\ A_{1} A_{3} A_{2} -A_{1} B_{2}^{\ft} B_{3} - A_{1} B_{3}^{\ft} 
B_{2} - A_{2} A_{3} A_{1} - A_{3} A_{2} A_{1} + B_{2}^{\ft} B_{3} A_{1} + B_{3}^{\ft} B_{2} A_{1})\} + \\ \{\frac{1-\alpha}{2}( A_{1} B_{3}^{\ft} B_{2} + A_{3} B_{1}^{\ft} B_{2} - B_{2}^{\ft} B_{1} A_{3} - B_{2}^{\ft} B_{3} A_{1}) + \frac{1}{4}(A_{2} A_{1} A_{3} +\\ A_{2} A_{3} A_{1} -A_{2} B_{1}^{\ft} B_{3} - A_{2} B_{3}^{\ft} B_{1} - A_{1} A_{3} A_{2} - A_{3} A_{1} A_{2} + B_{1}^{\ft} B_{3} A_{2} + B_{3}^{\ft} B_{1} A_{2})\} \\ = \frac{1-2\alpha}{4} (A_{1} B_{3}^{\ft} B_{2} - A_{2} B_{3}^{\ft} B_{1} - B_{1}^{\ft} B_{3} A_{2} + B_{2}^{\ft} B_{3} A_{1}) +\\ \frac{1-\alpha}{2}(A_{3} B_{1}^{\ft} B_{2} - A_{3} B_{2}^{\ft} B_{1} - B_{1}^{\ft} B_{2}A_{3}+ B_{2}^{\ft} B_{1} A_{3}) + \frac{1}{4}(A_{1} A_{2} A_{3} - A_{1} B_{2}^{\ft} B_{3} -\\ A_{2} A_{1} A_{3} + A_{2} B_{1}^{\ft} B_{3} - A_{3} A_{1} A_{2} + A_{3} A_{2} A_{1} + B_{3}^{\ft} B_{1} A_{2} - B_{3}^{\ft} B_{2} A_{1}) \end{gathered} $$ The last expression follows from a term by term collection, for example, the coefficient of $A_1A_2A_3$ is $-(-1/2) -1/4=1/4$, and similarly for all terms with coefficient $1/4$. The coefficient for $A_1B_3^{\ft}B_2$ is $-1/2+1/4+(1-\alpha)/2=(1-2\alpha/4)$, and similar to all the terms with that coefficient. $$\begin{gathered}Y_{\perp}^{\ft}\RcM_{\xi\eta}\phi = -(\frac{B_{1} B_{2}^{\ft} B_{3}}{2}+ \frac{B_{1} B_{3}^{\ft} B_{2}}{2} +(\frac{1}{2} - \alpha)(B_{1} A_{2} A_{3} + B_{1} A_{3} A_{2}) +\\ (1-\alpha)(- B_{2} A_{3} A_{1}+ B_{2} B_{3}^{\ft} B_{1} - B_{3} A_{2} A_{1}+ B_{3} B_{2}^{\ft} B_{1}) ) +\\ ( \frac{B_{2} B_{1}^{\ft} B_{3}}{2}+ \frac{B_{2} B_{3}^{\ft} B_{1}}{2} +(\frac{1}{2} - \alpha)(B_{2} A_{1} A_{3} + B_{2} A_{3} A_{1}) +\\ (1-\alpha)(- B_{1} A_{3} A_{2}+ B_{1} B_{3}^{\ft} B_{2} - B_{3} A_{1} A_{2}+ B_{3} B_{1}^{\ft} B_{2}) ) -\\ (\alpha-1)^{2} (B_{2} A_{3} A_{1} + B_{3} A_{2} A_{1}) - \frac{\alpha-1}{2}(B_{1} A_{2} A_{3} + B_{1} A_{3} A_{2} - B_{1} B_{2}^{\ft} B_{3} - B_{1} B_{3}^{\ft} B_{2})\\ +(\alpha-1)^{2} (B_{1} A_{3} A_{2} + B_{3} A_{1} A_{2}) + \frac{\alpha-1}{2}(B_{2} A_{1} A_{3} + B_{2} A_{3} A_{1} - B_{2} B_{1}^{\ft} B_{3} - B_{2} B_{3}^{\ft} B_{1}) \\ \end{gathered}$$ Again, we collect term by term, (we do use a symbolic calculation program helper). The coefficient for $B_1B_2^{\ft}B_3$ is $-1/2+(\alpha-1)/2=(\alpha-2)/2$, and similar for $B_2B_1^{\ft}B3$. The coefficient for $B_1B_3^{\ft}B_2$ is $-1/2+(1-\alpha) +(\alpha-1)/2=-\alpha/2$, and similar for $B_2B_3^{\ft}B_1$. The coefficient for $B_1A_2A_3$ is $-(1/2-\alpha)-(\alpha-1)/2=\alpha/2$, and similar for $B_2A_1A_3$. The coefficient for $B_1A_3A_2$ is $ -(\frac{1}{2}-\alpha)-(1-\alpha) -\frac{\alpha-1}{2} +(\alpha-1)^2= \alpha^2-\frac{\alpha}{2} = \frac{2\alpha^2-\alpha}{2}$ and similar for $B_2A_3A_1$. The coefficient for $B_3A_2A_1$ is $(1-\alpha)-(\alpha-1)^2 = \alpha-\alpha^2$, and $B_3A_1A_2$ follows by permutation. The coefficient for $B_3B_2^{\ft}B_1$ is $-(1-\alpha)$, and similar for $B_3B_1^{\ft}B_2$. Finally $$\begin{gathered} Y_{\perp}^{\ft}\RcM_{\xi\eta}\phi=\frac{2\alpha^{2}-\alpha}{2} (B_{1} A_{3} A_{2} - B_{2} A_{3} A_{1}) + (\alpha^{2}-\alpha) (B_{3} A_{1} A_{2} - B_{3} A_{2} A_{1}) +\\(1 - \alpha) (B_{3} B_{1}^{\ft} B_{2} - B_{3} B_{2}^{\ft} B_{1}) +\frac{\alpha-2}{2} (B_{1} B_{2}^{\ft} B_{3} -B_{2} B_{1}^{\ft} B_{3}) +\\ \frac{\alpha}{2}(B_{1} A_{2} A_{3} - B_{1} B_{3}^{\ft} B_{2} - B_{2} A_{1} A_{3} + B_{2} B_{3}^{\ft} B_{1}) \end{gathered} $$ The Ricci curvature is $\Tr((A_2, B_2)\mapsto (A_R, B_R))$. 
Using item 3 of \cref{lem:mat_traces}, for the $A_R$ component, we compute the trace of $$A_2\mapsto \frac{1-2\alpha}{4}(- A_{2} B_{3}^{\ft} B_{1} - B_{1}^{\ft} B_{3} A_{2}) + \frac{1}{4}([[A_{1}, A_{2}], A_{3}] + A_{2} B_{1}^{\ft} B_{3} + B_{3}^{\ft} B_{1} A_{2})$$ which evaluates to $\frac{1-2\alpha}{4}(p-1)\Tr(-B_3^{\ft}B_1) + \frac{1}{4}((2-p)\Tr(A_1A_3)+p\Tr(B_1^{\ft}B_3)-\Tr(B_1^{\ft}B_3))$, or $\frac{2-p}{4}\Tr(A_1A_3) + (p-1)\frac{\alpha}{2}\Tr(B_1^{\ft}B_3)$. Here, we need $p>1$, otherwise $\mathfrak{o}(p)$ is zero and there is no contribution from this component. For the $B_R$ component, use item 1 of \cref{lem:mat_traces}, we compute $$\begin{gathered}\Tr(B_2\mapsto \frac{2\alpha^{2}-\alpha}{2} ( - B_{2} A_{3} A_{1}) + (1 - \alpha) (B_{3} B_{1}^{\ft} B_{2} - B_{3} B_{2}^{\ft} B_{1}) +\\ \frac{\alpha-2}{2} (B_{1} B_{2}^{\ft} B_{3} -B_{2} B_{1}^{\ft} B_{3}) + \frac{\alpha}{2}(- B_{1} B_{3}^{\ft} B_{2} - B_{2} A_{1} A_{3} + B_{2} B_{3}^{\ft} B_{1}))\\= \frac{2\alpha^{2}-\alpha}{2}(n-p) \Tr(-A_{3} A_{1}) + (1 - \alpha)(p-1)\Tr(B_{3} B_{1}^{\ft}) +\\ \frac{\alpha-2}{2}(1-n+p)\Tr(B_{1}^{\ft} B_{3}) + \frac{\alpha(n-2p)}{2}\Tr(B_{1} B_{3}^{\ft}) - \frac{\alpha(n-p)}{2}\Tr(A_{1} A_{3}) \end{gathered}$$ The Ricci curvature is $$\begin{gathered}(\frac{2-p}{4} -\frac{2\alpha^{2}-\alpha}{2}(n-p) - \frac{\alpha(n-p)}{2} ) \Tr(A_1A_3) +\\ \{(p-1)\frac{\alpha}{2}+(1 - \alpha)(p-1) +\frac{\alpha-2}{2}(1-n+p) + \frac{\alpha(n-2p)}{2}\}\Tr(B_1^{\ft}B_3)\\ = (\frac{2-p}{4} + (p-n)\alpha^2)\Tr(A_1A_2) + [(1-p)\alpha + (n-2)]\Tr(B_1^{\ft}B_2) \end{gathered} $$ The Ricci map is thus $(A_2, B_2)\mapsto ((\frac{p-2}{4\alpha} + (n-p)\alpha)A_2, ((1-p)\alpha + (n-2))B_2)$, which gives us the scalar curvature formula. For the sectional curvature, we substitute $A_1, B_1$ in place of $A_3, B_3$ in the expressions for $A_R$ and $B_R$, then compute $\Tr(-\alpha A_R A_2 + B_R B_2^{\ft})$ $$\begin{gathered} \hcK(\xi, \eta) = \Tr(-\alpha(\frac{1-2\alpha}{4} (A_{1} B_{1}^{\ft} B_{2} - A_{2} B_{1}^{\ft} B_{1} - B_{1}^{\ft} B_{1} A_{2} + B_{2}^{\ft} B_{1} A_{1}) +\\ \frac{1-\alpha}{2}(A_{1} B_{1}^{\ft} B_{2} - A_{1} B_{2}^{\ft} B_{1} - B_{1}^{\ft} B_{2}A_{1}+ B_{2}^{\ft} B_{1} A_{1}) +\\ \frac{1}{4}([[A_{1}, A_{2}], A_{1}] - A_{1} B_{2}^{\ft} B_{1} + A_{2} B_{1}^{\ft} B_{1} + B_{1}^{\ft} B_{1} A_{2} - B_{1}^{\ft} B_{2} A_{1}))A_{2}) \\+\Tr(( \frac{2\alpha^{2}-\alpha}{2} (B_{1} A_{1} A_{2} - B_{2} A_{1} A_{1}) + (\alpha^{2}-\alpha) (B_{1} A_{1} A_{2} - B_{1} A_{2} A_{1}) +\\(1 - \alpha) (B_{1} B_{1}^{\ft} B_{2} - B_{1} B_{2}^{\ft} B_{1}) +\frac{\alpha-2}{2} (B_{1} B_{2}^{\ft} B_{1} -B_{2} B_{1}^{\ft} B_{1}) +\\ \frac{\alpha}{2}(B_{1} A_{2} A_{1} - B_{1} B_{1}^{\ft} B_{2} - B_{2} A_{1} A_{1} + B_{2} B_{1}^{\ft} B_{1}))B_2^{\ft}) \end{gathered} $$ We collect terms. From $-\Tr([[A_1,A_2]A_1]A_2) = \Tr[A_1, A_2][A_1, A_2]^{\ft}$, terms involving $A_1, A_2$ only are $\alpha/4\Tr[A_1, A_2][A_1, A_2]^{\ft}$. 
Terms with both $A$'s and $B$'s: $$\begin{gathered} \Tr(\alpha(\frac{1-2\alpha}{4} (-A_{1} B_{1}^{\ft} B_{2}A_2 + A_{2} B_{1}^{\ft} B_{1}A_2 + B_{1}^{\ft} B_{1} A_{2}^2 - B_{2}^{\ft} B_{1} A_{1}A_2) +\\ \frac{1-\alpha}{2}(-A_{1} B_{1}^{\ft} B_{2}A_2 + A_{1} B_{2}^{\ft} B_{1}A_2 + B_{1}^{\ft} B_{2}A_{1}A_2 - B_{2}^{\ft} B_{1} A_{1}A_2) +\\ \frac{1}{4}( A_{1} B_{2}^{\ft} B_{1}A_2 - A_{2} B_{1}^{\ft} B_{1}A_2 - B_{1}^{\ft} B_{1} A_{2}^2 + B_{1}^{\ft} B_{2} A_{1}A_{2}))) \\+\alpha\Tr( \frac{2\alpha-1}{2} (B_{1} A_{1} A_{2}B_2^{\ft} - B_{2} A_{1} A_{1}B_2^{\ft}) + (\alpha-1) (B_{1} A_{1} A_{2}B_2^{\ft} - B_{1} A_{2} A_{1}B_2^{\ft}) +\\ \frac{1}{2}(B_{1} A_{2} A_{1}B_2^{\ft} - B_{2} A_{1} A_{1}B_2^{\ft}))=\\ \alpha\Tr( (\frac{1-\alpha}{2}+\frac{1}{4}-(\alpha-1)+\frac{1}{2})A_2A_1B_2^{\ft}B_1+ (-\frac{1-2\alpha}{4}-\frac{1-\alpha}{2})A_2A_1B_1^{\ft}B_2 +\\ (-\frac{1-2\alpha}{4}-\frac{1-\alpha}{2}+\alpha-1+\frac{2\alpha-1}{2})A_1A_2B_2^{\ft}B_1+ \\ (\frac{1-\alpha}{2}+\frac{1}{4})A_1A_2B_1^{\ft}B_2+ (2\frac{1-2\alpha}{4}-\frac{2}{4}) A_2^2B_1^{\ft}B_1+ (-\frac{2\alpha-1}{2}-\frac{1}{2})A_1^2B_2^{\ft}B_2) =\\ \alpha\Tr( \frac{9-6\alpha}{4}A_2A_1B_2^{\ft}B_1 +\frac{4\alpha-3}{4}A_2A_1B_1^{\ft}B_2 + \frac{12\alpha-9}{4}A_1A_2B_2^{\ft}B_1+\\ \frac{3-2\alpha}{4}A_1A_2B_1^{\ft}B_2 -\alpha A_2^2B_1^{\ft}B_1 -\alpha A_1^2B_2^{\ft}B_2)=\\ \alpha\Tr((4\alpha-3)A_1A_2B_2^{\ft}B_1 + (3-2\alpha)A_1A_2B_1^{\ft}B_2-\alpha A_2^2B_1^{\ft}B_1 -\alpha A_1^2B_2^{\ft}B_2) \end{gathered} $$ where we use $\Tr(A_2A_1B_2^{\ft}B_1) = \Tr((A_2A_1B_2^{\ft}B_1)^{\ft}) = \Tr(A_1A_2B_1^{\ft}B_2)$ and similarly $\Tr(A_2A_1B_1^{\ft}B_2) = \Tr(A_1A_2B_2^{\ft}B_1)$. Next, we collect the terms with $B_1$ and $B_2$ only: $$\begin{gathered} \Tr((1 - \alpha) (B_{1} B_{1}^{\ft} B_{2}B_2^{\ft} - B_{1} B_{2}^{\ft} B_{1}B_2^{\ft}) +\frac{\alpha-2}{2} (B_{1} B_{2}^{\ft} B_{1}B_2^{\ft} -B_{2} B_{1}^{\ft} B_{1}B_2^{\ft}) +\\ \frac{\alpha}{2}( - B_{1} B_{1}^{\ft} B_{2}B_2^{\ft} + B_{2} B_{1}^{\ft} B_{1}B_2^{\ft})) =\\ \Tr((1-\frac{3\alpha}{2})B_1B_1^{\ft}B_2B_2^{\ft} +(\alpha-1 +\frac{\alpha-2}{2})B_1B_2^{\ft}B_1B_2^{\ft} +(-\frac{\alpha-2}{2} +\frac{\alpha}{2})B_2B_1^{\ft}B_1B_2^{\ft})\\ =\Tr(\frac{2-3\alpha}{2}B_1B_1^{\ft}B_2B_2^{\ft} +\frac{3\alpha-4}{2}B_1B_2^{\ft}B_1B_2^{\ft} +B_2B_1^{\ft}B_1B_2^{\ft}) \end{gathered} $$ This proves \cref{eq:sec_cur}. 
On the other hand, it is clear that, on the right-hand side of \cref{eq:sec_sum_sq}, the $A$'s-only term is $\frac{\alpha}{4}\Tr[A_1, A_2][A_1, A_2]^{\ft}$, while the $B$'s-only term is: $$\begin{gathered} (\frac{\alpha(3-4\alpha)^2}{4}+\frac{(1-2\alpha)^3}{2})\Tr(B_2^{\ft}B_1-B_1^{\ft}B_2)(B_2^{\ft}B_1-B_1^{\ft}B_2)^{\ft} +\\ \frac{1}{2}\Tr(B_1B_2^{\ft}-B_2B_1^{\ft})(B_1B_2^{\ft}-B_2B_1^{\ft})^{\ft} =\\ \frac{2-3\alpha}{4}\Tr(B_2^{\ft}B_1B_1^{\ft}B_2 -B_2^{\ft}B_1B_2^{\ft}B_1 - B_1^{\ft}B_2B_1^{\ft}B_2 +B_1^{\ft}B_2B_2^{\ft}B_1) +\\ \frac{1}{2}\Tr(B_1B_2^{\ft}B_2B_1^{\ft} -B_1B_2^{\ft}B_1B_2^{\ft} - B_2B_1^{\ft}B_2B_1^{\ft} +B_2B_1^{\ft}B_1B_2^{\ft})\\ =\Tr(2\frac{2-3\alpha}{4}B_1B_1^{\ft}B_2B_2^{\ft} + (-2\frac{2-3\alpha}{4}-2\frac{1}{2})B_1B_2^{\ft}B_1B_2^{\ft} + 2\frac{1}{2}B_2B_1^{\ft}B_1B_2^{\ft})\\ =\Tr(\frac{2-3\alpha}{2}B_1B_1^{\ft}B_2B_2^{\ft} + \frac{3\alpha-4}{2}B_1B_2^{\ft}B_1B_2^{\ft} + B_2B_1^{\ft}B_1B_2^{\ft}) \end{gathered} $$ The terms with both $A$ and $B$ in \cref{eq:sec_sum_sq} are: $$\begin{gathered} \alpha\Tr(\frac{3-4\alpha}{2}(A_2A_1-A_1A_2)(B_2^{\ft}B_1-B_1^{\ft}B_2 ) + \alpha(B_1A_2-B_2A_1)^{\ft} (B_1A_2-B_2A_1))\\ = \alpha\Tr\{(\frac{3-4\alpha}{2}+\alpha)A_2A_1B_2^{\ft}B_1 -\frac{3-4\alpha}{2}A_2A_1B_1^{\ft}B_2 -\frac{3-4\alpha}{2}A_1A_2B_2^{\ft}B_1 +\\ (\frac{3-4\alpha}{2}+\alpha) A_1A_2B_1^{\ft}B_2 -\alpha A_2^2B_1^{\ft}B_1 -\alpha A_1^2B_2^{\ft}B_2 \}\\ = \alpha\Tr\{\frac{3-2\alpha}{2}A_2A_1B_2^{\ft}B_1 -\frac{3-4\alpha}{2}A_2A_1B_1^{\ft}B_2 -\frac{3-4\alpha}{2}A_1A_2B_2^{\ft}B_1 +\\ \frac{3-2\alpha}{2} A_1A_2B_1^{\ft}B_2 -\alpha A_2^2B_1^{\ft}B_1 -\alpha A_1^2B_2^{\ft}B_2 \}\\ = \alpha\Tr\{(4\alpha-3)A_1A_2B_2^{\ft}B_1 + (3-2\alpha) A_1A_2B_1^{\ft}B_2 -\alpha A_2^2B_1^{\ft}B_1 -\alpha A_1^2B_2^{\ft}B_2 \} \end{gathered} $$ Therefore we have shown \cref{eq:sec_sum_sq} gives us the sectional curvature numerator. For the sign of the sectional curvature, in \cref{eq:sec_sum_sq} the terms are all non-negative, except possibly the last one, which is also non-negative if $\alpha \leq\frac{1}{2}$. The formula for the curvature denominator is clear. \end{proof} Recall an Einstein manifold is a Riemannian manifold where the Ricci curvature tensor is proportional to the metric tensor. We have a quick application: \begin{corollary}For $p>1$, the Stiefel manifold with the metric operator $\mathsf{g}\omega = \alpha_0\omega + (\alpha_1 - \alpha_0)YY^{\ft}\omega$ is an Einstein manifold if and only if $\alpha=\alpha_1/\alpha_0$ satisfies the equation \begin{equation}\label{eq:einstein} (n-1)\alpha^2 - (n-2)\alpha + \frac{p-2}{4} = 0 \end{equation} For $p=2$, $\alpha = \frac{n-2}{n-1}$ is the only value of $\alpha$ that makes $\St{2}{n}$ an Einstein manifold. If $p>2$, there are two values of $\alpha$ in the family making the Stiefel manifold an Einstein manifold. \end{corollary} \begin{proof}From \cref{eq:ricci}, the manifold is an Einstein manifold if and only if $(n-p)\alpha^2 +(p-2)/4 = \alpha (n-2 +(1-p)\alpha)$, from which \cref{eq:einstein} follows. When $p = 2$, the equation reduces to $\alpha((n-1)\alpha - (n-2)) = 0$, so $\frac{n-2}{n-1}$ is the only positive solution. When $p>2$, \cref{eq:einstein} has positive discriminant $(n-2)^2 - (p-2)(n-1)$ (positive since $p-2\leq n-3$ implies $(p-2)(n-1)\leq (n-2)^2-1$), and it has two positive roots, as both the sum $\frac{n-2}{n-1}$ and the product $\frac{p-2}{4(n-1)}$ of the roots are positive. \end{proof} It is noted in \cite{ExtCurveStiefel} that when $p = n-1$, $\St{n-1}{n}$ is just $\SOO(n)$. Thus, we have provided $\SOO(n)$ with Einstein metrics. \section{Sectional curvature range} We have seen that the sectional curvature numerator $\hcK$ could be expressed as a weighted sum of squares; this allows us to estimate the sectional curvature range. 
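Before analyzing the range, we record a minimal numerical sketch (in Python) of the sum-of-squares formula \cref{eq:sec_sum_sq}; the dimensions, the value of $\alpha$, and the random seed are our own illustrative assumptions. It evaluates $\hcK$ on random tangent vectors and illustrates the non-negativity for $\alpha\leq\frac{1}{2}$.
\begin{verbatim}
# Sketch: sectional curvature numerator from Eq. (eq:sec_sum_sq).
# n, p, alpha, and the random seed are illustrative assumptions.
import numpy as np

def hK(A1, B1, A2, B2, alpha):
    fro2 = lambda M: np.sum(M * M)            # squared Frobenius norm
    comm = A1 @ A2 - A2 @ A1                  # [A1, A2]
    S = B2.T @ B1 - B1.T @ B2
    return (alpha / 4 * fro2(comm + (3 - 4 * alpha) * S)
            + alpha**2 * fro2(B1 @ A2 - B2 @ A1)
            + 0.5 * fro2(B1 @ B2.T - B2 @ B1.T)
            + (1 - 2 * alpha)**3 / 2 * fro2(S))

n, p, alpha = 6, 3, 0.4
rng = np.random.default_rng(1)
skew = lambda: (lambda M: M - M.T)(rng.standard_normal((p, p)))
A1, A2 = skew(), skew()
B1, B2 = rng.standard_normal((n - p, p)), rng.standard_normal((n - p, p))
print(hK(A1, B1, A2, B2, alpha) >= 0)  # True whenever alpha <= 1/2
\end{verbatim}
For $\alpha\leq\frac{1}{2}$ all four terms are manifestly non-negative, matching the theorem; for $\alpha>\frac{1}{2}$ the last term can make $\hcK$ negative on suitable sections, as analyzed below.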
If $p=1$ then the Stiefel manifold is a sphere and has constant sectional curvature. Therefore we will assume $p > 1$ below. It is easy to establish upper and lower bounds (not tight) for the sectional curvature from \cref{eq:sec_sum_sq}. Using the triangle inequality, we can bound $\mathcal{K}$ from \cref{eq:sec_sum_sq} by bounding each term of an expression of the form $K_1=a\|[A_1, A_2]\|^2_F + b\|B_1B_2^{\ft}-B_2B_1^{\ft}\|^2_F + c\|B_1^{\ft}B_2 - B_2^{\ft}B_1\|^2_F + d\|B_1A_2 - B_2A_1\|_F^2$ by a multiple of the curvature denominator $S :=(\alpha\|A_1\|_F^2 + \|B_1\|_F^2)(\alpha\|A_2\|_F^2 + \|B_2\|_F^2)$. We use the inequality $\|[X, Z]\|_F^2 \leq \|X\|_F^2\|Z\|_F^2$ for two antisymmetric matrices in $\mathfrak{o}(n)$ if $n>3$ (\cite{Ge2014}, lemma 2.5 provides the explicit matrices where we have equality, see also \cite{GKRgroup}, proposition 4.2). Applying that inequality with $X = \frac{1}{\sqrt{2}}\begin{bmatrix}\sqrt{2\alpha}A_1 & -B_1^{\ft}\\B_1 & 0\end{bmatrix}$, $Z= \frac{1}{\sqrt{2}}\begin{bmatrix}\sqrt{2\alpha} A_2 & B_2^{\ft}\\B_2 & 0\end{bmatrix}$, together with similar inequalities for the cases $B_1=B_2=0$ and $A_1=A_2=0$, we can bound each term of $K_1$ by $S$, thus getting a bound for $\mathcal{K}$. We will attempt to provide more refined bounds. The analysis of the sectional curvature range for Stiefel manifolds is more complicated than that of symmetric spaces because of the presence of both the $A$ and $B$ components. The manifold is homogeneous, therefore the sectional curvature range is the same at any point. Let $E_{ij}$, $1\leq i, j\leq p$, be the elementary matrix in $\mathbb{R}^{p\times p}$ whose $(i,j)$ entry is $1$ and whose other entries are $0$. Let $e_{ij}$ be the elementary matrix in $\mathbb{R}^{(n-p)\times p}$ ($1\leq i\leq n-p, 1\leq j\leq p$) whose $(i, j)$ entry is $1$ and whose other entries are zero. In \cref{tab:corners}, we show sectional curvature values of $\St{p}{n}$ at several sections (pairs of linearly independent tangent vectors), each defined by a quadruple $(A_1, B_1, A_2, B_2)$. A few of those sections come from the corresponding sections for $\SOO(n)$, in \cite{Ge2014} as cited. We have noted that $\mathcal{K}$ is non-negative if $\alpha\leq \frac{1}{2}$, and \cref{tab:corners} shows a section with $\mathcal{K}=\frac{2-3\alpha}{2}$; thus, if $\alpha >\frac{2}{3}$, $\mathcal{K}$ always has negative values in its range. When $p=2$, we will show $\mathcal{K}$ is non-negative if $\alpha\leq \frac{2}{3}$. When $p>2$, $\mathcal{K}$ could be negative if $\frac{1}{2}< \alpha\leq \frac{2}{3}$. To see this, let $A_1 = E_{12} - E_{21}, B_1 = \gamma^{1/2} e_{11}, A_2 = E_{23}-E_{32}$, $B_2 = \gamma^{1/2} e_{13}$ for $\gamma\in\mathbb{R},\gamma> 0$. Thus, $[A_1,A_2] = E_{13} - E_{31}$, $B_1A_2 = B_2A_1 = 0$, $B_1B_2^{\ft} = 0$, $B_1^{\ft}B_2 -B_2^{\ft}B_1= \gamma(E_{13}-E_{31})$. By \cref{eq:sec_sum_sq_2}, the corresponding sectional curvature is \begin{equation}\label{eq:frc} \mathfrak{c}(\gamma) = \frac{\alpha/2 + \alpha(4\alpha-3)\gamma + (2-3\alpha)\gamma^2/2}{(2\alpha+\gamma)^2} \end{equation} with $\frac{d}{d\gamma}\mathfrak{c}(\gamma) = \alpha((7-10\alpha)\gamma -1 -6\alpha +8\alpha^2)/(\gamma + 2\alpha)^3$; the critical point of $\mathfrak{c}$ is at \begin{equation}\label{eq:gammamin} \gamma_{\min}(\alpha) = (1 + 6\alpha - 8\alpha^2)/(7-10\alpha) \end{equation} Substituting in, the function $\mathfrak{l}(\alpha) := \mathfrak{c}(\gamma_{\min}(\alpha))$ is slightly negative for $\alpha$ in the interval $(\frac{1}{2}, \frac{7}{10})$, which contains $\frac{2}{3}$.
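As a concrete check, at $\alpha = \frac{2}{3}$ we have $\gamma_{\min}(\frac{2}{3}) = \frac{13/9}{1/3} = \frac{13}{3}$, and since $2-3\alpha = 0$ there, $$\mathfrak{l}(\tfrac{2}{3}) = \mathfrak{c}(\tfrac{13}{3}) = \frac{\frac{1}{3} + \frac{2}{3}(\frac{8}{3}-3)\frac{13}{3}}{(\frac{4}{3}+\frac{13}{3})^2} = \frac{-\frac{17}{27}}{\frac{289}{9}} = -\frac{1}{51}\approx -0.0196 < 0,$$ consistent with the value quoted below.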
Note that $\alpha=\frac{7}{10}$ is a removable singularity of $\mathfrak{l}$, and setting $\mathfrak{l}(\frac{7}{10}) =\lim_{\gamma\to\infty}\mathfrak{c}(\gamma) =\frac{1}{2}(2-3\times \frac{7}{10}) = \frac{-1}{20}$ makes it a smooth function. This function is strictly decreasing and negative in the interval $(\frac{1}{2}, \frac{7}{10})$, with $\mathfrak{l}(\frac{1}{2}) = 0$ and $\mathfrak{l}(\frac{2}{3})$ around $-0.02$. The curvature range contains the interval between the minimum and maximum of those values in \cref{tab:corners} for which the condition in the last column of the table is satisfied. For $p=2$, \cref{prop:p_2} determines the exact curvature range. For $p>2$, numerically, the sections in that table seem to determine the range completely. For each $\alpha$, the lower and upper bounds of the curvature range, found numerically by optimizing $\mathcal{K}$ over the space of all sections (the Grassmann manifold of two-dimensional subspaces of $\mathbb{R}^{\dim\St{p}{n}}$), are within the minimal and maximal values of the sections in the table when the condition in the last column is satisfied, as shown in \cref{fig:stiefel_curv_4_3}, \ref{fig:stiefel_curv_n_3}, \ref{fig:stiefel_curv_n_geq4}, \ref{fig:stiefel_curv_n_n_1}. There, we plot the graphs of the curvatures of the listed sections as functions of $\alpha$ for each scenario, and also plot the results of the numerical optimization for the curvature range, for a set of $30$ values of $\alpha$. The optimization is done for $n=4, p=3$, $n=5 , p \in \{3, 4\}$, $n=6, p\in\{3, 4, 5\}$, $n=10, p\in\{3, 5, 10, 9\}$, $n= 100, p\in\{10, 20\}$. The curve $ll$ in the figures is the graph of the function $\mathfrak{l}$. The reason the optimized maximum is sometimes smaller than the proposed maximum, for small $\alpha$, is that the optimizer may be stuck at a local maximum. \begin{table} \begin{tabular}{c c c} \hline $\mathcal{K}$ & $A$ and $B$ & condition\\ \hline 0 & $A_1=A_2=E_{12}-E_{21},B_1= 2e_{13}, B_2=-\alpha e_{13}$ & $n\geq 4, p\geq 3$\\ 0 & $A_1=A_2=0,B_1= e_{11}, B_2=e_{22}$ & $n\geq 4, p\leq n-2$\\ 1 & $A_1=A_2=0, B_1= e_{11}, B_2 = e_{21}$&$n\geq 4, p\leq n-2$\\ $\frac{1}{2\alpha+1}$ & $A_1=E_{12}-E_{21}, A_2=E_{1p}-E_{p1}, B_1=-e_{1p},B_2=e_{12}$& $p\geq3$\\ $\frac{1}{8\alpha}$ &$A_1=E_{12} - E_{21}, A_2=E_{23} - E_{32}, B_1=B_2=0$ & $p\geq 3$\\ $\frac{1}{4\alpha}$ &$A_1=E_{12} - E_{21}+E_{p-1,p}-E_{p,p-1}$ & $p\geq 4$\\ & $A_2=E_{1,p-1} - E_{p-1,1}-E_{2,p}+E_{p,2}, B_1=B_2=0$ & \\ $\frac{\alpha}{2}$ & $A_1=(E_{12} - E_{21}),A_2=0, B_1= 0, B_2=e_{11}$ &\\ $\frac{2-3\alpha}{2}$ & $A_1=0,A_2=0, B_1= e_{11}, B_2=e_{12}$ &\\ $\frac{4-3\alpha}{2}$ & $A_1=A_2=0, B_1= e_{11}+e_{22}, B_2=e_{12}-e_{21}$ &$n\geq4, p \leq n-2$\\ $\mathfrak{l}(\alpha)$ & $A_1 = E_{12}-E_{21}, A_2 = E_{23}-E_{32}$ & \\ & $B_1=\gamma_{\min}(\alpha)^{1/2}e_{11}, B_2 = \gamma_{\min}(\alpha)^{1/2}e_{13}$ & $p\geq 3$, $\alpha < 7/10$ \end{tabular} \caption{Sectional curvature at representative sections. $\mathfrak{l}(\alpha) = \mathfrak{c}(\gamma_{\min}(\alpha))$, from \cref{eq:gammamin} and \cref{eq:frc}.} \label{tab:corners} \end{table} \begin{figure} \centering \includegraphics[scale=0.4]{stiefel_curv_4_3.png} \caption{Numerical test for curvature range $n=4, p=3$.
Max, min sims are curvature ranges from numerical optimization.} \label{fig:stiefel_curv_4_3} \end{figure} \begin{figure} \centering \includegraphics[scale=0.4]{stiefel_curv_n_3.png} \caption{Numerical test for curvature range $n>4, p=3$} \label{fig:stiefel_curv_n_3} \end{figure} \begin{figure} \centering \includegraphics[scale=0.4]{stiefel_curv_n_p_geq4.png} \caption{Numerical test for curvature range $n-2\geq p\geq 4$. Max, min sims are curvature ranges from numerical optimization.} \label{fig:stiefel_curv_n_geq4} \end{figure} \begin{figure} \centering \includegraphics[scale=0.4]{stiefel_curv_n_n_1.png} \caption{Numerical test for curvature range $p=n-1\geq 4$} \label{fig:stiefel_curv_n_n_1} \end{figure} \begin{proposition}\label{prop:p_2} If $p=2$ and $n=3$, then the sectional curvature range of $\St{p}{n}$ is $[\frac{\alpha}{2}, \frac{2-3\alpha}{2}]$ if $\alpha\leq \frac{1}{2}$ and $[\frac{2-3\alpha}{2}, \frac{\alpha}{2}]$ otherwise. In particular, if $\alpha < \frac{2}{3}$, $\St{2}{3}$ has strictly positive sectional curvature. If $p = 2$ and $n > 3$ then the sectional curvature range is $[0, \frac{4-3\alpha}{2}]$ if $\alpha\leq \frac{2}{3}$, $[\frac{2-3\alpha}{2}, 1]$ if $\frac{2}{3}< \alpha \leq 2$ and $[\frac{2-3\alpha}{2}, \frac{\alpha}{2}]$ if $\alpha > 2$. Hence, when $n>3$, $\St{2}{n}$ has non-negative curvature if $\alpha \leq \frac{2}{3}$. \end{proposition} \begin{proof}When $p = 2$, $\mathfrak{o}(2)$ is one-dimensional, so $[A_1, A_2] = 0$ and we can set $A_1 = (2\alpha)^{-1/2}c_1 J, A_2 = (2\alpha)^{-1/2}c_2 J$ for $J = \begin{bmatrix}0 &1 \\ -1 & 0\end{bmatrix}$, with $c_1, c_2\in \mathbb{R}$. Further, for two orthogonal matrices $U, V$ of compatible dimensions, the sectional curvature is unchanged if we replace $(A_1, B_1, A_2, B_2)$ with $(VA_1V^{\ft},UB_1V, VA_2V^{\ft}, UB_2V)$. Thus, we can assume $B_1$ is rectangular diagonal, with diagonal entries denoted by $d_i$, $1\leq i\leq \min(n-p, p)$. We denote entries of $B_2$ by $b_{ij}$, $1\leq i\leq n-p, 1\leq j\leq p$. We note $B_1A_2-B_2A_1 = (2\alpha)^{-1/2}(c_2B_1J - c_1B_2J)$, and since $JJ^{\ft} = \dI_2$, $\alpha^2\|B_1A_2-B_2A_1\|_F^2 = \alpha/2(c_2^2\Tr(B_1B_1^{\ft}) + c_1^2\Tr(B_2B_2^{\ft}) - 2c_1c_2\Tr(B_1B_2^{\ft}))$. The orthogonality condition $\alpha\Tr A_1A_2^{\ft} + \Tr B_1B_2^{\ft} = 0$ implies $c_1c_2 + \Tr B_1B_2^{\ft} = c_1c_2 + \sum_{i=1}^{\min(p, n-p)} d_ib_{ii}=0$, or $c_1c_2 = -\Tr B_1B_2^{\ft}$, so $-2c_1c_2\Tr B_1B_2^{\ft} = c_1^2c_2^2+(\Tr B_1B_2^{\ft})^2$. This implies $$\alpha^2\|B_1A_2-B_2A_1\|_F^2 = \alpha/2(c_2^2\Tr(B_1B_1^{\ft}) + c_1^2\Tr(B_2B_2^{\ft}) + c_1^2c_2^2 + (\Tr(B_1B_2^{\ft}))^2)$$ For the case $n=3$, from \cref{eq:sec_sum_sq_2}, the curvature numerator $\hcK$ is reduced to $$\frac{2-3\alpha}{2}b_{12}^2d_1^2 + \frac{\alpha}{2}(c_2^2d_1^2 + c_1^2(b_{11}^2+b_{12}^2) + c_1^2c_2^2 + d_1^2b_{11}^2)$$ and the curvature denominator is $S=(c_1^2 + d_1^2)(c_2^2 + b_{11}^2+b_{12}^2)$. We have $\hcK - \alpha/2S = (1-2 \alpha)b_{12}^2d_1^2$, $\hcK -(1-3\alpha/2)S = (2\alpha - 1)(c_2^2d_1^2 + c_1^2(b_{11}^2+b_{12}^2) + c_1^2c_2^2 + d_1^2b_{11}^2)$. Thus, the signs of the differences are dependent on $1-2\alpha$, and $\hcK$ is between the smaller and the larger of $\alpha/2S$ and $(1-3\alpha/2)S$. The bound is tight based on \cref{tab:corners}. When $n > 3$, the denominator is $S=(c_1^2 + \sum_{i=1}^2 d_i^2)(c_2^2 + \sum_{ij}b^2_{ij})$. $B_1$ consists of a square diagonal block of size $2\times 2$ and the remaining zero block of size $(n-4)\times 2$.
Expand $\|B_1B_2^{\ft} - B_2B_1^{\ft}\|_F^2$ by dividing $B_2$ into a square block corresponding to indices not exceeding two, which contributes $2(b_{21}d_1 - b_{12}d_2)^2$, and the remaining block, which contributes $2\sum_{j=1}^2\sum_{i>2}b_{ij}^2d_j^2$; $\hcK$ is $$\begin{gathered} \frac{2-3\alpha}{2}(b_{21}d_2-b_{12}d_1)^2 + \frac{\alpha}{2}(c_2^2\sum d_i^2 + c_1^2\sum_{ij}b_{ij}^2 + c_1^2c_2^2 + (\sum_{i=1}^2 d_i b_{ii})^2)\\ + (b_{21}d_1-b_{12}d_2)^2 +\sum_{j=1}^2\sum_{i>2}b_{ij}^2d_j^2 \end{gathered}$$ The above expression shows when $\alpha \leq 2/3$, $\mathcal{K} \geq 0$. In this case, $1\leq 2-3\alpha/2$, $\alpha/2\leq 2-3\alpha/2$, thus $\sum_{j=1}^2\sum_{i>2}b_{ij}^2d_j^2\leq (2-\frac{3\alpha}{2})\sum_{k=1}^2 d_k^2\sum_{j=1}^2\sum_{i>2}b^2_{ij}$ and $$\frac{\alpha}{2}(c_2^2\sum d_i^2 + c_1^2\sum_{ij}b_{ij}^2 + c_1^2c_2^2)\leq \frac{4-3\alpha}{2}(c_2^2\sum d_i^2 + c_1^2\sum_{ij}b_{ij}^2 + c_1^2c_2^2)$$ To show $\hcK\leq (2-3\alpha/2)S$, we only need to show $$\frac{2-3\alpha}{2}(b_{21}d_2-b_{12}d_1)^2 + (b_{21}d_1-b_{12}d_2)^2+\frac{\alpha}{2}(\sum_{i=1}^2 d_i b_{ii})^2\leq (2-\frac{3\alpha}{2})\sum_{k=1}^2d_k^2\sum_{i,j\leq2}b_{ij}^2 $$ This follows from the Cauchy-Schwarz inequality, applied to three different combinations on the left-hand side and then summing the resulting inequalities: the first two terms on the left-hand side are dominated by $((2-3\alpha)/2 +1)(d_1^2+d_2^2)(b_{21}^2+b_{12}^2)$, while the last one is dominated by $\alpha/2(d_1^2+d_2^2)(b_{11}^2 + b_{22}^2)\leq (2-3\alpha/2)(d_1^2+d_2^2)(b_{11}^2 + b_{22}^2)$. Next, when $\alpha > 2/3$, by Cauchy-Schwarz, $\hcK\geq (1-3\alpha/2)(b_{21}^2+b_{12}^2)(d_1^2+d_2^2) \geq (1-3\alpha/2)S$, as $1-3\alpha/2 < 0$. When $2/3 < \alpha \leq 2$, $\alpha/2 \leq 1$, thus $\hcK\leq S$, as the first term of $\hcK$ is negative, while we can use Cauchy-Schwarz on $(\sum d_ib_{ii})^2$ and $(b_{21}d_1 - b_{12}d_2)^2$ as before. Finally, for $\alpha > 2$, $\hcK\leq (\alpha/2) S$, again because the first term of $\hcK$ is negative, while the remaining terms are dominated by the corresponding terms in $(\alpha/2)S$, using Cauchy-Schwarz if necessary. Again, the bounds are tight using \cref{tab:corners}. \end{proof} We note $\St{2}{3}$ is $\SOO(3)$, and can be considered as the sphere $S^3$ with antipodal points identified (via the quaternion representation, for example). From the formula for the metric, we see this is the projective version of the Berger sphere. \begin{proposition} For $p\geq 3$, the sectional curvature range of $\St{p}{n}$ contains an interval $I = I(n, p, \alpha)$ as described in \cref{tab:sec_range}. The first row describes the applicable combination of $(n, p)$; the columns labeled $\alpha_u$ specify the range of $\alpha$ where the interval formula next to it is applicable. The interval is applicable for $\alpha$ greater than the previous $\alpha_u$ (if it exists) and not exceeding the current $\alpha_u$.
\begin{table}[!ht] \begin{tabular}{|c c |c c| c c| c c|} \hline \multicolumn{2}{|c|}{$(n=4, p=3)$} & \multicolumn{2}{|c|}{$(n, 3),n\geq 5$} & \multicolumn{2}{|c|}{$(n, p),n-2\geq p\geq 4$} & \multicolumn{2}{|c|}{$(n, n-1), n \geq 5$}\\ \hline $\alpha_u$ & I &$\alpha_u$ & I &$\alpha_u$ & I &$\alpha_u$ & I \\ \hline $\frac{1}{6}$&$[0, \frac{1}{8\alpha}]$ &$\frac{4-\sqrt{13}}{6}$ &$[0, \frac{1}{8\alpha}]$ &$\frac{4-\sqrt{10}}{6}$ &$[0, \frac{1}{4\alpha}]$ &$1/2$ & $[0,\frac{1}{4\alpha}]$\\ $1/2$& $[0, \frac{1}{1+2\alpha}]$ &$1/2$ &$[0, \frac{4-3\alpha}{2}]$ &$1/2$ &$[0, \frac{4-3\alpha}{2}]$ &$\frac{7}{10}$ & $[\mathfrak{l}(\alpha), \frac{1}{1+2\alpha}]$ \\ $\frac{7}{10}$ & $[\mathfrak{l}(\alpha), \frac{1}{1+2\alpha}]$ &$\frac{2}{3}$ &$[\mathfrak{l}(\alpha), \frac{4-3\alpha}{2}]$ &$\frac{2}{3}$ &$[\mathfrak{l}(\alpha), \frac{4-3\alpha}{2}]$ &$\frac{\sqrt{17}-1}{4}$ & $[\frac{2-3\alpha}{2},\frac{1}{1+2\alpha}]$ \\ $\frac{\sqrt{17}-1}{4}$& $[\frac{2-3\alpha}{2}, \frac{1}{1+2\alpha}]$&$\frac{7}{10}$ &$[\mathfrak{l}(\alpha), 1]$ &$\frac{7}{10}$ &$[\mathfrak{l}(\alpha), 1]$ & $\infty$&$[\frac{2-3\alpha}{2}, \frac{\alpha}{2}]$\\ $\infty$ & $[\frac{2-3\alpha}{2}, \frac{\alpha}{2}]$ &$2$ &$[\frac{2-3\alpha}{2}, 1]$ &$2$ &$[\frac{2-3\alpha}{2}, 1]$ & & \\ & &$\infty$ &$[\frac{2-3\alpha}{2},\frac{\alpha}{2}]$ &$\infty$ &$[\frac{2-3\alpha}{2}, \frac{\alpha}{2}]$ & &\\ \hline \end{tabular} \caption{Interval contained in the sectional curvature range of the Stiefel manifold $\St{p}{n}$ with metric defined by $\alpha$. $\mathfrak{l}(\alpha) = \mathfrak{c}(\gamma_{\min})$ with $\mathfrak{c}$ defined in \cref{eq:frc}, and $\gamma_{\min}$ in \cref{eq:gammamin}.} \label{tab:sec_range} \end{table} \end{proposition} To illustrate, with $(n, p) = (4, 3)$, for $\alpha\leq \frac{1}{6}$, the sectional curvature range contains the interval $[0, \frac{1}{8\alpha}]$, for $\frac{1}{6}<\alpha\leq \frac{1}{2}$, it contains the interval $[0, \frac{1}{1+2\alpha}]$, etc. In the final row, for $\alpha > \frac{\sqrt{17}-1}{4}$, it contains the interval $[\frac{2-3\alpha}{2}, \frac{\alpha}{2}]$. \begin{proof}It is straightforward to check that for each pair $(n, p)$ in \cref{tab:sec_range}, the values indicated correspond to a quadruple $(A_1, B_1, A_2, B_2)$ in \cref{tab:corners}, which is applicable for the pair. For example, in the case $(n, p) = (4, 3)$, the only applicable values from \cref{tab:corners} are $0$ (from the first row), $\frac{1}{2\alpha+1}$, $\frac{1}{8\alpha}$, $\frac{\alpha}{2}$, $\mathfrak{l}(\alpha)$ and $\frac{2-3\alpha}{2}$. To show the sectional curvature range contains $I$, it remains to verify the lower end of $I$ is not greater than the upper end, which is immediate, as $\mathfrak{l}(\alpha)$ is negative between $\frac{1}{2}$ and $\frac{7}{10}$, and $\frac{2-3\alpha}{2}$ is negative for $\alpha>\frac{7}{10}>\frac{2}{3}$. The graphs in figures \ref{fig:stiefel_curv_4_3}, \ref{fig:stiefel_curv_n_3}, \ref{fig:stiefel_curv_n_geq4}, \ref{fig:stiefel_curv_n_n_1} display the relative values of these functions. As all the functions involved are simple algebraic functions, except for $\mathfrak{l}$, once we can assess the contribution of $\mathfrak{l}$, it is easy to check that the lower end of $I$ corresponds to the smallest value among the applicable values, and the upper to the largest of the applicable values.
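One elementary comparison used repeatedly below: the curves $\frac{1}{1+2\alpha}$ and $\frac{\alpha}{2}$ cross where $$\frac{1}{1+2\alpha} = \frac{\alpha}{2} \Longleftrightarrow 2\alpha^2+\alpha-2 = 0 \Longleftrightarrow \alpha = \frac{\sqrt{17}-1}{4}\approx 0.78,$$ which is the break-even point appearing in \cref{tab:sec_range}.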
The function $\gamma_{\min}$ from \cref{eq:gammamin} has a root at $\alpha_s=\frac{3+\sqrt{17}}{8}\approx 0.89$, and is negative in the interval $(\frac{7}{10},\alpha_s)$; hence $\sqrt{\gamma_{\min}}$ and $B_1, B_2$ for this section are not defined, so $\mathfrak{l}(\alpha)$ cannot be an extremum for $\alpha \in (\frac{7}{10},\alpha_s)$. In the interval $[\alpha_s, 2]$, $\mathfrak{l}$ has the approximate range $[0.14, 0.38]$; in particular it is less than $1$ and less than $\frac{\alpha}{2}$ there. For large $\alpha$, $\gamma_{\min}$ is approximated by $0.8\alpha$, thus $\mathfrak{l}(\alpha)$ has an asymptote with slope $\frac{4\times 0.8 - 3\times 0.8^2/2}{2.8^2}\approx 0.286$, smaller than the slope of $\frac{\alpha}{2}$. It is also easy to graph $\mathfrak{l}$ on the intermediate range to show that, beyond the contribution to the lower bound on $[\frac{1}{2}, \frac{7}{10}]$, $\mathfrak{l}$ has no other effect on the curvature range. With that analysis, for the case $(n, p) = (4, 3)$, the only applicable values from \cref{tab:corners} are $0$ (from the first row), $\frac{1}{2\alpha+1}$, $\frac{1}{8\alpha}$, $\frac{\alpha}{2}$, $\mathfrak{l}(\alpha)$ and $\frac{2-3\alpha}{2}$. If $\alpha \leq 1/2$, all these functions are non-negative, and thus $0$ is the smallest value among them. When $\frac{1}{2} <\alpha < \frac{7}{10}$, $\mathfrak{l}(\alpha)$ is negative, and in the interval $[\frac{2}{3}, \frac{7}{10}]$, $\frac{2-3\alpha}{2}$ is also negative, but $\mathfrak{l}(\alpha)$ is the lesser of the two, while we have discussed that $\mathfrak{l}(\alpha)$ has no effect for $\alpha>\frac{7}{10}$. Thus, for $\alpha >\frac{7}{10}$ the upper end of $I$ is $\max(\frac{1}{1+2\alpha}, \frac{\alpha}{2})$, with the break-even point $\frac{\sqrt{17}-1}{4}$. In general, considering the upper and lower ends of $I$ as functions of $\alpha$, the values in the column $\alpha_u$ correspond to nonsmooth points of these functions, or infinity. For the case $n\geq 5, p = n-1$, $(0, \frac{1}{4\alpha}, \frac{1}{8\alpha}, \frac{1}{2\alpha+1}, \mathfrak{l}(\alpha), \frac{2-3\alpha}{2}, \frac{\alpha}{2})$ are the applicable curvature values. Again, with $\mathfrak{l}$ having an effect only in $(\frac{1}{2}, \frac{7}{10}]$, it is straightforward to verify that the piecewise smooth function $\max(0, \frac{1}{4\alpha},\frac{1}{8\alpha}, \frac{1}{2\alpha+1}, \mathfrak{l}(\alpha),\frac{2-3\alpha}{2}, \frac{\alpha}{2})$ has the form corresponding to the upper end of $I$, and that the lower end corresponds to the minimum of those functions (with $\mathfrak{l}$ excluded for $\alpha > \frac{7}{10}$). We address the cases $p\leq n-2$ similarly. \end{proof} For $\alpha=\frac{1}{2}$, when $p=n-1, n\geq 4$, the range contains $[0, \frac{1}{2}]$, and it could be shown to be exactly $[0, \frac{1}{2}]$ as the manifold is isometric to $\SOO(n)$ with a bi-invariant metric. If $2\leq p\leq n-2, n\geq 5$, the range contains $[0, 2-3\alpha/2] = [0, 5/4]$, which is proved to be the exact range in \cite{Rentmee}. For $\alpha=1$, the interval is $[-1/2, 1]$. From the numerical evidence mentioned, this seems to be tight. We note for $p\geq 3$, both when $\alpha$ is large or $\alpha$ is small, the curvature range becomes large. \section{Deformation metrics on normal homogeneous manifolds}\label{sec:deform} For a Lie group $\mathtt{G}$, with $U\in\mathtt{G}$, we will denote by $\mathcal{L}_U$ the left-multiplication map and by $d\mathcal{L}_U$ its differential. As usual, $\ad_A$ denotes the operator $X\mapsto [A, X]$ on the Lie algebra $\mathfrak{g}$ of $\mathtt{G}$ ($A, X\in\mathfrak{g}$).
We recall a few results on curvatures of Lie groups. \begin{proposition}\label{prop:curv_general}Let $\mathtt{G}$ be a connected Lie group with Lie algebra $\mathfrak{g}$ with a left-invariant metric given by an inner product $\langle\rangle_{\rP}$ on $\mathfrak{g}$. For $A\in\mathfrak{g}$, let $\ad_A^{\dagger}$ be the adjoint of $\ad_A$ under $\langle\rangle_{\rP}$, meaning that $\addg_A$ is the linear operator on $\mathfrak{g}$ such that $\langle[A, A_1], A_2\rangle_{\rP} = \langle A_1, \addg_AA_2\rangle_{\rP}$. Define \begin{equation}[A,B]_{\rP} = [A,B] -\addg_AB -\addg_BA \end{equation} Let $\nabla^{\mathtt{G}}$ be the Levi-Civita connection on $\mathtt{G}$. For two vector fields $\mathtt{X}, \mathtt{Y}$ on $\mathtt{G}$, there exist $\mathfrak{g}$-valued functions $A(U), B(U)$, $U\in\mathtt{G}$ such that $\mathtt{X}(U) = d\mathcal{L}_UA(U), \mathtt{Y}(U) = d\mathcal{L}_UB(U)$. We have \begin{equation}\label{eq:nabla_GXY} (\nabla^{\mathtt{G}}_{\mathtt{X}}\mathtt{Y})(U) = d\mathcal{L}_U((\rD_{\mathtt{X}}B)(U) + \frac{1}{2}[A(U), B(U)]_{\rP}) \end{equation} where $\rD_{\mathtt{X}}B$ is the Lie-derivative of the $\mathfrak{g}$-valued function $B$ by the vector field $\mathtt{X}$. For $\omega_1, \omega_2, \omega_3\in\mathfrak{g}$, the curvature of $\mathtt{G}$ at the identity is given by \begin{equation}\label{eq:group_curv} \rR^{\mathtt{G}}_{\omega_1, \omega_2}\omega_3 = \frac{1}{2}[[\omega_1, \omega_2], \omega_3]_{\rP} - \frac{1}{4}[\omega_1, [\omega_2, \omega_3]_{\rP}]_{\rP} +\frac{1}{4}[\omega_2, [\omega_1, \omega_3]_{\rP}]_{\rP} \end{equation} Let $\mathfrak{k}$ be a subalgebra of $\mathfrak{g}$ such that $\rP$ is $\ad(\mathfrak{k})$-invariant, i.e., $\langle [A, K], B \rangle_{\rP} + \langle A, [K, B] \rangle_{\rP} = 0$ for $K\in\mathfrak{k}, A, B\in\mathfrak{g}$, and $\mathfrak{k}$ corresponds to a closed subgroup $\mathtt{K}\subset\mathtt{G}$, such that $\mathtt{K}$ acts freely and properly on $\mathtt{G}$ by isometries under right multiplication and $\mathtt{G}/\mathtt{K}$ is a homogeneous space. If $\mathfrak{g} = \mathfrak{k}\oplus\mathfrak{m}$ is an orthogonal decomposition under $\langle\rangle_{\rP}$, then the horizontal lift of the curvature of $\mathtt{M} =\mathtt{G}/\mathtt{K}$ at $o$, the coset containing the identity of $\mathtt{G}$, evaluated at three horizontal vectors $\omega_1, \omega_2, \omega_3\in\mathfrak{m}$ is \begin{equation}\label{eq:hom_curv} \begin{gathered} \rR^{\mathtt{M}}_{\omega_1, \omega_2}\omega_3 = (\frac{1}{2}[[\omega_1, \omega_2], \omega_3]_{\rP} - \frac{1}{4}[\omega_1, [\omega_2, \omega_3]_{\rP}]_{\rP} +\frac{1}{4}[\omega_2, [\omega_1, \omega_3]_{\rP}]_{\rP}\\ +\frac{1}{2}\addg_{\omega_3}[\omega_1, \omega_2]_{\mathfrak{k}} - \frac{1}{4}\addg_{\omega_1}[\omega_2, \omega_3]_{\mathfrak{k}} + \frac{1}{4}\addg_{\omega_2}[\omega_1, \omega_3]_{\mathfrak{k}} )_{\mathfrak{m}} \end{gathered} \end{equation} Here, $\omega_{\mathfrak{v}}$ denotes the orthogonal projection of $\omega$ to $\mathfrak{v}$ for an element $\omega\in\mathfrak{g}$ and a subspace $\mathfrak{v}$ of $\mathfrak{g}$.
Also, given two vector fields $\mathtt{X}, \mathtt{Y}$ on $\mathtt{M}$, which lift to horizontal vector fields $\bar{\mathtt{X}}, \bar{\mathtt{Y}}$ on $\mathtt{G}$, with $\bar{\mathtt{X}}(U) = d\mathcal{L}_UA(U), \bar{\mathtt{Y}}(U)=d\mathcal{L}_UB(U)$ for two $\mathfrak{g}$-valued functions $A(U), B(U)$ on $\mathtt{G}$, the horizontal lift of $\nabla_{\mathtt{X}}\mathtt{Y}$ is given by \begin{equation}\label{eq:nabla_hom} \overline{\nabla_{\mathtt{X}}\mathtt{Y}(U)} = d\mathcal{L}_U((\rD_{\bar{\mathtt{X}}}B)(U) +\frac{1}{2}[A(U), B(U)]_{\rP})_{\mathfrak{m}} \end{equation} \end{proposition} Note that in general $[\quad]_{\rP}$ is not anticommutative, as the term $\addg_AB +\addg_BA$ is symmetric in $A$ and $B$, and we have $[A,B]_{\rP} - [B,A]_{\rP} = 2[A, B]$. \begin{proof}First, we note that for three $\mathfrak{g}$-valued functions $A, B, C$ $$\begin{gathered}\langle [A, B]_{\rP}, C\rangle_{\rP} + \langle B, [A, C]_{\rP} \rangle_{\rP} = \langle[A, B], C \rangle_{\rP} - \langle B,[A, C] \rangle_{\rP} - \langle A,[B, C] \rangle_{\rP}\\ + \langle B, [A, C] \rangle_{\rP} - \langle [A, B], C \rangle_{\rP} - \langle [C, B], A \rangle_{\rP} =0 \end{gathered}$$ For each smooth function $F:\mathtt{G}\to\mathfrak{g}$, denote by $\mathcal{L}[F]$ the vector field $U\mapsto d\mathcal{L}_UF(U)$. Denote by $\langle\rangle_{\mathtt{G}}$ the left-invariant metric induced by $\rP$. For three vector fields $\mathtt{X}, \mathtt{Y}, \mathtt{Z}$ with $\mathtt{X} = \mathcal{L}[A], \mathtt{Y} = \mathcal{L}[B]$ and $\mathtt{Z} = \mathcal{L}[C]$ with three smooth $\mathfrak{g}$-valued functions $A, B, C$, we have $$\begin{gathered}\rD_{\mathtt{X}}\langle \mathtt{Y}, \mathtt{Z}\rangle_{\mathtt{G}} = \rD_{\mathtt{X}}\langle B, C\rangle_{\rP} = \langle \rD_{\mathtt{X}}B, C\rangle_{\rP} + \langle B, \rD_{\mathtt{X}}C\rangle_{\rP}\\ = \langle \mathcal{L} [\rD_{\mathtt{X}}B +\frac{1}{2}[A, B]_{\rP}], \mathtt{Z}\rangle_{\mathtt{G}} + \langle \mathtt{Y}, \mathcal{L}[\rD_{\mathtt{X}}C +\frac{1}{2}[A, C]_{\rP}]\rangle_{\mathtt{G}} \end{gathered}$$ since the metric is left-invariant, $\rP$ is constant on $\mathfrak{g}$, and we can apply the identity just proved. We can verify that $\mathcal{L} [\rD_{\mathtt{X}}B +\frac{1}{2}[A, B]_{\rP}]$ satisfies the derivative rule of a connection, and we have just proved it is metric compatible. Torsion-freeness follows from $[A,B]_{\rP} - [B,A]_{\rP} = 2[A, B]$, thus $\mathcal{L} [\rD_{\mathtt{X}}B +\frac{1}{2}[A, B]_{\rP}]$ is the Levi-Civita connection. Equation \ref{eq:nabla_GXY} is from \cite{Michor2007}, equation 3.3.2 (the author uses a right-invariant metric). It is related to the Euler-Poisson-Arnold equation (EPDiff), see equation (55) in Arnold's classical paper \cite{Arnold1966}. See also \cite{Milnor1976}. Equation (\ref{eq:group_curv}) now follows directly from the definition of curvature $\nabla_{[\mathtt{X}, \mathtt{Y}]}\mathtt{Z} - \nabla_{\mathtt{X}}\nabla_{\mathtt{Y}}\mathtt{Z} + \nabla_{\mathtt{Y}}\nabla_{\mathtt{X}}\mathtt{Z}$, applied to the invariant vector fields $\mathcal{L}[\omega_i], i\in\{1, 2, 3\}$. Equation (\ref{eq:hom_curv}) follows from O'Neill's equation (Theorem 2, \cite{ONeil1966}), written in $(1, 3)$-tensor form.
Indeed, the O'Neill tensor of two vector fields $\mathcal{L}[A], \mathcal{L}[B]$ on $\mathtt{G}$ for $\mathfrak{g}$-valued functions $A$ and $B$ evaluated at the coset $o$ is $\frac{1}{2}[A, B]_{\mathfrak{k}}$, as the result just proved for covariant derivatives shows the Lie bracket satisfies $\{\mathcal{L}[A], \mathcal{L}[B]\} = \mathcal{L}[[A, B]]$; we then use Lemma 2 of \cite{ONeil1966}. By properties of the adjoint and the projection, the right-hand side of \cref{eq:hom_curv} is the unique vector in $\mathfrak{m}$ such that O'Neill's equation (equation 4, Theorem 2, \cite{ONeil1966}) is satisfied. Equation (\ref{eq:nabla_hom}) follows from the result for $\mathtt{G}$ and the property of the horizontal lift of a connection in a Riemannian submersion, e.g., Lemma 7.45 in \cite{ONeil1983} (because of left-invariance, we can translate the projection to the identity). \end{proof} For a subspace $\mathfrak{v}\subset\mathfrak{g}$, we write $\omega_{1\mathfrak{v}}$ for $(\omega_1)_{\mathfrak{v}}$, the projection of $\omega_1$ to $\mathfrak{v}$ ($\omega_1\in\mathfrak{g}$). We write $[\omega_1, \omega_2]_{\mathfrak{v}}$, $[\omega_1, \omega_2]_{\rP\mathfrak{v}}$ for the corresponding projections of brackets. On a Lie group with a bi-invariant metric $\langle\rangle$, we now introduce a family of left-invariant metrics called the Cheeger deformation metrics (\cite{Cheeger1973,Ziller2007,GZ2000}). The Lie algebra used in the deformation will be called $\mathfrak{a}$ here (it is often called $\mathfrak{k}$ in the literature, but we reserve $\mathfrak{k}$ for the Lie algebra of the stabilizer group $\mathtt{K}$; we will use the letters $\mathfrak{a}, \mathfrak{b}$ corresponding to the components $A$, $B$ of the Stiefel tangent vectors, as will be seen shortly). Let $\mathtt{A}$ be a connected subgroup of $\mathtt{G}$ with Lie algebra $\mathfrak{a}$. With the bi-invariant metric on $\mathtt{G}$, $\mathtt{A}$ acts via right multiplication as a group of isometries on $\mathtt{G}$. Give $\mathtt{G}\times\mathtt{A}$ a bi-invariant metric corresponding to the inner product on $\mathfrak{g}\oplus\mathfrak{a}$ evaluated as $\langle g, g\rangle +r\langle a, a\rangle$ for $(g, a)\in\mathfrak{g}\times \mathfrak{a}$ with $r > 0$; we have the submersion $\mathtt{G}\times \mathtt{A}\to\mathtt{G}$ given by $(U, Q)\mapsto UQ^{-1}$ ($U\in\mathtt{G}, Q\in\mathtt{A}$). Let $\mathfrak{g} = \mathfrak{a}\oplus\mathfrak{n}$ be an orthogonal decomposition with respect to $\langle\rangle$. The submersion induces a new metric on $\mathtt{G}$, which is shown in \cite{Ziller2007} to be $$\langle\omega_{\mathfrak{n}},\omega_{\mathfrak{n}} \rangle + \frac{r}{(r+1)} \langle\omega_{\mathfrak{a}}, \omega_{\mathfrak{a}}\rangle$$ for $\omega\in\mathfrak{g}$. Define the Cheeger deformation metric $\rP_t$ on $\mathfrak{g}$ by the formula $\langle\omega_{\mathfrak{n}},\omega_{\mathfrak{n}} \rangle + t\langle\omega_{\mathfrak{a}}, \omega_{\mathfrak{a}}\rangle$ for $t> 0$. At $t=1$, it is the original metric. For $t < 1$, the metric corresponds to the submersion above with $r = t/(1-t)$, thus $\mathtt{G}$ has non-negative curvature by O'Neill's equation. For $t > 1$, the metric on $\mathtt{G}\times\mathtt{A}$ is semi-Riemannian but the corresponding metric on $\mathtt{G}$ is Riemannian. If $\mathfrak{n}$ contains a subalgebra $\mathfrak{k}$ corresponding to a closed subgroup $\mathtt{K}$ of $\mathtt{G}$, such that $\mathfrak{k}$ commutes with $\mathfrak{a}$, then $\mathtt{G}/\mathtt{K}$ could be equipped with the quotient metric induced from $\rP_t$.
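For example, taking $r = 1$, i.e. the product metric $\langle g, g\rangle + \langle a, a\rangle$ on $\mathtt{G}\times\mathtt{A}$, the induced metric on $\mathtt{G}$ is $\langle\omega_{\mathfrak{n}},\omega_{\mathfrak{n}}\rangle + \frac{1}{2}\langle\omega_{\mathfrak{a}},\omega_{\mathfrak{a}}\rangle$, that is, $\rP_{1/2}$; every $t\in(0,1)$ arises this way from $r = t/(1-t) > 0$.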
Hence, we will consider the situation when $\mathfrak{k}$ is a subalgebra of an algebra $\mathfrak{h}$ commuting with $\mathfrak{a}$. Note that $\mathtt{G}/\mathtt{K}$ with the original bi-invariant metric is called a {\it normal homogeneous space} in the literature, while $\rP_t$ is no longer bi-invariant. \begin{proposition}\label{prop:abh_split} Assume the Lie algebra $\mathfrak{g}$ has a bi-invariant metric $\langle\rangle$. Let $\mathfrak{h}\subset \mathfrak{g}$ be a Lie subalgebra of $\mathfrak{g}$ and $\mathfrak{h}^{\perp}$ be the orthogonal complement of $\mathfrak{h}$ in $\mathfrak{g}$ under $\langle\rangle$, $\mathfrak{g} = \mathfrak{h}\oplus\mathfrak{h}^{\perp}$. Then $\mathfrak{b} :=[\mathfrak{h}, \mathfrak{h}^{\perp}]\subset\mathfrak{h}^{\perp}$; in other words, $\mathfrak{h}^{\perp}$ is a $\mathfrak{h}$-module. Let $\mathfrak{h}^{\perp} = \mathfrak{b}\oplus \mathfrak{a}$ be an orthogonal decomposition under $\langle\rangle$. We can characterize $\mathfrak{a}$ as the subspace $\{A\in \mathfrak{h}^{\perp}|[A,\mathfrak{h}]=0\}$. Then \begin{equation} \mathfrak{g} = \mathfrak{h}\oplus \mathfrak{b}\oplus \mathfrak{a}\end{equation} We have $[\mathfrak{a}, \mathfrak{b}]\subset\mathfrak{b}$, $\mathfrak{a}$ is a Lie subalgebra of $\mathfrak{g}$, $[\mathfrak{a}, \mathfrak{h}] = 0$ and $\mathfrak{b}$ is both a $\mathfrak{h}$ and $\mathfrak{a}$ module. The correspondence $\mathfrak{h}\mapsto\mathfrak{a}$ is involutive on the set of all subalgebras of $\mathfrak{g}$; that is, if we apply the same procedure to $\mathfrak{a}$, we recover $\mathfrak{h}$. \end{proposition} \begin{proof} Let $X\in\mathfrak{h}^{\perp}$ and $A, H\in\mathfrak{h}$. Then $\langle [A, X],H\rangle = -\langle X,[A, H]\rangle =0$ since $\mathfrak{h}$ is a subalgebra of $\mathfrak{g}$, thus $[A, X]\in\mathfrak{h}^{\perp}$. Assume the $\langle\rangle$-orthogonal decomposition $\mathfrak{h}^{\perp} = \mathfrak{b}\oplus\mathfrak{a}$ with $\mathfrak{b} = [\mathfrak{h}, \mathfrak{h}^{\perp}]$. For $A\in\mathfrak{a}$, $\langle [A, \mathfrak{h}], \mathfrak{h}^{\perp}\rangle \subset \langle A, [\mathfrak{h}, \mathfrak{h}^{\perp}]\rangle \subset\{0\}$ and $[A, \mathfrak{h}]\subset\mathfrak{h}^{\perp}$ as $\mathfrak{h}^{\perp}$ is a $\mathfrak{h}$-module. Hence, $[A, \mathfrak{h}]=0$ as $\langle\rangle$ is non-degenerate on $\mathfrak{h}^{\perp}$. Conversely, if $A\in\mathfrak{h}^{\perp}$ and $[A, \mathfrak{h}]=0$ then $\langle A, [\mathfrak{h}, \mathfrak{h}^{\perp}]\rangle \subset \langle [A, \mathfrak{h}], \mathfrak{h}^{\perp}\rangle\subset\{0\}$, thus $A\in\mathfrak{a}$. We have proved $\mathfrak{a}$ is characterized as the subspace of $\mathfrak{h}^{\perp}$ such that $[A,\mathfrak{h}] = 0$ for $A\in\mathfrak{a}$. Next, for $A\in\mathfrak{a}$, $\langle [A, \mathfrak{h}^{\perp}], \mathfrak{h}\rangle \subset \langle A,[ \mathfrak{h}^{\perp}, \mathfrak{h}]\rangle\subset\{0\}$, thus $[A, \mathfrak{h}^{\perp}]\subset\mathfrak{h}^{\perp}$. Then $$\langle [A, [\mathfrak{h}, \mathfrak{h}^{\perp}]], \mathfrak{a}\rangle \subset \langle [[A, \mathfrak{h}], \mathfrak{h}^{\perp}], \mathfrak{a}\rangle + \langle [\mathfrak{h},[A, \mathfrak{h}^{\perp}]], \mathfrak{a}\rangle\subset\{0\} $$ where in the middle sum, the first term is zero because $[A, \mathfrak{h}] = 0$, and the second is $\langle [\mathfrak{h},[A, \mathfrak{h}^{\perp}]], \mathfrak{a}\rangle\subset \langle [A, \mathfrak{h}^{\perp}],[\mathfrak{h}, \mathfrak{a}]\rangle\subset \{0\}$ as $[\mathfrak{h}, \mathfrak{a}] = \{0\}$.
This shows $[\mathfrak{a}, \mathfrak{b}]$ is in the orthogonal complement of $\mathfrak{a}$ in $\mathfrak{h}^{\perp}$, or $[\mathfrak{a}, \mathfrak{b}]\subset\mathfrak{b}$. Now, $\langle [\mathfrak{a}, \mathfrak{a}],\mathfrak{h}\rangle \subset \langle \mathfrak{a}, [\mathfrak{a},\mathfrak{h}]\rangle \subset \{0\}$, thus $[\mathfrak{a}, \mathfrak{a}]\subset\mathfrak{h}^{\perp}$. But then $\langle [\mathfrak{a}, \mathfrak{a}],\mathfrak{b}\rangle \subset \langle \mathfrak{a}, [\mathfrak{a},\mathfrak{b}]\rangle\subset\langle \mathfrak{a}, \mathfrak{b}\rangle \subset\{0\}$, hence $[\mathfrak{a}, \mathfrak{a}]\subset\mathfrak{a}$, therefore $\mathfrak{a}$ is a subalgebra of $\mathfrak{g}$, and $\mathfrak{b}$ is both a $\mathfrak{h}$ and $\mathfrak{a}$ module. Involutiveness follows from the orthogonal decomposition $\mathfrak{g} = \mathfrak{h}\oplus\mathfrak{b}\oplus\mathfrak{a}$, and the characterization of $\mathfrak{a}$ by the relation $[\mathfrak{a}, \mathfrak{h}]=0$, which implies $\mathfrak{a}^{\perp} = \mathfrak{b}\oplus\mathfrak{h}$. \end{proof} \begin{proposition}\label{prop:adPt}Assume $\mathfrak{g}$ has a bi-invariant inner product $\langle\rangle$. Let $\rP$ be a positive-definite self-adjoint operator under the inner product $\langle\rangle$. Then under the inner product $\langle\rangle_{\rP}$ defined by $\langle A_1, A_2\rangle_{\rP}:= \langle A_1, \rP A_2\rangle$, we have $\addg_A X = -\rP^{-1}[A, \rP X]$ for $X\in\mathfrak{g}$, or $\addg_A = -\rP^{-1}\circ\ad_A\circ \rP$. Let $t$ be a positive number and $\mathfrak{a}, \mathfrak{b}, \mathfrak{h}$ as in \cref{prop:abh_split}. Let $\mathfrak{n}=\mathfrak{b}\oplus\mathfrak{h}$, thus $\mathfrak{g} = \mathfrak{a} \oplus\mathfrak{n}$. Define the operator $\rP = \rP_t$ by $\rP \omega = t \omega_{\mathfrak{a}} + \omega_{\mathfrak{n}}$. Then for $\omega_1, \omega_2\in\mathfrak{g}$ \begin{equation}\label{eq:addg_Pta} (\addg_{\omega_{1}}\omega_{2})_{\mathfrak{a}} = -[\omega_{1\mathfrak{a}}, \omega_{2\mathfrak{a}}] -1/t[\omega_{1\mathfrak{n}},\omega_{2\mathfrak{n}}]_{\mathfrak{a}} \end{equation} \begin{equation}\label{eq:addg_Ptn} (\addg_{\omega_{1}}\omega_{2})_{\mathfrak{n}} = - [\omega_{1\mathfrak{a}}, \omega_{2\mathfrak{b}}] + t[\omega_{2\mathfrak{a}}, \omega_{1\mathfrak{b}}] -[\omega_{1\mathfrak{n}},\omega_{2\mathfrak{n}}]_{\mathfrak{n}} \end{equation} \begin{equation}\label{eq:addg_BKP} [\omega_1, \omega_2]_{\rP} = [\omega_1, \omega_2] + (1-t)([\omega_{1\mathfrak{a}},\omega_{2\mathfrak{b}}] + [\omega_{2\mathfrak{a}},\omega_{1\mathfrak{b}}])\end{equation} Let $\mathfrak{k}\subset \mathfrak{h}$ be a Lie subalgebra of $\mathfrak{h}$ and $\mathfrak{m} = \mathfrak{a}\oplus\mathfrak{b}\oplus\mathfrak{d}$ where $\mathfrak{h} = \mathfrak{k}\oplus\mathfrak{d}$ is an orthogonal decomposition, thus $\mathfrak{g} = \mathfrak{k}\oplus\mathfrak{m}$. For $\omega_3 \in\mathfrak{g}$ \begin{equation}\label{eq:oneil_P} (\addg_{\omega_3}[\omega_1, \omega_2]_{\mathfrak{k}})_{\mathfrak{m}} = -[\omega_{3\mathfrak{m}}, [\omega_1, \omega_2]_{\mathfrak{k}}] \end{equation} \end{proposition} \begin{proof} Let $A, Y, X\in \mathfrak{g}$. From $\ad(\mathfrak{g})$ invariance of $\langle\rangle$ we have $$\langle[A, Y], \rP X\rangle = \langle Y, -\rP \rP^{-1}[A, \rP X]\rangle$$ which gives us the first statement.
For \cref{eq:addg_Pta} and \cref{eq:addg_Ptn}, we expand $$\begin{gathered}\addg_{\omega_1}\omega_2 = -\rP^{-1}[\omega_{1\mathfrak{a}} + \omega_{1\mathfrak{n}}, t\omega_{2\mathfrak{a}} + \omega_{2\mathfrak{n}}]\\ =(-1/t)([t\omega_{1\mathfrak{a}}, \omega_{2\mathfrak{a}}] + [\omega_{1\mathfrak{n}},\omega_{2\mathfrak{n}}]_{\mathfrak{a}}) - ([\omega_{1\mathfrak{a}}, \omega_{2\mathfrak{n}}] + [\omega_{1\mathfrak{n}}, t\omega_{2\mathfrak{a}}] +[\omega_{1\mathfrak{n}},\omega_{2\mathfrak{n}}])_{\mathfrak{n}} \end{gathered}$$ and then use the fact that $[\mathfrak{a}, \mathfrak{h}] = \{0\}$. Equation \ref{eq:addg_BKP} follows from this and the definition of $[\quad]_{\rP}$, using anti-commutativity to cancel $1/t([\omega_{1\mathfrak{n}},\omega_{2\mathfrak{n}}]_{\mathfrak{a}} + [\omega_{2\mathfrak{n}},\omega_{1\mathfrak{n}}]_{\mathfrak{a}})$. For \cref{eq:oneil_P}, let $\omega_4\in\mathfrak{m}$; we have $$\langle \addg_{\omega_3}[\omega_1,\omega_2]_{\mathfrak{k}},\omega_4\rangle_{\rP} = \langle [\omega_1,\omega_2]_{\mathfrak{k}},\rP[\omega_3,\omega_4]\rangle = \langle [\omega_1,\omega_2]_{\mathfrak{k}},[\omega_3,\omega_4]_{\mathfrak{k}}\rangle $$ since when we expand $\rP[\omega_3,\omega_4]$, only $[\omega_3,\omega_4]_{\mathfrak{k}}$ could fail to be orthogonal to $[\omega_1,\omega_2]_{\mathfrak{k}}$. From here $\langle [\omega_1,\omega_2]_{\mathfrak{k}},[\omega_3,\omega_4]_{\mathfrak{k}}\rangle = \langle [\omega_1,\omega_2]_{\mathfrak{k}},[\omega_3,\omega_4]\rangle = -\langle [\omega_3, [\omega_1,\omega_2]_{\mathfrak{k}}],\omega_4\rangle$. But $[\omega_{3\mathfrak{k}},[\omega_1,\omega_2]_{\mathfrak{k}}]$ is orthogonal to $\omega_4\in\mathfrak{m}$, so we are left with $-\langle [\omega_{3\mathfrak{m}}, [\omega_1,\omega_2]_{\mathfrak{k}}],\omega_4\rangle =-\langle [\omega_{3\mathfrak{m}}, [\omega_1,\omega_2]_{\mathfrak{k}}],\omega_4\rangle_{\rP}$ as $[\omega_{3\mathfrak{m}}, [\omega_1,\omega_2]_{\mathfrak{k}}]_{\mathfrak{a}} = 0$, because $[\omega_{3\mathfrak{a}}, [\omega_1,\omega_2]_{\mathfrak{k}}] = 0$ while the remaining term is in $\mathfrak{b}\oplus\mathfrak{h}$. By \cref{prop:abh_split}, $[\omega_{3\mathfrak{m}}, [\omega_1,\omega_2]_{\mathfrak{k}}]\in\mathfrak{m}$ since $\mathfrak{m}$ is the orthogonal complement of $\mathfrak{k}$; this proves \cref{eq:oneil_P}. \end{proof} Recall $o$ is the coset containing the identity in the homogeneous manifold $\mathtt{G}/\mathtt{K}$. The expression $\rR^{[0]}$ in the following theorem is the curvature of a normal homogeneous manifold, though perhaps not usually written in this form.
\begin{proposition}\label{prop:curve_Pt} Let $\mathtt{G}$ be a Lie group with Lie algebra $\mathfrak{g}$ and a bi-invariant metric $\langle\rangle_{\mathtt{G}}$, and let $\mathfrak{k}\subset\mathfrak{h}$ be subalgebras of $\mathfrak{g}$, with $\mathfrak{g} = \mathfrak{a} \oplus \mathfrak{b}\oplus\mathfrak{h} = \mathfrak{a}\oplus\mathfrak{n} = \mathfrak{m}\oplus\mathfrak{k}$ as in \cref{prop:abh_split}. The curvature of the homogeneous manifold $\mathtt{M} =\mathtt{G}/\mathtt{K}$ under the metric $\rP_t$ at $o$, evaluated at $\omega_1, \omega_2, \omega_3\in\mathfrak{m}$, is given by \begin{equation} \rR_{\omega_1, \omega_2}\omega_3 = \rR^{[0]}_{\omega_1, \omega_2}\omega_3 + (1-t)\rR^{[1]}_{\omega_1, \omega_2}\omega_3 + (1-t)^2\rR^{[2]}_{\omega_1, \omega_2}\omega_3 \end{equation} \begin{equation} \begin{gathered} \rR^{[0]}_{\omega_1, \omega_2}\omega_3 := \frac{1}{4}([[\omega_1, \omega_2],\omega_3]_{\mathfrak{m}} + 2[[\omega_1, \omega_2]_{\mathfrak{k}},\omega_3] -[[\omega_2, \omega_3]_{\mathfrak{k}},\omega_1] + [[\omega_1, \omega_3]_{\mathfrak{k}},\omega_2]) \end{gathered} \end{equation} \begin{equation}\begin{gathered} \rR^{[1]}_{\omega_1, \omega_2}\omega_3 := \frac{1}{2}([[\omega_1, \omega_2]_{\mathfrak{a}}, \omega_{3\mathfrak{b}}] + [\omega_{3\mathfrak{a}}, [\omega_1, \omega_2]_{\mathfrak{b}}])\\ - \frac{1}{4}([\omega_1, [\omega_{2\mathfrak{a}}, \omega_{3\mathfrak{b}}] + [\omega_{3\mathfrak{a}}, \omega_{2\mathfrak{b}}]] + [\omega_{1\mathfrak{a}}, [\omega_2, \omega_3]_{\mathfrak{b}}] + [[\omega_2, \omega_3]_{\mathfrak{a}}, \omega_{1\mathfrak{b}}])_{\mathfrak{m}}\\ + \frac{1}{4}([\omega_2, [\omega_{1\mathfrak{a}}, \omega_{3\mathfrak{b}}] + [\omega_{3\mathfrak{a}}, \omega_{1\mathfrak{b}}]] + [\omega_{2\mathfrak{a}}, [\omega_1, \omega_3]_{\mathfrak{b}}] + [[\omega_1, \omega_3]_{\mathfrak{a}}, \omega_{2\mathfrak{b}}])_{\mathfrak{m}} \end{gathered} \end{equation} \begin{equation}\begin{gathered} 4\rR^{[2]}_{\omega_1, \omega_2}\omega_3 := -[\omega_{1\mathfrak{a}}, [\omega_{2\mathfrak{a}}, \omega_{3\mathfrak{b}}] + [\omega_{3\mathfrak{a}}, \omega_{2\mathfrak{b}}]] + [\omega_{2\mathfrak{a}}, [\omega_{1\mathfrak{a}}, \omega_{3\mathfrak{b}}] + [\omega_{3\mathfrak{a}}, \omega_{1\mathfrak{b}}]] \end{gathered} \end{equation} \end{proposition} \begin{proof}We apply the formulas for $[\quad]_{\rP}$, with $\mathfrak{n} = \mathfrak{b}\oplus\mathfrak{h}$ and $\mathfrak{g} = \mathfrak{a}\oplus\mathfrak{n}$ $$[[\omega_1, \omega_2], \omega_3]_{\rP, \mathfrak{a}} = [[\omega_1, \omega_2], \omega_3]_{\mathfrak{a}} $$ $$[[\omega_1, \omega_2], \omega_3]_{\rP, \mathfrak{n}} = [[\omega_1, \omega_2], \omega_3]_{\mathfrak{n}} + (1-t)([[\omega_1, \omega_2]_{\mathfrak{a}}, \omega_{3\mathfrak{b}}] +[\omega_{3\mathfrak{a}}, [\omega_1, \omega_2]_{\mathfrak{b}}]) $$ $$[\omega_1, [\omega_2, \omega_3]_{\rP}]_{\rP} = [\omega_1, [\omega_2, \omega_3]] + (1-t) [\omega_1, [\omega_{2\mathfrak{a}}, \omega_{3\mathfrak{b}}] + [\omega_{3\mathfrak{a}}, \omega_{2\mathfrak{b}}]] +$$ $$ (1-t)([\omega_{1\mathfrak{a}}, [\omega_2, \omega_3]_{\mathfrak{b}}] + [[\omega_2, \omega_3]_{\mathfrak{a}}, \omega_{1\mathfrak{b}}]) +(1-t)^2([\omega_{1\mathfrak{a}}, [\omega_{2\mathfrak{a}}, \omega_{3\mathfrak{b}}] + [\omega_{3\mathfrak{a}}, \omega_{2\mathfrak{b}}]]) $$ We now apply \cref{eq:hom_curv}.
By the Jacobi identity, the $\rR^{[0]}$ component of the first line is $$(\frac{1}{2}[[\omega_1, \omega_2], \omega_3] - \frac{1}{4}[\omega_1, [\omega_2, \omega_3]] + \frac{1}{4}[\omega_2, [\omega_1, \omega_3]])_{\mathfrak{m}}=\frac{1}{4}[[\omega_1, \omega_2], \omega_3]_{\mathfrak{m}}$$ while the second line has the O'Neill terms $\addg_{\omega_i}[\omega_j, \omega_k]_{\mathfrak{k}}$ ($i, j, k$ in a permutation of $\{1, 2, 3\}$) evaluated as $-[\omega_{i\mathfrak{m}}, [\omega_j, \omega_k]_{\mathfrak{k}}]$. Since we assume $\omega_i\in\mathfrak{m}$, this gives us the expression for $\rR^{[0]}$. Permuting indices and collecting terms, we get $\rR^{[1]}$ and $\rR^{[2]}$. Some of the expressions, for example $\rR^{[2]}_{\omega_1, \omega_2}\omega_3$, are already in $\mathfrak{m}$, so we do not need to apply the projection again. \end{proof} We use these formulas to compute the Levi-Civita connection and curvature for Stiefel manifolds. For two integers $n> p$, we will describe the Stiefel manifold as $\SOO(n)/\SOO(n-p)$; here $\mathtt{G} = \SOO(n)$, $\mathtt{K} = \SOO(n-p)$, and $\mathfrak{g} = \mathfrak{o}(n)$. Take the bi-invariant form to be $\frac{1}{2}\Tr(\omega_1^{\ft}\omega_2)$. We divide a matrix in $\mathfrak{o}(n)$ into blocks of the form $\begin{bmatrix}A & -B^{\ft}\\B & H \end{bmatrix}$, with $A\in\mathfrak{o}(p)$, $B\in\mathbb{R}^{(n-p)\times p}$ and $H\in\mathbb{R}^{(n-p)\times (n-p)}$, and we represent that matrix by a triple $\lbb A, B, H\rbb$ to save space. Take the subalgebra generated by the $H$ block to be $\mathfrak{k}=\mathfrak{h}=\mathfrak{o}(n-p)$, identified with the bottom right $(n-p)\times (n-p)$ block of $\mathfrak{o}(n)$; then $\mathfrak{m}$ is the subspace of $\mathfrak{o}(n)$ where the $H$-block is zero, the subalgebra $\mathfrak{a}$ is $\mathfrak{o}(p)$ identified with the $A$-block, and $\mathfrak{b}$ is the subspace generated by the $B$ and $B^{\ft}$-blocks, as below $$\mathfrak{g} : \begin{bmatrix}\mathfrak{a} & \mathfrak{b}\\ \mathfrak{b} & \mathfrak{h}\end{bmatrix}\quad\quad \mathfrak{n} : \begin{bmatrix}0 & \mathfrak{b}\\ \mathfrak{b} & \mathfrak{h}\end{bmatrix}\quad\quad \mathfrak{m} : \begin{bmatrix}\mathfrak{a} & \mathfrak{b}\\ \mathfrak{b} & 0\end{bmatrix}$$ The Lie and $[\quad]_{\rP}$ brackets of $\lbb A_1, B_1, H_1\rbb, \lbb A_2, B_2, H_2\rbb\in \mathfrak{o}(n)$ are given by $$[\lbb A_1, B_1, H_1\rbb, \lbb A_2, B_2, H_2\rbb] =$$ $$\lbb[A_1, A_2]+B_2^{\ft}B_1 -B_1^{\ft}B_2, B_1A_2 +H_1B_2 - B_2A_1 - H_2B_1, [H_1, H_2] +B_2B_1^{\ft}- B_1B_2^{\ft}\rbb$$ $$[\lbb A_1, B_1, H_1\rbb, \lbb A_2, B_2, H_2\rbb]_{\rP} = \lbb[A_1, A_2]+B_2^{\ft}B_1 -B_1^{\ft}B_2,$$ $$ tB_1A_2 +H_1B_2 +(t-2)B_2A_1 - H_2B_1,$$ $$ [H_1, H_2] +B_2B_1^{\ft}- B_1B_2^{\ft}\rbb$$ For $U = (Y|Y_{\perp})\in\SOO(n)$, where $(|)$ denotes the division of a matrix in $\mathbb{R}^{n\times n}$ to the first $p$ (in $\mathbb{R}^{n\times p}$) and last $n-p$ (in $\mathbb{R}^{n\times (n-p)}$) column blocks, if $\omega = (\eta|\eta_{\perp})$ is a tangent vector at $U$ to $\SOO(n)$ then $\omega = d\mathcal{L}_U(U^{\ft}\omega) = U\lbb Y^{\ft}\eta, Y_{\perp}^{\ft}\eta, Y_{\perp}^{\ft}\eta_{\perp}\rbb$. We describe the submersion $\SOO(n)\to\St{p}{n}$, identifying $\St{p}{n}$ with $\SOO(n)/\SOO(n-p)$ by the map $U\mapsto Y$, where $U = (Y|Y_{\perp})$ as just described.
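Before continuing, we record a quick machine check of the block form of the Lie bracket displayed above (a sketch of our own; the embedding and random test data are illustrative):
\begin{verbatim}
import numpy as np

def emb(A, B, H):
    # embed the triple [[A, B, H]] as [[A, -B^T], [B, H]] in o(n)
    return np.block([[A, -B.T], [B, H]])

def rand_skew(rng, k):
    X = rng.standard_normal((k, k))
    return X - X.T

rng = np.random.default_rng(0)
p, n = 3, 7
A1, A2 = rand_skew(rng, p), rand_skew(rng, p)
H1, H2 = rand_skew(rng, n - p), rand_skew(rng, n - p)
B1 = rng.standard_normal((n - p, p))
B2 = rng.standard_normal((n - p, p))
X1, X2 = emb(A1, B1, H1), emb(A2, B2, H2)
lie = X1 @ X2 - X2 @ X1          # matrix commutator in o(n)
blockA = A1 @ A2 - A2 @ A1 + B2.T @ B1 - B1.T @ B2
blockB = B1 @ A2 + H1 @ B2 - B2 @ A1 - H2 @ B1
blockH = H1 @ H2 - H2 @ H1 + B2 @ B1.T - B1 @ B2.T
assert np.allclose(lie, emb(blockA, blockB, blockH))
\end{verbatim}
The $[\quad]_{\rP}$ bracket differs only in the $B$-component, as displayed above.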
The map is clearly a differentiable submersion onto $\St{p}{n}$; the fiber over $Y$ consists of matrices of the form $(Y|Y_{\perp} Q)$, $Q\in\SOO(n-p)$, hence the vertical space consists of $(0|Y_{\perp} \mathrm{q})$, $\mathrm{q}\in\mathfrak{o}(n-p)$. Equip $\SOO(n)$ with the metric $\rP_t$ in \cref{prop:adPt}. At $U=\dI_n$, the horizontal space consists of matrices of the form $\lbb A, B, 0\rbb$, with $A\in\mathfrak{o}(p), B\in\mathbb{R}^{(n-p)\times p}$, and in general, a horizontal vector is of the form $U\lbb A, B, 0\rbb$. The submersion maps $\omega = (\eta|\eta_{\perp})$ to $\eta\in\mathbb{R}^{n\times p}$ satisfying $Y^{\ft}\eta \in\mathfrak{o}(p)$. \begin{proposition}\label{prop:stf_leftinv} With the above setting, the horizontal lift of a tangent vector $\eta\in\mathbb{R}^{n\times p}$ to $\St{p}{n}$ at $U=(Y|Y_{\perp})\in\SOO(n)$ under $\rP_t$ is $\bar{\eta} = (\eta| -Y\eta^{\ft}Y_{\perp})$ and the induced metric is \begin{equation}\langle\eta, \eta \rangle_t = \Tr(\eta\eta^{\ft} +(\frac{t}{2} - 1) YY^{\ft}\eta\eta^{\ft})\end{equation} The Levi-Civita connection for two vector fields $\mathtt{V}, \mathtt{Z}$ on $\St{p}{n}$ under this metric is given by \begin{equation}\label{eq:levinew}\nabla_{\mathtt{V}}\mathtt{Z} = \rD_{\mathtt{V}}\mathtt{Z} + \frac{1}{2}Y(\mathtt{V}^{\ft}\mathtt{Z} + \mathtt{Z}^{\ft}\mathtt{V}) + \frac{2-t}{2}(\dI_n-YY^{\ft})(\mathtt{V} \mathtt{Z}^{\ft} +\mathtt{Z}\mathtt{V}^{\ft})Y \end{equation} The curvature $\rR_{\xi, \eta}\phi$ at $Y\in\St{p}{n}$ for three tangent vectors $\xi, \eta, \phi$ computed by \cref{prop:curv_general} is identical to that computed by \cref{eq:cur_ABCA} and (\ref{eq:cur_ABCB}) if we represent the tangent and curvature vectors in the format in \cref{prop:stiefel_cur}, and set $\alpha=t/2$. \end{proposition} \begin{proof} A matrix multiplication shows $U^{\ft}\bar{\eta}$ is antisymmetric and could be represented as $\lbb Y^{\ft}\eta, Y_{\perp}^{\ft}\eta, 0\rbb\in\mathfrak{o}(n)$, which is horizontal at $\dI_n$, thus $\bar{\eta}$ is horizontal and maps to $\eta$, hence it is the horizontal lift. Using the relation $Y_{\perp}\Yperp^{\ft} + YY^{\ft} = \dI_n$, the induced metric is $$\begin{gathered}\langle U^{\ft}\bar{\eta}, U^{\ft}\bar{\eta}\rangle_{\rP} = \frac{1}{2}\Tr\begin{bmatrix} Y^{\ft}\eta & -\eta^{\ft}Y_{\perp}\\Y_{\perp}^{\ft}\eta& 0\end{bmatrix}\begin{bmatrix} t\eta^{\ft}Y & \eta^{\ft}Y_{\perp}\\ -Y_{\perp}^{\ft}\eta& 0\end{bmatrix}\\ = \frac{1}{2}\Tr (tYY^{\ft}\eta\eta^{\ft} + 2 Y_{\perp}\Yperp^{\ft}\eta\eta^{\ft}) = \Tr(\eta\eta^{\ft} +(\frac{t}{2} - 1) YY^{\ft}\eta\eta^{\ft})\end{gathered}$$ Let $\mathtt{V}, \mathtt{Z}$ be two vector fields on $\St{p}{n}$, which lift to $\SOO(n)$-vector fields $\bar{\mathtt{V}} = (\mathtt{V}|-Y\mathtt{V}^{\ft}Y_{\perp})$, $\bar{\mathtt{Z}} = (\mathtt{Z}|-Y\mathtt{Z}^{\ft}Y_{\perp})$. Let $F = U^{\ft}\bar{\mathtt{Z}} = \lbb Y^{\ft}\mathtt{Z}, Y_{\perp}^{\ft}\mathtt{Z}, 0\rbb$; by \cref{eq:nabla_hom}, $\nabla_{\mathtt{V}}\mathtt{Z}$ lifts to $UC_{\mathfrak{m}}$ with $C = \rD_{\bar{\mathtt{V}}}F + \frac{1}{2}[\lbb Y^{\ft}\mathtt{V}, Y_{\perp}^{\ft}\mathtt{V}, 0\rbb, \lbb Y^{\ft}\mathtt{Z}, Y_{\perp}^{\ft}\mathtt{Z}, 0\rbb]_{\rP}$.
Expand the Lie-derivative and the $\rP$-bracket $$\begin{gathered}C = \lbb\mathtt{V}^{\ft}\mathtt{Z} + Y^{\ft}\rD_{\mathtt{V}}\mathtt{Z}, -Y_{\perp}^{\ft}\mathtt{V} Y^{\ft}\mathtt{Z} +Y_{\perp}^{\ft}\rD_{\mathtt{V}}\mathtt{Z}, 0\rbb +\\ \frac{1}{2}\lbb[Y^{\ft}\mathtt{V}, Y^{\ft}\mathtt{Z}] +\mathtt{Z}^{\ft}Y_{\perp}\Yperp^{\ft}\mathtt{V} - \mathtt{V}^{\ft}Y_{\perp}\Yperp^{\ft}\mathtt{Z}, tY_{\perp}^{\ft}\mathtt{V} Y^{\ft}\mathtt{Z} +(t-2)Y_{\perp}^{\ft}\mathtt{Z} Y^{\ft}\mathtt{V}, C_H\rbb \end{gathered}$$ for $C_H\in\mathfrak{o}(n-p)$. Thus, the submersion maps $UC_{\mathfrak{m}}$ to its left $p$ columns $$\begin{gathered} Y(\mathtt{V}^{\ft}\mathtt{Z} + Y^{\ft}\rD_{\mathtt{V}}\mathtt{Z} +\frac{1}{2}([Y^{\ft}\mathtt{V}, Y^{\ft}\mathtt{Z}] +\mathtt{Z}^{\ft}Y_{\perp}\Yperp^{\ft}\mathtt{V} - \mathtt{V}^{\ft}Y_{\perp}\Yperp^{\ft}\mathtt{Z})) +\\ Y_{\perp}(-Y_{\perp}^{\ft}\mathtt{V} Y^{\ft}\mathtt{Z} +Y_{\perp}^{\ft}\rD_{\mathtt{V}}\mathtt{Z} +\frac{1}{2}(tY_{\perp}^{\ft}\mathtt{V} Y^{\ft}\mathtt{Z} +(t-2)Y_{\perp}^{\ft}\mathtt{Z} Y^{\ft}\mathtt{V}))\\ =\rD_{\mathtt{V}}\mathtt{Z} + Y\mathtt{V}^{\ft}\mathtt{Z} + \frac{1}{2}(YY^{\ft}\mathtt{V} Y^{\ft} \mathtt{Z} - YY^{\ft}\mathtt{Z} Y^{\ft} \mathtt{V} + Y\mathtt{Z}^{\ft}Y_{\perp}\Yperp^{\ft}\mathtt{V} - Y\mathtt{V}^{\ft}Y_{\perp}\Yperp^{\ft}\mathtt{Z}) \\ +\frac{1}{2} Y_{\perp}\Yperp^{\ft}(-2\mathtt{V} Y^{\ft}\mathtt{Z} +t\mathtt{V} Y^{\ft}\mathtt{Z} +(t-2)\mathtt{Z} Y^{\ft}\mathtt{V}) \end{gathered}$$ The last line simplifies to $$\frac{t-2}{2}(\dI_n-YY^{\ft})(\mathtt{V} Y^{\ft}\mathtt{Z} +\mathtt{Z} Y^{\ft}\mathtt{V}) = \frac{2-t}{2}(\dI_n-YY^{\ft})(\mathtt{V} \mathtt{Z}^{\ft} +\mathtt{Z}\mathtt{V}^{\ft})Y$$ while twice the remaining terms, except for $\rD_{\mathtt{V}}\mathtt{Z}$, is $$\begin{gathered}2Y\mathtt{V}^{\ft}\mathtt{Z} + YY^{\ft}\mathtt{V} Y^{\ft} \mathtt{Z} - YY^{\ft}\mathtt{Z} Y^{\ft} \mathtt{V} + Y\mathtt{Z}^{\ft}(\dI_n-YY^{\ft})\mathtt{V} - Y\mathtt{V}^{\ft}(\dI_n-YY^{\ft})\mathtt{Z} \\ = Y\mathtt{V}^{\ft}\mathtt{Z} + Y\mathtt{Z}^{\ft}\mathtt{V} +Y(Y^{\ft}\mathtt{V} +\mathtt{V}^{\ft}Y)Y^{\ft}\mathtt{Z} -Y(Y^{\ft}\mathtt{Z}+\mathtt{Z}^{\ft}Y)Y^{\ft}\mathtt{V}\\ =Y\mathtt{V}^{\ft}\mathtt{Z} + Y\mathtt{Z}^{\ft}\mathtt{V} \end{gathered}$$ Thus we have proved \cref{eq:levinew}. Let us prove the curvature expressions. To show $f(t)=g(t/2)$, with $f(t) = f_0 + (1-t)f_1 + (1-t)^2f_2$ where $f_0, f_1, f_2$ are constant matrices and $g$ is a matrix-valued quadratic function in $t$, we need to show $f_0 = g(1/2)$, $-2f_1 = g'(1/2)$ and $8f_2 = g''(1/2)$. From left invariance we can take $U = \dI_n$. Thus, we need to compute $\rR^{[0]}, \rR^{[1]}, \rR^{[2]}$ and compare with the values and derivatives at $\alpha=1/2$ of $g(\alpha) = \lbb A_R(\alpha), B_R(\alpha), 0\rbb$, with $A_R, B_R$ defined from \cref{eq:cur_ABCA} and (\ref{eq:cur_ABCB}).
Let $\xi = \omega_1, \eta=\omega_2, \phi=\omega_3$ with $\omega_i = \lbb A_i, B_i, 0\rbb$; we have $[\omega_{2\mathfrak{a}}, \omega_{3\mathfrak{b}}]$ is $\lbb 0, -B_3A_2, 0\rbb$, $[\omega_{1\mathfrak{a}}, [\omega_{2\mathfrak{a}}, \omega_{3\mathfrak{b}}]] = \lbb 0, B_3A_2A_1, 0\rbb$ and permuting the indices $$4\rR^{[2]}_{\omega_1, \omega_2}\omega_3 = \lbb 0, -B_3A_2A_1 - B_2A_3A_1+ B_3A_1A_2 + B_1A_3A_2, 0\rbb$$ On the other hand, \cref{eq:cur_ABCA} and \ref{eq:cur_ABCB} give $A_{R, \alpha=1/2}''=0$ and $$ B_{R,\alpha=1/2}'' = \frac{4}{2} ( B_{1} A_{3} A_{2} - B_{2} A_{3} A_{1}) + 2 (B_{3} A_{1} A_{2} - B_{3} A_{2} A_{1}) $$ which confirms $8\rR^{[2]}_{\omega_1, \omega_2}\omega_3 = g''(1/2)$. Next, $$[[\omega_1, \omega_2]_{\mathfrak{a}},\omega_{3\mathfrak{b}}] = \lbb 0, - B_3([A_1, A_2]+B_2^{\ft}B_1 -B_1^{\ft}B_2), 0\rbb$$ $$[\omega_{3\mathfrak{a}}, [\omega_1, \omega_2]_{\mathfrak{b}}] = \lbb 0, -(B_1A_2 - B_2A_1)A_3, 0\rbb$$ $$[\omega_1, [\omega_{2\mathfrak{a}}, \omega_{3\mathfrak{b}}]]_{\mathfrak{m}} = [\lbb A_1, B_1, 0\rbb, \lbb 0, -B_3A_2, 0\rbb]_{\mathfrak{m}} = \lbb A_2B_3^{\ft}B_1 + B_1^{\ft}B_3A_2, B_3A_2A_1, 0\rbb$$ By permuting indices, we evaluate the $\mathfrak{a}$ component of $4\rR^{[1]}_{\omega_1, \omega_2}\omega_3$ from four expressions similar to $[\omega_1, [\omega_{2\mathfrak{a}}, \omega_{3\mathfrak{b}}]]_{\mathfrak{a}}$ as $$\begin{gathered} -A_2B_3^{\ft}B_1 - B_1^{\ft}B_3A_2 -A_3B_2^{\ft}B_1 - B_1^{\ft}B_2A_3\\ +A_1B_3^{\ft}B_2 + B_2^{\ft}B_3A_1 +A_3B_1^{\ft}B_2 + B_2^{\ft}B_1A_3 \end{gathered} $$ and evaluate the $\mathfrak{b}$ component of $4\rR^{[1]}_{\omega_1, \omega_2}\omega_3$ from the remaining items as $$\begin{gathered} 2(- B_3([A_1, A_2]+B_2^{\ft}B_1 -B_1^{\ft}B_2) -(B_1A_2 - B_2A_1)A_3)\\ -B_3A_2A_1 - B_2A_3A_1 +(B_2A_3 - B_3A_2)A_1 +B_1([A_2, A_3]+B_3^{\ft}B_2 -B_2^{\ft}B_3)\\ +B_3A_1A_2 + B_1A_3A_2 - (B_1A_3 - B_3A_1)A_2 -B_2([A_1, A_3]+B_3^{\ft}B_1 -B_1^{\ft}B_3) \end{gathered}$$ Let us collect terms. Terms starting with $B_3$ and two $A$ factors are $$-2 B_3[A_1, A_2] -B_3A_2A_1 -B_3A_2A_1 +B_3A_1A_2 + B_3A_1A_2 = 0$$ Terms starting with $B_2$ and two $A$ factors: $$2B_2A_1A_3 - B_2A_3A_1+B_2A_3A_1-B_2[A_1, A_3] = B_2A_1A_3 +B_2A_3A_1$$ Terms starting with $B_1$ and two $A$ factors: $$-2B_1A_2A_3+B_1[A_2, A_3] + B_1A_3A_2 - B_1A_3A_2 = -B_1A_2A_3 -B_1A_3A_2$$ Terms with $B$'s only factors $$\begin{gathered} - 2B_3(B_2^{\ft}B_1 -B_1^{\ft}B_2) +B_1(B_3^{\ft}B_2 -B_2^{\ft}B_3) -B_2(B_3^{\ft}B_1 -B_1^{\ft}B_3) \end{gathered}$$ On the other hand, we have $$\begin{gathered}A_{R, \alpha=1/2}' = \frac{-2}{4} (A_{1} B_{3}^{\ft} B_{2} - A_{2} B_{3}^{\ft} B_{1} - B_{1}^{\ft} B_{3} A_{2} + B_{2}^{\ft} B_{3} A_{1}) +\\ \frac{-1}{2}(A_{3} B_{1}^{\ft} B_{2} - A_{3} B_{2}^{\ft} B_{1} - B_{1}^{\ft} B_{2}A_{3}+ B_{2}^{\ft} B_{1} A_{3}) \end{gathered}$$ $$\begin{gathered}B_{R, \alpha=1/2}' = \frac{4(1/2)-1}{2} (B_{1} A_{3} A_{2} - B_{2} A_{3} A_{1}) +\\ (2(1/2)-1) (B_{3} A_{1} A_{2} - B_{3} A_{2} A_{1}) - (B_{3} B_{1}^{\ft} B_{2} - B_{3} B_{2}^{\ft} B_{1}) +\\ \frac{1}{2} (B_{1} B_{2}^{\ft} B_{3} -B_{2} B_{1}^{\ft} B_{3}) + \frac{1}{2}(B_{1} A_{2} A_{3} - B_{1} B_{3}^{\ft} B_{2} - B_{2} A_{1} A_{3} + B_{2} B_{3}^{\ft} B_{1}) \end{gathered}$$ and we can confirm by inspection $-2\rR^{[1]}_{\omega_1, \omega_2}\omega_3 = g'(1/2)$. The constant term $\rR^{[0]}$ is verified similarly, which we will not show here. \end{proof} \begin{remark}We have shown the metric in \cref{sec:stiefel} is $\rP_t$ for $t=2\alpha$.
The submersion associated with the Cheeger deformation gives a sectional curvature formula for $\mathtt{G}$ with the metric $\rP_t$ in proposition 2.4 of \cite{GZ2000}. Using the O'Neill equation and \cref{eq:oneil_P}, it implies the following sectional curvature formula for $\mathtt{M}=\mathtt{G}/\mathtt{K}$ (the norm $\|\|$ corresponds to the bi-invariant inner product $\langle\rangle$) \begin{equation} \begin{gathered} \langle\rR^{\mathtt{M}}_{\omega_1, \omega_2}\omega_1, \rP_t\omega_2 \rangle = \frac{1}{4}\|[\omega_{1\mathfrak{n}}, \omega_{2\mathfrak{n}}]_{\mathfrak{n}} + t [\omega_{1\mathfrak{a}}, \omega_{2\mathfrak{n}}] + t[\omega_{1\mathfrak{n}},\omega_{2\mathfrak{a}}]\|^2 +\\ \frac{1}{4}\|[\omega_{1\mathfrak{n}}, \omega_{2\mathfrak{n}}]_{\mathfrak{a}} + t^2[\omega_{1\mathfrak{a}},\omega_{2\mathfrak{a}}]\|^2 +\frac{1}{4}t(1-t)^3\|[\omega_{1\mathfrak{a}}, \omega_{2\mathfrak{a}}]\|^2 +\\ \frac{3}{4}(1-t)\|[\omega_{1\mathfrak{n}},\omega_{2\mathfrak{n}}]_{\mathfrak{a}} + t[\omega_{1\mathfrak{a}}, \omega_{2\mathfrak{a}}]\|^2 + \frac{3}{4}\|[\omega_1, \omega_2]_{\mathfrak{k}}\|^2 \end{gathered} \end{equation} This is also a weighted sum of squares, in a different format from \cref{eq:sec_sum_sq}. It implies the non-negativity of the curvature when $t\leq 1$, and, in the case $\mathfrak{a}$ is abelian, when $t\leq 4/3$. \end{remark} \section{Discussion} In this paper, we have obtained explicit formulas for curvatures of real Stiefel manifolds with deformation metrics and obtained several results related to Einstein metrics and the sectional curvature range, including parameter values corresponding to non-negative sectional curvature. We expect similar results could be obtained for complex and quaternionic Stiefel manifolds. We hope the availability of explicit curvature formulas for a family of metrics on an important class of manifolds will be helpful in both theory and applications. The framework to compute the Levi-Civita connection and curvature for deformations of normal homogeneous spaces could be applied to other families of manifolds, potentially allowing the construction of new Einstein manifolds or manifolds with non-negative curvature.
\section{Introduction} The rapid development of smartphones has led to increasing demand for location-based services (LBSs) based on wireless sensor networks within different areas, including academic research, industry and commerce \cite{R1,R3,R20}. Localization services such as the Global Positioning System (GPS) or other global navigation satellite systems (GNSS) are typically only available outdoors, and, even there, such satellite-based methods may not provide acceptable accuracy in all environments due to non-line-of-sight errors, fading, and shadowing effects \cite{R2}. These approaches employ ranging-based methods such as time of arrival, angle of arrival and received signal strength (RSS) to estimate the location of the users. These methods can also be used for LBSs in indoor environments; however, they do not provide acceptable accuracy there. Fingerprinting methods are used to improve the accuracy of LBSs in indoor environments \cite{R3}. They constitute a subset of localization approaches in which the signals of multiple base stations (BSs), such as WiFi, Bluetooth, ZigBee, light, and radio-frequency identification, are used \cite{R2} to determine the location of a receiver. Among these wireless systems, WiFi has attracted the most interest due to its ready availability in modern smartphones and other communication devices \cite{R3}. In this paper, we use the term Access Point (AP) to refer to a WiFi BS. Fingerprinting methods have two distinct phases: the offline (or training) and online (or test) phases. The mode of data collection in the offline phase depends on the localization purpose, a typical purpose being to determine the particular zone in which the user is located. In such problems, class labels are assigned to each area, and fingerprints such as RSS or channel state information (CSI) are collected for each class separately (defining a classification problem). Alternatively, when an exact location estimate is required (in essence, a regression problem), fingerprints such as RSS or CSI are collected during the training phase at specific locations of the environment, known as reference points. In the online phase, the target user receives RSS or CSI from multiple BSs or APs. This information is then passed to the system model trained during the offline phase such that the area or location of the user is finally estimated. In this paper, we focus on the classification problem and use RSS as the method for receiving signals from multiple APs. Machine learning methods, and particularly deep learning approaches, have recently been used to recognize the statistical patterns of gathered datasets to train the system model in the offline phase of fingerprint-based localization \cite{R20}. A deep model consists of several layers of neural networks connected by weighted links with activation functions applied to the outputs; typically, such methods define the state-of-the-art in classification performance across a range of domains. However, due to the parametric complexity of these models, the accuracy of deep-learning based fingerprint localization methods depends strongly on the number of samples in the training phase. Because such training data is necessarily in short supply in practical domains, there is a corresponding motivation to establish new methods that can reduce data collection costs while reaching an accuracy comparable to that of idealized deep neural classification. Many researchers have tried to reduce the data collection cost.
The authors of \cite{R6} propose a hybrid generative-discriminative approach for handling unlabeled data using a small quantity of labeled data. In \cite{R7}, the authors propose a semi-supervised deep-learning approach to reduce the cost of collecting labeled data based on constructing a neighborhood graph (similarity matrix) from trajectory data (conventional methods for constructing a neighborhood matrix do not consider the physical location of labeled data). The authors of \cite{R9} also propose an approach (GrassMA) that explicitly takes the location of labeled data into account. The authors of \cite{R21} use deep belief networks to update the hidden features of labeled fingerprints via sufficient unlabeled data. All of the methods mentioned above require ``real unlabeled'' data to reach an acceptable level of accuracy, and gathering such data without loss of user privacy is a challenging issue. In this paper, we propose a new method to improve the accuracy of localization while using less real data; we will thus use generative adversarial networks (GANs) to produce synthetic data as an extra input to the classification model. GANs have been used for augmentation of data in classification problems within a number of other research areas, including remote sensing \cite{R13}, object classification \cite{R14}, and more generally in computer vision \cite{R15}. To the best of our knowledge, this is the first time that synthetic data generated by a GAN has been used to improve fingerprint-based localization. In this paper, we use lower-case letters (e.g. $a$) to denote scalar numbers, boldface lower-case letters (e.g. $\mathbf{a}$) for vectors, capital letters (e.g. $A$) for functions and boldface capital letters (e.g. $\mathbf{A}$) to denote matrices. The remainder of this paper is organized as follows: in section II, the conventional system model for the fingerprint-based classification problem is presented. In section III, the proposed deep learning-based model to generate synthetic data is set out. Experimental results and conclusions are presented in sections IV and V, respectively. \begin{figure}[!t] \vspace*{-10pt} \centering \includegraphics[width=\columnwidth]{fig1.pdf} \caption{A schematic of the indoor environment for localization.} \label{fig:1} \end{figure} \section{System Model} The schematic of a typical indoor environment classification problem is depicted in Fig. \ref{fig:1}. Initially, labels are assigned for each area or room. Then, RSS values from multiple APs are gathered for each area such that a system model can be trained from the collected dataset. We use a deep neural network $F(\mathbf{x},{\theta _c})$ to model task-related patterns with respect to the room configuration using the RSS values, where $\mathbf{x} \in {\mathbb{R}^{\mathbf{1 \times M}}}$ is the input vector (the RSS vector in our problem), ${\theta _c}$ denotes the parameters of the deep model learned during the training phase, and $C$ is the number of output class nodes. Given the multi-class nature of the classification problem, we make use of a cross-entropy or log-likelihood loss function \cite{R16}: \begin{equation} {\cal L}({\theta _c}) = - \sum\limits_{i = 1}^N {\sum\limits_{j = 1}^C {{y_{ij}}\log {{\hat y}_{ij}}} } \label{eq:1} \end{equation} where ${y_{ij}}=1$ if observation $i$ belongs to class $j$ and $0$ otherwise, $N$ is the total number of observations, $C$ is the number of classes, and ${\hat y_{ij}}$ is the predicted value arising from the trained $F(\mathbf{x},{\theta _c})$.
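For concreteness, a minimal NumPy sketch of the loss in Eq. (\ref{eq:1}) follows; the function name and toy values are ours, and the actual models in this paper are implemented in TensorFlow:
\begin{verbatim}
import numpy as np

# Cross-entropy of Eq. (1): y is one-hot with shape (N, C),
# y_hat holds predicted class probabilities with shape (N, C).
def cross_entropy_loss(y, y_hat, eps=1e-12):
    return -np.sum(y * np.log(y_hat + eps))

y = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]])             # two observations, C = 4 rooms
y_hat = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.1, 0.6, 0.1]])
print(cross_entropy_loss(y, y_hat))      # ~0.868
\end{verbatim}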
Equation \ref{eq:1} can be written in vector form via the class summation: \begin{equation} {\cal L}({\theta _c}) = - \sum\limits_{i = 1}^N {{\mathbf{y}_i}\log \mathbf{\hat y}_i^T} \label{eq:2} \end{equation} where ${\mathbf{y}_i} \in {\mathbb{B}^{1 \times C}}\,$ and $\,\mathbb{B} \in \{ 0,1\} $, i.e. ${\mathbf{y}_i}$ is one-hot encoded so as to be one of ${\left[ {1,{\text{ }}0,{\text{ }}0, \ldots ,{\text{ }}0} \right]_1}$, ${\left[ {0,{\text{ 1}},{\text{ }}0, \ldots ,{\text{ }}0} \right]_2}$, ..., ${\left[ {0,{\text{ }}0,{\text{ }}0, \ldots ,{\text{ 1}}} \right]_C}$, and ${\hat y_{ij}}$ is a real number between 0 and 1 predicted by the deep model. The log function favors hard selection of a single class; back-propagation is used to train the system via minimization of the log-likelihood cost function using the Adam \cite{R19} optimizer to obtain $F(\mathbf{x},{\theta _c})$, such that the maximum value of the outputs of the $C$ nodes constitutes the class predicted for a given input vector by the trained system model. \section{Proposed Method} \begin{figure*}[!t] \vspace*{-25pt} \centering \includegraphics[width=6.8in]{fig2.pdf} \centering \setlength{\abovecaptionskip}{-10pt} \caption{The broad structure of the GAN in the training phase, where $M$ is the dimension of the input vector and $K$ is the number of observations.} \label{fig:3} \end{figure*} Generative adversarial networks, introduced by Goodfellow et al. in 2014 \cite{R10}, are a class of game-theoretic methods used for learning the feature distribution of a given dataset, so as to be able to parametrically generate synthetic data with maximal similarity to the input. GANs generally consist of two distinct parts: a Generator and a Discriminator. The generator is responsible for learning the distribution of the training dataset and generating simulated data (via input noise) that matches the distribution of the original data. The discriminator takes these data as input and, through comparison with real data, seeks to evaluate their authenticity. By continuously training these two networks together, it is hoped that a convergent point is reached in which the generator is able to create synthetic data that matches the distribution of real data sufficiently well to fool the discriminator. We shall first consider a hypothetical dataset consisting of a class-discrimination problem for some small number of classes. Suppose a GAN-based process produces synthetic data for each class. Since the process for one class is the same as that of the other classes, let the RSS dataset for this one class be as follows: \begin{equation} \mathbf R_c = \left[ {\begin{array}{*{20}{c}} {r_{11}}&{r_{12}}& \cdots &{r_{1M}} \\ {r_{21}}&{r_{22}}& \cdots &{r_{2M}} \\ \vdots & \vdots & \ddots & \vdots \\ {r_{K1}}&{r_{K2}}& \cdots &{r_{KM}} \end{array}} \right] \label{eq:3} \end{equation} \noindent where $M$ is the number of APs in the environment, $K$ is the number of class observations, and $r_{ij}$ is the magnitude of the $i$-th observation from the $j$-th AP. Each column of the above matrix constitutes a distribution over the desired class. Therefore, we define $\mathbf{x} \in {\mathbb{R}^{\mathbf{1 \times M}}}$ such that the goal of the generator is to map the prior noise latent variables $\mathbf{z} \in {\mathbb{R}^{\mathbf{1 \times L}}}$ to the distribution of $\mathbf R_c$. The broad structure of the GAN designed to achieve this is depicted in Fig. \ref{fig:3}.
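As a concrete illustration, a minimal sketch of such a generator--discriminator pair follows. This is illustrative only: it is written in PyTorch rather than the TensorFlow used in our experiments, and the hidden widths and latent dimension $L=16$ are our own choices, not tuned values.
\begin{verbatim}
import torch.nn as nn

M, L = 7, 16   # M: number of APs; L: latent noise dimension (illustrative)

# Generator G(z; theta_g): maps latent noise (1 x L) to a synthetic
# RSS vector (1 x M) for one class.
G = nn.Sequential(
    nn.Linear(L, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, M),
)

# Discriminator D(x; theta_d): outputs the probability that x is real
# (1 for real RSS data, 0 for synthetic).
D = nn.Sequential(
    nn.Linear(M, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
\end{verbatim}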
The process for producing the synthetic data is hence based on the cost function: \begin{equation} \begin{array}{l} \mathop {\min }\limits_G \mathop {\max }\limits_D {\mkern 1mu} {\mkern 1mu} {\cal L}(D,G),\,\,\, \rm{where}\\ {\cal L}(D,G) = {E_{x\sim{p_{\rm{data}}}(x)}}[\log D({\mathbf{x}})] + {E_{z\sim{p_z}(z)}}[\log (1 - D(G({\mathbf{z}})))] \end{array} \label{eq:4} \end{equation} It may be seen that the cost function consists of two parts, with the goal of the discriminator being to maximize the probability of correctly assigning labels to the real and synthetic data. Both $G$ and $D$ are differentiable functions represented by a multilayer perceptron (MLP). The Generator learns how to map the latent noise $\mathbf{z}\sim{p_z}(z)$ to the real data distribution ${\mathbf x\sim{p_{\rm{data}}}(x)}$, denoted via the $G(\mathbf{z},{\theta _g})$ structure, where ${\theta_g}$ indicates the parameters of the MLP in the generator. Conversely, the discriminator learns how to distinguish between real and synthetic data, denoted via the $D(\mathbf{x},{\theta _d})$ structure, where ${\theta _d}$ indicates the parameters of the discriminator. The discriminator has a binary classification structure in its output, in which 0 and 1 indicate synthetic and real data, respectively. The cost function can be represented for the generative and discriminative models, respectively, by fixing the other component. Therefore, the discriminator loss is defined as follows: \begin{equation} {\cal L}({\theta _d}) = {E_{x\sim{p_{\rm{data}}}(x)}}[\log D(\mathbf{x},{\theta _d})] + {E_{z\sim{p_z}(z)}}[\log (1 - D(G(\mathbf{z},{\theta _g})))] \label{eq:5} \end{equation} The loss function of the generator is correspondingly defined as follows: \begin{equation} {\cal L}({\theta _g}) = {E_{z\sim{p_z}(z)}}[\log (1 - D(G(\mathbf{z},{\theta _g})))] \label{eq:6} \end{equation} The first term of Eq. (\ref{eq:4}) vanishes during the generator's gradient update step since it does not depend on the generator. The process for updating ${\theta _d}$ and ${\theta _g}$ until convergence is given in full in Algorithm 1. For both the generator and discriminator, the Adam optimizer is used to update the parameters. Note that the discriminator is updated $s$ times for every single update of the generator. Convergence occurs when $D(\mathbf{x},{\theta _d}) = \frac{1}{2}$, meaning that the discriminator is not able to distinguish between real and synthetic data. After convergence, the generator is ready to produce synthetic samples for the desired class via the same prior noise distribution $\mathbf{z}\sim{p_z}(z)$. In the next stage of the proposed fingerprint localization pipeline, this synthetic data is combined with real data from each class in order to augment conventional classification via the deep learning model used for training, as discussed in the previous section. Therefore, the full set of RSS ($\mathbf {FR}$) data, which consists of both real RSS ($\mathbf {R}$) and synthetic RSS ($\mathbf {SR}$) data for the desired class $c$, can be defined as follows: \begin{equation} \mathbf {FR}_c = \left( {\begin{array}{*{20}{c}} {\mathbf R_c} \\ {\mathbf {SR}_c} \end{array}} \right) \label{eq:7} \end{equation} where $\mathbf R_c \in {\mathbb{R}^{K \times M}}$, $\mathbf {SR}_c \in {\mathbb{R}^{P \times M}}$, and $\mathbf {FR}_c \in {\mathbb{R}^{(K + P) \times M}}$.
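Continuing the toy PyTorch models above, the following sketch mirrors the training procedure of Algorithm 1. The learning rate, $s=2$, batch size, and iteration count are illustrative assumptions, and we use the common non-saturating surrogate for the generator update rather than literally minimizing Eq. (\ref{eq:6}).
\begin{verbatim}
import torch

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = torch.nn.BCELoss()

R_c = torch.randn(250, M)  # stand-in for the real (K x M) RSS matrix of a class
s, batch = 2, 32           # s discriminator updates per generator update

for step in range(2000):
    for _ in range(s):
        real = R_c[torch.randint(len(R_c), (batch,))]
        fake = G(torch.randn(batch, L)).detach()
        # negative of Eq. (5): minimizing trains D to label real as 1, fake as 0
        loss_d = bce(D(real), torch.ones(batch, 1)) + \
                 bce(D(fake), torch.zeros(batch, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # non-saturating generator update: push D(G(z)) towards 1
    fake = G(torch.randn(batch, L))
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

SR_c = G(torch.randn(500, L)).detach()  # synthetic samples for the class
FR_c = torch.cat([R_c, SR_c], dim=0)    # Eq. (7): real plus synthetic data
\end{verbatim}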
\begin{figure}[!t] \centering \vspace*{-20pt} \includegraphics[width=3.3in]{Alg.pdf} \caption*{} \label{fig:temp} \vspace*{-30pt} \end{figure} \section{Experiment Methodology and Results} The dataset\footnote{The dataset is available at:\\\url{https://archive.ics.uci.edu/ml/datasets/Wireless+Indoor+Localization}} used in this paper is provided by Rajen Bhatt \cite{R17}, and consists of 2000 RSS samples collected from 7 APs in four different rooms (4 classes). We randomly select half of the data for training and the other half for the test phase. Both the training and test datasets contain 250 data samples from each class. All of the data are standardized before being presented to the classification model. The model used for classification is an MLP consisting of 6 densely connected layers. The inputs are hence RSS samples, and the outputs are the class likelihoods. Both the classification and GAN models have been implemented in TensorFlow 1.13 and accelerated by a GeForce RTX 2060 GPU. We perform multiple experiments to demonstrate the validity of the proposed method; in the first experiment, 10\% of the training data in each class (25 samples) is randomly selected and presented to the GAN model in order to generate synthetic data. We add the data generated by the GAN to the remaining dataset to increase the total quantity of data. The second experiment is similar to the first one, with the exception that all of the training data (250 samples) are selected. Results in terms of test accuracy and log-likelihood loss for both experiments are presented in Table \ref{tab:1}. Accuracy is defined as $\left( N_{\rm{true}} / N_{\rm{total}} \right)\times 100$, where ${{N}_{\rm{true}}}$ is the number of correctly predicted classes within the test data and ${{N}_{\rm{total}}}$ is the total number of test data points (equal to 1000 in our experiments); the log-likelihood loss is defined in Equation \ref{eq:1}. As can be seen in this table, the classification model saturates after adding 750 and 250 synthetic samples to the 10\% and 100\% real-data cases, respectively, such that adding further synthetic samples does not increase the accuracy. In order to minimize sample bias effects in the results, the process of randomly selecting data, measuring the classification accuracy, generating synthetic data and determining the final accuracy is carried out several times. Each neural network model is trained and validated over 100 times with different initial model seeds. The averages of the test accuracy and test loss from these runs are then reported as the final results. In the final experiment, a fraction of the real data is randomly selected in order to generate synthetic data such that the total quantity of data after adding the synthetic data is equal to the quantity of original data in each class (e.g. $K$=250). As depicted in Fig. \ref{fig:4}, by varying the fraction of real data used from 5\% to 100\%, we measure the test accuracy and loss in order to evaluate the effect of synthetic data on the classification accuracy. Fig. \ref{fig:4} indicates that the test accuracy of the model is around 50\% when only a small fraction of real data is used; however, by adding synthetic data, it increases to a value of 80\%. It can also be seen that the effect of additional real data diminishes beyond 90\% and the classification model saturates. Therefore, as the results in Table \ref{tab:1} and Fig.
\ref{fig:4} show, synthetic data cannot be expected to work miracles in significantly enhancing the accuracy, especially when adequate samples of real data are available. \begin{table}[!t] \centering \caption{Effect on test accuracy and log-likelihood loss of adding synthetic data samples to 10\% and 100\% of real data.} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{\diagbox[width=12em, height = 2.3em]{\kern-0.7em \bf Synthetic Data}{\bf Real Data \kern-0.7em}} & \multicolumn{2}{c|}{\bf 10\% (25 Samples)} & \multicolumn{2}{c|}{\bf 100\% (250 Samples)} \\ \cline{2-5} &\bf Accuracy &\bf Log Loss &\bf Accuracy &\bf Log Loss \\ \hline \bf0 &62.0\% &1.03 &95.3\% &0.14 \\ \hline \bf250 &92.6\% &0.24 &97.1\% &0.08 \\ \hline \bf500 &93.0\% &0.28 &97.4\% &0.08 \\ \hline \bf750 &94.5\% &0.24 &97.3\% &0.08 \\ \hline \bf1000 &94.4\% &0.25 &97.2\% &0.08 \\ \hline \end{tabular} \label{tab:1} \end{table} \begin{figure}[!t] \vspace*{-6pt} \centering \includegraphics[width=2.5in]{fig3.pdf} \caption{Comparison of classification accuracy between purely real data (blue line) and real data combined with synthetic data (red line).} \label{fig:4} \end{figure} \section{Conclusion} In this paper, we have demonstrated that synthetic data can improve the accuracy of fingerprint-based localization in a deep learning context, where the data collection process is time-consuming and costly. In particular, we have proposed the use of a specialized GAN implementation in order to generate synthetic data to provide high-accuracy localization. Experimental results indicate that the proposed method for classification using only 10\% of real data combined with generated synthetic data can come very close to the accuracy of a similar system using 100\% real labeled data. This reduces the expense of data collection significantly, and encourages us to apply the concept to fields other than fingerprint-based localization. \normalsize \bibliographystyle{IEEEtran}
\section{Introduction} The locality of images (\ie, a pixel is more related to its neighbors than the distant pixels) makes Convolutional Neural Network (ConvNet) successful in image recognition, as a conv layer only processes a local neighborhood. In this paper, we refer to this inductive bias as the \textit{local prior}. On top of that, we also desire the ability to capture the long-range dependencies, which is referred to as the \textit{global capacity} in this paper. Traditional ConvNets model the long-range dependencies by the large receptive fields formed by deep stacks of conv layers \cite{wang2018non}. However, repeating local operations is computationally inefficient and may cause optimization difficulties. Some prior works enhance the global capacity with self-attention-based modules \cite{wang2018non,dosovitskiy2020image,vaswani2017attention}, which have no local prior. For example, ViT \cite{dosovitskiy2020image} is a pure-Transformer model without convolution, which feeds images into the Transformer as a sequence. Due to the lack of the local prior as an important inductive bias, ViT needs an enormous amount of training data ($3\times10^8$ images in JFT-300M) to converge. On the other hand, some images have an intrinsic positional prior, which cannot be effectively utilized by a conv layer because it shares parameters among different positions. For example, when someone tries to unlock a cellphone via face recognition, the photo of the face is very likely to be centered and aligned so that the eyes appear near the top and the nose appears in the middle. We refer to the ability to utilize such a positional prior as the \textit{positional perception}. This paper revisits fully-connected (FC) layers to provide traditional ConvNet with global capacity and positional perception. We directly use an FC as the transformation between feature maps to replace conv in some cases. By flattening a feature map, feeding it through FC, and reshaping back, we can enjoy the positional perception (because its parameters are position-related) and global capacity (because every output point is related to every input point). Such an operation is efficient in terms of both the actual speed and theoretical FLOPs, as shown in Table. \ref{table-comparisons}. For the application scenarios where the primary concerns are the accuracy and throughput but not the number of parameters, one may prefer FC-based models to traditional ConvNets. For example, the GPU inference servers usually have tens of GBs of memory, so that the memory occupied by the parameters is minor compared to that consumed by the computations and internal feature maps. \begin{figure*} \begin{center} \includegraphics[width=\linewidth]{arch.pdf} \vspace{-0.25in} \caption{Sketch of a RepMLP. Here $N,C,H,W$ are the batch size, number of input channels, height and width, $h,w,g,p,O$ are the desired partition height and width, number of groups, padding, and output channels, respectively. The input feature map is split into a set of partitions, and the Global Perceptron adds the correlations among partitions onto each partition. Then the Local Perceptron captures the local patterns with several conv layers, and the Partition Perceptron models the long-range dependencies. This sketch assumes $N=C=1$, $H=W$, $\frac{H}{h}=\frac{W}{w}=2$ (\ie, a channel is split into four partitions) for better readability. We assume $h,w>7$ so that the Local Perceptron has conv branches of kernel sizes $1,3,5,7$.
The shapes of parameter tensors are shown alongside FC and conv layers. Via structural re-parameterization, the training-time block with conv and BN layers is equivalently converted into a three-FC block, which is saved and used for inference.} \label{fig-arch} \vspace{-0.25in} \end{center} \end{figure*} However, an FC has no local prior because the spatial information is lost. In this paper, we propose to incorporate local prior into FC with a \textit{structural re-parameterization} technique. Specifically, we construct conv and batch normalization (BN) \cite{ioffe2015batch} layers parallel to the FC during training, then merge the trained parameters into the FC to reduce the number of parameters and latency for inference. Based on that, we propose a re-parameterized multi-layer perceptron (RepMLP). As shown in Fig. \ref{fig-arch}, the training-time RepMLP has FC, conv, and BN layers but can be equivalently converted into an inference-time block with only three FC layers. The meaning of structural re-parameterization is that the training-time model has a set of parameters while the inference-time model has another set, and we \textit{parameterize the latter with the parameters transformed from the former}. Note that we do not derive the parameters before each inference. Instead, we convert it \textit{once for all}, and then the training-time model can be discarded. Compared to conv, RepMLP runs faster under the same number of parameters and has global capacity and positional perception. Compared to a self-attention module \cite{wang2018non,dosovitskiy2020image}, it is simpler and can utilize the locality of images. As shown in our experiments (Table. \ref{table-comparisons}, \ref{table-face}, \ref{table-seg}), RepMLP outperforms the traditional ConvNets in a variety of vision tasks, including \textbf{1)} general classification (ImageNet \cite{deng2009imagenet}), \textbf{2)} task with positional prior (face recognition) and \textbf{3)} task with translation invariance (semantic segmentation). Our contributions are summarized as follows. \begin{itemize}[noitemsep,nolistsep,topsep=0pt,parsep=0pt,partopsep=0pt] \item We propose to utilize the global capacity and positional perception of FC and equip it with local prior for image recognition. \item We propose a simple, platform-agnostic and differentiable algorithm to merge the parallel conv and BN into FC for the local prior without any inference-time costs. \item We propose RepMLP, an efficient building block, and show its effectiveness on multiple vision tasks. \end{itemize} \section{Related Work} \subsection{Designs for Global Capacity} Non-local Network \cite{wang2018non} proposed to model the long-range dependencies via the self-attention mechanism. For each query position, the non-local module first computes the pairwise relations between the query position and all positions to form an attention map and then aggregates the features of all the positions by a weighted sum with the weights defined by the attention map. Then the aggregated features are added to the features of each query position. GCNet \cite{cao2019gcnet} created a simplified network based on a query-independent formulation, which maintains the accuracy of Non-local Network with less computation. The input to a GC block goes through a global attention pooling, feature transform (a $1\times1$ conv), and feature aggregation. Compared to these works, RepMLP is simpler as it uses no self-attention and contains only three FC layers. As will be shown in Table. 
\ref{table-comparisons}, RepMLP improves the performance of ResNet-50 more than the Non-local module and the GC block. \subsection{Structural Re-parameterization} In this paper, structural re-parameterization refers to constructing the conv and BN layers parallel to an FC for training and then merging the parameters into the FC for inference. The following two prior works can also be categorized as structural re-parameterization. Asymmetric Convolution Block (ACB) \cite{ding2019acnet} is a replacement for regular conv layers, which uses horizontal (\eg, $1\times3$) and vertical ($3\times1$) conv to strengthen the ``skeleton'' of a square ($3\times3$) conv. Reasonable performance improvements are reported on several ConvNet benchmarks. RepVGG \cite{ding2021repvgg} is a VGG-like architecture, as its body uses only $3\times3$ conv and ReLU for inference. Such an inference-time architecture is converted from a training-time architecture with identity and $1\times1$ branches. RepMLP is more related to ACB since they are both neural network building blocks, but our contributions are not about making convolutions stronger but about \textit{making MLP powerful} for image recognition as a replacement for regular conv. Besides, the training-time convolutions inside RepMLP may be enhanced by ACB, RepVGG block, or other forms of convolution for further improvements. \section{RepMLP} A training-time RepMLP is composed of three parts, termed the Global Perceptron, Partition Perceptron and Local Perceptron (Fig. \ref{fig-arch}). In this section, we introduce our formulation, describe every component, and show how to convert a training-time RepMLP into three FC layers for inference, where the key is a simple, platform-agnostic and differentiable method for merging a conv into an FC. \subsection{Formulation} In this paper, a feature map is denoted by a tensor $\mathrm{M}\in\mathbb{R}^{N\times C\times H\times W}$, where $N$ is the batch size, $C$ is the number of channels, $H$ and $W$ are the height and width, respectively. We use $\mathrm{F}$ and $\mathrm{W}$ for the kernel of conv and FC, respectively. For simplicity and ease of re-implementation, we use the same data format as PyTorch \cite{paszke2019pytorch} and formulate the transformations in a pseudo-code style. For example, the data flow through a $K\times K$ conv is formulated as \begin{equation} \mathrm{M}^{(\text{out})} = \text{CONV}(\mathrm{M}^{(\text{in})}, \mathrm{F}, p) \,, \end{equation} where $\mathrm{M}^{(\text{out})}\in\mathbb{R}^{N\times O\times H^\prime\times W^\prime}$ is the output feature map, $O$ is the number of output channels, $p$ is the number of pixels to pad, $\mathrm{F}\in\mathbb{R}^{O\times C\times K\times K}$ is the conv kernel (we temporarily assume the conv is dense, \ie, the number of groups is~1). From now on, we assume $H^\prime=H, W^\prime=W$ for simplicity (\ie, the stride is 1 and $p=\lfloor \frac{K}{2} \rfloor$). For an FC, let $P$ and $Q$ be the input and output dimensions, $\mathrm{V}^{(\text{in})}\in\mathbb{R}^{N\times P}$ and $\mathrm{V}^{(\text{out})}\in\mathbb{R}^{N\times Q}$ be the input and output, respectively, the kernel is $\mathrm{W}\in\mathbb{R}^{Q\times P}$ and the matrix multiplication (MMUL) is formulated as \begin{equation}\label{eq-formulation-v} \mathrm{V}^{(\text{out})} = \text{MMUL}(\mathrm{V}^{(\text{in})}, \mathrm{W})=\mathrm{V}^{(\text{in})}\cdot\mathrm{W}^\intercal \,. \end{equation} We now focus on an FC that takes $\mathrm{M}^{(\text{in})}$ as input and outputs $\mathrm{M}^{(\text{out})}$.
We assume the FC does not change the resolution, \ie, $H^\prime=H, W^\prime=W$. We use $\text{RS}$ (short for ``reshape'') as the function that only changes the shape specification of tensors but not the order of data in memory, which is \textit{cost-free}. The input is first flattened into $N$ vectors of length $CHW$, which is $\mathrm{V}^{(\text{in})}=\text{RS}(\mathrm{M}^{(\text{in})}, (N,CHW))$, multiplied by the kernel $\mathrm{W}(OHW, CHW)$, then the output $\mathrm{V}^{(\text{out})}(N, OHW)$ is reshaped back into $\mathrm{M}^{(\text{out})}(N,O,H,W)$. For better readability, we omit the RS if there is no ambiguity, \begin{equation} \mathrm{M}^{(\text{out})}=\text{MMUL}(\mathrm{M}^{(\text{in})},\mathrm{W}) \,. \end{equation} Such an FC cannot take advantage of the locality of images as it computes each output point according to every input point, unaware of the positional information. \subsection{Components of RepMLP} We do not use FC in the above-mentioned manner because of not only the lack of local prior but also the huge number of parameters, which is $COH^2W^2$. With the common settings, \eg, $H=W=28,C=O=128$ on ImageNet, this single FC would have 10G parameters, which is clearly unacceptable. To reduce the parameters, we propose Global Perceptron and Partition Perceptron to model the inter- and intra-partition dependencies separately. \textbf{Global Perceptron} splits up the feature map so that different partitions can share parameters. For example, an $(N,C,14,14)$ input can be split into $(4N,C,7,7)$, and we refer to every $7\times7$ block as a \textit{partition}. We use an efficient implementation for such splitting with a single operation of memory re-arrangement. Let $h$ and $w$ be the desired height and width of every partition (we assume $H,W$ are divisible by $h,w$ respectively; otherwise we can simply pad the input). The input $\mathrm{M}\in\mathbb{R}^{N\times C\times H\times W}$ is first reshaped into $(N, C, \frac{H}{h}, h, \frac{W}{w}, w)$. Note that this operation is cost-free as it does not move data in memory. Then we re-arrange the order of axes as $(N,\frac{H}{h}, \frac{W}{w}, C, h, w)$, which moves the data in memory efficiently. For example, it requires only one function call (\textit{permute}) in PyTorch. Then the $(N,\frac{H}{h}, \frac{W}{w}, C, h, w)$ tensor is reshaped (which is cost-free again) into $(\frac{NHW}{hw}, C, h, w)$ (noted as a \textit{partition map} in Fig. \ref{fig-arch}). In this way, the number of parameters required is reduced from $COH^2W^2$ to $COh^2w^2$. However, splitting breaks the correlations among different partitions of the same channel. In other words, the model will view the partitions separately, totally unaware that they were positioned side by side. To add correlations onto each partition, Global Perceptron \textbf{1)} uses average pooling to obtain a pixel for each partition, \textbf{2)} feeds it through BN and a two-layer MLP, then \textbf{3)} reshapes and adds it onto the partition map. This addition can be efficiently implemented with automatic broadcasting (\ie, implicitly replicating $(\frac{NHW}{hw},C,1,1)$ into $(\frac{NHW}{hw},C,h,w)$) so that every pixel is related to the other partitions. Then the partition map is fed into Partition Perceptron and Local Perceptron. Note that if $H=h,W=w$, we directly feed the input feature map into Partition Perceptron and Local Perceptron without splitting, hence there will be no Global Perceptron.
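The splitting described above amounts to the following minimal PyTorch sketch (the function name is ours; note that in PyTorch the actual memory movement happens when the permuted view is reshaped):
\begin{verbatim}
import torch

# (N, C, H, W) -> (N*H*W/(h*w), C, h, w) partition map, as described above.
def split_partitions(M, h, w):
    N, C, H, W = M.shape
    M = M.reshape(N, C, H // h, h, W // w, w)  # cost-free reshape
    M = M.permute(0, 2, 4, 1, 3, 5)            # axes as (N, H/h, W/w, C, h, w)
    return M.reshape(-1, C, h, w)              # the partition map

x = torch.randn(1, 8, 14, 14)
print(split_partitions(x, 7, 7).shape)         # torch.Size([4, 8, 7, 7])
\end{verbatim}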
\textbf{Partition Perceptron} has an FC and a BN layer, which take the partition map as input. The output $(\frac{NHW}{hw}, O, h, w)$ is reshaped, re-arranged and reshaped in the inverse order as before into $(N, O, H, W)$. We further reduce the parameters of FC3 inspired by groupwise conv \cite{chollet2017xception,xie2017aggregated}. With $g$ as the number of groups, we formulate the groupwise conv as \begin{equation} \mathrm{M}^{(\text{out})} = \text{gCONV}(\mathrm{M}^{(\text{in})}, \mathrm{F}, g, p) \,, \mathrm{F}\in\mathbb{R}^{O\times \frac{C}{g} \times K\times K} \,. \end{equation} Similarly, the kernel of a \textit{groupwise FC} is $\mathrm{W}\in\mathbb{R}^{Q\times \frac{P}{g}}$, which has $g\times$ fewer parameters. Though groupwise FC is not directly supported by some computing frameworks like PyTorch, it can be alternatively implemented by a groupwise $1\times1$ conv. The implementation is composed of three steps: \textbf{1)} reshaping $\mathrm{V}^{(\text{in})}$ as a ``feature map'' with a spatial size of $1\times1$; \textbf{2)} performing $1\times1$ conv with $g$ groups; \textbf{3)} reshaping the output ``feature map'' into $\mathrm{V}^{(\text{out})}$. We formulate the groupwise matrix multiplication (gMMUL) as \begin{equation} \begin{aligned} &\mathrm{M}^\prime=\text{RS}(\mathrm{V}^{(\text{in})}, (N, P, 1, 1)),\quad \mathrm{F}^\prime=\text{RS}(\mathrm{W}, (Q, \frac{P}{g}, 1, 1)) \,, \\ &\text{gMMUL}(\mathrm{V}^{(\text{in})}, \mathrm{W}, g) = \text{RS}(\text{gCONV}(\mathrm{M}^\prime, \mathrm{F}^\prime, g, 0), (N, Q)) \,. \end{aligned} \end{equation} \textbf{Local Perceptron} feeds the partition map through several conv layers. A BN follows every conv, as inspired by \cite{ding2019acnet,ding2021repvgg}. Fig. \ref{fig-arch} shows an example with $h,w>7$ and $K=1,3,5,7$. Theoretically, the only constraint on the kernel size $K$ is $K\leq h,w$ (because it does not make sense to use kernels larger than the resolution), but we only use odd kernel sizes as is common practice in ConvNets. We use $K\times K$ just for the simplicity of notation, and a non-square conv (\eg, $1\times3$ or $3\times5$) also works. The padding of the conv should be configured to maintain the resolution (\eg, $p=0,1,2,3$ for $K=1,3,5,7$, respectively), and the number of groups $g$ should be the same as in the Partition Perceptron. The outputs of all the conv branches and the Partition Perceptron are added up as the final output. \subsection{A Simple, Platform-agnostic, Differentiable Algorithm for Merging Conv into FC} Before converting a RepMLP into three FC layers, we first show how to merge a conv into an FC. With the FC kernel $\mathrm{W}^{(1)}(Ohw,Chw)$, conv kernel $\mathrm{F}(O,C,K,K)$ ($K\leq h,w$) and padding $p$, we desire to construct $\mathrm{W}^\prime$ so that \begin{equation} \begin{aligned} &\text{MMUL}(\mathrm{M}^{(\text{in})},\mathrm{W}^\prime) \\ &=\text{MMUL}(\mathrm{M}^{(\text{in})},\mathrm{W}^{(1)}) + \text{CONV}(\mathrm{M}^{(\text{in})},\mathrm{F},p) \,.
\end{aligned} \end{equation} We note that for any kernel $\mathrm{W}^{(2)}$ of the same shape as $\mathrm{W}^{(1)}$, the additivity of MMUL ensures that \begin{equation} \begin{aligned} &\text{MMUL}(\mathrm{M}^{(\text{in})}, \mathrm{W}^{(1)}) + \text{MMUL}(\mathrm{M}^{(\text{in})}, \mathrm{W}^{(2)}) \\ &= \text{MMUL}(\mathrm{M}^{(\text{in})}, \mathrm{W}^{(1)} + \mathrm{W}^{(2)}) \,, \end{aligned} \end{equation} so we can merge $\mathrm{F}$ into $\mathrm{W}^{(1)}$ as long as we manage to construct $\mathrm{W}^{(\mathrm{F},p)}$ of the same shape as $\mathrm{W}^{(1)}$ which satisfies \begin{equation} \text{MMUL}(\mathrm{M}^{(\text{in})}, \mathrm{W}^{(\mathrm{F},p)}) = \text{CONV}(\mathrm{M}^{(\text{in})},\mathrm{F},p) \,. \end{equation} Obviously, $\mathrm{W}^{(\mathrm{F},p)}$ must exist, since a conv can be viewed as a sparse FC that shares parameters among spatial positions, which is exactly the source of its translation invariance; but it is not obvious how to construct it with given $\mathrm{F}$ and $p$. As modern computing platforms use different algorithms of convolution (\eg, im2col-\cite{im2col}, Winograd- \cite{winograd}, FFT-\cite{fft-conv}, MEC-\cite{cho2017mec}, and sliding-window-based) and the memory allocation of data and implementations of padding may be different, a means for constructing the matrix on a specific platform may not work on another platform. In this paper, we propose a simple and \textit{platform-agnostic} solution. As discussed above, for \textit{any} input $\mathrm{M}^{(\text{in})}$ and conv kernel $\mathrm{F}$, padding $p$, there exists an FC kernel $\mathrm{W}^{(\mathrm{F},p)}$ such that \begin{equation} \mathrm{M}^{(\text{out})} = \text{CONV}(\mathrm{M}^{(\text{in})}, \mathrm{F}, p) = \text{MMUL}(\mathrm{M}^{(\text{in})}, \mathrm{W}^{(\mathrm{F},p)}) \,. \end{equation} With the formulation used before (Eq. \ref{eq-formulation-v}), we have \begin{equation}\label{eq-middle} \mathrm{V}^{(\text{out})} = \mathrm{V}^{(\text{in})}\cdot\mathrm{W}^{(\mathrm{F},p)\intercal} \,. \end{equation} We insert an identity matrix $\mathrm{I}$ $(Chw, Chw)$ and use the associative law \begin{equation} \mathrm{V}^{(\text{out})} = \mathrm{V}^{(\text{in})}\cdot (\mathrm{I} \cdot \mathrm{W}^{(\mathrm{F},p)\intercal}) \,. \end{equation} We note that because $\mathrm{W}^{(\mathrm{F},p)}$ is constructed with $\mathrm{F}$, $\mathrm{I} \cdot \mathrm{W}^{(\mathrm{F},p)\intercal}$ is a convolution with $\mathrm{F}$ on a feature map $\mathrm{M}^{(\mathrm{I})}$ which is reshaped from $\mathrm{I}$. With explicit RS, we have \begin{equation} \mathrm{M}^{(\mathrm{I})} = \text{RS}(\mathrm{I}, (Chw, C, h, w)) \,, \end{equation} \begin{equation}\label{eq-second-last} \mathrm{I}\cdot\mathrm{W}^{(\mathrm{F},p)\intercal} = \text{CONV}(\mathrm{M}^{(\mathrm{I})}, \mathrm{F}, p) \,, \end{equation} \begin{equation}\label{eq-last} \mathrm{V}^{(\text{out})} = \mathrm{V}^{(\text{in})} \cdot \text{RS}(\mathrm{I}\cdot\mathrm{W}^{(\mathrm{F},p)\intercal}, (Chw, Ohw)) \,. \end{equation} Comparing Eq. \ref{eq-middle} with Eq. \ref{eq-second-last}, \ref{eq-last}, we have \begin{equation}\label{eq-final} \mathrm{W}^{(\mathrm{F},p)} = \text{RS}(\text{CONV}(\mathrm{M}^{(\mathrm{I})}, \mathrm{F}, p), (Chw, Ohw))^\intercal \,. \end{equation} This is exactly the expression we desire for constructing $\mathrm{W}^{(\mathrm{F},p)}$ from $\mathrm{F}$ and $p$. In short, the equivalent FC kernel of a conv kernel is the result of a convolution on a properly reshaped identity matrix.
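The construction of Eq. \ref{eq-final} is short enough to state in code. The following is a minimal PyTorch sketch for the dense ($g=1$) case, with a numerical sanity check; the function name and toy shapes are ours:
\begin{verbatim}
import torch
import torch.nn.functional as F

# Eq. (eq-final): the equivalent FC kernel of a conv kernel is the result
# of convolving an identity matrix reshaped as (C*h*w, C, h, w).
def conv_to_fc(conv_kernel, C, h, w, p):
    I = torch.eye(C * h * w).reshape(C * h * w, C, h, w)  # M^(I)
    out = F.conv2d(I, conv_kernel, padding=p)             # (C*h*w, O, h, w)
    return out.reshape(C * h * w, -1).t()                 # W^(F,p): (O*h*w, C*h*w)

# Sanity check: the constructed FC reproduces the conv exactly.
C, O, h, w, K = 2, 3, 7, 7, 3
kernel = torch.randn(O, C, K, K)
x = torch.randn(4, C, h, w)
y_conv = F.conv2d(x, kernel, padding=K // 2)
y_fc = x.reshape(4, -1) @ conv_to_fc(kernel, C, h, w, K // 2).t()
print(torch.allclose(y_conv.reshape(4, -1), y_fc, atol=1e-4))  # True
\end{verbatim}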
Better still, the conversion is efficient and differentiable, so one may derive the FC kernel during training and use it in the objective function (\eg, for penalty-based pruning \cite{han2015learning,molchanov2016pruning}). The expression and code for the groupwise case are derived in a similar way and provided in the supplementary material. \subsection{Converting RepMLP into Three FC Layers} To use the theory presented above, we need to first eliminate the BN layers by equivalently fusing them into the preceding conv layers and FC3. Let $\mathrm{F}\in\mathbb{R}^{O\times \frac{C}{g}\times K\times K}$ be the conv kernel and $\vect{\mu},\vect{\sigma},\vect{\gamma},\vect{\beta}\in\mathbb{R}^{O}$ be the accumulated mean, standard deviation and learned scaling factor and bias of the following BN; we construct the kernel $\mathrm{F}^\prime$ and bias $\mathbf{b}^\prime$ as \begin{equation}\label{eq-fuse-bn} \mathrm{F}^\prime_{i,:,:,:} = \frac{\vect{\gamma}_i}{\vect{\sigma}_i}\mathrm{F}_{i,:,:,:} \,,\quad \mathbf{b}^\prime_i = -\frac{\vect{\mu}_i \vect{\gamma}_i}{\vect{\sigma}_i} + \vect{\beta}_i \,. \end{equation} Then it is easy to verify the equivalence: \begin{equation} \begin{aligned} &\frac{\vect{\gamma}_i}{\vect{\sigma}_i}(\text{CONV}(\mathrm{M}, \mathrm{F}, p)_{:,i,:,:} - \vect{\mu}_i) + \vect{\beta}_i \\ &= \text{CONV}(\mathrm{M}, \mathrm{F}^\prime, p)_{:,i,:,:} + \mathbf{b}^\prime_i \,, \forall 1\leq i \leq O \,, \end{aligned} \end{equation} where the left side is the original computation flow of a conv-BN, and the right is the constructed conv with bias. The 1D BN and FC3 of the Partition Perceptron are fused in a similar way into $\hat{\mathrm{W}}\in\mathbb{R}^{Ohw\times\frac{Chw}{g}}$, $\hat{\mathbf{b}}\in\mathbb{R}^{Ohw}$. Then we convert every conv via Eq. \ref{eq-final} and add the resultant matrix onto $\hat{\mathrm{W}}$. The biases of the conv branches are simply replicated $hw$ times (because all the points on the same channel share a bias value) and added onto $\hat{\mathbf{b}}$. Finally, we obtain a single FC kernel and a single bias vector, which will be used to parameterize the inference-time FC3. The BN in the Global Perceptron is also removed because the removal is equivalent to applying an affine transformation before FC1, which can be absorbed by FC1, as two sequential MMULs can be merged into one. The formulas and code are provided in the supplementary material. \subsection{RepMLP-ResNet} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{GLFP.pdf} \vspace{-0.20in} \caption{Illustration of GLFP Module (courtesy of \cite{xiapatent}), which is relevant to our RepMLP Bottleneck designed for ResNets (Fig. \ref{fig-repmlp-resnet}).} \label{fig-GLFP} \vspace{-0.3in} \end{center} \end{figure} The design of RepMLP and the methodology of re-parameterizing conv into FC are generic and hence may be used in numerous models, including traditional CNNs and the concurrently proposed all-MLP models, \eg, MLP-Mixer \cite{tolstikhin2021mlp}, ResMLP \cite{touvron2021resmlp}, gMLP \cite{liu2021pay}, AS-MLP \cite{lian2021mlp}, \etc. In this paper, we use RepMLP in ResNet for most of our experiments because this work was finished before the publication of the above-mentioned all-MLP models. The application of RepMLP to the all-MLP models is scheduled as our future work. In order to use RepMLP in ResNet, we follow the bottleneck \cite{he2016deep} design principle of ResNet-50 to reduce the channels by $4\times$ via $1\times1$ conv.
Moreover, we further perform $r\times$ channel reduction before RepMLP and $r\times$ channel expansion afterwards via $3\times3$ conv. The whole block is termed the RepMLP Bottleneck (Fig. \ref{fig-repmlp-resnet}). For a specific stage, we replace all the stride-1 bottlenecks with RepMLP Bottlenecks and keep the original stride-2 (\ie, the first) bottleneck. The design of the RepMLP Bottleneck is relevant to the GLFP Module \cite{xiapatent}, which uses a bottleneck structure with $1\times1$, $3\times3$ conv and FC for human face recognition, but the differences are significant. \textbf{1)} GLFP directly flattens the input feature maps as vectors and then feeds them into the FC layer, which is novel and insightful but may be inefficient on tasks with large input resolution such as ImageNet classification and semantic segmentation. In contrast, RepMLP partitions the input feature maps and uses the Global Perceptron to add the global information. \textbf{2)} GLFP uses a $3\times3$ conv branch parallel to the $1\times1$-FC-$3\times3$ branch to capture the local patterns. Unlike the Local Perceptron of RepMLP, which can be merged into the FC for inference, the conv branch of GLFP is essential for both training and inference. \textbf{3)} There are also some differences in topology (\eg, addition \vs concatenation). It should be noted again that the core contribution of this paper is not the specific way of inserting RepMLP into ResNet but the methodology of re-parameterizing conv into FC and the three components of RepMLP. \section{Experiments} \subsection{Pure MLP and Ablation Studies} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{puremlp.pdf} \vspace{-0.25in} \caption{The pure MLP model and the convolutional counterpart. The stage1 and stage3 are displayed in detail. Taking stage1 for example, $32\times32$ is the resolution, $C=16$ is the number of output channels (except the last layer). Left: FC(32,16) is the kernel size, suggesting that this FC (equivalent to a $1\times1$ conv) projects 16 channels into 32 channels; all the RepMLPs are configured with $g=2$, $h=w=8$. Right: the convolutional counterpart uses $3\times3$ conv. A BN follows every conv and a ReLU follows every RepMLP or conv-BN sequence.} \label{fig-puremlp} \vspace{-0.25in} \end{center} \end{figure} We first verify the effectiveness of RepMLP by testing a pure MLP model on CIFAR-10. More precisely, since an FC is equivalent to a $1\times1$ conv, by ``pure MLP'' we mean no usage of conv kernels bigger than $1\times1$. We interleave RepMLP and regular FC ($1\times1$ conv) layers to construct three stages and downsample by max pooling, as shown in Fig.~\ref{fig-puremlp}, and construct a ConvNet counterpart for comparison by replacing the RepMLPs with $3\times3$ conv. For comparable FLOPs, the channels of the three stages are 16,32,64 for the pure MLP and 32,64,128 for the ConvNet, so the latter is named the Wide ConvNet. We adopt the standard data augmentation \cite{he2016deep}: padding to $40\times40$, random cropping and left-right flipping. The models are trained with a batch size of 128 and a cosine learning rate annealing from 0.2 to 0 in 100 epochs. As shown in Table. \ref{table-puremlp}, the pure MLP model reaches 91.11\% accuracy with only 52.8M FLOPs. Not surprisingly, the pure MLP model does not outperform the Wide ConvNet, motivating us to combine RepMLP with the traditional ConvNet. Then we conduct a series of ablation studies. \textbf{A)} We also report the FLOPs of the MLP before the conversion, which still contains conv and BN layers.
The FLOPs are much higher even though the extra parameters are marginal, which shows the significance of structural re-parameterization. \textbf{B)} ``w/o Local'' is a variant with no Local Perceptron, and its accuracy is 8.5\% lower, which shows the significance of the local prior. \textbf{C)} ``w/o Global'' removes FC1 and FC2 and directly feeds the partition map into the Local Perceptron and Partition Perceptron. \textbf{D)} ``FC3 as conv9'' replaces FC3 with a conv ($K=9$ and $p=4$, so that its receptive field is larger than that of FC3) followed by BN, to compare the representational capacity of FC3 to that of a regular conv. Though the comparison is biased towards the conv because its receptive field is larger, the accuracy is 3.5\% lower, which validates that an FC is more powerful than a conv, since a conv is a degraded FC. \textbf{E)} ``RepMLP as conv9'' directly replaces the RepMLP with a $9\times9$ conv and BN. Compared to D, its accuracy is lower as it has no Global Perceptrons. \setlength{\tabcolsep}{4pt} \begin{table} \caption{Top-1 accuracy, FLOPs and parameters of pure MLP and ConvNet on CIFAR-10. } \label{table-puremlp} \vspace{-0.2in} \begin{center} \small \begin{tabular}{lcccccccc} \hline Model & Acc & FLOPs (M) & Params (M) \\ \hline Pure MLP & 91.11 & 52.8 & 22.41 \\ A) before conversion & 91.11 & 118.9 & 22.91 \\ B) w/o Local & 82.64 & 52.8 & 22.41 \\ C) w/o Global & 89.61 & 52.5 & 22.08 \\ D) FC3 as conv9 & 87.64 & 66.2 & 0.81 \\ E) RepMLP as conv9 & 87.29 & 65.8 & 0.48 \\ \hline Wide ConvNet & 91.99 & 65.1 & 0.50 \\ \hline \end{tabular} \end{center} \vspace{-0.2in} \end{table} \setlength{\tabcolsep}{1.4pt} \subsection{RepMLP-ResNet for ImageNet Classification} We take ResNet-50 \cite{he2016deep} (the torchvision version \cite{torch-model}) as the base architecture to evaluate RepMLP as a building block in a traditional ConvNet. For a fair comparison, all the models are trained with identical settings in 100 epochs: global batch size of 256 on 8 GPUs, weight decay of $10^{-4}$, momentum of 0.9, and cosine learning rate annealing from 0.1 to 0. We use mixup \cite{zhang2017mixup} and a data augmentation pipeline of Autoaugment \cite{cubuk2019autoaugment}, random cropping and flipping. All the models are evaluated with a single central crop, and the speed is tested on the same 1080Ti GPU with a batch size of 128 and measured in examples/second. For a fair comparison, the RepMLPs are converted, and all the original conv-BN structures of every model are also converted into conv layers with bias for the speed tests. As a common practice, we refer to the four residual stages of ResNet-50 as c2, c3, c4, c5, respectively. With $224\times224$ input, the output resolutions of the four stages are $56, 28, 14, 7$, and the $3\times3$ conv layers in the four stages have $C=O=64,128,256,512$, respectively. To replace the big $3\times3$ conv layers with RepMLP, we use $h=w=7$ and three conv branches in the Local Perceptron with $K=1,3,5$. We begin by using RepMLP in c4 only and varying the hyper-parameters $r$ and $g$ to test how they influence the accuracy, speed, and number of parameters (Table. \ref{table-c4-gr}). Notably, with an aggressive 8$\times$ reduction (so that the input and output channels of RepMLP are $256/8=32$), RepMLP-Res50 has fewer parameters and runs 10\% faster than ResNet-50. The comparison between the first two rows suggests that the current groupwise $1\times1$ conv implementation is inefficient, as the parameters increase by 59\% but the speed decreases by only 0.7\%.
Further optimizations of the groupwise $1\times1$ conv may make RepMLP more efficient. In the following experiments, we use $r=2$ or 4 and $g=4$ or 8 for a better trade-off. \setlength{\tabcolsep}{4pt} \begin{table} \caption{Results with $224\times224$ input and different $r,g$ in c4 only. The speed is in examples/second.} \label{table-c4-gr} \vspace{-0.25in} \begin{center} \small \begin{tabular}{lccccccc} \hline & $r$ & $g$ & Top-1 acc & Speed & Params (M) \\ \hline RepMLP-Res50 &4 & 8 & 78.13 & 671 & 30.87 \\ RepMLP-Res50 &4 & 2 & 78.22 & 666 & 49.31 \\ RepMLP-Res50 &8 & 8 & 77.79 & 759 & 25.02 \\ RepMLP-Res50 &2 & 8 & 78.60 & 639 & 52.77 \\ Original Res50 &- & - & 77.19 & 689 & 25.53 \\ \hline \end{tabular} \end{center} \vspace{-0.15in} \end{table} \setlength{\tabcolsep}{1.4pt} \setlength{\tabcolsep}{4pt} \begin{table} \caption{Using RepMLP in different stages of ResNet-50 with $224\times224$ input. The speed is in examples/second.} \label{table-stages} \vspace{-0.25in} \begin{center} \small \begin{tabular}{lccccccc} \hline & \text{c2} & \text{c3} & \text{c4} & \text{c5} & Top-1 acc & Speed & Params (M) \\ \hline RepMLP-Res50&\checkmark & \checkmark & \checkmark & \checkmark & 78.32 & 574 & 74.46 \\ RepMLP-Res50&\checkmark & \checkmark & \checkmark & & 78.60 & 575 & 66.97 \\ RepMLP-Res50& & \checkmark & \checkmark & \checkmark & 78.30 & 632 & 48.35 \\ RepMLP-Res50& & \checkmark & \checkmark & & 78.55 & 636 & 40.87 \\ RepMLP-Res50& & \checkmark & & & 78.09 & 644 & 35.52 \\ RepMLP-Res50& & & \checkmark & & 78.13 & 671 & 30.87 \\ Original Res50& & & & & 77.19 & 689 & 25.53 \\ \hline \end{tabular} \end{center} \vspace{-0.15in} \end{table} \setlength{\tabcolsep}{1.4pt} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{bottleneck.pdf} \vspace{-0.20in} \caption{Sketch of a RepMLP Bottleneck.} \label{fig-repmlp-resnet} \vspace{-0.3in} \end{center} \end{figure} We continue to test RepMLP in different stages. Specifically, we set $g=8$ and $r=2,2,4,4$ for c2,c3,c4,c5, respectively, for reasonable model sizes. Table. \ref{table-stages} shows that replacing the original bottlenecks with RepMLP Bottlenecks causes only a very minor slowdown, and the accuracy is significantly improved. Using RepMLP only on c4 brings only 5M more parameters but 0.94\% higher accuracy, and using RepMLP in c3 and c4 offers the best trade-off. It also suggests that RepMLP should be combined with traditional conv for the best performance, as using it in all four stages delivers lower accuracy than c2+c3+c4 and c3+c4. We use RepMLP in c3+c4 in the following experiments. The comparisons to the larger traditional ConvNets with higher input resolution (Table. \ref{table-comparisons}) further justify the effectiveness of RepMLP and offer some interesting discoveries. When trained and tested with $320\times320$ inputs, we use RepMLP with $h=w=10$, and the Local Perceptron has four branches with $K=1,3,5,7$. We also vary the number of groups to generate three models with different sizes. For example, g8/16 means that $g=8$ for c3 and 16 for c4. As two classic models for modeling the long-range dependencies, we construct the Non-local \cite{wang2018non} and GC \cite{cao2019gcnet} counterparts following the instructions in the original papers, and the models are trained with the identical settings. We also present the well-known EfficientNet \cite{efficientnet} series as a strong baseline trained with the identical settings again. We have the following observations.
\textbf{1)} Compared to the traditional ConvNets with comparable numbers of parameters, the FLOPs of RepMLP-Res50 are much lower and the speed is faster. For example, compared to ResNet-101 with $224\times224$ inputs, RepMLP-Res50 has only 50\% of the FLOPs and 4M fewer parameters, and runs 50\% faster, but their accuracies are the same. With $320\times320$ inputs, RepMLP-Res50 outperforms in accuracy, speed, and FLOPs by a large margin. Additionally, the improvements over ResNet-50 should not be simply attributed to the increased depth, because RepMLP-Res50 is still shallower than ResNet-101. \textbf{2)} Increasing the parameters of RepMLPs causes only a very minor slowdown. From RepMLP-Res50-g8/16 to RepMLP-Res50-g4/8, the parameters increase by 47\%, but the FLOPs increase by only 3.6\% and the speed is lowered by only 2.2\%. This property is particularly useful for high-throughput inference on large-scale servers, where the throughput and accuracy are our major concerns while the model size is not. \textbf{3)} Compared to the Non-local and GC counterparts, the speed of RepMLP-Res50 is almost the same, but the accuracy is around 1\% higher. \textbf{4)} Compared to EfficientNets, which are actually not efficient on GPU, RepMLP-Res50 outperforms in both speed and accuracy. \setlength{\tabcolsep}{4pt} \begin{table*} \caption{Comparisons with traditional ConvNets on ImageNet all trained with the identical settings. The speed is tested on the same 1080Ti with a batch size of 128. The input resolutions of the EfficientNets are different because they are fixed as the structural hyper-parameters.} \label{table-comparisons} \vspace{-0.2in} \begin{center} \small \begin{tabular}{lcccccccc} \hline Model & Input resolution& Top-1 acc& Speed (examples/s) & FLOPs (M) & Params (M) \\ \hline ResNet-50 & 224 & 77.19 & 689 & 4089 & 25.53 \\ ResNet-101 & 224 & 78.55 & 421 & 7801 & 44.49 \\ RepMLP-Res50 & 224 & 78.55 & 636 & 3890 & 40.87 \\ \hline ResNet-50 & 320 & 78.03 & 344 & 8343 & 25.53 \\ ResNet-101 & 320 & 79.40 & 213 & 15919 & 44.49 \\ RepMLP-Res50-g8/16& 320 & 79.76 & 312 & 8057 & 59.22 \\ RepMLP-Res50-g8/8& 320 & 79.84 & 311 & 8108 & 72.02 \\ RepMLP-Res50-g4/8& 320 & 80.07 & 305 & 8354 & 87.38 \\ NL-Res50 & 320 & 78.95 & 316 & 9182 & 27.63 \\ GC-Res50 & 320 & 78.93 & 312 & 8351 & 28.05 \\ \hline EfficientNet-B1 & 240 & 75.76 & 512 & 686 & 7.76 \\ EfficientNet-B2 & 260 & 76.46 & 396 & 993 & 9.07 \\ EfficientNet-B3 & 300 & 78.17 & 228 & 1827 & 12.18 \\ \hline \end{tabular} \end{center} \vspace{-0.25in} \end{table*} \setlength{\tabcolsep}{1.4pt} We visualize the weights of FC3 in Fig. \ref{fig-visualized}, where the sampled output point (6,6) is marked by a dashed square. The original FC3 has no local prior, as the values at the marked point and its neighborhood are no larger than the others. But after merging the Local Perceptron, the resultant FC3 kernel has larger values around the marked point, suggesting that the model focuses more on the neighborhood, which is expected. Besides, the global capacity is not lost, because some points (marked by red rectangles) outside the largest conv kernel ($7\times7$ in this case, marked by a blue square) still have larger values than the points inside. We also present another bottleneck design (RepMLP Light Block) in the Appendix, which uses no $3\times3$ conv but only $1\times1$ for 8$\times$ channel reduction/expansion. Compared to the original ResNet-50, it achieves comparable accuracy (77.14\% \vs 77.19\%) with 30\% lower FLOPs and 55\% faster speed.
\begin{figure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth,page=2]{fc_converted_6_6.pdf}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth,page=1]{fc_converted_6_6.pdf}
\end{subfigure}
\vspace{-0.05in}
\caption{FC weights sampled from the FC3 of the first RepMLP in c3 of RepMLP-Res50-g8/8. The left is the original training-time FC3 and the right is the inference-time FC3 merged with the Local Perceptron. Specifically, we reshape the kernel of FC3 into $\bar{\mathrm{W}}(O,h,w,\frac{C}{g},h,w)$, which is $(64, 10, 10, 8, 10, 10)$, then sample the weights related to the first input channel and the (6,6) point (marked by a dashed square) on the first output channel, which is $\bar{\mathrm{W}}_{0,6,6,0,:,:}$. Then we take the absolute value, normalize by the minimum of the whole matrix, and take the natural logarithm for better readability. A darker point indicates that the FC considers the corresponding position on the input channel more relevant to the output point at (6,6).}
\label{fig-visualized}
\vskip -0.1in
\end{figure}

\subsection{Face Recognition}

Unlike conv, FC is not translation-invariant, which makes RepMLP particularly effective for images with a positional prior, \ie, human faces. The dataset we use for training is MS1M-V2, a large-scale face dataset with 5.8M images of 85k celebrities. It is a semi-automatically refined version of the MS-Celeb-1M dataset \cite{guo2016ms}, which consists of 1M photos of 100k identities and contains many noisy images and wrong ID labels. We use MegaFace \cite{kemelmacher2016megaface} for evaluation, which includes 1M images of 60k identities as the gallery set and 100k images of 530 identities from FaceScrub as the probe set; it has likewise been refined by manual cleaning. We use $96\times96$ inputs for both training and evaluation.

Apart from MobileFaceNet \cite{chen2018mobilefacenets}, a well-known baseline originally designed for low-power devices, we also use a customized ResNet (referred to as FaceResNet in this paper) as a stronger baseline. Compared to a regular ResNet-50, the numbers of blocks in c2,c3,c4,c5 are reduced from 3,4,6,3 to 3,2,2,2, the widths are reduced from 256,512,1024,2048 to 128,256,512,1024, and the channels of the $3\times3$ conv layers are increased from 64,128,256,512 to 128,256,512,1024. In other words, the $1\times1$ conv layers in the residual blocks do not reduce or expand the channels. Because the input resolution is $96\times96$, the spatial sizes of c2,c3,c4,c5 are 24,12,6,3, respectively. For the RepMLP counterpart, we modify FaceResNet by replacing the stride-1 bottlenecks of c2,c3,c4 (\ie, the last two bottlenecks of c2 and the last blocks of c3,c4) with RepMLP Bottlenecks with $h=w=6,r=2,g=4$.

For training, we use a batch size of 512, momentum of 0.9, the AM-Softmax loss \cite{wang2018additive}, and weight decay following \cite{chen2018mobilefacenets}. All the models are trained for 420k iterations with a learning rate beginning at 0.1 and divided by 10 at 252k, 364k and 406k iterations. For evaluation, we report the top-1 accuracy on MegaFace. Table~\ref{table-face} shows that FaceResNet delivers higher accuracy than MobileFaceNet but runs slower, while RepMLP-FaceRes outperforms both in accuracy and speed. Compared to MobileFaceNet, RepMLP-FaceRes shows \textbf{4.91}\% higher accuracy and runs 8\% faster (though it has 2.5$\times$ the FLOPs), making it a clearly better fit for high-power devices.
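For reference, the AM-Softmax loss used above admits a compact implementation. The following NumPy sketch is our own illustration of the standard formulation of \cite{wang2018additive}; the scale $s$ and margin $m$ defaults are commonly used values, not the hyper-parameters tuned for our experiments.

\begin{verbatim}
import numpy as np

def am_softmax_loss(features, weights, labels, s=30.0, m=0.35):
    # features: (B, D) embeddings; weights: (C, D) class centers;
    # labels: (B,) integer class indices.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T                                # (B, C) cosine similarities
    cos[np.arange(len(labels)), labels] -= m     # additive margin on target
    logits = s * cos
    logits -= logits.max(axis=1, keepdims=True)  # stabilize the log-sum-exp
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()
\end{verbatim}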
\setlength{\tabcolsep}{4pt}
\begin{table}
\caption{Results of face recognition on MS1M-V2 and MegaFace. The speed (examples/second) is tested with a batch size of 512 and input 96$\times$96 on the same 1080Ti GPU.}
\label{table-face}
\vspace{-0.2in}
\begin{center} \small
\begin{tabular}{lcccc}
\hline
Model & Acc & Speed & FLOPs (M) & Params (M) \\
\hline
MobileFaceNet & 90.99 & 5002 & 162 & 0.98 \\
FaceResNet & 92.97 & 3811 & 1050 & 40.35 \\
RepMLP-FaceRes & 95.90 & 5425 & 406 & 28.32 \\
\hline
\end{tabular}
\end{center}
\vspace{-0.25in}
\end{table}
\setlength{\tabcolsep}{1.4pt}

\subsection{Semantic Segmentation}

Semantic segmentation is a representative task with translation invariance, as a car may appear on either the left or the right of an image. We verify the generalization performance of ImageNet-pretrained RepMLP-Res50 on Cityscapes \cite{cityscapes}, which contains 5K finely annotated images and 19 categories. We use RepMLP-Res50-g4/8 and the original ResNet-50, both pretrained with $320\times320$ inputs on ImageNet, as the backbones. For better reproducibility, we simply adopt the official implementation and default configurations \cite{official-pspnet} of the PSPNet \cite{pspnet} framework: poly learning rate policy with a base of 0.01 and power of 0.9, weight decay of $10^{-4}$, and a global batch size of 16 on 8 GPUs for 200 epochs. Following PSPNet-50, we use dilated conv in c5 of both models and in c4 of the original ResNet-50. We do not use dilated conv in c4 of RepMLP-Res50-g4/8 because its receptive field is already large. Since the resolution of c3 and c4 becomes $90\times90$, the Global Perceptron would have 81 partitions per channel and hence more parameters in FC1 and FC2. We address this problem by reducing the output dimensions of FC1 and the input dimensions of FC2 by 4$\times$ for c3 and 8$\times$ for c4. FC1 and FC2 are initialized randomly, and all the other parameters are inherited from the ImageNet-pretrained model. Table~\ref{table-seg} shows that the PSPNet with the RepMLP-Res50-g4/8 backbone outperforms the ResNet-50 backbone by 2.21\% in mIoU. Though it has more parameters, its FLOPs are lower and its speed is higher. Note that our PSPNet baseline is lower than the reported PSPNet-50 because the latter was customized for semantic segmentation (two more layers were added before the max pooling) while ours is not.

\setlength{\tabcolsep}{4pt}
\begin{table}
\caption{Semantic segmentation on Cityscapes \cite{cityscapes} tested on the \textit{validation} subset. The speed (examples/second) is tested with a batch size of 16 and input 713$\times$713 on the same 1080Ti GPU.}
\label{table-seg}
\vspace{-0.2in}
\begin{center} \small
\begin{tabular}{lcccc}
\hline
Backbone & mIoU & Speed & FLOPs (M) & Params (M) \\
\hline
RepMLP-Res50 & 76.58 & 10.43 & 342,696 & 175.41 \\
ResNet-50 & 74.27 & 10.22 & 350,004 & 46.56 \\
\hline
\end{tabular}
\end{center}
\vspace{-0.1in}
\end{table}
\setlength{\tabcolsep}{1.4pt}

\section{Conclusion}

An FC has stronger representational capacity than a conv, as the latter can be viewed as a sparse FC with shared parameters. However, an FC has no local prior, which makes it less favored for image recognition. In this paper, we have proposed RepMLP, which utilizes the global capacity and positional perception of FC and incorporates the local prior into FC by re-parameterizing convolutions into it via a simple and platform-agnostic algorithm.
From the theoretical side, \textit{viewing conv as a degraded case of FC} opens up a new perspective, which may deepen our understanding of traditional ConvNets. We also note that RepMLP is designed for application scenarios where the major concerns are inference throughput and accuracy, and where the number of parameters matters less.

\bibliographystyle{ieee_fullname}
{ "timestamp": "2022-03-31T02:34:01", "yymm": "2105", "arxiv_id": "2105.01883", "language": "en", "url": "https://arxiv.org/abs/2105.01883" }
\section{Introduction} Consider the problem of approximately minimizing the maximum of $N$ convex functions: given $f_1, \ldots, f_N$ such that for every $i\in [N]$ the function $f_i: \R^d \to \R$ is convex, Lipschitz and possibly smooth, and a target accuracy $\epsilon$, \begin{equation}\label{eq:problem} \mbox{find a point $x$ such that }~ F_{\max}(x) - \inf_{x\opt \in \R^d} F_{\max}(x\opt) \le \epsilon ~~\mbox{where}~~ F_{\max}(x) \coloneqq \max_{i\in[N]} f_i(x) ~. \end{equation} Problems of this form play significant roles in optimization and machine learning. The maximum of $N$ functions is a canonical example of structured non-smoothness and several works develop methods for exploiting it~\cite{nesterov2005smooth,nemirovski2009robust,shalev2016minimizing,bullins2020highly,carmon2020acceleration}. The special case where the $f_i$'s are linear functions is particularly important for machine learning, since it is equivalent to hard-margin SVM training (with $f_i$ representing the negative margin on the $i$th example)~\cite{vapnik1999overview,clarkson2012sublinear,hazan2011beating}. Going beyond the linear case, \citet{shalev2016minimizing} argue that minimizing the maximum classification loss can have advantageous effects on training speed and generalization in the presence of rare informative examples. Moreover, minimizing the worst-case objective is the basic paradigm of robust optimization~\cite{bental2009robust,namkoong2016stochastic}. In particular, since $F_{\max}(x)=\max_{p\in \Delta^N} \sum_{i\in[N]} p_i f_i(x)$ the problem corresponds to an extreme case of distributionally robust optimization~\cite{bental2013robust} with an uncertainty set that encompasses the entire probability simplex $\Delta^N$. The goal of this paper is to characterize the complexity of this fundamental problem. We are particularly interested in the regime where the number of data points $N$ and the problem dimension $d$ are large compared to the desired level of accuracy $1/\epsilon$, as is common in modern machine learning. Consequently, we focus on dimension-independent first-order methods (i.e., methods which only rely on access to $f_i(x)$ and a (sub)gradient $\nabla f_i(x)$ as opposed to higher-order derivatives), and report complexity in terms of the number of function/gradient evaluations required to solve the problem. \subsection{Related work} To put our new complexity bounds in context, we first review the prior art in solving the problem~\eqref{eq:problem} with first-order methods. For simplicity of presentation, throughout the introduction we assume each $f_i$ is 1-Lipschitz and that $F_{\max}$ has a global minimizer $x\opt$ with (Euclidean) norm at most 1. The simplest approach to solving the problem~\eqref{eq:problem} is the subgradient method~\cite{nesterov2018lectures}. This method finds an $\epsilon$-accurate solution in $O(\epsilon^{-2})$ iterations, with each step computing a subgradient of $F_{\max}$, which in turn requires evaluation of all $N$ function values and a single gradient. Consequently, the complexity of this method is $O(N\epsilon^{-2})$. 
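As a point of reference for what follows, here is a minimal NumPy sketch (ours, for illustration only) of this baseline; the $O(N)$ work per iteration that motivates the rest of the paper is visible in the maximization over all $N$ components, and the $R/\sqrt{t}$ step size is the standard choice for a 1-Lipschitz objective over a ball of radius $R$.

\begin{verbatim}
import numpy as np

def subgradient_method(fs, grads, x0, T, R=1.0):
    # fs and grads are length-N lists of callables returning f_i(x) and a
    # subgradient of f_i at x; a subgradient of F_max at x is a subgradient
    # of any component attaining the maximum.
    x = x0.copy()
    best, best_val = x0.copy(), max(f(x0) for f in fs)
    for t in range(1, T + 1):
        i = int(np.argmax([f(x) for f in fs]))   # active component: O(N) work
        x = x - (R / np.sqrt(t)) * grads[i](x)
        val = max(f(x) for f in fs)              # track the best iterate
        if val < best_val:
            best, best_val = x.copy(), val
    return best
\end{verbatim}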
We are unaware of prior work obtaining improved complexity without further assumptions.\footnote{The center of gravity method \cite{levin1965cg,newman65cg} yields a query complexity of $O(N d \log(1/\epsilon))$, which is an improvement only for sufficiently small problem dimension $d$.} However, even a weak bound on smoothness helps: if each $f_i$ has $O(1/\epsilon)$-Lipschitz gradient, then it is possible to minimize $F_{\max}$ to accuracy $\epsilon$ with complexity $\Otil{N\epsilon^{-1}}$~\cite{nesterov2005smooth}.\footnote{Throughout the paper, $\Otil{\cdot}$ and $\Omtil{\cdot}$ hide polylogarithmic factors.} This result relies on the so-called ``softmax'' approximation of the maximum,
\begin{equation}\label{eq:softmax}
\Fsm(x) \coloneqq \epsilon' \log\prn*{\sum_{i\in[N]} e^{f_i(x) / \epsilon'}},~~\mbox{where } \epsilon' = \frac{\epsilon}{2\log N}.
\end{equation}
It is straightforward to show that $\abs{\Fsm(x)-F_{\max}(x)} \le \frac{\epsilon}{2}$ for all $x\in\R^d$, and that $\nabla \Fsm$ is $\Otil{1/\epsilon}$-Lipschitz whenever $\nabla f_i$ is $O(1/\epsilon)$-Lipschitz for every $i$. Therefore, Nesterov's accelerated gradient descent~\cite{nesterov2005smooth} finds a minimizer of $\Fsm$ to accuracy $\frac{\epsilon}{2}$ in $\Otil{\sqrt{(1/\epsilon)/\epsilon}} = \Otil{\epsilon^{-1}}$ iterations, with each iteration requiring $N$ evaluations of $f_i$ and $\nabla f_i$ to compute $\nabla \Fsm$, yielding the claimed bound. The assumption that $\nabla f_i$ is $O(1/\epsilon)$-Lipschitz is fairly weak; see \Cref{sec:app-discussion-smoothness} for additional discussion.

Given more smoothness, further improvement is possible. \citet[Section 2.3.1]{nesterov2018lectures} shows that it suffices to solve $O(\sqrt{\Lg/\epsilon})$ linearized subproblems of the form $\min_{x\in\R^d} \max_{i\in[N]}\big\{ f_i(y_t) +(\nabla f_i(y_t))^\top (x-y_t) + \frac{\Lg}{2}\norm{x-y_t}^2 \big\}$. This yields a query complexity upper bound of $O(N\sqrt{\Lg/\epsilon})$. Though the complexity of solving each subproblem is not immediately clear, in \Cref{sec:app-discussion-nesterov} we explain how a first-order method \cite{carmon2019variance} solves the subproblem to sufficient precision. Additional schemes for solving~\eqref{eq:problem} in the special case of linear functions (i.e., $\Lg=0$) are discussed in \Cref{sec:app-discussion-linear}.
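For concreteness, $\Fsm$ and its full-batch gradient $\nabla \Fsm(x) = \sum_{i\in[N]} p_i(x) \nabla f_i(x)$, with weights $p_i(x)$ proportional to $e^{f_i(x)/\epsilon'}$, can be evaluated stably via the usual log-sum-exp shift. The NumPy sketch below (ours, for illustration) makes explicit that one exact gradient consumes all $N$ function and gradient evaluations.

\begin{verbatim}
import numpy as np
from scipy.special import logsumexp, softmax

def fsm_and_grad(fvals, jac, eps_prime):
    # fvals: (N,) array of f_i(x); jac: (N, d) array whose rows are
    # nabla f_i(x); eps_prime = eps / (2 log N). logsumexp and softmax
    # shift by the maximum internally, so the exponentials cannot overflow.
    F_sm = eps_prime * logsumexp(fvals / eps_prime)
    p = softmax(fvals / eps_prime)   # softmax weights p_i(x)
    return F_sm, p @ jac             # full batch: all N gradients are used
\end{verbatim}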
A powerful technique for solving optimization problems with a large number $N$ of component functions is to sample components in order to compute cheap unbiased gradient estimates. However, neither $F_{\max}$ nor $\Fsm$ is given as a linear combination of the $f_i$'s. Consequently, it is not clear how to efficiently compute unbiased estimators for their gradients. Several works address this by considering the saddle point problem
\begin{equation*}
\min_{x\in\R^d}\max_{p\in\Delta^N} F_{\mathrm{pd}}(x;p) \coloneqq \sum_{i\in [N]} p_i f_i(x),
\end{equation*}
which is equivalent to minimizing $F_{\max}$. One can obtain unbiased estimators for $\nabla F_{\mathrm{pd}}(x;p)$ and apply stochastic mirror descent to find its saddle point~\cite{nemirovski2009robust,shalev2016minimizing,namkoong2016stochastic}. However, all known estimators for $\nabla_p F_{\mathrm{pd}}$ have a complexity-variance product of $\Omega(N)$. Consequently, the best general guarantees known for such methods amount to $\Otil{N\epsilon^{-2}}$ total complexity.\footnote{Exact-gradient primal-dual methods such as mirror-prox~\cite{nemirovski2004prox} and dual extrapolation~\cite{nesterov2007dual} have complexity guarantees scaling as $\Otil{N\epsilon^{-1}}$ under the stronger smoothness assumption $\Lg = O(1)$ \cite[cf.][Section 5.2.4]{bubeck2015convex}.} \citet{shalev2016minimizing} analyze a stochastic primal-dual method from an online learning perspective. They show that if the online method producing the primal updates admits a mistake bound (as is the case for learning halfspaces), then the complexity of the approach improves to $\Otil{N\epsilon^{-1}}$. We show that adopting a primal-only perspective and iteratively restricting $x$ to a small ball (i.e., ``thinking inside the ball'') allows us to make better use of the scalability of stochastic gradient methods.

\begin{table}[]
\captionsetup{font=small}
\centering
\begin{tabular}{@{}llll@{}}
\toprule
Smoothness & Method & Upper bound & Lower bound \\
\midrule
\multirow{2}{*}{None ($\Lg=\infty$)} & Subgradient method & $N\epsilon^{-2}$ & \multirow{2}{*}{$N\epsilon^{-2/3} + \epsilon^{-2}$} \\
 & Ours & $N\epsilon^{-2/3} + \epsilon^{-8/3}$ & \\
\midrule
\multirow{2}{*}{Weak ($\Lg \approx 1/\epsilon$)} & AGD on softmax & $N\epsilon^{-1}$ & \multirow{2}{*}{$N\epsilon^{-2/3} + \sqrt{N}\epsilon^{-1}$} \\
 & Ours & $N\epsilon^{-2/3} + \sqrt{N}\epsilon^{-1}$ & \\
\midrule
Strong ($\Lg \ll 1/\epsilon$) & AGD on linearization* & $N\sqrt{\Lg\epsilon^{-1}}$ & $N\Lg^{1/3}\epsilon^{-1/3} + \sqrt{N\Lg\epsilon^{-1}}$ \\
\bottomrule
\end{tabular}
\caption{\label{table:summary} The complexity of solving the problem~\eqref{eq:problem} in terms of the number of $(i,x)$ queries for computing $f_i(x)$ and $\nabla f_i(x)$. The table assumes each $f_i$ is convex, 1-Lipschitz and (optionally) has $\Lg$-Lipschitz gradient, and that $F_{\max}$ has a minimizer with norm at most 1. The stated rates omit constant and (in the upper bounds) polylogarithmic factors. *For this algorithm only, the computational complexity is not simply $d$ times the query complexity; see~\Cref{sec:app-discussion-nesterov}.}
\end{table}

\subsection{Our contributions}

To motivate our developments, note that the general complexity guarantees described above all scale linearly with the number of functions $N$. On the one hand, this is to be expected, as even evaluating the maximum of $N$ numbers requires querying all of them. On the other hand, linear scaling in $N$ stands in sharp contrast to guarantees for minimizing the \emph{average} of $N$ functions, which are typically sublinear in $N$. Since good scaling with dataset size is crucial in machine learning, we wish to precisely characterize the number of dataset passes (that is, the coefficient of $N$) in the complexity of minimizing $F_{\max}$.

Towards that end, we prove an oracle complexity lower bound: any algorithm that operates by repeatedly querying a pair $(i,x)$ and observing $f_i(x), \nabla f_i(x)$ must make $\Omega(N\epsilon^{-2/3})$ queries in order to solve problem~\eqref{eq:problem} for some convex, 1-Lipschitz problem instance $f_1,\ldots, f_N$ with domain in the unit ball. The same bound continues to hold even when constraining the $f_i$ to have $O(1/\epsilon)$-Lipschitz gradient, and when using higher-order derivative oracles.
This result further sharpens the contrast to average risk minimization, as it implies $\Omega(\epsilon^{-2/3})$ dataset passes are required in the worst case. However, it also suggests the potential for significant improvement over existing algorithms and their complexity bounds. We realize this potential with new algorithms whose leading complexity term in $N$ matches our lower bound up to polylogarithmic factors. In the non-smooth case, our approach solves~\eqref{eq:problem} with complexity $\Otil{N\epsilon^{-2/3} + \epsilon^{-8/3}}$, dominating prior guarantees for $N=\Omtil{\epsilon^{-2/3}}$. For functions with $O(1/\epsilon)$-Lipschitz gradient, we obtain the stronger rate $\Otil{N\epsilon^{-2/3} + \sqrt{N}\epsilon^{-1}}$, which dominates prior guarantees for $N=\Omtil{1}$. At the core of these algorithms is a technique for accelerated optimization given a ball optimization oracle~\cite{carmon2020acceleration}; we make several improvements to this technique, which may be of independent interest.

\Cref{table:summary} summarizes our results and compares them to prior art. In addition to the results described above, the table also contains lower bounds on the sublinear-in-$N$ terms (which follow from standard arguments), as well as a lower bound for the smooth regime $\Lg=o(1/\epsilon)$. In this regime there remains a gap between the linear terms in the upper and lower bounds.

\subsection{Overview of techniques}

Our algorithms rely on a new technique introduced by~\citet{carmon2020acceleration} for acceleration with a ball optimization oracle (BOO). For any $r>0$ and $F:\R^d \to \R$, a BOO of radius $r$ takes in a query point $\bx\in\R^d$ and returns an (approximate) minimizer of $F$ in a ball of radius $r$ around $\bx$. The technique, which is a variant of Monteiro-Svaiter acceleration~\cite{monteiro2013accelerated,gasnikov19near,bubeck2019complexity,bullins2020highly}, minimizes $F$ to $\epsilon$ accuracy using $\Otil{(1/r)^{2/3}}$ oracle calls (with $\mathsf{poly}(\log(1/\epsilon))$ factors hidden). \citet{carmon2020acceleration} apply their technique to the special case of~\eqref{eq:problem} with linear losses (see also~\Cref{sec:app-discussion-linear}), showing that the log-sum-exp function is quasi-self-concordant and implementing a BOO of radius $r=\Thetatil{\epsilon}$ using $\Otil{1}$ linear system solves. However, this approach does not extend to general $f_i$, because quasi-self-concordance no longer holds for $\Fsm$, which might not even be differentiable.

The main technical insight of our paper is that it is possible to efficiently implement a BOO of radius $r_\epsilon=\Thetatil{\epsilon}$ for $\Fsm$ using stochastic first-order methods. More precisely, for any $\bx \in\R^d$ we can minimize $\Fsm$ in a ball of radius $r_\epsilon$ around $\bx$ to any $\mathsf{poly}(\epsilon)$ accuracy with just $N$ function evaluations and $\mathsf{poly}(1/\epsilon)$ (sub-)gradient evaluations. Using BOO acceleration, this immediately implies an $\Otil{N\epsilon^{-2/3} + \mathsf{poly}(1/\epsilon)}$ complexity bound exhibiting the optimal $N$ dependence. To implement the BOO for $\Fsm$, we consider instead the ``exponentiated softmax'' function
\begin{equation*}
\Gamma_{\epsilon}(x) = {\epsilon'} \cdot \exp\prn*{\frac{\Fsm(x)-\Fsm(\bx)}{{\epsilon'}}} = \sum_{i\in [N]} p_i {\epsilon'} \cdot e^{ \frac{f_i(x)-f_i(\bx)}{{\epsilon'}}} ~~\mbox{where}~~p_i = \frac{e^{f_i(\bx)/{\epsilon'}}}{\sum_{j\in[N]} e^{f_j(\bx)/{\epsilon'}}},
\end{equation*}
and $\epsilon'=\epsilon/(2\log N)$ as in~\cref{eq:softmax}.
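Because $\Gamma_\epsilon$ is a finite sum weighted by the fixed distribution $p$, sampling $i\sim p$ yields a cheap unbiased gradient. The following NumPy sketch (ours, for illustration; the regularized variant actually used by our oracles appears in \Cref{sec:boo-implementation}) shows the scheme: a single $O(N)$ pass computes $p$ at $\bx$, after which each stochastic gradient of $\Gamma_\epsilon$ costs one function and one gradient evaluation.

\begin{verbatim}
import numpy as np
from scipy.special import softmax

def make_gamma_estimator(fs, grads, x_bar, eps_prime, rng):
    # fs and grads are length-N lists of callables. The only O(N) work is
    # the one-time computation of the weights p at x_bar.
    f_bar = np.array([f(x_bar) for f in fs])   # one full pass over the data
    p = softmax(f_bar / eps_prime)             # p_i as defined above

    def stoch_grad(x):
        i = rng.choice(len(fs), p=p)           # sample i ~ p
        scale = np.exp((fs[i](x) - f_bar[i]) / eps_prime)
        return scale * grads[i](x)             # E[.] = grad Gamma_eps(x)

    return stoch_grad
\end{verbatim}

Inside $\mathbb{B}_{r_\epsilon}(\bx)$ the exponential factor is $\Theta(1)$, as we verify next, so the estimator is well-behaved.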
Note that $\Gamma_\epsilon$ is an increasing convex transformation of $\Fsm$, and is therefore convex with the same minimizer as $\Fsm$. Moreover, it is a (weighted) finite sum, and consequently amenable to stochastic gradient methods. It remains to verify that the functions $\xi_i(x) = \epsilon' \cdot e^{ ({f_i(x)-f_i(\bx)})/{\epsilon'}}$ are well-behaved, which might look difficult since exponentials are notoriously unstable. However, our choice of $r_\epsilon$ and the Lipschitz continuity of $f_i$ imply that $e^{ \prn{f_i(x)-f_i(\bx)}/{\epsilon'}} = \Theta(1)$ inside the ball, and consequently $\xi_i$ is indeed well-behaved, with Lipschitz constant $O(1)$. We thus minimize $\Gamma_\epsilon$ (and hence $\Fsm$) with stochastic gradient descent~\cite{hazan2014beyond}, sampling $i$ from $p$. Moreover, if the $\nabla f_i$ are Lipschitz, then the $\nabla \xi_i$ are also Lipschitz, and we apply an accelerated variance reduction method~\cite{allen2016katyusha} for better efficiency.

To complete the analysis of our methods, it remains to determine how accurately we need to solve each ball subproblem. Unfortunately, the analysis of \cite{carmon2020acceleration} makes fairly stringent accuracy requirements, and also requires $\nabla \Fsm$ to have a finite Lipschitz constant. To obtain tighter guarantees, we significantly rework the analysis of \cite{carmon2020acceleration}, modifying the algorithm to make it applicable without any differentiability requirements. Our improved analysis also takes into account the fact that the acceleration scheme requires only ball minimization with strong $\ell_2$ regularization, which further improves the oracle implementation complexity.

Our lower bound follows from a variation on the classical ``chain constructions'' in optimization lower bounds~\cite{nemirovski1983problem,woodworth2016tight,guzman2015lower,diakonikolas2020lower}, where, in order to make a unit of progress on our constructed function, any algorithm must (with constant probability) make $\Omega(N)$ queries to discover a single new link in the chain. We build a chain of length $\Omega(\epsilon^{-2/3})$ for which querying any $\epsilon$-minimizer of $F_{\max}$ requires discovering the entire chain, giving the $\Omega(N\epsilon^{-2/3})$ complexity lower bound. To prove this result for arbitrary randomized algorithms, we randomize both the order of the functions and the rotation of the domain.

\paragraph{Paper outline.} \Cref{sec:prelims} provides some additional preliminaries and notation. \Cref{sec:ms-bacon-redux} gives our improved derivation of the BOO acceleration method of~\cite{carmon2020acceleration}, and \Cref{sec:boo-implementation} develops a BOO for $\Fsm$, culminating in our upper complexity bounds for the problem~\eqref{eq:problem}, stated in \Cref{thm:ub}. \Cref{sec:lb} gives our lower bounds, with the main result stated in \Cref{thm:lb}.

\section{Preliminaries}\label{sec:prelims}

\paragraph{General notation.} Throughout, $\norm{\cdot}$ denotes the Euclidean norm. We write $\mathbb{B}_r(z)$ for the Euclidean ball of radius $r$ centered at $z$, and $\mathbb{B}_r^d(z)$ when emphasizing that the ball is $d$-dimensional. We use $\Lf$ to denote a function Lipschitz constant and $\Lg$ to denote a gradient Lipschitz constant; we say that $f$ is $\Lg$-smooth if it has $\Lg$-Lipschitz gradient.
To disambiguate between sequence and coordinate indices, in \Cref{sec:lb} we denote the former with a normal subscript and the latter with a bracketed subscript, i.e., $x\coind{i}$ is the $i$th coordinate of $x$ and $x_k$ is the $k$th element in the sequence $x_1, x_2, \ldots$. We also write $v\coind{\le i}$ to denote a copy of $v$ with coordinates $i+1, i+2, \ldots$ set to zero. We use $a\wedge b \coloneqq \min\{a,b\}$ to abbreviate binary minimization, and write the binary indicator of event $A$ as $\indic{A}$.

\paragraph{Complexity model.} We mainly measure complexity through the number of individual function and gradient evaluations required to solve the problem~\eqref{eq:problem}. We write $\mathcal{T}_f$ for the cost of evaluating $f_i(x)$ for a single $i$ and $x$, and similarly write $\mathcal{T}_g$ for the cost of evaluating $\nabla f_i(x)$. Assuming $\mathcal{T}_f, \mathcal{T}_g = \Omega(d)$, our evaluation complexity upper bounds translate directly to runtime upper bounds.

\paragraph{Proximal operators.} For any function $f$ and regularization parameter $\lambda \ge 0$, we define the standard proximal mapping $\prox{f}(\bx) \coloneqq \argmin_{x\in\R^d} \crl*{ f(x) + \frac{\lambda}{2} \norm{x-\bar{x}}^2}$. We also define the ball-constrained proximal mapping $\bprox{f}(\bx) \coloneqq \argmin_{x\in\mathbb{B}_r(\bx)} \crl*{ f(x) + \frac{\lambda}{2} \norm{x-\bar{x}}^2}$. Finally, we define the notion of an approximate oracle for $\bprox{f}$, which plays a key role in our analysis.

\begin{definition}[BROO\xspace]\label{def:broo}
We say that a mapping $\oracle{\cdot}$ is a Ball Regularized Optimization Oracle of radius $r$ ($r$-BROO\xspace) for $f$ if, for every query point $\bx$, regularization parameter $\lambda$ and desired accuracy $\delta$, it returns $\tilde{x} = \oracle{\bx}$ satisfying
\begin{equation}\label{eq:broo-req}
f(\tilde{x})+\frac{\lambda}{2}\|\tilde{x}-\bx\|^2\le \min_{x \in \mathbb{B}_{r}(\bx)} \crl*{ f(x) + \frac{\lambda}{2} \norm{x-\bar{x}}^2}+\frac{\lambda}{2}\delta^2.
\end{equation}
\end{definition}

\noindent Note that when $f$ is convex, the strong convexity of $f(x)+\frac{\lambda}{2}\norm{x-\bx}^2$ and the approximation requirement~\eqref{eq:broo-req} guarantee that $\norm{\oracle{\bx}-\bprox{f}(\bx)}\le \delta$.

\section{BROO\xspace acceleration}\label{sec:ms-bacon-redux}

In this section, we describe a variant of the ball optimization acceleration scheme of \citet{carmon2020acceleration}, given as \Cref{alg:ms-bacon-redux}. Both methods follow the template of Monteiro-Svaiter acceleration~\cite{monteiro2013accelerated}, but our algorithm improves on \cite{carmon2020acceleration} in two ways. First, it accesses the objective strictly through the ball oracle, while \cite{carmon2020acceleration} also uses gradient computations. Second, our algorithm requires an oracle that solves \emph{regularized} ball optimization problems, which are easier to implement.\footnote{We note that $\lambda$ in our notation corresponds to $1/\lambda$ in the notation of~\cite{carmon2020acceleration}.} As a consequence of these differences, our accelerated algorithm's guarantee does not require any smoothness of the objective function. Moreover, our setup allows for far less accurate solutions of the ball optimization subproblems: \citet{carmon2020acceleration} require $\delta = O(\frac{\epsilon}{\Lg R})$ while we only require $\delta = O(\frac{\epsilon}{\lambda R})$.
While our requirement becomes stricter as the regularizer $\lambda$ grows, it also becomes easier to fulfill, since the ball optimization problem becomes more strongly convex and hence easier to solve. Our relaxed accuracy requirement ultimately translates into an improved $\epsilon^{-1}$ dependence in the sublinear-in-$N$ term of our upper bound. With the key innovations of~\Cref{alg:ms-bacon-redux} explained, we now formally state its convergence guarantee; we defer the proof to \Cref{sec:ms-bacon-redux-proofs}.

\begin{algorithm2e}[htb!]
\setstretch{1.1}
\caption{BROO\xspace acceleration}
\label{alg:ms-bacon-redux}
\LinesNumbered
\DontPrintSemicolon
\SetKwRepeat{Do}{do}{while}
\KwInput{Initial $x_0\in \R^d$, Lipschitz and distance bounds $\Lf$, $R$, $r$, accuracy $\epsilon$, BROO\xspace $\oracle{\cdot}$}
\KwOutput{$x_{\mathrm{ret}}$ such that $f(x_{\mathrm{ret}}) - \min_{z \in \mathbb{B}_R(x_0)} f(z) \leq \epsilon$}
\SetKwFunction{Bisection}{$\lambda\textsc{-Bisection}$}
\SetKwProg{Fn}{function}{}{}
\BlankLine
Let $v_0 = x_0$, $A_0 = 0$\;
\For{$t=0,1,2,\ldots$}{
$\lambda_{t+1} = \lambda\textsc{-Bisection}(x_t,v_t,A_t, \lambda_{\max}=\tfrac{2\Lf}{r}, \lambda_{\min}=\tfrac{\epsilon}{6rR})$ \label{line:lambda_min}\;
\Comment*[f]{Finds $\lambda_{t+1}$ such that $x_{t+1}\approx \prox[\lambda_{t+1}]{f}(y_t)$ and either $\norm{x_{t+1}-y_t} \approx r$ or $x_{t+1}$ is $\epsilon$-optimal}
$a_{t+1} = \tfrac{1}{2\lambda_{t+1}}(1 + \sqrt{1 + 4 \lambda_{t+1} A_t})$ and $A_{t+1} = A_t + a_{t+1}$ \label{line:at-def} \Comment*{$A_{t+1} = a_{t+1}^2 \lambda_{t+1}$}
$y_{t}=\frac{A_t}{A_{t+1}}x_{t}+\frac{a_{t+1}}{A_{t+1}}v_{t}$\;
$x_{t+1} = \oracle[\lambda_{t+1},\delta_{t+1}]{y_t}$, where $\delta_{t+1} = \frac{\epsilon}{12 \lambda_{t+1} R}$\;
$v_{t+1}= \argmin_{v \in \mathbb{B}_R(x_0)}\left\{a_{t+1} \lambda_{t+1}\left\langle y_{t}-x_{t+1},v \right\rangle +\frac{1}{2}\norm{ v-v_{t}} ^{2}\right\}$ \label{line:v-update} \;
\If{$A_{t + 1} \geq \frac{R^2}{\epsilon}$, $\lambda_{t + 1} \leq \frac{\epsilon}{3 r R}$, $\norm{x_{t +1} - v_{t + 1}} > 2R$, \textbf{\textup{or}} $A_{t + 1} < \exp\left(\frac{r^{2/3}}{R^{2/3}}(t-1)\right) A_1$ \label{line:outer_check}}{
\Return $x_{\mathrm{ret}} \in \argmin_{x\in\crl{x_0,x_1,\ldots,x_{t+1}}} f(x) $ \label{line:outer_return}\;
}
}
\BlankLine
\Fn{$\lambda\textsc{-Bisection}(x,v,A,\lambda_{\max}, \lambda_{\min})$}{
For all $\lambda'$, let $y_{\lambda'} \coloneqq \alpha_{2A\lambda'} \cdot x + (1-\alpha_{2A\lambda'}) \cdot v$, where $\alpha_\tau \coloneqq \frac{\tau}{1+\tau+\sqrt{1 + 2\tau}}$\;
Define $\Delta(\lambda) \coloneqq \norm{\oracle[\lambda, \frac{r}{17}]{y_\lambda} - y_\lambda}$ \Comment*{approximation of $\widehat{\Delta}(\lambda) \coloneqq \norm{ \bprox[\lambda,r]{f}(y_\lambda) - y_\lambda}$}
Let $\lambda = \lambda_{\max}$\;
\lWhile{$\lambda \geq \lambda_{\min}$ \textup{\textbf{and}} $\Delta(\lambda) \leq \frac{13r}{16}$ \label{line:while1start}}{
$\lambda \gets \lambda/2$ \Comment*[f]{terminates in $O(\log\frac{\lambda_{\max}}{\lambda_{\min}})$ steps}
}\label{line:while1end}
\lIf{$\lambda \leq \lambda_{\min}$}{\Return $2\lambda$ \label{line:ls_lower_boundary} \Comment*[f]{happens only if $\bprox[2\lambda,r]{f}(y_{2\lambda})$ is $O(\epsilon)$-optimal for small $\lambda_{\min}$}}
Let $\lambda_u = 2\lambda$, $\lambda_{\ell} = \lambda$ and $\lambda_m = \sqrt{\lambda_u \lambda_{\ell}}$\;
\lIf{$\Delta(\lambda_{\ell}) \leq \frac{15r}{16}$}{\Return $\lambda_{\ell}$ \Comment*[f]{happens only if $\Delta(\lambda_{\ell}) \in [\frac{13r}{16}, \frac{15r}{16}]$}}\label{line:stop-middle}
\While{$\Delta(\lambda_m)\notin [\frac{13r}{16}, \frac{15r}{16}]$ \textup{\textbf{and}} $\log_2 \frac{\lambda_u}{\lambda_{\ell}} \ge \frac{r}{8(R+\Lf/\lambda_{\ell})}$ \label{line:while2start}}{
\leIf{$\Delta(\lambda_m) < \frac{13r}{16}$}{$\lambda_u=\lambda_m$}{$\lambda_\ell = \lambda_m$}
$\lambda_m = \sqrt{\lambda_u \lambda_{\ell}}$
}\label{line:while2end}
\Return $\lambda_m$ \Comment*[f]{the while loop terminates in $O\prn[\big]{\log\prn[\big]{\frac{R}{r} + \frac{\Lf}{\lambda_{\min} r}}}$ steps}
}
\end{algorithm2e}

\newcommand{\fancyind}[1]{_{(#1)}}

\begin{restatable}{theorem}{thmMSBaconRedux}\label{thm:ms-bacon-redux}
Let $f:\R^d\to \R$ be convex and $\Lf$-Lipschitz. For any domain bound $R>0$, ball radius $r\in(0,R]$, accuracy level $\epsilon>0$, and initial point $x_0\in\R^d$, \Cref{alg:ms-bacon-redux} returns a point $x\in\R^d$ satisfying $f(x)-\min_{z\in\mathbb{B}_R(x_0)}f(z) \le \epsilon$ using at most
\begin{equation*}
T = O\prn*{ \prn*{ \frac{R}{r}}^{2/3} \log \prn*{\frac{[f(x_0) - \min_{z\in\mathbb{B}_R(x_0)}f(z)] R}{ \epsilon r} } \log \prn*{\frac{\Lf R^2}{\epsilon r}} }
\end{equation*}
queries to an $r$-BROO\xspace. Moreover, the BROO\xspace query parameters $(\lambda\fancyind{1}, \delta\fancyind{1}), \ldots, (\lambda\fancyind{T}, \delta\fancyind{T})$ satisfy
\begin{enumerate}
\item \label{item:lambda-bounds} $\Omega(\frac{\epsilon}{rR}) \le \lambda\fancyind{i} \le O(\frac{\Lf }{r})$ and $\delta\fancyind{i} \ge \Omega( \frac{\epsilon}{\lambda\fancyind{i} R} )$ for all $i \in [T]$.
\item \label{item:lambda-sum-bound} $\sum_{i \in [T]} \frac{1}{\sqrt{\lambda\fancyind{i}}} \leq O\prn[\big]{ \frac{R}{\sqrt{\epsilon}}\log\frac{\Lf R^2}{\epsilon r}}.$
\end{enumerate}
\end{restatable}

We remark that \Cref{thm:ms-bacon-redux} requires a bound on the Lipschitz constant of $f$ solely to bound the complexity of the bisection procedure for finding $\{\lambda_{t}\}$.

\section{BROO\xspace implementation}\label{sec:boo-implementation}

In this section, we develop efficient BROO\xspace implementations for $\Fsm$, the softmax approximation~\eqref{eq:softmax} of $F_{\max}$. In \Cref{ssec:approx} we develop our main analytical tool in the form of an ``exponentiated softmax'' function that approximates $\Fsm$ and facilitates efficient stochastic gradient estimation. We then minimize the exponentiated softmax with standard tools from stochastic convex optimization. In \Cref{ssec:sgd} we give a BROO\xspace implementation for the non-smooth case using restarted SGD~\cite{hazan2014beyond}. In \Cref{ssec:asvrg} we instead apply an accelerated variance reduction method (Katyusha~\cite{allen2016katyusha}) that offers improved performance when the $f_i$ are even slightly smooth. Finally, in \Cref{ssec:ub-statement} we combine our BROO\xspace implementations with \Cref{alg:ms-bacon-redux} and its guarantees to obtain our main results: new convergence guarantees for minimizing $F_{\max}$. We defer proofs to \Cref{sec:boo-impl-proofs}.

\subsection{Exponentiating a softmax}\label{ssec:approx}

Recall that $\epsilon' = \epsilon/(2\log N)$ and that (for nominal accuracy $\epsilon$) the softmax function $\Fsm(x) = \epsilon' \log\prn*{ \sum_{i\in[N]} e^{f_i(x)/\epsilon'}}$ approximates $F_{\max}$ to within $\epsilon/2$ additive error.
The key challenge in designing an efficient stochastic method for minimizing $\Fsm$ is the lack of cheap unbiased gradient estimators. Specifically, we have $\nabla \Fsm(x) = \sum_{i\in [N]} p_i(x) \nabla f_i(x)$, where
\begin{equation}\label{eq:softmax-prob}
p_i(x) = \frac{e^{f_i(x)/\epsilon'}}{\sum_{j\in [N]} e^{f_j(x)/\epsilon'}}.
\end{equation}
Given access to $p(x)$, we could easily obtain an unbiased estimator for $\nabla \Fsm(x)$ by sampling $i\sim p(x)$ and outputting $\nabla f_i(x)$. However, computing $p(x)$ itself requires evaluating all $N$ functions, making it essentially as costly as computing $\nabla \Fsm$ exactly.

This difficulty, however, is greatly relieved when we operate in a small ball of radius $r_\epsilon = \epsilon' / \Lf$ centered at some point $\bx$. To see why, note that for every $i$ and every $x\in\mathbb{B}_{r_\epsilon}(\bx)$, Lipschitz continuity of $f_i$ implies $| f_i(x)/\epsilon' -f_i(\bx)/\epsilon' | \le \Lf r_\epsilon / \epsilon' = 1$. Consequently, $p(\bx)$ is a multiplicative approximation of $p(x)$ throughout the ball, satisfying $e^{-2} p_i(\bx) \le p_i(x) \le e^2 p_i(\bx)$ for all $x\in \mathbb{B}_{r_\epsilon}(\bx)$. Our high-level strategy is thus: perform a full data pass \emph{once} to compute $p(\bx)$, and then rely on the stability of $p(x)$ within $\mathbb{B}_{r_\epsilon}(\bx)$ to efficiently estimate gradients by sampling from $p(\bx)$. However, simply sampling $i\sim p(\bx)$ and returning $\nabla f_i(x)$ is not enough, because it leads to a biased estimator of $\nabla \Fsm(x)$. Instead, we define below a surrogate function ``exponentiating the softmax'' that closely approximates $\Fsm$ and for which $e^{(f_i(x)-f_i(\bx))/\epsilon'} \nabla f_i(x)$ is an unbiased gradient estimator when $i\sim p(\bx)$.\footnote{We remark that $g_i(x) = e^{(f_i(x)-f_i(\bx))/\epsilon'} \nabla f_i(x)$ is also nearly unbiased for $\Fsm$ in the sense that $\E g_i(x) = Z(x) \nabla \Fsm(x)$ for some $Z(x)$ that is close to 1 when $x$ is inside $\mathbb{B}_{r_\epsilon}(\bx)$. Estimators of this form suffice for SGD, but are less amenable to variance reduction.}

To precisely define the surrogate ``exponentiated softmax'' function, we require some additional notation. Fixing a ball center $\bx$ and regularization parameter $\lambda$, let
\begin{equation*}
f_i^\lambda(x) \coloneqq f_i(x) + \frac{\lambda}{2}\norm{x-\bx}^2 ~~\mbox{and}~~ \Fsm^\lambda(x) \coloneqq \Fsm(x) + \frac{\lambda}{2}\norm{x-\bx}^2 = \epsilon' \log\prn*{ \sum_{i\in[N]} e^{f_i^\lambda(x)/\epsilon'}}
\end{equation*}
be the regularized counterparts of $f_i$ and $\Fsm$, respectively. Then, we define the exponentiated softmax as
\begin{equation} \label{eq:bare-exp-def}
\Gamma_{\epsilon,\lambda}(x) = {\epsilon'} \cdot \exp\prn*{\frac{\Fsm^{\lambda}(x)-\Fsm^{\lambda}(\bx)}{{\epsilon'}}} = \sum_{i\in [N]} p_i(\bx) \gamma_i(x)~\mbox{where}~ \gamma_i(x)\coloneqq \epsilon' e^{ \frac{f_i^\lambda(x)-f_i^{\lambda}(\bx)}{{\epsilon'}}}.
\end{equation}
Clearly, $\Gamma_{\epsilon,\lambda}$ is a finite-sum objective (weighted by $p(\bx)$), making stochastic first-order methods applicable. Moreover, as the following lemma shows, when the ball radius $r$ and the regularizer $\lambda$ are not too large, $\Gamma_{\epsilon,\lambda}$ closely approximates $\Fsm^\lambda$ and is as regular as $\Fsm^\lambda$ up to a constant.

\begin{restatable}{lem}{beFapprox}\label{lem:bef-approx}
Let $f_1,\cdots, f_N$ each be $\Lf$-Lipschitz and $\Lg$-smooth. For any $c>0$, $r \le c \epsilon' / \Lf$, and $\lambda \le c\Lf/r$, let $C = (1+c+c^2)e^{c+c^2/2}$.
The exponentiated softmax $\Gamma_{\epsilon,\lambda}$ satisfies the following properties for any $\bx\in\R^d$.
\begin{enumerate}
\item \label{item:bef-error-bound} $\Fsm^{\lambda}$ and $\Gamma_{\epsilon,\lambda}$ have the same minimizer $x\opt$ in $\mathbb{B}_r(\bx)$. Moreover, for every $x\in\mathbb{B}_r(\bx)$,
\[\Fsm^{\lambda}(x) - \Fsm^{\lambda}(x\opt) \le C ( \Gamma_{\epsilon,\lambda}(x) - \Gamma_{\epsilon,\lambda}(x\opt) ).\]
\item \label{item:bef-reg-bound} Restricted to $\mathbb{B}_r(\bx)$, each function $\gamma_i$ defined in~\eqref{eq:bare-exp-def} is $C \Lf$-Lipschitz, $C^{-1} \lambda$-strongly convex, and $C(\Lg + \lambda + \Lf^2 / \epsilon')$-smooth.
\end{enumerate}
\end{restatable}
\noindent The proof of \Cref{lem:bef-approx} follows from a straightforward calculation, and we defer it to \Cref{ssec:bef-approx-proof}.

\subsection{The non-smooth case: SGD implementation}\label{ssec:sgd}

To take advantage of the strong convexity of $\Gamma_{\epsilon,\lambda}$, we use the restarted SGD variant of~\citet{hazan2014beyond}, which finds an $\varepsilon$-suboptimal point of a $G$-Lipschitz and $\mu$-strongly convex function within $\Otil{G^2 / (\mu \varepsilon)}$ iterations (with high probability). To estimate the stochastic gradients, we sample $i\sim p(\bx)$ and output $\nabla \gamma_i(x)$; this takes $O(\mathcal{T}_g + \mathcal{T}_f)$ time per stochastic gradient, plus $O(N\mathcal{T}_f)$ preprocessing time to compute $p(\bx)$. We provide pseudocode for the algorithm in \Cref{ssec:sgd-app}, where we also prove the following complexity bound.

\begin{restatable}{corollary}{corsgd}\label{cor:sgd}
Let $f_1,f_2,\cdots,f_N$ be $\Lf$-Lipschitz, let $\sigma\in(0,1)$, $\epsilon, \delta > 0$ and $r_\epsilon = \epsilon/(2\log N\cdot\lip)$. For any $\bx\in\R^d$ and $\lambda \le O(\Lf / r_\epsilon)$, with probability at least $1-\sigma$, \Cref{alg:innerloop-SGD} outputs a valid $r_\epsilon$-BROO\xspace response for $\Fsm$ to query $\bx$ with regularization $\lambda$ and accuracy $\delta$, and has cost
\begin{equation}\label{eq:sgd-complexity-bound}
O\left(\mathcal{T}_f N+ (\mathcal{T}_g + \mathcal{T}_f) \frac{\Lf^2}{\lambda^2\delta^2}\log\left(\frac{\log(\Lf/\lambda\delta)}{\sigma}\right)\right).
\end{equation}
\end{restatable}

\subsection{The (slightly) smooth case: accelerated variance reduction implementation}\label{ssec:asvrg}

If we further assume smoothness of $f_1, \ldots, f_N$, we can use stochastic variance reduction to obtain an improved runtime. With these methods, we estimate the gradient of $\Gamma_{\epsilon,\lambda}$ as $\nabla \Gamma_{\epsilon,\lambda}(x') + \nabla \gamma_i(x) - \nabla \gamma_i(x')$, where $i\sim p(\bx)$ and $x'$ is a reference point which we recompute $\Otil{1}$ times. Here, the $O(N\mathcal{T}_f)$ cost of computing $p(\bx)$ is essentially free compared to the $\Otil{N\mathcal{T}_g}$ cost of computing the exact gradients of $\Gamma_{\epsilon,\lambda}$ at the reference points. We again take advantage of the regularization-induced $\lambda$-strong convexity, this time via a variant of the Katyusha method of \citet{allen2016katyusha}. This results in the following complexity guarantee; see~\Cref{ssec:asvrg-app} for a proof.

\begin{restatable}{corollary}{corasvrg}\label{cor:asvrg}
Let $f_1$, $\cdots$, $f_N$ be $\Lf$-Lipschitz and $\Lg$-smooth, let $\sigma\in(0,1)$, $\epsilon, \delta > 0$, $\epsilon' = \epsilon / (2\log N)$ and $r_\epsilon =\epsilon' / \Lf$.
For any $\bx\in\R^d$ and $\lambda \le O(\Lf / r_\epsilon)$, with probability at least $1-\sigma$, Katyusha \cite{allen2016katyusha} outputs a valid $r_\epsilon$-BROO\xspace response to query $\bx$ with regularization $\lambda$ and accuracy $\delta$, and has computational cost
\begin{equation}\label{eq:asvrg-complexity-bound}
O\left(\left(\mathcal{T}_f+\mathcal{T}_g\right)\left(N+{\frac{\sqrt{N}\left(\Lf + \sqrt{\epsilon' \Lg}\right)}{\sqrt{\lambda \epsilon'}}}\right)\log\left(\frac{\Lf r_\epsilon}{ \lambda\delta^2\sigma}\right)\right).
\end{equation}
\end{restatable}

\subsection{Main result}\label{ssec:ub-statement}

With our oracle implementations in hand, we are ready to state our main result.

\begin{restatable}{theorem}{thmub}\label{thm:ub}
Let $f_1,f_2,\ldots,f_N$ be $\Lf$-Lipschitz, let $x\opt$ be a minimizer of $F_{\max}(x) = \max_{i\in[N]} f_i(x)$, and assume $\norm{x_0-x\opt}\le R$ for a given initial point $x_0$ and some $R>0$. For any $\epsilon>0$, \Cref{alg:ms-bacon-redux} with the BROO\xspace implementation for $\Fsm$ in Algorithm~\ref{alg:innerloop-SGD} solves the problem~\eqref{eq:problem} with probability at least $\frac{99}{100}$ and has computational cost
\begin{equation} \label{eq:final-rate-non-smooth}
O\left(\left(\frac{\lip R\log N}{\epsilon}\right)^{2/3}\left( \mathcal{T}_f N+\left(\frac{\Lf R}{\epsilon}\right)^2\cdot(\mathcal{T}_f+\mathcal{T}_g)\log K\right)\log^2K\right),
\end{equation}
where $K \coloneqq \Lf R\epsilon^{-1}\log N$. If moreover $f_1,f_2,\ldots,f_N$ are each $\Lg$-smooth, then \Cref{alg:ms-bacon-redux} with a BROO\xspace implementation for $\Fsm$ using Katyusha solves~\eqref{eq:problem} with probability at least $\frac{99}{100}$ and has cost
\begin{equation*} \label{eq:final-rate-smooth}
O\left((\mathcal{T}_f+\mathcal{T}_g) \left( \left(\frac{\lip R\log N}{\epsilon}\right)^{2/3}N +\left( \frac{\Lf R\sqrt{\log N}}{\epsilon} +\sqrt{\frac{\Lg R^2}{\epsilon}}\right)\sqrt{N}\right)\log^3 K\right).
\end{equation*}
\end{restatable}

\noindent The proof of \Cref{thm:ub}, which we provide in \Cref{ssec:ub-proof}, follows straightforwardly from \Cref{thm:ms-bacon-redux} and \Cref{cor:sgd,cor:asvrg}. When applying \Cref{cor:sgd} with $\delta = \Omega(\frac{\epsilon}{\lambda R})$, the dependence of the complexity on $\lambda$ cancels, and we find that each oracle call costs $\Otil{N\mathcal{T}_f + {\Lf^2 R^2}\epsilon^{-2}(\mathcal{T}_f+\mathcal{T}_g)}$. The complexity bound then follows from multiplying the per-call cost by the bound $\Otil{(R/r_\epsilon)^{2/3}}$ that \Cref{thm:ms-bacon-redux} provides on the total number of oracle calls. When applying \Cref{cor:asvrg}, we obtain an oracle implementation cost of $\Otil{N(\mathcal{T}_f +\mathcal{T}_g) + \lambda^{-1/2}\sqrt{N}\sqrt{\Lf^2 \epsilon^{-1}+ \Lg}(\mathcal{T}_f+\mathcal{T}_g)}$. The complexity bound again follows by multiplying the per-call cost by the total number of calls, except that to bound the contribution of the $\sqrt{N}$ term we invoke the guarantee $\sum_{i} \lambda\fancyind{i}^{-1/2} \le \Otil{R\epsilon^{-1/2}}$ from \Cref{thm:ms-bacon-redux}, which yields a tighter bound.
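To tie the pieces together, the following simplified sketch (ours; plain projected SGD with iterate averaging rather than the restarted scheme of \Cref{ssec:sgd}, and with the step budget left as a parameter) shows how a BROO\xspace response can be produced from the sampling estimator described earlier plus the ball-centered regularizer.

\begin{verbatim}
import numpy as np

def broo_sgd(stoch_grad, x_bar, r, lam, steps):
    # Approximately minimize Gamma_{eps,lam} over B_r(x_bar). The lam-term
    # below is the gradient of the regularizer lam/2 * ||x - x_bar||^2, and
    # 1/(lam * t) is the classical step size for lam-strong convexity.
    x = x_bar.copy()
    avg = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = stoch_grad(x) + lam * (x - x_bar)
        x = x - g / (lam * t)
        d = x - x_bar                      # project back onto the ball
        nrm = np.linalg.norm(d)
        if nrm > r:
            x = x_bar + (r / nrm) * d
        avg += (x - avg) / t               # running average of the iterates
    return avg
\end{verbatim}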
\section{Lower bounds}\label{sec:lb}
\newcommand{\prog}[1][\alpha]{\mathrm{prog}_{#1}}
\newcommand{\linkfun}[1][\alpha_T,\ell]{\psi_{#1}}
\newcommand{\B}[1][t]{B_{#1}}
\newcommand{\C}[1][t]{C_{#1}}
\newcommand{\breachEv}[1][t]{\mathfrak{E}_{#1}}
\NewDocumentCommand{\orcInfo}{O{\pi} O{U}}{\mathcal{I}_{#1,#2}}
\newcommand{\bp}[1][t]{\bar{p}_{#1}}
\newcommand{\zrEv}[1][t]{\mathfrak{ZR}_{#1}}
\newcommand{\restrictedTo}[1]{\vert_{#1}}

In this section, we prove oracle complexity lower bounds showing that the results of the previous section are order-optimal for sufficiently large $N$ and $\Lg$. While our algorithms are first-order methods, our lower bounds remain valid even for algorithms that use higher-order derivatives, as is typical for our proof technique. We begin by providing a formal definition of the oracle-based optimization model we consider (\Cref{sec:lb-protocol}). In \Cref{sec:lb-progress-control}, we define an $N$-element variant of the zero-chain concept and prove that it allows us to control the progress of any (possibly randomized) algorithm. Then, in \Cref{sec:lb-hard-instace} we construct a particular $N$-element zero-chain for which slow progress implies a large optimality gap. Finally, \Cref{sec:lb-statement} ties these results together, stating our lower bound and providing some discussion.

\subsection{Optimization protocol}\label{sec:lb-protocol}

Consider problem instances of the form $(f_i)_{i\in [N]}$, where $f_i: D\to \R$ for some common domain $D$ and all $i\in[N]$. We say that an algorithm operating on $(f_i)_{i\in [N]}$ is an \emph{$N$-element algorithm} if it uses the following iterative protocol. At iteration $t$, the algorithm produces a query $(i_t, x_t)$, with $i_t\in [N]$ and $x_t \in D$. It then observes the output of a \emph{local oracle} for $f_{i_t}$ at the point $x_t$, which we denote by $\mathcal{O}^{\mathrm{loc}}_{f_{i_t}}(x_t)$. Formally, $\mathcal{O}^{\mathrm{loc}}$ can be any mapping that satisfies $\mathcal{O}^{\mathrm{loc}}_{f}(x) = \mathcal{O}^{\mathrm{loc}}_{\tilde{f}}(x)$ whenever $f(y)=\tilde{f} (y)$ for all $y$ in some open set containing $x$ (subsequently referred to as a ``neighborhood'' of $x$). In particular, the first-order oracle used for our upper bounds corresponds to $\mathcal{O}^{\mathrm{loc}}_{f}(x) = (f(x), \nabla f(x))$ and is a valid local oracle. The $p$th-order derivative oracle $\mathcal{O}^{\mathrm{loc}}_{f}(x) = (f(x), \nabla f(x), \ldots, \nabla^{p} f(x))$ is also a valid local oracle. The notion of a local oracle is classical in the literature on information-based complexity~\cite{nemirovski1983problem,guzman2015lower}.

The algorithms we consider may be randomized, and we use $\zeta$ to denote the algorithm's randomness. Beyond $\zeta$, the query of the algorithm at iteration $t$ may only depend on the information it observes from the oracle.
That is, for any $t\ge 1$, we have \begin{equation}\label{eq:general-iter} i_t, x_t = Q_t \prn*{ \zeta, \mathcal{O}^{\mathrm{loc}}_{f_{i_1}}(x_1), \ldots, \mathcal{O}^{\mathrm{loc}}_{f_{i_{t-1}}}(x_{t-1}) } \end{equation} for some measurable function $Q_t$. \subsection{Progress control argument}\label{sec:lb-progress-control} Following well-established methodology~\cite{nesterov2018lectures,guzman2015lower,carmon2020lower}, instead of directly bounding the sub-optimality of the queries $x_1, \ldots, x_t$ we first bound a surrogate quantity we call \emph{progress}. Informally, the progress is the highest coordinate index that the algorithm managed to ``discover'' using the oracle responses. Formally, we define the progress of a point $x$ as \begin{equation} \label{eq:progress} \prog[\alpha](x) \coloneqq \max\crl*{i\ge 1 \;\big|\; \abs{x\coind{i}} > \alpha}~~\mbox{(where $\max \emptyset \coloneqq 0$)}. \end{equation} The parameter $\alpha$ is a significance threshold for declaring a coordinate ``discovered;'' it allows us to prevent algorithms from trivially discovering coordinates by querying directions at random. We next define a structural property that facilitates controlling the rate with which $\prog(x_t)$ increases. For this definition, we recall that $v\coind{\le l}$ denotes the vector whose first $l$ coordinates are identical to those of $v$ and the remainder are zero. Recall also that $\mathbb{B}_1^T(0)$ is the unit ball in $\R^T$. \begin{definition}\label{def:zc} A sequence $f_1,\ldots,f_N$ of functions $f_i:\mathbb{B}_1^T(0)\to \R$ is called an \emph{$\alpha$-robust $N$-element zero-chain} if for all $x \in \mathbb{B}_1^T(0)$, all $y$ in a neighborhood of $x$, and all $i \in [N]$, we have \begin{equation}\label{eq:zc-def} \prog(x) \le p \implies f_i(y) = \begin{cases} f_i(y\coind{\le p}) & i < p + 1 \\ f_i(y\coind{\le p+1}) & i = p + 1 \\ f_{N}(y\coind{\le p}) & i > p+1. \\ \end{cases} \end{equation} \end{definition} To unpack this definition, consider any first-order algorithm with the following two simplifying properties: (1) the queries $i_1, i_2, \ldots$ are drawn i.i.d.\ from $\mathrm{Uniform}([N])$ and (2) every query $x_t$ lies in the span of previously observed gradients $\nabla f_{i_1}(x_1), \ldots, \nabla f_{i_{t-1}}(x_{t-1})$ \cite[cf.][]{nesterov2018lectures}. The first query of the algorithm must be $x_1=0$, and consequently $\prog(x_1)=0$. \Cref{def:zc} then implies that $f_2,\ldots,f_N$ are all constant in a neighborhood of $x_1$, while $f_1$ depends only on the first coordinate. Therefore, the span of the gradients (and the next query's progress) can only increase to $1$ after the algorithm queries $i=1$ for the first time. With uniformly random index queries, that takes $\Omega(N)$ queries with constant probability. Repeating this argument, we see that every increase of the gradient span (and hence query progress) takes $\Omega(N)$ queries with constant probability, and therefore reaching progress $T$ takes $\widetilde{\Omega}(NT)$ queries with high probability. To extend this conclusion to general algorithms of the form~\eqref{eq:general-iter}, we perform two types of randomization. First, to handle arbitrary strategies for choosing $i_t$ (as opposed to uniform sampling), we apply a random permutation to $f_1, \ldots, f_N$. Second, to handle arbitrary queries $x_t$ (as opposed to queries in the span of observed gradients), we randomly rotate the coordinate system. 
This randomization scheme guarantees that no algorithm can materially improve on uniform index sampling combined with span-preserving queries, as we formally state in the following proposition.

\begin{restatable}{prop}{propProgControl}\label{prop:prog-control}
Let $\delta, \alpha \in (0,1)$ and let $N,T\in \N$ with $T\le N/2$. Let $(f_i)_{i\in [N]}$ be an $\alpha$-robust $N$-element zero-chain with domain $\mathbb{B}_1^T(0)$. For $d \ge T + \frac{2}{\alpha^2}\log \frac{4NT^2}{\delta}$, draw $U$ uniformly from the set of $d\times T$ orthogonal matrices, and draw $\Pi$ uniformly from the set of permutations of $[N]$. Let $\tilde{f}_i(x) \coloneqq f_{\Pi^{-1}(i)} (U^\top x)$. Let $\{(i_t, x_t)\}_{t\ge 1}$ be the queries of any $N$-element algorithm operating on $\tilde{f}_1, \ldots, \tilde{f}_N$. Then with probability at least $1-\delta$ we have
\begin{equation*}
\prog(U^\top x_t) < T~~\mbox{for all}~~t\le \tfrac{1}{16}N\prn*{T-\log\tfrac{2}{\delta}}.
\end{equation*}
\end{restatable}

See \Cref{sec:prog-control-proof} for a proof. Our definition of $N$-element zero-chains and our proof of their progress-control property build on the notion of (single-element) zero-chain functions~\cite{carmon2020lower}. It is also closely related to probability-$p$ zero-chains~\cite{arjevani2019lower}; \Cref{prop:prog-control} essentially shows that $N$-element algorithms interacting with an $N$-element zero-chain make progress about as slowly as stochastic algorithms interacting with a probability-$N^{-1}$ zero-chain.

\subsection{Hard instance construction}\label{sec:lb-hard-instace}

With the progress-control machinery in hand, we proceed to construct a specific $N$-element zero-chain that also guarantees a large optimality gap for points with progress smaller than $T$. Toward that end, we first define the ``link function'' $\linkfun[\alpha,\ell]: \R \to \R_+$ as
\begin{equation*}
\linkfun[\alpha, \ell] (t) \coloneqq \begin{cases} 0 & |t| \le \alpha \\ \frac{\ell}{2} (|t|-\alpha)^2 & \alpha \le |t| \le \ell^{-1} + \alpha \\ |t| - \alpha - \frac{1}{2\ell} & \mbox{otherwise}. \end{cases}
\end{equation*}
Clearly, $\linkfun[\alpha, \ell]$ is 1-Lipschitz, $\ell$-smooth, and identically zero for all $|t|\le \alpha$. We note that $\linkfun[\alpha, \ell]$ is the composition of the Huber function~\cite{huber1992robust} with $t\mapsto\max\{0,|t|-\alpha\}$. Chain constructions of the form $\sum_{i \in[N]} \linkfun(x\coind{i}-x\coind{i-1})$ are common in lower bounds for convex optimization \cite[cf.][]{nesterov2018lectures,woodworth2016tight}. For our construction, we instead spread the link components across the different elements. Formally, for $i\in[N]$, we define the $i$th function in our hard instance as
\begin{equation}\label{eq:hard-instance-def}
\hat{f}^{\{T,N,\ell\}}_i(x) \coloneqq \begin{cases} \linkfun \prn*{ \frac{x\coind{i} - x\coind{i-1}}{{2}} } & i \le T \\ 0 & \mbox{otherwise}\\ \end{cases} ~~\mbox{where}~~\alpha_T \coloneqq \frac{1}{4 T^{3/2}}~\mbox{and}~x\coind{0}\coloneqq \frac{1}{\sqrt{T}}.
\end{equation}
The following lemma summarizes the properties of our construction; its proof is straightforward, and we provide it in \Cref{sec:hard-instance-props-proof}.

\begin{restatable}{lem}{lemHardInstanceProps}\label{lem:hard-instance-props}
For every $T,N\in \N$ and $\ell \ge 0$ such that $T\le N$, we have that
\begin{enumerate}
\item \label{item:hi-zc} The hard instance $(\hat{f}^{\{T,N,\ell\}}_i)_{i\in [N]}$ is an $\alpha_T$-robust $N$-element zero-chain.
\item \label{item:hi-lip} The function $\hat{f}^{\{T,N,\ell\}}_i$ is 1-Lipschitz and $\ell$-smooth for every $i\in [N]$.
\item \label{item:hi-optgap} For $x\in \R^d$ with $\prog[\alpha_T](x) < T$, the objective $\hat{F}_{\max}^{\{T,N,\ell\}}(x) = \max_{i\in [N]} \hat{f}^{\{T,N,\ell\}}_i(x)$ satisfies
\begin{equation*}
\hat{F}_{\max}^{\{T,N,\ell\}}(x)- \min_{x_\star\in\mathbb{B}_1(0)} \hat{F}_{\max}^{\{T,N,\ell\}}(x_\star) \ge \linkfun\prn*{\frac{3}{8T^{3/2}}} \ge \min\crl*{ \frac{1}{8T^{3/2}}, \frac{\ell}{32T^3} }.
\end{equation*}
\end{enumerate}
\end{restatable}

\subsection{Lower bound statement}\label{sec:lb-statement}

Finally, we combine the results of the previous sections to state our lower bound. Recall that $a\wedge b \coloneqq \min\{a,b\}$ denotes binary minimization.

\begin{restatable}{theorem}{thmLB}\label{thm:lb}
Let $\Lf, \Lg, R > 0$, $\epsilon < \Lf R \wedge \Lg R^2$, $N\in\N$ and $\delta\in (0,1)$. Then, for any (possibly randomized) algorithm there exist $\Lf$-Lipschitz and $\Lg$-smooth functions $(f_i)_{i\in[N]}$ with domain $\mathbb{B}_R^{d}(0)$, for $d=O \prn*{ \brk*{ \prn[\big]{\frac{\Lf R}{\epsilon}}^{2} \wedge \prn[\big]{\frac{\Lg R^2}{\epsilon}}} \log \frac{N(\Lf R \wedge \Lg R^2)}{\epsilon } }$, such that with probability at least $\frac{1}{2}$ over the randomness of the algorithm, the first
\begin{equation} \label{eq:final-lower-bound}
\Omega \prn*{ N \brk*{ \prn[\Big]{\frac{\Lf R}{\epsilon}}^{2/3} \wedge \prn[\Big]{\frac{\Lg R^2}{\epsilon}}^{1/3}} + \brk*{ \prn[\Big]{\frac{\Lf R}{\epsilon}}^2 \wedge \prn[\Big]{\frac{N \Lg R^2}{\epsilon}}^{1/2}} }
\end{equation}
queries of the algorithm are all $\epsilon$-suboptimal for $F_{\max}(x) = \max_{i\in [N]} f_i(x)$.
\end{restatable}

See \Cref{sec:lb-proof} for a proof of this result. The first (linear-in-$N$) term in the lower bound follows from \Cref{prop:prog-control} and \Cref{lem:hard-instance-props} via a rescaling argument. The second (sublinear-in-$N$) term is a direct consequence of existing lower bounds~\cite{diakonikolas2020lower,woodworth2016tight,fang2018near}. We remark that our lower bound is stated for optimization constrained to a ball of radius $R$, while our upper bounds assume unconstrained optimization given a minimizer of norm at most $R$. These two settings are essentially equivalent; in \Cref{app:lb-unconstrained} we sketch a general technique for transferring lower bounds to the unconstrained setting.

In \Cref{table:summary} we specialize our lower bound to the cases $\Lg=\infty$ and $\Lg=\Theta(\Lf^2 / \epsilon)$, showing that it matches our upper bounds (up to polylogarithmic factors) for $N=\Omega( (\Lf R / \epsilon)^2 )$ in the former case and for any $N$ in the latter. More broadly, when $\Lg=\Theta(\Lf^{2+q} R^{q} / \epsilon^{1+q} )$ our lower and upper bounds match for any $N$ and $q\in[0,2/3]$. For $\Lg = o(\Lf^2 /\epsilon)$ and $\Lg = \omega(\Lf^{8/3}R^{2/3}/\epsilon^{5/3})$, however, gaps remain between our upper and lower bounds. We discuss these gaps in the following section.

\section{Discussion}\label{sec:discussion}

To conclude the paper, we provide some commentary on our results and the possibilities for improving them. For simplicity, in this section we revert to the setting $\Lf=R=1$ used in the introduction. We also use $a \ll b$ as shorthand for $a = O(b)$, and ignore constant and logarithmic factors throughout.
\subsection{Gaps between the upper and lower bounds} \paragraph{Regimes where a gap exists.} Comparing our upper bound in~\Cref{thm:ub} to our lower bound in~\Cref{thm:lb}, we identify two regimes where our upper and lower bounds disagree by more than polylogarithmic factors. The first is the \emph{smooth regime} $\Lg \ll \epsilon^{-1}$, where the lower bound is $\Omega(N\Lg^{1/3}\epsilon^{-1/3} + \sqrt{N\Lg\epsilon^{-1}})$ while our upper bound is $\Otil{N\epsilon^{-2/3} + \sqrt{N}\epsilon^{-1}}$; a different algorithm gives a better oracle complexity of $O(N\sqrt{\Lg\epsilon^{-1}})$ (see \Cref{sec:app-discussion-nesterov}), which still falls short of the lower bound. The second regime is the \emph{non-smooth} regime $\Lg \gg \epsilon^{-1}$, where both the upper and lower bounds share the term $N\epsilon^{-2/3}$. Comparing the lower bound to the variance-reduced upper bound~\eqref{eq:final-rate-smooth}, we see that they disagree if and only if $N\epsilon^{-2/3} + \epsilon^{-2} \ll \sqrt{N\Lg \epsilon^{-1}}$, which is equivalent to $N \ll \Lg \epsilon^{1/3}$ and $N \gg \epsilon^{-3}/\Lg$. Clearly, this is possible only when $\Lg \gg \epsilon^{-5/3}$, and so we conclude that the rate~\eqref{eq:final-rate-smooth} is in fact optimal whenever $\epsilon^{-1} \ll \Lg \ll \epsilon^{-5/3}$. Moreover, the upper bound~\eqref{eq:final-rate-non-smooth} matches the lower bound whenever $N \gg \epsilon^{-2}$ for any $\Lg\gg \epsilon^{-1}$. We conclude that gaps in the non-smooth regime exist only for $\Lg \gg \epsilon^{-5/3}$ and $\epsilon^{-3} / \Lg \ll N \ll \min\{\epsilon^{-2}, \epsilon^{1/3}\Lg\}$. \paragraph{Closing the gap in the non-smooth regime.} Improving the bound~\eqref{eq:final-rate-non-smooth} from $\Otil{N\epsilon^{-2/3} + \epsilon^{-8/3}}$ to $\Otil{N\epsilon^{-2/3} + \epsilon^{-2}}$ would imply that~\eqref{eq:final-lower-bound} gives the optimal rate for any $\Lg \gg \epsilon^{-1}$. The main barrier to obtaining such an improvement is our accuracy requirement $\delta_t = O(\epsilon / \lambda_t)$ in \Cref{alg:ms-bacon-redux}. Meeting this requirement with SGD means that each oracle implementation costs $\Otil{N+\epsilon^{-2}}$ function/gradient evaluations, and multiplying this cost by the number of rounds $\Otil{\epsilon^{-2/3}}$ yields the exponent $8/3$. A variant of \Cref{alg:ms-bacon-redux} which can handle less accurate BROO\xspace outputs could close this gap by allowing a more efficient SGD-based implementation. \paragraph{Closing the gap in the smooth regime.} The gap between our upper and lower bounds when $\Lg \ll \epsilon^{-1}$ is more fundamental than the one arising for $\Lg \gg \epsilon^{-5/3}$, because it affects the term linear in $N$. The barrier to improving the linear term in our algorithm is the ball radius. Any $r_\epsilon$-BROO\xspace implementation with $\Omega(N)$ cost will have overall complexity $\Omega(Nr_\epsilon^{-2/3})$. The techniques we develop in \Cref{sec:boo-implementation} only allow us to support $r_\epsilon=\Otil{\epsilon}$, because this is the largest radius at which the exponentiated softmax is stable (see~\Cref{lem:bef-approx}). \paragraph{Conjectures and future work.} We conjecture that our lower bound is in fact optimal in both smoothness regimes. In future work we will attempt to close the remaining complexity gaps described above.
\subsection{Some necessary algorithmic structures} We now argue that several aspects of our method, namely function value access, individual function queries, and randomization, are necessary in any method that achieves (or improves on) our complexity bounds. \paragraph{Function value access.} It is possible to minimize a convex function $f$ by iterative (sub)gradient evaluations, without access to the value of $f$ itself. In contrast, all algorithms for minimizing $F_{\max}(x) = \max_{i\in[N]} f_i(x)$ \emph{must} query the values of the $f_i$'s in addition to their gradients. To see why this is so, consider the case where $f_i(x) = \Pi(i) - x_i$, where $\Pi$ is a random permutation of $[N]$ and the domain is the unit Euclidean ball. The global minimum of $\max_{i\in [N]}f_i(x)$ is the $\Pi^{-1}(N)$-th standard basis vector. However, gradients provide no information about $\Pi^{-1}(N)$, since $\nabla f_i(x) = -e_i$ for all $x$, independent of $\Pi$. \paragraph{Individual-function access.} The algorithms from prior work in \Cref{table:summary} (namely the subgradient method, AGD on softmax and AGD on linearization) are full-batch methods: they proceed by querying all $N$ functions $f_1, \ldots, f_N$ at the same point $x_t$ and using the result to generate the next query point $x_{t+1}$. In contrast, our BROO\xspace implementations proceed by sampling an index $i_t$, computing $\nabla f_{i_t}$ at $x_{t}$ (and potentially another point), and generating the next query $x_{t+1}$. Full-batch methods are more amenable to parallelization, but for our problem they have demonstrably worse oracle complexity. To see this, consider the case where all the $f_i$'s are identical and equal to the standard hard instance for convex optimization. For such input, any full-batch method will have oracle complexity $\Omega(N\min\crl{\epsilon^{-2}, \sqrt{\Lg \epsilon^{-1}}})$~\cite{diakonikolas2020lower}, which is worse than our upper bounds for any $\Lg \gg \epsilon^{-1/3}$ and sufficiently large $N$. \paragraph{Randomization.} Another contrast between the prior algorithms in \Cref{table:summary} and our algorithm is that the former are deterministic while ours is randomized. \citet{woodworth2016tight} prove a lower bound of $\Omega(N\min\crl{\epsilon^{-2}, \sqrt{\Lg \epsilon^{-1}}})$ gradient queries for any deterministic method for minimizing the \emph{average} of $N$ functions. Observing that the maximum of the $N$ functions in their construction has the same minimum value as their average (and that the maximum upper bounds the average at any other point), we conclude that this lower bound is also valid for any deterministic method for solving the problem~\eqref{eq:problem}. Therefore, randomization is necessary for obtaining our improved rates of convergence. \subsection{Practical considerations} The main purpose of the algorithms we develop in this paper is to clarify the complexity of the fundamental optimization problem~\eqref{eq:problem}. Nevertheless, since this problem formulation is relevant for a number of machine learning tasks~\cite{clarkson2012sublinear,hazan2011beating,shalev2016minimizing}, it is interesting to try to develop a more practical variant of our algorithms. Two aspects of our method which we believe will be particularly useful in practice are the gradient estimation scheme we use in \Cref{alg:innerloop-SGD} and the momentum scheme in \Cref{alg:ms-bacon-redux}. However, a number of aspects of our method seem rather impractical.
First, the theory instructs us to constrain subproblem solutions to a very small ball of radius $r_\epsilon$, roughly $\epsilon / \Lf$. Since usually neither $\epsilon$ nor $\Lf$ is known in advance, the parameter $r_\epsilon$ must be tuned. Moreover, choosing $r_\epsilon$ to be small in keeping with the theory would likely mean very slow progress in the early stages of the algorithm. A second impractical aspect is the bisection stage in \Cref{alg:ms-bacon-redux}: while in theory the bisection only increases complexity by a logarithmic factor, in practice it entails solving a considerable number of sub-problems without making progress. This bisection overhead is an issue with Monteiro-Svaiter acceleration more broadly and a topic of active research~\cite{song2019unified,nesterov2019implementable}. \newpage \arxiv{\section*{Acknowledgment}} YC was supported in part by Len Blavatnik and the Blavatnik Family foundation, and the Yandex Initiative for Machine Learning. YJ was supported by a Stanford Graduate Fellowship. AS was supported in part by a Microsoft Research Faculty Fellowship, NSF CAREER Award CCF-1844855, NSF Grant CCF-1955039, a PayPal research award, and a Sloan Research Fellowship. \arxiv{\bibliographystyle{abbrvnat}}
{ "timestamp": "2021-05-06T02:05:24", "yymm": "2105", "arxiv_id": "2105.01778", "language": "en", "url": "https://arxiv.org/abs/2105.01778" }
\section{Introduction} In 2020, the largest pandemic in recent history spread through the world: COVID-19. As of May 1st, 2021, there have already been 152 million cases and 3 million deaths around the world \cite{noauthor_covid-19_nodate}. In many regions, those numbers are considerably under-counted \cite{charlie_excess_2021}. Beyond that, many parts of the world have slowed or stopped due to the human, economic, and social impacts of distancing and protection measures. Motivated by the ongoing pandemic and by predictions of future pandemics \cite{dodds_disease_2019}, this project seeks to create a mask detection system that is capable of recognizing whether people in surveillance-type video streams are correctly wearing their masks. \subsection{Pipeline Overview} Due to the real-time and real-world deployment constraints of such a task, we decided to tackle this problem on two fronts: performance and efficiency. The first pipeline, which focuses on accuracy, uses a pre-trained face detector to extract faces from the frame, then passes the cropped faces to an image classifier. This mask-wearing classifier is trained on a large-scale synthetic dataset of 180,000 images divided into three classes: mask correctly worn, mask incorrectly worn, and no mask worn. We experimented with various models for this classifier, from traditional machine learning approaches such as random forests and Haar cascades to state-of-the-art computer vision architectures such as DenseNet and ResNet. The second pipeline leverages a real-time object detection architecture called YOLO, which is short for ``You Only Look Once''. As implied by the name, this is an extremely fast and efficient model designed specifically for object localization and classification in real-time settings. We trained this model on a real-world dataset with face bounding box labels and mask-wearing classifications. Due to the time-consuming process of bounding box annotation and the lack of methods to generate synthetic images, this dataset is comparatively small with 14,233 images. \subsection{Related Works} Projects with similar intent have been quite popular due to the ongoing pandemic. In the paper by Adnane Cabani and his colleagues from Universite de Haute-Alsace, a method was proposed that utilizes Haar-cascade-based feature detectors to individually determine the presence of the nose and mouth in a detected face \cite{cabani_maskedface-net_2021}. Their logic follows that no mask is worn if we can successfully detect a mouth on the face, a mask is worn incorrectly if we can detect a nose but not a mouth, and a mask is worn correctly if we can detect neither a nose nor a mouth. This approach is efficient and intuitive but has severe limitations: it can only process full-frontal faces, and one can easily trick the detector by covering the mouth and nose with a hand. Another approach is proposed by Chandrika Deb in his Github project \cite{deb_facemaskdetection_2021}. Similar to our first proposed pipeline, he utilizes a Caffe-based face detector in conjunction with a fine-tuned MobileNetV2 for mask-wearing classification. He was able to achieve a decent 0.93 f1-score on the classifier. Nevertheless, he used a very small dataset of 4095 images, which might not be representative of the different ethnicities, genders, and types of facial coverings that the system might encounter in real-world settings. His data was also split into only two classes: with mask and without mask.
As a result, his model is incapable of detecting whether someone is wearing their mask incorrectly (i.e., with the mask below the nose). Lastly, the Github project by the Chinese company AIZOO Tech uses an object detection network for both face mask detection and classification, similar to our second proposed pipeline \cite{aizootech_facemaskdetection_2021}. They used a lite version of SSD, which is short for Single-Shot Multi-box Detector \cite{liu_ssd_2016}, and achieved around 0.9 in precision and recall. This architecture is very similar to YOLO in both its underlying principle and its intended applications. However, the original SSD was published in 2015, which could be quite outdated compared to the fifth iteration of YOLO that we are using, published in late 2020. Additionally, their dataset contains only 4095 labeled images, which could be too small for the reasons discussed previously. \section{Face Detector $\rightarrow$ Classifier Pipeline} We need two networks for this pipeline: a face detector and a mask-wearing image classifier. Since face recognition is already a well-defined and established task in computer vision with many existing solutions, there is no need to reinvent the wheel. For the face detector, we therefore turn to an existing face recognition package built on top of PyTorch called Facenet \cite{esler_facenet-pytorch_2021}. This package contains multiple pre-trained deep learning face detectors, and we will specifically be using MTCNN, which is short for Multi-task Cascaded Convolutional Networks, due to its superior speed and efficiency \cite{zhang_joint_2016}. As for the image classifier component that our project mainly focuses on, we test various models ranging from traditional machine learning algorithms to state-of-the-art deep learning architectures. The experiment process and results are explained in the next few sections. \subsection{Dataset and Pre-processing} For the image classification pipeline, we used the MaskedFace-Net dataset created by Université de Haute-Alsace, which contains 133,784 artificially generated images of size 1024 by 1024, each containing a single human face that is either correctly or incorrectly wearing a mask \cite{cabani_maskedface-net_github_2021}. These images have been artificially generated by placing a blue medical mask over images of uncovered frontal faces. To complete the third category, where no mask is worn, we used 70,000 images from the Flickr Face Dataset developed by NVIDIA AI Lab \cite{tero_nvlabsffhq_2021}. In summary, there are 67,049 instances in the correctly masked category, 66,734 instances in the incorrectly masked category, and 70,000 instances in the no mask category. The first step in pre-processing was to resize all images from 1024x1024 to 128x128, in order to scale down the original 38GB input. After the data was reduced to a more manageable size, we randomly split the whole dataset into train, validation, and test sets with a ratio of 80:10:10. Lastly, we performed data augmentation for translation and scale invariance. The training images were first normalized and then passed through a series of built-in PyTorch transformations, including ColorJitter, RandomRotation, RandomResizedCrop, GaussianBlur, and RandomErasing. This was done to minimize the overfitting of our model to the training data.
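As a minimal illustration of this augmentation step, the pipeline can be assembled with torchvision as sketched below; all parameter values are illustrative assumptions rather than our tuned settings, and the placement of the normalization step may differ from our actual training code.

\begin{verbatim}
import torchvision.transforms as T

# Illustrative augmentation pipeline for the 128x128 face crops.
# All parameter values below are placeholder assumptions.
train_transform = T.Compose([
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    T.RandomRotation(degrees=15),
    T.RandomResizedCrop(size=128, scale=(0.8, 1.0)),
    T.GaussianBlur(kernel_size=3),
    T.ToTensor(),  # converts the PIL image to a [0, 1] tensor
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
    T.RandomErasing(p=0.25),  # operates on tensors, so it comes last
])
\end{verbatim}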
\subsection{Classifier Baseline} Before looking at advanced neural network architectures, we wanted to see whether simpler models would achieve high performance. Given that models tend to have a trade-off between performance and speed, starting with simpler models allowed us to get a baseline for performance (see Table 1). Haar-based facial feature detection was one approach we tried, as this is a classic tried-and-true pattern recognition algorithm for detecting facial features \cite{viola_rapid_2001}. We also applied a random forest with a max depth of 2. As an ensemble method, random forests have great performance and prevent overfitting, while also having the advantage of being highly efficient. Given that mask wearing would depend on the pixels at the center of the image, it is possible that random forests would learn these feature combinations. And it appears they did: the random forest model achieved 94.33\% accuracy, performing worst on the incorrectly masked class at 87\% and best on the correctly masked class at 99\%. It was also by far our fastest model, processing 6951 inferences per second on CPU. Our last base model was a vanilla CNN with two convolutional layers, each followed by a leaky ReLU activation function and a max pooling layer. These were followed by two linear layers, also using leaky ReLU as the activation function. This model achieved 98.55\% test accuracy; a minimal sketch of this architecture is given below. \begin{table*} \begin{center} \begin{tabular}{|l|c|c|c|} \hline Model & Test Accuracy (class-wise) & Test Accuracy (total) & Inferences/Sec \\ \hline\hline Haar-cascade & .90/.49/.79 & 0.7266 & 45.95 (CPU) \\ Random Forest & .99/.87/.97 & 0.9433 & 6951.18 (CPU) \\ Vanilla CNN & N/A & 0.9855 & 775.35 (V100) \\ \hline \end{tabular} \end{center} \caption{Baseline Model Performance} \end{table*}
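As a point of reference, the baseline CNN just described might look as follows in PyTorch; the channel widths, kernel sizes, and hidden dimension are illustrative assumptions, since the exact hyperparameters are not reported above.

\begin{verbatim}
import torch.nn as nn

# Sketch of the vanilla CNN baseline: two conv + leaky-ReLU + max-pool
# blocks followed by two linear layers. Widths and kernel sizes are
# illustrative assumptions, not the exact values from our experiments.
class VanillaCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.LeakyReLU(),
            nn.MaxPool2d(2),  # 128x128 -> 64x64
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.LeakyReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 128),
            nn.LeakyReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
\end{verbatim}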
\subsection{Advanced Models} In determining the best state-of-the-art deep learning architectures to use for transfer learning, we wanted to weigh accuracy against network size. As a proxy for size, we consulted graphs that plot accuracy against the number of operations and number of parameters (see Figure 1). We looked for models in the upper-left quadrant with few parameters, as indicated by the bubble size. As a result, we picked DenseNet161 \cite{huang_densely_2018}, MobileNet v2 \cite{howard_mobilenets_2017}, Inception v3 \cite{szegedy_rethinking_2015}, and ResNet18 \cite{he_deep_2015}. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{model_size_accuracy.jpeg} \end{center} \caption{Top-1 one-crop accuracy versus amount of operations required for a single forward pass in multiple popular neural network architectures \cite{culurciello_analysis_2018}.} \label{fig:long} \label{fig:onecol} \end{figure} \begin{table*} \begin{center} \begin{tabular}{|l|c|c|} \hline Model & Test Accuracy (total) & Inferences/Sec (V100) \\ \hline\hline ResNet18 & 0.9975 & \textbf{680.83} \\ MobileNet v2 & 0.9995 & 577.15 \\ DenseNet161 & \textbf{0.9997} & 301.49 \\ Inception v3 & 0.9983 & 425.99 \\ \hline CNN (Distillation) & 0.9985 & 775.35 \\ \hline \end{tabular} \end{center} \caption{Advanced Model Performance} \end{table*} Here we give a brief architectural highlight of the four networks that we chose. ResNet18 makes use of shortcut connections built upon residual blocks, and contains about 11 million parameters. MobileNet v2 is a lightweight model with only 3 million parameters; it uses depth-wise separable convolutions and linear bottlenecks between the layers. In DenseNet, each layer receives feature maps from all preceding layers, which strengthens feature propagation. The total number of parameters in this architecture is 29 million. Inception v3 is a much wider network architecture with about 24 million parameters; multiple kernels of small and different sizes are implemented within the same layer. As shown in Table 2, which summarizes the performance of the advanced models, the test accuracies across the board are significantly higher than those of the baseline models, all exceeding 99.5\%. The highest accuracy of 99.97\% was achieved by DenseNet, the second highest of 99.95\% by MobileNet v2, followed by Inception v3 at 99.83\%, with ResNet18 lowest at 99.75\%. In terms of inference speed, ResNet18 was the fastest, processing 680 instances per second on a Tesla V100, while DenseNet was the slowest model, inferring only 301 instances per second. The trade-off between accuracy and speed was noticeable here. \subsection{Distillation} As our use case required model speed as well as performance, we used the technique of knowledge distillation to see if we could achieve comparable performance with a smaller model. Using our initial CNN model as the student, we performed vanilla distillation using our highest-performing model, DenseNet (99.97\% test accuracy), as the teacher network. The result was a CNN with 99.85\% test accuracy and the best inference speed out of all our advanced models: 775.35 instances per second on a Tesla V100. The CNN had 15\% as many parameters as DenseNet. Given its small size, fast computation, and high accuracy, the CNN after distillation met our project goals and would be a good model to use going forward.
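The sketch below shows the usual form of a vanilla distillation loss, mixing a softened teacher-matching term with the standard cross-entropy; the temperature and mixing weight are illustrative assumptions, not our tuned values.

\begin{verbatim}
import torch.nn.functional as F

# Vanilla knowledge-distillation loss. `temperature` and `alpha` are
# illustrative placeholder values, not the settings used in our runs.
def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_probs = F.log_softmax(student_logits / temperature, dim=1)
    # The KL term is scaled by T^2 to keep gradients comparable in size.
    kd = F.kl_div(log_probs, soft_targets,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
\end{verbatim}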
\section{Object Detection Pipeline} \subsection{Curating Dataset} To train or fine-tune an object detection network like SSD or YOLO, we need a dataset with ground-truth bounding box labels in addition to the mask-wearing classifications. This makes gathering a large dataset very challenging: the annotation process is time-consuming, and unlike the first pipeline there is no straightforward method for generating synthetic data. Luckily, we were able to combine two bounding-box-labeled datasets on Kaggle: one from Wobot Intelligence \cite{wobot_face_2020} and another from Andrew Moranhao \cite{maranhao_face_2017}. However, we noticed that the ``no mask worn'' class is severely under-represented in both datasets. To address this, we downloaded about 3000 images from the COCO dataset \cite{lin_microsoft_2015} that contain the label ``Person'' and ran the MTCNN face detector discussed previously to artificially generate facial bounding boxes. We kept only detection outputs with a confidence score above 0.9, so this method should be robust enough to generate pseudo ground-truth labels for the ``no mask worn'' class. In the end, our custom dataset contains 7364 positive instances (mask correctly worn) and 6869 negative instances (mask incorrectly worn + no mask worn). \subsection{YOLO v5} For the object detection architecture, we decided to use the fifth iteration of YOLO, which is short for You Only Look Once \cite{jocher_ultralyticsyolov5_2021}. The original paper by Redmon et al. was published in May 2016, and theirs was the first object detection network to combine the problems of object localization (bounding boxes) and classification in one end-to-end differentiable network \cite{redmon_you_2016}. The underlying principle is that YOLO treats detection as a regression problem: it divides the image into a grid, and for each grid cell it simultaneously produces bounding box confidences and class probabilities (see Figure 2). The model then aggregates those results to produce the final bounding box output and classification. This architecture is known for its performance and efficiency on real-time video data, so it is naturally a good fit for our task. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{YOLO.png} \end{center} \caption{YOLO Algorithm Overview \cite{redmon_you_2016}} \label{fig:long} \label{fig:onecol} \end{figure} Over the years, there have been multiple iterations of and improvements on the original YOLO architecture. This brings us to the fifth version, YOLO v5, developed by the company Ultralytics. This latest iteration utilizes Cross Stage Partial Network (CSPNet) \cite{wang_cspnet_2019} as the model backbone and Path Aggregation Network (PANet) \cite{liu_path_2018} as the neck for feature aggregation (see Figure 3). These improvements have led to better feature extraction and a significant boost in the mean average precision (mAP) score. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{yolov5.png} \end{center} \caption{YOLO v5 Architecture Overview \cite{noauthor_overview_nodate}} \label{fig:long} \label{fig:onecol} \end{figure} \subsection{Experiment Results} As Table 3 shows, our fine-tuned YOLO v5 model does very well on the task of face mask detection, achieving an mAP score of 0.898 across both classes on the test data. The validation loss curves (Figure 4) show that it is learning both the bounding box prediction task and the mask-wearing classification task. \begin{figure*} \begin{center} \includegraphics[width=1.0\linewidth]{yolo_result.png} \end{center} \caption{YOLOv5 Validation Loss Curves} \label{fig:short} \end{figure*} \begin{table*} \begin{center} \begin{tabular}{|l|c|c|c|} \hline & Negative mAP & Positive mAP & Total mAP \\ \hline Train & 0.974 & 0.964 & 0.969 \\ Validation & 0.869 & 0.908 & 0.888 \\ Test & 0.894 & 0.902 & 0.898 \\ \hline \end{tabular} \end{center} \caption{YOLOv5 Mean Average Precision} \end{table*}
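As a minimal sketch of how such a fine-tuned checkpoint can be run over a video stream via the torch.hub interface of the YOLOv5 repository (the weight and video file names below are hypothetical placeholders):

\begin{verbatim}
import cv2
import torch

# Load a fine-tuned YOLOv5 checkpoint through torch.hub. The weights
# path "best.pt" is a hypothetical placeholder for our trained model.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

cap = cv2.VideoCapture("walk.mp4")  # placeholder video file
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv5 expects RGB input; OpenCV decodes frames as BGR.
    results = model(frame[:, :, ::-1])
    # results.xyxy[0] holds one (x1, y1, x2, y2, conf, class) per box.
    for *box, conf, cls in results.xyxy[0].tolist():
        print(int(cls), round(conf, 3), box)
cap.release()
\end{verbatim}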
\section{Analysis on Real-World Data} Now that we have both of our face mask detection pipelines, trained on different datasets, how do they compare when run on real-world video data? To test this, we gathered two videos: one of a person sitting in front of a webcam taking his mask on and off, and another of a person filming his walk along the Brooklyn Bridge during the pandemic \cite{youtube_brooklyn_2021}. The webcam video serves as a baseline in a very controlled environment with only a single face close to the camera. The Brooklyn Bridge video more closely resembles a setting where the face detection system could potentially be deployed (i.e., public surveillance). We ran both pipelines on these two videos; sample outputs are shown in Figures 5 and 6. YOLOv5 maintained an impressive 52 FPS on both videos, while the first pipeline (MTCNN + ResNet18) achieved only 6 FPS. Visually verifying the prediction accuracy, we observed that both pipelines performed almost flawlessly on the webcam video, with YOLOv5 producing a more stable detection output throughout. In the Brooklyn Bridge video, YOLOv5 completely outperformed MTCNN + ResNet18. It was able to accurately detect and classify all face instances that appeared in the video. MTCNN + ResNet18, on the other hand, not only missed numerous faces but also produced the correct classification only when the person was wearing a blue medical mask and was extremely close to the camera. Upon closer examination, we observed that a traditional face detector like MTCNN may fail to detect a face when the person is wearing a mask together with other facial coverings such as sunglasses or a hat. These coverings occlude key facial features, making the pipeline fail before the image can even reach our mask-wearing classifier. In contrast, YOLOv5 was designed specifically to overcome this challenge. It was \textbf{not} trained to first detect faces and then classify whether the face has a mask on. Instead, it learned to directly detect ``face with correctly worn mask'' and ``face with incorrectly worn mask''. This key difference enables single-shot architectures like YOLO to excel at object localization and recognition. \begin{figure*} \begin{center} \includegraphics[width=0.33\linewidth]{1.png} \includegraphics[width=0.33\linewidth]{2.png} \includegraphics[width=0.33\linewidth]{3.png} \end{center} \caption{Demo Result of MTCNN + ResNet18 \cite{li_mtcnn_2021}} \label{fig:short} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.33\linewidth]{4.png} \includegraphics[width=0.33\linewidth]{5.png} \includegraphics[width=0.33\linewidth]{6.png} \end{center} \caption{Demo Result of YOLOv5 \cite{li_yolov5_2021}} \label{fig:short} \end{figure*} \section{Social Impact} With research showing that wearing masks would have saved countless lives \cite{peeples_face_2020} and helped stem the pandemic \cite{howard_evidence_2021}, being able to ensure compliance with mask mandates could contribute strongly to ending the pandemic around the world and preventing future ones from growing to an uncontrollable level like this one. Moreover, people may not realize they are wearing masks incorrectly, whether due to a lack of information, movement uncovering their mouth or nose, or forgetting to replace their mask when in a crowd. Tools such as these could be valuable for educating people on mask wearing. Mask detection can also allow businesses and public spaces to reopen quickly and safely. This is powerful, as mental health has deteriorated throughout the pandemic \cite{Bethune_apa_2021}; by making it possible for people to live their lives and interact again, socialization will increase, which can contribute to reduced stress levels and increased resilience \cite{ozbay_social_2007}. While no argument needs to be made for why reopening the world is good, people having the choice to interact with loved ones, earn a paycheck from a rebounding economy, and participate in social activities will be a plus. If face mask detection systems can contribute to the reopening of the world and the curbing of pandemics, exploring these models should be a high priority. Of course, there are risks with such technologies. Real-time AI opens a host of security and ethical risks. If the system is not created in such a way as to prevent data from being stolen, it would be possible for a hacker to steal footage, potentially even with labels showing who is or is not wearing masks, creating a privacy violation. These are already latent risks with any sort of monitoring, but they intensify with the larger system used for AI unless defenses are put in place, like using edge AI \cite{porambage_sec-edgeai_2019}.
Similarly, there are safety risks in terms of malicious actors fooling the networks in order to exhaust security resources, potentially causing fatigue, avoiding detection of maskless people, or distracting from true security risks. These systems could be particular targets, given the polarization of mask wearing. We discuss these in the conclusion, as the risk profiles would need to be analyzed before deployment. In terms of fairness, our dataset incorporates people of various backgrounds, each either not wearing, incorrectly wearing, or correctly wearing a mask. As a result, there should be fewer issues around bias. However, it is still possible that some groups are underrepresented or that the algorithm has issues detecting the masks of a certain group correctly. This could unfairly treat different groups. When incorporating additional datasets or real-time data, it would be important to monitor for any resulting inequity issues. The biggest challenge would be around authoritarianism and privacy. While this deep learning pipeline would be focused on identifying whether mask wearing is done correctly, there would likely be discussions around the trade-off of freedom versus security. There are already debates around the world in terms of pandemic restrictions and mask mandates, with many individuals wanting to decide for themselves. Tools like this one can provide governments more control over their citizens’ behavior, which would be risky. Similarly, non-government organizations, businesses, and people can already determine to some extent what sort of restrictions they want on their property, yet this automates further control that could be detrimental. For example, what if employers start firing employees who trigger multiple maskless alerts? What if those predictions were incorrect? These are real risks that could impact people’s lives. Moreover, the mask detection would need to be treated separately from any technology built on top of it, as people acclimatizing to facial mask detection could make it easier for organizations or governments to roll out other systems like face detection or employee monitoring. Ideally, face detection technologies would be limited to times when a pandemic is emerging or present, so that the benefits of stemming pandemics are more likely to outweigh the social risks of acclimatizing to video monitoring AI. Given the potential benefits of saving lives, improving mental health, and keeping the world going, face mask detection technologies could have a place in ending this pandemic and preventing future ones, improving our collective well-being. At the same time, risks of bias, security, and privacy would need to be evaluated, monitored, and addressed. This is to ensure that these technologies do not give way to social ills that outweigh their benefits. \section{Conclusion} As observed from the performance evaluation of both pipelines, the task of face mask detection in real-time video data can definitely be automated via current deep learning models. In comparing and contrasting the two pipelines, we saw the trade-off between speed and accuracy that each model had to make. Although both pipelines are poised to gain a significant boost in performance given a larger, more diverse, and better curated dataset, it is evident that single-shot object detection architectures such as YOLO are better suited for this task.
Given YOLOv5's blazing-fast inference speed, its satisfactory performance from only a very small dataset, and its lack of dependency on a pre-trained face detector, the pathway to a highly accurate and fast face mask detection system is limited only by the amount of labeled training data. This can be addressed by spending more money on human annotations or by performing a more extensive search to combine existing data. As for the face detector and classifier pipeline, the two immediate areas of further work would be diversification of datasets and further distillation. The biggest gap in our training process is the dataset. We used the largest public dataset that we could find, MaskedFace-Net, for correctly versus incorrectly worn masks. Those images were supplemented with Flickr Faces from NVIDIA AI for non-masked people. However, the masks were synthetically generated, so they may not accurately reflect real-world settings. In addition, the ``photoshopped'' masks were all blue surgical masks, which does not reflect the diversity of masks used in the real world. As a result, finding or creating a diverse dataset of correctly, incorrectly, and non-masked people would be key for improving our model’s performance in the real world. Our dataset was also quite homogeneous in that the photos tended to be akin to headshots. Getting a dataset of people in natural and differing environments, then labeling them, would likely improve real-world performance, as face mask detection systems would likely be deployed in varied environments beyond headshots. Once we have a varied dataset, the next priority would be experimenting with further distillation. While our project used vanilla distillation to achieve high performance, we found that other forms of distillation are quite effective. We only had time to run distillation on a small sample of our data, but we found that KD-Lib's implementation of noisy teacher, self learning, and messy collaboration achieved powerful results \cite{shah_kdlib_2020}. For example, running vanilla distillation, followed by noisy teacher, self learning, messy collaboration, and self learning again resulted in Inception's accuracy going from 37\% to 96.5\% on a sample dataset with a couple of hundred photos. Its teacher network was ResNet with an accuracy of 67\%, which it leapfrogged. Exploring different forms of distillation could help get the accuracy of the system closer to 100\%. At the same time, given the importance of the performance-speed trade-off, it is possible to try distilling progressively smaller models until the model is sufficiently small and fast for real-world deployment. As our demonstration showed, these models can work in real-time systems already, but it is possible to reduce their overhead further. \subsection{Long-term Implications} Longer-term areas of further work include bias evaluation and risk analysis. Once those are addressed, deploying and monitoring in the real world would likely be the next avenue. While our dataset includes a wide representation of people across ages, genders, races, backgrounds, and other demographics, the models have not been evaluated for social equity. Even with models achieving high accuracy, it is possible some groups are unfairly targeted by these models. For this reason, it would be important to do some sort of testing on how the model performs across different groups. This would likely require the images to be labeled with demographic data, and that may be a manual process.
Yet, this can help avoid inequalities resulting from a real-world deployment. In addition, prior to a real-world deployment, an assessment of risk is critical. Can these systems be misused by businesses or governments to determine private information? Can face mask detection be used in ways that are detrimental to individuals and society? What are the benefits and risks of using this technology? Some of these questions go beyond the deep learning models but do concern the pipeline. For example, the security of the feeds and data is critical. Even if the model cannot do more than predict whether a mask is worn correctly, bad actors may be able to hack into the model pipeline and steal footage or even predictions to identify those masking or not. It may even be possible for someone to break in and fool the networks to set off notifications that no one is masking when that is not true. That could exhaust public resources aimed at dealing with public safety, potentially opening up risks elsewhere. The security of this pipeline will be critical. As an extension of this idea, it could be possible for users to fool the networks using adversarial techniques. Distillation is already in our pipeline, which could help make these attacks significantly harder \cite{papernot_distillation_2016}. If these networks are not available to the public, it would be harder for attackers to figure out how to fool them. However, making them commercially available and widespread would provide opportunity for attackers to access them, plus attackers would be able to test their own networks and learn to fool them. That is in addition to the other neural network attacks that exist, such as poisoning by feeding bad data. In any case, we need to be ready for the eventuality that attackers will be able to fool these networks. Various security techniques could be explored, like using generative adversarial models and pruning \cite{cheng_defending_2020}. Manually reviewing the model results periodically and applying additional approaches as needed could mitigate the risk and damage of any attacks. Finally, with both social and technical concerns addressed, the model would be deployed to the real world. Once the deployment strategy is decided, deploying to the real world and using some form of continuous or human-in-the-loop learning could ensure the model continues to perform well, helping to increase correct mask wearing and curb pandemics. Finding ways to use the real-world data would also increase the size of the training set, and therefore the performance of the models, allowing researchers and engineers to use this data to create successively better face mask detection models. {\small \bibliographystyle{ieee}
{ "timestamp": "2021-05-06T02:07:32", "yymm": "2105", "arxiv_id": "2105.01816", "language": "en", "url": "https://arxiv.org/abs/2105.01816" }
\section{Introduction} \noindent Bisimulations play a crucial role in the model theory of modal logic as the canonical notion of \emph{semantic} equivalence: bisimilar worlds necessarily satisfy precisely the same formulae. If the converse is also true, the (usually finitary) logical language is powerful enough to describe the (typically infinitary) semantics: this is the so-called \emph{Hennessy-Milner property}~\cite{HenMil85}. Bisimulations were introduced in \cite{Ben76} to characterise normal modal logic over a classical base as the bisimulation-invariant fragment of first-order logic. Independently, they arose in the field of computer science as an equivalence relation between process graphs \cite{Mil80,Par81}, and as extensional equality in non-wellfounded set theory \cite{Acz88}. By and large, the Hennessy-Milner property is well understood for normal modal logic over a classical base, where it is known to hold for all modally saturated models, see Section 2 of \cite{BRV01}. In the realm of (dual- and bi-)intuitionistic logic and their modal extensions, much less is known. Some explorations are made in \cite{Pat97}, where the Hennessy-Milner property is established for \emph{intuitionistic} propositional logic, interpreted over intuitionistic Kripke models \cite{Kri65}, and in \cite{Dav09}, where a Hennessy-Milner property is given for tense intuitionistic logic in which all modalities are interpreted using a single additional relation. Besides, \cite{Bou04} contains Hennessy-Milner results for strict-weak languages, and \cite{Pre14} discusses a Hennessy-Milner result for unimodal extensions of positive, intuitionistic and bi-intuitionistic logic. In this paper we aim to derive Hennessy-Milner properties for a large variety of logics using the notion of \emph{image-com\-pact\-ness}. A relation is image-compact if the successor set of each single point is compact in a topology that includes all truth sets of formulae as clopens. Similar methods have previously been used in the setting of normal modal logic over a classical base \cite{BonKwi95} and unimodal logic over a positive, intuitionistic and bi-intuitionistic base \cite{Pre14}. Our results apply to intuitionistic, dual-intuitionistic, and bi-intuitionistic propositional logic, as well as their extensions with normal modal operators. Moreover, we can use them to obtain new Hennessy-Milner type results for various logics previously studied, notably modal intuitionistic and tense bi-intuitionistic logic. Technically, we show that logical equivalence and bisimulations coincide for image-compact Kripke models, and obtain a (known) characterisation for intuitionistic propositional logic. We then dualise the semantics to obtain the same result for \emph{dual-intuitionistic logic}, which is the extension of positive logic with a binary subtraction arrow $\bito$ residuated with respect to disjunction. While this may seem like a mathematical curiosity at first, subtraction has found multiple applications. In computer science it can be used to describe control mechanisms such as co-routines \cite{Cro04}, and in philosophy the subtraction arrow provides a tool to reason about refutation \cite{Tra17,Res97}. Thereafter, we merge the results for intuitionistic propositional logic and its dual to obtain a characterisation of bisimulation for bi-intuitionistic logic (which can be viewed as the union of intuitionistic and dual-intuitionistic logic) in terms of logical equivalence.
Bi-intuitionistic logic is also known as subtractive logic \cite{Cro04} and Heyting-Brouwer logic \cite{Rau74b}, and was introduced by Rauszer with Kripke semantics and a Hilbert calculus \cite{Rau80}. We refer to \cite{GorShi20} for an excellent overview of the logic, which moreover clarifies some of Rauszer's confusions. In a second step, we extend the underlying propositional languages with modal operators that are interpreted as Bo\v{z}i\'{c} and Do\v{s}en did in \cite{BozDos84}, where $\Box$ and $\Diamond$ are a priori unrelated modalities. Our approach is similar to the propositional case: a Hennessy-Milner theorem for intuitionistic propositional logic augmented with $\Box$ gives, by duality, an analogous theorem for dual-intuitionistic logic with $\Diamond$, and both can be combined to obtain the same for bi-intuitionistic logic extended with an arbitrary number of $\Box$- and $\Diamond$-operators. Finally, we apply our results to obtain new Hennessy-Milner theorems for a large variety of logics studied in the literature. These fall into two classes: various flavours of intuitionistic modal logic \cite{Ono77,Fis81,PloSti86,WolZak98} and various flavours of tense bi-intuitionistic logic \cite{GorPosTiu10,SteSchRye16,SanSte17}. \paragraph{Structure of the Paper} In Section \ref{sec:IKM} we recall intuitionistic Kripke frames and models as semantics for intuitionistic, dual-intuitionistic and bi-intuitionistic logic. We give the definition of general frames and use these to define the notions of image-compactness and pre-image-compactness. Subsequently, in Section \ref{sec:inst}, we show how one can relate the relations of logical equivalence for different languages, borrowing a simple observation from the theory of institutions. Bisimulations between intuitionistic Kripke models are defined in Section \ref{sec:non-modal}, and the notions of (pre-)image-compactness are shown to give rise to Hennessy-Milner type results for (bi- and dual-)intuitionistic logic. In Section \ref{sec:modal} we extend our scope to modal extensions of the previously studied logics. We give a suitable notion of frame and model and define bisimulations between them. Again, the notions of (pre-)image-compactness give rise to Hennessy-Milner results. We then specialise these results to obtain Hennessy-Milner theorems for a number of logics studied in the literature in Section \ref{sec:app}. Finally, in Section \ref{sec:ic-vs-sat} we detail how in some cases image-compactness coincides with notions of saturation, and in Section \ref{sec:conc} we suggest several avenues for further research. \paragraph{Related Work} As mentioned above, in \cite{Bou04} the author proves Hennessy-Milner type theorems for strict-weak languages. Amongst such languages are intuitionistic logic, where implication is viewed as a strict arrow, dual-intuitionistic logic, modelling subtraction as a weak arrow, and bi-intuitionistic logic. In fact, the framework in {\it op.~\!cit.}~allows one to add as many such arrows as desired. The strict and weak arrows are interpreted using a relation in the same way implication and subtraction are interpreted (see Section \ref{sec:IKM} below). Moreover, every arrow gives rise to a box- or diamond-like modality via $\Box\phi := \top \to \phi$ and $\Diamond\phi := \phi \leftharpoondown \bot$, where $\leftharpoondown$ denotes a weak arrow. However, boxes and diamonds are not defined separately.
This means that, when proving that some relation satisfies the back-and-forth conditions of a bisimulation, one can always make use of the arrows interpreted via each relation in the frame. This simplifies the proof of Hennessy-Milner results, because each clause resembles the proof of \cite[Theorem 21]{Pat97} or Theorem \ref{thm:hm-int} below, or its dual. In Section \ref{sec:modal} of the current paper, dealing with normal modal extensions of (bi- and dual-)intuitionistic logic, we do not have this luxury. In \cite{Pre14} the author considers modal extensions of positive, intuitionistic and bi-intuitionistic logic. Moreover, the relation used to interpret the modalities is not required to interact with the underlying partial order at all. The level of generality forces the author to obtain a Hennessy-Milner theorem via a duality, because the potential absence of an implication or subtraction arrow frustrates a more direct approach like that in \cite[Theorem 21]{Pat97} or \cite[Proposition 2.54]{BRV01}. By cleverly extending the duality to a dual adjunction, a slightly larger Hennessy-Milner class is derived. However, the models it contains are still based on pre-Priestley spaces. In our setting we begin with (bi- or dual-)intuitionistic logic, so that we always have an arrow in our language. Furthermore, the relations we use to interpret additional modal operators are required to satisfy certain coherence conditions with respect to the pre-order underlying a frame. These extra constraints allow us to derive a stronger Hennessy-Milner result. Finally, in \cite{Dav09} the author derives a Hennessy-Milner theorem for tense intuitionistic logic. This is a bit further removed from our research, because the underlying intuitionistic logic is interpreted in topological spaces, rather than the more restrictive intuitionistic Kripke frames (= Alexandrov spaces) used here. We discuss this setting as a potential avenue for further research in the conclusion. \paragraph{Relation to Predecessor Paper} The current paper is an extension of preliminary results reported in \cite{GroPat19}. Conceptually, we identify the core notion of image-compactness as the key stepping stone in establishing Hennessy-Milner type theorems. Technically, this yields stronger results: in {\it op.~\!cit.}, we established Hennessy-Milner type theorems for descriptive and finite models of bi-intuitionistic logic. Both are special cases of (pre-)image-compact models. Moreover, (pre-)image-compact models are closed under disjoint unions, whence closure under disjoint unions, reported in {\it op.~\!cit.}, is automatic, and all results follow from Theorem \ref{thm:hm-bi-int} below. Similarly, the results from Section 5 of \cite{GroPat19} about descriptive and finite $\mathsf{Bi\hyphen int}_{\Box\Diamond}$-models are subsumed by Theorem \ref{thm:hm-mod-bi-int}, again noting that image-compactness subsumes both finiteness and being descriptive. Finally, the treatment of bisimulations for modal and epistemic intuitionistic, and tense bi-intuitionistic logic is new. \section{Intuitionistic Kripke Models and Image Compactness}\label{sec:IKM} \noindent We recall the Kripke semantics of intuitionistic, dual-intuitionistic and bi-intuitionistic propositional logic, and introduce the semantic notion at the heart of our results: image-compact relations. Throughout the paper, we write $\Prop$ for a (possibly infinite) set of propositional variables.
\begin{defn} The \emph{language} $\mathsf{Bi\hyphen int}(\Prop)$ of bi-intuitionistic propositional logic over the set $\Prop$ of propositional variables is given by the grammar $$ \phi ::= \top \mid \bot \mid p \mid \phi \wedge \phi \mid \phi \vee \phi \mid \phi \to \phi \mid \phi \bito \phi , $$ where $\to$ is intuitionistic implication and $\bito$ its dual, sometimes called \emph{subtraction}. The language $\mathsf{Int}(\Prop)$ of intuitionistic propositional logic is the set of $\bito$-free bi-intuitionistic formulae, and the language $\mathsf{Int}^{\partial}(\Prop)$ consists of all implication-free formulae. \end{defn} \noindent All three languages can be interpreted over intuitionistic Kripke models. These are simply pre-ordered sets, i.e., sets with a reflexive and transitive relation on them. If $(X, \leq)$ is a pre-order and $a \subseteq X$ then we write ${\uparrow}a = \{ y \in X \mid x \leq y \text{ for some } x \in a \}$ for the upwards closure of $a$, and for $x \in X$ we abbreviate ${\uparrow}x := {\uparrow}\{ x \}$. The set $a$ is called an \emph{upset} if ${\uparrow}a = a$, and we write $\fun{Up}(X, \leq)$ for the collection of upsets of $(X, \leq)$. \begin{defn}\label{def:interpr} An \emph{intuitionistic Kripke frame} is a pre-ordered set $(X, \leq)$. An \emph{intuitionistic Kripke model} is a triple $(X, \leq, V)$ where $(X, \leq)$ is a pre-order, and $V: \Prop \to \fun{Up}(X, \leq)$ is an upset-valued valuation. The \emph{truth} of bi-intuitionistic formulae in an intuitionistic Kripke model $\mo{M} = (X, \leq, V)$ at a world $x \in X$ is defined inductively by \begin{align*} \mo{M}, x \Vdash \top &\quad\text{always} \\ \mo{M}, x \Vdash \bot &\quad\text{never} \\ \mo{M}, x \Vdash p &\iff x \in V(p) \\ \mo{M}, x \Vdash \phi \wedge \psi &\iff x \Vdash \phi \text{ and } x \Vdash \psi \\ \mo{M}, x \Vdash \phi \vee \psi &\iff x \Vdash \phi \text{ or } x \Vdash \psi \\ \mo{M}, x \Vdash \phi \to \psi &\iff \text{for all } y \geq x, \text{ if } y \Vdash \phi \text{ then } y \Vdash \psi \\ \mo{M}, x \Vdash \phi \bito \psi &\iff \text{there exists } y \leq x \text{ such that } y \Vdash \phi \text{ and } y \not\Vdash \psi. \end{align*} We write $x \logeq_{\Lbiint} x'$ to denote that two states $x \in X$ and $x' \in X'$ of two intuitionistic Kripke models $\mo{M} = (X, \leq, V)$ and $\mo{M}' = (X', \leq', V')$ are logically equivalent with respect to bi-intuitionistic propositional logic, i.e., \[ \mo{M}, x \Vdash \phi \iff \mo{M}', x' \Vdash \phi \] for all $\phi \in \mathsf{Bi\hyphen int}$. The relations $\logeq_{\Lint}$ and $\logeq_{\Ldint}$ of logical equivalence with respect to $\mathsf{Int}$ and $\mathsf{Int}^{\partial}$ are defined analogously. In an intuitionistic Kripke model $\mo{M} = (X, \leq, V)$, we write $\llb \phi \rrb^{\mo{M}} = \lbrace x \in X \mid x \Vdash \phi \rbrace$ for the truth set of $\phi$ in $\mo{M}$.
\end{defn} If we define the operators $\mathrel{\mkern1mu\underline{\mkern-1mu \to\mkern-2mu}\mkern2mu }, \mathrel{\mkern3.4mu\underline{\mkern-3.4mu \bito\mkern-3.4mu}\mkern3.4mu } : \fun{Up}(X, \leq) \times \fun{Up}(X, \leq) \to \fun{Up}(X, \leq)$ by \begin{align*} a \mathrel{\mkern1mu\underline{\mkern-1mu \to\mkern-2mu}\mkern2mu } b &= \{ x \in X \mid \text{for all } y \in X, \text{ if } x \leq y \text{ and } y \in a \text{ then } y \in b \} \\ a \mathrel{\mkern3.4mu\underline{\mkern-3.4mu \bito\mkern-3.4mu}\mkern3.4mu } b &= \{ x \in X \mid \text{there exists } y \leq x \text{ such that } y \in a \text{ and } y \notin b \} \end{align*} then evidently $\llb \phi \to \psi \rrb^{\mo{M}} = \llb \phi \rrb^{\mo{M}} \mathrel{\mkern1mu\underline{\mkern-1mu \to\mkern-2mu}\mkern2mu } \llb \psi \rrb^{\mo{M}}$ and $\llb \phi \bito \psi \rrb^{\mo{M}} = \llb \phi \rrb^{\mo{M}} \mathrel{\mkern3.4mu\underline{\mkern-3.4mu \bito\mkern-3.4mu}\mkern3.4mu } \llb \psi \rrb^{\mo{M}}$ for any intuitionistic Kripke model $\mo{M}$. The logics $\mathsf{Int}, \mathsf{Int}^{\partial}, \mathsf{Bi\hyphen int}$ are sometimes interpreted over posets (rather than pre-orders), for example in the predecessor paper of this one \cite{GroPat19} and in \cite{ChaZak97}. Here, we choose the more general semantics. The relationship between intuitionistic and dual-intuitionistic logic is best clarified in terms of dual models (with reversed order). \begin{defn}\label{def:ikm-dual} The \emph{dual} of an intuitionistic Kripke model $\model{M} = (X, \leq, V)$ is the model $\dual{\model{M}} = (X, \geq, \dual{V})$, where $\dual{V}$ is defined by $\dual{V}(p) = X \setminus V(p)$. \end{defn} \noindent The notion of dual model is well defined, as the complement $X \setminus a$ of an upset $a$ in a pre-order $(X, \leq)$ is a downset, and hence an upset for the dual pre-order $(X, \geq)$. On the level of languages, we have a translation $(\cdot)^t: \mathsf{Int} \to \mathsf{Int}^{\partial}$ such that $\phi$ is true at a state $x$ in a model $(X, \leq, V)$ if and only if its translation $\phi^t$ is \emph{false} in the \emph{dual model}. We define this inductively via \begin{align*} \bot^t &= \top & \top^t &= \bot & p^t &= p \\ && (\phi \wedge \psi)^t &= \phi^t \vee \psi^t & (\phi \vee \psi)^t &= \phi^t \wedge \psi^t \\ && (\phi \to \psi)^t &= \psi^t \bito \phi^t & (\phi \bito \psi)^t &= \psi^t \to \phi^t \end{align*} Clearly, $(\cdot)^t$ is an involution of $\mathsf{Bi\hyphen int}$ which restricts to translations $\mathsf{Int} \to \mathsf{Int}^{\partial}$ and $\mathsf{Int}^{\partial} \to \mathsf{Int}$. \begin{lem}\label{lem:sem-trans} Let $\mo{M} = (X, \leq, V)$ be an intuitionistic Kripke model and $\phi \in \mathsf{Bi\hyphen int}$ be a formula. Then we have $$ \mo{M}, x \Vdash \phi \iff \dual{\mo{M}}, x \not\Vdash \phi^t. $$ \end{lem} \begin{proof} This follows from a straightforward induction. We showcase one of the inductive steps: \begin{align*} \mo{M}, x \Vdash \phi \to \psi &\iff \text{for all } y \geq x \text{ either } \mo{M}, y \not\Vdash \phi \text{ or } \mo{M}, y \Vdash \psi \\ &\iff \text{for all } y \geq x \text{ either } \dual{\mo{M}}, y \Vdash \phi^t \text{ or } \dual{\mo{M}}, y \not\Vdash \psi^t \\ &\iff \text{there is no } y \geq x \text{ such that } \dual{\mo{M}}, y \Vdash \psi^t \text{ and } \dual{\mo{M}}, y \not\Vdash \phi^t \\ &\iff \dual{\mo{M}}, x \not\Vdash \psi^t \bito \phi^t = (\phi \to \psi)^t \end{align*} All other cases are similar.
\end{proof} \noindent We now define image-compactness, the main technical vehicle that we use to establish Hennessy-Milner results in this paper. For this, we augment models with a collection of \emph{admissible subsets}, that is, a selection of subsets of the carrier that includes all truth sets. This allows us to topologise the model using the patch topology, and use compactness to get a finitary handle on the successors of any given world. \begin{defn}\label{def:general} A \emph{general model} is a tuple $\mo{M} = (X, \leq, V, A)$ such that ${(X, \leq, V)}$ is an intuitionistic Kripke model and $A \subseteq \fun{Up}(X, \leq)$ is a collection of up-closed subsets of $(X, \leq)$ that (i) is closed under finite union and finite intersection, and (ii) contains $\emptyset$, $X$ and $V(p)$ for every $p \in \Prop$. We call $\mo{M}$ a \emph{general $\mathsf{Int}$-model} (resp. $\mathsf{Int}^{\partial}$-model) if $A$ is moreover closed under $\mathrel{\mkern1mu\underline{\mkern-1mu \to\mkern-2mu}\mkern2mu }$ (resp. $\mathrel{\mkern3.4mu\underline{\mkern-3.4mu \bito\mkern-3.4mu}\mkern3.4mu }$), and a \emph{general $\mathsf{Bi\hyphen int}$-model} if $A$ is closed under both $\mathrel{\mkern1mu\underline{\mkern-1mu \to\mkern-2mu}\mkern2mu }$ and $\mathrel{\mkern3.4mu\underline{\mkern-3.4mu \bito\mkern-3.4mu}\mkern3.4mu }$. The \emph{patch topology} on a general model $\mo{M} = (X, \leq, V, A)$ is the topology $\tau_A$ on $X$ generated by the (clopen) subbase $A \cup -A$, where $-A = \{ X \setminus a \mid a \in A \}$. \end{defn} \noindent Of special interest later are the \emph{compact} subsets of a general model $\mo{M} = (X, \leq, V, A)$. Recall that a subset $U \subseteq X$ is \emph{compact} if every open cover $(O_i)_{i \in I}$ of $U$ (that is, $U \subseteq \bigcup \lbrace O_i \mid i \in I \rbrace$ and $O_i \in \tau_A$ for all $i \in I$) has a finite subcover (that is, there exists a finite $J \subseteq I$ such that $U \subseteq \bigcup \lbrace O_j \mid j \in J \rbrace$). In particular, if $x \in X$ is a world in a model $(X, \leq, V, A)$, then bisimulation requires us to establish a property for \emph{all} successors in $\mo{M}$, i.e., for the set ${\uparrow}_{\leq}x = \{ y \in X \mid x \leq y \}$. If ${\uparrow}_{\leq}x$ is compact, this can be achieved in a finitary way. This motivates the following definition of image-compactness. \begin{defn} An intuitionistic Kripke model $(X, \leq, V)$ is called \emph{(pre-)image-compact} for $\lan{L}$ (where $\lan{L} \in \{ \mathsf{Int}, \mathsf{Int}^{\partial}, \mathsf{Bi\hyphen int} \}$) if there exists a set $A$ of admissibles such that $(X, \leq, V, A)$ is a general $\lan{L}$-model and for all $x \in X$ the set ${\uparrow}_{\leq}x$ (resp.~${\downarrow}_{\leq}x$) is compact in the patch topology $\tau_A$. \end{defn} \noindent Observe that, like saturation, (pre-)image-compactness is a property of \emph{models}, rather than a property of frames. Furthermore, note that by definition of the patch topology, proposition letters are interpreted as clopen sets in this topology. We conclude the section with the following examples. \begin{exm} \begin{enumerate} \item A Kripke model $\mo{M} = (X, \leq, V)$ is \emph{image-finite} if the set $\lbrace y \in X \mid x \leq y \rbrace$ is finite for every $x \in X$. Clearly every image-finite Kripke model is image-compact: take $A$ to be the collection of all upward closed subsets of $X$. \item Image-compactness is strictly more general than image-finiteness.
Consider for example $X = \mb{N} \cup \lbrace \infty \rbrace$ where $n \leq \infty$ for all $n \in \mb{N} \cup \{ \infty \}$ (and $\leq$ is as usual otherwise), with the valuation $V(p_i) = \lbrace x \in X \mid i \leq x \rbrace$, for $i \in \mb{N}$. Clearly, this is not image-finite. If we take $A$ to consist of $\emptyset$ together with all sets of the form $\lbrace x \in X \mid x \geq n \rbrace$ where $n$ ranges over $\mathbb{N}$, then this is easily seen to be image-compact. \item Every descriptive intuitionistic Kripke frame \cite[Section 8.4]{ChaZak97} is automatically image-compact. This follows because descriptive frames are precisely Esakia spaces \cite{Esa74}, hence topologically compact, upsets of single points are closed in this topology, and closed subsets of compact spaces are compact. \item If $\mo{M} = (X, \leq, V, A)$ is a general model, and $\dual{\mo{M}} = (X, \geq, \dual{V}, \dual{A})$ is its dual where $\dual{A} = \lbrace X \setminus a \mid a \in A \rbrace$, then $\mo{M}$ is image-compact if and only if $\dual{\mo{M}}$ is pre-image-compact. \end{enumerate} \end{exm} \section{Relating Logical Equivalence for Different Logics}\label{sec:inst} \noindent As this paper is concerned with many different logics, it is useful to structure the relationships between them. More precisely, we will often show that the relation of logical equivalence between two models is a bisimulation for a certain logic. The following simple fact, borrowed from the theory of institutions \cite{Goguen:1992:IAM}, allows us to transfer such results from one logic to another. Let us abstractly define a \emph{semantics} for a language $\lan{L}$ to be a class of models $\mb{M}$ such that: \begin{itemize} \item Each $\mo{M} \in \mb{M}$ has an underlying set, denoted by $\fun{U}\mo{M}$; and \item Each model $\mo{M} \in \mb{M}$ comes with a theory map $\fun{th}_{\mo{M}} : \fun{U}\mo{M} \to \fun{P}\lan{L}$ that sends a state $x \in \fun{U}\mo{M}$ to the collection of $\lan{L}$-formulae true at that state. ($\fun{P}\lan{L}$ denotes the powerset of $\lan{L}$.) \end{itemize} The collection $\mb{M}$ may be regarded as a category and $\fun{U}$ as a functor $\mb{M} \to \cat{Set}$ from $\mb{M}$ to the category of sets. However, we do not need this categorical perspective for our purposes. \begin{exm} One can think of $\lan{L} = \mathsf{Int}$, with $\mb{M}$ the collection of intuitionistic Kripke models from Definition \ref{def:interpr}. Then for $\mo{M} = (X, \leq, V) \in \mb{M}$, the underlying set is given by $\fun{U}\mo{M} = X$ and the theory map is induced by the interpretation from Definition \ref{def:interpr} via $$ \fun{th}_{\mo{M}} : X \to \fun{P}\mathsf{Int} : x \mapsto \{ \phi \in \mathsf{Int} \mid x \Vdash \phi \}. $$ It is easy to see that, {\it mutatis mutandis}, this yields semantics for $\mathsf{Int}^{\partial}$ and $\mathsf{Bi\hyphen int}$ as well. \end{exm} If we have sufficient coherence between two such logics, then logical equivalence with respect to one implies logical equivalence with respect to the other. The next lemma describes this in detail. \begin{lem}\label{lem:inst} Let $\lan{L}_1$ and $\lan{L}_2$ be two languages with semantics $\mb{M}_1$ and $\mb{M}_2$. Denote the underlying set of a model $\mo{M} \in \mb{M}_i$ by $\fun{U}_i\mo{M}$, and the theory of $x \in \fun{U}_i\mo{M}$ by $\fun{th}_i(\mo{M})(x)$. Let \begin{itemize} \item $t: \lan{L}_1 \to \lan{L}_2$ be a surjective translation from $\lan{L}_1$ to $\lan{L}_2$; and \item $r: \mb{M}_2 \to \mb{M}_1$ be a transformation of models such that $\fun{U}_1(r\mo{M}) = \fun{U}_2\mo{M}$ for all $\mo{M} \in \mb{M}_2$.
\end{itemize} Moreover, suppose that \begin{equation}\label{eq:inst-th} t(\phi) \in \fun{th}_2({\mo{M}})(x) \iff \phi \in \fun{th}_1({r\mo{M}})(x) \end{equation} for all $\phi \in \lan{L}_1$ and $\mo{M} \in \mb{M}_2$ and $x \in \fun{U}_2\mo{M}$. Then we have \begin{equation}\label{eq:inst-le} \fun{th}_2({\mo{M}})(x) = \fun{th}_2({\mo{M}'})(y) \iff \fun{th}_1({r\mo{M}})(x) = \fun{th}_1({r\mo{M}'})(y) \end{equation} for all $\mo{M}, \mo{M}' \in \mb{M}_2$ and $x \in \fun{U}_2\mo{M}$ and $y \in \fun{U}_2\mo{M}'$. \end{lem} \noindent We omit the obvious proof. Observe that \eqref{eq:inst-le} simply says that two worlds are $\lan{L}_1$-logically equivalent if and only if they are $\lan{L}_2$-logically equivalent. Let us have a look at an example. \begin{exm}\label{exm:inst-2} Let $\lan{L}_1 = \mathsf{Int}$ and $\lan{L}_2 = \mathsf{Bi\hyphen int}$, both generated by the same set $\Prop$ of proposition letters. Since both can be interpreted in intuitionistic Kripke frames, there is an evident transformation $r : \mb{M}_2 \to \mb{M}_1$, namely the identity on the class of intuitionistic Kripke models. If we let $t : \mathsf{Int} \to \mathsf{Bi\hyphen int}$ be the obvious translation, then clearly \eqref{eq:inst-th} is satisfied. However, the translation $t$ is not surjective. To overcome this, we can enrich $\mathsf{Int}$ with an additional proposition letter $p_{\phi}$ for every formula $\phi \in \mathsf{Bi\hyphen int}$ that is not in $\mathsf{Int}$. These can be interpreted by extending the valuation $V$ of an intuitionistic Kripke model $(X, \leq, V)$ via $V(p_{\phi}) = \llbracket \phi \rrbracket$, where the latter interpretation is given by the clauses in Definition \ref{def:interpr}. Denote this collection of additional proposition letters by $\Prop'$. Then clearly the translation $t : \mathsf{Int}(\Prop) \to \mathsf{Bi\hyphen int}(\Prop)$ extends to a surjective translation $t : \mathsf{Int}(\Prop \cup \Prop') \to \mathsf{Bi\hyphen int}(\Prop)$. Moreover, we still have an obvious transformation of models and \eqref{eq:inst-th} is satisfied. It follows that the relation of $\mathsf{Bi\hyphen int}(\Prop)$-logical equivalence between two intuitionistic Kripke models coincides with $\mathsf{Int}(\Prop \cup \Prop')$-logical equivalence. \end{exm} More generally, if $\lan{L}_2$ freely extends $\lan{L}_1$ with one or more operators, then we can use this method to transfer properties of $\lan{L}_1$-logical equivalence to $\lan{L}_2$, achieving surjectivity by adding a proposition letter $p_{\phi}$ to $\lan{L}_1$ for every $\lan{L}_2$-formula that is not already in $\lan{L}_1$. We use this as follows: Suppose we know that logical equivalence between certain models for $\lan{L}_1$ is a bisimulation relation, and hence implies certain back-and-forth conditions. Then by the lemma the logical equivalence relation between models for $\lan{L}_2$ coincides with $\lan{L}_1$-logical equivalence, and the back-and-forth conditions are therefore inherited. \section{Bisimulations}\label{sec:non-modal} \noindent We begin this section by recalling the definition of bisimulation between Kripke models given in \cite{Pat97}, and prove a Hennessy-Milner result. We then dualise this to obtain a corresponding result for dual-intuitionistic logic. Taken together, both results imply the Hennessy-Milner property for bi-intuitionistic logic. \subsection{Bisimulations for Intuitionistic Logic} \begin{defn}\label{def:i-bis} Let $\mo{M} = (X, \leq, V)$ and $\mo{M}' = (X', \leq', V')$ be two intuitionistic Kripke models.
An \emph{intuitionistic bisimulation} or \emph{$\mathsf{Int}$-bisimulation} between $\mo{M}$ and $\mo{M}'$ is a relation $B \subseteq X \times X'$ such that for all $(x, x') \in B$ we have: \begin{enumerate}[\qquad 1 \;] \renewcommand{\theenumi}{($B_{\arabic{enumi}}$)} \item \label{eq:bis1} For all $p \in \Prop$, $x \in V(p)$ iff $x' \in V'(p)$; \item \label{eq:bis2} If $x \leq y$ then there exists $y' \in X'$ such that $x' \leq' y'$ and $yBy'$; \item \label{eq:bis3} If $x' \leq' y'$ then there exists $y \in X$ such that $x \leq y$ and $yBy'$. \end{enumerate} Two states $x$ and $x'$ are called \emph{$\mathsf{Int}$-bisimilar} if there is an $\mathsf{Int}$-bisimulation linking them, notation: $x \rightleftharpoons_{\mathsf{Int}} x'$. \end{defn} \noindent A straightforward inductive argument proves that bisimilar states satisfy the same formulae. \begin{propn}\label{prop:int-bis-sound} If $x \rightleftharpoons_{\mathsf{Int}} x'$ then $x \leftrightsquigarrow_{\mathsf{Int}} x'$. \end{propn} \noindent Furthermore, it is easy to see (but of no relevance for us in the sequel) that intuitionistic bisimulations are closed under composition, and the graph of a bounded morphism is an intuitionistic bisimulation. We prove a Hennessy-Milner property for image-compact models. \begin{thm}\label{thm:hm-int} Let $x, x'$ be worlds in two image-compact models $\mo{M} = (X, \leq, V)$ and $\mo{M}' = (X', \leq', V')$. Then $$ x \rightleftharpoons_{\mathsf{Int}} x' \iff x \leftrightsquigarrow_{\mathsf{Int}} x'. $$ \end{thm} \begin{proof} Since we assume $\mo{M}$ and $\mo{M}'$ to be image-compact, they both carry a general model structure, i.e., we can find a collection $A$ of up-closed subsets of $(X, \leq)$ such that $(X, \leq, V, A)$ is a general $\mathsf{Int}$-model and ${\uparrow}_{\leq}x$ is compact in $\tau_A$ for all $x \in X$, and similarly for $\mo{M}'$. Suppose we have chosen such $A$ and $A'$. The direction from left to right is soundness of the notion of bisimulation (Proposition \ref{prop:int-bis-sound}). For the converse direction, we show that the relation of logical equivalence is a bisimulation between $\mo{M}$ and $\mo{M}'$. Clearly, if $x \leftrightsquigarrow_{\mathsf{Int}} x'$ we have $x \in V(p)$ iff $x' \in V'(p)$, so item \ref{eq:bis1} is satisfied. We now prove that \ref{eq:bis2} holds. Let $x \leftrightsquigarrow_{\mathsf{Int}} x'$ and $x \leq y$. Then we need to find $y' \in X'$ such that $x' \leq' y'$ and $y \leftrightsquigarrow_{\mathsf{Int}} y'$. Suppose towards a contradiction that such a $y'$ does not exist. Then for each $\leq'$-successor $z'$ of $x'$ we can either find a separating formula $\phi_{z'}$ such that $\mo{M}, y \Vdash \phi_{z'}$ and $\mo{M}', z' \not\Vdash \phi_{z'}$, or a separating formula $\psi_{z'}$ such that $\mo{M}, y \not\Vdash \psi_{z'}$ and $\mo{M}', z' \Vdash \psi_{z'}$. Pick such a separating formula for each $z'$. Let $\Phi$ be the collection of the chosen formulae of the first kind (true at $y$, refuted at the respective $z'$), and $\Psi$ the collection of those of the second kind (refuted at $y$, true at the respective $z'$). Since the interpretants of the formulae are clopen in the topology on $X'$ generated by $A' \cup -A'$, the collection $$ \{ X' \setminus \llbracket \phi \rrbracket^{\mo{M}'} \mid \phi \in \Phi \} \cup \{ \llbracket \psi \rrbracket^{\mo{M}'} \mid \psi \in \Psi \} $$ is an open cover of ${\uparrow}_{\leq'}x'$.
As the latter is assumed to be compact, we get finite subsets $\Phi' \subseteq \Phi$ and $\Psi' \subseteq \Psi$ such that $$ \{ X' \setminus \llbracket \phi \rrbracket^{\mo{M}'} \mid \phi \in \Phi' \} \cup \{ \llbracket \psi \rrbracket^{\mo{M}'} \mid \psi \in \Psi' \} $$ covers ${\uparrow}_{\leq'}x'$. As a consequence, for every successor $z'$ of $x'$ there either exists a $\phi \in \Phi'$ such that $z' \not\Vdash \phi$, or a $\psi \in \Psi'$ such that $z' \Vdash \psi$. Therefore, $$ x' \Vdash \textstyle\bigwedge \Phi' \to \bigvee \Psi'. $$ Since the disjunction and conjunction are taken over finite sets, this is a formula in $\mathsf{Int}$. Furthermore, $y$ satisfies all $\phi \in \Phi'$ and none of the $\psi \in \Psi'$, and hence $$ x \not\Vdash \textstyle\bigwedge \Phi' \to \bigvee \Psi'. $$ This contradicts the assumption that $x$ and $x'$ are logically equivalent. Therefore there must exist $y' \in X'$ which is logically equivalent to $y$ and satisfies $x' \leq' y'$. Item \ref{eq:bis3} is proven symmetrically. \end{proof} \noindent Theorem \ref{thm:hm-int} does not give an exact characterisation of models where logical equivalence coincides with bisimilarity. This is witnessed by the following example, which gives a model that is \emph{not} image-compact while logical equivalence (between the model and itself) \emph{does} imply bisimilarity. \begin{exm} Consider the intuitionistic Kripke frame consisting of the rational numbers ordered as usual. Let $\Prop = \{ p_q \mid q \in \mathbb{Q} \}$ be a countable set of proposition letters and define a valuation $V : \Prop \to \fun{Up}(\mathbb{Q}, \leq)$ by $V(p_q) = \{ x \in \mathbb{Q} \mid q < x \}$. Then $\mo{Q} = (\mathbb{Q}, \leq, V)$ is an intuitionistic Kripke model. We claim that $\mo{Q}$ is not image-compact. To see this, let $A$ be any general frame structure such that $(\mathbb{Q}, \leq, V, A)$ is a general model. By definition $\llbracket p_q \rrbracket \in A$ for every $p_q \in \Prop$, so both $\llbracket p_q \rrbracket$ and $\mathbb{Q} \setminus \llbracket p_q \rrbracket$ are open in $\tau_A$. We note that ${\uparrow}_{\leq}0$ is covered by $$ (\mathbb{Q} \setminus \llbracket p_0 \rrbracket) \cup \bigcup \{ \llbracket p_{\nicefrac{1}{n}} \rrbracket \mid n \in \mathbb{N} \} $$ and this cover has no finite subcover: any finite subfamily is contained in $(\mathbb{Q} \setminus \llbracket p_0 \rrbracket) \cup \llbracket p_{\nicefrac{1}{N}} \rrbracket$ for some $N \in \mathbb{N}$, and therefore misses the points of $(0, \nicefrac{1}{N}] \cap \mathbb{Q}$. However, the relation of logical equivalence between $\mo{Q}$ and itself is the identity (any two distinct rationals are separated by some $p_q$), and hence is automatically an $\mathsf{Int}$-bisimulation. \end{exm} Also, it is not in general true that logical equivalence implies bisimilarity. In \cite[Proposition 27]{Pat97} the author gives an example of two intuitionistic Kripke models such that logical equivalence does not imply bisimilarity. (The notion of image-finiteness used in {\it loc.~\!cit.}~is not the usual one.) Alternatively, one can give a counterexample using ``porcupine models'' similar to Example \ref{exm:biint1} below.
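Before moving on, let us illustrate the separating implication used in the proof of Theorem \ref{thm:hm-int} in a minimal schematic situation (the data here are hypothetical and serve only as an illustration): suppose ${\uparrow}_{\leq'}x' = \{ x', z' \}$, that $\phi_{z'}$ is true at $y$ and refuted at $z'$, and that $\psi_{x'}$ is refuted at $y$ and true at $x'$. Then $\Phi' = \{ \phi_{z'} \}$, $\Psi' = \{ \psi_{x'} \}$, and $$ \mo{M}', x' \Vdash \phi_{z'} \to \psi_{x'} \qquad\text{while}\qquad \mo{M}, x \not\Vdash \phi_{z'} \to \psi_{x'}, $$ because the successor $x'$ satisfies $\psi_{x'}$, the successor $z'$ refutes $\phi_{z'}$, and $x \leq y$ with $\mo{M}, y \Vdash \phi_{z'}$ and $\mo{M}, y \not\Vdash \psi_{x'}$.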
\subsection{Bisimulations for Dual- and Bi-Intuitionistic Logic} \begin{defn}\label{def:d-bis} A \emph{dual-intuitionistic bisimulation} or \emph{$\mathsf{Int}^{\partial}$-bisimulation} between intuitionistic Kripke models $\mo{M} = (X, \leq, V)$ and $\mo{M}' = (X', \leq', V')$ is a relation $B \subseteq X \times X'$ such that for all $(x, x') \in B$ we have: \begin{enumerate}[\qquad 1 \;] \renewcommand{\theenumi}{($B_{\arabic{enumi}}$)} \item For all $p \in \Prop$, $x \in V(p)$ iff $x' \in V'(p)$; \setcounter{enumi}{3} \item \label{eq:bis4} If $y \leq x$ then there exists $y' \in X'$ such that $y' \leq' x'$ and $yBy'$; \item \label{eq:bis5} If $y' \leq' x'$ then there exists $y \in X$ such that $y \leq x$ and $yBy'$. \end{enumerate} If moreover $B$ satisfies \ref{eq:bis2} and \ref{eq:bis3} (from Definition \ref{def:i-bis}) then we call $B$ a \emph{bi-intuitionistic bisimulation}, or \emph{$\mathsf{Bi\hyphen int}$-bisimulation}. We define $\mathsf{Int}^{\partial}$-bisimilarity and $\mathsf{Bi\hyphen int}$-bisimilarity as usual, and write these as $x \rightleftharpoons_{\mathsf{Int}^{\partial}} x'$ and $x \rightleftharpoons_{\mathsf{Bi\hyphen int}} x'$. \end{defn} \begin{rem}\label{rem-bisim-Badia} \emph{Directed $\mathsf{Bi\hyphen int}$-bisimulations} \cite[Definition 4]{Bad16} between intuitionistic Kripke models are pairs $(Z_1, Z_2)$ of simulations, i.e., relations $Z_1 \subseteq X \times X'$ and $Z_2 \subseteq X' \times X$ satisfying certain back-and-forth conditions. This is closely related to $\mathsf{Bi\hyphen int}$-bisimulation as just introduced: if $B$ is a $\mathsf{Bi\hyphen int}$-bisimulation then $(B, B^{-1})$ is a directed $\mathsf{Bi\hyphen int}$-bisimulation, and conversely if $(Z_1, Z_2)$ is a directed $\mathsf{Bi\hyphen int}$-bisimulation, then $Z_1 \cap Z_2^{-1}$ is a $\mathsf{Bi\hyphen int}$-bisimulation. Although not carried out in \emph{op.~\!cit.}, one could define $x$ and $x'$ to be \emph{directed $\mathsf{Bi\hyphen int}$-bisimilar} if there is a directed $\mathsf{Bi\hyphen int}$-bisimulation $(Z_1, Z_2)$ with $(x, x') \in Z_1$ and $(x', x) \in Z_2$. Directed $\mathsf{Bi\hyphen int}$-bisimilarity and $\mathsf{Bi\hyphen int}$-bisimilarity as defined in Definition \ref{def:d-bis} above are then easily seen to coincide. \end{rem} \begin{figure}[h!] \centering \begin{tikzcd}[arrows=-] y \arrow[r, dashed] & y' & y \arrow[r, dashed] & y' & x \arrow[r] & x' & x \arrow[r] & x' \\ x \arrow[u, dashed, "\leq" left] \arrow[r, "B" below] & x' \arrow[u] & x \arrow[u, "\leq" left] \arrow[r, "B" below] & x' \arrow[u, dashed] & y \arrow[u, dashed, "\leq" left] \arrow[r, dashed, "B" below] & y' \arrow[u] & y \arrow[u, "\leq" left] \arrow[r, dashed, "B" below] & y' \arrow[u, dashed] \end{tikzcd} \caption{The zigs and zags of a $\mathsf{Bi\hyphen int}$-bisimulation.} \label{fig:zigzag1} \end{figure} \begin{propn}\label{prop:b-d-bisim-sound} Let $(X, \leq, V)$ and $(X', \leq', V')$ be two intuitionistic Kripke models and $x \in X, x' \in X'$. Then $x \rightleftharpoons_{\mathsf{Int}^{\partial}} x'$ implies $x \leftrightsquigarrow_{\mathsf{Int}^{\partial}} x'$ and $x \rightleftharpoons_{\mathsf{Bi\hyphen int}} x'$ implies $x \leftrightsquigarrow_{\mathsf{Bi\hyphen int}} x'$. \end{propn} \noindent The following lemma allows us to view an $\mathsf{Int}^{\partial}$-bisimulation between two models $\mo{M}$ and $\mo{M}'$ as an $\mathsf{Int}$-bisimulation between the corresponding dual models.
\begin{lem}\label{lem:d-i-bisim} Let $\mo{M} = (X, \leq, V)$ and $\mo{M}' = (X', \leq', V')$ be two intuitionistic Kripke models. Then $B \subseteq X \times X'$ is an $\mathsf{Int}^{\partial}$-bisimulation between $\mo{M}$ and $\mo{M}'$ if and only if it is an $\mathsf{Int}$-bisimulation between $\mo{M}^{\partial}$ and $(\mo{M}')^{\partial}$. \end{lem} \noindent Using this lemma we can convert the result from Theorem \ref{thm:hm-int} to a Hennessy-Milner theorem for dual-intuitionistic logic. \begin{thm}\label{thm:hm-d-int} Let $x, x'$ be worlds in two pre-image-compact intuitionistic Kripke models $\mo{M} = (X, \leq, V)$ and $\mo{M}' = (X', \leq', V')$. Then $$ x \rightleftharpoons_{\mathsf{Int}^{\partial}} x' \iff x \leftrightsquigarrow_{\mathsf{Int}^{\partial}} x'. $$ \end{thm} \begin{proof} Let $B$ be the relation of logical equivalence between $\mo{M}$ and $\mo{M}'$. We show that it is an $\mathsf{Int}^{\partial}$-bisimulation. By Lemma \ref{lem:d-i-bisim}, it suffices to show that it is an $\mathsf{Int}$-bisimulation between $\mo{M}^{\partial}$ and $(\mo{M}')^{\partial}$. By Lemma \ref{lem:sem-trans}, two states $x, x'$ in $\mo{M}^{\partial}$ and $(\mo{M}')^{\partial}$ satisfy the same $\mathsf{Int}$-formulae if and only if they satisfy the same $\mathsf{Int}^{\partial}$-formulae in $\mo{M}$ and $\mo{M}'$. Therefore the relation $B$ coincides with logical equivalence between $\mo{M}^{\partial}$ and $(\mo{M}')^{\partial}$. Furthermore, $\mo{M}^{\partial}$ and $(\mo{M}')^{\partial}$ are image-compact because $\mo{M}$ and $\mo{M}'$ are pre-image-compact. So it follows from Theorem \ref{thm:hm-int} that $B$ is an $\mathsf{Int}$-bisimulation between $\mo{M}^{\partial}$ and $(\mo{M}')^{\partial}$, hence an $\mathsf{Int}^{\partial}$-bisimulation between $\mo{M}$ and $\mo{M}'$. \end{proof} \noindent Combining Lemma \ref{lem:inst} and Theorems \ref{thm:hm-int} and \ref{thm:hm-d-int} yields: \begin{thm}\label{thm:hm-bi-int} Let $x, x'$ be worlds in two intuitionistic Kripke models $\mo{M} = (X, \leq, V)$ and $\mo{M}' = (X', \leq', V')$ that are both image-compact and pre-image-compact. Then $$ x \rightleftharpoons_{\mathsf{Bi\hyphen int}} x' \iff x \leftrightsquigarrow_{\mathsf{Bi\hyphen int}} x'. $$ \end{thm} \begin{proof} The direction from left to right follows from Proposition \ref{prop:b-d-bisim-sound}. For the converse, we will show that the relation $B$ of logical equivalence between them is a bisimulation. \ref{eq:bis1} follows immediately from the fact that $B$ is logical equivalence. To show that \ref{eq:bis2} and \ref{eq:bis3} hold, we use Lemma \ref{lem:inst}. Let $\Prop'$ be defined as in Example \ref{exm:inst-2} and extend the valuations $V$ and $V'$ of $\mo{M}$ and $\mo{M}'$ to $\hat{V}$ and $\hat{V}'$ by setting $\hat{V}(p_{\phi}) = \llbracket \phi \rrbracket^{\mo{M}}$, and similarly for $\hat{V}'$. Then as a consequence of Lemma \ref{lem:inst}, $B$ coincides with $\mathsf{Int}$-logical equivalence between $(X, \leq, \hat{V})$ and $(X', \leq', \hat{V}')$. Furthermore, these new models are image-compact, and therefore properties \ref{eq:bis2} and \ref{eq:bis3} follow from Theorem \ref{thm:hm-int}. One can similarly obtain \ref{eq:bis4} and \ref{eq:bis5} from Theorem \ref{thm:hm-d-int}. \end{proof} \noindent We complete this section with a detailed example showing that logical equivalence for bi-intuitionistic formulae does not in general imply $\mathsf{Bi\hyphen int}$-bisimilarity.
\begin{exm}\label{exm:biint1} Let $W = \{ (n, k) \in (\mathbb{N} \cup \{ \infty \}) \times \mathbb{N} \mid k < n \} \cup \{ x \}$ and define an order $\preccurlyeq$ by: $(n, k) \preccurlyeq x$ for all $(n, k) \in W$ and $(n, k) \preccurlyeq (m, \ell)$ iff $n = m$ and $k \leq \ell$. For $\Prop = \{ p_i \mid i \in \mb{N} \} \cup \{ q \}$ define the valuation $V$ by $V(q) = \{ x \}$ and $V(p_i) = \{ (n, k) \in W \mid i \leq k \} \cup \{ x \}$. Then the triple $\mf{W} = (W, \preccurlyeq, V)$ is an intuitionistic Kripke model. Let $\mf{W}' = (W', \preccurlyeq', V')$ be the submodel of $\mf{W}$ with underlying set $W' = \{ (n', k') \in \mathbb{N} \times \mathbb{N} \mid k' < n' \} \cup \{ x' \}$. Note that $\mf{W}'$ does not have an infinite branch. (We use primes to distinguish the two models.) See Figure \ref{fig-exm-fin-comp} for pictorial presentations of the two models. We claim that $x$ and $x'$ are logically equivalent but not bisimilar. Suppose towards a contradiction that there exists a $\mathsf{Bi\hyphen int}$-bisimulation $B$ linking $x$ and $x'$. Since $(\infty, 0) \preccurlyeq x$ in $W$, condition \ref{eq:bis4} yields some $y' \in W'$ such that $(\infty, 0)By'$ and $y' \preccurlyeq' x'$. Then $y'$ cannot be $x'$, because $\mf{W}, (\infty, 0) \not\Vdash q$, hence $\mf{W}', y' \not\Vdash q$, whereas $\mf{W}', x' \Vdash q$. So $y'$ is of the form $(n', k')$ for some $n', k' \in \mb{N}$ with $k' < n'$. But then $\mf{W}', (n', k') \Vdash p_{n'+1} \to q$, while $\mf{W}, (\infty, 0) \not\Vdash p_{n'+1} \to q$. Therefore $(\infty, 0)$ and $(n', k')$ are not logically equivalent, hence by Proposition \ref{prop:b-d-bisim-sound} they cannot be bisimilar. This contradicts the assumption that there exists a bisimulation $B$ linking $x$ and $x'$, thus $x$ and $x'$ are not bisimilar. Next we show that $x \in W$ and $x' \in W'$ are logically equivalent. For $m \in \mb{N}$, let $\Prop_m = \{ p_i \mid i \in \mb{N}, i \leq m \} \cup \{ q \}$. Then $\mathsf{Bi\hyphen int}(\Prop) = \bigcup_{m \in \mb{N}} \mathsf{Bi\hyphen int}(\Prop_m)$. Define $B_m \subseteq W \times W'$ by \begin{align*} B_m = \{ (x, x') \} \cup \big\{ \big((n,k), (n',k')\big) \mid \text{either } &[ n = n' \text{ and } k = k' ] \\ \text{or } &[k, k' \geq m ] \\ \text{or } &[n, n' > m \text{ and } k = k' < m ] \big\}. \end{align*} It can be shown by induction that whenever $(z, z') \in B_m$, we have $\mf{W}, z \Vdash \phi$ iff $\mf{W}', z' \Vdash \phi$ for all $\phi \in \mathsf{Bi\hyphen int}(\Prop_m)$. It follows that $x$ and $x'$ are logically equivalent because $(x, x') \in B_m$ for all $m \in \mb{N}$. As we have already established that $x$ and $x'$ are not bisimilar, we conclude that logical equivalence cannot imply bisimilarity in general.
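As a quick sanity check, note how the two halves of the argument fit together: for $m \geq 1$ and any $n' > m$ we have $$ \big( (\infty, 0), (n', 0) \big) \in B_m \qquad\text{and}\qquad p_{n'+1} \to q \notin \mathsf{Bi\hyphen int}(\Prop_m), $$ since $n' + 1 > m$ implies $p_{n'+1} \notin \Prop_m$. So the separating formula found above is indeed not available at level $m$, as it must not be.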
\end{exm} \begin{figure} \centering \begin{tikzpicture}[shorten <=4pt, shorten >=4pt] \draw (-.7,.7) node{$\mf{W}$}; \draw[fill=black] (0,0) circle(1.6pt) node[anchor=south east]{\footnotesize{$q$}}; \draw[fill=black] (1,0) circle(1.6pt) node[anchor=south west]{\footnotesize{$p_0$}}; \draw[fill=black] (-30:1) circle(1.6pt) node[anchor=south west]{\footnotesize{$p_1$}} (-30:2) circle(1.6pt) node[anchor=south west]{\footnotesize{$p_0$}} node[anchor=north, rotate=5]{\scriptsize{$(2,1)$}}; \draw[fill=black] (-60:1) circle(1.6pt) node[anchor=west]{\footnotesize{$p_2$}} (-60:2) circle(1.6pt) node[anchor=west]{\footnotesize{$p_1$}} node[anchor=east, rotate=15]{\scriptsize{$(3, 1)$}} (-60:3) circle(1.6pt) node[anchor=west]{\footnotesize{$p_0$}} node[anchor=east, rotate=15]{\scriptsize{$(3, 0)$}}; \draw[fill=black] (-100:1.5) circle(1.6pt) node[anchor=east]{\footnotesize{$p_3$}} (-100:2.5) circle(1.6pt) node[anchor=east]{\footnotesize{$p_2$}} node[anchor=west, rotate=-3]{\scriptsize{$(\infty, 2)$}} (-100:3.5) circle(1.6pt) node[anchor=east]{\footnotesize{$p_1$}} node[anchor=west, rotate=-3]{\scriptsize{$(\infty, 1)$}} (-100:4.5) circle(1.6pt) node[anchor=east]{\footnotesize{$p_0$}} node[anchor=west, rotate=-3]{\scriptsize{$(\infty, 0)$}}; \draw[latex-] (0,0) -- (1,0); \draw[latex-] (0,0) -- (-30:1); \draw[latex-] (-30:1) -- (-30:2); \draw[latex-] (0,0) -- (-60:1); \draw[latex-] (-60:1) -- (-60:2); \draw[latex-] (-60:2) -- (-60:3); \draw[dashed] (0,0) -- (-100:1.5); \draw[latex-] (-100:1.5) -- (-100:2.5); \draw[latex-] (-100:2.5) -- (-100:3.5); \draw[latex-] (-100:3.5) -- (-100:4.5); \draw[thick, dotted] (-67:1.7) arc(-67:-94:1.7); \end{tikzpicture} \qquad\qquad \begin{tikzpicture}[shorten <=4pt, shorten >=4pt] \draw (-.7,.7) node{$\mf{W}'$}; \draw[fill=black] (0,0) circle(2pt) node[anchor=south east]{\footnotesize{$q$}}; \draw[fill=black] (1,0) circle(2pt) node[anchor=south west]{\footnotesize{$p_0$}}; \draw[fill=black] (-30:1) circle(2pt) node[anchor=south west]{\footnotesize{$p_1$}} (-30:2) circle(2pt) node[anchor=south west]{\footnotesize{$p_0$}} node[anchor=north, rotate=5]{\scriptsize{$(2,1)$}}; \draw[fill=black] (-60:1) circle(2pt) node[anchor=west]{\footnotesize{$p_2$}} (-60:2) circle(2pt) node[anchor=west]{\footnotesize{$p_1$}} (-60:3) circle(2pt) node[anchor=south west]{\footnotesize{$p_0$}} node[anchor=north, rotate=5]{\scriptsize{$(3,1)$}}; \draw[fill=black] (-80:1) circle(2pt) node[anchor=east]{\footnotesize{$p_3$}} (-80:2) circle(2pt) node[anchor=west]{\footnotesize{$p_2$}} node[anchor=east, rotate=5]{\scriptsize{$(4,2)$}} (-80:3) circle(2pt) node[anchor=west]{\footnotesize{$p_1$}} node[anchor=east, rotate=5]{\scriptsize{$(4,1)$}} (-80:4) circle(2pt) node[anchor=west]{\footnotesize{$p_0$}} node[anchor=east, rotate=5]{\scriptsize{$(4,0)$}}; \draw[latex-] (0,0) -- (1,0); \draw[latex-] (0,0) -- (-30:1); \draw[latex-] (-30:1) -- (-30:2); \draw[latex-] (0,0) -- (-60:1); \draw[latex-] (-60:1) -- (-60:2); \draw[latex-] (-60:2) -- (-60:3); \draw[latex-] (0,0) -- (-80:1); \draw[latex-] (-80:1) -- (-80:2); \draw[latex-] (-80:2) -- (-80:3); \draw[latex-] (-80:3) -- (-80:4); \draw[thick, dotted] (-100:1.7) arc(-100:-119:2); \draw[opacity=0] (-100:4.5) circle(2pt) node[anchor=east]{\footnotesize{$p_0$}} node[anchor=west, rotate=-3]{\scriptsize{$(\infty, 0)$}}; \end{tikzpicture} \caption{The figure depicts the models $\mf{W}$ and $\mf{W}'$ from Example \ref{exm:biint1}. The coordinates indicate the names of some of the states.
The $p_i$ denote the lowest occurrence of a proposition letter in each branch of the models. That is, if $p_i$ is true in some state, then it is also true in all states above.} \label{fig-exm-fin-comp} \end{figure} \section{Modal Bi-/Dual-/Intuitionistic Logics}\label{sec:modal} \noindent In this section we enrich the logics from Section \ref{sec:non-modal} with (several copies of) the unary modal operators $\Box$ and $\Diamond$. Following \cite{BozDos84}, we shall treat $\Box$ and $\Diamond$ as two different modalities that a priori are not related via axioms. Semantically, $\Box$ and $\Diamond$ are interpreted via distinct relations, so that boxes and diamonds do not necessarily come in pairs. For $\lan{L} \in \{ \mathsf{Int}, \mathsf{Int}^{\partial}, \mathsf{Bi\hyphen int} \}$, we write $\lan{L}_{n,m}$ for the language that arises from extending $\lan{L}$ with boxes $\Box_1, \ldots, \Box_n$ and diamonds $\Diamond_1, \ldots, \Diamond_m$. In the special case where $n = 1$ and $m = 0$ we write $\lan{L}_{\Box} := \lan{L}_{1,0}$, and similarly we sometimes use $\lan{L}_{\Diamond} := \lan{L}_{0,1}$ and $\lan{L}_{\Box\Diamond} := \lan{L}_{1,1}$. Since we do not assume any axioms relating boxes and diamonds, each modality is interpreted via its own relation in the same way as in classical modal logic. As such, a model for $\lan{L}_{n,m}$ is an intuitionistic Kripke model with an additional relation $R_i$ for each box and $S_j$ for each diamond, satisfying certain coherence conditions with respect to the order $\leq$ to ensure that the interpretation of every formula is an upset. This approach resembles that of $H\Box$- and $H\Diamond$-models introduced in \cite{BozDos84}. The main objective of this section is to obtain a Hennessy-Milner type theorem for the modal bi-intuitionistic logic $\mathsf{Bi\hyphen int}_{n,m}$ interpreted in the models sketched above. We shall prove intermediate results for $\mathsf{Int}_{\Box} = \mathsf{Int}_{1,0}$ and $\mathsf{Int}^{\partial}_{\Diamond} = \mathsf{Int}^{\partial}_{0,1}$, which we then combine for the desired result using Lemma \ref{lem:inst}. \subsection{Semantics for Modal Bi-/Dual-/Intuitionistic Logics} \noindent If $Z$ and $Z'$ are two relations on a set $X$, then we denote by $Z \circ Z'$ the relation $\{ (x, y) \in X \times X \mid \exists u \in X \text{ s.t. } xZu \text{ and } uZ'y \}$. \begin{defn}\label{def:modal-model} A \emph{(modal) $\lan{L}_{n,m}$-frame} is a tuple $(X, \leq, R_1, \ldots, R_n, S_1, \ldots, S_m)$ that consists of an intuitionistic Kripke frame $(X, \leq)$ and relations $R_i, S_j \subseteq X \times X$ satisfying $$ ({\leq} \circ R_i) \subseteq (R_i \circ {\leq}), \qquad({\geq} \circ S_j) \subseteq (S_j \circ {\geq}). $$ It is called \emph{strictly condensed} if $ ({\leq} \circ R_i \circ {\leq}) \subseteq R_i $ and $ ({\geq} \circ S_j \circ {\geq}) \subseteq S_j $ for all $i \in \{ 1, \ldots, n \}$ and $j \in \{ 1, \ldots, m \}$. The corresponding notion of an \emph{$\lan{L}_{n,m}$-model} arises from adding a valuation. \end{defn} Note that, since $\leq$ is reflexive, an $\lan{L}_{n,m}$-frame is strictly condensed if and only if $({\leq} \circ R_i \circ {\leq}) = R_i$ and $({\geq} \circ S_j \circ {\geq}) = S_j$ for all $i$ and $j$. These models can be used to interpret modal extensions of $\mathsf{Int}, \mathsf{Int}^{\partial}$ and $\mathsf{Bi\hyphen int}$ with $n$ boxes and $m$ diamonds.
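Let us briefly verify that the first coherence condition does exactly what it is designed for; with $\Box_i$ interpreted via $R_i$ as in classical modal logic (the clauses are given below), it guarantees that $\llbracket \Box_i\phi \rrbracket$ is an upset whenever $\llbracket \phi \rrbracket$ is. Indeed, suppose $x \Vdash \Box_i\phi$ and $x \leq x'$, and let $x' R_i y$. Then $x ({\leq} \circ R_i) y$, so the condition $({\leq} \circ R_i) \subseteq (R_i \circ {\leq})$ yields $$ x \leq x' \mathrel{R_i} y \;\Longrightarrow\; x \mathrel{R_i} z \leq y \;\text{ for some } z \in X. $$ Hence $z \Vdash \phi$ and, as $\llbracket \phi \rrbracket$ is an upset, $y \Vdash \phi$; therefore $x' \Vdash \Box_i\phi$. The condition $({\geq} \circ S_j) \subseteq (S_j \circ {\geq})$ ensures the same for $\Diamond_j$ by a dual argument.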
The logical connectives from $\lan{L}$ are interpreted in the underlying intuitionistic Kripke model $(X, \leq, V)$ as usual and, as stated, the interpretations of $\Box_i$ and $\Diamond_j$ are defined as in classical modal logic, via the relations $R_i$ and $S_j$. That is, \begin{align*} \mo{M}, x \Vdash \Box_i\phi &\iff \text{for all } y \in X, \; xR_iy \text{ implies } \mo{M}, y \Vdash \phi \\ \mo{M}, x \Vdash \Diamond_j\phi &\iff \mo{M}, y \Vdash \phi \text{ for some $y$ with } xS_jy. \end{align*} We write $x \leftrightsquigarrow_{\lan{L}_{n,m}} x'$ if two states satisfy precisely the same $\lan{L}_{n,m}$-formulae. We shall sometimes write $(X, \leq, (R_i), (S_j), V)$ for a modal $\lan{L}_{n,m}$-model. We remark that (strictly condensed) $\lan{L}_{\Box}$-models are precisely the (strictly condensed) $H\Box$-models from \cite{BozDos84}, and (strictly condensed) $\lan{L}_{\Diamond}$-models can be found in {\it op.~\!cit.}~under the name of (strictly condensed) $H\Diamond$-models. We have the following notion of bisimulation for these models: \begin{defn}\label{def:modal-bis} Let $\mo{M} = (X, \leq, (R_i), (S_j), V)$ and $\mo{M}' = (X', \leq', (R_i'), (S_j'), V')$ be two modal $\lan{L}_{n,m}$-models and $B \subseteq X \times X'$ a relation. We call $B$ a $\Box_i$-zigzag if for all $(x, x') \in B$ the following conditions hold: \begin{description} \item[($\Box_i$-zig)] If $xR_iy$ then there exists $y' \in X'$ such that $x'R_i'y'$ and $yBy'$; \item[($\Box_i$-zag)] If $x'R'_iy'$ then there exists $y \in X$ such that $xR_iy$ and $yBy'$; \end{description} We call $B$ a $\Diamond_j$-zigzag if the same conditions hold for $S_j$ instead of $R_i$. An \emph{$\lan{L}_{n,m}$-bisimulation} between $\mo{M}$ and $\mo{M}'$ is a relation $B \subseteq X \times X'$ which is an $\lan{L}$-bisimulation between the underlying intuitionistic Kripke models and which is a $\Box_i$-zigzag and $\Diamond_j$-zigzag for all $i \in \{ 1, \ldots, n \}$ and $j \in \{ 1, \ldots, m \}$. \end{defn} We remark that one can quotient with bisimilarity: \begin{rem} Let $\mf{M} = (X, \leq, (R_i), (S_j), V)$ be an $\lan{L}_{n,m}$-model. It is easy to see that the collection of $\lan{L}_{n,m}$-bisimulations on a model is closed under arbitrary unions. Therefore, the relation $B$ of bisimilarity on $\mf{M}$ is again a bisimulation. Moreover, $B$ is an equivalence relation: it is reflexive because the identity on $X$ is a bisimulation, symmetric because the inverse of a bisimulation on $X$ is again a bisimulation, and transitive because bisimulations are closed under composition. Let $X_B$ denote the quotient of $X$ with the equivalence relation $B$ and write $\bar{x} \in X_B$ for the equivalence class of $x \in X$. For each of the relations $Z$ on $X$, define a relation $Z_B$ on $X_B$ via $\bar{x}Z_B\bar{y}$ if there are $x' \in \bar{x}$ and $y' \in \bar{y}$ such that $x'Zy'$. Finally, for $p \in \Prop$ let $V_B(p) = \{ \bar{x} \mid x \in V(p) \}$. Then it follows from a straightforward verification that the tuple $$ \mf{M}_B = (X_B, \leq_B, ((R_B)_i), ((S_B)_j), V_B) $$ is an $\lan{L}_{n,m}$-model and the graph of the quotient map $q : X \to X_B$ is a bisimulation between $\mf{M}$ and $\mf{M}_B$. Consequently, if $\mf{M}$ lies in a Hennessy-Milner class, then $\mf{M}_B$ is precisely the quotient of $\mf{M}$ with respect to logical equivalence. \end{rem} \begin{rem} When equipped with a suitable notion of (bounded) morphism, the collection of $\lan{L}_{n,m}$-frames forms a category.
This category is isomorphic to a category of \emph{dialgebras} \cite{GroPat20}, and the language $\lan{L}_{n,m}$ arises as a \emph{dialgebraic logic}. Interestingly, on the level of frames, the bisimulations defined in Definition \ref{def:modal-bis} correspond precisely to \emph{dialgebra bisimulations} (or \emph{cospans}) in the category of $\lan{L}_{n,m}$-frames. \end{rem} \noindent A straightforward inductive proof yields: \begin{propn}\label{prop:adeq-modal} Let $x$ and $x'$ be two states in $\lan{L}_{n,m}$-models $\mo{M}$ and $\mo{M}'$. Then $x \rightleftharpoons_{\lan{L}_{n,m}} x'$ implies $x \leftrightsquigarrow_{\lan{L}_{n,m}} x'$. \end{propn} \noindent In order to get a suitable notion of (pre-)image-compactness for the relations $R_i, S_j$ we extend the notion of a general frame to this modal setting. \begin{defn}\label{def:modal-general} A \emph{general $\lan{L}_{n,m}$-frame} consists of a modal $\lan{L}_{n,m}$-frame $(X, \leq, R_1, \ldots, R_n, S_1, \ldots, S_m)$ and a collection $A \subseteq \fun{Up}(X, \leq)$ such that $(X, \leq, A)$ is a general $\lan{L}$-frame and $A$ is closed under: \begin{align*} {\boxbar}_i &: \fun{Up}(X, \leq) \to \fun{Up}(X, \leq) : a \mapsto \{ x \in X \mid R_i[x] \subseteq a \} \\ {\mathbin{\rotatebox[origin=c]{45}{$\boxslash$}}}_j &: \fun{Up}(X, \leq) \to \fun{Up}(X, \leq) : a \mapsto \{ x \in X \mid xS_jy \text{ for some } y \in a \} \end{align*} for all $i \in \{ 1, \ldots, n \}$ and $j \in \{ 1, \ldots, m \}$. The corresponding notion of a \emph{general $\lan{L}_{n,m}$-model} arises from equipping such a frame with an \emph{admissible} valuation, i.e., a map $V : \Prop \to A$. A relation $R_i$ in an $\lan{L}_{n,m}$-model $(X, \leq, (R_i), (S_j), V)$ is called \emph{(pre-)image-compact} if there exists $A \subseteq \fun{Up}(X, \leq)$ such that $(X, \leq, (R_i), (S_j), A, V)$ is a general $\lan{L}_{n,m}$-model and $R_i[x] = \{ y \in X \mid xR_iy \}$ (resp. $R^{-1}_i[x] = \{ y \in X \mid yR_ix \}$) is compact in $\tau_A$ for every $x \in X$. We similarly define (pre-)image-compactness for $S_j$. \end{defn} \begin{rem} The definition of (pre-)image-compactness crucially depends on the underlying base logic. In particular, we never speak about an image-compact relation in an intuitionistic Kripke frame: we speak about an image-compact relation in an $\mathsf{Int}$-, $\mathsf{Int}^{\partial}$- or $\mathsf{Bi\hyphen int}$-frame. For a relation to qualify as image-compact, we need to exhibit a system $A$ of admissible subsets that is \emph{closed under the operations of the base logic}. That is, a choice of admissibles $A$ may exhibit a relation as image-compact in an $\mathsf{Int}$-frame, but there may be no choice of admissibles $A'$ that exhibits the same relation as image-compact in a $\mathsf{Bi\hyphen int}$-frame, for example, if $A$ is not closed under $\mathrel{\mkern3.4mu\underline{\mkern-3.4mu \bito\mkern-3.4mu}\mkern3.4mu }$. This subtlety is caused by the fact that we treat three base logics simultaneously. \end{rem} For our Hennessy-Milner type results, we need to restrict to the strictly condensed models. Although this may seem like a harsh restriction, in fact every $\lan{L}_{n,m}$-model can be turned into a strictly condensed one without changing the interpretation of formulae, by merely readjusting the relations $R_i$ and $S_j$. We explicitly give this construction for $\lan{L}_{\Box}$-models and leave the general case to the reader.
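For a diamond relation $S_j$, one natural candidate for the analogous construction (we record it here only as a sketch; the verification mirrors the proposition below) is $$ S_j^+ := (S_j \circ {\geq}). $$ Using the coherence condition $({\geq} \circ S_j) \subseteq (S_j \circ {\geq})$ and transitivity of $\geq$ one obtains $({\geq} \circ S_j^+ \circ {\geq}) \subseteq S_j^+$, and $S_j^+$ interprets $\Diamond_j$ in the same way as $S_j$ because truth-sets are up-closed: if $x \mathrel{S_j} u \geq y$ and $y \Vdash \phi$, then already $u \Vdash \phi$.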
\begin{propn}\label{prop:strictify} Let $\mo{M} = (X, \leq, R, V)$ be an $\lan{L}_{\Box}$-model and set $R^+ := ({R} \circ {\leq})$. Then $\mo{M}^+ = (X, \leq, R^+, V)$ is strictly condensed, and for all $x \in X$ and $\phi \in \lan{L}_{\Box}$ we have $\mo{M}, x \Vdash \phi$ iff $\mo{M}^+, x \Vdash \phi$. \end{propn} \begin{proof} To see that $\mo{M}^+$ is strictly condensed, observe that reflexivity and transitivity of $\leq$, together with the coherence condition $({\leq} \circ R) \subseteq (R \circ {\leq})$, imply $({\leq} \circ R^+ \circ {\leq}) = ({\leq} \circ R \circ {\leq}) \subseteq (R \circ {\leq}) = R^+$. The preservation of truth can be proved by induction on the structure of the formula $\phi$. All cases are trivial except the modal case. For this, we have \begin{align*} \mo{M}, x \Vdash \Box\phi &\iff \text{for all $y \in X$, } xRy \text{ implies } \mo{M}, y \Vdash \phi \\ &\iff \text{for all $y \in X$, } x(R \circ {\leq})y \text{ implies } \mo{M}, y \Vdash \phi \\ &\iff \text{for all $y \in X$, } xR^+y \text{ implies } \mo{M}^+, y \Vdash \phi \\ &\iff \mo{M}^+, x \Vdash \Box\phi. \end{align*} The second ``iff'' holds by the fact that truth-sets of formulae are up-closed in $(X, \leq)$, the third one by induction. \end{proof} \noindent An example of this procedure is depicted in Figure \ref{fig:exm-M-plus} below. It is not in general true that either the identity or the relation of logical equivalence between $\mo{M}$ and $\mo{M}^+$ is an $\lan{L}_{\Box}$-bisimulation, as is witnessed by the following example. \begin{exm}\label{exm:M-plus} Let $X = \{ x, y, z \}$ be ordered by the pre-order generated by $y \leq z$ and let $R = \{ (x, y) \} \subseteq X \times X$. Then $(X, \leq, R)$ is an $\lan{L}_{\Box}$-frame. Equip this with the valuation $V : \{ p, q \} \to \fun{Up}(X, \leq)$ given by $V(p) = \{ y, z \}$ and $V(q) = \{ z \}$. Then $\mo{M} = (X, \leq, R, V)$ is the $\lan{L}_{\Box}$-model depicted in Figure \ref{fig:exm-M-plus}. The strictly condensed $\lan{L}_{\Box}$-model $\mo{M}^+$ is obtained by changing $R$ to $R^+ = (R \circ {\leq}) = \{ (x, y), (x, z) \}$. The relation of logical equivalence between $\mo{M}$ and $\mo{M}^+$ is simply the identity relation on $X$. It is easy to see that this is \emph{not} an $\lan{L}_{\Box}$-bisimulation: in $\mo{M}^+$ there is an $R^+$-transition from $x$ to $z$. The only state in $\mo{M}$ that is logically equivalent to $z$ is $z$ itself. But there is no $R$-transition from $x$ to $z$ in $\mo{M}$. So there can be no $\lan{L}_{\Box}$-bisimulation linking the state $x$ of $\mo{M}$ with the state $x$ of $\mo{M}^+$. \end{exm} \begin{figure}[h!] \centering \begin{tikzcd}[column sep=.5em, row sep=1.4em, arrows=-latex] \mo{M} & & z & [4em] \mo{M}^+ & & z \\ & y \arrow[ru, "\leq"] & & & y \arrow[ru, "\leq"] & \\ & x \arrow[u, "R"] & & & x \arrow[u, "R^+"] \arrow[ruu, bend right=15, "R^+" swap] & \end{tikzcd} \caption{An $\lan{L}_{\Box}$-model and its strictly condensed version.} \label{fig:exm-M-plus} \end{figure} \subsection{Hennessy-Milner Property for Some Modal Intuitionistic Logics} \noindent We now restrict our attention to $\lan{Int_{\Box}}$ and extend the Hennessy-Milner result from Theorem \ref{thm:hm-int} to the setting of $\mathsf{Int}_{\Box}$ interpreted in strictly condensed $\mathsf{Int}_{\Box}$-models. \begin{thm}\label{thm:hm-int-box} Let $\mo{M} = (X, \leq, R, V)$ and $\mo{M}' = (X', \leq', R', V')$ be two strictly condensed $\mathsf{Int}_{\Box}$-models such that $\leq, \leq', R$ and $R'$ are image-compact. Then for all $x \in X$ and $x' \in X'$ we have $$ x \rightleftharpoons_{\mathsf{Int}_{\Box}} x' \iff x \leftrightsquigarrow_{\mathsf{Int}_{\Box}} x'.
$$ \end{thm} \begin{proof} The direction from left to right follows from Proposition \ref{prop:adeq-modal}. For the converse, we let $B$ be logical equivalence and we show that it is an $\mathsf{Int}_{\Box}$-bisimulation. It follows from Lemma \ref{lem:inst} and Theorem \ref{thm:hm-int} that $B$ is an $\mathsf{Int}$-bisimulation, so it remains to show that ($\Box$-zig) and ($\Box$-zag) hold. Let $xBx'$ and $xRy$ and suppose towards a contradiction that there is no $R'$-successor $y'$ of $x'$ which is logically equivalent to $y$. Then for each such $y'$ we can find a separating formula. As in Theorem \ref{thm:hm-int}, using compactness, we get two finite sets $\Phi'$ and $\Psi'$ such that $y$ satisfies every formula in $\Phi'$ and none in $\Psi'$, and such that for every $y'$ with $x'R'y'$ there either exists $\phi \in \Phi'$ such that $\mo{M}', y' \not\Vdash \phi$, or $\psi \in \Psi'$ such that $\mo{M}', y' \Vdash \psi$. Let $y'$ be an $R'$-successor of $x'$; then $y' \leq' z'$ implies $x'R'z'$, because $\mo{M}'$ is assumed to be strictly condensed. As a consequence, every $\leq'$-successor of $y'$ is again an $R'$-successor of $x'$, and is therefore separated by some formula in $\Phi'$ or $\Psi'$; hence $\mo{M}', y' \Vdash \bigwedge \Phi' \to \bigvee \Psi'$. Since this holds for any $y'$ with $x'R'y'$, we have $$ \mo{M}', x' \Vdash \textstyle\Box(\bigwedge \Phi' \to \bigvee \Psi'). $$ Furthermore, by construction $\mo{M}, y \not\Vdash \bigwedge \Phi' \to \bigvee \Psi'$, so $$ \mo{M}, x \not\Vdash \textstyle\Box(\bigwedge \Phi' \to \bigvee \Psi'). $$ This contradicts the assumption that $x$ and $x'$ are logically equivalent. Therefore we conclude that there must exist a $y' \in X'$ which is logically equivalent to $y$ and satisfies $x'R'y'$. Thus ($\Box$-zig) is satisfied. A symmetric argument shows that ($\Box$-zag) is satisfied as well. \end{proof} \noindent The next example shows, via a simple adaptation of ``porcupine models'', that logical equivalence does not in general imply $\mathsf{Int}_{\Box}$-bisimilarity. Note also that in this example both $\leq$ and $\leq'$ are image-finite and pre-image-finite. \begin{exm} Consider the two structures $\mo{B}$ and $\mo{B}'$ depicted in Figure \ref{fig:two-struc}, where the lines indicate the relations $R$ and $R'$. Equip both models with the trivial order, that is, $x \leq y$ iff $x = y$. Then $\mo{B}$ and $\mo{B}'$ are two strictly condensed $\mathsf{Int}_{\Box}$-frames. Since the orders are taken to be trivial, the interpretation of intuitionistic logic is classical, i.e., every subset of states is an interpretant and the interpretation of $\neg\phi$ is given by taking complements. Moreover, the notion of an $\mathsf{Int}_{\Box}$-bisimulation reduces to a Kripke bisimulation in the usual sense for normal modal logic, see e.g.~\cite[Definition 2.16]{BRV01}. Therefore, the argument in Example 2.23 of {\it op.~\!cit.}~proves that the roots of the two models are logically equivalent but not bisimilar. \end{exm} \begin{figure}[h!]
\centering \begin{tikzpicture}[scale=.8,shorten <=4pt, shorten >=4pt] \draw[fill=black] (0,0) circle(2pt); \draw[fill=black] (150:1) circle(2pt); \draw[fill=black] (120:1) circle(2pt) (120:2) circle(2pt); \draw[fill=black] (90:1) circle(2pt) (90:2) circle(2pt) (90:3) circle(2pt); \draw[-latex] (0,0) -- (150:1); \draw[-latex] (0,0) -- (120:1); \draw[-latex] (120:1) -- (120:2); \draw[-latex] (0,0) -- (90:1); \draw[-latex] (90:1) -- (90:2); \draw[-latex] (90:2) -- (90:3); \draw[dotted, thick, shorten <=0pt, shorten >=0pt] (75:1.7) arc(75:60:1.7); \draw (-.7,-.7) node{$\mo{B}$}; \end{tikzpicture} \qquad\qquad \begin{tikzpicture}[scale=.8,shorten <=4pt, shorten >=4pt] \draw[fill=black] (0,0) circle(2pt); \draw[fill=black] (150:1) circle(2pt); \draw[fill=black] (120:1) circle(2pt) (120:2) circle(2pt); \draw[fill=black] (70:1) circle(2pt) (70:2) circle(2pt) (70:3) circle(2pt); \draw[-latex] (0,0) -- (150:1); \draw[-latex] (0,0) -- (120:1); \draw[-latex] (120:1) -- (120:2); \draw[-latex] (0,0) -- (70:1); \draw[-latex] (70:1) -- (70:2); \draw[-latex] (70:2) -- (70:3); \draw[thick, dotted] (70:3) -- (70:3.8); \draw[thick, dotted, shorten <=0pt, shorten >=0pt] (105:1.7) arc(105:85:1.7); \draw (-.7,-.7) node{$\mo{B}'$}; \end{tikzpicture} \caption{Two structures.} \label{fig:two-struc} \end{figure} \subsection{Hennessy-Milner Property for Modal Dual- and Bi-Intuitionistic Logic} \noindent We now dualise the result of Theorem \ref{thm:hm-int-box} in a similar way as in the proof of Theorem \ref{thm:hm-d-int} in order to obtain a Hennessy-Milner theorem for $\mathsf{Int}^{\partial}_{\Diamond}$ interpreted in $\mathsf{Int}^{\partial}_{\Diamond}$-models. This then leads to the main objective: a general Hennessy-Milner theorem for bi-intuitionistic modal logic with $n$ boxes and $m$ diamonds. We commence by extending Definition \ref{def:ikm-dual}, Lemma \ref{lem:sem-trans} and the translation $(\cdot)^t$ to the context of \emph{modal} bi-intuitionistic logic. Extend the involution $(\cdot)^t$ on $\mathsf{Bi\hyphen int}$ to an involution on $\mathsf{Bi\hyphen int}_{\Box\Diamond}$ by adding to the recursive definition: $$ (\Box\phi)^t = \Diamond\phi^t, \qquad (\Diamond\phi)^t = \Box\phi^t. $$ This is easily seen to restrict to bijections $(\cdot)^t : \mathsf{Int}_{\Box} \to \mathsf{Int}^{\partial}_{\Diamond}$ and $(\cdot)^t : \mathsf{Bi\hyphen int}_{\Box} \to \mathsf{Bi\hyphen int}_{\Diamond}$. Furthermore, for a model $\mo{M} = (X, \leq, Z, V)$ with a single modal relation $Z \in \{ R, S \}$, we define its \emph{dual} to be $\mo{M}^{\partial} = (X, \geq, Z, \dual{V})$, where $\dual{V}(p) = X \setminus V(p)$, for $p \in \Prop$. Then $\mo{M}^{\partial\partial} = \mo{M}$, and moreover we have: \begin{lem}\label{lem:1} The tuple $\mo{M} = (X, \leq, Z, V)$ is a (strictly condensed) $\lan{L}_{\Box}$-model if and only if $\dual{\mo{M}}$ is a (strictly condensed) $\lan{L}_{\Diamond}$-model. \end{lem} \noindent Models and their duals are related in the following manner. This extends Lemma \ref{lem:sem-trans}. \begin{lem}\label{lem:3} Let $\mo{M} = (X, \leq, R, V)$ be a strictly condensed $\lan{L}_{\Box}$-model and $\phi \in \mathsf{Bi\hyphen int}_{\Box}$ a formula. Then we have: $$ \mo{M}, x \Vdash \phi \iff \mo{M}^{\partial}, x \not\Vdash \phi^t. $$ \end{lem} \noindent We have now set ourselves up for the proof of the Hennessy-Milner theorem for dual-intuitionistic logic with an extra diamond-modality.
\begin{thm}\label{thm:hm-dual-int-dia} Let $\mo{M} = (X, \leq, S, V)$ and $\mo{M}' = (X', \leq', S', V')$ be two strictly condensed $\mathsf{Int}^{\partial}_{\Diamond}$-models such that $\leq$ and $\leq'$ are pre-image-compact and $S$ and $S'$ are image-compact. Then for all $x \in X$ and $x' \in X'$ we have $$ x \rightleftharpoons_{\mathsf{Int}^{\partial}_{\Diamond}} x' \iff x \leftrightsquigarrow_{\mathsf{Int}^{\partial}_{\Diamond}} x'. $$ \end{thm} \begin{proof} Let $B \subseteq X \times X'$ be the relation of logical equivalence. By Lemma \ref{lem:3}, $B$ is also the relation of logical equivalence with respect to $\mathsf{Int}_{\Box}$-formulae between $\mo{M}^{\partial}$ and $(\mo{M}')^{\partial}$. By assumption all relations in these dual models are image-compact, so it follows from Theorem \ref{thm:hm-int-box} that $B$ is an $\mathsf{Int}_{\Box}$-bisimulation between $\mo{M}^{\partial}$ and $(\mo{M}')^{\partial}$. An easy verification then shows that $B$ is an $\mathsf{Int}^{\partial}_{\Diamond}$-bisimulation between $\mo{M}$ and $\mo{M}'$. \end{proof} \noindent Finally, we obtain a Hennessy-Milner theorem for the modal bi-intuitionistic logic $\mathsf{Bi\hyphen int}_{n,m}$ interpreted in $\lan{L}_{n,m}$-models. This follows from Theorems \ref{thm:hm-int-box} and \ref{thm:hm-dual-int-dia}, using Lemma \ref{lem:inst} in a similar way as in the proof of Theorem \ref{thm:hm-bi-int}. \begin{thm}\label{thm:hm-mod-bi-int} Let $\mo{M} = (X, \leq, (R_i), (S_j), V)$ and $\mo{M}' = (X', \leq', (R_i'), (S_j'), V')$ be two strictly condensed $\lan{L}_{n,m}$-models. Furthermore assume that all relations (including $\leq$ and $\leq'$) are image-compact and additionally that $\leq$ and $\leq'$ are pre-image-compact. Then for all $x \in X$ and $x' \in X'$ we have $$ x \rightleftharpoons_{\mathsf{Bi\hyphen int}_{n,m}} x' \iff x \leftrightsquigarrow_{\mathsf{Bi\hyphen int}_{n,m}} x'. $$ \end{thm} \noindent The frames $\mo{B}$ and $\mo{B}'$ from Figure \ref{fig:two-struc}, equipped with trivial orders $\leq$ and $\leq'$, with $n = m = 1$ and with $R = S$ given by the edges, readily show that without such compactness assumptions logical equivalence between modal models does not in general imply bisimilarity. \section{Applications}\label{sec:app} \noindent We investigate several (bi-)intuitionistic modal logics found in the literature, and equip them with a notion of bisimulation accompanied by a Hennessy-Milner theorem. We consider (descriptive) $\Box$-models for the language $\mathsf{Int}_{\Box}$ introduced in \cite{WolZak98} in Section \ref{subsec:WZ}, and in Section \ref{subsec:BD} we look at various ways of interpreting $\mathsf{Int}_{\Box\Diamond}$ with a single relation for $\Box$ and $\Diamond$ (in contrast to the approach taken in Section \ref{sec:modal}, where each modality is interpreted via its own relation). In particular, this includes the well-known semantics for modal intuitionistic logic given by Fischer Servi \cite{Fis81}, and Plotkin and Stirling \cite{PloSti86}. In Subsection \ref{subsec:epistemic} we apply our results to intuitionistic epistemic logic \cite{JagMar16}. The knowledge operators in this logic behave like $\Box$-modalities. Additionally, the logic has a unary ``common knowledge'' operator $\ms{C}$, which behaves differently. The second half of this section is devoted to tense bi-intuitionistic logic. In Subsections \ref{subsec:tense1}, \ref{subsec:tense2} and \ref{subsec:tense-H} we investigate three different ways of defining its semantics.
The corresponding notion of bisimulation requires the relations interpreting the modalities to look both forward and backwards. In each of these cases, we give a Hennessy-Milner class. \subsection{Wolter/Zakharyaschev Models}\label{subsec:WZ} \noindent In \cite{WolZak98}, the authors introduce $\Box$-models as a semantics for $\mathsf{Int}_{\Box}$. These coincide with general strictly condensed $\lan{L}_{\Box}$-frames in the sense of Definitions \ref{def:modal-model} and \ref{def:modal-general}, with the additional property that the underlying order is a partial order (rather than a pre-order). That is: \begin{defn} A $\Box$-frame is a tuple $(X, \leq, R, A)$ such that \begin{itemize} \item $(X, \leq)$ is a partially ordered set; \item $R \subseteq X \times X$ is a relation satisfying $({\leq} \circ R \circ {\leq}) = R$; \item $A \subseteq \fun{Up}(X, \leq)$ is a collection of upsets containing $\emptyset$ and $X$ which is closed under $\cap, \cup, \mathrel{\mkern1mu\underline{\mkern-1mu \to\mkern-2mu}\mkern2mu }$ and ${\boxbar}$ (cf.~Definition \ref{def:modal-general}). \end{itemize} A $\Box$-frame is called \emph{descriptive} if $(X, \leq, A)$ is a descriptive intuitionistic Kripke frame \cite[Section 8.4]{ChaZak97} and $$ xRy \iff \forall a \in A (x \in {\boxbar} a \text{ implies } y \in a). $$ A $\Box$-model is a $\Box$-frame together with an admissible valuation $V : \Prop \to A$ of the proposition letters. \end{defn} \noindent Formulae in $\mathsf{Int}_{\Box}$ are interpreted as usual. Since $\Box$-models are simply special cases of strictly condensed $\mathsf{Int}_{\Box}$-models, we already have a truth-preserving notion of bisimulation. Moreover, Theorem \ref{thm:hm-int-box} gives rise to a Hennessy-Milner theorem for $\Box$-models, where image-compactness is now taken with respect to the general frame structure encompassed in the definition of a $\Box$-model. \begin{cor} Let $x$ and $x'$ be two states in two $\Box$-models all of whose relations are image-compact. Then $x \rightleftharpoons_{\mathsf{Int}_{\Box}} x'$ if and only if $x \leftrightsquigarrow_{\mathsf{Int}_{\Box}} x'$. \end{cor} \noindent In particular, this holds for all descriptive $\Box$-models. \begin{propn} Let $\mo{M} = (X, \leq, R, A)$ be a descriptive $\Box$-frame. Then $R$ is image-compact. \end{propn} \begin{proof} The descriptive intuitionistic Kripke frame $(X, \leq, A)$ underlying $\mo{M}$ can be viewed as an Esakia space $(X, \leq, \tau_A)$, where $\tau_A$ is the patch topology defined in Definition \ref{def:general} \cite{Esa74}. In particular this means that $(X, \tau_A)$ is a compact topological space. By definition, for any $x \in X$ the set $\{ y \in X \mid x \leq y \}$ is closed in $\tau_A$; since closed subsets of a compact space are compact, $\leq$ is image-compact. Furthermore, by definition of a descriptive $\Box$-frame we have $R[x] = \bigcap \{ a \in A \mid x \in {\boxbar} a \}$ and since this is the intersection of clopen sets, it is closed in $\tau_A$, hence compact. \end{proof} \subsection{Bo\v{z}i\'{c}/Do\v{s}en Models}\label{subsec:BD} \noindent In \cite{BozDos84}, the authors define a \emph{$\Box\Diamond$-model} to be a strictly condensed $\mathsf{Int}_{\Box}$-model $(X, \leq, R, V)$ which is simultaneously an $\mathsf{Int}_{\Diamond}$-model. That is, $(X, \leq)$ is a pre-order and $R$ is a relation on $X$ that satisfies $({\leq} \circ R \circ {\leq}) = R$ and $({\geq} \circ R) \subseteq (R \circ {\geq})$. These are used to interpret $\lan{Int_{\Box\Diamond}}$-formulae in the usual way.
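For a small concrete illustration (not needed in the sequel), take $X = \{ a, b \}$ with the pre-order generated by $a \leq b$, and let $R = \{ (a,b), (b,b) \}$, so that $$ {\leq} = \{ (a,a), (a,b), (b,b) \}, \qquad R = \{ (a,b), (b,b) \}. $$ A direct computation gives $({\leq} \circ R \circ {\leq}) = R$ and $({\geq} \circ R) = \{ (a,b), (b,b) \} \subseteq (R \circ {\geq})$, so together with any valuation into $\fun{Up}(X, \leq)$ this is a $\Box\Diamond$-model in which the single relation $R$ interprets both $\Box$ and $\Diamond$.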
It is straightforward to see that an $\mathsf{Int}_{\Box}$-bisimulation between $\Box\Diamond$-models preserves all formulae in $\lan{Int_{\Box\Diamond}}$, in particular also those involving $\Diamond$. Thus, if $x$ and $x'$ are two states in two $\Box\Diamond$-models with all image-compact relations, then we have a chain of implications: $$ x \rightleftharpoons_{\mathsf{Int}_{\Box}} x' \quad\Rightarrow\quad x \leftrightsquigarrow_{\mathsf{Int}_{\Box\Diamond}} x' \quad\Rightarrow\quad x \leftrightsquigarrow_{\mathsf{Int}_{\Box}} x' \quad\Rightarrow\quad x \rightleftharpoons_{\mathsf{Int}_{\Box}} x'. $$ This implies: \begin{cor} Let $x$ and $x'$ be two states in two $\Box\Diamond$-models with all image-compact relations. Then $x \leftrightsquigarrow_{\mathsf{Int}_{\Box\Diamond}} x'$ if and only if $x \rightleftharpoons_{\mathsf{Int}_{\Box}} x'$. \end{cor} \noindent We note that $\Box\Diamond$-models are special cases of the models used by e.g.~Fischer Servi and Plotkin and Stirling to interpret $\mathsf{Int}_{\Box\Diamond}$, see \cite[Section 1]{PloSti86} and \cite[Section 2]{Fis81}. We refer to these models as FS-models, introduced formally next. \begin{defn} An FS-model is a tuple $\mo{M} = (X, \leq, R, V)$ consisting of an intuitionistic Kripke model $(X, \leq, V)$ and a relation $R \subseteq X \times X$ that satisfies $(R \circ {\leq}) \subseteq ({\leq} \circ R)$ and $({\geq} \circ R) \subseteq (R \circ {\geq})$. \end{defn} \noindent In such a model, the interpretation of intuitionistic connectives and $\Diamond$ is as usual. However, if we interpret $\Box\phi$ as in Definition \ref{def:modal-model} we are no longer guaranteed an upset in $(X, \leq)$. This is remedied by putting $$ \mo{M}, x \Vdash \Box\phi \iff \text{for all $y \in X$, } x({\leq} \circ R)y \text{ implies } \mo{M}, y \Vdash \phi. $$ In the special case where \begin{equation}\label{eq:FS-sc} ({\leq} \circ R) \subseteq R, \end{equation} the interpretation of $\Box$ coincides with the one given in Definition \ref{def:modal-model}, i.e., without the additional quantification over $\leq$ in between. Moreover, if this is the case then $(X, \leq, R, V)$ is a strictly condensed $\Box\Diamond$-model. Therefore, we call an FS-model satisfying \eqref{eq:FS-sc} \emph{strictly condensed}. Then we have: \begin{cor} Let $x$ and $x'$ be two states in two strictly condensed FS-models with all image-compact relations. Then $x \leftrightsquigarrow_{\mathsf{Int}_{\Box\Diamond}} x'$ if and only if $x \rightleftharpoons_{\mathsf{Int}_{\Box}} x'$. \end{cor} \subsection{Intuitionistic Epistemic Logic}\label{subsec:epistemic} \noindent Intuitionistic epistemic logic describes the knowledge of a system of $n$ agents \cite{JagMar16}. The logical language used for this, $\lan{EK}$, is constructed from propositional variables, intuitionistic connectives, and additional unary modal operators: $\ms{K}_i$ for every $i \in \{ 1, \ldots, n \}$, and $\ms{C}$. The intuitive meaning of $\ms{K}_i\phi$ is ``agent $i$ knows that $\phi$'' and $\ms{C}\phi$ means that $\phi$ is common knowledge. This language can be interpreted in \emph{EK-models} \cite[Definitions 2 and 3]{JagMar16}. We give the definition of these models in a slightly reformulated way, so that the connection with $\mathsf{Int}_{\Box}$-models is easier to see. \begin{defn} An \emph{EK-model} is a tuple $(X, \leq, R_1, \ldots, R_n, V)$ consisting of an intuitionistic Kripke model $(X, \leq, V)$ and relations $R_i \subseteq X \times X$ satisfying $({\leq} \circ R_i) \subseteq R_i$.
The interpretation of intuitionistic connectives is as usual, and the interpretation of $\ms{K}_i$ is as for boxes: $$ \mo{M}, x \Vdash \ms{K}_i\phi \iff \text{for all $y \in X$, } xR_iy \text{ implies } \mo{M}, y \Vdash \phi. $$ The interpretation of $\ms{C}$ is best described via a new relation $R^*$. Let $R = R_1 \cup \cdots \cup R_n$ and let $R^*$ be the collection of all pairs $(x, y)$ such that $y$ is reachable from $x$ via a finite number of $R$-transitions. Then $$ \mo{M}, x \Vdash \ms{C}\phi \iff \text{for all $y \in X$, } xR^*y \text{ implies } \mo{M}, y \Vdash \phi. $$ \end{defn} Of course, EK-models are special cases of $\mathsf{Int}_{n,0}$-models and the interpretation of the $\ms{K}_i$ corresponds to the $n$ boxes in such a model. A straightforward verification shows that $\mathsf{Int}_{n,0}$-bisimulations also preserve the operator $\ms{C}$, so that we have: \begin{lem} Let $x$ and $x'$ be two states in two EK-models $\mo{M}$ and $\mo{M}'$ which are linked by an $\mathsf{Int}_{n,0}$-bisimulation. Then $x \leftrightsquigarrow_{\lan{EK}} x'$, that is, $x$ and $x'$ satisfy precisely the same $\lan{EK}$-formulae. \end{lem} \noindent Conversely, if two states $x$ and $x'$ in two EK-models are logically equivalent, then in particular they satisfy the same $\mathsf{Int}_{n,0}$-formulae, i.e., we have $x \leftrightsquigarrow_{\mathsf{Int}_{n,0}} x'$. If $\mo{M}$ and $\mo{M}'$ (viewed as $\mathsf{Int}_{n,0}$-models) are strictly condensed and all their relations are image-compact, then it follows from Theorem \ref{thm:hm-int-box} and Lemma \ref{lem:inst} that $x$ and $x'$ are linked by an $\mathsf{Int}_{n,0}$-bisimulation. By the previous lemma this in turn implies $x \leftrightsquigarrow_{\lan{EK}} x'$. Thus we have: \begin{cor} Let $x$ and $x'$ be two states in two strictly condensed EK-models all of whose relations are image-compact. Then $$ x \leftrightsquigarrow_{\lan{EK}} x' \iff x \rightleftharpoons_{\mathsf{Int}_{n,0}} x'. $$ \end{cor} \noindent Therefore $\mathsf{Int}_{n,0}$-bisimulations provide a suitable notion of bisimulation between EK-models. \subsection{Tense Bi-Intuitionistic Logic in Tense Models}\label{subsec:tense1}\label{subsec:tense} \noindent Tense bi-intuitionistic logic is obtained from the modal bi-intuitionistic logic $\mathsf{Bi\hyphen int}_{\Box\Diamond}$ by extending it with tense operators $\tdiamond, \tbox$ corresponding to $\Box$ and $\Diamond$, respectively. We call this language $\Tbiint = \mathsf{Bi\hyphen int}_{2,2}$. Classically, $\tdiamond$ is interpreted using the converse of the relation interpreting $\Box$. Since we assume no connection between $\Box$ and $\Diamond$, we get an additional tense operator $\tbox$ which is interpreted using the converse of the relation interpreting $\Diamond$. In this subsection we adapt $\mathsf{Bi\hyphen int}_{\Box\Diamond}$-models (Definition \ref{def:modal-model}) to allow interpretation of $\Tbiint$-formulae, i.e., we make sure that the truth-set of every formula is still up-closed. In the next two subsections we investigate two more ways to define semantics for tense bi-intuitionistic logic. If $R$ is a relation on $X$, we write $\breve{R} = \{ (x,y) \mid yRx \}$ for the converse relation. Let $(X, \leq, R, S, V)$ be a $\mathsf{Bi\hyphen int}_{\Box\Diamond}$-model. As stated, we want to use the converse relations $\breve{S}$ and $\breve{R}$ to interpret $\tbox$ and $\tdiamond$, respectively.
Therefore, a possible semantics for $\Tbiint$ is given by $\mathsf{Bi\hyphen int}_{2,2}$-models ${(X, \leq, R_1, R_2, S_1, S_2, V)}$ that satisfy $R_2 = \breve{S}_1$ and $S_2 = \breve{R}_1$. This identification leads to the additional coherence conditions $({\geq} \circ \breve{R}_1) \subseteq (\breve{R}_1 \circ {\geq})$, and similarly for $S_1$. Thus, we can also view such a model as a $\mathsf{Bi\hyphen int}_{1,1}$-model with additional coherence conditions. This is reflected in the following definition of a \emph{tense model}. \begin{defn}\label{def:tense-model} A \emph{tense model} is a tuple $(X, \leq, R, S, V)$ consisting of an intuitionistic Kripke model $(X, \leq, V)$ and two relations $R, S \subseteq X \times X$ satisfying $$ ({\leq} \circ R) = (R \circ {\leq}) \quad\text{and}\quad ({\geq} \circ S) = (S \circ {\geq}). $$ \end{defn} \noindent The interpretation of the tense operators in a tense model $\mo{M} = (X, \leq, R, S, V)$ is given by \begin{align*} \mo{M}, x \Vdash \tdiamond\phi &\iff \mo{M}, y \Vdash \phi \text{ for some $y$ with } yRx \\ \mo{M}, x \Vdash \tbox\phi &\iff \text{for all $y \in X$, } ySx \text{ implies } \mo{M}, y \Vdash \phi. \end{align*} Note that this corresponds precisely to the usual interpretation of box and diamond in the $\mathsf{Bi\hyphen int}_{1,1}$-model $(X, \leq, \breve{S}, \breve{R}, V)$. As a consequence, persistence still holds, i.e., the truth-set of every formula is up-closed in $(X, \leq)$. To define a \emph{tense bisimulation} between two tense models $(X, \leq, R, S, V)$ and $(X', \leq', R', S', V')$ we simply use the notion of a $\mathsf{Bi\hyphen int}_{2,2}$-bisimulation between $(X, \leq, R, \breve{S}, S, \breve{R}, V)$ and $(X', \leq', R', \breve{S}', S', \breve{R}', V')$ from Definition \ref{def:modal-bis}. Explicitly, this can be defined as follows: \begin{defn}\label{def:tense-bisim} By a \emph{tense bisimulation} between two tense models $\mo{M} = (X, \leq, R, S, V)$ and $\mo{M}' = (X', \leq', R', S', V')$ we mean a $\mathsf{Bi\hyphen int}$-bisimulation $B \subseteq X \times X'$ between the underlying intuitionistic Kripke models such that for all $(x, x') \in B$ and $Z \in \{ R, \breve{S}, S, \breve{R} \}$ we have: \begin{itemize} \item If $xZy$ then there exists $y' \in X'$ such that $x'Z'y'$ and $yBy'$; \item If $x'Z'y'$ then there exists $y \in X$ such that $xZy$ and $yBy'$. \end{itemize} The notion of \emph{tense bisimilarity} is defined as usual, and denoted $\rightleftharpoons_{\Tbiint}$. \end{defn} \noindent It follows from Proposition \ref{prop:adeq-modal} that $\Tbiint$-bisimilar states satisfy the same $\Tbiint$-formulae. We call a tense model $(X, \leq, R, S, V)$ \emph{strictly condensed} if $({\leq} \circ R \circ {\leq}) \subseteq R$ and $({\geq} \circ S \circ {\geq}) \subseteq S$. A straightforward verification shows that this is the case if and only if $({\leq} \circ \breve{S} \circ {\leq}) \subseteq \breve{S}$ and $({\geq} \circ \breve{R} \circ {\geq}) \subseteq \breve{R}$, so a tense model is strictly condensed if and only if the $\mathsf{Bi\hyphen int}_{2,2}$-model $(X, \leq, R, \breve{S}, S, \breve{R}, V)$ is strictly condensed in the sense of Definition \ref{def:modal-model}. We define (pre-)image-compactness of relations in a tense model $(X, \leq, R, S, V)$ as if it were a $\mathsf{Bi\hyphen int}_{\Box\Diamond}$-model.
As a corollary of Theorem \ref{thm:hm-mod-bi-int} we then obtain: \begin{cor}\label{cor:hm-tense} Let $\mo{M}$ and $\mo{M'}$ be strictly condensed tense models all of whose relations are both image-compact and pre-image-compact. Suppose $x \in \mo{M}$ and $x' \in \mo{M}'$. Then $$ x \leftrightsquigarrow_{\Tbiint} x' \iff x \rightleftharpoons_{\Tbiint} x'. $$ \end{cor} \noindent We leave to the reader the construction of counterexamples showing that the conditions of (pre-)image-compactness of the relations in Corollary \ref{cor:hm-tense} cannot be dropped. \subsection{Tense Bi-Intuitionistic Logic by Gor\'{e}, Postniece and Tiu}\label{subsec:tense2} \noindent An alternative semantics for $\Tbiint$ is introduced in \cite[Section 6]{GorPosTiu10}. The authors define a model, which we shall refer to as a \emph{GPT-model}, to be a tuple $(X, \leq, R, S, V)$ such that $(X, \leq, V)$ is an intuitionistic Kripke model and $R, S$ are relations on $X$ satisfying \begin{equation}\label{eq:a-tense} (R \circ {\leq}) \subseteq ({\leq} \circ R) \quad\text{and}\quad ({\geq} \circ S) \subseteq (S \circ {\geq}). \end{equation} The interpretation of the modalities is then given by \begin{align*} \mo{M}, x \Vdash \Box\phi &\iff \text{for all $y \in X$, } x({\leq} \circ R)y \text{ implies } \mo{M}, y \Vdash \phi \\ \mo{M}, x \Vdash \Diamond\phi &\iff \text{there exists } y \in X \text{ such that } xSy \text{ and } \mo{M}, y \Vdash \phi \\ \mo{M}, x \Vdash \tbox\phi &\iff \text{for all $y \in X$, } x({\leq} \circ \breve{S})y \text{ implies } \mo{M}, y \Vdash \phi \\ \mo{M}, x \Vdash \tdiamond\phi &\iff \text{there exists } y \in X \text{ such that } x\breve{R}y \text{ and } \mo{M}, y \Vdash \phi \end{align*} We can define a bisimulation between such models in the same way as in Definition \ref{def:tense-bisim} above. Such bisimulations are easily seen to preserve truth, despite the changed interpretation of the $\Box$-modalities. If a GPT-model $\mo{M} = (X, \leq, R, S, V)$ satisfies \begin{equation}\label{eq:a-tense-con} ({\leq} \circ R) \subseteq R, \qquad ({\leq} \circ \breve{S}) \subseteq \breve{S} \end{equation} then the interpretation of $\Box$ and $\tbox$ is the same as in Subsection \ref{subsec:tense}, i.e., a state satisfies $\Box\phi$ (resp.~$\tbox\phi$) if all $R$-successors (resp.~$\breve{S}$-successors) satisfy $\phi$. A GPT-model that satisfies \eqref{eq:a-tense-con} will be called \emph{strictly condensed}. Indeed, these are strictly condensed models in the sense of Subsection \ref{subsec:tense} above, because \begin{align*} ({\leq} \circ R \circ {\leq}) &\subseteq ({\leq} \circ {\leq} \circ R) &\text{(By \eqref{eq:a-tense})} \\ &\subseteq ({\leq} \circ R) &\text{(${\leq}$ is transitive)} \\ &\subseteq R &\text{(By \eqref{eq:a-tense-con})} \end{align*} and similarly $({\geq} \circ S \circ {\geq}) \subseteq S$. Since furthermore the interpretation of formulae is the same as for tense models, Corollary \ref{cor:hm-tense} now carries over to: \begin{cor} Let $\mo{M}$ and $\mo{M'}$ be strictly condensed GPT-models all of whose relations are both image-compact and pre-image-compact. Suppose $x \in \mo{M}$ and $x' \in \mo{M}'$. Then logical equivalence implies tense bisimilarity. \end{cor} \noindent As is the case for $\lan{L}_{n,m}$-models (see Proposition \ref{prop:strictify}), we can turn every GPT-model into a strictly condensed one by only modifying the relations $R$ and $S$.
\begin{propn} For every GPT-model $\mo{M} = (X, \leq, R, S, V)$ we can find a strictly condensed GPT-model $\mo{M}^+ = (X, \leq, R^+, S^+, V)$ whose underlying intuitionistic Kripke model remains unchanged and which satisfies for all $x \in X$ and $\phi \in \Tbiint$: $$ \mo{M}, x \Vdash \phi \iff \mo{M}^+, x \Vdash \phi. $$ \end{propn} \begin{proof} Define $R^+ = ({\leq} \circ R)$ and $S^+ = (S \circ {\geq})$. Then reflexivity and transitivity of $\leq$ prove $({\leq} \circ R^+) = R^+$ and $(R^+ \circ {\leq}) \subseteq ({\leq} \circ R^+)$. Besides, $({\geq} \circ S^+) \subseteq (S^+ \circ {\geq})$, and clearly $S^+ = (S \circ {\geq})$ implies $({\leq} \circ \breve{S}^+) \subseteq \breve{S}^+$. So $\mo{M}^+$ is indeed a strictly condensed GPT-model. We now prove that the theory of the individual states is unchanged, by induction on the structure of $\phi$. The only non-trivial cases are the ones involving the modalities. We show the cases $\Box\phi$ and $\Diamond\phi$. Their tense counterparts are similar. We have: \begin{align*} \mo{M}, x \Vdash \Box\phi &\iff x({\leq} \circ R)y \text{ implies } \mo{M}, y \Vdash \phi \\ &\iff x({\leq} \circ {\leq} \circ R)y \text{ implies } \mo{M}, y \Vdash \phi \\ &\iff x({\leq} \circ R^+)y \text{ implies } \mo{M}^+, y \Vdash \phi \\ &\iff \mo{M}^+, x \Vdash \Box\phi \end{align*} For the diamonds: \begin{align*} \mo{M}, x \Vdash \Diamond\phi &\iff \text{there exists } y \in X \text{ such that } xSy \text{ and } \mo{M}, y \Vdash \phi \\ &\iff \text{there exists } y \in X \text{ such that } x(S \circ {\geq})y \text{ and } \mo{M}, y \Vdash \phi \\ &\iff \text{there exists } y \in X \text{ such that } xS^+y \text{ and } \mo{M}^+, y \Vdash \phi \\ &\iff \mo{M}^+, x \Vdash \Diamond\phi \end{align*} The second ``iff'' holds by persistence: the direction from left to right is immediate; conversely, if $xSz \geq y$ and $\mo{M}, y \Vdash \phi$, then persistence implies $\mo{M}, z \Vdash \phi$. \end{proof} \subsection{Tense Bi-Intuitionistic Logic in $H$-Models}\label{subsec:tense-H} \noindent Lastly, we review another approach, taken in \cite{SteSchRye16,SanSte17}, where the authors assume additional axioms relating $\Box$ and $\Diamond$. In particular, in their semantics $\Diamond\phi$ is equivalent to $\rotatebox[origin=c]{180}{\reflectbox{$\neg$}}\Box\neg\phi$, where $\neg\phi = \phi \to \bot$ and $\rotatebox[origin=c]{180}{\reflectbox{$\neg$}}\phi = \top \bito \phi$. The interpreting structures they use are \emph{$H$-frames} \cite[Definition 10]{SteSchRye16}. These are precisely strictly condensed $\Box$-frames from \cite{BozDos84}, called strictly condensed $\lan{L}_{\Box}$-frames in our notation (cf.~Definition \ref{def:modal-model}). We view an $H$-frame ($H$-model) as a strictly condensed $\mathsf{Bi\hyphen int}_{\Box}$-frame ($\mathsf{Bi\hyphen int}_{\Box}$-model). Let $\mo{M} = (X, \leq, R, V)$ be an $H$-model. While $\Box$ and $\tdiamond$ are interpreted in the same way as in Subsection \ref{subsec:tense}, the interpretation of $\tbox$ and $\Diamond$ is given via the so-called \emph{left converse} of $R$, defined as ${\geq} \circ R \circ {\geq}$. Writing (suggestively) $S = ({\geq} \circ R \circ {\geq})$,% \footnote{This is the converse of ${\mathbin{\rotatebox[origin=c]{180}{$\curvearrowleft$}}} R$ in \cite{SteSchRye16}, which may seem odd.
But verifying $\llbracket \Diamond\phi \rrbracket = \llbracket \phi \rrbracket \oplus {\mathbin{\rotatebox[origin=c]{180}{$\curvearrowleft$}}} R = \{ x \in X \mid \exists y : y({\mathbin{\rotatebox[origin=c]{180}{$\curvearrowleft$}}} R)x \text{ and } y \in \llbracket \phi \rrbracket \} = \{ x \in X \mid \exists y : y({\leq} \circ \breve{R} \circ {\leq})x \text{ and } y \in \llbracket \phi \rrbracket \} = \{ x \in X \mid \exists y : x({\geq} \circ R \circ {\geq})y \text{ and } y \in \llbracket \phi \rrbracket \} $ shows that this is indeed how we interpret $\Diamond$. A similar verification shows that we get the correct interpretation for $\tbox$.} these modalities are again interpreted as usual, i.e., via \begin{align*} \mo{M}, x \Vdash \Diamond\phi &\iff \text{there exists } y \in X \text{ such that } xSy \text{ and }\mo{M}, y \Vdash \phi \\ \mo{M}, x \Vdash \tbox\phi &\iff \text{for all $y \in X$, } ySx \text{ implies } \mo{M}, y \Vdash \phi \end{align*} Therefore, setting $\ov{\mo{M}} = (X, \leq, R, S, V)$ yields a (strictly condensed) tense model $\ov{\mo{M}}$ in the sense of Definition \ref{def:tense-model} which satisfies $\mo{M}, x \Vdash \phi$ iff $\ov{\mo{M}}, x \Vdash \phi$. To see that $\ov{\mo{M}}$ is strictly condensed, note that we have ${\leq} \circ R \circ {\leq} = R$ by definition, and it follows from reflexivity and transitivity of $\leq$ that $$ ({\geq} \circ S \circ {\geq}) = ({\geq} \circ {\geq} \circ R \circ {\geq} \circ {\geq}) = ({\geq} \circ R \circ {\geq}) = S. $$ \noindent The obvious notion of bisimulation between $H$-models is: \begin{defn} An $H$-bisimulation between two $H$-models $(X, \leq, R, V)$ and $(X', \leq', R', V')$ is a $\mathsf{Bi\hyphen int}$-bisimulation $B$ between the underlying intuitionistic Kripke models that additionally is a $\Box$-zigzag and a $\tdiamond$-zigzag. (That is, both $R$ and $\breve{R}$ satisfy the zigzag conditions.) $H$-bisimilarity is denoted by $\rightleftharpoons_H$. \end{defn} \noindent In other words, $B$ is an $H$-bisimulation if and only if it is a $\mathsf{Bi\hyphen int}_{\Box\tdiamond}$-bi\-simu\-la\-tion between the $\mathsf{Bi\hyphen int}_{1,1}$-models $(X, \leq, R, \breve{R}, V)$ and $(X', \leq', R', \breve{R}', V')$. Besides, a straightforward verification shows that such an $H$-bisimulation between $\mo{M}$ and $\mo{M}'$ is also a tense bisimulation between $\ov{\mo{M}}$ and $\ov{\mo{M}}'$. Therefore, it preserves truth of all $\Tbiint$-formulae. For the converse, suppose $\mo{M} = (X, \leq, R, V)$ and $\mo{M}' = (X', \leq', R', V')$ are two $H$-models all of whose relations are image-compact and pre-image-compact. Then $(X, \leq, R, \breve{R}, V)$ and $(X', \leq', R', \breve{R}', V')$ are strictly condensed $\mathsf{Bi\hyphen int}_{1,1}$-models in the sense of Definition \ref{def:modal-model}. Moreover, they satisfy all preconditions of Theorem \ref{thm:hm-mod-bi-int}. If $x$ and $x'$ are two states in $\mo{M}$ and $\mo{M}'$ that satisfy the same $\Tbiint$-formulae, then in particular $x \leftrightsquigarrow_{\mathsf{Bi\hyphen int}_{\Box\tdiamond}} x'$, so by Theorem \ref{thm:hm-mod-bi-int} there is a $\mathsf{Bi\hyphen int}_{\Box\tdiamond}$-bisimulation $B$ linking them. But by definition $B$ is precisely an $H$-bisimulation. Summarising: $$ x \rightleftharpoons_H x' \quad\Rightarrow\quad x \leftrightsquigarrow_{\Tbiint} x' \quad\Rightarrow\quad x \leftrightsquigarrow_{\mathsf{Bi\hyphen int}_{\Box\tdiamond}} x' \quad\Rightarrow\quad x \rightleftharpoons_H x'.
$$ Thus we have proved: \begin{cor}\label{cor:hm-H-mod} Let $x$ and $x'$ be two states in two $H$-models all of whose relations are image-compact and pre-image-compact. Then $x \leftrightsquigarrow_{\Tbiint} x'$ if and only if $x \rightleftharpoons_H x'$. \end{cor} \begin{rem} One might wonder why we did not employ the results from Subsection \ref{subsec:tense} in order to obtain a Hennessy-Milner result for $H$-models. This would require stipulating $S = ({\geq} \circ R \circ {\geq})$ to be image-compact and pre-image-compact, on top of the preconditions of Corollary \ref{cor:hm-H-mod}. Indeed, this does not necessarily follow from $\leq$ and $R$ being (pre-)image-compact. The current approach circumvents this. \end{rem} \section{Image-Compactness Versus Saturation}\label{sec:ic-vs-sat} \noindent We detail the relation between image-compactness and notions of saturation for normal modal logic over a classical base, and for intuitionistic logic. \subsection{Modal Saturation in Classical Modal Logic} \noindent We can interpret classical modal logic, that is, the language $\mathsf{Int}_\Box$, in $\mathsf{Int}_{\Box}$-models where $\leq$ is equality, and recover the classical semantics. In particular, this implies that every subset is up-closed and intuitionistic negation is the same as classical negation. Indeed, such an $\mathsf{Int}_{\Box}$-model is simply a Kripke model in the usual sense. We write $\lan{ML}$ for the language of classical normal modal logic. If the orders $\leq$ are trivial, then the definition of an $\mathsf{Int}_{\Box}$-bisimulation reduces to a relation that preserves truth of proposition letters and satisfies ($\Box$-zig) and ($\Box$-zag). In other words, it is a Kripke bisimulation for classical modal logic in the usual sense, see e.g.~\cite[Definition 2.16]{BRV01}. In this setting there is a well-known Hennessy-Milner result for the class of so-called \emph{m-saturated models} \cite[Proposition 2.54]{BRV01}. We recall the definition of m-saturation. \begin{defn}\label{def:m-sat} Let $\mo{M} = (X, R, V)$ be a Kripke model and $a \subseteq X$. Then a set $\Sigma$ of formulae is called \emph{satisfiable} in $a$ if there exists a world $x \in a$ which satisfies each $\phi \in \Sigma$. A set $\Sigma$ is called \emph{finitely satisfiable} in $a$ if every finite subset of $\Sigma$ is satisfiable in $a$. The model $\mo{M}$ is called \emph{m-saturated} if for all $x \in X$ and $\Sigma \subseteq \lan{ML}$ it satisfies: \begin{center} If $\Sigma$ is finitely satisfiable in the set of successors of $x$,\\ then $\Sigma$ is satisfiable in the set of successors of $x$. \end{center} \end{defn} \noindent Our results subsume the Hennessy-Milner result for m-saturated models in the following sense: a Kripke model $(X, R, V)$ is image-compact if and only if it is m-saturated. This result, together with the notion of image-compact relations for Kripke frames, also appears in \cite{BonKwi95}. \begin{propn} Let $\mo{M} = (X, R, V)$ be a Kripke model. Then $\mo{M}$ is image-compact if and only if it is m-saturated. \end{propn} \begin{proof} First, suppose $\mo{M}$ is image-compact, and write $A = \{ \llbracket \phi \rrbracket^{\mo{M}} \mid \phi \in \lan{ML} \}$ for the collection of truth-sets, with $\tau_A$ the induced topology. Let $x \in X$ and let $\Sigma$ be a set of formulae that is finitely satisfiable in the set $R[x]$ of $R$-successors of $x$. Suppose towards a contradiction that $\Sigma$ is not satisfiable in $R[x]$. Then for each $y \in R[x]$ there is a $\phi \in \Sigma$ such that $\mo{M}, y \not\Vdash\phi$, hence $\{ \llbracket \neg\phi \rrbracket^{\mo{M}} \mid \phi \in \Sigma \}$ is an open cover of $R[x]$. Note that the truth set of every formula is clopen in $\tau_A$.
By compactness of $R[x]$ we then find a finite subset $\Sigma' \subseteq \Sigma$ such that $R[x] \subseteq \bigcup_{\phi \in \Sigma'} \llbracket \neg\phi \rrbracket^{\mo{M}}$. But that implies that the finite set $\Sigma'$ is not satisfiable in $R[x]$, a contradiction with the assumption that $\Sigma$ is finitely satisfiable. Conversely, suppose $\mo{M}$ is m-saturated. With $A$ as above, $(X, R, A, V)$ is clearly a general Kripke model. We prove that $R[x]$ is compact for every $x$. By the Alexander subbase theorem it suffices to prove that every open cover consisting of subbase elements has a finite subcover, and since $-A = A$ (because $X \setminus \llbracket \phi \rrbracket^{\mo{M}} = \llbracket \neg\phi \rrbracket^{\mo{M}}$ by classicality) this subbase consists exclusively of truth-sets of formulae. So suppose $R[x] \subseteq \bigcup_{\phi \in \Sigma} \llbracket \phi \rrbracket^{\mo{M}}$, for some set $\Sigma$ of formulae. Then clearly the set $\{ \neg\phi \mid \phi \in \Sigma \}$ is not satisfiable in $R[x]$, hence (since $\mo{M}$ is m-saturated) there must be a finite $\Sigma' \subseteq \Sigma$ such that $\{ \neg\phi \mid \phi \in \Sigma' \}$ is not satisfiable in $R[x]$. But that implies $R[x] \subseteq \bigcup_{\phi \in \Sigma'} \llbracket \phi \rrbracket^{\mo{M}}$, which gives the desired finite subcover. \end{proof} \noindent In \cite{BezFonVen10} the collection of \emph{descriptive} Kripke models was identified as a Hennessy-Milner class. If $(X, R, A, V)$ is a descriptive Kripke model, then $(X, \tau_A)$ is a Stone space. Moreover $R[x]$ is closed in $(X, \tau_A)$ for all $x \in X$, hence compact. Therefore, the Hennessy-Milner property for the collection of descriptive Kripke models also follows from our results. In \cite{Bou04}, Hennessy-Milner type results are formulated for so-called weak-strict languages. Such languages are interpreted in Kripke structures. One condition for obtaining such a result is that the models be SW-saturated (Definition 3.5.1 and Lemma 3.5.8 in {\it op.~\!cit.}), which they prove to be equivalent to the customary notion of modal saturation in Proposition 3.5.2. \subsection{Saturation for Intuitionistic Logic} \noindent In \cite{Pat97} several Hennessy-Milner properties for $\mathsf{Int}$-bisimulations on intuitionistic Kripke models are given. The strongest of these uses the notion of \emph{local saturation}, an adaptation of m-saturation from Definition \ref{def:m-sat}. \begin{defn} An intuitionistic Kripke model $\mo{M} = (X, \leq, V)$ is \emph{locally saturated} if for all $x \in X$ and disjoint sets of $\mathsf{Int}$-formulae $\Theta_s, \Theta_r$ the following holds: If for all finite subsets $\theta_s \subseteq \Theta_s$ and $\theta_r \subseteq \Theta_r$ there is a world $y \in {\uparrow}_{\leq}x$ such that $\mo{M}, y \Vdash \bigwedge\theta_s$ and $\mo{M}, y \not\Vdash \bigvee\theta_r$, then there is a world $z \in {\uparrow}_{\leq}x$ which satisfies every formula in $\Theta_s$ and refutes every formula in $\Theta_r$. \end{defn} \noindent It is shown in \cite[Theorem 21]{Pat97} that logical equivalence on a locally saturated intuitionistic Kripke model implies $\mathsf{Int}$-bisimilarity. We shall now show that an intuitionistic Kripke model is locally saturated if and only if it is image-compact.
Therefore, Theorem \ref{thm:hm-int} is equivalent to {\it loc.~\!cit.} \begin{propn}\label{prop:AP} An intuitionistic Kripke model $\mo{M} = (X, \leq, V)$ is locally saturated if and only if it is image-compact. \end{propn} \begin{proof} Suppose $\mo{M}$ is locally saturated and let $x \in X$. Define $A = \{ \llbracket \phi \rrbracket \mid \phi \in \mathsf{Int} \}$. Then clearly $(X, \leq, A)$ is a general frame. We will show that every cover of ${\uparrow}_{\leq}x = \{ y \in X \mid x \leq y \}$ consisting of subbasic opens in $\tau_A$ has a finite subcover. By the Alexander subbase theorem this then proves that ${\uparrow}_{\leq}x$ is compact in the topological space $(X, \tau_A)$. Let \begin{equation}\label{eq:open-cover} \bigcup_{i \in I} \llbracket \phi_i \rrbracket^{\mo{M}} \cup \bigcup_{j \in J} (X \setminus \llbracket \psi_j \rrbracket^{\mo{M}}) \end{equation} be an open cover of ${\uparrow}_{\leq}x$ and suppose towards a contradiction that it does not have a finite subcover. Then for every finite $I' \subseteq I$ and $J' \subseteq J$ there exists $y \in {\uparrow}_{\leq}x$ such that $y \notin \bigcup_{i \in I'} \llbracket \phi_i \rrbracket^{\mo{M}} \cup \bigcup_{j \in J'} (X \setminus \llbracket \psi_j \rrbracket^{\mo{M}}) $, i.e., $\mo{M}, y \Vdash \bigwedge_{j \in J'} \psi_j$ and $\mo{M}, y \not\Vdash \bigvee_{i \in I'} \phi_i$. Thus, setting $\Theta_s = \{ \psi_j \mid j \in J \}$ and $\Theta_r = \{ \phi_i \mid i \in I \}$, the precondition of local saturation at $x$ is satisfied. However, there is no single $y \in {\uparrow}_{\leq}x$ which satisfies every $\psi_j \in \Theta_s$ and refutes every $\phi_i \in \Theta_r$, because such a $y$ would not be covered by \eqref{eq:open-cover}. This contradicts the fact that $(X, \leq, V)$ is locally saturated. So the assumption that \eqref{eq:open-cover} has no finite subcover must be wrong, and we conclude that $\mo{M}$ is image-compact. Conversely, suppose $\mo{M}$ is not locally saturated. Then there exists $x \in X$ and collections of formulae $\Theta_s, \Theta_r$ such that for all finite subsets $\theta_s \subseteq \Theta_s$ and $\theta_r \subseteq \Theta_r$ we can find $y \in {\uparrow}_{\leq}x$ such that $\mo{M}, y \Vdash \bigwedge \theta_s$ and $\mo{M}, y \not\Vdash \bigvee \theta_r$, while there is no $x$-successor which satisfies all of $\Theta_s$ and refutes all formulae in $\Theta_r$. This means that $$ \bigcup \{ \llbracket \phi \rrbracket^{\mo{M}} \mid \phi \in \Theta_r \} \cup \bigcup \{ X \setminus \llbracket \psi \rrbracket^{\mo{M}} \mid \psi \in \Theta_s \} $$ covers ${\uparrow}_{\leq}x$ but has no finite subcover. Therefore $\mo{M}$ is not image-compact. \end{proof} \section{Conclusion and Further Research}\label{sec:conc} \noindent We have investigated the notions of \emph{image-compactness} and \emph{pre-image-com\-pact\-ness} for relational models that can be used to interpret classical, intuitionistic, dual-intuitionistic and bi-intuitionistic (modal) logic. These notions allowed an efficient formulation of Hennessy-Milner theorems for Kripke-style bisimulations between such models. In classical modal logic and intuitionistic (non-modal) logic, our results match well-known Hennessy-Milner results \cite[Proposition 2.54]{BRV01}, \cite[Theorem 21]{Pat97}, \cite[Corollary 3.9]{BezFonVen10}, while for modal (dual- and bi-)intuitionistic logic we have described previously unknown Hennessy-Milner classes.
In particular, the current approach generalises the results for (modal) bi-intuitionistic logic that were the subject of the predecessor \cite{GroPat19} of the current paper. There are many interesting directions for further research. Firstly, we have not addressed intuitionistic logic enriched with a diamond-modality, i.e., $\mathsf{Int}_{\Diamond}$, interpreted in $\mathsf{Int}_{\Diamond}$-models. Inspection of the proof of Theorem \ref{thm:hm-int-box} shows that the argument no longer goes through for diamonds. It would be interesting to investigate conditions for which $\leftrightsquigarrow_{\mathsf{Int}_{\Diamond}}$ implies $\rightleftharpoons_{\mathsf{Int}_{\Diamond}}$. Dually, this then gives rise to a Hennessy-Milner theorem for dual-intuitionistic logic with a box-modality. Secondly, there is the question of how to generalise this to $n$-ary box- and diamond-like operators (see e.g.~\cite[Definition 1.23]{BRV01}). These are interpreted via $(n+1)$-ary relations, i.e., $x \Vdash \Diamond(\phi_1, \ldots, \phi_n)$ if there exist $y_1, \ldots, y_n$ such that $(x, y_1, \ldots, y_n) \in S$ and $y_i \Vdash \phi_i$ for all $i \in \{ 1, \ldots, n \}$. We expect that techniques similar to the ones presented in this paper will give rise to Hennessy-Milner properties for this generalisation of normal modal logic. Furthermore, in \cite{Dav09} intuitionistic logic is interpreted in topological spaces. These are then equipped with an additional relation that is used to interpret modalities $\Box$ and $\Diamond$ and their tense counterparts. In case the underlying topological space is an Alexandrov space, and hence corresponds to a pre-order, the intuitionistic connectives are interpreted as usual, and the modalities as in \cite{Fis81}. It would be interesting to see whether notions of (pre-)image-compactness can be extended to this setting, and how they correspond to the notion of saturation given in \cite{Dav09}. Finally, we wonder whether the notion of image-compactness can be used or adapted to obtain Hennessy-Milner results for non-normal modal extensions of classical or (dual- or bi-)intuitionistic logic. In the case of monotone modal logic over a classical base \cite{Han03,HanKup04} this has been done in \cite{Cel08}. It would be interesting to see how this generalises to monotone modal intuitionistic logic. Other interesting candidates for similar investigations are conditional logic \cite{BalCin18,Wei19a,CiaLiu19} and instantial neighbourhood logic \cite{BenEA17,BezEnqGro20}. \paragraph{Acknowledgements} We would like to thank the anonymous referees for their comprehensive comments and suggestions. Specifically, the references to related work helped embed our paper more closely into the body of existing research. \section{Bibliography} \bibliographystyle{elsarticle-num}
{ "timestamp": "2021-05-06T02:09:12", "yymm": "2105", "arxiv_id": "2105.01855", "language": "en", "url": "https://arxiv.org/abs/2105.01855" }
\section{Introduction} \IEEEPARstart{T}{he} gain in image quality provided by time-of-flight (TOF) in positron emission tomography (PET) is directly linked to the coincidence time resolution (CTR) of the scanner~\cite{conti2011focus,SURTI201612,vandenberghe2016recent}. Recently, progress in PET detectors has made it possible to build a clinical scanner approaching a CTR of 200~ps~\cite{van2019performance}. Further advances can be expected since a CTR of 58~ps was recently achieved, albeit in a benchmark test using short crystals of LSO:Ce:0.4\%Ca~\cite{gundacker2020experimental}. Moreover, the roadmap towards achieving a CTR of 10~ps has been laid out and technological solutions have been discussed extensively~\cite{lecoq2020roadmap}. The challenge is huge and many pitfalls remain~\cite{schaart2020achieving}. As such, a better understanding of the benefits provided by ultra-fast TOF is of interest. For example, it was recently demonstrated that ultra-fast TOF resolution could provide a gain in spatial resolution by mitigating the blur induced by the detector size~\cite{toussaint2020improvement}. Our goal is to investigate how ultra-fast TOF can be exploited in the PET reconstruction scheme to yield further gain in image quality and robustness. Low counts acquisition remains a challenge for PET reconstruction~\cite{6878472,lim2018pet}. When the correction for random events is applied, the non-negativity constraint of the MLEM reconstruction scheme induces a bias in the evaluation of the projection space correction factors that is propagated in the reconstructed image. A solution proposed in~\cite{6878472} consists of replacing the usual Poisson distribution by a Gaussian distribution for projections with a low number of counts. The authors of~\cite{lim2018pet} propose a new reconstruction scheme in which the non-negativity constraint is shifted from the image space to the projection space. In both cases, negative values in the image space are permitted to circumvent the bias induced by the random estimation included in the projections with a low number of counts. While one could expect the random coincidences to dwindle with better TOF resolution, the coincidence time window will always be lower bounded by the subject size. Another indication of the limitation of the classical PET model for low counts acquisition is highlighted in~\cite{westerwoudt2014advantages}, where the authors showed that the TOF filtered back projection could outperform MLEM in terms of signal-to-noise ratio. Since low spatial frequencies converge faster than higher frequencies with the MLEM algorithm, its convergence rate depends on the size of the structures. Over-iteration with the MLEM algorithm also results in noisy images~\cite{qi2006iterative}. Noise can thus contaminate larger structures before smaller ones can reach their optimal contrast. This behavior compels users to arbitrarily terminate the reconstruction process early, based on the structures of interest. This could be exacerbated with ultra-fast TOF since the convergence rate increases with TOF resolution~\cite{SURTI201612}. A regularization term can be included in the reconstruction model to mitigate the noise~\cite{qi2006iterative}. However, the optimal solution of a low counts reconstruction using a fine image discretization includes high frequency structures, which can impede the efficiency of a regularization scheme.
In PET models, the physical property of a point source emission following a uniform distribution over the 3D sphere is encoded in the system matrix. As such, the forward model attributes a predetermined ratio of the counts in a voxel to all its projections, irrespective of the observed data. We propose to extend the parametrization of the reconstruction scheme to also explicitly include the projection domain in order to circumvent the bias induced by statistical noise in low counts reconstruction. The resulting number of variables is an order of magnitude larger than in the classical approach. Nevertheless, we hypothesize that with ultra-fast TOF, the proposed approach can be well-defined and thus provide a better description of the underlying physical processes for the reconstruction of low counts acquisition. An implementation of this new approach was investigated for low counts acquisitions to validate the hypothesis. The image quality achieved by this model was studied with a simulated 2D Hot Spot phantom. The MLEM algorithm was used as a baseline. Overall, the results of this study support the hypothesis that the proposed approach is of interest for low counts reconstruction with ultra-fast TOF. \section{Parametrization of the Angular Distribution of Emission (PADE) Model} \label{sec:model} In this section, we propose an implementation of this new approach to PET reconstruction. Correction factors associated with attenuation, detector efficiency, randoms and scatters are omitted for simplicity. The Python/MATLAB convention was used when a subset of the dimensions of a matrix/tensor is fixed, e.g., $P_{:,i}^t$ is equivalent to $\{P_{j,i}^t| j \in [1, J]\}$. Also, the term pixel was used interchangeably with voxel since a part of the model is described for 2D reconstruction. Let $y_j^t$ be the number of coincidences observed in projection $j$ at the TOF bin $t$ and $P$ the TOF system matrix of a scanner. The likelihood model consists of solving $y_j^t\sim\textrm{Poisson}( \sum_i P_{j,i}^t \lambda_i), \forall j, t$ with $\lambda_i$ being the number of coincidences originating from pixel $i$. The expected contribution of pixel $i$ to projection $j$ and TOF bin $t$ for a given estimate $\lambda$ is $P_{j,i}^t \lambda_i$. To fully exploit ultra-fast TOF resolution, the TOF information needs to be finely discretized. This will result in $y$ being sparse and low in counts, which makes its approximation with $\sum_i P_{j,i}^t \lambda_i$ highly prone to bias. We propose to extend the parametrization of the reconstruction model to include the projection space. Let $\phi_{j, i}$ define the number of coincidences emitted from pixel $i$ and observed in projection $j$. In that case, $\lambda_{i} = \sum_{j}{\phi_{j, i}}$ and the expected contribution of $\phi_{j,i}$ to $y_j^t$ is $Q_{j,i}^{t} \phi_{j, i}$, where $Q_{j,i}^{t}$ is the probability of a coincidence originating from pixel $i$ and observed in projection $j$ to be observed in TOF bin $t$. Consequently, $Q_{j,i}^{t} R_{j,i} \approx P_{j,i}^t $ where $R$ is the TOF-less system matrix of the scanner. Introducing this parameterization in the log-likelihood model results in \begin{equation}\label{eq:likelihoodExt} \mathcal{L}(\phi) = \sum_{jit} Q_{j, i}^{t} \phi_{j, i} - \sum_{jt} y_j^t \ln\left(\sum_i Q_{j, i}^t \phi_{j, i} \right) \end{equation} which is the data fit term for the Parametrization of the Angular Distribution of Emission (PADE) model, explicitly introducing the emission angular distribution in projection space.
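For concreteness, the data-fit term \eqref{eq:likelihoodExt} can be evaluated in a few lines of NumPy. The sketch below is only illustrative: it uses dense (hypothetical) arrays \texttt{Q} of shape $(J, I, T)$, \texttt{phi} of shape $(J, I)$ and \texttt{y} of shape $(J, T)$, plus a small constant guarding the logarithm, whereas a practical implementation would exploit the sparsity discussed next.
\begin{verbatim}
import numpy as np

def pade_data_fit(phi, Q, y, eps=1e-12):
    # Expected counts per (projection, TOF bin):
    #   ybar[j, t] = sum_i Q[j, i, t] * phi[j, i]
    ybar = np.einsum('jit,ji->jt', Q, phi)
    # First term of Eq. (1): total expected counts
    total = ybar.sum()
    # Second term: Poisson log-likelihood of the observed
    # TOF histogram y (eps guards the log in empty bins)
    return total - np.sum(y * np.log(ybar + eps))
\end{verbatim}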
$\phi$ can be defined as a sparse matrix along the projection dimension since a given pixel intersects only a very small subset of the projections spanning the entire image domain. Thus, the proposed extension does not multiply the number of variables by the total number of projections defined in the scanner but does provide a significant increase in degrees of freedom. Also, one could note that the proposed parametrization corresponds to the latent variables used in the MLEM formalism when applied to PET reconstruction. A core property of the PET system model is missing in~\eqref{eq:likelihoodExt}: the isotropic nature of the emission, which we refer to as the uniform distribution of emission (UDE) property. This is usually enforced by the system matrix as $R_{j,i} \lambda_i = \phi_{j,i}, \forall j,i$. Let $\mathcal{V}_i\left( \phi \right)$ be a metric that numerically evaluates how closely pixel $i$ follows the UDE property. In this study, the UDE property is enforced as a penalization term defined as \begin{equation}\label{eq:geoPenal} \mathcal{U}(\phi) = \sum_{i} \omega_i \mathcal{V}_i\left( \phi \right) \end{equation} where $\omega_i$ is the weight applied to pixel $i$. \begin{figure} \centering \subfloat[Partition of the emission angles of a point source over projections]{% \label{fig:emissionPartInScanner}\includegraphics[width=0.45\linewidth]{emissionPartInScanner.pdf}} \quad \subfloat[Cones of emission and two examples of $j_i^{\textrm{ref}}$ for the point source]{% \label{fig:coneOfEmission}\includegraphics[width=0.45\linewidth]{coneOfEmission.pdf}}\\ \subfloat[Polar partition of the emission angles relative to the brown $j_i^{\textrm{ref}}$]{% \label{fig:vizCircBothPartition}\includegraphics[width=0.45\linewidth]{vizCircBothPartition.pdf}} \quad \subfloat[Visualization of three counts in the previous partition]{% \label{fig:vizCircBothPartitionExample}\includegraphics[width=0.45\linewidth]{vizCircBothPartitionExample.pdf}} \caption{% Representation of the cyclic nature of PET emission and the impact of $j_i^{\textrm{ref}}$ for an octagonal-shaped scanner with one detector per panel. The white circles in \protect\subref{fig:emissionPartInScanner} were added to highlight that the cones of emission in \protect\subref{fig:coneOfEmission} are obtained by extending the lines that connect the detectors' extremities to the point source, represented by the dashed green lines. The resulting eight cones of emission were colored differently for easier visualization. The brown and red lines indicate two possible $j_i^{\textrm{ref}}$ for ordering the valid projections. \protect\subref{fig:vizCircBothPartition} shows the partition of $[0, 2 \pi)$ from the cones of emission of the point source, when ordered using the brown $j_i^{\textrm{ref}}$. The partition of $[0, \pi)$ is sufficient to define the cones of emission due to the anti-parallel nature of PET emission. \protect\subref{fig:vizCircBothPartitionExample} shows an example of three emission angles that correspond to one count for $\tilde{\jmath} = \{1, 5, 8\}$ and zero everywhere else for the brown $j_i^{\textrm{ref}}$. For the red $j_i^{\textrm{ref}}$, the counts would be in $\tilde{\jmath} = \{1, 4, 5\}$. } \label{fig:visuUde_geo} \end{figure} The following implementation of $\mathcal{V}\left( \right)$ was inspired by Section 4.4.1 of~\cite{mardia2009directional} where it is pointed out that the uniform distribution is the only cyclic distribution that is invariant under rotation.
We defined a weighted sum, which we will refer to as the momentum, as a numerical characterization of the pixel projection-wise distribution. This metric was chosen over the mean since the latter requires a division by the number of elements in the distribution, $\sum_j \phi_{j,i} $, which would make $\mathcal{V}\left( \right)$ more complex and the resulting PADE model harder to solve. A visual representation of the key steps behind the computation of the metric is provided in Fig.~\ref{fig:visuUde_geo} and Fig.~\ref{fig:visuUde_num}. The goal behind these steps is to define an ordering for the projections, circumvent the discrete sampling of the projection domain and modify the emission angular space of each pixel such that it is centered at zero. A partition of the emission angular space is obtained by using the lines that connect the detectors' extremities to a pixel, as illustrated in Fig.~\ref{fig:emissionPartInScanner}. We refer to a part of that partition as a cone of emission, which is, in a sense, the dual of the TOF-less tube of response ($R_{j, :}$). The cones of emission of the pixel ($R_{:, i}$) in Fig.~\ref{fig:emissionPartInScanner} are highlighted in Fig.~\ref{fig:coneOfEmission}. Let $j_i^{\textrm{ref}}$ be a projection of reference for pixel $i$ from which the valid projections, i.e. $\{j | R_{j,i} \neq 0.0\}$, can be ordered. The choice of $j_i^{\textrm{ref}}$ emulates a rotation over the emission angular space, as shown with the two examples of $j_i^{\textrm{ref}}$ given in Fig.~\ref{fig:coneOfEmission}. The resulting polar partition of the emission angular space for the brown $j_i^{\textrm{ref}}$ is shown in Fig.~\ref{fig:vizCircBothPartition}. An example of the distribution obtained for three emissions originating from the pixel is illustrated in Fig.~\ref{fig:vizCircBothPartitionExample}. The first line of Fig.~\ref{fig:udePenalViz_exA} and~\ref{fig:udePenalViz_exB} shows the partition of the emission angular space obtained from the example in Fig.~\ref{fig:visuUde_geo}, respectively for the brown and red $j_i^{\textrm{ref}}$. Its discretization was defined as the middle position of each part of the partition resulting from the choice of $j_i^{\textrm{ref}}$. Let $D(i, j_i^{\textrm{ref}})$ be a vector that holds that discretization and let $E(i, j_i^{\textrm{ref}}) = D(i, j_i^{\textrm{ref}}) / \pi - 0.5$ be the discretization employed by the metric, illustrated in the second line of Fig.~\ref{fig:udePenalViz_exA} and~\ref{fig:udePenalViz_exB}. The subscript $\tilde{\jmath}$ will be used for $E(i, j_i^{\textrm{ref}})$ to specify that the order of the projections differs from $j$ and that only a subset of the projections are defined for a given pixel. Let $\mathcal{C}_i(\tilde{\jmath}, j_i^{\textrm{ref}})$ be a function that provides the projection index of the $\tilde{\jmath}$-th projection relative to $j_i^{\textrm{ref}}$ for pixel $i$. Thus, the momentum of the distribution $\phi_{:, i}$ is \begin{equation}\label{eq:geoPenalMetricInnerLoop} \mathcal{W}_i\left( \phi, j_i^{\textrm{ref}} \right) = \sum_{\tilde{\jmath}} \left( [E(i, j_i^{\textrm{ref}})]_{\tilde{\jmath}} ~ \phi_{j,i} \right) \end{equation} with $j = \mathcal{C}_i(\tilde{\jmath}, j_i^{\textrm{ref}})$. This is evaluated over several projections of reference to incorporate its variation over rotations.
Thus, the UDE metric employed is \begin{equation}\label{eq:geoPenalMetric} \mathcal{V}_i\left( \phi \right) = B^{-1} \sum_{j_i^{\textrm{ref}}} \left( \mathcal{W}_i\left( \phi, j_i^{\textrm{ref}} \right) \right)^2 \end{equation} where $B$ is the number of $j_i^{\textrm{ref}}$ considered. If a distribution follows exactly the UDE property, the momentum will be zero for all projections of reference. The example provided in Fig.~\ref{fig:visuUde_num} shows a case where the UDE metric values vary with the choice of $j_i^{\textrm{ref}}$: the value is almost zero for the brown $j_i^{\textrm{ref}}$ (Fig.~\ref{fig:udePenalViz_exA}), contrary to the value obtained with the red one (Fig.~\ref{fig:udePenalViz_exB}). \begin{figure} \centering \subfloat[Computation of $\mathcal{W}_i\left( \phi, j_i^{\textrm{ref}} \right)$ for the brown $j_i^{\textrm{ref}}$]{% \label{fig:udePenalViz_exA}\includegraphics{constUnif_exA.pdf}}\\ \subfloat[Computation of $\mathcal{W}_i\left( \phi, j_i^{\textrm{ref}} \right)$ for the red $j_i^{\textrm{ref}}$]{% \label{fig:udePenalViz_exB}\includegraphics{constUnif_exB.pdf}} \caption{Visual representation of the process behind~\eqref{eq:geoPenalMetricInnerLoop} using the example in Fig.~\ref{fig:visuUde_geo}. \protect\subref{fig:udePenalViz_exA} represents the case of the brown $j_i^{\textrm{ref}}$ in three steps. The first line shows the same partition as Fig.~\ref{fig:vizCircBothPartition} but limited to $[0, \pi)$. The second line shows the resulting $E(i, j_i^{\textrm{ref}})$, represented by lines colored to their corresponding projection. The third line shows the bins where $\phi_{j,i} = 1$ as green lines and the brown arrow shows the result of $\mathcal{W}_i\left( \phi, j_i^{\textrm{ref}} \right)$. \protect\subref{fig:udePenalViz_exB} is the same as the previous one, except with the red $j_i^{\textrm{ref}}$. Note that the position of the resulting arrow differs between the two choices of $j_i^{\textrm{ref}}$ and that the position for the red $j_i^{\textrm{ref}}$, which is correctly positioned, illustrates the counter-intuitive aspect of using the momentum instead of the mean in~\eqref{eq:geoPenalMetricInnerLoop}. } \label{fig:visuUde_num} \end{figure} One can expect that the UDE penalization term will be more complex for 3D reconstructions since emission angles can no longer be represented on a circle. However, most scanners are shaped as a stack of rings and thus their sampled emission space will also be a stack of cylindrical strips along one of the axes of the 3D sphere. The resulting sampling scheme can be directly mapped to a 2D uniform space if solid angles are taken into account. Then, the methodology defined previously should be adequate as long as the move from a linear to a planar domain is taken into account. Since PET sampling schemes usually exclude two rather large spherical caps, we expect that the cyclic nature of the data might only be usable in the transverse plane. One of the convenient properties of the MLEM algorithm is the stability of the expected number of coincidences over all iterations~\cite{qi2006iterative}. Let $\lambda^k$ be the image obtained after $k$ iterations of MLEM, then $\sum_{t,j,i} P_{j,i}^{t}\lambda_i^k = \sum_{t,j} y_j^t, \forall k > 1$. However, this property needs to be incorporated in the reconstruction model for general solvers. It could be enforced as a constraint, with a penalization term or within the solver update scheme.
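Before turning to this last property, the UDE metric of \eqref{eq:geoPenalMetricInnerLoop} and \eqref{eq:geoPenalMetric} can be made concrete with a minimal sketch for a single pixel. The (hypothetical) arrays \texttt{E} and \texttt{order}, encoding the discretization $E(i, j_i^{\textrm{ref}})$ and the ordering map $\mathcal{C}_i$ for $B$ projections of reference, are assumed to be precomputed from the scanner geometry.
\begin{verbatim}
import numpy as np

def ude_metric(phi_i, E, order):
    # phi_i : counts of pixel i over its valid projections
    # E     : (B, J_i) discretization E(i, j_ref), one row
    #         per projection of reference
    # order : (B, J_i) indices into phi_i realizing the
    #         ordering C_i(., j_ref) for each reference
    B = E.shape[0]
    total = 0.0
    for b in range(B):
        # Momentum of Eq. (3) for this projection of reference
        W = np.dot(E[b], phi_i[order[b]])
        total += W ** 2
    return total / B  # Eq. (4)
\end{verbatim}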
To preserve the expected number of coincidences, we used a penalization term defined as \begin{equation}\label{eq:countsPenal} \mathcal{H}(\phi) = \left( \sum_{t,j,i} Q_{j,i}^t \phi_{j,i} - \sum_{t,j} y_j^t \right)^2. \end{equation} Thus, the general form of the PADE model that was explored was \begin{equation}\label{eq:padeModel} \begin{aligned} &\min_{\phi} & \qquad & \mathcal{L}\left(\phi\right) + \gamma_1 \mathcal{U}\left(\phi\right) + \gamma_2 \mathcal{H}\left(\phi\right) \\ &\text{subject to} & & \phi \geq 0 \end{aligned} \end{equation} where $\gamma_1$ is the weight for the UDE penalization term and $\gamma_2$ the weight for the expected number of coincidences penalization term. Both weights need to be positive with the choice of $\mathcal{U}\left(\phi\right)$ and $\mathcal{H}\left(\phi\right)$ presented previously. Compared to the classical PET log-likelihood model, the model described in~\eqref{eq:padeModel} relaxes the interpretation of the UDE property by replacing the relation $R_{j,i} \lambda_i = \phi_{j,i}$ with the penalization term $\mathcal{U}\left(\phi\right)$. This new model requires more care to solve than the classical PET log-likelihood model. For example, the MLEM inherently imposes non-negativity constraints on $\lambda$. This is not the case for all general solvers and it will thus need to be taken into account in the choice of the solver. Another example is the calibration of the hyper-parameters $\gamma_1$ and $\gamma_2$. The values used for these parameters will affect the quality of the solutions obtained and these values are most likely data-dependent. \section{ Simulation setup } \label{sec:meth} The goal of this study was to show that the PADE approach is a legitimate candidate for low counts reconstruction. Hence, the methodology was built to compare the image quality achieved by the previously proposed PADE implementation to the performance of the MLEM algorithm on low counts acquisition without including further considerations. \subsection{ Data simulation } \label{subsec:dataSimul} Version 8.0 of the Geant4 Application for Tomographic Emission (GATE)~\cite{jan2004gate} was used to simulate data acquisitions from a fictive PET ring scanner. The sources were defined as back-to-back, which means that positron range and annihilation photon acolinearity were not simulated. The emission direction of the sources was limited to the 2D plane of the scanner. Only photoelectric processes were enabled for the annihilation photons; hence, the datasets do not contain scatter coincidences. Also, the emission rate of the sources was chosen such that random coincidences would be negligible. The 2-D ring camera was shaped as a regular polygon with 40 sides and an inner diameter of $\sim$80~cm. Each panel was 64~mm in size and consisted of 8 detectors of 8~mm in width. The scanner thus had 320 detectors, implying that each image pixel was intersected by at least 320 tubes of response (i.e. valid projections). The detector length was fixed to 0.1~mm, making the blur induced by depth of interaction negligible. The detectors were 4~mm wide along the axial axis and it was assumed that they had a direct readout without light-sharing coding. The CTR was fixed to $\sim$13~ps at full width at half maximum (FWHM), resulting in a spatial resolution of 2~mm along projections. A custom Hot Spot phantom with background was created for this study. Its main body was a cylinder of 136~mm in diameter and 4~mm in height. The hot spot diameters were 3.2, 4.8, 6.4, 7.9, 9.5 and 11.1~mm.
The activity in the phantom was defined such that the contrast between the spots and the background would be four. The simulation was repeated ten times, each resulting in a dataset of around 80,000 coincidences. \subsection{ Image reconstruction } \label{subsec:methRecon} The image domain was a $16\times16$~cm$^2$ plane discretized uniformly in $128 \times 128$ pixels, resulting in an in-plane width of 1.25~mm. A total of 8,544 projections intersected the image domain. The TOF information was discretized uniformly in 128 bins, resulting in spatial bins of 1.82~mm in width. Thus, the histograms consisted of 1,093,632 elements and most of them had zero coincidences. The TOF, $P$, and TOF-less, $R$, system matrices and the pure-TOF matrix, $Q$, were precomputed. $R_{j,i}$ was approximated as the geometric probability of a point source centered in voxel $i$ emitting into the tube of response of projection $j$. The TOF response function of a projection was assumed invariant across its tube of response and was modeled as a 1D Gaussian of 2~mm FWHM along it. $Q_{j,i}^t$ was approximated as the result at voxel $i$ of the convolution of the TOF response function with the rectangular function associated to the TOF bin $t$. $P^t_{j,i}$ was defined as $Q^t_{j,i} R_{j,i}, \forall t,j,i$. For all three matrices, the image domain was oversampled three times in both dimensions of the plane and the mean was taken. The L-BFGS-B~\cite{2020SciPy-NMeth,zhu1997algorithm} algorithm with non-negativity constraints was employed to solve the PADE model. A reduction of the number of variables was applied following the idea that variables that have a null probability of being associated with any coincidences are irrelevant. Thus, the constraints were modified from $\phi_{j,i} \geq 0, \forall j,i$ to \begin{equation}\label{eq:constraints} \begin{cases} \phi_{j,i} = 0 & \text{if } \sum_{t} y_j^t P^t_{j,i} = 0 \\ \phi_{j,i} \geq 0 & \text{otherwise} \end{cases} \end{equation} In an ultra-fast TOF and low counts setting, a significant portion of the variables can be deactivated with this approach ($\approx$70\% in this study). The weights $\omega$ were defined in two steps. First, it would be detrimental to penalize pixels that have a low value since the UDE property cannot be evaluated accurately in those cases. Second, the strength of the penalty applied on a pixel should be positively correlated with its value. We chose to set the weights at each iteration as the value of the pixels of the previous iteration, which was inspired by the approach used in the OSL-MLEM algorithm~\cite{green1990bayesian}. Thus, the pixel-dependent weights were defined as \begin{equation}\label{eq:omegaWeight} \omega_i^k = \begin{cases} 0 & \text{if } \sum_j \phi_{j,i}^0 < 9.0 \\ \sum_j \phi_{j,i}^{k-1} & \text{otherwise.} \end{cases} \end{equation} Our initial tests had shown that the proposed model did not perform well when it was initialized with a uniform image. This might have been due to the update scheme of $\omega$, which results in the objective function being drastically modified at each iteration. However, we observed that the MLEM model could produce an adequate initialization for the proposed model, if stopped early. As such, a low iteration MLEM reconstruction, $\lambda^k$ with $k$ being the number of iterations, was employed to build an initial estimate. Since $\lambda^k$ is only defined in the image domain, the initial estimate, $\phi^0$, was generated using the TOF-less system matrix.
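Before detailing the initial estimate further, the overall optimization can be summarized in a short sketch. The reading below, with the weights $\omega$ of \eqref{eq:omegaWeight} refreshed between short bound-constrained L-BFGS-B solves, is only one possible implementation; the names \texttt{objective}, \texttt{grad} and \texttt{pixel\_sums} are hypothetical placeholders for routines built along the lines of the previous sketches.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def solve_pade(phi0, objective, grad, pixel_sums,
               n_outer=10, n_inner=25):
    # phi0 holds the active variables only; variables with a
    # null probability of observation (Eq. (7)) are removed
    phi = phi0.copy()
    low = pixel_sums(phi0) < 9.0  # pixels with too few counts
    for _ in range(n_outer):
        # Omega update of Eq. (8): previous pixel values,
        # zeroed where the UDE property cannot be evaluated
        omega = pixel_sums(phi)
        omega[low] = 0.0
        res = minimize(objective, phi, args=(omega,),
                       jac=grad, method='L-BFGS-B',
                       bounds=[(0.0, None)] * phi.size,
                       options={'maxiter': n_inner})
        phi = res.x
    return phi
\end{verbatim}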
As for the initial estimate, a modified version of the $R$ matrix was employed in order to satisfy~\eqref{eq:constraints}. Let $\widetilde{R}_{j,i}$ be equal to $R_{j,i}$ if $\sum_t y_j^t P^t_{j,i} \neq 0.0$ and 0.0 otherwise. The matrix was then normalized such that $\sum_j \widetilde{R}_{j,i} = \sum_j R_{j,i}, \forall i$. Thus, the PADE model was initialized with \begin{equation}\label{eq:initialization} \phi_{j,i}^0 = \widetilde{R}_{j,i} \lambda_i^k, \forall j,i. \end{equation} The MLEM iteration with the minimum mean squared error relative to the simulation groundtruth, i.e., the true number of emissions per pixel, was chosen; on average over all repetitions, this was the 6$^{\textrm{th}}$ iteration. An advantage of using an image obtained from the MLEM for initialization is that it has the correct number of expected coincidences. Since the number of valid projections was at least 320 for all pixels, the computation of the UDE metric over all possible $j_i^{\textrm{ref}}$ was computationally expensive. For this study, 30 projections of reference were used for each pixel such that they uniformly sampled their respective projection space. \subsection{ Comparison of the models } \label{subsec:metricComparaison} The images obtained with the TOF-PET log-likelihood MLEM reconstruction (\textit{MLEM}), the direct backprojection of the data with the TOF Gaussian kernel (\textit{Backprojection}) and two versions of the PADE model were compared in terms of contrast and noise. The first PADE version (\textit{PADE}$_{\textrm{v0}}$) was defined with the weights $(\gamma_1, \gamma_2)$ that offered the best performance given the choice of metrics for this study. The second PADE version was defined with $\gamma_1 = 0.0$ and the minimum value of $\gamma_2$ ensuring a divergence in the number of expected coincidences lower than 1\%. This version will be referred to as \textit{Extended} since it is the TOF-PET log-likelihood model extended to the new parameterization. The four models were compared using the contrast recovery coefficient (CRC) and the coefficient of variation (COV) for the five smallest spots. The recovery ratio and COV of the background, i.e., the body of the phantom excluding the spots, were also studied. The true mean value of the spots and the background was extracted from the groundtruth of each simulation. The pixels associated with a given region of interest were extracted using the groundtruth, with pixels subjected to significant partial volume effect excluded from the evaluation of the metrics. The CRC ratio was computed as $\frac{\mu_{\textrm{spots}}}{\mu_{\textrm{bkg}}} / \textrm{CRC}^{\textrm{true}}$ where $\mu_{\textrm{spots}}$ was the mean value of the pixels extracted from the spots of interest, $\mu_{\textrm{bkg}}$ was the mean value of the pixels extracted from the background and $\textrm{CRC}^{\textrm{true}}$ was the CRC obtained from the simulation groundtruth over the same spots of interest. The background recovery ratio was defined as $\mu_{\textrm{bkg}} / \mu_{\textrm{bkg}}^{\textrm{true}}$. The COV was defined as $\frac{\sigma_{\textrm{ROI}}}{\mu_{\textrm{ROI}}}$ with $\sigma_{\textrm{ROI}}$ being the standard deviation of the pixels extracted from a region of interest (the spots of a given size or the background). The evolution of the metrics over iterations was analyzed since the convergence rates of the MLEM algorithm and the PADE model might differ.
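As an illustration, these figures of merit can be computed directly from a reconstructed image and the groundtruth; the sketch below assumes (hypothetical) boolean masks from which the pixels subjected to partial volume effect have already been excluded.
\begin{verbatim}
import numpy as np

def crc_ratio_and_cov(img, gt, spot_mask, bkg_mask):
    # True contrast from the groundtruth over the same spots
    crc_true = gt[spot_mask].mean() / gt[bkg_mask].mean()
    # Measured contrast of the reconstructed image
    crc = img[spot_mask].mean() / img[bkg_mask].mean()
    # Coefficient of variation over the spots of interest
    cov = img[spot_mask].std() / img[spot_mask].mean()
    return crc / crc_true, cov
\end{verbatim}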
The weights $(\gamma_1, \gamma_2)$ for the \textit{PADE}$_{\textrm{v0}}$ model were obtained with a grid search, enabled by using the GNU parallel software~\cite{Tange2011a}. Values of $10^{-3}$ to $10^{5}$ for $\gamma_1$ and values of $10^{-5}$ to $10^{2}$ for $\gamma_2$, with multiplicative steps of ten, were considered. For each combination of $(\gamma_1, \gamma_2)$, the solutions obtained over 250 iterations of the solver were recorded for the ten simulations. The mean CRC and COV values were extracted and the $(\gamma_1, \gamma_2)$ pair yielding the highest CRC ratio for the smallest spots was selected. A $(\gamma_1, \gamma_2)$ pair was excluded if the CRC ratio of any group of spots was overestimated by more than 10\%, if the mean bias over the background was larger than 0.5, or if the number of expected coincidences deviated by more than 1\%. The selected $(\gamma_1, \gamma_2)$ pair for the \textit{PADE}$_{\textrm{v0}}$ model was $(100.0, 10.0)$. As for the \textit{Extended} model, $\gamma_2$ was equal to 0.01. \section{Results} \label{sec:results} In Fig.~\ref{fig:im_recon_mlemIntuition}, zoomed versions of four images obtained from one of the simulations are shown to provide some insight into the performance of classical approaches. One of the characteristics of low counts reconstruction is the inherent statistical variation inside regions of interest, as can be observed from the groundtruth shown in Fig.~\ref{fig:im_gt}. Fig.~\ref{fig:im_BP} shows the image obtained from the \textit{Backprojection} model. Due to the excellent TOF resolution, most of the spots were resolved, albeit with lower contrast, especially for the smallest spots. The spots were better resolved in the image obtained with 6 MLEM iterations (Fig.~\ref{fig:im_mlem_06}), but correlated statistical noise in the background, not present in the groundtruth, was also observed. This is an example of the images used to initialize the \textit{PADE}$_{\textrm{v0}}$ model; their spot resolvability allows an efficient initialization of $\omega$ (see~\eqref{eq:omegaWeight}). With 10 MLEM iterations (Fig.~\ref{fig:im_mlem_10}), the spots seemed to have better contrast; however, the statistical noise was further amplified, both in the spots and the background. These observations for the MLEM images are in agreement with the known interplay between the convergence acceleration induced by better TOF, the effect of object size on contrast recovery rate and the noisy nature of the optimal solution for the PET log-likelihood model. The images in Fig.~\ref{fig:im_recon_mlemIntuition} illustrate that even with excellent TOF resolution, low counts reconstruction remains a challenge that impedes image quality.
\begin{figure} \centering \subfloat[Dataset groundtruth]{% \label{fig:im_gt}\includegraphics[width=.44\linewidth]{images/groundTruthData.png}}\hfil \subfloat[\textit{Backprojection}]{% \label{fig:im_BP}\includegraphics[width=.44\linewidth]{images/bpGauss.png}}\\ \subfloat[\textit{MLEM}, 6 iterations]{% \label{fig:im_mlem_06}\includegraphics[width=.44\linewidth]{images/MLEM_ite6.png}}\hfil \subfloat[\textit{MLEM}, 10 iterations]{% \label{fig:im_mlem_10}\includegraphics[width=.44\linewidth]{images/MLEM_ite10.png}} \caption{% Cropped version of four images built from one of the simulations, shown with the same linear gray scale: \protect\subref{fig:im_gt} true distribution of the coincidence events; \protect\subref{fig:im_BP} image reconstructed from the backprojection of the data using the TOF Gaussian kernel; \protect\subref{fig:im_mlem_06} and \protect\subref{fig:im_mlem_10} images obtained after 6 and 10 iterations, respectively, with the MLEM algorithm. }% \label{fig:im_recon_mlemIntuition} \end{figure} Fig.~\ref{fig:crc} compares the CRC ratios of the five smallest spots and the background recovery ratio for the four models. Only the first 40 iterations out of 250 are shown since most metrics converge towards stable values afterward. Overall, the \textit{MLEM} model seems to outperform the other models. Optimal values of the CRC ratios and of the background recovery ratio are reached, except for the smallest spot, for which an overestimation of around 10\% is observed after 80 iterations (not shown). The worst variation over the 10 repetitions was observed with the \textit{MLEM} model for the two smallest spots. Furthermore, the image obtained at 10 MLEM iterations, see Fig.~\ref{fig:im_mlem_10}, was already much affected by the noise, making the progress in CRC ratio afterward less conclusive. The \textit{Backprojection} model had the worst CRC ratios and background recovery ratio, as expected from the blurred image in Fig.~\ref{fig:im_BP}. However, it was also the most stable, which was expected given the choice of TOF kernel. The \textit{PADE}$_{\textrm{v0}}$ and \textit{Extended} models both had a good start since they were initialized with the 6$^{\textrm{th}}$ iteration of the MLEM reconstruction. For the \textit{Extended} model, the performance metrics worsened with iterations, stabilizing at values a little better than those of the \textit{Backprojection} model. The \textit{PADE}$_{\textrm{v0}}$ model achieved some gain in CRC ratio and background recovery ratio over the first few iterations and remained stable afterward. While the algorithm converged very rapidly ($<6$ iterations) to stable values, it did not reach the optimal values for the two smallest spots, especially the smallest one, for which it did not reach 80\% of the true contrast recovery. \begin{figure} \centering \includegraphics{bias_Metric} \caption{% The CRC ratios of the five smallest spots and the background recovery ratio (lower right) as a function of the number of iterations are shown for the four models. The values for the \textit{Backprojection} model are displayed as horizontal lines since the algorithm is non-iterative. Error bars ($\pm2\sigma$) show the variability over the ten repetitions. The error bars for the \textit{Backprojection} model are too small to be visible. }% \label{fig:crc} \end{figure} Fig.~\ref{fig:cov} compares the COV of the five smallest spots and the background for the four models.
Only the first 40 iterations out of 250 are shown since the values are stable afterward, except for the \textit{MLEM} model, whose COV continued to increase. The best model for these metrics was the \textit{Backprojection} model, most likely due to the smoothing effect of the TOF Gaussian kernel. The trends of the \textit{Extended} model differed from what was observed with the CRC metric, here remaining equivalent to those of the \textit{PADE}$_{\textrm{v0}}$ model. The \textit{PADE}$_{\textrm{v0}}$ model remained stable with iterations, suggesting that the gain in CRC was achieved without much increase in noise. The \textit{MLEM} model was notably more unstable than the other models across repetitions. At the 10$^{\textrm{th}}$ iteration, the image produced by the \textit{MLEM} model was noisier than that of the \textit{PADE}$_{\textrm{v0}}$ model, especially in the background. \begin{figure} \centering \includegraphics{variance_Metric} \caption{% The COV of the five smallest spots and of the background as a function of the number of iterations is shown for the four models. The values for the \textit{Backprojection} model are displayed as horizontal lines since the algorithm is non-iterative. Error bars ($\pm2 \sigma$) show the variability over the ten repetitions. The error bars for the \textit{Backprojection} model are too small to be visible. }% \label{fig:cov} \end{figure} In Fig.~\ref{fig:im_recon_comp}, zoomed versions of four images obtained from one of the simulations are shown. They consist of the groundtruth of the simulation compared to images obtained with the \textit{PADE}$_{\textrm{v0}}$, \textit{Extended} and \textit{MLEM} models. We chose the 40$^{\textrm{th}}$ iteration of the \textit{PADE}$_{\textrm{v0}}$ model to display the stability of that model (Fig.~\ref{fig:im_pade_best}). We chose the 40$^{\textrm{th}}$ iteration for the \textit{Extended} model, Fig.~\ref{fig:im_pade_ext}, since it was where the CRC ratios stabilized. The 10$^{\textrm{th}}$ iteration of the MLEM was chosen since it was where most of the spots' CRC ratios were around 1.0. The \textit{Extended} image seems to have lower contrast and resolving power than the other models, as shown in Fig.~\ref{fig:crc}. This observation indicates that even with excellent TOF resolution, the UDE property remains important to achieve better reconstruction. The patterns of the noise in the background of the \textit{PADE}$_{\textrm{v0}}$ and the \textit{MLEM} images, Fig.~\ref{fig:im_pade_best} and Fig.~\ref{fig:im_mlem_10_p2} respectively, were similar except for the intensity. This highlights that the initialization of \textit{PADE}$_{\textrm{v0}}$, shown in Fig.~\ref{fig:im_mlem_06}, had a strong impact on the solution of the proposed model. The \textit{PADE}$_{\textrm{v0}}$ image seemed less noisy than the \textit{MLEM} image, which enhances the apparent resolving power of the spots in the \textit{PADE}$_{\textrm{v0}}$ image. Even with far more iterations, the \textit{PADE}$_{\textrm{v0}}$ image appears less noisy than the \textit{MLEM} image.
\begin{figure} \centering \subfloat[Dataset groundtruth]{% \label{fig:im_gt_p2}\includegraphics[width=.44\linewidth]{images/groundTruthData.png}}\hfil \subfloat[\textit{Extended}, 40 iterations]{% \label{fig:im_pade_ext}\includegraphics[width=.44\linewidth]{images/extended_ite40.png}}\\ \subfloat[\textit{PADE}$_{\textrm{v0}}$, 40 iterations]{% \label{fig:im_pade_best}\includegraphics[width=.44\linewidth]{images/padeGeoCyclV1p1_oslT9_mlemIt6V2_ite40.png}}\hfil \subfloat[\textit{MLEM}, 10 iterations]{% \label{fig:im_mlem_10_p2}\includegraphics[width=.44\linewidth]{images/MLEM_ite10.png}} \caption{% Cropped version of four images built from one of the simulations, shown with the same linear gray scale: \protect\subref{fig:im_gt_p2} true distribution of the coincidence events; \protect\subref{fig:im_pade_ext} image reconstructed with the \textit{Extended} model; \protect\subref{fig:im_pade_best} image reconstructed with the \textit{PADE}$_{\textrm{v0}}$ model; and \protect\subref{fig:im_mlem_10_p2} image reconstructed with the MLEM algorithm. }% \label{fig:im_recon_comp} \end{figure} The projection-wise distribution of the counts inside one pixel is compared in Fig.~\ref{fig:udeAnalysis} for the image obtained with 40 iterations of the \textit{PADE}$_{\textrm{v0}}$ model ($\phi_{:,i}^{40}$) and the groundtruth ($\phi_{:,i}^{*}$) for one of the datasets. The pixel compared, $i$, was inside one of the 9.5~mm spots and it had 358 valid projections. $\phi_{:,i}^{*}$ was composed of zeros and ones, for a total of 24 counts. Fig.~\ref{fig:udeAnalysis_polar} shows their polar distributions, relative to their expected value if they followed exactly the UDE property, for two choices of $j_i^{\textrm{ref}}$. $\phi_{:,i}^{*}$ seemed to follow a uniform distribution for both $j_i^{\textrm{ref}}$, see the red and green polar distributions, considering it had only 24 samples distributed in 358 bins. $\phi_{:,i}^{40}$ also seemed to follow a uniform distribution for both $j_i^{\textrm{ref}}$, see the cyan and purple polar distributions. $\phi_{:,i}^{40}$ was clearly not sparse, contrary to $\phi_{:,i}^{*}$, and we note that its values oscillate around 1.0, which means that $\phi_{j,i}^{40} \approx (\sum_j \phi_{j,i}^{40}) R_{j,i}$. These observations suggest that the UDE property was strongly enforced with the \textit{PADE}$_{\textrm{v0}}$ model. Fig.~\ref{fig:udeAnalysis_values} shows the histograms of the values of $\mathcal{W}_i(\phi_{:,i}^{40}, j_i^{\textrm{ref}})$ and $\mathcal{W}_i(\phi_{:,i}^{*}, j_i^{\textrm{ref}})$ over the thirty $j_i^{\textrm{ref}}$. Note that values outside of the $[-0.5, 0.5]$ range are possible since the metric was defined as a weighted sum (momentum) and not a mean. The values of $\mathcal{W}_i()$ for the distributions shown in Fig.~\ref{fig:udeAnalysis_polar} were highlighted to provide a visual intuition of the metric. While the two histograms are mostly within the same range, the values associated with $\phi_{:,i}^{*}$ were more dispersed than those of $\phi_{:,i}^{40}$. Thus, the $\mathcal{W}_i()$ metric claims that $\phi_{:, i}^{40}$ follows the UDE property better than $\phi_{:, i}^{*}$, which again suggests that the \textit{PADE}$_{\textrm{v0}}$ model strongly enforces the UDE property.
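The relative distributions of Fig.~\ref{fig:udeAnalysis_polar} amount to a few lines of code. In the sketch below, \texttt{circular\_moment} is only a schematic, normalized stand-in for the $\mathcal{W}_i()$ metric, whose precise (weighted sum) definition was given with the model; the array names are hypothetical.
\begin{verbatim}
import numpy as np

def relative_distribution(phi_i, R_i):
    # phi_i, R_i: values over the valid projections of pixel i.
    # Under a perfectly enforced UDE property, phi[j, i] equals
    # (sum_j phi[j, i]) * R[j, i], so every entry would be 1.0.
    return phi_i / (phi_i.sum() * R_i)

def circular_moment(phi_i, j_ref):
    # Schematic, normalized stand-in for the W_i metric: mean position
    # of the counts on the cyclic projection index, re-centered on the
    # reference projection j_ref. A uniform distribution gives ~0.
    n = len(phi_i)
    pos = ((np.arange(n) - j_ref) % n) / n    # cyclic position in [0, 1)
    return (phi_i * pos).sum() / phi_i.sum() - 0.5
\end{verbatim}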
\begin{figure} \centering \subfloat[Polar distributions of $\phi_{:,i}^{40}$ and $\phi_{:,i}^*$ for two choices of $j_i^{\textrm{ref}}$]{% \label{fig:udeAnalysis_polar}\includegraphics{images/exReconPolarDist.png}}\\ \subfloat[Value of $\mathcal{W}_i(\phi, j_i^{\textrm{ref}})$ for thirty $j_i^{\textrm{ref}}$ for $\phi_{:,i}^{40}$ and $\phi_{:,i}^*$]{% \label{fig:udeAnalysis_values}\includegraphics{exReconMetricDist}} \caption{% Comparison of $\phi_{:,i}^{40}$ and $\phi_{:,i}^*$ for one of the datasets. $\phi^{40}$ is the 40$^{\textrm{th}}$ iteration of the \textit{PADE}$_{\textrm{v0}}$, $\phi^*$ is the dataset groundtruth and $i$ is the index of a pixel inside a 9.5~mm spot. $\phi_{:,i}^*$ is composed of ones and zeros and $\sum_j \phi_{j,i}^* = 24$. \protect\subref{fig:udeAnalysis_polar} shows their polar distributions, relative to their expected value if they followed exactly the UDE property, for two choices of $j_i^{\textrm{ref}}$, shown as a dashed black line. If the $\tilde{\jmath}^{\textrm{th}}$ bar is at 1.0, it means that $\phi_{\tilde{\jmath},i} = (\sum_j \phi_{j,i}) R_{\tilde{\jmath},i}$. The cyan and purple distributions are from $\phi_{:,i}^{40}$ while the red and green ones are from $\phi_{:,i}^*$. For easier comparison, $\phi_{:,i}^{40}$ and $\phi_{:,i}^*$ are shown in the same graph. The parts that exceed 1.5 were shaded to indicate that their full height might not be visible. \protect\subref{fig:udeAnalysis_values} shows the values of $\mathcal{W}_i(\phi, j_i^{\textrm{ref}})$ for the thirty $j_i^{\textrm{ref}}$ when applied to $\phi_{:,i}^{40}$ and $\phi_{:,i}^*$. The values of $\mathcal{W}_i(\phi, j_i^{\textrm{ref}})$ for the two $j_i^{\textrm{ref}}$ illustrated in \protect\subref{fig:udeAnalysis_polar} were added with their respective color. The `x' hatching pattern was used for $\phi_{:,i}^{40}$ while the slash pattern was used for $\phi_{:,i}^*$. }% \label{fig:udeAnalysis} \end{figure} \section{Discussion} \label{sec:disc} The goal of this study was to demonstrate the potential of the PADE paradigm for ultra-fast TOF reconstruction of low counts acquisitions. The proposed implementation of the PADE approach, \textit{PADE}$_{\textrm{v0}}$, achieved CRC ratios similar to those of the \textit{MLEM} model in most regions of interest, with better noise properties, and its solutions were more stable. However, we cannot conclude that it outperforms the classical log-likelihood PET reconstruction model, especially when looking at the CRC ratio obtained on the smallest spots. A closer look at the solution obtained with the \textit{PADE}$_{\textrm{v0}}$ model showed that the proposed model still strongly enforces the classical interpretation of the UDE property, i.e. $\phi_{j,i} \approx R_{j,i} \lambda_i$. Also, the performance of the \textit{Extended} model showed that completely removing the UDE property from the reconstruction scheme is disadvantageous, even if ultra-fast TOF is available. These results suggest that the idea behind the PADE model is promising for ultra-fast TOF reconstruction of low counts acquisitions, and further investigation will be needed to ascertain its full potential, especially for the UDE metric. \subsection{Review of the \textit{PADE}$_{\textrm{v0}}$ model} \label{subsec:reviewImplementation} The MLEM algorithm, which implicitly enforces non-negativity and expected counts stability, provides quite good results in most cases, if stopped at the correct iteration.
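For reference, one iteration of a TOF-MLEM update can be sketched as follows, with the same hypothetical dense array layout as before. This is a generic textbook-style update, not the exact implementation used in this study; in practice the sensitivity image would be precomputed once.
\begin{verbatim}
import numpy as np

def mlem_iteration(lam, y, P, eps=1e-12):
    # One TOF-MLEM update:
    # lam[i] <- lam[i] / sens[i] *
    #     sum_{j,t} P[j,i,t] * y[j,t] / (sum_k P[j,k,t] * lam[k])
    sens = P.sum(axis=(0, 2))                # sensitivity image, shape (I,)
    proj = np.einsum('jit,i->jt', P, lam)    # forward projection, shape (J, T)
    ratio = y / np.maximum(proj, eps)        # data/model ratio
    back = np.einsum('jit,jt->i', P, ratio)  # backprojection, shape (I,)
    return lam / np.maximum(sens, eps) * back
\end{verbatim}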
Combined with its simplicity of use, this explains why other candidates for PET reconstruction often remain mostly academic. As such, the requirement of finding the appropriate weights $\gamma_1$ and $\gamma_2$, which are most likely data-dependent, makes the \textit{PADE}$_{\textrm{v0}}$ model less attractive than the classical log-likelihood model. Since $\gamma_2$ is used to enforce expected counts stability, its calibration can be automated by searching for the smallest value that enforces this property. However, the weight factor, $\gamma_1$, is likely to pose a challenge, as is the case for PET algorithms that include a regularization scheme~\cite{qi2006iterative}. Further investigation is required to understand how critical the calibration of $\gamma_1$ is and how it depends on the data. The automated calibration proposed for the maximum \textit{a posteriori} model in~\cite{reader2020bootstrap} might be applicable to the \textit{PADE}$_{\textrm{v0}}$ model. The TOF-less log-likelihood model was underdetermined in this study (8,544 projections vs. 16,384 pixels). While our choice of TOF binning does increase the system sampling well beyond the number of pixels, it does not ensure that the resulting system is overdetermined. We expect that the \textit{PADE}$_{\textrm{v0}}$ model would perform better with a scanner having more projections. Indeed, the freedom provided by relaxing the UDE property should shine when the number of valid projections is far larger than the number of counts in a pixel. However, the computational burden increases with the number of projections, which has limited our capability to calibrate the \textit{PADE}$_{\textrm{v0}}$ model on scanners with more detectors. For both the \textit{PADE}$_{\textrm{v0}}$ model and the \textit{MLEM} model, the 3.2~mm spots were mostly resolved even with detectors 8~mm in width. This was expected of the MLEM algorithm~\cite{toussaint2020improvement} and it is reassuring that the PADE approach seems to share that feature. Yet, these small structures combined with low counts acquisition have posed a challenge for the \textit{MLEM} model. This is highlighted by its CRC ratio, where a value of 1.0 is reached briefly before being overestimated, well after the noise started to significantly spoil the image. Thus, comparison of the models using the CRC of the smallest spot is less straightforward and the \textit{MLEM} model might not be an adequate baseline in that case. The increase in degrees of freedom provided by the PADE paradigm, combined with the interplay between the three metrics in~\eqref{eq:padeModel}, also means that the hyperparameters have a significant impact on the optimal solution of the model, which can deviate considerably from the groundtruth. Thus, an automated study of several hyperparameters based on one quality metric is misleading, and a reduction of the number of unknowns was required. We simplified the study by defining a phantom with only two levels of concentration: the background with a low value and the spots with a high value. Thus, the threshold in~\eqref{eq:omegaWeight} only needed to be able to distinguish these two. As for the values of the $\omega_i$, they were mostly applied to pixels inside the spots, due to the threshold and the initialization, and, as such, their impact is limited to a subset of pixels that should have a similar number of counts.
Even if the optimal $\omega$ was not directly the number of counts per pixel, this was compensated for by $\gamma_1$, since it was calibrated to achieve the best model. We had observed that having $\omega_i \approx \sum_j \phi^*_{j,i}$, with $\phi^*$ being the groundtruth, provided encouraging results. However, $\phi^*$ being unknown, we opted to update $\omega$ with the previous iteration (see~\eqref{eq:omegaWeight}) and use a low iteration MLEM to initialize the reconstruction (see~\eqref{eq:initialization}). While this approach makes it possible for $\omega$ to reach the desired value, it also means that the objective function is modified at each iteration. Considering that the L-BFGS-B algorithm employs past iterations to create an approximation of the Hessian matrix of the objective function, it is possible that its convergence properties are weakened or lost. Nevertheless, the \textit{PADE}$_{\textrm{v0}}$ model was initialized with a fairly good first guess, which might limit the impact of modifying $\omega$ on the solver. The stability of the CRC ratios and COV over iterations for the \textit{PADE}$_{\textrm{v0}}$ model persists for longer than the ten iterations that the L-BFGS-B algorithm uses for the Hessian approximation. This suggests that the solution obtained with the current iterative scheme is optimal for the \textit{PADE}$_{\textrm{v0}}$ model. Moreover, since only a low iteration MLEM is needed with ultra-fast TOF to discern most structures, it might be possible to design an automatic criterion that is independent of the groundtruth. Further investigations will be required to evaluate the impact of initialization on the performance of the \textit{PADE}$_{\textrm{v0}}$ model. Lastly, another choice of general solver might provide better behavior. For example, the ADMM algorithm breaks the optimization problem into smaller ones that are easier to solve, and it might have a better chance at dealing with a model defined as three metrics that compete with each other~\cite{teng2016admm,6825888}. \subsection{Potential of the PADE paradigm} \label{subsec:futurePade} The main novelty of the proposed model is the relaxation of the interpretation of the UDE property from a constraint, i.e. $R_{j,i} \lambda_i = \phi_{j,i}, \forall j,i$, to a penalization term represented by $\mathcal{V}_i()$. The goal of this metric is to promote solutions that follow the UDE property without enforcing $R_{j,i} \lambda_i = \phi_{j,i}, \forall j,i$, which is of particular interest when the number of counts per pixel is low. The implementation of $\mathcal{V}_i()$ in~\eqref{eq:geoPenalMetric} uses the fact that the momentum of a cyclic uniform distribution should be at the center of its domain for any choice of $j_i^{\textrm{ref}}$. The threshold applied to $\omega$ in~\eqref{eq:omegaWeight} is a trick to ensure that $\mathcal{V}_i()$ does not overshadow the log-likelihood term for pixels with only a few counts. It remains to be shown that this implementation of $\mathcal{V}_i()$ is an efficient metric for the PADE paradigm. The discrepancy between the groundtruth and the \textit{PADE}$_{\textrm{v0}}$ solution over the pixel projection-wise distributions shown in Fig.~\ref{fig:udeAnalysis} indicates that the proposed implementation of the PADE approach strongly enforced the relation $\phi_{j,i} = R_{j,i} \lambda_i$. A better correspondence to the groundtruth could be expected if sparse solutions were enforced, which might be possible with ultra-fast TOF.
The simplest approach would consist of using a voxel-wise sparsity penalty term, but the calibration would then be an even more complex interplay between the strengths of each term. Algorithmic approaches to enforce sparsity, such as FISTA~\cite{beck2009fast}, might be preferable, if they can be adapted to the PADE model. Integer optimization is another alternative that might benefit the proposed model. The parametrization of the PADE model enables a more accurate use of the integer nature of counts by circumventing the $R_{j,i} \lambda_i$ approximation in the log-likelihood term. While the number of variables for the PADE model is definitely at the high end for integer programming, which is known to be NP-hard, multiple heuristic methods have been successfully developed for diverse integer optimization problems and an in-depth investigation of the state-of-the-art might provide fruitful solutions~\cite{kumar2010fifty,glover1997general}. Other approaches could also be investigated, such as the origin ensemble method, which has already been applied to TOF-PET reconstruction~\cite{wulker2015time}, or machine learning methods, which have already shown their potential for PET reconstruction~\cite{gong2019machine}. We expect that the potential of the PADE model would remain for 3D reconstructions and that it would perform even better in this context. The histogram being even sparser in 3D, more candidates (i.e. valid projections) could be removed for each voxel. Thus, the resulting model should be easier to solve for 3D reconstructions. In this study, the mean percentage of valid projections that could be removed for pixels inside the phantom was around 50\%, even though the number of detectors was low (320 for one ring of a clinical-size scanner). If we modify the scanner to have 16 detectors per panel, thus doubling the number of detectors per ring, the mean percentage rises to 80\%, which leaves fewer remaining candidates than the previous scanner ($\approx$128, i.e., 20\% of $\approx$640, vs. $\approx$160, i.e., 50\% of $\approx$320). \section{Conclusion} We presented the fundamentals of a new paradigm, PADE, to deal with the difficulties associated with low counts in PET reconstruction, particularly in the context of ultra-fast TOF resolution. The PADE approach offers greater degrees of freedom than the classical log-likelihood model by relaxing the interpretation of the UDE property, which is usually encoded in the system matrix. This increases the number of variables significantly, which cannot be exploited fully in a classical PET setting. Ultra-fast TOF reconstruction with low counts represents a best-case scenario for the PADE approach since the number of variables can be reduced by the excellent TOF resolution without loss in degrees of freedom. The implementation of the PADE model presented in this paper provides some gains, albeit limited, over the log-likelihood model, which suggests that this new paradigm has potential benefits warranting further investigation. The core idea of the PADE approach offers much promise and we conjecture that an efficient and stable implementation of that idea can be achieved. \section{Acknowledgments} The authors gratefully acknowledge Jean-Baptiste Michaud for providing access to his allocated computational cluster resources and Francis Loignon-Houle for fruitful discussions, for advice on the popularization of the PADE model and for proofreading the paper. \bibliographystyle{IEEEtran}
{ "timestamp": "2021-05-06T02:09:11", "yymm": "2105", "arxiv_id": "2105.01854", "language": "en", "url": "https://arxiv.org/abs/2105.01854" }
\section{Introduction}\label{sec-intro} Duality pairs were introduced by Holm-J\o rgensen in~\cite{holm-jorgensen-duality}, and complete duality pairs over commutative rings were defined in~\cite{gillespie-duality-pairs}. In this paper, we extend this notion to noncommutative rings to show how a theory of relative Gorenstein homological algebra exists with respect to any given complete duality pair. In fact, this notion is too strong, and so we define \emph{semi-complete} duality pairs and develop the theory in this context. This will let us show that the Ding injective modules are the right side of a complete cotorsion pair over any ring $R$. As in~\cite{gillespie-Ding-Chen rings}, a module $N$ is said to be \emph{Ding injective} if $N = Z_0E$ for some exact complex of injectives $E$ such that $\Hom_R(A,E)$ remains exact for all FP-injective (absolutely pure) modules $A$. Throughout, we let $R$ denote a ring with identity, and let $R^\circ := R^{\text{op}}$ denote its opposite ring. The techniques go back to~\cite{bravo-gillespie-hovey}, where the so-called \emph{level} and \emph{absolutely clean} modules played the central role in the AC-Gorenstein homological algebra that was developed there. In hindsight, the theory has good properties, enough to give both a projective and an injective stable homotopy category on $R$-modules, simply because we have a (semi-)complete duality pair $(\class{L}, \class{A})$ where $\class{L}$ is the class of level $R$-modules and $\class{A}$ is the class of absolutely clean $R^\circ$-modules. Here, the central feature of being a duality pair is that a module $M$ is level (resp. absolutely clean) if and only if $M^+ = \Hom_{\mathbb{Z}}(M,\mathbb{Q/Z})$ is absolutely clean (resp. level). One purpose of this paper is to give the definition of a \emph{semi-complete duality pair} for a general ring $R$ and to show that the arguments and theory of~\cite{bravo-gillespie-hovey} carry over to any semi-complete duality pair. This gives a unified theory encompassing everything in~\cite{bravo-gillespie-hovey, gillespie-duality-pairs, iacob-generalized-gorenstein}. However, we also consider the semi-complete duality pair $\mathfrak{D} = (\langle Flat \rangle,\langle Inj \rangle)$, which is the (definable) duality pair generated by $R$. Here the theory is in agreement with two important results recently shown by Jan {\v{S}}aroch and Jan {\v{S}}\v{t}ov{\'{\i}}{\v{c}}ek in~\cite{saroch-stovicek-G-flat} --- the Gorenstein flat cotorsion pair, and the projectively coresolved Gorenstein flat cotorsion pair, are complete over any ring. See Corollary~\ref{cor-n-duality}(3). But what is new is that we get completeness of the Ding injective cotorsion pair this way, again over any ring. The Ding modules were introduced and studied by Nanqing Ding and coauthors and later named after Ding in~\cite{gillespie-Ding-Chen rings}. In the process, we came across the following general theorem. We then obtain the results we want for duality pairs, and the various applications, as a corollary. To state the theorem, given a class of $R$-modules $\class{B}$, we say an $R$-module $N$ is \emph{Gorenstein $\class{B}$-injective} if $N=Z_{0}E$ for some exact $\Hom(\class{B},-)$-acyclic complex of injective $R$-modules $E$. That is, both $E$ and $\Hom(B,E)$ are exact (acyclic) complexes for all $B\in\class{B}$.
Those familiar with Gorenstein homological algebra will guess the definitions of the other concepts below, but see Definitions~\ref{Defs-relative-G-inj}, \ref{Defs-relative-G-pro}, \ref{Defs-relative-G-flat}, and~\ref{Defs-relative-G-flat-proj} for precise definitions. \begin{theorem}\label{them-models} Let $\class{B}$ be a class of $R^\circ$-modules containing all the injective modules. Assume there exists a set (not just a class) $\class{S} \subseteq \class{B}$ such that each $B \in \class{B}$ is a transfinite extension of modules in $\class{S}$. \begin{enumerate} \item There is a cofibrantly generated injective abelian model structure on $R^\circ$-Mod, the \textbf{Gorenstein $\class{B}$-injective model structure}, whose fibrant objects are the Gorenstein $\class{B}$-injective modules. \item There is a cofibrantly generated projective abelian model structure on $R$-Mod, the \textbf{projectively coresolved Gorenstein $\class{B}$-flat model structure}, whose cofibrant objects are the projectively coresolved Gorenstein $\class{B}$-flat modules. \item There is a cofibrantly generated abelian model structure on $R$-Mod, the \textbf{Gorenstein $\class{B}$-flat model structure}, whose cofibrant objects (resp. trivially cofibrant objects) are the Gorenstein $\class{B}$-flat modules (resp. flat modules). This model structure shares the same class of trivial objects as the projective model structure. \end{enumerate} \end{theorem} Each of these is Quillen equivalent to a model structure on chain complexes; see Theorem~\ref{thm-Gor-module} and Theorem~\ref{theorem-proj-coresolved-B-flat}. For the injective case, it also follows that the Gorenstein $\class{B}$-injective modules are the right side of a perfect cotorsion pair. Now if $\mathfrak{D} = (\class{L},\class{A})$ is a semi-complete duality pair (see Definition~\ref{def-complete duality pair}), then it follows from work of Holm-J\o rgensen that the class $\class{A}$ possesses a set $\class{S}$ as in Theorem~\ref{them-models}. As a corollary, and by combining with~\cite[Theorem~A.6]{bravo-gillespie-hovey} for part (2), we get the following in Corollary~\ref{corollary-models}. \begin{corollary}\label{corollary-models-intro} The following abelian model structures are induced by any semi-complete duality pair $\mathfrak{D} = (\class{L},\class{A})$. \begin{enumerate} \item The \textbf{Gorenstein $\mathfrak{D}$-injective model structure} exists on $R^\circ$-Mod. It is a cofibrantly generated injective abelian model structure whose fibrant objects are the Gorenstein $\class{A}$-injective $R^\circ$-modules. \item The \textbf{Gorenstein $\mathfrak{D}$-projective model structure} exists on $R$-Mod. It is a cofibrantly generated projective abelian model structure whose cofibrant objects are the Gorenstein $\class{L}$-projective $R$-modules, equivalently, the projectively coresolved Gorenstein $\class{A}$-flat $R$-modules. \item The \textbf{Gorenstein $\mathfrak{D}$-flat model structure} exists on $R$-Mod. It is a cofibrantly generated abelian model structure whose cofibrant objects (resp. trivially cofibrant objects) are the Gorenstein $\class{A}$-flat modules (resp. flat modules). Moreover, the trivial objects in this model structure coincide with those in the Gorenstein $\mathfrak{D}$-projective model structure. \end{enumerate} \end{corollary} Our main application, which stems from the semi-complete duality pair $\mathfrak{D} = (\langle Flat \rangle,\langle Inj \rangle)$, appears in Theorem~\ref{them-dings}.
It proves that the Ding injective modules form an enveloping class over any ring $R$, and that they are the fibrant objects of a cofibrantly generated model structure on $R$-Mod. But in fact we are now able to obtain a relative homological algebra, for any ring $R$ and for each $n$ with $1 \leq n \leq \infty$, from a (semi-)complete duality pair $\mathfrak{D}_n$. See Corollary~\ref{cor-n-duality}. This includes everything from the AC-Gorenstein homological algebra of~\cite{bravo-gillespie-hovey} ($n=\infty$) to the above Ding injectives and {\v{S}}aroch and {\v{S}}\v{t}ov{\'{\i}}{\v{c}}ek's (projectively coresolved) Gorenstein flats from~\cite{saroch-stovicek-G-flat} ($n=1$). \ \emph{Conventions}: Throughout the paper $R$ denotes a ring with identity. Its opposite ring, $R^{\text{op}}$, will be denoted more succinctly by $R^\circ$. Recall that a left (resp. right) $R$-module is equivalent to a right (resp. left) $R^\circ$-module. Our convention throughout the entire paper is that the term \emph{$R$-module}, with the side left unspecified, may be fixed to mean either left or right $R$-module as the reader desires. But then one should realize that the term \emph{$R^\circ$-module} means a swap of sides with respect to that choice. In other words, if we fix $R$-module to mean \emph{right} $R$-module, then ``$M$ is an $R^\circ$-module'' is just our way of saying $M$ is a \emph{left} $R$-module. \section{Symmetric and semi-complete duality pairs} Recall that for a given $R$-module $M$, its \emph{character module} is defined to be the $R^\circ$-module $M^+ = \Hom_{\mathbb{Z}}(M,\mathbb{Q/Z})$. \begin{definition}\cite[Definition~2.1]{holm-jorgensen-duality}\label{def-duality pair} A \emph{duality pair} over $R$ is a pair $(\class{M},\class{C})$, where $\class{M}$ is a class of $R$-modules and $\class{C}$ is a class of $R^\circ$-modules, satisfying the following conditions: \begin{enumerate} \item $M \in \class{M}$ if and only if $M^+ \in \class{C}$. \item $\class{C}$ is closed under direct summands and finite direct sums. \end{enumerate} A duality pair $(\class{M},\class{C})$ is called \emph{perfect} if $\class{M}$ contains the module $R$ and is closed under coproducts and extensions. \end{definition} The canonical example of a duality pair is $(\class{F},\class{I})$, where $\class{F}$ is the class of all flat $R$-modules and $\class{I}$ is the class of all injective $R^\circ$-modules. The following is the main result concerning (perfect) duality pairs. \begin{theorem}\cite[Theorem~3.1]{holm-jorgensen-duality}\label{them-duality pair purity} Let $(\class{M},\class{C})$ be a duality pair. Then the following hold: \begin{enumerate} \item $\class{M}$ is closed under pure submodules, pure quotients, and pure extensions. \item If $(\class{M},\class{C})$ is perfect, then $(\class{M}, \rightperp{\class{M}})$ is a perfect cotorsion pair. \end{enumerate} \end{theorem} The following definition comes from~\cite{gillespie-duality-pairs}, but it was only stated there for commutative rings. It combines Holm and J\o rgensen's above definition with a similar notion defined in~\cite[Appendix~A]{bravo-gillespie-hovey}. \begin{definition}\label{def-symmetric duality pair} By a \emph{symmetric duality pair} $\{\class{L}, \class{A}\}$ we mean: \begin{enumerate} \item $\class{L}$ is a class of $R$-modules. \item $\class{A}$ is a class of $R^\circ$-modules. \item $(\class{L},\class{A})$ and $(\class{A},\class{L})$ are each duality pairs.
\end{enumerate} \end{definition} An example of a symmetric duality pair is obtained by taking $\class{L}$ to be the class of all level $R$-modules and $\class{A}$ to be the class of all absolutely clean $R^\circ$-modules~\cite{bravo-gillespie-hovey}. Theorem~\ref{them-projectivecomplexes} below is a very useful result concerning symmetric duality pairs. It is a generalization of~\cite[Theorem~A.6]{bravo-gillespie-hovey}, where it was proved for complexes of projectives. However, as suggested in~\cite[Remark~3.9]{estrada-gillespie-coherent-schemes}, the proof works for complexes of pure-projective $R$-modules because of {\v{S}}\v{t}ov{\'{\i}}{\v{c}}ek's work on chain complexes of pure-projectives. Recall that an $R$-module $M$ is \emph{pure-projective} if it is projective with respect to the class of all pure short exact sequences. This is the case if and only if $M$ is a direct summand of a direct sum of finitely presented modules. In particular, projective modules and finitely presented modules are examples of pure-projective modules. \begin{theorem}\label{them-projectivecomplexes} Let $\{\class{L}, \class{A}\}$ be a symmetric duality pair with $R$-modules in $\class{L}$ and $R^\circ$-modules in $\class{A}$. \begin{enumerate} \item Assume $P$ is a chain complex of pure-projective $R$-modules. Then the tensor product of $P$ with any $R^\circ$-module $A \in \class{A}$ yields an exact complex if and only if $\Hom_R(P,L)$ is an exact complex for all $L \in \class{L}$. That is, $P$ is $\class{A}^{\otimes}$-acyclic if and only if it is $\Hom(-,\class{L})$-acyclic. \item Assume $Q$ is a chain complex of pure-projective $R^\circ$-modules. Then the tensor product of $Q$ with any $R$-module $L \in \class{L}$ yields an exact complex if and only if $\Hom_{R^\circ}(Q,A)$ is an exact complex for all $A \in \class{A}$. That is, $Q$ is $\class{L}^{\otimes}$-acyclic if and only if it is $\Hom(-,\class{A})$-acyclic. \end{enumerate} \end{theorem} \begin{proof} Tensor products must be written on a particular side depending on the choice of $R$-module to mean \emph{left $R$-module} versus \emph{right $R$-module}. So for definiteness, let us assume that $\class{L}$ is a class of \emph{left} $R$-modules and $\class{A}$ a class of \emph{right} $R$-modules. (Of course, versions of our argument still hold if we swap this choice.) So we are given a chain complex $P$ of pure-projective left $R$-modules and we wish to show that $A \otimes_R P$ is exact for all $A \in \class{A}$ if and only if $\Hom_R(P,L)$ is exact for all $L \in \class{L}$. \noindent $(\Longleftarrow)$ By adjoint associativity~\cite[Theorem~2.1.10]{enochs-jenda-book} we have $$\Hom_{\mathbb{Z}}(A\otimes_{R}P, \mathbb{Q/Z}) \cong \Hom_{R}(P,A^+).$$ Suppose $\Hom_{R}(P,L)$ is exact for all $L \in \cat{L}$. Since $(\cat{A},\cat{L})$ is a duality pair, $A^+ \in \cat{L}$ for each $A \in \cat{A}$, so $\Hom_R(P,A^+) \cong (A\otimes_R P)^+$ is exact. A complex is exact if and only if its character dual is exact, so $A\otimes_{R}P$ is exact for all $A\in \cat{A}$. \noindent $(\Longrightarrow)$ Suppose $A\otimes_{R}P$ is exact for all $A \in \cat{A}$. Then for any $L \in \cat{L}$, we see $L^{+}\otimes_{R}P$ is exact since $(\cat{L},\cat{A})$ is a duality pair. Using the above adjoint associativity again we conclude that $\Hom (P, L^{++})$ is exact whenever $L \in \class{L}$. In other words, $\Hom (P, K)$ is exact whenever $K \in \class{L}^{++}$, and we note $\class{L}^{++} \subseteq \class{L}$ since $\{\class{L}, \class{A}\}$ is a symmetric duality pair. But for any $L$, the natural map $L\xrightarrow{} L^{++}$ is a pure monomorphism~\cite[Proposition~5.3.9]{enochs-jenda-book}.
So if $L \in \cat{L}$, the quotient $L^{++}/L$ is also in $\cat{L}$, since $\cat{L}$ is closed under pure quotients by Theorem~\ref{them-duality pair purity}. We can therefore create a pure exact resolution of $L\in \cat{L}$ by elements of $\cat{L}^{++}$. That is, we can find a pure exact chain complex $X$ where $X_{i}=0$ for $i>0$, $X_{0}=L$, and each of the $X_{i}$ for $i<0$ is in $\cat{L}^{++}$. From this we can easily construct a short exact sequence \[ 0 \xrightarrow{} S^{0}L \xrightarrow{} \widetilde{X} \xrightarrow{} Y \xrightarrow{} 0, \] which we note is degreewise pure, has $Y$ a pure exact complex (of modules in $\class{L}$), and has $\widetilde{X}$ bounded above with entries in $\cat{L}^{++}$. Since $P$ has pure-projective components, applying $\mathit{Hom}(P,-)$ yields another short exact sequence \[ 0 \xrightarrow{} \mathit{Hom} (P,S^{0}L) \xrightarrow{} \mathit{Hom} (P,\widetilde{X}) \xrightarrow{} \mathit{Hom} (P,Y) \xrightarrow{} 0. \] By {\v{S}}\v{t}ov{\'{\i}}{\v{c}}ek's~\cite[Theorem~5.4]{stovicek-purity}, any chain map from a chain complex of pure-projectives to a pure exact complex must be null homotopic. In other words, $\mathit{Hom} (P,Y)$ must be an exact complex. Moreover, $\mathit{Hom} (P,S^{0}L) = \Hom_R(P,L)$, so to complete the proof it will suffice to show that $\mathit{Hom} (P,\widetilde{X})$ is exact. But if $Z$ is any \emph{bounded} complex with entries in $\cat{L}^{++}$, then we can prove $\mathit{Hom} (P,Z)$ is exact by induction on the number of nonzero entries in $Z$. Now, like any bounded above complex, $\widetilde{X}$ is the inverse limit of its truncations $\widetilde{X}^{-n}$ for $n\in \mathbb{Z}$, where $(\widetilde{X}^{-n})_{i}=\widetilde{X}_{i}$ for $i\geq -n$ and is $0$ otherwise. This is a very simple inverse limit; in fact, it is an ``inverse transfinite extension'' (the dual of a transfinite extension) of the spheres $S^i(\widetilde{X}_{i})$ on its components $\widetilde{X}_{i}$. One must check that $\mathit{Hom} (P,\widetilde{X})=\varprojlim \mathit{Hom} (P, \widetilde{X}^{-n})$ and that $\mathit{Hom} (P,\widetilde{X})$ is an exact complex, completing the proof. \end{proof} Referring to Definition~\ref{def-duality pair}, let us call $(\class{M},\class{C})$ a \textbf{semi-perfect} duality pair if it has all the properties required to be a perfect duality pair \emph{except} that $\class{M}$ may not be closed under extensions. \begin{definition}\label{def-complete duality pair} By a \emph{semi-complete duality pair} $(\class{L},\class{A})$ we mean that $\{\class{L},\class{A}\}$ is a symmetric duality pair with $(\class{L},\class{A})$ being a semi-perfect duality pair. In this case, we call $\class{L}$ the \emph{projective class} and $\class{A}$ the \emph{injective class}. If $(\class{L},\class{A})$ is indeed perfect, then we call it a \emph{complete duality pair}. \end{definition} \begin{remark} If $(\class{L},\class{A})$ is a semi-complete duality pair, then $\class{L}$ contains not just all projective $R$-modules, but also all flat $R$-modules, by the argument in~\cite[Prop.~2.3]{gillespie-duality-pairs}. On the other hand, $\class{A}$ must contain all absolutely pure (i.e. FP-injective) $R^\circ$-modules. Indeed, suppose $A$ is absolutely pure and embed it into an injective $I$. Note the monomorphism $A \hookrightarrow I$ is necessarily pure. The argument in~\cite[Prop.~2.3]{gillespie-duality-pairs} shows that $I \in \class{A}$. But since $(\class{A},\class{L})$ is also a duality pair, we conclude from Theorem~\ref{them-duality pair purity}(1) that $A \in \class{A}$.
\end{remark} \subsection{Examples of (semi-)complete duality pairs}\label{sec-example duality pairs} Several classes of examples of duality pairs are given throughout~\cite{holm-jorgensen-duality, bravo-gillespie-hovey, bravo-perez}. We give a brief summary here of those that are complete duality pairs. We refer the reader to the original sources for more detailed references and unexplained terminology. \begin{example}\label{example-level} Let $R$ be any ring and let $\class{L}$ be the class of all level $R$-modules and $\class{A}$ the class of all absolutely clean $R^\circ$-modules~\cite{bravo-gillespie-hovey}. Then the \emph{level duality pair}, $(\class{L},\class{A})$, is a complete duality pair. Note then that a noncommutative ring $R$ admits \emph{two} level duality pairs --- one where $\class{L}$ is a class of left $R$-modules and one where $\class{L}$ is a class of right $R$-modules. \end{example} \begin{example}\label{example-BP} Let $n$ be a natural number satisfying $2 \leq n \leq \infty$. In~\cite{bravo-perez}, Bravo and P\'erez give $n$-analogs of the level duality pairs. Here we let $\class{FP}_n\text{-Flat}$ denote their class of all $\text{FP}_n$-flat $R$-modules, and $\class{FP}_n\text{-Inj}$ their class of all $\text{FP}_n$-injective $R^\circ$-modules. It is shown in~\cite[Cor.~3.7]{bravo-perez} that we have a complete duality pair $(\class{FP}_n\text{-Flat},\class{FP}_n\text{-Inj})$. The class of $\text{FP}_n$-flat modules always sits between the usual class of flat modules ($n=1$) and the class of level modules ($n=\infty$), and the difference is only significant for non-coherent rings. See~\cite{bravo-perez} for details. \end{example} \begin{example} Many commutative rings $R$ have some interesting complete duality pairs attached to them. We refer the reader to the original source~\cite{holm-jorgensen-duality} and to the summary given in~\cite{gillespie-duality-pairs}. Depending on the hypotheses on the ring, there may be the \emph{Auslander-Bass duality pair} $(\class{A}^C_0, \class{B}^C_0)$, the \emph{$C$-Gorenstein flat dimension duality pairs} $(\class{GF}^C_n, \class{GI}^C_n)$ (where $C$ is a dualizing complex), or the \emph{depth-width duality pairs} $(\class{D}_n, \class{W}_n)$. \end{example} \begin{example}\label{ques-G-flat} We see in~\cite[Remark~2.12]{estrada-iacob-perez-G-flat} that any ring $R$ generates a semi-complete duality pair $(\langle R \rangle, \langle R^+ \rangle)$, where $\langle R \rangle$ is the \emph{definable class} (meaning it is closed under products, direct limits, and pure submodules) generated by $R$, and $\langle R^+ \rangle$ is the definable class generated by $R^+$. Moreover, they show $$\mathfrak{D} = (\langle R \rangle, \langle R^+ \rangle) = (\langle Flat\rangle, \langle Inj \rangle)$$ where $\langle Flat\rangle$ is the definable class generated by the class of all flat $R$-modules and $\langle Inj \rangle$ is the definable class generated by the class of all injective $R^\circ$-modules. Alternatively, using results from~\cite{prest-definable}, it is shown very succinctly in~\cite[Lemmas~5.5-5.7]{cortes-saroch} that $\mathfrak{D} = (\langle Flat\rangle, \langle Inj \rangle)$ is a semi-complete duality pair. Moreover, $\langle Inj \rangle$ is precisely the class of all $R^\circ$-modules $M$ fitting into a short exact sequence $$ 0\xrightarrow{} A \xrightarrow{} B \xrightarrow{} M \xrightarrow{} 0$$ where $A$ and $B$ are FP-injective (absolutely pure) $R^\circ$-modules.
\end{example} As in~\cite{gillespie-Ding-Chen rings}, a module $N$ is said to be \textbf{Ding injective} if $N = Z_0E$ for some exact complex of injectives $E$ such that $\Hom(A,E)$ remains exact for all FP-injective (absolutely pure) modules $A$. As in~\cite{saroch-stovicek-G-flat}, a module $N$ is said to be \textbf{projectively coresolved Gorenstein flat} if $N = Z_0P$ for some exact complex of projectives $P$ which remains exact upon tensoring with any injective module $I$. So these are like the usual \emph{Gorenstein flat} modules we know from~\cite{enochs-jenda-book}, but defined via a complex of projectives, not just a complex of flats. We have the following results. \begin{proposition}\label{prop-ding-thing} Consider the semi-complete duality pair $\mathfrak{D} = (\langle Flat\rangle, \langle Inj \rangle)$ over any ring $R$. \begin{enumerate} \item An $R^\circ$-module $N = Z_0E$ is Ding injective if and only if it is Gorenstein $\langle Inj \rangle$-injective in the sense of Definition~\ref{Defs-relative-G-inj}. This means that $\Hom(M,E)$ in fact remains exact for all $M \in \langle Inj \rangle$. \item An $R$-module $N = Z_0F$ is Gorenstein flat if and only if it is Gorenstein $\langle Inj \rangle$-flat in the sense of Definition~\ref{Defs-relative-G-flat}. This means that the complex of flats $F$ in fact remains exact upon tensoring it with any $M \in \langle Inj \rangle$. In particular, this is true for any projectively coresolved Gorenstein flat module $N = Z_0P$. \end{enumerate} \end{proposition} \begin{proof} Since $\langle Inj \rangle$ contains all FP-injective modules, any Gorenstein $\langle Inj \rangle$-injective module is Ding injective. On the other hand, suppose $N = Z_0E$ is Ding injective. We must show that $\Hom(M,E)$ remains exact for all $M \in \langle Inj \rangle$. But again, any such $M$ sits in a short exact sequence $$ 0\xrightarrow{} A \xrightarrow{} B \xrightarrow{} M \xrightarrow{} 0$$ where $A$ and $B$ are FP-injective (absolutely pure) $R^\circ$-modules. Applying the functor $\Hom(-,E)$ yields, because each $E_n$ is injective, a short exact sequence of complexes $$ 0\xrightarrow{} \Hom(M,E) \xrightarrow{} \Hom(B,E) \xrightarrow{} \Hom(A,E) \xrightarrow{} 0.$$ Since $\Hom(B,E)$ and $\Hom(A,E)$ are both exact, it follows that $\Hom(M,E)$ is also exact. The analogous fact for the Gorenstein flats (and the projectively coresolved Gorenstein flats) is proved similarly. But here one must first use~\cite[Lemma~5.3]{estrada-gillespie-coherent-schemes} (a fact first proved by Ding and Mao in~\cite[Lemma~2.8]{ding and mao 08}) and the fact that the short exact sequence containing $M \in \langle Inj \rangle$ is necessarily pure. \end{proof} So now by Theorem~\ref{them-projectivecomplexes} (\cite[Theorem~A.6]{bravo-gillespie-hovey}) we have established the footnote on~\cite[Page~21]{saroch-stovicek-G-flat}. It includes a different proof of {\v{S}}aroch and {\v{S}}\v{t}ov{\'{\i}}{\v{c}}ek's~\cite[Theorem~4.4]{saroch-stovicek-G-flat}, that all projectively coresolved Gorenstein flat modules are Gorenstein projective. In fact, they are Ding projective in the sense of~\cite{gillespie-Ding-Chen rings}: \begin{corollary}\cite[Theorem~4.4/Cor.~4.5]{saroch-stovicek-G-flat}\label{cor-ss} An $R$-module $N = Z_0P$ is projectively coresolved Gorenstein flat if and only if the complex $P$ in the definition satisfies that $\Hom_R(P,L)$ remains exact for all $L \in \langle Flat \rangle$.
\end{corollary} \begin{remark} We note that Corollary~\ref{cor-ss} was also proved by Estrada-Iacob-P\'erez in~\cite[Lemma~2.11/Remark~2.12]{estrada-iacob-perez-G-flat}; again by using that $(\langle Flat\rangle, \langle Inj \rangle)$ is a symmetric duality pair and applying~\cite[Appendix~A.6]{bravo-gillespie-hovey}. \end{remark} \section{Relative Gorenstein injective and projective modules}\label{sec-relative-G-inj} Throughout this section, we let $\class{B}$ denote a class of $R$-modules and we assume $\class{B}$ contains all injective $R$-modules. We will prove a series of lemmas generalizing well-known results for the usual Gorenstein injectives. Their proofs depend only on the definition of a Gorenstein $\class{B}$-injective module, given below. \begin{definition}\label{Defs-relative-G-inj} We will say that a chain complex $X$ of $R$-modules is \emph{$\Hom(\class{B},-)$-acyclic} if $\Hom(B,X)$ is an exact complex of abelian groups for all $B \in \class{B}$. If $X$ itself is also exact, we will say that $X$ is an \emph{exact $\Hom(\class{B},-)$-acyclic} complex. We say an $R$-module $N$ is \emph{Gorenstein $\class{B}$-injective} if $N=Z_{0}E$ for some exact $\Hom(\class{B},-)$-acyclic complex of injective $R$-modules $E$. \end{definition} \begin{notation}\label{notation-relative-G-inj} We let $\class{GI}_{\class{B}}$ denote the class of all Gorenstein $\class{B}$-injective $R$-modules, and we set $\class{W} = \leftperp{\class{GI}_{\class{B}}}$. \end{notation} We note that $\class{W}$ is precisely the class of all modules $W$ such that $\Hom_R(W,E)$ remains exact for all exact $\Hom(\class{B},-)$-acyclic complexes of injectives $E$. Indeed, it follows from the definition that $W \in \class{W}$ if and only if $\Ext^1_R(W,Z_{n}E)=0$ for all $n$ and all such $E$, and this is equivalent to $\Hom_R(W,E)$ being exact. In particular, $\class{B} \subseteq \class{W}$. \begin{lemma}\label{lemma-characterize-G-Inj} The following are equivalent for an $R$-module $N$. \begin{enumerate} \item $N \in \mathcal{GI}_{\mathcal{B}}$. \item There exists an exact and $\Hom(\class{B},-)$-acyclic complex $\cdots \rightarrow E_1 \rightarrow E_0 \rightarrow N \rightarrow 0$ with each $E_i$ injective, and $\Ext^i_R(B, N) =0$ for all $B \in \mathcal{B}$ and all $i \ge 1$. \item There is a short exact sequence $0 \xrightarrow{} N' \xrightarrow{} E \xrightarrow{} N \xrightarrow{} 0$ with $E$ injective and $N' \in \class{GI}_{\class{B}}$. \end{enumerate} \end{lemma} \begin{proof} (1) $\Rightarrow$ (2) follows from the definition of Gorenstein $\mathcal{B}$-injective modules, since $\Ext^i_R(B, N) = H^{-i} \Hom(B, E) =0$, where $E$ is an exact and $\Hom(\mathcal{B},-)$-acyclic complex of injectives such that $N = Z_0E$. (2) $\Rightarrow$ (1): Let $0 \rightarrow N \rightarrow E_{-1} \rightarrow E_{-2} \rightarrow \cdots$ be an injective resolution of $N$. Pasting it with the complex $\cdots \rightarrow E_1 \rightarrow E_0 \rightarrow N \rightarrow 0$, we obtain an exact complex of injectives $E$ such that $N = Z_0 E$. The left half of $E$ remains exact when applying a functor $\Hom(B, -)$ with $B \in \mathcal{B}$ by hypothesis, and the right half does as well since $\Ext^i_R(B,N) = 0$ for all $i \geq 1$. (1) $\Rightarrow$ (3) is clear. For the converse, we imitate the argument from~\cite[Lemma~2.5]{Ding projective}. Briefly, note that $\Ext^i_R(B,N) = 0$ for all $B \in \class{B}$. Since $N' \in \class{GI}_{\class{B}}$, we may extend to the left to get a $\Hom(\class{B},-)$-acyclic resolution of injectives. Then we may paste this with any usual injective resolution of $N$.
The resulting exact complex of injectives will be $\Hom(\class{B},-)$-acyclic because $\Ext^i_R(B,N) = 0$ for all $B \in \class{B}$. \end{proof} \begin{lemma}\label{lemma-injective-heart} $\mathcal{W} \bigcap \mathcal{GI}_{\mathcal{B}}$ is the class of injective modules. \end{lemma} \begin{proof} Let $G \in \mathcal{W} \bigcap \mathcal{GI}_{\mathcal{B}}$. By definition there is an exact sequence $$0 \rightarrow G' \rightarrow I \rightarrow G \rightarrow 0$$ with $G' \in \mathcal{GI}_{\mathcal{B}}$ and with $I$ an injective module. Since $G \in \mathcal{W}$, we have that $\Ext^1_R(G,G')=0$. So the sequence is split exact, and therefore $G$ is injective. On the other hand, $\class{W}$ contains every module in $\class{B}$. (This follows from the definition and the computation of Ext by injective (co)resolutions.) Since $\class{B}$ contains all injective modules, we conclude $\mathcal{W} \bigcap \mathcal{GI}_{\mathcal{B}}$ is exactly the class of all injective modules. \end{proof} \begin{lemma}\label{lemma-relative-G-Inj-summands} The class $\mathcal{GI}_{\mathcal{B}}$ is closed under direct products and direct summands. \end{lemma} \begin{proof} It follows immediately from the definition that $\mathcal{GI}_{\mathcal{B}}$ is closed under direct products. A direct argument we learned from Marco P\'erez will work in this context to prove $\mathcal{GI}_{\mathcal{B}}$ is closed under direct summands; see~\cite[Prop.~5.2]{bravo-gillespie-perez}. \end{proof} \begin{lemma}\label{lemma-relative-G-Inj-coresolving} The class $\mathcal{GI}_{\mathcal{B}}$ is injectively coresolving. That is, it contains the injectives, and for any short exact sequence $0 \xrightarrow{} N' \xrightarrow{} N \xrightarrow{} N'' \xrightarrow{} 0$ with $N' \in \class{GI}_{\class{B}}$, we have $N \in \class{GI}_{\class{B}}$ if and only if $N'' \in \class{GI}_{\class{B}}$. \end{lemma} \begin{proof} The proof for closure under extensions follows just like the dual of the argument given in~\cite[Lemma 3.1]{enochs-iacob-jenda}. Next assume $N', N \in \mathcal{GI}_{\mathcal{B}}$. Write a short exact sequence $0 \xrightarrow{} N' \xrightarrow{} I \xrightarrow{} G \xrightarrow{} 0$ with $I$ injective and $G \in \class{GI}_{\class{B}}$. Construct the pushout diagram below: $$\begin{CD} @. 0 @. 0 @. @. \\ @. @VVV @VVV @. @.\\ 0 @>>> N' @>>> N @>>> N'' @>>> 0 \\ @. @VVV @VVV @| @.\\ 0 @>>> I @>>> P @>>> N'' @>>> 0 \\ @. @VVV @VVV @. @.\\ @. G @= G @. @. \\ @. @VVV @VVV @. @.\\ @. 0 @. 0 @. @. \\ \end{CD}$$ The second row splits since $I$ is injective, forcing $N''$ to be a direct summand of $P$. But $P \in \class{GI}_{\class{B}}$ by the closure under extensions we just proved. Thus $N'' \in \class{GI}_{\class{B}}$, by Lemma~\ref{lemma-relative-G-Inj-summands}. \end{proof} \begin{remark} Alternatively, one can prove the coresolving property and closure under direct summands by imitating the (dual of the) arguments in~\cite[Theorem~2.6]{Ding projective} and citing~\cite[Prop.~1.4]{holm}. \end{remark} \begin{lemma}\label{lemma-relative-G-Inj-thick} The class $\class{W}$ is thick, meaning it is closed under direct summands and satisfies the 2 out of 3 property on short exact sequences. \end{lemma} \begin{proof} It is automatic that $\class{W}$ is closed under direct summands and extensions since it is defined as an Ext-orthogonal class.
In fact, by~\cite[Lemma~1.2.9]{garcia-rozas}, since $\class{GI}_{\class{B}}$ has been shown to be an injectively coresolving class, we may conclude that $\class{W} = \leftperp{\class{GI}_{\class{B}}}$ is a projectively resolving class and that $\Ext^i_R(W,N) = 0$ for all $W \in \class{W}$ and $N \in \class{GI}_{\class{B}}$ and $i \geq 1$. Now consider a short exact sequence $0 \xrightarrow{} W' \xrightarrow{} W \xrightarrow{} W'' \xrightarrow{} 0$ with $W', W \in \class{W}$. It only remains to show that $\Ext^1_R(W'',N) = 0$ for all $N \in \class{GI}_{\class{B}}$. We follow Holm's argument from~\cite[Lemma~3.5]{gillespie-recollement}. First, for any such $N$, applying $\Hom(-,N)$ and looking at the resulting long exact sequence in Ext we get $\Ext^{\geqslant 2}_R(W'',N)=0$. To see that $\Ext^1_R(W'',N)=0$ for every $N \in \class{GI}_{\class{B}}$, write a short exact sequence $0 \to N' \to E \to N \to 0$, where $E$ is injective and $N' \in \class{GI}_{\class{B}}$. Applying $\Hom_R(W'',-)$ to this sequence gives $\Ext^1_R(W'',N) \cong \Ext^2_R(W'', N')$, which is zero by what we just proved. \end{proof} \begin{proposition}\label{proposition-relative-G-Inj} Let $\class{B}$ be a class of modules containing the injectives. Suppose every module $M$ has a special $\class{GI}_{\class{B}}$-preenvelope. Then $(All, \class{W}, \class{GI}_{\class{B}})$ is an injective abelian model structure on $R$-Mod. In particular, $(\class{W}, \class{GI}_{\class{B}})$ is a hereditary cotorsion pair, and in fact, it is a perfect cotorsion pair. \end{proposition} \begin{proof} To see that $(\class{W}, \class{GI}_{\class{B}})$ is a cotorsion pair (with enough injectives) we only need to show $\rightperp{\class{W}}\subseteq \class{GI}_{\class{B}}$. Given any $M \in \rightperp{\class{W}}$, write a special $\class{GI}_{\class{B}}$-preenvelope $0 \xrightarrow{} M \xrightarrow{} N \xrightarrow{} W \xrightarrow{} 0$. So $W \in \class{W}$ and $N \in \class{GI}_{\class{B}}$. Since $M \in \rightperp{\class{W}}$, the sequence splits, making $M$ a direct summand of $N$. Therefore $M\in \class{GI}_{\class{B}}$ by Lemma~\ref{lemma-relative-G-Inj-summands}. Since $(\class{W}, \class{GI}_{\class{B}})$ is a cotorsion pair with enough injectives, it also has enough projectives by the Salce trick~\cite[Prop.~7.1.7]{enochs-jenda-book}. Thus we have a complete cotorsion pair. By Lemma~\ref{lemma-relative-G-Inj-thick} the class $\class{W}$ is thick. So by~\cite[Prop.~3.1]{gillespie-ding-modules}, $\class{W}$ is closed under direct limits. Any complete cotorsion pair whose left side is closed under direct limits is a perfect cotorsion pair, by~\cite[Theorem~7.2.6]{enochs-jenda-book}. It is now clear too that $(All, \class{W}, \class{GI}_{\class{B}})$ is an injective abelian model structure, by Lemma~\ref{lemma-injective-heart}. \end{proof} In addition, using {\v{S}}aroch and {\v{S}}\v{t}ov{\'{\i}}{\v{c}}ek's~\cite[Theorem~5.6]{saroch-stovicek-G-flat} we can see that $(\class{W}, \class{GI}_{\class{B}})$ is always at least a cotorsion pair. We don't use the following result in this paper, but point it out for its own interest; it generalizes~\cite[Prop.~2]{iacob-generalized-gorenstein}. \begin{proposition}\label{prop-always-cot-pair} Let $\class{B}$ be a class of modules containing the injectives. Then $(\class{W}, \class{GI}_{\class{B}})$ is always a hereditary cotorsion pair with $\class{W}$ thick.
\end{proposition} \begin{proof} For any class $\class{C}$, we have $\leftperp{\class{C}} = \leftperp{(\rightperp{(\leftperp{\class{C}})})}$; applied to $\class{C} = \class{GI}_{\class{B}}$ this says $\leftperp{(\rightperp{\class{W}})} = \class{W}$, so we have a cotorsion pair $(\class{W},\rightperp{\class{W}})$, where $\class{W} = \leftperp{\class{GI}_{\class{B}}}$. We automatically have $\class{GI}_{\class{B}} \subseteq \rightperp{\class{W}}$, and we wish to show that $\rightperp{\class{W}} \subseteq \class{GI}_{\class{B}}$. As pointed out in the proof of Lemma~\ref{lemma-relative-G-Inj-thick}, $\class{W}$ is a projectively resolving class. Therefore, by~\cite[Lemma~1.2.8]{garcia-rozas}, $\rightperp{\class{W}}$ is an injectively coresolving class and $\Ext^i_R(W,N) = 0$ for all $W \in \class{W}$ and $N \in \rightperp{\class{W}}$ and $i \geq 1$. So by Lemma~\ref{lemma-characterize-G-Inj}, we only need to show that any $N \in \rightperp{\class{W}}$ admits an exact and $\Hom(\class{B}, -)$-acyclic complex $$\cdots \xrightarrow{} E_2 \rightarrow E_1 \rightarrow E_0 \rightarrow N \rightarrow 0$$ with each $E_i$ injective. But note that any $N \in \rightperp{\class{W}}$ must be Gorenstein injective, because $\rightperp{(\leftperp{\class{GI}_{\class{B}}})} \subseteq \rightperp{(\leftperp{\class{GI}})} = \class{GI}$, with the equality by~\cite[Theorem~5.6]{saroch-stovicek-G-flat}. So we have a short exact sequence \begin{equation}\label{equation-ses1}\tag{$*$} 0 \xrightarrow{} N_0 \xrightarrow{} E_0 \xrightarrow{} N \xrightarrow{} 0 \end{equation} with $E_0$ injective and $N_0$ Gorenstein injective. Let $W \in \class{W}$ be arbitrary, and we will show that $\Ext^1_R(W,N_0) = 0$. This will complete the proof, because repeating the argument ad infinitum produces the desired $\Hom(\class{B}, -)$-acyclic injective resolution. Write a short exact sequence \begin{equation}\label{equation-ses2}\tag{$**$} 0 \xrightarrow{} W \xrightarrow{} I \xrightarrow{} W' \xrightarrow{} 0 \end{equation} with $I$ injective. Then $W' \in \class{W}$ by Lemma~\ref{lemma-relative-G-Inj-thick}. Applying $\Hom(W',-)$ to \eqref{equation-ses1} we get $$ 0 = \Ext^1_R(W',N) \xrightarrow{} \Ext^2_R(W',N_0) \xrightarrow{} \Ext^2_R(W',E_0) = 0$$ and so $\Ext^2_R(W',N_0) =0$. On the other hand, applying $\Hom(-,N_0)$ to \eqref{equation-ses2} we get $$ 0 = \Ext^1_R(I,N_0) \xrightarrow{} \Ext^1_R(W,N_0) \xrightarrow{} \Ext^2_R(W',N_0) = 0$$ and so $\Ext^1_R(W,N_0) =0$. \end{proof} \begin{proposition}\label{prop-injective model on complexes} Let $\class{B}$ be any class of modules for which there exists a set (not just a class) $\class{S} \subseteq \class{B}$ such that each $B \in \class{B}$ is a transfinite extension of modules in $\class{S}$. Then there is a cofibrantly generated injective abelian model structure on the category of chain complexes whose fibrant objects are the exact $\Hom(\class{B},-)$-acyclic complexes of injectives. We call this the \textbf{exact $\boldsymbol{\Hom(\class{B},-)}$-acyclic injective model structure}. \end{proposition} \begin{proof} A detailed argument is given in~\cite[Lemma~3.3]{gillespie-duality-pairs} for commutative rings, but it certainly holds for noncommutative rings too. It shows this to be a consequence of~\cite[Theorem~4.1]{bravo-gillespie-hovey}. The point is that one can easily check that a complex $I$ of injective modules is exact and $\Hom(\class{B},-)$-acyclic if and only if $\Hom(R\oplus B,I)$ is exact, where $B$ is the single ``test module'' $B = \bigoplus_{N \in \class{S}} N$. \end{proof} \begin{theorem}\label{thm-Gor-module} Let $\class{B}$ be a class of modules containing the injectives.
Assume there exists a set (not just a class) $\class{S} \subseteq \class{B}$ such that each $B \in \class{B}$ is a transfinite extension of modules in $\class{S}$. Then there is a cofibrantly generated injective abelian model structure on $R$-Mod, the \textbf{Gorenstein $\class{B}$-injective model structure}, whose fibrant objects are the Gorenstein $\class{B}$-injectives. In particular, $(\class{W}, \class{GI}_{\class{B}})$ is a complete hereditary cotorsion pair in $R$-Mod, cogenerated by a set. In fact, it is a perfect cotorsion pair. The sphere functor $S^0(-) : R\text{-Mod} \xrightarrow{} \textnormal{Ch}(R)$ is a left Quillen equivalence from the Gorenstein $\class{B}$-injective model structure to the exact $\Hom(\class{B},-)$-acyclic injective model structure. \end{theorem} \begin{proof} We apply Proposition~\ref{proposition-relative-G-Inj}. For any object $M$, we can take a fibrant replacement of $S^0(M)$ in the exact $\Hom(\class{B},-)$-acyclic model structure. This is precisely a short exact sequence \[ 0 \xrightarrow{} S^{0}(M) \xrightarrow{} I \xrightarrow{}X \xrightarrow{} 0 \] in which $I$ is an exact $\Hom(\class{B},-)$-acyclic complex of injectives and $X$ is trivial in the exact $\Hom(\class{B},-)$-acyclic model structure. By the snake lemma, we get a short exact sequence \[ 0 \xrightarrow{} M \xrightarrow{} Z_{0}I \xrightarrow{} Z_{0}X \xrightarrow{} 0. \] $Z_{0}I$ is Gorenstein $\class{B}$-injective by definition. By the argument in~\cite[Lemma~4.4]{gillespie-duality-pairs} we also get $Z_{0}X \in \class{W}$. The functor $S^0(-) : R\text{-Mod} \xrightarrow{} \textnormal{Ch}(R)$ is left adjoint to the cycle functor $Z_0(-)$ and is a Quillen adjunction from the Gorenstein $\class{B}$-injective model structure to the exact $\Hom(\class{B},-)$-acyclic injective model structure. The argument from~\cite[Theorem~5.8]{bravo-gillespie-hovey} generalizes to show that it is indeed a Quillen equivalence. \end{proof} \begin{corollary}\label{cor-well-generated} The full subcategory $\class{GI}_{\class{B}} \subseteq R\textnormal{-Mod}$ is a Frobenius category whose projective-injective objects are precisely the usual injective $R$-modules. The canonical functor $\gamma : R\textnormal{-Mod} \xrightarrow{} \textnormal{Ho}(R\textnormal{-Mod})$ takes all projective modules and all modules in $\class{B}$ to 0, and we have a triangulated equivalence to the stable category $$\textnormal{Ho}(R\textnormal{-Mod}) \cong \textnormal{St}(\class{GI}_{\class{B}}).$$ Moreover, these are well-generated triangulated categories. \end{corollary} \begin{proof} The canonical functor $\gamma$ takes precisely $\class{W}$ to 0, and $\class{W}$ contains $\class{B}$, and certainly all projectives. The cotorsion pair $(\class{W}, \class{GI}_{\class{B}})$ is hereditary in the sense that $\class{W}$ is closed under taking cokernels of monomorphisms. Thus the Frobenius equivalence follows from a general result about hereditary abelian model structures~\cite[Theorem~4.3]{gillespie-hereditary-abelian-models}. We also point out that the homotopy category is a well-generated category in the sense of~\cite{neeman-well generated}. Indeed once we have a cofibrantly generated model structure on a locally presentable (pointed) category, a main result from~\cite{rosicky-brown representability combinatorial model srucs} is that its homotopy category is well-generated. \end{proof} \subsection{Gorenstein $\class{B}$-projective modules} Much of what we have done above has a projective dual.
To describe it, let $\class{B}$ denote a class of modules, but now assume it contains all of the projective modules (instead of the injective modules). \begin{definition}\label{Defs-relative-G-pro} We say an $R$-module $M$ is \emph{Gorenstein $\class{B}$-projective} if $M=Z_{0}Q$ for some exact and $\Hom(-, \class{B})$-acyclic complex of projective $R$-modules $Q$. \end{definition} \begin{notation}\label{notation-relative-G-proj} We let $\class{GP}_{\class{B}}$ denote the class of all Gorenstein $\class{B}$-projective $R$-modules, and we set $\class{V} = \rightperp{\class{GP}_{\class{B}}}$. \end{notation} We leave it to the reader to formulate and verify the duals of the sequence of Lemmas~\ref{lemma-characterize-G-Inj}--\ref{lemma-relative-G-Inj-thick}. We get the following result, dual to Proposition~\ref{proposition-relative-G-Inj}. But note that we don't get a \emph{perfect} cotorsion pair. For the Gorenstein $\class{B}$-injectives, that conclusion relies on~\cite[Prop.~3.1]{gillespie-ding-modules} and~\cite[Theorem~7.2.6]{enochs-jenda-book}; we don't have duals for those. \begin{proposition}[Dual of Proposition~\ref{proposition-relative-G-Inj}]\label{proposition-relative-G-Proj} Let $\class{B}$ be a class of modules containing the projectives. Suppose every module $M$ has a special $\class{GP}_{\class{B}}$-precover. Then $(\class{GP}_{\class{B}}, \class{V})$ is a complete hereditary cotorsion pair. In fact, $(\class{GP}_{\class{B}}, \class{V}, All)$ is a projective abelian model structure on $R$-Mod. \end{proposition} There is, however, a dual for Proposition~\ref{prop-always-cot-pair}. Note that the proof of Proposition~\ref{prop-always-cot-pair} only uses that the Gorenstein injectives are the right side of a cotorsion pair (not completeness). It was just shown in~\cite[Cor.~3.4]{cortes-saroch} that the Gorenstein projectives are the left half of a cotorsion pair, and this will give us the dual of Proposition~\ref{prop-always-cot-pair}. In fact, the dual statement is shown directly in~\cite[Thm.~3.3]{cortes-saroch}! So this is as far as we know how to go by working straight from the definition of the Gorenstein $\class{B}$-projectives. However, \emph{if} we can build the projective model structure on $\textnormal{Ch}(R)$ that is dual to the one in Proposition~\ref{prop-injective model on complexes}, then the dual of Theorem~\ref{thm-Gor-module} and its Corollary~\ref{cor-well-generated} will hold by duality arguments. We make a precise statement for later use. \begin{theorem}[Dual of Theorem~\ref{thm-Gor-module}]\label{theorem-projectivemodels} Let $\class{B}$ be a class of modules containing the projectives. Suppose we have constructed a projective abelian model structure on the category of chain complexes whose cofibrant objects are the exact $\Hom(-,\class{B})$-acyclic complexes of projectives. Call this the \textbf{exact $\boldsymbol{\Hom(-,\class{B})}$-acyclic projective model structure}. Then there is a projective abelian model structure on $R$-Mod, the \textbf{Gorenstein $\class{B}$-projective model structure}, in which the cofibrant objects are the Gorenstein $\class{B}$-projectives. In particular, $(\class{GP}_{\class{B}}, \class{V})$ is a complete hereditary cotorsion pair in $R$-Mod. In this case, the sphere functor $S^0(-) : R\text{-Mod} \xrightarrow{} \textnormal{Ch}(R)$ is a right Quillen equivalence from the Gorenstein $\class{B}$-projective model structure to the exact $\Hom(-,\class{B})$-acyclic projective model structure.
\end{theorem} \begin{proof} Let us just comment on how the proof of Theorem~\ref{thm-Gor-module} dualizes. A main point is that the functor $S^0(-) : R\text{-Mod} \xrightarrow{} \textnormal{Ch}(R)$ is also a right adjoint, namely to the functor $X \mapsto X_0/B_0X$. The idea is to apply Proposition~\ref{proposition-relative-G-Proj}. So for any object $M$, we take a short exact sequence \[ 0 \xrightarrow{} X \xrightarrow{} P \xrightarrow{}S^0(M) \xrightarrow{} 0 \] where $P$ is an exact $\Hom(-,\class{B})$-acyclic complex of projectives and $X$ is trivial in the exact $\Hom(-,\class{B})$-acyclic model structure. By the snake lemma, we get a short exact sequence \[ 0 \xrightarrow{} X_0/B_0X \xrightarrow{} P_0/B_0P \xrightarrow{} M \xrightarrow{} 0. \] $P_0/B_0P \cong Z_{-1}P$ is Gorenstein $\class{B}$-projective by definition. The argument of~\cite[Lemma~4.4]{gillespie-duality-pairs} dualizes, and we get $X_{0}/B_0X \in \class{V}$. So Proposition~\ref{proposition-relative-G-Proj} applies. Again, the functor $S^0(-) : R\text{-Mod} \xrightarrow{} \textnormal{Ch}(R)$ is right adjoint to the functor $X \mapsto X_0/B_0X$. The argument from~\cite[Theorem~8.8]{bravo-gillespie-hovey} generalizes to show that they form a Quillen equivalence from the exact $\Hom(-,\class{B})$-acyclic projective model structure to the Gorenstein $\class{B}$-projective model structure. \end{proof} \begin{remark} In the above scenario of Theorem~\ref{theorem-projectivemodels}, the dual of Corollary~\ref{cor-well-generated} also holds. However, the conclusion that the homotopy category is well-generated is dependent on showing the model structure to be cofibrantly generated. \end{remark} \section{Relative Gorenstein flat and projectively coresolved modules}\label{sec-coresolved} We again let $\class{B}$ denote a class of modules containing all injective modules. However, we now assume that all the modules in $\class{B}$ are $R^\circ$-modules, where $R^\circ$ denotes the opposite ring $R^{\text{op}}$. The following notion of Gorenstein $\class{B}$-flat module was studied in~\cite{estrada-iacob-perez-G-flat}. \begin{definition}\label{Defs-relative-G-flat} We will say that a chain complex $X$ of $R$-modules is \emph{$\class{B}^{\otimes}$-acyclic} if the tensor product of $X$ with any $B \in \class{B}$ yields an exact complex of abelian groups. If $X$ itself is also exact we will say that $X$ is an \emph{exact $\class{B}^{\otimes}$-acyclic} complex. We say an $R$-module $N$ is \emph{Gorenstein $\class{B}$-flat} if $N=Z_{0}F$ for some exact $\class{B}^{\otimes}$-acyclic complex of flat $R$-modules $F$. \end{definition} \begin{notation}\label{notation-relative-G-flat} We let $\class{GF}_{\class{B}}$ denote the class of all Gorenstein $\class{B}$-flat $R$-modules. We set $\class{GC}_{\class{B}} = \rightperp{\class{GF}_{\class{B}}}$ and call this the class of all \emph{Gorenstein $\class{B}$-cotorsion} modules. \end{notation} Estrada-Iacob-P\'erez show that $\class{GF}_{\class{B}}$ is a Kaplansky class and closed under direct limits, and this gives us the following result. \begin{proposition}\cite[Corollary~2.20]{estrada-iacob-perez-G-flat}\label{G-B-flat} Suppose the class $\class{GF}_{\class{B}}$ is closed under extensions. Then $(\class{GF}_{\class{B}}, \class{GC}_{\class{B}})$ is a perfect hereditary cotorsion pair, cogenerated by a set. \end{proposition} Now let $(\class{F},\class{C})$ denote Enochs' flat cotorsion pair. Here $\class{F}$ denotes the class of all flat $R$-modules and $\class{C}$ the class of all cotorsion $R$-modules.
It is then shown in~\cite[Proposition~3.1]{estrada-iacob-perez-G-flat} that $\class{GF}_{\class{B}}\cap\class{GC}_{\class{B}} = \class{F}\cap \class{C}$, as long as $\class{GF}_{\class{B}}$ is closed under extensions. Applying~\cite[Theorem~1.2]{gillespie-hovey triples}, this proves the following. \begin{theorem}~\cite[Theorem~3.2]{estrada-iacob-perez-G-flat}\label{thm-Gor-flat-mod} Let $\class{B}$ be a class of $R^\circ$-modules containing the injectives. Assume that the Gorenstein $\class{B}$-flat modules are closed under extensions. Then there is a cofibrantly generated abelian model structure on $R$-Mod, the \textbf{Gorenstein $\class{B}$-flat model structure}, corresponding to the cotorsion pairs $(\class{GF}_{\class{B}}, \class{GC}_{\class{B}})$ and $(\class{F}, \class{C})$. \end{theorem} We will see below in Proposition~\ref{prop-proj-coresolved-B-flat} that, as in~\cite[Theorem~4.11]{saroch-stovicek-G-flat} and~\cite[Theorem~2.14]{estrada-iacob-perez-G-flat}, closure under extensions comes for free for the classes $\class{B}$ we will consider in this paper. In particular, this is the case whenever $\class{B}$ is the injective class of some semi-complete duality pair and, more generally, whenever $\class{B}$ satisfies the hypotheses of Theorem~\ref{theorem-proj-coresolved-B-flat}. \subsection{Projectively coresolved Gorenstein $\class{B}$-flat modules} $\class{B}$ still denotes a class of $R^\circ$-modules containing all injectives. The following relative version of {\v{S}}aroch and {\v{S}}\v{t}ov{\'{\i}}{\v{c}}ek's projectively coresolved Gorenstein flat modules was studied in~\cite{estrada-iacob-perez-G-flat}. \begin{definition}\label{Defs-relative-G-flat-proj} We say an $R$-module $N$ is \emph{projectively coresolved Gorenstein $\class{B}$-flat} if $N=Z_{0}Q$ for some exact $\class{B}^{\otimes}$-acyclic complex of projective $R$-modules $Q$. \end{definition} \begin{notation}\label{notation-relative-G-flat-proj} We let $\class{PGF}_{\class{B}}$ denote the class of all projectively coresolved Gorenstein $\class{B}$-flat $R$-modules, and we set $\class{V} = \rightperp{\class{PGF}_{\class{B}}}$. \end{notation} \begin{lemma}\label{lemma-proj-perp} The class $\class{V} := \rightperp{\class{PGF}_{\class{B}}}$ equals the class of all $R$-modules $V$ such that $\Hom_R(Q,V)$ is acyclic for every exact and $\class{B}^{\otimes}$-acyclic complex of projectives $Q$. Equivalently, $\Ext^1_{\textnormal{Ch}(R)}(Q,S^0V) = 0$ for all such $Q$. \end{lemma} \begin{proof} Note that the class of all exact $\class{B}^{\otimes}$-acyclic complexes of projectives $Q$ is closed under suspensions. So we have that $V \in \rightperp{\class{PGF}_{\class{B}}}$ if and only if we have $\Ext^1_R(Z_nQ,V) = 0$ for all $n$ and all such $Q$. Since $Q$ is an exact complex of projectives, this happens if and only if $\Hom_R(Q,V)$ is exact for all such $Q$. But $\Hom_R(Q,V)=\mathit{Hom}(Q,S^0V)$, and since $Q$ is a complex of projectives this complex is exact if and only if $\Ext^1_{\textnormal{Ch}(R)}(Q,S^0V) = 0$ for all such $Q$. \end{proof} Next we have an analog of Proposition~\ref{proposition-relative-G-Inj} for the class of $\mathcal{PGF}_{\mathcal{B}}$ modules: \begin{proposition}[Analog of Proposition~\ref{proposition-relative-G-Inj}]\label{prop-proj-coresolved-B-flat} Let $\class{B}$ be a class of $R^\circ$-modules containing the injectives. Suppose every module $M$ has a special $\mathcal{PGF}_{\mathcal{B}}$-precover. Then $(\mathcal{PGF}_{\mathcal{B}}, \mathcal{V})$ is a complete hereditary cotorsion pair.
In fact, $(\class{PGF}_{\class{B}}, \class{V}, All)$ is a projective abelian model structure on $R$-Mod. Moreover, the Gorenstein $\class{B}$-flat modules are closed under extensions and $(\class{GF}_{\class{B}}, \class{V}, \class{C})$ is a cofibrantly generated abelian model structure on $R$-Mod. That is, the \textbf{Gorenstein $\class{B}$-flat model structure} of Theorem~\ref{thm-Gor-flat-mod} exists and shares the same class of trivial objects as the projective model structure. \end{proposition} \begin{proof} We show that $\leftperp{\class{V}} \subseteq \mathcal{PGF}_{\mathcal{B}}$, and therefore $(\mathcal{PGF}_{\mathcal{B}}, \class{V})$ is a cotorsion pair. Let $M \in \leftperp{\class{V}}$. Consider an exact sequence $0 \rightarrow A \rightarrow D \rightarrow M \rightarrow 0$ with $D \in \mathcal{PGF}_{\mathcal{B}}$ and $A \in \class{V} = \rightperp{\mathcal{PGF}_{\mathcal{B}}}$. Since $\Ext^1_R(M,A) =0$ we have $D \cong A \oplus M$, so $M \in \mathcal{PGF}_{\mathcal{B}}$. Indeed $\mathcal{PGF}_{\mathcal{B}}$ is closed under direct summands for the following reason. It is shown in~\cite[Theorem~2.10]{estrada-iacob-perez-G-flat} that $\mathcal{PGF}_{\mathcal{B}}$ is a resolving class (as long as $\class{B}$ contains all the injective modules). It is clearly closed under direct sums as well. Therefore, $\mathcal{PGF}_{\mathcal{B}}$ is closed under direct summands by~\cite[Prop.~1.4]{holm}. It follows that $(\mathcal{PGF}_{\mathcal{B}}, \mathcal{V})$ is a cotorsion pair with enough projectives, and hence it also has enough injectives by the Salce trick~\cite[Prop.~7.1.7]{enochs-jenda-book}. So $(\mathcal{PGF}_{\mathcal{B}}, \mathcal{V})$ is a complete cotorsion pair. The pair is hereditary: if $N \in \mathcal{PGF}_{\mathcal{B}}$ then, by definition, there is an exact sequence $0 \rightarrow N' \rightarrow P \rightarrow N \rightarrow 0$ with $P$ projective and $N' \in \mathcal{PGF}_{\mathcal{B}}$. Then for any $V \in \class{V}$, the exact sequence $0 = \Ext^1_R(N',V) \rightarrow \Ext^2_R(N,V) \rightarrow \Ext^2_R(P,V)=0$ gives that $\Ext^2_R(N,V)=0$. Similarly, $\Ext^i_R(N,V)=0$ for all $i \ge 1$, and all $V \in \class{V}$. Any right orthogonal class, in particular $\mathcal{V}$, is closed under direct summands. The fact that the class $\mathcal{V}$ has the 2 out of 3 property on short exact sequences follows from Lemma~\ref{lemma-proj-perp}: For every exact and $\class{B}^{\otimes}$-acyclic complex of projectives $Q$, apply the functor $\Hom_R(Q,-)$ to any short exact sequence of $R$-modules. The 2 out of 3 property for exactness of cochain complexes gives the result. Since $(\mathcal{PGF}_{\mathcal{B}}, \mathcal{V})$ is a complete cotorsion pair and $\mathcal{V}$ is thick, we will get the projective abelian model structure $(\class{PGF}_{\class{B}}, \class{V}, All)$ by applying~\cite[Proposition~3.4]{bravo-gillespie-hovey}, once we see that $\mathcal{V}$ contains all projective modules. But every module in $\mathcal{PGF}_{\mathcal{B}}$ is a projectively coresolved Gorenstein flat module in the sense of {\v{S}}aroch and {\v{S}}\v{t}ov{\'{\i}}{\v{c}}ek~\cite{saroch-stovicek-G-flat}, because we are assuming $\class{B}$ contains all injectives. A key result they show in~\cite[Theorem~4.4]{saroch-stovicek-G-flat} (see Corollary~\ref{cor-ss}) is that every such module is Gorenstein projective. It follows that $\class{V}$ contains all projective modules. So $(\class{PGF}_{\class{B}}, \class{V}, All)$ is a projective abelian model structure on $R$-Mod.
In fact, it follows from {\v{S}}aroch and {\v{S}}\v{t}ov{\'{\i}}{\v{c}}ek's~\cite[Theorem~4.4]{saroch-stovicek-G-flat} (see Corollary~\ref{cor-ss}) that $\class{V}$ contains all flat modules. Therefore, the claim that $\class{V}$ is also the class of trivial objects in the Gorenstein $\class{B}$-flat model structure will follow immediately from~\cite[Proposition~3.2]{gillespie-recollement} combined with~\cite[Lemma~2.3(1)]{gillespie-models-for-hocats-of-injectives}, once we show $\mathcal{GF}_{\mathcal{B}} \cap \class{V} = \class{F}$, where $\class{F}$ is the class of all flat modules. Below we do this by adapting the argument from~\cite[Proposition~5.2]{estrada-gillespie-coherent-schemes}. From the above comments we have $\class{F} \subseteq \mathcal{GF}_{\mathcal{B}} \cap \class{V}$, so we focus on showing the reverse containment $\mathcal{GF}_{\mathcal{B}} \cap \class{V} \subseteq \class{F}$. So let $M \in \mathcal{GF}_{\mathcal{B}} \cap \class{V}$, and write it as $M = Z_0F$ where $F$ is an exact $\class{B}^{\otimes}$-acyclic complex of flat modules. From~\cite[Cor.~6.4]{bravo-gillespie-hovey} or~\cite[Thm.~4.2(1)/Prop.~1.7]{stovicek-deconstructible} we have a complete cotorsion pair $(\dwclass{P}, \rightperp{(\dwclass{P})})$, where $\dwclass{P}$ is the class of all complexes of projectives. So we may write a short exact sequence $$0 \xrightarrow{} F \xrightarrow{} W \xrightarrow{} P \xrightarrow{} 0 $$ with $W \in \rightperp{(\dwclass{P})}$ and $P \in \dwclass{P}$. But then using Neeman's result from~\cite{neeman-flat} (a statement in the notation we are using is also given in~\cite[Lemma~4.3]{estrada-gillespie-coherent-schemes}), one easily argues that $W \in \tilclass{F}$, the class of all exact complexes with all cycle modules flat. Since $F$ and $W$ are each exact, we see that $P$ is exact too. Moreover, the short exact sequence is split in each degree, so tensoring with any $B \in \class{B}$ yields another short exact sequence. So since $F$ and $W$ are each exact and $\class{B}^{\otimes}$-acyclic complexes, it follows that $P$ is an exact $\class{B}^{\otimes}$-acyclic complex too. Therefore $Z_0P$ is a projectively coresolved Gorenstein $\class{B}$-flat module. Note that by the snake lemma we get a short exact sequence $0 \xrightarrow{} Z_0F \xrightarrow{} Z_0W \xrightarrow{} Z_0P \xrightarrow{} 0$. By the hypothesis, $M = Z_0F \in \class{V}$, and so we conclude that this sequence splits. Since $Z_0W$ is flat, so is the direct summand $Z_0F$, proving $\mathcal{GF}_{\mathcal{B}} \cap \class{V} \subseteq \class{F}$. It remains to see that the Gorenstein $\class{B}$-flat modules are closed under extensions. The reader can verify that {\v{S}}aroch and {\v{S}}\v{t}ov{\'{\i}}{\v{c}}ek's characterizations of Gorenstein flat modules given in~\cite[Theorem~4.11]{saroch-stovicek-G-flat} generalize to any class $\class{B}$ containing the injectives and such that $(\mathcal{PGF}_{\mathcal{B}}, \mathcal{V})$ is a complete cotorsion pair. See also~\cite[Theorem~2.14]{estrada-iacob-perez-G-flat}; the proof of Estrada-Iacob-P\'erez also illustrates that the characterizations hold whenever $\class{B}$ contains the injectives and $(\mathcal{PGF}_{\mathcal{B}}, \mathcal{V})$ is a complete cotorsion pair. We state these characterizations in a Remark below. One of the characterizations that carry over is that a module $M$ is Gorenstein $\class{B}$-flat if and only if it is in the class $\leftperp{(\class{C} \cap \class{V})}$, where $\class{C}$ is the class of cotorsion modules.
This class is closed under extensions, so Theorem~\ref{thm-Gor-flat-mod} applies. \end{proof} Here is the promised Remark concerning~\cite[Theorem~4.11]{saroch-stovicek-G-flat}. \begin{remark} In addition to our blanket assumption that $\class{B}$ contains all injectives, suppose we know $(\mathcal{PGF}_{\mathcal{B}}, \mathcal{V})$ is a complete cotorsion pair. Then the following conditions are equivalent for an $R$-module $M$. \begin{enumerate} \item $M$ is Gorenstein $\mathcal{B}$-flat. \item There is a short exact sequence of modules \[ 0 \to F \to L \to M \to 0 \] with $F$ flat and $L \in \mathcal{PGF}_{\mathcal{B}}$, which is also $\Hom_R(-,\mathcal{C})$-acyclic, where $\mathcal{C}$ is the class of cotorsion modules. \item $\Ext^1_R(M,C) = 0$ for every $C \in \mathcal{C} \cap \mathcal{V}$. That is, $M \in \leftperp{(\mathcal{C} \cap \mathcal{V})}$. \item There is a short exact sequence of modules \[ 0 \to M \to F \to L \to 0 \] with $F$ flat and $L \in \mathcal{PGF}_{\mathcal{B}}$. \end{enumerate} \end{remark} \begin{proposition}[Analog of Proposition~\ref{prop-injective model on complexes}]\label{prop-proj-coresolved model on complexes} Let $\class{B}$ be any class of $R^\circ$-modules for which there exists a set (not just a class) $\class{S} \subseteq \class{B}$ such that each $B \in \class{B}$ is a transfinite extension of modules in $\class{S}$. Then there is a cofibrantly generated projective abelian model structure on the category of chain complexes whose cofibrant objects are the exact $\class{B}^{\otimes}$-acyclic complexes of projectives. We call this the \textbf{exact $\boldsymbol{\class{B}^{\otimes}}$-acyclic projective model structure}. \end{proposition} \begin{proof} This follows from~\cite[Theorem~6.1]{bravo-gillespie-hovey}. One can check that a complex $P$ of projective modules is exact and $\class{B}^{\otimes}$-acyclic if and only if it is exact upon tensoring with $R\oplus B$, where $B$ is the single ``test module'' $B = \bigoplus_{N \in \class{S}} N$. Therefore, we get from~\cite[Theorem~6.1]{bravo-gillespie-hovey} a cofibrantly generated abelian model structure on $\textnormal{Ch}(R)$, where the cofibrant objects are the exact $\class{B}^{\otimes}$-acyclic complexes of projectives. \end{proof} \begin{theorem}[Analog of Theorem~\ref{thm-Gor-module}]\label{theorem-proj-coresolved-B-flat} Let $\class{B}$ be a class of $R^\circ$-modules containing the injectives. Assume there exists a set (so again, not just a class) $\class{S} \subseteq \class{B}$ such that each $B \in \class{B}$ is a transfinite extension of modules in $\class{S}$. Then there is a cofibrantly generated projective abelian model structure on $R$-Mod, the \textbf{projectively coresolved Gorenstein $\class{B}$-flat model structure}, whose cofibrant objects are the projectively coresolved Gorenstein $\class{B}$-flat modules. In particular, $(\mathcal{PGF}_{\mathcal{B}}, \mathcal{V})$ is a complete hereditary cotorsion pair, cogenerated by a set. Moreover, the Gorenstein $\class{B}$-flat model structure of Theorem~\ref{thm-Gor-flat-mod} exists and shares the same class $\class{V}$ of trivial objects as the projective model structure. Finally, the sphere functor $S^0(-) : R\text{-Mod} \xrightarrow{} \textnormal{Ch}(R)$ is a right Quillen equivalence from the Gorenstein $\class{B}$-flat (resp. projectively coresolved) model structure to the exact $\class{B}^{\otimes}$-acyclic flat (resp. projective) model structure.
\end{theorem} \begin{proof} By Proposition~\ref{prop-proj-coresolved-B-flat} we only need to show that every module $M$ has a special $\mathcal{PGF}_{\mathcal{B}}$-precover. But by Proposition~\ref{prop-proj-coresolved model on complexes} we have the exact $\class{B}^{\otimes}$-acyclic projective model structure on chain complexes. So for any object $M$, we can find a short exact sequence \[ 0 \xrightarrow{} X \xrightarrow{} Q \xrightarrow{}S^0(M) \xrightarrow{} 0 \] where $Q$ is an exact $\class{B}^{\otimes}$-acyclic complex of projectives and $X$ is trivial in the exact $\class{B}^{\otimes}$-acyclic projective model structure. By the snake lemma, we get a short exact sequence \[ 0 \xrightarrow{} X_0/B_0X \xrightarrow{} Q_0/B_0Q \xrightarrow{} M \xrightarrow{} 0 \] and $Q_0/B_0Q \cong Z_{-1}Q$ is projectively coresolved Gorenstein $\class{B}$-flat, by definition. So our goal is to show $X_0/B_0X \in \class{V}$. It follows from Lemma~\ref{lemma-proj-perp} that $X_0/B_0X \in \class{V}$ if and only if $S^0(X_0/B_0X)$ is trivial in the exact $\class{B}^{\otimes}$-acyclic projective model structure. So the plan is to show below that $S^0(X_0/B_0X)$ is trivial. But first we note that any bounded above complex of projective modules is trivial in the exact $\class{B}^{\otimes}$-acyclic projective model structure, and that any bounded below exact complex is also trivial. Indeed, for any projective module $P$, we deduce that $S^n(P)$ is trivial from {\v{S}}aroch and {\v{S}}\v{t}ov{\'{\i}}{\v{c}}ek's~\cite[Theorem~4.4]{saroch-stovicek-G-flat} (see Corollary~\ref{cor-ss}) combined with the above Lemma~\ref{lemma-proj-perp}. It follows that any bounded above complex of projective modules must also be trivial; for example, see~\cite[Lemma~2.3]{gillespie-AC-proj-complexes}. On the other hand, one easily verifies that for any module $N$, the disk complex $D^n(N)$ is also trivial. So~\cite[Lemma~2.3]{gillespie-AC-proj-complexes} also tells us that any bounded below exact complex is trivial. With these observations we will argue that $S^0(X_0/B_0X)$ is trivial. Indeed one can see that the complex $X$ has a subcomplex $A \subseteq X$, where $A$ is the bounded below exact complex $\cdots \xrightarrow{} X_2 \xrightarrow{} X_1 \xrightarrow{} B_0X \xrightarrow{} 0$. As noted above, this complex is trivial, and since $X$ is trivial the quotient $X/A$ is trivial too. We note that this quotient is the complex $0 \xrightarrow{} X_{0}/B_{0}X \xrightarrow{} X_{-1} \xrightarrow{} X_{-2} \xrightarrow{} \cdots$, which in turn has another obvious subcomplex $0 \xrightarrow{} 0 \xrightarrow{} X_{-1} \xrightarrow{} X_{-2} \xrightarrow{} \cdots$. This is a bounded above complex of projective modules and therefore it too is trivial. This in turn implies that the corresponding quotient complex, which is $S^0(X_{0}/B_{0}X)$, is trivial. This completes the proof that the short exact sequence \[ 0 \xrightarrow{} X_0/B_0X \xrightarrow{} Q_0/B_0Q \xrightarrow{} M \xrightarrow{} 0 \] is a special $\mathcal{PGF}_{\mathcal{B}}$-precover of $M$, and gives us the projective model structure corresponding to the Hovey triple $(\mathcal{PGF}_{\mathcal{B}}, \mathcal{V}, All)$. The construction from~\cite[Theorem~6.1]{bravo-gillespie-hovey} shows that the class of all exact $\class{B}^{\otimes}$-acyclic complexes of projectives is filtered by a set of such complexes. The filtrations descend to filtrations on the cycle modules, and it follows that $(\mathcal{PGF}_{\mathcal{B}}, \mathcal{V})$ is cogenerated by a set.
This in turn translates to a cofibrantly generated model structure by~\cite[Section~6]{hovey}. Again, the functor $S^0(-) : R\text{-Mod} \xrightarrow{} \textnormal{Ch}(R)$ is right adjoint to the functor $X \mapsto X_0/B_0X$. By~\cite[Theorem~4.2]{estrada-gillespie-coherent-schemes} we have the exact $\class{B}^{\otimes}$-acyclic flat model structure on chain complexes. We can adapt the proof of~\cite[Prop.~5.5]{estrada-gillespie-coherent-schemes} to show that these functors provide a Quillen equivalence between the flat model structures. (The proof for the projective model structures is similar. In fact, the proof for the flat case is more difficult and the proof in~\cite[Prop.~5.5]{estrada-gillespie-coherent-schemes} \emph{relies} on the existence of projective models.) Indeed the argument there shows that $X \mapsto X_0/B_0X$ preserves cofibrations and trivial cofibrations, making it a left Quillen functor. Showing that $X \mapsto X_0/B_0X$ is a Quillen equivalence in the flat case boils down to showing the following: (i) If $X \xrightarrow{f} Y$ is a chain map between two exact $\class{B}^{\otimes}$-acyclic complexes of flats for which the induced map $X_0/B_0X \xrightarrow{\bar{f}} Y_0/B_0Y$ is a weak equivalence, then $f$ itself must be a weak equivalence. (ii) For every cotorsion module $C$ and every short exact sequence $0 \xrightarrow{} X \xrightarrow{} F \xrightarrow{} S^0C \xrightarrow{} 0$ with $F$ in the class ${}_{\class{B}}\tilclass{F}$ of all exact $\class{B}^{\otimes}$-acyclic complexes of flats and $X \in \rightperp{{}_{\class{B}}\tilclass{F}}$, the induced short exact sequence $0 \xrightarrow{} X_0/B_0X \xrightarrow{} F_0/B_0F \xrightarrow{} C \xrightarrow{} 0$ must have $X_0/B_0X \in \class{V}$. Note that what is required for (ii) is exactly the same type of argument we gave above, where we showed that each module $M$ has a special $\mathcal{PGF}_{\mathcal{B}}$-precover. In fact, the argument above will work, even with $C$ not assumed to be cotorsion, by again using {\v{S}}aroch and {\v{S}}\v{t}ov{\'{\i}}{\v{c}}ek's nontrivial fact from~\cite[Theorem~4.4]{saroch-stovicek-G-flat} (see Corollary~\ref{cor-ss}) that $\class{V}$ contains all flat modules. (The projective and flat models share the same class of trivial objects and each sphere complex $S^n(F)$ is trivial whenever $F$ is flat, by their result.) To prove the above statement (i), the proof given in~\cite[Proposition~5.5]{estrada-gillespie-coherent-schemes} readily adapts, and yet again uses the fact that the thick class $\class{V}$ of trivial objects contains all flat modules. \end{proof} Note that Theorems~\ref{thm-Gor-module} and~\ref{theorem-proj-coresolved-B-flat} combine to prove Theorem~\ref{them-models} from the Introduction. \section{Gorenstein modules relative to a complete duality pair}\label{sec-D-modules} In this section we let $\mathfrak{D} = (\class{L},\class{A})$ denote a semi-complete duality pair with $R$-modules in the projective class $\class{L}$ and $R^\circ$-modules in the injective class $\class{A}$. \begin{corollary}\label{corollary-models} The following abelian model structures are induced by $\mathfrak{D} = (\class{L},\class{A})$. \begin{enumerate} \item The \textbf{Gorenstein $\mathfrak{D}$-injective model structure} exists on $R^\circ$-Mod. It is a cofibrantly generated injective abelian model structure whose fibrant objects are the Gorenstein $\class{A}$-injective $R^\circ$-modules.
\item The \textbf{Gorenstein $\mathfrak{D}$-projective model structure} exists on $R$-Mod. It is a cofibrantly generated projective abelian model structure whose cofibrant objects are the Gorenstein $\class{L}$-projective $R$-modules, equivalently, the projectively coresolved Gorenstein $\class{A}$-flat $R$-modules. \item The \textbf{Gorenstein $\mathfrak{D}$-flat model structure} exists on $R$-Mod. It is a cofibrantly generated abelian model structure whose cofibrant objects (resp. trivially cofibrant objects) are the Gorenstein $\class{A}$-flat modules (resp. flat modules). Moreover, the trivial objects in this model structure coincide with those in the Gorenstein $\mathfrak{D}$-projective model structure. \end{enumerate} \end{corollary} \begin{remark} Each model structure is Quillen equivalent to a model structure on chain complexes, as described in Theorem~\ref{thm-Gor-module} and Proposition~\ref{prop-proj-coresolved model on complexes}. \end{remark} \begin{proof} The Gorenstein $\class{L}$-projective $R$-modules coincide with the projectively coresolved Gorenstein $\class{A}$-flat $R$-modules by Theorem~\ref{them-projectivecomplexes}. So considering what we have shown in Theorem~\ref{thm-Gor-module}, Theorem~\ref{theorem-projectivemodels}, Proposition~\ref{prop-proj-coresolved model on complexes}, and Theorem~\ref{thm-Gor-flat-mod}, we only need to show that the injective class $\class{A}$ contains a set $\class{S}$ for which every module in $\class{A}$ is built up as a transfinite extension of modules in $\class{S}$. But $\class{A}$ is closed under pure submodules and pure quotients by Holm and J\o rgensen's Theorem~\ref{them-duality pair purity}. It follows from a standard argument that there exists a set $\class{S}$ as desired. For example, see~\cite[Prop.~2.8]{bravo-gillespie-hovey}. \end{proof} We noted in Theorem~\ref{thm-Gor-module} that $(\mathcal{W}, \mathcal{GI}_{\mathcal{B}})$ is always a perfect cotorsion pair. On the other hand, in the context of Proposition~\ref{prop-proj-coresolved-B-flat}, it follows from~\cite[Prop.~2.19]{estrada-iacob-perez-G-flat} that $(\mathcal{GF}_{\mathcal{B}}, \mathcal{GC}_{\mathcal{B}})$ is always a perfect cotorsion pair. In particular, we get the following corollary. \begin{corollary} Whenever $\mathfrak{D} = (\class{L},\class{A})$ is a semi-complete duality pair, then $(\mathcal{W}, \mathcal{GI}_{\mathcal{A}})$ and $(\mathcal{GF}_{\mathcal{A}}, \mathcal{GC}_{\mathcal{A}})$ are each perfect cotorsion pairs. \end{corollary} By applying Corollary~\ref{corollary-models} to the duality pair $\mathfrak{D} = (\langle Flat\rangle, \langle Inj \rangle)$ from Proposition~\ref{prop-ding-thing} we get the following theorem. \begin{theorem}\label{them-dings} The Ding injective cotorsion pair is a perfect cotorsion pair over any ring $R$. The Ding injectives form the class of fibrant objects of a cofibrantly generated injective abelian model structure on the category of modules over any ring. Therefore its homotopy category is a well-generated triangulated category. \end{theorem} In fact we have proved the following. \begin{corollary}\label{cor-n-duality} Let $R$ be any ring and $n \geq 1$ be a natural number. We have the following special cases of interest, where all classes of modules mentioned are parts of complete cotorsion pairs, and the injective and flat pairs are perfect cotorsion pairs. \begin{enumerate} \item Set $\mathfrak{D}_{\infty} :=(\class{L},\class{A})$ to be the level-absolutely clean duality pair of Example~\ref{example-level}.
Then the classes of modules in Corollary~\ref{corollary-models} correspond to the Gorenstein AC-injectives, the Gorenstein AC-projectives, and the Gorenstein AC-flats. \item For $n \geq 2$, set $\mathfrak{D}_n :=(\class{FP}_n\text{-Flat},\class{FP}_n\text{-Inj})$ to be the Bravo and P\'erez duality pairs of Example~\ref{example-BP}. Then the classes of modules in Corollary~\ref{corollary-models} correspond to what we call Gorenstein $\class{FP}_n$-injective, Gorenstein $\class{FP}_n$-projective, and Gorenstein $\class{FP}_n$-flat modules. \item For $n = 1$, set $\mathfrak{D}_1 := (\langle Flat\rangle, \langle Inj \rangle)$ to be the duality pair of Example~\ref{ques-G-flat}. Then the classes of modules in Corollary~\ref{corollary-models} correspond to the Ding injectives, the projectively coresolved Gorenstein flats, and the usual Gorenstein flats. \end{enumerate} \end{corollary}
{ "timestamp": "2021-05-11T02:16:57", "yymm": "2105", "arxiv_id": "2105.01770", "language": "en", "url": "https://arxiv.org/abs/2105.01770" }
\section{Introduction} In trajectory planning for quadrotors, minimum snap (i.e., fourth derivative) splines are commonly employed~\cite{mahony2012multirotor}. Minimum snap trajectory pioneers Mellinger and Kumar solve an equality-constrained quadratic program (QP) to generate spline trajectories~\cite{Mellinger2011}. The numerical stability of this problem is considered by Richter {\it et al.}~\cite{Richter2016} and further by de Almeida and Akella~\cite{DeAlmeida2017}, who each propose better-conditioned methods. In a previous work, we propose a well-conditioned formulation of Mellinger and Kumar's problem as well as an algorithm that solves it with linear computational complexity in the number of segments in the spline trajectory~\cite{burke2020}. Subsequent to our work, an upcoming paper by Wang {\it et al.} improves upon our results by presenting an algorithm without matrix inverse calculations and thus with a lower computational complexity coefficient~\cite{wang2020generating}. Spline trajectories can also be generated by formulating a problem in which the times between spline segments are included as variables over which to optimise~\cite{Mellinger2011},~\cite{Richter2016}. The trajectories generated by this problem have smaller snap than those generated by the aforementioned QP, but this problem is nonlinear and nonconvex~\cite{burri2015real}. In solving it, de Almeida {\it et al.} encounter long computation times and note that it is ill-suited to real-time applications~\cite{de2019real}. Also as part of their upcoming work, Wang {\it et al.} propose computationally efficient expressions for a gradient descent method used to solve a closely related problem~\cite{wang2020generating}. The recent developments in minimum snap trajectory planning are also of benefit for trajectory planning for systems beyond quadrotors. Polynomials may be used for quadrotor trajectory planning because quadrotors are differentially flat systems, that is, all states and inputs of the system can be calculated from a set of outputs without integration~\cite{Van_Nieuwstadt1998-vs}. The class of differentially flat systems is extensive and incorporates many mechanical systems, including robot arms and cars with trailers~\cite{Murray95differentialflatness}. Motivated by the wealth of applications that could benefit from efficient spline generation algorithms, we leverage the recent developments in minimum snap trajectory planning for general spline trajectory planning. Through the use of efficient methods, we aim to enable real-time trajectory generation for platforms with varying requirements, such as the number of segments in their spline trajectories or the dimension of their trajectories. \begin{figure}[t] \centering {\includegraphics[width=0.98\linewidth]{fixed_comparison.png}} \caption{The computational time for generating minimum snap spline trajectories of $l$ segments using Algorithm~\ref{algo1} (A1) (blue) and a state-of-the-art solver~\cite{eth_implementation} (orange). The slope of the logarithmic data from Algorithm~\ref{algo1} is approximately one, corresponding to linear computational complexity in $l$, whereas the slope of the benchmark corresponds to approximately quadratic computational complexity in $l$.} \end{figure} In this paper, we study and solve two problems in order to generate spline trajectories. The first problem, which we coin the {\it fixed-time problem}, is a QP with linear equality constraints.
We develop a previous result~\cite{burke2020} and propose an algorithm to solve the fixed-time problem with linear computational complexity in the number of segments in the spline trajectory. Further, we present an equivalent reformulation of the underlying optimisation program that is better conditioned. The second problem, which we name the {\it variable-time problem}, is a nonlinear and nonconvex minimisation problem. To solve the variable-time problem we employ a gradient-descent approach. We propose a set of efficient expressions for calculating the gradient of the program's objective function, thereby reducing the method's computational complexity. We validate our methods by generating minimum snap trajectories for a quadrotor in real time. We also demonstrate how our methods can be included in a broader suite of path planning algorithms by implementing an RRT* algorithm that outputs a tree of minimum snap trajectories. In summary, the two methods we propose are efficient and scalable, enabling us to solve both problems faster than the state-of-the-art. The paper is structured as follows. In the next section, we introduce requisite notation and formulate an optimisation program to generate spline trajectories. In Section~\ref{sec::solution}, we detail an algorithm for solving the optimisation program. In Section~\ref{sec::time_optimising}, we present another optimisation program for generating spline trajectories and propose and analyse a method for its solution. In Section~\ref{sec::experiments}, we demonstrate trajectories on two experimental platforms. Concluding remarks are given at the end. \section{Fixed-Time Problem Formulation}\label{sec::formulation} \subsection{Notation} Scalars are written with lowercase Greek letters, vectors in lowercase Latin letters, matrices in uppercase Latin letters, and sets in calligraphic font. The only exception to this is using $h$, $i$, $j$, $k$, $l$, $m$, $n$, $p$ and $q$ to denote non-negative integers and using $J$ to denote the scalar cost function. Sequences of scalars, vectors or matrices are enumerated using subscripts $x_1,~x_2,~x_3,\ldots$. Unless specified otherwise, elements of a vector or matrix are denoted using parentheses indexed with subscripts, e.g., $(x_i)_j$ is the $j$th element of the vector $x_i$ and $(A_i)_{j,k}$ is the $(j,k)$th element of the matrix $A_i$. Euclidean space of dimension $n$ is written as $\mathbb{R}^n$. The $q$th time derivative of $x$ is denoted with $x^{(q)}\coloneqq\frac{\mathrm{d}^qx}{\mathrm{d}t^q}$. The cardinality of a set $\mathcal{X}$ is written as $|\mathcal{X}|$. The identity matrix of dimension $k$ is denoted with $I_k$. For the sequences $\eta_1,~\eta_2,\ldots$ and $\nu_1,~\nu_2,\ldots$, $\eta_i = O(\nu_i)$ is written if there exists a positive real constant $\alpha$ such that $|\eta_i|\leq \alpha|\nu_i|$ for $i$ sufficiently large. Let ${\bf P}_+:\mathbb{R}^n\rightarrow \mathbb{R}^n$ project onto the positive orthant such that $\big({\bf P}_+(x)\big)_i=\max\{x_i,0\}$ for $i=1,\ldots,n$. Given sets of column vectors $\{x_1,\dots,x_k\}$ and matrices $\{A_1,\dots,A_k\}$, let \begin{align*} \mathrm{vec}(\{x_1,\dots,x_k\})&\coloneqq\begin{bmatrix} x_1 \\ \vdots \\ x_k \end{bmatrix},\\\mathrm{diag}(\{A_1,\dots,A_k\})&\coloneqq\begin{bmatrix} A_1 & & \\ & \ddots & \\ & & A_k \end{bmatrix}.
\end{align*} \subsection{Optimisation formulation of spline interpolation} We study the continuous spline $\sigma(\tau):\mathbb{R}\rightarrow\mathbb{R}$ that is parametrised by time $\tau\in\mathbb{R}$ \begin{align*} \sigma(\tau)&=\begin{cases} \sigma_1(\tau), & \tau_0\leq \tau\leq \tau_1, \\ \quad \vdots & \\ \sigma_l(\tau), & \tau_{l-1}\leq \tau\leq \tau_l, \end{cases} \end{align*} with segments $\sigma_i(\tau)$, $i=1,\ldots,l$, defined with respect to the times $t=[\tau_0,\ldots,\tau_l]^T\in\mathbb{R}^{l+1}$. The segments are polynomials of order $n-1$ represented as \begin{align*} \sigma_i(\tau)&=a_i^T z(\tau) \end{align*} where $z(\tau):\mathbb{R}\rightarrow\mathbb{R}^n$ is a vector of the monomial basis functions and $a_i\in\mathbb{R}^{n}$ is a vector of coefficients for $i=1,\ldots,l$. We stack the vectors of coefficients as $a=\mathrm{vec}(\{a_1,\ldots,a_l\})\in\mathbb{R}^{ln}$. As in the problem of Hermite interpolation~\cite{Davis1963}, we generate a spline such that the spline and its derivatives take the values \begin{gather*} (f_i)_q=\sigma_i^{(q-1)}(\tau_{i-1}), \\ (f_i)_{q+k}=\sigma_i^{(q-1)}(\tau_i), \end{gather*} for $q=1,\ldots,k$ for integer $k$ such that $f_i\in\mathbb{R}^{2k}$ for $i=1,\ldots,l$. We stack the vectors of derivatives as $f=\mathrm{vec}(\{f_1,\ldots,f_l\})\in\mathbb{R}^{2kl}$. We generate the spline by finding the minimising argument of the optimisation program \begin{subequations} \label{eq::form1} \begin{gather} J(t)= \min_{\sigma(\tau), f} \int_{\tau_0}^{\tau_l} \big(\sigma^{(k-1)}(\tau)\big)^2d\tau, \label{eq::form1::cost}\\ \sigma^{(q-1)}_i(\tau_{i-1})=(f_i)_q, \quad i=1,\ldots,l,~q=1,\ldots,k, \label{eq::form1::constr1} \\ \sigma^{(q-1)}_i(\tau_{i})=(f_i)_{k+q}, \quad i=1,\ldots,l,~q=1,\ldots,k, \label{eq::form1::constr2} \\ (f_i)_{q+k}=(f_{i+1})_q, \quad i=1,\ldots,l-1,~q=1,\ldots,k, \label{eq::form1::continuity} \\ (f_i)_p = \phi,\quad \forall (i,p,\phi)\in \mathcal{P}, \label{eq::form1::set1} \end{gather} \end{subequations} where the set $\mathcal{P}$ is a collection of triplets $(i,p,\phi)$ with integers $i\in\{1,\ldots,l\}$ and $p\in\{1,\ldots,2k\}$, and $\phi\in\mathbb{R}$. The parameters of the program are the vector $t$, the integer $k$ and the set $\mathcal{P}$. We explicitly represent the objective function in terms of the parameter $t$ as it is a focus of our study later in the paper, and we note that $k$ and $\mathcal{P}$ parametrise \eqref{eq::form1} implicitly. The problem formulator chooses $t$ to temporally constrain the spline. The choice of $k$ determines the total derivative to be minimised as the cost \eqref{eq::form1::cost} and the degree of continuity enforced through \eqref{eq::form1::continuity}. Each $(i,p,\phi)\in\mathcal{P}$ restricts elements of feasible $f$ through the interpolation constraints \eqref{eq::form1::set1}. The size of $\mathcal{P}$ is established as follows. For each $i$, let $\mu_i^-=|\{p \mid p\in\{1,\ldots,k\},(i,p,\phi)\in\mathcal{P}\}|$ and $\mu_i^+=|\{p \mid p\in\{k+1,\ldots,2k\},(i,p,\phi)\in\mathcal{P}\}|$. Further let $m=\sum_{i=1}^{l} (\mu_i^- + \mu_i^+)$. Hence, there are $m=|\mathcal{P}|$ constraints constituted by \eqref{eq::form1::set1}. \subsection{A compact representation} In this subsection, we express \eqref{eq::form1} more compactly as part of the problem we call the fixed-time problem. We will first present the problem and then detail the vectors and matrices employed in the formulation of the associated optimisation program.
\begin{prob}[Fixed-Time Problem] \label{prob1} For a given $t$, let $a^\star$ and $f^\star$ be the minimising arguments of \begin{subequations} \begin{align} J(t)= \min_{a, f} \quad & a^TH(t)a, \label{eq::form2::cost} \\ \text{s.t.} \quad & V(t) a = f, \label{eq::form2::vand}\\ &E f = 0, \\ &P f = b. \label{eq::form2::phi} \end{align}\label{eq::form2} \end{subequations} Find the continuous spline $\sigma(\tau)$ with coefficients $a^\star$. \end{prob} To compactly represent the constraints \eqref{eq::form1::constr1} and \eqref{eq::form1::constr2}, we introduce the matrices $V_i(\tau_{i-1}, \tau_i):\mathbb{R}^2\rightarrow\mathbb{R}^{2k\times n}$ for $i=1,\ldots,l$ \begin{align*} V_i(\tau_{i-1}, \tau_i)&=\begin{bmatrix} W(\tau_{i-1}) \\ W(\tau_i) \end{bmatrix}, \end{align*} where $W(\tau):\mathbb{R}\rightarrow\mathbb{R}^{k\times n}$ is a matrix parametrised by time $\tau$ such that $\big(W(\tau)\big)_{q,j}=\big(z^{(q-1)}(\tau)\big)_j$. For brevity, $V_i(\tau_{i-1},\tau_i)$ is written as $V_i$, for $i=1,\ldots,l$. Thus, we form the block-diagonal matrix $V(t):\mathbb{R}^{l+1}\rightarrow\mathbb{R}^{2kl\times ln}$ as $V(t)=\mathrm{diag}(\{V_1,\ldots,V_l\})$. To represent the cost \eqref{eq::form1::cost}, we introduce the matrix \begin{align*} H_i (\tau_{i-1}, \tau_i) &= \int_{\tau_{i-1}}^{\tau_{i}} z^{(k-1)}(\tau)\big(z^{(k-1)}(\tau)\big)^T d\tau, \end{align*} where $H_i(\tau_{i-1}, \tau_i):\mathbb{R}^2\rightarrow\mathbb{R}^{n\times n}$. We abbreviate $H_i(\tau_{i-1},\tau_i)$ as $H_i$ for $i=1,\ldots,l$; thus, the Hessian of the cost \eqref{eq::form1::cost} is $H(t):\mathbb{R}^{l+1}\rightarrow\mathbb{R}^{ln\times ln}$ such that $H(t)=\mathrm{diag}(\{H_1,\ldots,H_l\})$. Constraints \eqref{eq::form1::continuity} and \eqref{eq::form1::set1} can be written compactly with constant matrices of ones and zeros. To this end, we introduce the matrix $E \in\mathbb{R}^{k(l-1) \times 2kl}$ \begin{align*} E = \begin{bmatrix} 0 & I_{k} & -I_{k} & 0 & 0 & \hdots & 0 & 0 & 0\\ 0 & 0 & 0 & I_{k} & -I_{k} & \hdots & 0 & 0 & 0 \\ & & \vdots & & & \ddots & & \vdots & \\ 0 & 0 & 0 & 0 & 0 & \hdots & I_{k} & -I_{k} & 0 \end{bmatrix}, \end{align*} and the matrix $P=\mathrm{diag}(\{P_1,\ldots,P_l\})\in\mathbb{R}^{m\times 2kl}$, which is block-diagonal with blocks $P_i\in\mathbb{R}^{(\mu_i^- + \mu_i^+)\times 2k}$ and $(P_i)_{j,p}=1$ if $(i,p,\phi)\in \mathcal{P}$ and $j\in\{1, \ldots,\mu_i^- + \mu_i^+\}$. We note that $P$ has one nonzero element per row. Let $b=\mathrm{vec}(\{b_1,\ldots,b_l\})\in\mathbb{R}^{m}$ with $b_i\in\mathbb{R}^{\mu_i^- + \mu_i^+}$, $i=1,\ldots,l$, such that $(b_i)_j=\phi$ for each $(i,p,\phi)\in\mathcal{P}$ with $j\in\{1,\ldots,\mu_i^-+\mu_i^+\}$ and is zero otherwise. \section{Solving the Fixed-Time Problem}\label{sec::solution} In this section, we present an algorithm of linear computational complexity in the number of spline segments $l$ for solving the fixed-time problem. We also present an equivalent reformulation of \eqref{eq::form2} that may be solved in its place to avoid numerical errors due to ill-conditioning.
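To make the structure of Problem~\ref{prob1} concrete before presenting the algorithm, the following Python sketch assembles $H(t)$, $V(t)$, $E$, $P$ and $b$ for a small instance and solves \eqref{eq::form2} through a dense KKT system. This is a minimal illustration only and not part of the paper's implementation: the helper names \texttt{W} and \texttt{H\_seg} and the toy data ($k=2$, $n=4$, $l=2$, positions fixed at every knot and velocities fixed at the endpoints) are ours, and we assume the monomial basis $z(\tau)=[1,\tau,\ldots,\tau^{n-1}]^T$.

\begin{verbatim}
import numpy as np
from math import factorial

def W(tau, k, n):
    # k x n matrix with rows z^{(q-1)}(tau)^T, q = 1,...,k,
    # for the monomial basis z(tau) = [1, tau, ..., tau^{n-1}]^T.
    M = np.zeros((k, n))
    for q in range(k):
        for j in range(q, n):
            M[q, j] = factorial(j) // factorial(j - q) * tau ** (j - q)
    return M

def H_seg(t0, t1, k, n):
    # H_i = integral over [t0, t1] of z^{(k-1)} z^{(k-1)T} d tau,
    # evaluated in closed form for the monomial basis.
    M = np.zeros((n, n))
    for r in range(k - 1, n):
        for s in range(k - 1, n):
            cr = factorial(r) // factorial(r - k + 1)
            cs = factorial(s) // factorial(s - k + 1)
            p = r + s - 2 * k + 3        # exponent after integration
            M[r, s] = cr * cs * (t1 ** p - t0 ** p) / p
    return M

k, n, l = 2, 4, 2                        # toy instance: cubic segments
t = np.array([0.0, 1.0, 2.0])            # knot times tau_0, tau_1, tau_2
V = np.zeros((2 * k * l, n * l))         # block-diagonal V(t)
H = np.zeros((n * l, n * l))             # block-diagonal H(t)
for i in range(l):
    V[2*k*i:2*k*i + k, n*i:n*(i+1)] = W(t[i], k, n)
    V[2*k*i + k:2*k*(i+1), n*i:n*(i+1)] = W(t[i+1], k, n)
    H[n*i:n*(i+1), n*i:n*(i+1)] = H_seg(t[i], t[i+1], k, n)
E = np.zeros((k * (l - 1), 2 * k * l))   # continuity (f_i)_{k+q}=(f_{i+1})_q
for i in range(l - 1):
    E[k*i:k*(i+1), 2*k*i + k:2*k*(i+1)] = np.eye(k)
    E[k*i:k*(i+1), 2*k*(i+1):2*k*(i+1) + k] = -np.eye(k)
rows = [0, 1, 2, 6, 7]                   # fixed entries of f (0-based)
P = np.zeros((len(rows), 2 * k * l))
P[np.arange(len(rows)), rows] = 1.0
b = np.array([0.0, 0.0, 1.0, 3.0, 0.0])  # knot positions, end velocities
# Eliminate f = V a, then solve the equality-constrained QP
# (minimise a^T H a subject to C a = d) via its KKT system.
C = np.vstack([E @ V, P @ V])
d = np.concatenate([np.zeros(k * (l - 1)), b])
KKT = np.block([[2 * H, C.T], [C, np.zeros((C.shape[0], C.shape[0]))]])
a = np.linalg.solve(KKT, np.concatenate([np.zeros(n * l), d]))[:n * l]
\end{verbatim}

The dense KKT solve above costs $O((ln)^3)$ and is included only to exhibit the problem data; the algorithm of the next subsection exploits the block structure to reduce this to linear complexity in $l$.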
\subsection{Solution Algorithm} \begin{algorithm}[t] \SetAlgoLined \For{i=1,\ldots, l} { construct $V_i,H_i,Z_i,b_{i}$; \\ partition $A_i,B_i,C_i,c_i^-, c_i^+$; \\ $\overline{A}_i {\gets\begin{cases} A_1 & i=1, \\ A_{i}+C_{i-1}-B_{i-1}^T\overline{A}_{i-1}^{-1}B_{i-1} & i=2,\ldots,l; \end{cases}}$ \\ $\overline{c}_i^- {\gets \begin{cases} c_1^- & i=1, \\ c_i^- + c_{i-1}^+ -B_{i-1}^T\overline{A}_{i-1}^{-1}\overline{c}_{i-1}^- & i=2,\ldots,l; \end{cases}}$ } \For{i=l,\ldots, 1} { $g_i^+ {\gets \begin{cases} \begin{aligned} & (C_l-B_l^T\overline{A}_l^{-1}B_l)^{-1} \\ & \quad (c_l^+-B_l^T\overline{A}_l^{-1}\overline{c}_l^-) \end{aligned} & i=l, \\ g_{i+1}^- & i=l-1,\ldots,1; \end{cases}}$ \\ $g_i^-\gets \overline{A}^{-1}_i(\overline{c}_i^--B_ig_i^+)$; \\ solve for $f_i$ using \eqref{eq::proj1}; \\ solve for $a_i$ using \eqref{eq::form2::vand}; } \caption{Algorithm of Linear Computational Complexity for Solving the Fixed-Time Problem} \label{algo1} \end{algorithm} To present our algorithm for solving \eqref{eq::form1}, we first introduce the variables used in Algorithm~\ref{algo1} and then present expressions in terms of their block-diagonal components. Let \begin{align}\label{eq::proj1} f &= \overline{f} + Z g, \end{align} where $\overline{f} \in \mathbb{R}^{2kl}$ and $Z\in \mathbb{R}^{2kl\times (2kl-m)}$ are such that $P \overline{f}=b$ and $PZ=0$, and $g\in \mathbb{R}^{2kl-m}$. Under the change of variables \eqref{eq::proj1}, the constraint \eqref{eq::form2::phi} is satisfied for all $g$. The vector $g$ can be thought of as the free derivative values. The notation $\overline{f}$ is used stylistically to reflect that $\overline{f}$ is not a decision variable and its components are either zeros or the parameters $\phi$ of the $(i,p,\phi)\in\mathcal{P}$. We partition $\overline{f},~g$ and $Z$ into vectors and matrices of size $\overline{f}_{i}\in \mathbb{R}^{2k}$, $g_{i}\in \mathbb{R}^{2k-\mu_i^- -\mu_i^+}$ and $Z_{i}\in \mathbb{R}^{2k \times (2k -\mu_i^- -\mu_i^+)}$, for $i=1,\ldots,l$, such that $\overline{f}=\mathrm{vec}(\{\overline{f}_1,\ldots,\overline{f}_l\})$, $g=\mathrm{vec}(\{g_1,\ldots,g_l\})$ and $Z=\mathrm{diag}(\{Z_{1}, \ldots, Z_{l}\})$. Further, let $g_i=\mathrm{vec}(\{g_i^-, g_i^+\})$ where $g_i^-\in\mathbb{R}^{k-\mu_i^-}$ and $g_i^+\in\mathbb{R}^{k-\mu_i^+}$. For $i=1,\ldots,l$, let \begin{subequations}\label{eq::partition} \begin{align} Z_i^TV_i^{-T}H_iV_i^{-1}Z_i &= \begin{bmatrix} A_i & B_i \\ B_i^T & C_i \end{bmatrix}, \\ Z_i^TV_i^{-T}H_iV_i^{-1} \overline{f}_{i}&= \begin{bmatrix} c_i^- \\ c_i^+ \end{bmatrix} \end{align} \end{subequations} where $A_i\in\mathbb{R}^{(k-\mu_i^-)\times (k-\mu_i^-)}$, $B_i\in\mathbb{R}^{(k-\mu_i^-)\times (k-\mu_i^+)}$, $C_i\in\mathbb{R}^{(k-\mu_i^+)\times (k-\mu_i^+)}$, $c_i^-\in\mathbb{R}^{k-\mu_i^-}$ and $c_i^+\in\mathbb{R}^{k-\mu_i^+}$. \begin{prop}\label{prop::alg_complexity} Algorithm~\ref{algo1} solves \eqref{eq::form1} in $O(k^3l)$. \end{prop} \begin{proof} See Appendix~\ref{app::alg_complexity}. \end{proof} We also note that an algorithm with low computational complexity in $l$ is important given that the other variable in the complexity bound, $k$, is limited in practice by the application. For example, $k=4$ is used to minimise the jerk of robot arm trajectories~\cite{Simon1993} and $k=5$ is used to minimise the snap of quadrotor trajectories~\cite{Mellinger2011}. Linear in $l$, our algorithm's computational complexity facilitates spline trajectories of many segments.
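The forward and backward sweeps of Algorithm~\ref{algo1} have the shape of a block-tridiagonal (block Thomas) solve on the system in the free derivative values $g$. The following generic Python sketch is our illustration rather than the paper's implementation; it omits the partitioning bookkeeping of \eqref{eq::partition}, but it shows why the cost is one $O(k^3)$ dense solve per block and hence $O(k^3l)$ overall.

\begin{verbatim}
import numpy as np

def block_thomas(D, U, r):
    # Solve M x = r for symmetric block-tridiagonal M with diagonal
    # blocks D[0..l-1], superdiagonal blocks U[0..l-2] and subdiagonal
    # blocks U[i]^T.  One dense solve per block: O(k^3 l) overall.
    l = len(D)
    Dbar, rbar = [D[0]], [r[0]]
    for i in range(1, l):                # forward elimination
        G = np.linalg.solve(Dbar[i-1], np.column_stack([U[i-1], rbar[i-1]]))
        Dbar.append(D[i] - U[i-1].T @ G[:, :-1])
        rbar.append(r[i] - U[i-1].T @ G[:, -1])
    x = [None] * l
    x[l-1] = np.linalg.solve(Dbar[l-1], rbar[l-1])
    for i in range(l - 2, -1, -1):       # back substitution
        x[i] = np.linalg.solve(Dbar[i], rbar[i] - U[i] @ x[i+1])
    return x
\end{verbatim}

The recursion computing $\overline{A}_i$ and $\overline{c}_i^-$ in Algorithm~\ref{algo1} has exactly this forward-elimination shape, with the generic $D$, $U$ and $r$ replaced by the partitioned blocks of \eqref{eq::partition}.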
\subsection{A better conditioned formulation} It is known that \eqref{eq::form2} involves matrices that are poorly conditioned and that significant numerical errors are encountered when generating trajectories of more than $50$ segments~\cite{DeAlmeida2017}. To remedy the numerical instability of \eqref{eq::form2}, we employ the same better conditioned matrices as~\cite{burke2020}. The reformulation is \begin{subequations} \label{eq::form_nondim} \begin{align} J(t)=\min_{a,f} \quad &a^T \widetilde{H}(t) a, \label{eq::form_nondim::cost}\\ \text{s.t.} \quad & G(t)\widetilde{V}a =f, \label{eq::form_nondim::vand} \\ & E f =0, \label{eq::form_nondim::const1} \\ & P f = b, \label{eq::form_nondim::const2} \end{align} \end{subequations} where $\widetilde{V}=\mathrm{diag}\{V_1(-1,1),\ldots,V_l(-1,1)\}\in\mathbb{R}^{2kl\times ln}$ and $\widetilde{H}(t)=\mathrm{diag}\{(2/(\tau_1-\tau_0))^{2k-1}H_1(-1,1),\ldots,(2/(\tau_l-\tau_{l-1}))^{2k-1}H_l(-1,1)\}\in\mathbb{R}^{ln\times ln}$. The matrix $G(t)$ has block-diagonal submatrices $G_i(\tau_{i-1},\tau_i): \mathbb{R}^2\rightarrow \mathbb{R}^{2k\times 2k}$ such that $G_i(\tau_{i-1},\tau_i)=\mathrm{diag}\{1, 2/(\tau_i-\tau_{i-1}), \ldots, (2/(\tau_i-\tau_{i-1}))^{k-1}, 1, \ldots, (2/(\tau_i-\tau_{i-1}))^{k-1}\}$ for $i=1,\ldots,l$. We abbreviate $G_i(\tau_{i-1},\tau_i)$ as $G_i$ for $i=1,\ldots,l$ and form $G(t):\mathbb{R}^{l+1}\rightarrow\mathbb{R}^{2kl\times 2kl}$ as $G(t)=\mathrm{diag}\{G_1,\ldots,G_l\}$. While the optimal costs of \eqref{eq::form2} and \eqref{eq::form_nondim} are equal, the corresponding optimisers are different. The relationship between the solutions of the two programs is summarised next. \begin{prop}\label{prop::scaling} Let $a^\star$ and $f^\star$ solve \eqref{eq::form2} and $\sigma(\tau)$ be the spline with coefficients $a^\star$. Further, let $\widetilde{a}^\star=\mathrm{vec}\{\widetilde{a}^\star_1,\ldots, \widetilde{a}^\star_l\}$ such that $\widetilde{a}^\star_i$ is the vector of coefficients of $\widetilde{\sigma}_i(\rho)$ for $i=1,\ldots,l$. Then, $\widetilde{a}^\star$ and $f^\star$ solve \eqref{eq::form_nondim} if and only if $\widetilde{\sigma}_i(\rho)=\sigma_i((\tau_i-\tau_{i-1})\rho/2 + (\tau_i+\tau_{i-1})/2)$. \end{prop} \begin{proof} See Appendix~\ref{app:scaling}. \end{proof} \section{Optimising set of times}\label{sec::time_optimising} In this section we present another problem formulation for generating optimal spline trajectories. This problem is closely related to \eqref{eq::form1}, but the times between spline segments are included as decision variables. We leverage the results of the previous section to propose an algorithm that efficiently solves this problem. \subsection{Variable-time problem} Motivated by the time-optimal minimum-snap trajectories of Mellinger and Kumar~\cite{Mellinger2011}, we formulate another trajectory generation problem in which the cost is also minimised with respect to the times $t$. We name this problem the variable-time problem. \begin{prob}[Variable-Time Problem]\label{prob2} Let $t^\star=[\tau_0^\star,\ldots,\tau_l^\star]^T$ be a minimiser of \begin{subequations} \label{eq::form3} \begin{align} \min_{t}\quad & J(t), \\ \text{s.t.} \quad & \tau_{i-1}<\tau_i, \quad i=1, \ldots,l. \end{align} \end{subequations} Find the spline $\sigma(\tau)$ by solving the fixed-time problem as described in Problem~\ref{prob1} with $t=t^\star$. \end{prob} A common approach to solve \eqref{eq::form3} is a gradient descent method~\cite{Mellinger2011},~\cite{Richter2016},~\cite{burri2015real}.
In these works, an approximation of the gradient $\nabla_t J(t)$ is used as a search direction. The calculation of the approximate gradient typically involves solving \eqref{eq::form2} repeatedly. We also employ a gradient descent method to solve the variable-time problem. The strength of our algorithm is that it requires solving \eqref{eq::form2} only once per iteration, via solving a reformulation of \eqref{eq::form3}. This reformulation, alongside the proposed solution method, is introduced in the following subsection. \subsection{Efficient gradient calculation} \begin{algorithm}[t] \SetAlgoLined $i\leftarrow 1$;~$d_i \leftarrow d_0$\; \While{not terminated}{ Find $f^\star$ minimising \eqref{eq::form2} for $t=Rd_i$ using Algorithm~\ref{algo1}\; Calculate $\nabla_d J(Rd_i)$ using Proposition~\ref{prop::gradient}\; Choose $\alpha_i$\; $d_{i+1}\leftarrow {\bf P}_+\big(d_i-\alpha_i\nabla_d J(Rd_i)\big)$\; $i\leftarrow i+1$\; } \caption{Algorithm for Solving Problem~\ref{prob2}} \label{algo2} \end{algorithm} We begin by introducing the change of variables $\delta_i=(\tau_i-\tau_{i-1})/2$ for $i=1,\ldots,l$. Let $d=[\delta_1,\ldots,\delta_l]^T\in\mathbb{R}^l$. Further, let $R\in\mathbb{R}^{(l+1)\times l}$ be such that $R_{i,j} = 2$ if $j<i$ and zero otherwise. By construction, the span of $R$ is all $t\in\mathbb{R}^{l+1}$ such that $\tau_0=0$. The reformulation is then \begin{subequations} \label{eq::form6} \begin{align} \min_{d}\quad & J(Rd), \\ \text{s.t.} \quad & \delta_{i}>0, \quad i=1, \ldots,l. \label{eq::form6::const} \end{align} \end{subequations} We note that \eqref{eq::form6} is not strictly equivalent to \eqref{eq::form3} due to the different dimensions of $d$ and $t$. However, the next proposition states that the optimal cost $J(t)$ is invariant to the initial time $\tau_0$, and that thereby every solution to \eqref{eq::form6} can be used to calculate a solution to \eqref{eq::form3}. \begin{prop}\label{prop::variable} The vector $d^\star$ is a local minimiser of \eqref{eq::form6} if and only if $t^\star = Rd^\star$ is a local minimiser of \eqref{eq::form3}. \end{prop} \begin{proof} See Appendix~\ref{app::variable}. \end{proof} We solve \eqref{eq::form6} using a projected steepest descent method that we refer to as Algorithm~\ref{algo2}. The iteration of Algorithm~\ref{algo2} is \begin{align}\label{eq::iteration} d_{i+1}=\mathbf{P}_+\big(d_i-\alpha_i \nabla_d J(R d_i)\big), \end{align} where the $d_i\in\mathbb{R}^{l}$ are the sequence of iterates and the $\alpha_i\in\mathbb{R}$ are step sizes chosen by a backtracking line search for integers $i\geq1$. We note that every iterate calculated with \eqref{eq::iteration} is a feasible solution to \eqref{eq::form6} because the projection $\mathbf{P}_+$ renders the components of $d_{i+1}$ positive. Since $J(t)$ is the optimal cost of a QP with equality constraints, it has a closed-form expression~\cite{sun2006optimization} that we use to calculate $\nabla_d J(Rd)$ directly. We then exploit sparsity to derive expressions for $\nabla_d J(Rd)$ that are efficient in terms of the computational cost of their calculation; these are presented in the following proposition. \begin{prop}\label{prop::gradient} For a given $t$, let $a^\star$ and $f^\star=\mathrm{vec}\{f_1^\star, \ldots, f_l^\star\}$ solve \eqref{eq::form2}.
The $i$th component of the Jacobian, $\big (\nabla_d J(Rd) \big )_i$, $i=1,\ldots,l$, is given by \begin{align}\label{eq::partialJ} & \big(\nabla_d J(Rd)\big)_i = (2k-1)\delta_i^{1-2k}\big(\delta_i^{-1}{f_i^\star}^T H_i(-1,1) f_i^\star \\ & \quad -{f_i^\star}^T H_i(-1,1) F(\delta_i) f_i^\star -(f_i^\star-\overline{f}_i)^T H_i(-1,1)(f_i^\star-\overline{f}_i) \nonumber \\ & \quad -(f_i^\star-\overline{f}_i)^T \big( H_i(-1,1) F(\delta_i) + F(\delta_i) H_i(-1,1) \big) \overline{f}_i\big), \nonumber \end{align} with $F(\delta)= \delta^{-1}\mathrm{diag}\{0, 1, \ldots,k-1, 0, 1, \ldots,k-1\}\in\mathbb{R}^{2k\times 2k}$. \end{prop} \begin{proof} See Appendix~\ref{app::gradient}. \end{proof} The component-wise expressions of \eqref{eq::partialJ} involve matrices of size at most $n \times n$; hence the calculation of $\nabla_d J(Rd)$ has computational complexity $O(ln^3)$. The minimising argument $f^\star$ is required to calculate \eqref{eq::partialJ}, and obtaining it in itself requires $O(ln^3)$ computations. Hence, the computational complexity of calculating each iteration \eqref{eq::iteration} is $O(ln^3)$. We also note that, because \eqref{eq::partialJ} was derived from \eqref{eq::form_nondim}, it depends only on the differences between times $\delta_i$. As found in a previous work~\cite{burke2020}, the condition number of the matrices in \eqref{eq::partialJ} does not necessarily increase for large values of the times $\tau_i$. Instead, \eqref{eq::partialJ} is only prone to numerical errors for large values of $\delta_i$. Indeed, deriving similar expressions for $\nabla_d J(Rd)$ from \eqref{eq::form2} results in calculations involving ill-conditioned matrices that render the results practically useless due to numerical errors. \subsection{Numerical experiments} \begin{table}[t] \centering \ra{1.3} \begin{tabular}{rrrr}\toprule & $l=6$ & $l=8$ & $l=10$ \\ \midrule Algorithm~\ref{algo2} & 2.31 & 2.48 & 2.45 \\ Iteration \eqref{eq::zerod_approx} & 12.43 & 17.59 & 24.50 \\ Iteration \eqref{eq::random_approx} & 39.52 & 232.74 & 996.70 \\ \bottomrule \\ \end{tabular} \caption{Average number of $J$ evaluations for three methods over $100$ trials.} \label{tab::experiments} \end{table} In this subsection, we demonstrate another computational benefit of employing expression \eqref{eq::partialJ}. The benefit we highlight is the number of $J(t)$ evaluations required by a descent method to converge to a minimum. We compare our solution to two other methods popular in the literature by solving the variable-time problem and generating minimum snap trajectories. The first method we use for comparison was proposed by Mellinger and Kumar~\cite{Mellinger2011} and is chosen because it is easy to implement and is a conceptually simple derivative-free method. It uses a finite difference approximation of the gradient through the iteration, for $j\in\{1,\ldots,l\}$, \begin{align} \label{eq::zerod_approx} (d_{i+1})_j&=(d_i)_j-\beta_i \frac{J\big(R(d_i+\gamma_i e_j)\big)-J(Rd_i)}{\gamma_i}, \end{align} where $\beta_i\in\mathbb{R}$ and $\gamma_i\in\mathbb{R}$ are sequences of parameters for integer $i\geq 1$. The vectors $e_j\in\mathbb{R}^{l}$ are such that $(e_j)_j=1$ and zero otherwise.
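To make the comparison concrete, the sketch below contrasts one step of the exact-gradient update \eqref{eq::iteration} with one step of the finite-difference update \eqref{eq::zerod_approx}, making the difference in the number of $J(t)$ evaluations explicit. The callables \texttt{eval\_J} (the optimal cost, computed via Algorithm~\ref{algo1}) and \texttt{grad\_J} (Proposition~\ref{prop::gradient}) are assumed interfaces; the sketch is illustrative rather than the code used for Table~\ref{tab::experiments}.
\begin{verbatim}
import numpy as np

def project_positive(d, floor=1e-6):
    """P_+ : keep all half-durations delta_i strictly positive
    (simple clipping; adequate for this illustrative sketch)."""
    return np.maximum(d, floor)

def exact_gradient_step(d, alpha, grad_J):
    """One iterate of (eq. iteration). grad_J is assumed to solve (eq. form2)
    once via Algorithm 1 and apply (eq. partialJ): one J solve per step."""
    return project_positive(d - alpha * grad_J(d))

def finite_difference_step(d, beta, gamma, R, eval_J):
    """One iterate of (eq. zerod_approx): l + 1 evaluations of J per step."""
    J0 = eval_J(R @ d)
    g = np.empty_like(d)
    for j in range(d.size):  # perturb one coordinate of d at a time
        e = np.zeros_like(d)
        e[j] = 1.0
        g[j] = (eval_J(R @ (d + gamma * e)) - J0) / gamma
    return project_positive(d - beta * g)
\end{verbatim}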
The second method we use for comparison requires only one $J(t)$ evaluation per iteration. It is a random, derivative-free method with the iteration \begin{align}\label{eq::random_approx} d_{i+1}&=d_i-\varepsilon_i \frac{J\big(R(d_i+\zeta_i r)\big)-J(Rd_i)}{\zeta_i} r, \end{align} where $r\in\mathbb{R}^{l}$ is a random vector with a known Gaussian distribution and $\varepsilon_i\in\mathbb{R}$ and $\zeta_i\in\mathbb{R}$ are sequences of parameters. For an analysis of the properties of random derivative-free methods and of how to choose parameters such as the Gaussian distribution, we refer the reader to~\cite{nesterov2017random}. The average results of $100$ trials, comparing the number of $J(t)$ evaluations required by the three methods using the iterate updates \eqref{eq::iteration}, \eqref{eq::zerod_approx} and \eqref{eq::random_approx}, are summarised in Table~\ref{tab::experiments}. As expected, the exact gradient \eqref{eq::iteration} yields the smallest number of $J(t)$ evaluations. The method that uses the finite-difference approximation \eqref{eq::zerod_approx} requires a similar number of iterations to that using the exact gradient \eqref{eq::iteration} but makes more $J(t)$ evaluations per iteration, as is evident from the experimental data, which increase linearly in $l$. The update \eqref{eq::random_approx} requires only one $J(t)$ evaluation per iteration; however, the total number of iterations required for convergence scales with the dimension of the solution and thus with $l$. In summary, using the exact gradient in the update \eqref{eq::iteration} reduces the number of $J(t)$ evaluations both in the calculation of each iterate and in the total number of iterations required for convergence. \section{Trajectory planning applications}\label{sec::experiments} In this section, we present two applications of the algorithms presented thus far. The first is planning minimum snap trajectories for quadrotors, where we demonstrate the real-time capability of our method by replanning a trajectory during an experimental flight. Second, we present a higher-level path planning algorithm that uses an RRT* algorithm to natively construct a tree of minimum snap trajectories. The computational complexity and well-conditioned formulation of our methods are critical to achieving the outcomes of each application. \subsection{Minimum snap trajectory planning for quadrotors} Our first application is an example of how our problem formulation and algorithms can be applied to Mellinger and Kumar's quadrotor trajectory planning methodology \cite{Mellinger2011}. This is achieved by adapting the variable-time problem to plan trajectories in multiple dimensions. We also demonstrate that our method can generate quadrotor trajectories in real-time. We output trajectories in terms of the quadrotor's position $[\sigma_x(\tau), \sigma_y(\tau), \sigma_z(\tau)]^T\in\mathbb{R}^3$ and yaw $\sigma_\psi(\tau)\in[-\pi,\pi]$, such that $\sigma_x(\tau)$ and $\sigma_y(\tau)$ are two minimum snap splines and $\sigma_z(\tau)$ and $\sigma_\psi(\tau)$ are two minimum acceleration splines\footnote{This has the approximate effect of minimising the control effort required to track the trajectory (see \cite{Mellinger2011} for details).}. We generate each spline by solving separate instances of the fixed-time problem with different parametrisations. To this end, we introduce the optimal costs of the four optimisation programs associated with each dimension of the trajectory.
Let $J_x(t)$ and $J_y(t)$ be the optimal costs of two instances of \eqref{eq::form2} with $k=5$ and let $J_z(t)$ and $J_\psi(t)$ be the optimal costs of two instances of \eqref{eq::form2} with $k=3$. We solve a variant of the variable-time problem to find $\sigma_x(\tau),~\sigma_y(\tau),~\sigma_z(\tau)$ and $\sigma_\psi(\tau)$ such that the time between segments is the same for each spline. \begin{prob}[Multi-Dimensional Variable-Time Problem]\label{prob3} Let $t^\star$ be the minimising argument of the optimisation program \begin{subequations} \label{eq::form5} \begin{align} \min_{t}\quad & J_x(t)+J_y(t)+J_z(t)+J_\psi(t), \label{eq::form5::cost} \\ \text{s.t.} \quad & \tau_{i-1}<\tau_i, \quad i=1, \ldots,l. \end{align} \end{subequations} Find the splines $\sigma_x(\tau)$ and $\sigma_y(\tau)$ by solving two instances of the fixed-time problem with $t=t^\star$ and $k=5$, and the splines $\sigma_z(\tau)$ and $\sigma_\psi(\tau)$ by solving two instances of the fixed-time problem with $t=t^\star$ and $k=3$. \end{prob} We solve the multi-dimensional variable-time problem using a gradient descent method adapted from the approach taken in Section~\ref{sec::time_optimising}. This method is only trivially different to Algorithm~\ref{algo2}. \begin{figure}[t] \centering {\includegraphics[trim={0cm 0cm 0cm 0cm}, width=0.90\linewidth]{declan_lachie.png}} \caption{The quadrotor flight through an obstacle-filled circuit. The flight has two phases: the offline trajectory (blue) and the recalculated, online trajectory (orange). \label{fig::quad_still}} \end{figure} \begin{figure}[t] \centering {\includegraphics[width=\linewidth]{variable_comparison.png}} \caption{The time (s) taken to compute a range of trajectories by solving Problem~\ref{prob3} using an adapted Algorithm~\ref{algo2} (blue) and the method proposed by~\cite{Richter2016} as implemented in~\cite{eth_implementation} (orange). \label{fig::quad_timing} } \end{figure} \begin{figure}[h] \centering {\includegraphics[width=\linewidth]{online_solver_comparison.png}} \caption{The discrete reference altitude of the quadrotor flight (blue) and the time taken to recalculate the online trajectory (orange). Using the adapted Algorithm~\ref{algo2} (top), the quadrotor tracks the offline trajectory through the upper hoops until $\tau=61.80$ s when it calculates a new trajectory in $75$ ms, before the next reference altitude update. Using the method proposed by~\cite{Richter2016} as implemented in~\cite{eth_implementation} (bottom), the new trajectory is calculated in $435$ ms, requiring the quadrotor to pause its flight. \label{fig::recalc}} \end{figure} To demonstrate the real-time capability of our algorithm, we conduct a quadrotor flight in which the trajectory is replanned midflight. The flight is through a circuit of four pairs of hoops stacked vertically on one another. Initially, the multi-dimensional variable-time problem is parametrised such that the quadrotor circumnavigates the course by passing through the upper hoops. Then, prompted by user input at an unspecified time, a newly parametrised instance of the multi-dimensional variable-time problem is solved to generate a trajectory that passes through the lower hoops. The two trajectories are visualised in the flight space in Fig.~\ref{fig::quad_still}. We benchmark our method by repeating the flight using Richter {\it et al.}'s solution of the variable-time problem~\cite{Richter2016} as implemented by the Autonomous Systems Lab of ETH Zurich~\cite{burri2015real}.
This implementation is also a gradient descent method and we note that it utilises sparse matrix operations to perform calculations. Trajectory generation was performed with C++ implementations of each algorithm and executed on a laptop with an Intel Core i7-8650U CPU running at 1.9 GHz, with 16 GB of RAM. The quadrotor we use for testing is assembled from generic components and a Pixhawk 2 Cube Black flight controller. The flight controller and a Vicon motion capture system provide the sensing capabilities. We use PX4 firmware~\cite{PX4DevTeam2020} for control and estimation. The computation time required for calculating a range of offline trajectories is shown in Fig.~\ref{fig::quad_timing}. Under our method, it took $0.43$ ms to generate a trajectory for one lap of the circuit and $5.3$ ms to generate one for five laps. Evidently, Algorithm~\ref{algo2} is faster than the solver from~\cite{eth_implementation}, which generated a single-lap trajectory in $21$ ms and a five-lap trajectory in $0.91$ s. The timing of the trajectory replanning event is captured in Fig.~\ref{fig::recalc}, where the time taken to calculate the new trajectory is plotted in orange against the reference altitude updates in blue. The inset plot shows that, for each flight, the trajectory is recalculated immediately after the reference altitude is updated. Under our method (top panel of Fig.~\ref{fig::recalc}), the new trajectory is calculated in approximately $75$ ms, before the next reference position update. Thus the new trajectory can be broadcast without affecting the reference updates or the quadrotor flight. In comparison, when using the solver from~\cite{eth_implementation}, the new trajectory is calculated in approximately $435$ ms. This exceeds the reference position update period and the quadrotor is forced to pause its flight. Finally, we note that the trajectories generated under the two methods are different (see Fig.~\ref{fig::recalc}). Additional interpolation constraints were employed under our method for safety purposes. The solver from~\cite{eth_implementation} was not able to compute a feasible trajectory with these additional constraints, hence they were not included in the benchmark runs. In terms of the comparison, the additional constraints result in spline trajectories with more segments, thus increasing the computation times under our method and favouring the benchmark. \subsection{Minimum Snap Spline RRT* path planning} Our second application is an example of how the methods presented thus far can be incorporated into an RRT* algorithm to construct minimum snap trajectories. This version of the RRT* algorithm, titled \emph{Minimum Snap Spline RRT*}, is presented in Algorithm~\ref{algo3}. At each iteration of Algorithm~\ref{algo3}, a sample from free space is either added to the graph or discarded, depending on the minimum snap trajectory through the graph to the sample. We start with the problem setup and define the problem used to generate minimum snap trajectories; then we present Algorithm~\ref{algo3} and some simulation results. To describe the problem space, let $\mathcal{X}\subseteq\mathbb{R}^2$ be the configuration space, $\mathcal{X}_\mathrm{obs}\subset\mathcal{X}$ be the obstacle space and $\mathcal{X}_\mathrm{free}=\mathcal{X}\backslash\mathcal{X}_\mathrm{obs}$ be the free space. We note that the problem space is formulated in two dimensions for simplicity; the subsequent results generalise trivially to higher dimensions. Next, we introduce some graph-theoretic notation.
Let $\mathcal{G}=(\mathcal{V}, \mathcal{E})$ be a tree graph with vertices $\mathcal{V}=\{1,\ldots,n_g\}$ and edges $\mathcal{E}\subset\mathcal{V}\times\mathcal{V}$. A configuration is the collection of points $v=\mathrm{vec}\{v_1,\ldots,v_{n_g}\}\in\mathbb{R}^{2{n_g}}$ where $v_i\in\mathbb{R}^2$ for $i=1,\ldots,n_g$. A framework is the embedding of the graph $\mathcal{G}$ in two-dimensional space, denoted by the pair $(\mathcal{G},v)$; that is, for each $i\in\mathcal{V}$ there exists a $v_i\in\mathbb{R}^2$. To develop a minimum-snap-aware RRT* algorithm, the following problem needs to be solved for any additional sample point. \begin{prob}[Minimum Snap RRT* Subproblem]\label{prob4} For given parameters $\{\mathcal{X}_\mathrm{free}, (\mathcal{G},v), i_h, u\}$, where $\mathcal{X}_\mathrm{free}\subseteq\mathbb{R}^2$, $(\mathcal{G},v)$ is a framework, $i_h$ is a vertex in $\mathcal{G}$, and $u\in\mathcal{X}_\mathrm{free}$, let $(i_1,\ldots,i_h)$ be the path from the root of the graph $i_1=1$ to $i_h$ with $i_j\in\mathcal{V}$, $j\in\{1,\dots,h\}$. Consider two instances of the fixed-time problem with constraints \begin{align*} \sigma_1(\tau_{j}) &= (v_{i_{j+1}})_1, \quad j=0,\ldots,h-1, \\ \sigma_1(\tau_h) &= (u)_1, \intertext{and} \sigma_2(\tau_{j}) &= (v_{i_{j+1}})_2, \quad j=0,\ldots,h-1, \\ \sigma_2(\tau_h) &= (u)_2, \end{align*} and let $J_1(t)$ and $J_2(t)$ be their respective optimal costs. Let $t^\star$ be the minimising argument of the optimisation program \begin{align*} \min_{t}\quad & J_1(t)+J_2(t), \\ \text{s.t.} \quad & \tau_{i-1}<\tau_i, \quad i=1, \ldots,h. \end{align*} Find the splines $\sigma_1(\tau)$ and $\sigma_2(\tau)$ by solving two instances of the fixed-time problem with $t=t^\star$ and $k=5$. \end{prob} We note that the constructive process outlined in Section \ref{sec::formulation} is used to formulate two instances of \eqref{eq::form2} from the constraints in Problem~\ref{prob4}. Following from Problem~\ref{prob4}, we now define two procedures used in Algorithm~\ref{algo3}. To ease notation, we write them as functions of $i_h$ and $u$, and note that $\mathcal{X}_\mathrm{free}$ and $(\mathcal{G},v)$ are implicit parameters. The procedures are as follows. \begin{itemize} \item {\bf Collision checking:} For given parameters $\{\mathcal{X}_\mathrm{free}, (\mathcal{G},v), i_h, u\}$, the following Boolean function is defined \end{itemize} \begin{align*} \mathrm{CollisionFree}(i_h, u) &= \begin{cases} \mathrm{True} & \begin{aligned} & \text{if }[\sigma_1(\tau),\sigma_2(\tau)]^T\in\mathcal{X}_\mathrm{free} \\ & \text{ for all }\tau^\star_0\leq \tau \leq \tau^\star_h, \end{aligned} \\ \mathrm{False} & \text{otherwise,} \end{cases} \end{align*} where $\sigma_1(\tau)$, $\sigma_2(\tau)$ and $t^\star=[\tau_0^\star,\ldots,\tau_h^\star]^T$ are found by solving Problem~\ref{prob4} with $\{\mathcal{X}_\mathrm{free}, (\mathcal{G},v), i_h, u\}$. \begin{itemize} \item {\bf Snap evaluation:} For given parameters $\{\mathcal{X}_\mathrm{free}, (\mathcal{G},v), i_h, u\}$, the following real-valued function is defined \end{itemize} \begin{align*} \mathrm{Cost}(i_h, u) &= \int_{\tau^\star_0}^{\tau^\star_h} \big(\sigma^{(4)}_1(\tau)\big)^2 + \big(\sigma^{(4)}_2(\tau)\big)^2d\tau, \end{align*} where $\sigma_1(\tau)$, $\sigma_2(\tau)$ and $t^\star=[\tau_0^\star,\ldots,\tau_h^\star]^T$ are found by solving Problem~\ref{prob4} with $\{\mathcal{X}_\mathrm{free}, (\mathcal{G},v), i_h, u\}$.
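A sketch of the two procedures is given below. The helper \texttt{solve\_problem4} (returning the two splines, each callable as \texttt{sigma(tau, deriv)}, together with the optimal times $t^\star$) and the membership test \texttt{in\_free\_space} are hypothetical interfaces assumed for illustration. Note also that the dense-sampling collision test shown here is only a surrogate for the perfect collision checker assumed in the analysis below.
\begin{verbatim}
import numpy as np

def path_to_vertex(v, parent, i_h):
    """Points v_{i_1}, ..., v_{i_h} along the tree path from the root to i_h."""
    path, i = [], i_h
    while i is not None:
        path.append(v[i])
        i = parent[i]
    return path[::-1]

def collision_free(v, parent, i_h, u, solve_problem4, in_free_space, n=200):
    """CollisionFree(i_h, u): sample the minimum snap spline through the
    path to i_h and on to u, testing each sample against X_free."""
    s1, s2, t = solve_problem4(path_to_vertex(v, parent, i_h), u)
    return all(in_free_space(np.array([s1(tau, 0), s2(tau, 0)]))
               for tau in np.linspace(t[0], t[-1], n))

def cost(v, parent, i_h, u, solve_problem4, n=400):
    """Cost(i_h, u): total snap of the spline, integrated numerically."""
    s1, s2, t = solve_problem4(path_to_vertex(v, parent, i_h), u)
    taus = np.linspace(t[0], t[-1], n)
    snap2 = [s1(tau, 4) ** 2 + s2(tau, 4) ** 2 for tau in taus]
    return np.trapz(snap2, taus)
\end{verbatim}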
We are now ready to present Algorithm~\ref{algo3}, which is based on the RRT* algorithm implementation in \cite{karaman2011sampling}. The algorithm is initialised with a single vertex $v_1$ and proceeds to construct a framework. At each iteration, the algorithm performs the following coarse steps: \begin{algorithm}[t] \SetAlgoLined $\mathcal{V}\leftarrow \{1\};~\mathcal{E}\leftarrow \varnothing;~v\leftarrow v_1$\; \While{not terminated}{ Sample $v_\mathrm{rand}$ from $\mathcal{X}_\mathrm{free}$\; \label{alg::sample} $i_\mathrm{init}\leftarrow \arg \min_{i\in\mathcal{V}} \lVert v_i-v_\mathrm{rand}\rVert_2$\; \label{alg::nearest} \If{$\mathrm{CollisionFree}(i_\mathrm{init}, v_\mathrm{rand})$}{ \label{alg::nearest_check} $\mathcal{V}\leftarrow \mathcal{V}\cup\{\lvert \mathcal{V}\rvert +1 \}$\; \label{alg::add_v_1} $v\leftarrow \mathrm{vec}\{v, v_\mathrm{rand}\}$\; \label{alg::add_v_2} $i_\mathrm{min}\leftarrow i_\mathrm{init}$\; $J_\mathrm{min}\leftarrow \mathrm{Cost}(i_\mathrm{init}, v_\mathrm{rand})$\; $\mathcal{V}_\mathrm{near}\leftarrow\{i\in\mathcal{V}\,|\,\lVert v_i - v_\mathrm{rand} \rVert_2 < \varepsilon\}$\; \For{$i_\mathrm{near}\in \mathcal{V}_\mathrm{near}$}{ \label{alg::near_1} \If{$\mathrm{CollisionFree}(i_\mathrm{near}, v_\mathrm{rand})$ and ($\mathrm{Cost}( i_\mathrm{near}, v_\mathrm{rand}) < J_\mathrm{min}$) \label{alg::all_v_2}} { $i_\mathrm{min}\leftarrow i_\mathrm{near}$\; $J_\mathrm{min}\leftarrow \mathrm{Cost}( i_\mathrm{near}, v_\mathrm{rand})$\; } } \label{alg::near_2} $\mathcal{E}\leftarrow \mathcal{E}\cup\{(i_\mathrm{min},\lvert \mathcal{V} \rvert)\}$\; \label{alg::edge} \For{$i_\mathrm{near}\in \mathcal{V}_\mathrm{near}$}{ \label{alg::rewire_1} $i_\mathrm{par}\leftarrow \{i\in\mathcal{V} \,|\, (i,i_\mathrm{near})\in\mathcal{E}\}$\; $J_\mathrm{near}\leftarrow \mathrm{Cost}(i_\mathrm{par}, v_{i_\mathrm{near}})$\; \If {$\mathrm{CollisionFree}(\lvert \mathcal{V} \rvert, v_{i_\mathrm{near}})$ and ($\mathrm{Cost}( \lvert \mathcal{V} \rvert, v_{i_\mathrm{near}}) < J_\mathrm{near}$) \label{alg::all_v_3}} { $\mathcal{E}\leftarrow \mathcal{E}\backslash\{(i_\mathrm{par}, i_\mathrm{near})\}$\; $\mathcal{E}\leftarrow \mathcal{E}\cup\{(\lvert \mathcal{V} \rvert, i_\mathrm{near})\}$\; \label{alg::rewire_2} } } } } Return $((\mathcal{V},\mathcal{E}),v)$\; \caption{Minimum Snap Spline RRT*} \label{algo3} \end{algorithm} \begin{figure}[h!] \centering {\includegraphics[width=0.98\linewidth]{straight_vs_ms_no_prune.png}} \caption{Minimum snap trajectories generated using two RRT* algorithms: (a) path cost is the total snap of a minimum snap spline through vertices; and (b) path cost is the Euclidean distance between vertices, which is used to generate a minimum snap spline. Pictured are the underlying tree (solid blue), the final trajectory (solid orange) and the obstacles (dashed black).} \label{fig::rrt} \end{figure} \begin{enumerate} \item {\bf Sample:} A point $v_\mathrm{rand}$ is randomly sampled (Line \ref{alg::sample}). \item {\bf Add sample to framework:} Find $i_\mathrm{init}$, the vertex corresponding to the nearest point in the framework to $v_\mathrm{rand}$ (Line \ref{alg::nearest}). If a collision-free path exists from the root $v_1$ to $v_\mathrm{rand}$ through $v_{i_\mathrm{init}}$, then $v_\mathrm{rand}$ is added to the framework with the corresponding vertex $i_{\lvert \mathcal{V} \rvert}$ (Lines \ref{alg::nearest_check}--\ref{alg::add_v_2}).
\item {\bf Add edge to sample:} Compare the minimum snap trajectories from the root $v_1$ to $v_\mathrm{rand}$ through vertices corresponding to points near the sample, $\mathcal{V}_\mathrm{near}$ (Lines \ref{alg::near_1}--\ref{alg::near_2}). Find the vertex $i_\mathrm{min}\in\mathcal{V}_\mathrm{near}$ corresponding to the lowest-snap trajectory and add an edge between $i_\mathrm{min}$ and the sample $i_{\lvert \mathcal{V} \rvert}$ (Line \ref{alg::edge}). \item {\bf Rewire edges:} Check whether lower-snap trajectories exist through the sample $v_\mathrm{rand}$ to nearby points in the framework and, if so, add and remove edges accordingly (Lines \ref{alg::rewire_1}--\ref{alg::rewire_2}). \end{enumerate} \begin{table}[t] \centering \ra{1.3} \begin{tabular}{rrr} \toprule & Total Snap & Computation Time (s) \\ \midrule Algorithm~\ref{algo3} & 14 & 2.75 \\ ED RRT* & 23 & 0.0123 \\ \bottomrule \\ \end{tabular} \caption{Average total snap and average total computation time from a Monte Carlo simulation of $100$ trials of Algorithm~\ref{algo3} and the Euclidean distance (ED) RRT* algorithm.} \label{tab::rrt_metrics} \end{table} To provide some insight into the efficacy of Algorithm~\ref{algo3}, we generate minimum snap trajectories through one problem space. As a point of comparison, we also use the output of a Euclidean distance RRT* algorithm to create a minimum snap trajectory, which is a prototypical path planning approach~\cite{Richter2016},~\cite{burri2015real}. The Euclidean distance RRT* algorithm constructs a framework based on the straight-line path between vertices, that is, collision checking is performed over straight-line paths and the cost function is the Euclidean distance. The trajectories generated under the two approaches are illustrated in Fig.~\ref{fig::rrt}, where the underlying tree is coloured in blue and the final trajectory in orange. Results from a Monte Carlo simulation of $100$ trials of the two methods are recorded in Table~\ref{tab::rrt_metrics}. The computational complexity of calculating the total snap of a trajectory is much greater than that of calculating the Euclidean distance, which is reflected in the computation time of each method. However, this appears to be a trade-off against the total snap of the trajectory, as Algorithm~\ref{algo3} generated trajectories with smaller total snap than the benchmark. We conclude by highlighting a desirable feature of Algorithm~\ref{algo3}: the trajectories output by the algorithm inherit the properties of the collision checker. This is not the case for existing path planning methods that employ RRT* algorithms. For example, the Euclidean distance RRT* algorithm used as a benchmark does not take into account minimum snap trajectories and hence its output is prone to collisions (e.g., see Fig.~\ref{fig::rrt}). Algorithm~\ref{algo3}, however, executes the collision checking subroutine whenever a vertex or edge is added. Thus, the output trajectory is guaranteed to be collision-free up to the properties of the collision checking subroutine. For example, we have assumed that $\mathrm{CollisionFree}(i_h,u)$ is a perfect collision checker, and thereby the output of Algorithm~\ref{algo3} is guaranteed to be collision-free. In this way, our approach natively generates minimum snap trajectories within the RRT* framework. \section{Conclusion} In this paper we present a general framework for creating optimal splines.
We propose a well-conditioned formulation for an optimal spline generation problem and solve it using an algorithm with linear computational complexity in the number of segments in the spline trajectory. We leverage this algorithm to present a solution to another optimisation problem used to generate optimal splines. We present expressions that reduce the computational complexity of optimising the times between segments in the spline trajectory. We demonstrate the applicability of this general framework experimentally by generating trajectories for a quadrotor. We also highlight how our algorithms can be utilised as part of other trajectory planning approaches by proposing an RRT* algorithm that constructs trees of minimum snap splines. In future work, we will explore the use of B-Splines for trajectory planning. The matrices that result from formulating constraints on B-Spline trajectories are typically banded and could permit further improvements in the computational complexity of solving the KKT conditions. \appendices \section{Proof of Proposition~\ref{prop::alg_complexity}} \label{app::alg_complexity} In this appendix we prove that Algorithm~\ref{algo1} solves the fixed-time problem. To reduce notation in this section, we will not explicitly state time dependence, i.e., we write $V(t),~H(t)$ as $V,~H$. We note that the fixed-time problem is solved for a known, constant $t$. We first present a useful set of recursions in the following lemma. The recursions efficiently solve a structured matrix equation that we will encounter in the main proof. \begin{lemm}\label{lemm::recursions} For $i=1,\ldots,l$, let $g_i^-$, $g_i^+$ and $\lambda_i$ satisfy \vspace{-1em} \begin{subequations}\label{recs} \begin{align} g_i^- &= \overline{A}^{-1}_i(\overline{c}_{i}^- -B_ig_{i}^+), \label{rec1} \\ g_i^+ &=\begin{cases} \begin{aligned} & (C_l-B_l^T\overline{A}_l^{-1}B_l)^{-1} \\ & \quad (c_{l}^+-B_l^T\overline{A}_l^{-1}\overline{c}_{l}^-) \end{aligned} & i=l, \\ g_{i+1}^- & i=l-1, \ldots,1, \end{cases} \label{rec2} \\ \lambda_i &= B_i^T g_{i}^- + C_i g_{i}^+ -c_i^+, \label{rec3} \end{align} \end{subequations} where \begin{subequations} \begin{align} \overline{A}_i &=\begin{cases} A_1 & i=1, \\ A_{i}+C_{i-1}-B_{i-1}^T\overline{A}_{i-1}^{-1}B_{i-1} & i=2,\ldots,l, \end{cases} \label{con1}\\ \overline{c}_{i}^-&=\begin{cases} {c}_{1}^- & i=1, \\ \begin{aligned} & c_{i}^- + c_{i-1}^+ \\ & \quad -B_{i-1}^T\overline{A}_{i-1}^{-1}\overline{c}_{i-1}^- \end{aligned} & i=2,\ldots,l. \end{cases} \label{con2} \end{align} \label{con1and2} \end{subequations} Then \eqref{recs} solves \begin{align}\label{eq::KKT} \begin{bmatrix} Z^TV^{-T}HV^{-1}Z & Z^TE^T \\ EZ & 0 \end{bmatrix}\begin{bmatrix} g \\ \lambda \end{bmatrix} &= \begin{bmatrix} Z^TV^{-T}HV^{-1}\overline{f} \\ 0 \end{bmatrix}, \end{align} where $\lambda = [\lambda_1^T, \dots, \lambda_{l-1}^T]^T\in\mathbb{R}^{k(l-1)}$ with $\lambda_i\in\mathbb{R}^k$ for $i=1,\ldots,l-1$.
\end{lemm} \begin{proof} Under the partition \eqref{eq::partition}, we may permute the variables and equations of \eqref{eq::KKT} to reveal the block-tridiagonal structure \begin{align}\label{tridiag} \underbrace{ \begin{bmatrix} D_1 & -M^T & & \\ -M & D_2 & -M^T & \\ & -M & D_3 & \hdots\\ & & \vdots & \ddots \end{bmatrix}}_{D}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \end{bmatrix} &= \begin{bmatrix} d_1 \\ d_2 \\ d_3 \\ \vdots \end{bmatrix}, \end{align} where \begin{align*} D_i=\begin{bmatrix} A_i & B_i & 0 \\ B_i^T & C_i & I \\ 0 & I & 0 \end{bmatrix}, \quad M= \begin{bmatrix} 0 & 0 & I \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \end{align*} with $y_i=[{g_{i}^-}^T, {g_{i}^+}^T, \lambda_i^T]^T$ and $d_i=[{c_{i}^-}^T, {c_{i}^+}^T, 0]^T$ for $i=1,\ldots,l-1$ and $y_l=[{g_{l}^-}^T, {g_{l}^+}^T]^T$ and $d_l=[{c_{l}^-}^T, {c_{l}^+}^T]^T$ such that $y=\mathrm{vec}\{y_1,\ldots,y_l\}$ and $d=\mathrm{vec}\{d_1,\ldots,d_l\}$. Block-tridiagonal matrices such as \eqref{tridiag} are commonly solved through block LU factorisation~\cite{Fox1969}. The LU factorisation of the block-tridiagonal matrix $D$ is \scalebox{0.85}{\parbox{1.1\linewidth}{% \begin{align*} D&=\underbrace{\begin{bmatrix} I & & & & \\ N_1 & I & & & \\ & N_2 & I & & \\ & & & \ddots & & \\ & & & N_{l-1} & I \end{bmatrix}}_{L}\underbrace{\begin{bmatrix} D_1 & -M^T & & \\ & \overline{D}_2 & -M^T & & \\ & & \overline{D}_3 & & \\ & & & \ddots & & \\ & & & & \overline{D}_l \end{bmatrix}}_{U}, \label{eq::LU} \end{align*} }} where the block elements are \begin{align*} N_i&=\begin{bmatrix} B_i^T\overline{A}_i^{-1} & -I & C_i-B_i^T\overline{A}_i^{-1}B_i \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},\quad i=1,\ldots,l-1, \\ \overline{D}_i&=\begin{bmatrix} A_i+C_{i-1}-B_{i-1}^T\overline{A}_{i-1}^{-1}B_{i-1} & B_i & 0 \\ B_i^T & C_i & I \\ 0 & I & 0 \end{bmatrix},\quad i=2,\ldots,l. \end{align*} First, solving $Lx=d$ yields the iteration over $\overline{c}_{i}^-$, that is, \eqref{con2}. Calculating each $A_i+C_{i-1}-B_{i-1}^T\overline{A}_{i-1}^{-1}B_{i-1}$ gives rise to \eqref{con1}. These matrices are then used to solve $Uy=x$, providing expressions for $g_{i}^-$ and $g_{i}^+$ as \eqref{rec1} and \eqref{rec2}. The solution to $Uy=x$ also governs the values of $\lambda_i$ through \eqref{rec3}. \end{proof} We prove that Algorithm~\ref{algo1} solves the fixed-time problem by first reformulating the optimisation program to reveal its structure. Then we derive the steps of the algorithm in a similar fashion to Cantoni {\it et al.}~\cite{cantoni2020structured}. \begin{proof}[Proof of Proposition~\ref{prop::alg_complexity}] Similar to Richter {\it et al.}~\cite{Richter2016}, we reformulate \eqref{eq::form2} as follows: \begin{subequations}\label{eq::reform1} \begin{align} \min_{f} \quad &f^TV^{-T}HV^{-1}f, \\ \text{s.t. } \quad & Ef =0, \\ \quad & P f = b. \end{align} \end{subequations} We make the change of variable $f = \overline{f} + Z g$ such that $P \overline{f}=b$ and $PZ=0$. By construction $P f=P(\overline{f} + Z g)= b$, and the program resulting from the substitution is \begin{subequations} \label{eq::form4} \begin{align} \min_{g} \quad &2g^TZ^TV^{-T}HV^{-1}\overline{f} + g^TZ^TV^{-T}HV^{-1}Z g, \\ \text{s.t. } \quad & E Z g =0. \label{eq::form4::constr} \end{align} \end{subequations} Note that the Hessian $Z^TV^{-T}HV^{-1}Z$ is block diagonal and the decision variables are coupled by the sparse matrix $E Z$. Furthermore, the coupling is only between the derivatives of adjacent segments.
The recursions introduced in Lemma~\ref{lemm::recursions} lead to the steps of Algorithm~\ref{algo1} as described below. The KKT conditions for \eqref{eq::form4} yield \eqref{eq::KKT}. Lemma~\ref{lemm::recursions} solves \eqref{eq::form4} with linear computational complexity in $l$. Through a change of variables, the recursions can be used to calculate the solution to \eqref{eq::form1}. There are $2l$ matrix calculations required in computing \eqref{con1and2}, while $2l$ systems of equations need to be solved in \eqref{rec1} and \eqref{rec2}. All the matrices involved are square with dimensions smaller than or equal to $k$. Hence, the computational complexity is $O(k^3l)$. \end{proof} \section{Proof of Proposition~\ref{prop::scaling}}\label{app:scaling} The cost function \eqref{eq::form1::cost} can be written as the summation \begin{align*} J(t) &= \min_{\sigma(\tau),f}\sum_{i=1}^l \int_{\tau_{i-1}}^{\tau_i}\big(\sigma^{(k-1)}(\tau)\big)^2 d\tau. \intertext{Under the change of variables $\rho=2\tau/(\tau_i - \tau_{i-1}) - (\tau_i+\tau_{i-1})/(\tau_i - \tau_{i-1})$ for each integral in the summation,} J(t) &= \min_{\widetilde{\sigma}(\rho),f}\sum_{i=1}^l \Big(\frac{2}{\tau_i-\tau_{i-1}}\Big)^{2k-1} \int_{-1}^{1}\big(\widetilde{\sigma}^{(k-1)}(\rho)\big)^2d\rho, \\ &= \min_{\widetilde{a}_i,f}\sum_{i=1}^l \Big(\frac{2}{\tau_i-\tau_{i-1}}\Big)^{2k-1} \widetilde{a}_i^T H_i(-1,1) \widetilde{a}_i, \\ &= \min_{\widetilde{a},f}\widetilde{a}^T \widetilde{H}(t) \widetilde{a}. \end{align*} Therefore, subject to the constraints \eqref{eq::form2::vand}--\eqref{eq::form2::phi}, if $a^\star$ and $f^\star$ minimise \eqref{eq::form2::cost} then $\widetilde{a}^\star$ and $f^\star$ minimise \eqref{eq::form_nondim::cost}. We now show that the constraints \eqref{eq::form_nondim::vand}--\eqref{eq::form_nondim::const2} are equivalent to \eqref{eq::form2::vand}--\eqref{eq::form2::phi} to complete the proof. Under the change of variables, for $i=1,\ldots,l$ and $q=1,\ldots,k$, \begin{align*} \sigma^{(q-1)}_i(\tau_{i-1})&=\Big(\frac{2}{\tau_i-\tau_{i-1}}\Big)^{q-1}\widetilde{\sigma}^{(q-1)}_i(-1), \\ \sigma^{(q-1)}_i(\tau_{i})&=\Big(\frac{2}{\tau_i-\tau_{i-1}}\Big)^{q-1}\widetilde{\sigma}^{(q-1)}_i(1). \end{align*} This can be written more compactly as $V(t)a=G(t)\widetilde{V}\widetilde{a}$, and thereby \eqref{eq::form_nondim::vand} is equivalent to \eqref{eq::form2::vand}. The remaining constraints are common to both programs.\hfill $\square$ \section{Proof of Proposition~\ref{prop::variable}} \label{app::variable} We first prove the necessary condition. To obtain a contradiction, assume $t^\star=Rd^\star$ is a local minimiser of \eqref{eq::form3} but $d^\star$ is not a local minimiser of \eqref{eq::form6}. Then, for any $\varepsilon>0$, there exists some $d^\dagger$ such that $\lVert d^\dagger - d^\star \rVert < \varepsilon$ and $J(Rd^\dagger)<J(Rd^\star)$. Hence, there exists a $t^\dagger=Rd^\dagger$ such that $\lVert t^\dagger - t^\star \rVert < \lVert R \rVert \varepsilon$ and $J(t^\dagger)<J(t^\star)$, which contradicts $t^\star$ being a local minimiser of \eqref{eq::form3}. To prove the sufficient condition, we first state a useful fact. Consider the times $t_1,~t_2\in\mathbb{R}^{l+1}$ and their respective differences $d_1,~d_2\in\mathbb{R}^{l}$, such that $d_1=d_2$. The formulation \eqref{eq::form_nondim} depends only on the differences between times $\tau_i-\tau_{i-1}$, $i=1,\ldots,l$, therefore $J(t_1)=J(t_2)$. We now prove the sufficient condition by contradiction.
Assume that $d^\star$ is a local minimiser of \eqref{eq::form6} but $Rd^\star$ is not a local minimiser of \eqref{eq::form3}. Then, for any $\varepsilon>0$, a $t^\dagger$ exists such that $\lVert t^\dagger - R d^\star \rVert < \varepsilon$ and $J(t^\dagger)<J(Rd^\star)$. However, as the formulation depends only on the differences between times, there must also exist some $d^\dagger$ such that $J(t^\dagger)=J(Rd^\dagger)$. Hence, there is a $d^\dagger$ such that $\lVert d^\dagger - d^\star \rVert < \varepsilon / \lVert R \rVert $ and $J(Rd^\dagger)<J(Rd^\star)$, which contradicts the assertion that $d^\star$ is a local minimiser of \eqref{eq::form6}. \hfill $\square$ \section{Proof of Proposition~\ref{prop::gradient}} \label{app::gradient} We start by performing a similar reformulation as in the proof of Proposition~\ref{prop::alg_complexity}. We note that $G(t)$ depends only on the differences between times $\delta_i=(\tau_i-\tau_{i-1})/2$, so without loss of generality we reparametrise it as $G(d):\mathbb{R}^{l}\rightarrow\mathbb{R}^{2kl\times 2kl}$. This allows us to recast \eqref{eq::form6} as \begin{subequations}\label{eq::reform2} \begin{align} J(Rd)=\min_{f} \quad &f^T(G(d)\widetilde{V})^{-T}\widetilde{H}(G(d)\widetilde{V})^{-1}f, \\ \text{s.t. } \quad & Ef = 0, \\ & Pf = b. \end{align} \end{subequations} We make the change of variable $f = \overline{f} + Zg$ such that $P\overline{f}=b$ and $PZ=0$. Further, we let $g=Yh$ such that $EZY=0$ and $Y\in\mathbb{R}^{(2kl-m)\times (kl-m/2)}$. Substituting the two changes of variables into \eqref{eq::reform2} results in the unconstrained optimisation program \begin{align}\label{eq::unconstrained} J(Rd)=\min_{h} \quad h^T Q(d) h + h^Tq(d) + \overline{q}(d), \end{align} where \begin{align*} Q(d) &= Y^TZ^T (G(d)\widetilde{V})^{-T}\widetilde{H}(G(d)\widetilde{V})^{-1} ZY, \\ q(d) &= 2 Y^TZ^T (G(d)\widetilde{V})^{-T}\widetilde{H}(G(d)\widetilde{V})^{-1} \overline{f}, \\ \overline{q}(d) &= \overline{f}^T(G(d)\widetilde{V})^{-T}\widetilde{H}(G(d)\widetilde{V})^{-1}\overline{f}. \end{align*} Let $h^\star(d)$ be the argument that minimises \eqref{eq::unconstrained}, such that \begin{align*} J(Rd)&=h^\star(d)^T Q(d) h^\star(d) + h^\star(d)^T q(d) + \overline{q}(d). \end{align*} A necessary condition for the optimality of $h^\star(d)$ is \begin{align}\label{eq::opt_cond_1} 2Q(d)h^\star(d)+q(d)&=0. \end{align} Substituting \eqref{eq::opt_cond_1} into $\nabla_d J(Rd)$ yields \begin{align} \label{eq::partial_J_opt} \nabla_d J(Rd) &= h^\star(d)^T \big(\nabla_d Q(d)\big) h^\star(d) \\ & \quad + h^\star(d)^T \nabla_d q(d) + \nabla_d \overline{q}(d). \nonumber \end{align} We will now identify the nonzero components of $\nabla_d Q(d)$, $\nabla_d q(d)$ and $\nabla_d \overline{q}(d)$ and use them to form compact expressions for $\nabla_d J(Rd)$. For $i=j$, the partial derivatives of the block-diagonal components of $G(d)$ and $\widetilde{H}$ are \begin{align*} \frac{\partial G_i}{\partial \delta_i} &= \mathrm{diag}\{0, -\delta_i^{-2}, \ldots,-(s-1)\delta_i^{-s}, \ldots \\ & \quad 0, -\delta_i^{-2}, \ldots,-(s-1)\delta_i^{-s}\}, \end{align*} where $s\in\{1,\ldots,k\}$ indexes the entries within each diagonal block, and \begin{align} \frac{\partial}{\partial \delta_i}\Big(\frac{1}{\delta_i^{2k-1}}H_i(-1,1)\Big) = \frac{1-2k}{\delta_i^{2k}}H_i(-1,1). \label{eq::prop2_p2} \end{align} The partial derivatives of the block-diagonal components of $G(d)$ and $\widetilde{H}$ are zero for $i\neq j$.
To ease notation in the final expression we introduce \begin{align}\label{eq::prop2_p1} F(\delta_i)&=-\frac{\partial G_i(\tau_{i-1},\tau_i)}{\partial \delta_i}\,G_i(\tau_{i-1},\tau_i)^{-1} \\ &= \delta_i^{-1}\mathrm{diag}\{0, 1, \ldots,k-1, 0, 1, \ldots,k-1\}, \nonumber \end{align} which agrees with the definition of $F$ in the statement of Proposition~\ref{prop::gradient}. The last calculation required is the derivative of the inverse, \begin{align} \label{eq::prop2_p3} \nabla_d \big((G(d) \widetilde{V})^{-1}\big)&= - \widetilde{V}^{-1} G(d)^{-1} \big(\nabla_d G(d)\big) G(d)^{-1}. \end{align} Substituting \eqref{eq::prop2_p1}, \eqref{eq::prop2_p2} and \eqref{eq::prop2_p3} into the partial derivatives $\nabla_d Q(d),~\nabla_d q(d)$ and $\nabla_d \overline{q}(d)$ and then evaluating \eqref{eq::partial_J_opt} yields the compact expressions \eqref{eq::partialJ} as required.\hfill $\square$ \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} The Unruh effect posits that on a linearly uniformly accelerated trajectory the Minkowski vacuum looks like a thermal state at a temperature proportional to the trajectory's proper acceleration \cite{Davies, Fulling, Unruh}. This is the Unruh temperature, given by $T_{\rm U} = a/(2 \pi)$ in natural units. The study of the Unruh effect continues to be of great interest in theoretical physics; see e.g. \cite{DeBievre:2006pys} for a study of the problem as a return to equilibrium, \cite{Fewster:2016ewy} for the characterisation of the thermalisation time for the Unruh effect, and \cite{Salton:2014jaa} in the context of entanglement harvesting. The intimate relationship between the Unruh effect and black holes in equilibrium, which indeed originally motivated Unruh's work \cite{Unruh}, is by now well established (see e.g. \cite[Chap. 5]{WaldBook}), and can be seen very explicitly in two-dimensional situations \cite{Juarez-Aubry:2014jba}. The references \cite{Crispino, Hu:2012jr} include thorough discussions of the Unruh effect, including applications and structural properties. Very recently, a concrete experimental proposal has been put forward in \cite{Gooding:2020scc} to detect for the first time the Unruh temperature along uniformly accelerated circular motions, an analogous circular Unruh effect; see e.g. \cite{Biermann:2020bjh, Juarez-Aubry:2019gjw, Good:2020hav}. Sitting at the heart of the Unruh effect is the fact that the Minkowski vacuum state restricted to, say, the right Rindler wedge of Minkowski spacetime, $|t| < x$, can be formally represented as a thermal mixture of so-called \emph{Rindler particles} supported on the right Rindler wedge. These are nothing but particles defined in the Fulling-Rindler quantisation in flat spacetime, for which the notion of positive energy is defined with respect to Lorentz boosts, $b^a = a(x \, \partial_t^a + t\, \partial_x^a)$, which generate the natural notion of time evolution for linearly accelerated observers. Following this observation, in 1984 Unruh and Wald wrote a seminal paper \cite{Unruh-Wald} where they clarified what occurs when a linearly uniformly accelerated observer detects a Rindler particle: From the point of view of an inertial observer in Minkowski spacetime, the absorption of a Rindler particle -- modelled as a two-level detector excitation -- corresponds to the emission of a Minkowski particle. The paper \cite{Unruh-Wald} is remarkable in that not only did it illustrate the relativity of the notions of a particle, detection and emission, but it also clarified that working in terms of quantum fields (and taking particles to have an operational meaning) is fully consistent with the basic ideas underlying the equivalence principle. Furthermore, \cite{Unruh-Wald} has served as the starting point for further developments: for example, the study of \emph{bremsstrahlung} as seen from the point of view of accelerated observers \cite{Higuchi:1992td, Higuchi:1992we}, and the analysis of the decay of accelerated protons, with the finding that such behaviour approaches that of accelerated neutrons as the mass scale characterising the acceleration -- i.e., the corresponding Unruh temperature -- increases, and disappears exponentially as that quantity grows beyond the value of the proton-neutron mass gap \cite{Matsas-1, Matsas-2}. In any case, there are three central questions that are addressed in \cite{Unruh-Wald}.
The first one is to analyse the unitary evolution of the joint field-detector system when the field is initially in the Minkowski vacuum state and the two-level detector, initially switched off and prepared in the ground state, follows a linearly uniformly accelerated trajectory in the right Rindler wedge. This is done using perturbation theory in the interaction picture up to first order. (Second order contributions were further studied in \cite{Audretsch:1994gg}.) The second question is to determine the updated state of the field, assuming the detector has detected a Rindler particle after some interaction time has elapsed -- namely, a one-particle state in the Minkowski folium -- and to obtain the updated stress-energy tensor. It is found that, since the updated field state is a one-particle state, the energy of the field has increased upon detection. The third question addressed in \cite{Unruh-Wald} is whether detecting Rindler particles can be used as a mechanism for extracting an unbounded amount of energy from the field or for sending a superluminal signal from the right to the left Rindler wedge. In both cases, the analysis leads to a negative answer. The motivation of this work is two-fold. First, we wish to revisit the three central questions discussed in \cite{Unruh-Wald}. Concerning the first one, we note that the calculation in \cite{Unruh-Wald} is performed by exploiting an analogy with the situation of a detector interacting with a field in a thermal state that describes a \emph{proper mixture} \cite{DEspagnat}, i.e., such that the actual state of the system is pure, but there is a degree of ignorance as to what the state of the system actually is, encoded in weights accounting for a probability distribution over the possible (pure) states of the system. This results in a mixed-state description of a pure state due to ignorance. On the other hand, the most natural description for the Minkowski vacuum from the point of view of an accelerated observer is that of a thermal state as an \emph{improper mixture} \cite{DEspagnat} (see footnote 2 for more details), as the left Rindler wedge degrees of freedom must be traced out, yielding a reduced mixed state. In sec. \ref{sec:Main} we will carry out the calculation from the improper-mixture viewpoint. While the results coincide mathematically, as they should, we think that this treatment is conceptually clearer. We then proceed to calculate the updated expectation value of the stress-energy tensor once the detector has clicked. We obtain expressions in both the right and the left Rindler wedge, adding to the result displayed in \cite{Unruh-Wald} for the right Rindler wedge, as we show in sec. \ref{sec:Main}. Concerning the third central question addressed in \cite{Unruh-Wald}, on the point of energy extraction, we agree with the no-go argument presented by Unruh and Wald: while the energy is \emph{not conserved} for a single measurement, it is in the long run, over very many successive measurements. The discussion on superluminal communication, however, ties in with the second motivation of this paper: here we shall raise the point that there is a potential issue after a single measurement has been carried out, if one is to trust the semiclassical approximation of quantum gravity ``before" and ``after" the measurement has been performed. The point will be that in semiclassical gravity the expectation value of the stress-energy tensor can be used to actually source geometry, see eq. \eqref{semi-simple} below.
Thus, an abrupt change of this quantity should be detectable by an experiment on the gravitational sector. Where and how this abrupt change occurs, i.e., where and how the state of the field can be seen as collapsing upon a measurement by the detector, must undoubtedly play a r\^ole in how this apparent paradox is prevented from occurring, but our current understanding of these questions is fuzzy -- hence the use of inverted commas around the words \emph{before} and \emph{after}. Thus, it seems to us that the resolution of this apparent contradiction is connected with the \emph{measurement problem} of quantum theory. On this point we wish to emphasise that, while Unruh and Wald correctly point out in their discussion in \cite[Sec. IV]{Unruh-Wald} that the presence of a detector (switched on or otherwise) in the right Rindler wedge has no influence on the left Rindler wedge, their argument is based on the Heisenberg-picture observation that the effects of the detector can only affect the causal future of the coupling region between the detector and the field. In fact, this observation does not even depend on the details of the detector or the field observables; see e.g. \cite{Fewster:2018qbm} for a precise statement in some generality. The limitation of that argument is that it does not take into account the non-unitary measurement of the detector, which is typically described as a projection onto the out-state in the interaction picture. This is a central difference from the position assumed in this paper. The organisation of this paper is as follows: In Sec. \ref{sec:Main} we describe the evolution of the field-detector system using a left and right doubled Fock space representation for the field, and we obtain the stress-energy tensor after a Rindler particle has been measured. In doing so, we do not make any assumptions on the details of the coupling between the detector and the field -- in particular, we do not assume a long-time limit for the interaction -- other than assuming that the coupling is weak, which allows us to content ourselves with first-order effects in the coupling. We then discuss the non-conservation of energy upon measurements in Sec. \ref{sec:Energetic}, in a simplified setting for the sake of clarity. The possibility of faster-than-light signalling, its implications and potential avoidances appear in Sec. \ref{sec:Superluminal}. Discussions and conclusions appear in Sec. \ref{sec:Conclusions}. \section{What happens once an accelerating observer has detected a Rindler particle} \label{sec:Main} Consider, as in \cite{Unruh-Wald}, a particle detector coupled to a Klein-Gordon field in Minkowski spacetime, following a linearly uniformly accelerated trajectory with acceleration $a$ in the right Rindler wedge. In other words, the particle detector follows the integral curve generated by the boost $b^a = a(x \partial_t^a + t \partial_x^a)$. While currently the pointlike Unruh-DeWitt detector \cite{DeWitt, Louko:2007mu, Fewster:2016ewy} is the most prominent detector model in the relativistic quantum information and QFT in curved spacetime literature, we shall model our detector as Unruh and Wald did in \cite{Unruh-Wald} to stay closer to their original treatment. The detector is a two-level system with Hilbert space $\mathbb{C}^2$ spanned by energy eigenstates $|\uparrow\rangle$ and $|\downarrow\rangle$.
The detector Hamiltonian is $\widehat H_{\rm D} := \Omega \widehat{A}^* \widehat{A}$, where $\widehat{A}$ and $\widehat{A}^*$ are the lowering and raising operators, respectively, and $\Omega>0$ is the energy of the excited state, i.e., $\widehat H_{\rm D} |\uparrow\rangle = \Omega |\uparrow\rangle$ and $\widehat H_{\rm D} |\downarrow\rangle = 0$. The coupled detector-field theory is described by the total Hamiltonian \begin{align} \widehat H = \widehat H_{\rm D} \otimes 1\!\!1 + 1\!\!1_{\rm D} \otimes \widehat H_\Phi + \widehat H_I, \end{align} where $\widehat H_\Phi$ is the Klein-Gordon Hamiltonian and the interaction Hamiltonian is defined by \begin{eqnarray} \widehat{H}_I(\tau) = \epsilon(\tau) \int_\Sigma e^{ 2 a \xi} {\rm d}\xi \, {\rm d} y {\rm d} z \left[ \psi(\xi, y,z) \widehat{A}(\tau) + \overline{\psi}(\xi, y,z) \widehat{A}^*(\tau) \right] \otimes \widehat{\Phi}(\tau, \xi, y,z), \end{eqnarray} where $\widehat{\Phi}$ is the Klein-Gordon field, $\psi \in C_0^\infty(\Sigma)$ defines the profile of the spatial extension of the detector and $\epsilon \in C_0^\infty(\mathbb{R})$ is a switching function that controls the interaction between the detector and the Klein-Gordon field along the linearly uniformly accelerated trajectory of the detector. We shall assume that the interaction between the detector and the field is weaker than any other scale in the problem and that it takes place for sufficiently long times. The coupling takes place in the right Rindler wedge, where the flat metric can be written in terms of the Rindler coordinates \begin{equation} t = \frac{1}{a} e^{a \xi} \sinh(a \tau), \qquad x = \frac{1}{a} e^{a \xi} \cosh(a \tau). \end{equation} It takes the form \begin{equation} {\rm d} s^2 = -e^{2 a \xi} \left( {\rm d} \tau^2 - {\rm d} \xi^2 \right) + {\rm d} y^2 + {\rm d} z^2. \label{RRMetric} \end{equation} Furthermore, in the right Rindler wedge, the Klein-Gordon field can be written as \begin{align} \widehat \Phi(\tau, \xi, y, z) = \int_{\mathbb{R}^+\times \mathbb{R}^2} {\rm d}^3 \kappa \left(v_{I \vec{\kappa}}(\tau, \xi, y, z) \widehat{a}_{{\rm R} \vec{\kappa}} + \overline{v_{I \vec{\kappa}}}(\tau, \xi, y, z) \widehat{a}_{{\rm R} \vec{\kappa}}^* \right), \label{PhiRight} \end{align} with $\vec{\kappa} = (\omega, \kappa_y, \kappa_z)$, and where the right Rindler modes can be written in terms of the modified Bessel function of the second kind (MacDonald's function), \begin{equation} v_{I \vec{\kappa}}(x) = \sqrt{\frac{ \sinh\left( \frac{\pi \omega}{a}\right) }{4 \pi^4 a}} K_{i \omega/a}\left[ \frac{\sqrt{\kappa^2_y + \kappa^2_z + m^2}}{a} e^{a \xi} \right] e^{-i \omega \tau + i \left( y \kappa_y + z \kappa_z\right)}, \label{Solg} \end{equation} and where the formal sharp-momentum annihilation and creation operators are $\widehat{a}_{{\rm R} \vec{\kappa}} := \widehat{a}(\overline{v_{I \vec{\kappa}}})$ and $\widehat{a}_{{\rm R} \vec{\kappa}}^* := \widehat{a}^*(v_{I \vec{\kappa}})$, respectively. The annihilation operators annihilate the right Fulling-Rindler vacuum, $\Omega_{\rm R}$, while the creation operators create Rindler particles. A fully analogous description of the quantum theory holds in the left Rindler wedge.
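Although the detailed derivation is relegated to appendix \ref{App:MinkoVac}, it is useful to recall here, schematically and mode by mode (we suppress the pairing of opposite transverse momenta between the two wedges), the standard formal expression underlying this description:
\begin{align*}
|\Omega_{\rm M}\rangle \propto \bigotimes_{\vec{\kappa}} \sum_{n=0}^{\infty} e^{-\pi n \omega/a}\, |n_{\vec{\kappa}}\rangle_{\rm L} \otimes |n_{\vec{\kappa}}\rangle_{\rm R},
\end{align*}
where $|n_{\vec{\kappa}}\rangle_{\rm L/R}$ denotes the $n$-Rindler-particle excitation of the mode $\vec{\kappa}$ in the left/right wedge. Tracing out the left-wedge factors yields, mode by mode, a density matrix proportional to $\sum_{n} e^{-2\pi n \omega/a}\, |n_{\vec{\kappa}}\rangle_{\rm R} \langle n_{\vec{\kappa}}|$, i.e., a Gibbs state at the Unruh temperature $T_{\rm U}=a/(2\pi)$.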
Introducing the left Rindler coordinates \begin{equation} t = \frac{1}{a} e^{a \tilde \xi} \sinh(a \tilde \tau), \qquad - x = \frac{1}{a} e^{a \tilde \xi} \cosh(a \tilde \tau), \end{equation} the field in the left Rindler wedge takes the analogous form \begin{align} \widehat \Phi(\tilde \tau, \tilde \xi, y, z) = \int_{\mathbb{R}^+\times \mathbb{R}^2} {\rm d}^3 \kappa \left(v_{II \vec{\kappa}}(\tilde \tau, \tilde \xi, y, z) \widehat{a}_{{\rm L} \vec{\kappa}} + \overline{v_{II \vec{\kappa}}}(\tilde \tau , \tilde \xi, y, z) \widehat{a}_{{\rm L} \vec{\kappa}}^* \right), \label{PhiLeft} \end{align} where the left Rindler modes have an identical form to the right modes \eqref{Solg} upon the replacement of $\tau$ and $\xi$ by $\tilde \tau$ and $\tilde \xi$. It is very well known that the Minkowski vacuum restricted to the (right or left) Rindler wedge looks like a thermal mixture of (right or left, resp.) Rindler particles. See appendix \ref{App:MinkoVac} for details. In the case at hand, the state of the system is prepared initially (before the switch-on of $\epsilon$) as the tensor product \begin{align} |s_{-\infty} \rangle = |\downarrow\rangle \otimes |\Omega_{\rm M} \rangle. \end{align} In the interaction picture, the late-time state of the system (after the switch-off of $\epsilon$) is given by \begin{align} |s_{\infty} \rangle & = T \left({\rm e}^{-{\rm i} \int_\mathbb{R} {\rm d} \tau \widehat H_I} \right) |s_{-\infty} \rangle = |s_{-\infty} \rangle + \left[- {\rm i} \int_\mathbb{R} {\rm d} \tau \widehat H_I + O(\epsilon^2) \right] |s_{-\infty} \rangle, \label{s-infty} \end{align} under the assumption that the coupling is weak. To first order in perturbation theory, the late-time state of the system is \begin{align} |s_{\infty} \rangle = |\downarrow\rangle \otimes |\Omega_{\rm M} \rangle - i | \uparrow \rangle \otimes \int_{\mathbb{R}^+ \times \mathbb{R}^2} \!\!\!\!\!\!\!\!\!\! {\rm d}^3 \kappa \int_{I} {\rm d} {\rm vol}(x) \zeta(x) \left( v_{I \vec \kappa}(x) \hat a_{{\rm R} \vec \kappa} + \overline{v_{I \vec \kappa}(x)} \hat a^*_{{\rm R} \vec \kappa} \right) |\Omega_{\rm M}\rangle , \label{LateState} \end{align} where the volume element in the right Rindler wedge is locally ${\rm d} {\rm vol}(x) = e^{2 a \xi} {\rm d} \tau {\rm d} \xi {\rm d} y {\rm d} z$ and \begin{align} \zeta(x) := e^{i \Omega \tau} \epsilon(\tau) \overline{\psi(\xi,y,z)}. \end{align} In \cite{Unruh-Wald} the assumption has been made that $\epsilon(\tau)$ is nearly constant, physically representing a long interaction between the detector and the field, such that switching effects are negligible. In this case, the $\tau$ integral can be performed directly and one obtains a factor proportional to $\delta(\Omega-\omega)$, which indicates that only the modes $v_{I \vec \kappa}$ with frequency highly localised around $\Omega$ contribute to first order in eq. \eqref{s-infty}. This makes perfect sense: after a long interaction time, the only modes that get excited from the vacuum state are those whose frequency coincides with the energy gap of the detector. Let us however digress at this point and not make this approximation. The reason is that in our case we are interested in post-measurement effects for measurements carried out after a finite time of interaction between the field and the detector. On the assumption that the detector clicks, the updated state for the field becomes \begin{align} |f\rangle = -i \mathcal{N} \int_{\mathbb{R}^+ \times \mathbb{R}^2} \!\!\!\!\!\!\!\!\!\! 
{\rm d}^3 \kappa \int_{I} {\rm d} {\rm vol}(x) \zeta(x) \left( v_{I \vec \kappa}(x) \hat a_{{\rm R} \vec \kappa} + \overline{v_{I \vec \kappa}(x)} \hat a^*_{{\rm R} \vec \kappa} \right) |\Omega_{\rm M}\rangle, \end{align} where the normalisation $\mathcal{N}$ is given by \begin{align} \mathcal{N} = \left( \int_{\mathbb{R}^+ \times \mathbb{R}^2} \!\!\!\!\!\!\!\!\!\! {\rm d}^3 \kappa \int_{I} {\rm d} {\rm vol}(x) \int_{I} {\rm d} {\rm vol}(x') \overline{\zeta(x)} \zeta(x') \left( \frac{v_{I \vec \kappa}(x)\overline{v_{I \vec \kappa}(x')}}{1- e^{-2 \pi \omega/a}} + \frac{\overline{v_{I \vec \kappa}(x)} v_{I \vec \kappa }(x')}{e^{2 \pi \omega/a}-1} \right) \right)^{-1/2}, \label{Normalisation} \end{align} as can be seen in appendix \ref{App:Normalisation}. We are interested in the change in the expectation value of the stress-energy tensor of the field in the updated state, i.e., we are interested in \begin{align} \Delta T_{ab} := \langle f | \hat T_{ab}(x) f \rangle - \langle \Omega_{\rm M} | \hat T_{ab} \Omega_{\rm M} \rangle \end{align} in the right and left Rindler wedges. (Note that imposing $\langle \Omega_{\rm M} | \hat T_{ab} \Omega_{\rm M} \rangle = 0$, we have that $\Delta T_{ab} = \langle f | \hat T_{ab}(x) f \rangle$.) To this end, if we use a point-splitting prescription for renormalising the stress-energy tensor, the object of interest is the two-point function in the left and right Rindler wedges. It follows from the calculations in appendices \ref{Sec:RightUpdate} and \ref{Sec:LeftUpdate} that the two-point function in the updated state takes the form \begin{align} \langle f | \hat \Phi(x) \hat \Phi(x') f \rangle & = \langle \Omega_{\rm M} | \hat \Phi(x) \hat \Phi(x') \Omega_{\rm M} \rangle+ \Delta_{\rm R}(x,x') \text{ in the right Rindler wedge, and} \\ \langle f | \hat \Phi(x) \hat \Phi(x') f \rangle & = \langle \Omega_{\rm M} | \hat \Phi(x) \hat \Phi(x') \Omega_{\rm M} \rangle+ \Delta_{\rm L}(x,x') \text{ in the left Rindler wedge}. \end{align} Here, $\Delta_{\rm R}$ and $\Delta_{\rm L}$ are real, smooth, symmetric bi-functions given by \begin{align} \Delta_{\rm R}(x,x') & = \mathcal{N}^{2} \int_{\mathbb{R}^+ \times \mathbb{R}^2} \!\!\!\!\!\!\!\!\!\! {\rm d}^3 \kappa \int_{\mathbb{R}^+ \times \mathbb{R}^2} \!\!\!\!\!\!\!\!\!\! {\rm d}^3 p \int_{I} {\rm d} {\rm vol}(y) \int_{I} {\rm d} {\rm vol}(y') \frac{\overline{\zeta(y)} \zeta(y')}{(1-e^{-2\pi \omega_p/a}) (1-e^{-2\pi \omega_\kappa/a})} \nonumber \\ & \times \left( v_{I \vec p}(y) v_{I \vec{\kappa}}(x) \overline{v_{I \vec{p}}(x')} \overline{v_{I \vec \kappa}(y')} + \overline{v_{I \vec \kappa}(y)} v_{I \vec{\kappa}}(x) v_{I \vec{p}}(x') \overline{v_{I \vec p}(y')}e^{-2 \pi \omega_\kappa/a} \right. \nonumber \\ & \left. + \overline{v_{I \vec p}(y)} v_{I \vec{\kappa}}(x) v_{I \vec{p}}(x') \overline{v_{I \vec \kappa }(y')}e^{-2 \pi \omega_p/a} + \overline{v_{I \vec \kappa}(y)} v_{I \vec{\kappa}}(x) \overline{v_{I \vec{p}}(x')} v_{I \vec p}(y')e^{-2 \pi \omega_{\kappa}/a} e^{-2 \pi \omega_{p}/a}\right) + {\rm c.c. }, \label{DeltaR} \\ \Delta_{\rm L}(x,x') & = \mathcal{N}^{2} \int_{\mathbb{R}^+ \times \mathbb{R}^2} \!\!\!\!\!\!\!\!\!\! {\rm d}^3 \kappa \int_{\mathbb{R}^+ \times \mathbb{R}^2} \!\!\!\!\!\!\!\!\!\! 
{\rm d}^3 p \int_{I} {\rm d} {\rm vol}(y) \int_{I} {\rm d} {\rm vol}(y') \overline{\zeta(y)} \zeta(y') \frac{e^{- \pi \omega_p/a} e^{- \pi \omega_\kappa/a} }{(1-e^{-2 \pi \omega_p/a})(1-e^{-2 \pi \omega_\kappa/a})} \nonumber \\ & \times \left( v_{I \vec p}(y) v_{II \vec{\kappa}}(x) \tilde v_{II \vec{p}}(x') \tilde v_{I \vec \kappa}(y') + v_{I \vec p}(y) \tilde v_{II \vec{p}}(x) v_{II \vec{\kappa}}(x') \tilde v_{I \vec \kappa }(y') \right. \nonumber \\ & \left. + v_{I \vec p}(y) \overline{v_{II \vec{\kappa}}(x)} \tilde v_{II \vec{p}}(x') \overline{\tilde v_{I \vec \kappa}(y')} + \overline{v_{I \vec p}(y)} \overline{\tilde v_{II \vec{p}}(x)} v_{II \vec{\kappa}}(x') \tilde v_{I \vec \kappa }(y') \right) + {\rm c.c. }, \label{DeltaL} \end{align} where in eq. \eqref{DeltaL} the tilde on the modes denotes a parity operation in the directions transverse to the Rindler wedges, i.e., $\tilde v_{I \vec \kappa } := v_{I (\omega_\kappa, -\kappa_\perp)}$, and similarly for $\tilde v_{II \vec \kappa }$. Let us now compute the expectation value of the stress-energy tensor in the state $| f \rangle$. Note that a difference with the calculation in \cite{Unruh-Wald} is that we do not treat the state of the system as an improper thermal mixture in the right Rindler wedge, but rather as a pure state defined in the right and left Rindler wedges, which allows us to obtain the renormalised expectation value of the stress-energy tensor for points in the right and left Rindler wedges. Here we use the terminology introduced by d'Espagnat \cite{DEspagnat} to clarify that the same mathematical object, a density matrix, can be used to represent two very different physical situations: 1) the case in which one is interested in an ensemble of identical quantum systems, each one of which is in one pure and definite quantum state among a list of possible such states $ \{ |i \rangle \}$, and where the fraction of such states in the ensemble is given by a certain classical distribution function $f (i)$, and 2) the case in which a system of interest $S$ is a subsystem of a larger system $ S+E$, with the latter in a given pure quantum state, but with our interest focused just on $S$, which can therefore be characterised in terms of the reduced density matrix obtained after tracing over the degrees of freedom of $E$. For the first case one reserves the name ``proper mixture" and says the density matrix represents such a proper mixture, and for the second case one reserves the name ``improper mixture", and equally indicates that the density matrix is to be understood as representing the improper mixture. We note that another situation one might want to consider is one in which one is dealing with a single quantum system $S$ which is in a pure state that is, however, not completely known, and for which one has information about the classical probability $p (i)$ of the system being in each one of the quantum states. For such a situation one can often use the characterisation provided by case 1) by considering a corresponding imaginary ensemble, in which the fraction is arranged to match the given probability, i.e., $ f(i) = p(i)$. In that situation one also talks, by extension, about a proper mixture, even though the state of the system is pure, and thus its ``properness" (or the fact that we do not express the state as a pure one) is just a result of our ignorance. Finally, as is usual, a density matrix is characterised as thermal if its representation in the energy basis has the standard thermal weights. 
Thus a thermal density matrix can be proper or improper. \section{The change in the stress-energy tensor} \label{sec:ChangeTab} In order to compute the stress-energy tensor in the updated state, $\langle f | \hat T_{ab}(x) f \rangle$, in the right/left wedge, we apply the point-splitting operator \begin{align} \mathcal{T}_{ab} := g_b{}^{b'} \nabla_a \nabla_{b'} - \frac{1}{2} g_{ab} g^{cd'} \nabla_c \nabla_{d'} - \frac{1}{2} g_{ab} m^2, \end{align} where $g_a{}^{a'}$ is the parallel-transport propagator, to $\Delta_{{\rm R/L}}(x,x')$ given by eq. \eqref{DeltaSimp}, and then take the coincidence limit as \begin{align} \langle f | \hat T_{ab}(x) f \rangle = \lim_{x'\to x} \mathcal{T}_{ab} \Delta_{\rm R/L}(x,x'). \label{Tcoinc} \end{align} It is useful to locally express the parallel-transport components in Rindler coordinates by using the formula $g_\mu{}^{\mu'}(x,x') = e_\mu^I(x) e^{\mu'}_I(x')$ in terms of the soldering form and its inverse. We then have that in the right Rindler wedge the parallel-transport propagator has components $g_\eta{}^{\eta'}(x,x') = e^{a \xi} e^{-a \xi'}$, $g_\xi{}^{\xi'}(x,x') = e^{a \xi} e^{-a \xi'}$, $g_y{}^{y'} = 1$ and $g_z{}^{z'} = 1$, with all other components vanishing. We are chiefly concerned with changes in the left Rindler wedge, which is causally disconnected from the detector that clicks. Inserting \eqref{DeltaL} into eq. \eqref{Tcoinc}, we find that in the left Rindler wedge the stress-energy tensor in the updated state takes a diagonal form, which can be read directly from \eqref{Tcoinc}. For instance, for the energy density of a massless field we have in the left Rindler wedge \begin{align} & \langle f | \hat T_{\eta \eta}(x) f \rangle = \Re \, \, \mathcal{N}^{2} \int_{\mathbb{R}^+ \times \mathbb{R}^2} \!\!\!\!\!\!\!\!\!\! {\rm d}^3 \kappa \int_{\mathbb{R}^+ \times \mathbb{R}^2} \!\!\!\!\!\!\!\!\!\! {\rm d}^3 p \int_{I} {\rm d} {\rm vol}(y) \int_{I} {\rm d} {\rm vol}(y') \overline{\zeta(y)} \zeta(y') \frac{e^{- \pi \omega_p/a} e^{- \pi \omega_\kappa/a} }{(1-e^{-2 \pi \omega_p/a})(1-e^{-2 \pi \omega_\kappa/a})} \nonumber \\ & \times \left( v_{I \vec p}(y) \partial_\eta v_{II \vec{\kappa}}(x) \partial_\eta \tilde v_{II \vec{p}}(x) \tilde v_{I \vec \kappa}(y') + e^{2 a \xi} \sum_{i = 1}^3 g^{ii} v_{I \vec p}(y) \partial_i v_{II \vec{\kappa}}(x) \partial_i\tilde v_{II \vec{p}}(x) \tilde v_{I \vec \kappa}(y') \right. \nonumber \\ & + v_{I \vec p}(y) \partial_\eta \tilde v_{II \vec{p}}(x) \partial_\eta v_{II \vec{\kappa}}(x) \tilde v_{I \vec \kappa }(y') + e^{2 a \xi} \sum_{i=1}^3 g^{ii} v_{I \vec p}(y) \partial_i \tilde v_{II \vec{p}}(x) \partial_i v_{II \vec{\kappa}}(x) \tilde v_{I \vec \kappa }(y') \nonumber \\ & + v_{I \vec p}(y) \partial_\eta \overline{v_{II \vec{\kappa}}(x)} \partial_\eta \tilde v_{II \vec{p}}(x) \overline{\tilde v_{I \vec \kappa}(y')} + e^{2 a \xi} \sum_{i=1}^3 g^{ii} v_{I \vec p}(y) \partial_i \overline{v_{II \vec{\kappa}}(x)} \partial_i \tilde v_{II \vec{p}}(x) \overline{\tilde v_{I \vec \kappa}(y')} \nonumber \\ & \left. + \overline{v_{I \vec p}(y)} \partial_\eta \overline{\tilde v_{II \vec{p}}(x)} \partial_\eta v_{II \vec{\kappa}}(x) \tilde v_{I \vec \kappa }(y') + e^{2 a \xi} \sum_{i=1}^3 g^{ii} \overline{v_{I \vec p}(y)} \partial_i \overline{\tilde v_{II \vec{p}}(x)} \partial_i v_{II \vec{\kappa}}(x) \tilde v_{I \vec \kappa }(y') \right), \label{EnergyLeft} \end{align} where $\Re$ denotes the real part and with $g^{11} = e^{-2 a \xi}$ and $g^{22} = g^{33} = 1$. 
In the massive case, one adds the term \begin{align} \frac{1}{2} m^2 e^{2 a \xi } \Delta_{\rm L}(x,x) \label{MassTermEnergy} \end{align} to the right-hand side of eq. \eqref{EnergyLeft}. Similar expressions can be obtained for the components $\langle f | \hat T_{\xi \xi}(x) f \rangle$, $\langle f | \hat T_{yy}(x) f \rangle$ and $\langle f | \hat T_{zz}(x) f \rangle$ in the left Rindler wedge, and for the four non-vanishing components in the right Rindler wedge. Instead of spelling out in detail all of the remaining components, we point out that eq. \eqref{EnergyLeft} suffices to make a central point of the paper, which is that a measurement that occurs in the right Rindler wedge has non-trivial effects on the causally-disconnected left Rindler wedge. It is natural to ask ``when" or ``where" in spacetime the state collapses after a detector has detected a Rindler particle. This is far from obvious, but a natural assumption seems to be that the state collapses along a Cauchy surface of spacetime (see \cite{Juarez-Aubry:2017ery}) intersecting the ``detection event" on the right Rindler wedge and extending into the left Rindler wedge. We should emphasise that, regardless of ``how big" this change might be, it is a statement of principle that including state collapses in semiclassical gravity produces an abrupt change in a region causally disconnected from where the measurement took place, in this case by means of a detector click. One can see by direct inspection of \eqref{EnergyLeft} and \eqref{MassTermEnergy} that for high accelerations there exist spacetime regions in the left Rindler wedge where the expectation value of the energy density becomes large. For example, for sufficiently small $\tilde \xi$, say $\tilde \xi \sim 1/a$, the Rindler modes do not exhibit the large-argument suppression of the MacDonald function, but the integrand factors $e^{-\pi \omega / a}/(1- e^{-2 \pi \omega/a})$ exhibit the behaviour \begin{align} \frac{e^{-\pi \omega / a}}{1- e^{-2 \pi \omega/a}} = \frac{1}{2\sinh (\pi \omega/a)} = \frac{a}{2 \pi \omega}+ O(\omega/a). \end{align} This observation is consistent with what one would expect in the long interaction time limit case presented in \cite{Unruh-Wald}, as can be seen from eq. (3.29) in that paper in the small $\beta = 2\pi/a$ regime. \section{Energetic considerations} \label{sec:Energetic} In this section we revisit some of the energetic considerations discussed in \cite{Unruh-Wald}, but focusing on the various possible {\it individual outcomes} of the ``detection attempts", rather than on the ensemble averages of detector measurement outcomes, which are the quantities considered in most of the discussion of said reference. Following \cite{Unruh-Wald}, and to simplify the discussion, we will consider the case of (ensembles of) harmonic oscillators, and entangled pairs of harmonic oscillators, instead of quantum fields. There is no loss of conceptual clarity in doing so, but the treatment is mathematically less involved. Consider a harmonic oscillator with energy eigenstates $\lbrace |n\rangle \rbrace $ with renormalised energies $n\epsilon$ (i.e., we remove the zero-point or ground-state energy for simplicity of analysis), and a detector with two states $ | \downarrow \rangle$ and $|\uparrow\rangle$ with energy levels $0$ and $\epsilon$ respectively. 
Let us assume that the initial state of the combined system is: \begin{equation} \label{state-1} |\Psi\rangle = N \left( |0\rangle \otimes |\downarrow\rangle + \frac{1}{\sqrt{\alpha}} |n\rangle \otimes |\uparrow\rangle \right), \end{equation} with $ N^2 = 1/(1+\alpha^{-1})$. The expectation value of the energy is $\langle E \rangle_{\Psi}=\frac{n \epsilon }{1+\alpha} $. If an observer finds the detector in the unexcited state $ | \downarrow\rangle$, the value of the energy becomes $\langle E \rangle_{\rm unex} = 0$, and the probability for this is ${\rm P}_{\rm unex} = N^2$. On the other hand, if they find the detector in the excited state, the energy becomes $\langle E \rangle_{\rm ex} = n \epsilon$ and the probability for this is ${\rm P}_{\rm ex} = N^2/ \alpha$. The average result is of course $\langle E \rangle =\frac{n \epsilon}{1+\alpha} $, which is the same as $\langle E \rangle_{\Psi}$. We note however that in each specific case (i.e., for each specific outcome of the observation) the actual value of the energy differs from $\langle E \rangle_{\Psi}$, the expectation value in the initial state of the combined system. This is of course not surprising, given that the state $|\Psi \rangle$ is not an eigenstate of the total energy operator. As has been argued in \cite{Tim-Elias}, the relevant issue regarding energy conservation is not its conservation ``on average", but its conservation in each single individual instance of any experiment. In this sense, the possibility of preparing a state such as $|\Psi\rangle$ and of making the observations described above already casts serious doubt on the general validity of anything like a law of ``energy conservation" in the quantum setting. \subsection{Proper mixture} Consider now the case of a system in thermal equilibrium at temperature $T$. We take the system in question once again to be a simple harmonic oscillator. This corresponds, as traditionally treated in statistical mechanics textbooks, to an ensemble (a canonical ensemble) of identical systems and might be described by the proper mixture: \begin{equation} \label{density-matrix-1} \rho = N \sum_{n=0}^{\infty } e^{-\beta n \epsilon } |n \rangle \langle n |, \end{equation} \noindent where $ N = 1-e^{-\beta \epsilon} $ is a normalisation constant ensuring ${\rm Tr} \rho =1 $. The mean energy is then $\langle E \rangle_{T}= \frac{\epsilon \, e^{-\beta \epsilon}}{1-e^{-\beta \epsilon}}$. Let us now consider a two-level detector (as in the previous discussion) initially in the unexcited state (and with vanishing energy) and make it interact with our thermal ensemble (for this purpose we in fact consider an ensemble of identical detectors). The initial situation will thus be described by the density matrix: \begin{align} \label{density-matrix-2} \rho\otimes |\downarrow \rangle \langle \downarrow | = \left[ \left( 1-e^{-\beta \epsilon}\right) \sum_{n=0}^{\infty } e^{-\beta n \epsilon } |n \rangle \langle n | \right] \otimes |\downarrow \rangle \langle \downarrow | . \end{align} After letting the system interact for a suitably long time, we will find a result analogous to that encountered in equation (3.25) of \cite{Unruh-Wald}. 
That is: \begin{align} \label{density-matrix-3} \rho_{\rm late} = N_{\rm late} \; \sum_{n=0}^{\infty } e^{-\beta n\epsilon} \left[ |n\rangle \otimes | \downarrow \rangle - i \gamma \sqrt{n}\, |n-1\rangle \otimes |\uparrow \rangle + \dots \right]\left[ \langle n | \otimes \langle \downarrow | + i \gamma\sqrt{n}\, \langle n-1| \otimes \langle \uparrow| + \dots \right], \end{align} where $\gamma$ is a small parameter representing the strength and time duration of the interaction and $ N_{\rm late} $ is a normalisation constant ensuring ${\rm Tr} \, \rho_{\rm late} =1 $. \subsection{Improper mixture} Consider now the situation in which we are informed that the detector is excited. To describe it, we apply the corresponding projector $ |\uparrow \rangle \langle \uparrow|$ and then compute a partial trace over the detector. The resulting density matrix, up to first order in the expansion, is \begin{equation} \label{density-matrix-4} \rho_{\rm post} = N_{\rm post} \sum_{n=1}^{\infty } e^{-\beta n\epsilon} n |n-1\rangle \langle n-1|, \end{equation} \noindent where $ N_{\rm post}= \left( \frac{ e^{-\beta \epsilon}}{(1-e^{-\beta \epsilon})^2} \right)^{-1}$ is yet another normalisation constant ensuring ${\rm Tr} \, \rho_{\rm post} =1 $. We note that the absence of the first term ought to be considered the result of both the measurement of the excitation level of the excited detector and the post-selection within the ensemble, in which the elements containing the unexcited detector state were removed. In other words, going from eq. \eqref{density-matrix-3} to eq. \eqref{density-matrix-4} actually modifies the set of systems composing the ensemble itself. The next observation is that the resulting ensemble, represented by the density matrix \eqref{density-matrix-4}, is no longer thermal, which in turn implies that the expectation value of the energy, \begin{equation} \label{energy-post} {\rm Tr} ( \hat H \rho_{\rm post}) = N_{\rm post} \sum_{m=0}^{\infty } e^{-\beta (m+1) \epsilon} (m+1)m \epsilon = \frac {2 \epsilon e^{-\beta \epsilon}}{(1-e^{-\beta \epsilon})}, \end{equation} differs (in fact, by a factor of two) from that of a truly thermal ensemble ($\langle E \rangle_{T}= \frac{\epsilon \, e^{-\beta \epsilon}}{1-e^{-\beta \epsilon}}$). We note again that even after adding the energy of excitation of the detector, $ \epsilon$, the expectation value of the energy has changed as the result of the measurement (and post-selection). Finally, let us consider the case of a single pair of entangled harmonic oscillators with ``thermal weights". That is, the case of a pure state which, upon consideration of the reduced density matrices for each oscillator, would result in a thermal density matrix, but of course one of an improper nature. Let us add a detector arranged to interact just with the harmonic oscillator (II) and which is initially prepared in the unexcited state, so the state of the complete system is: \begin{equation} \label{entangledstate-1} {|\Psi \rangle}^{(T)}_{\rm initial} = A_{\rm initial} \sum_{n=0}^{\infty } e^{-\beta n\epsilon} |n \rangle^{(I)} \otimes |n\rangle^{(II)} \otimes | \downarrow\rangle, \end{equation} where $ A_{\rm initial} = \sqrt{1-e^{-2 \beta \epsilon}} $. 
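For the reader's convenience, we record the elementary geometric-series identities behind eq. \eqref{energy-post} above and the expectation values computed below. Writing $q := e^{-\beta\epsilon}$ (or $q := e^{-2\beta\epsilon}$ in the entangled case), with $0<q<1$, \begin{align} \sum_{n=1}^{\infty} n q^n = \frac{q}{(1-q)^2}, \qquad \sum_{n=1}^{\infty} n(n-1) q^n = \frac{2q^2}{(1-q)^3}, \qquad \sum_{n=1}^{\infty} n^2 q^n = \frac{q(1+q)}{(1-q)^3}. \end{align} For instance, eq. \eqref{energy-post} follows as ${\rm Tr}(\hat H \rho_{\rm post}) = N_{\rm post}\, \epsilon \sum_{n=1}^\infty n(n-1) q^n = \frac{(1-q)^2}{q} \cdot \frac{2 q^2 \epsilon}{(1-q)^3} = \frac{2\epsilon q}{1-q}$. 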
After letting the system interact for a suitably long time, the state of the system will be: \begin{equation} \label{entangledstate-2} |\Psi \rangle^{(T)}_{\rm late} = A_{\rm late} \sum_{n=0}^{\infty} e^{-\beta n\epsilon} |n \rangle^{(I)} \otimes \left[ |n \rangle^{(II)} \otimes |\downarrow \rangle - i \gamma\sqrt{n}\, |n-1 \rangle^{(II)} \otimes |\uparrow \rangle + \dots \right]. \end{equation} If we project on the subspace corresponding to the excited detector (ignoring the irrelevant factor ${\rm i}$ and ensuring that the final state is normalised) we find: \begin{equation} \label{entangledstate-3} |\Psi \rangle^{(T)}_{\rm post-sel} = A_{\rm post-sel} \sum_{n=0}^{\infty } e^{-\beta n\epsilon} |n \rangle^{(I)} \otimes \left[ \sqrt{n}\, |n-1\rangle^{(II)} + \dots \right], \end{equation} \noindent where in this case $ A_{\rm post-sel} \approx \left( \frac{ e^{-2 \beta \epsilon}}{(1-e^{- 2 \beta \epsilon})^2} \right)^{- 1/2}$. Now the expectation value of the energy of harmonic oscillator $I$ is $ \langle E \rangle_I = \frac { \epsilon (1+ e^{- 2\beta \epsilon}) }{(1-e^{-2 \beta \epsilon})} $, which is different from that of a purely thermal state, while the expectation value for harmonic oscillator $II$ is $ \langle E \rangle_{II} = \frac {2 \epsilon e^{- 2 \beta \epsilon}}{(1-e^{- 2\beta \epsilon})} $. Note that if we add the energy of the excited detector, $\epsilon$, we have that $ \langle E \rangle_{II} + \epsilon = \langle E \rangle_I $. On the other hand, both expectation values of the energy have definitely changed as a result of the ``projection". At this point we might find nothing seriously puzzling, because we have been dealing with systems that are not initially in energy eigenstates. The change in the case of harmonic oscillator $I$ in the above example is, of course, a bit puzzling, due to the fact that this oscillator has not been made to interact directly with a detector to bring about the ``projection"; however, this is nothing more than the usual change occurring as a result of quantum entanglement with a second system which has been subjected to a measurement. It is worth noting that, in dealing with mixed states, it is only when the expectation value of the energy-momentum is extracted from an improper mixture (and thus, indirectly, from a pure state) that it makes sense to use it in semiclassical gravity. If this is done with a proper mixture, what we would obtain is indeed an average value of that quantity over some ensemble (such as in the case of the many realizations involved in stochastic gravity), and then self-consistency will be in doubt\footnote{ In fact, we do not know what it means to make averages over collections of space-times, and the non-linearity of GR clearly casts serious doubt that, say, Einstein's equations would be preserved under any kind of averaging one might want to consider.}. The situation considered in Sec. \ref{sec:ChangeTab} is, however, a bit more troublesome, as it seems to have the potential for a serious violation of our fundamental ideas about relativity; in particular, the potential to offer a path for superluminal communication. \section{Faster than light signaling?} \label{sec:Superluminal} It is widely agreed that our world contains non-local features. This is reflected for instance in the violations of Bell's inequalities \cite{Bell}, which have been experimentally confirmed by now in several experiments \cite{Bell experiments, Bell experiments-2, Bell experiments-3, Bell experiments-4, Bell experiments-5}. 
Even simpler theoretical settings, such as the GHZ construction \cite{GHZ} (which has not been experimentally realised due to technical difficulties), are expected to provide further and even more transparent evidence for non-locality. See for instance the discussion about the GHZ scheme in \cite{Maudlin-Book}. Nevertheless, there are a number of arguments, and a general widespread conviction, that such non-locality cannot be exploited to communicate superluminally. Indeed, the non-locality present in the situation examined by Bell does not allow for superluminal communication. Faster-than-light communication would force us to revise the very foundations of special relativity -- and of physics as a whole. In any case, ``textbook" quantum mechanics and its alternatives seeking to resolve the measurement problem do not seem to offer paths that allow for faster-than-light communication. The change that we have noted in the expectation value of the stress-energy tensor in the left Rindler wedge (region II) due to measurements in the right Rindler wedge (region I) -- if detectable -- however seems to provide a path for superluminal communication, which we describe in the following \emph{gedankenexperiment}: {\bf \emph{Gedankenexperiment}:} Suppose a linearly uniformly accelerated observer in the right Rindler wedge, say Alice, is equipped with a highly efficient detector, for which the detection of a Rindler particle is highly probable and the probability of non-detection is negligible. This enables Alice to change the expectation value of the stress-energy tensor in the left Rindler wedge by turning her detector on, or to leave it unchanged by not switching her detector on. A causally disconnected observer in the left Rindler wedge, say Bob, who can probe either the state of the field or, more specifically, the expectation value of the stress-energy tensor (for example by probing the gravitational field with a torsion balance), would then be able to infer whether Alice has or has not turned on her detector in the right Rindler wedge. This seems to be a path for achieving superluminal communication between Alice and Bob. A simple specific protocol that realises the above {\it Gedankenexperiment} is the following. Alice and Bob are given instructions such that Alice can send Bob a signal (for instance, that she has decided on something and the answer is ``yes") by turning on her detector. She would \emph{not} turn her detector on at all if the answer is ``no". Bob will then monitor the value of the expansion of nearby geodesics (which might be technically hard, as he is not moving along one), by, say, looking at freely falling particles that he keeps releasing. Bob would have to be very careful to ensure that nothing he does generates in his surroundings any energy-momentum that could mimic that associated with the change in the state resulting from a detection of a field quantum by Alice's detector (or to take any energy-momentum he generates into account, so as to be able to distinguish it from the one associated with the latter). If at any time along his world-line he detects such a geodesic expansion, he will know Alice's decision is ``yes". The point is that no matter when along his world-line that happens, the communication is superluminal, as Alice and Bob are never in causal contact. 
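The last assertion can be made explicit: any pair of events $p$ in the right wedge (where $x_p > |t_p|$) and $q$ in the left wedge (where $x_q < -|t_q|$) satisfies \begin{align} x_p - x_q > |t_p| + |t_q| \geq |t_p - t_q| \quad \Longrightarrow \quad (x_p - x_q)^2 - (t_p - t_q)^2 > 0, \end{align} and the transverse separations only add to the spacelike character, so the two wedges are entirely spacelike separated; in particular, so are the full world-lines of Alice and Bob. 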
The protocol just described is of course rather poor, because Bob could eventually know if Alice's decision is ``yes", but he would never know if Alice's decision is ``no". We can however remedy this by having Alice, say, use two different kinds of detectors (with, say, vastly different excitation energies, one of them being tenfold the other, or with different orientations), which could produce two different changes in the energy-momentum tensor at Bob's location. The detection of one of the two changes would mean the answer is ``yes" and the other that the answer is ``no". This would correspond to a complete 1-bit signaling. Again, the efficiency and reliability of this protocol depend strongly on the magnitudes of Alice's and Bob's absolute accelerations $a_{\rm A}$ and $ a_{\rm B}$, as well as on the change in the state of the quantum field induced by the excitation of Alice's detector (whose timing, we recall, she cannot control, as she can only decide whether or not to turn the detector on). This change would depend also on the details of Alice's detector, which we can modify by selecting, say, the energy gap, $\Omega$, with which the detector operates. Ignoring switching effects, reasonably good performance for the detector could be achieved by setting the energy gap near the Unruh temperature (in natural units), since for long-time interactions the transition probability of the detector should behave as a Planckian distribution \cite{Fewster:2016ewy}. This part can be modified, and we see no obstacle, in principle, to making this quantity arbitrarily large. At the same time, the distance $D_{\rm A}$, along a hypersurface orthogonal to the boost orbits, from Alice's world-line to, say, the bifurcation surface of the Killing horizon, is determined by $a_{\rm A}$ and decreases as the latter increases, so for larger $D_{\rm A}$ the protocol becomes poorer. Optimizing the protocol performance with respect to $ a_ {\rm B}$ presumably will have to balance the fact that the analogous distance from Bob to the bifurcation surface decreases with increasing $a_{\rm B}$ against the detailed functioning of the device he uses in detecting the expansion of nearby geodesics (an analysis that does not seem very easy to carry out). In any event, we think that even if the optimal level of efficiency is not very high, that does not detract from the fact that in principle there are no trivial or evident obstacles for the protocol to work at some level of reliability, and any nonzero reliability would represent the opening of a door for superluminal communication. Indeed, there currently exist serious proposals to measure the gravitational field with high precision, e.g. \cite{SmallGrav}. We will however discuss, further down the manuscript, what seem to be the most natural possibilities for nature to prevent, in principle, the operation of such a protocol. In any event it seems that, even if not highly effective, the possibility of Alice signaling to Bob in a superluminal way should be considered problematic, in view of the implications it would have for our understanding of the world. 
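As a side remark on the detector-gap tuning mentioned above: for a pointlike, uniformly accelerated two-level detector coupled to a massless field, with long adiabatic switching, the standard result (see e.g. \cite{Louko:2007mu, Fewster:2016ewy}) is an excitation rate per unit proper time of the Planckian form \begin{align} \dot{\mathcal{P}}(\Omega) \propto \frac{\Omega}{e^{2 \pi \Omega / a} - 1}, \end{align} i.e., a thermal response at the Unruh temperature $T_{\rm U} = a/(2\pi)$, so the detector responds appreciably only for gaps $\Omega \lesssim a/(2\pi)$, which is the regime invoked above. 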
In fact, the situation is aggravated by the fact that even if the detector does not detect a particle, there are higher-order effects (in the coupling constant) that induce changes in the stress-energy tensor in the left Rindler wedge \cite{AudMul}.\footnote{Although it is not clear if the effect when no particle is absorbed is an artifact of perturbation theory.} Thus, it seems imperative to consider the caveats that might provide a path to avoid such a problematic communication protocol. Before we start, let us make some observations regarding semiclassical gravity. Semiclassical gravity is a formalism in which the gravitational field is taken as dynamical but treated in classical terms as the metric of a spacetime $(M, g_{ab})$ in general relativity, while the matter is treated in the language of quantum fields on such a spacetime. The metric is taken to satisfy the semiclassical version of Einstein's field equations \begin{equation} \label{semi-simple} G_{ab} (x) = 8\pi G \langle T^{\rm ren}_{ab} (x) \rangle, \end{equation} with $\langle T_{ab}^{\rm ren} \rangle $ the expectation value of the matter's renormalised quantum stress-energy tensor in a suitable quantum state, while the quantum matter obeys the dynamics of QFT. See e.g. \cite{QGFord} for a review. In incorporating state collapses in the semiclassical gravity context, we remark that there is an issue, which we have touched on briefly above. The point is that once something like state reduction is considered (be it in the Copenhagen approach or in any of the spontaneous collapse models, e.g. \cite{collapse theories, collapse theories-2, collapse theories-3, collapse theories-4, collapse theories-5, collapse theories-6, collapse theories-7, collapse theories-8, collapse theories-9, collapse theories-10, collapse theories-11, collapse theories-12}), one has the ambiguity of which state should be used in computing the right-hand side of eq. \eqref{semi-simple}. In other words, it seems natural that if we are interested in the value of the left-hand side of eq. \eqref{semi-simple} at a point $x$, the right-hand side should be computed using a state associated with a Cauchy surface that passes through $x$, but of course there are infinitely many such surfaces, and the proposal to consider eq. \eqref{semi-simple} even as an approximation must be completed with a detailed prescription in this regard in order to at least have a well-defined proposal \cite{Hypersurface, Hypersurface-2, Hypersurface-3, Hypersurface-4, Hypersurface-5}. We should emphasise however that no matter what Cauchy surface is used, a portion of the Cauchy surface must extend into the left Rindler wedge. Thus, assuming that the collapse occurs along any arbitrary Cauchy surface, the left Rindler wedge must include a ``pre-collapse" and a ``post-collapse" region with different spacetime geometry -- according to semiclassical gravity. 
Let us now offer, and briefly discuss, what we think is the list of serious options to be considered that could help in avoiding the conclusion of superluminal signaling: \begin{itemize} \item[(i)] {\it Strong enough departures from semiclassical gravity, at least in region II.} \item[(ii)] {\it Existence of effects that are indistinguishable from observations of the change in the stress-energy in region II.} \item[(iii)] {\it A fundamental undetectability of the change of the state in region II, and in particular that of the expectation value of the stress-energy tensor in said region.} \item[(iv)] {\it Some fundamental impediment to the construction of the set-up.} \end{itemize} Let us now briefly discuss the options (i)-(iv) considered above. (i) First, and in order to address the issue, we must clarify what is meant by the words ``strong enough". We take that to indicate that, as a result of some unknown aspect of physics (originating, say, in aspects of quantum gravity), the validity of eq. \eqref{semi-simple} with the right-hand side in the updated state, cf. the discussion of Sec. \ref{sec:ChangeTab}, would be violated to such a large degree that, for any design of Bob's measuring instrument, the expected result will be modified by a factor of at least the same order of magnitude. As we have pointed out in Sec. \ref{sec:ChangeTab}, it is possible to make the expectation value of the stress-energy tensor at points in region II arbitrarily large in the post-collapse state by increasing the detector gap. However, it must be acknowledged that if $\Omega$ becomes large enough, modelling the state collapse in semiclassical gravity along a Cauchy surface could be questioned, on the basis that a large change of the stress-energy tensor along a Cauchy surface is too strong a deviation from the semiclassical regime. Thus, one might argue that although one could trust semiclassical gravity before and after the state collapse, there is no way to model the system ``during" the state collapse in semiclassical terms, and a more refined understanding of how to model state collapses in semiclassical gravity could be required. We note however that even if that is the case, such a conclusion will not prevent the superluminal signaling, as all we need is for Bob to detect the effect at any time. On the other hand, it seems rather unlikely that arbitrarily large departures from semiclassical gravity will be associated with such a simple and rather common situation. The full analysis of the question thus requires consideration of the means by which Bob might detect the corresponding change in the spacetime metric in region II.\footnote{ We will not study here in detail the detection method to be employed by Bob, but just point out that interesting ideas to measure small gravitational effects have been used in, say, searches for deviations from the universality of free fall; we also note a recent proposal to look for Planck-scale dark matter \cite{DarkMaterSerach}. } As it seems clear that a modification of the spacetime curvature might, in principle, be measured by the study of the geodesic deviation equation, and in particular the expansion of the congruence of geodesics in that region, it appears that the question will have to be connected at least to some fundamental limitation on the validity of the notion of test point particles following geodesics of the underlying metric. 
There are of course some limitations on that arising from simple quantum considerations about the description of so-called free point particles, which in fact negate the notion of well-defined trajectories. Moreover, as discussed in \cite{Chryss}, the standard quantum mechanical minimal de-localization of a quantum particle (characterized for instance by its Compton wavelength) implies that such a particle ought not to be considered as pointlike, and, as is well known, in general, even at the classical level, extended objects fail to follow geodesics. It is unclear at this point if these kinds of considerations will be enough to dismiss our example, given the fact that, as noted, the effect could be made as large as one wants and the time available for Bob to make the measurement is arbitrarily large. (ii) Here we must consider the existence of other effects that might not be effectively distinguished from the changes resulting from the collapse. Those might be intrinsically associated with the gravitational effects of the measuring devices that Bob would be introducing in order to detect the changes in the spacetime metric in region II. They might also be associated with the mere existence of (and the non-vanishing stress-energy tensor corresponding to) Alice and her detectors, whose own gravitational effects we have been neglecting so far. Although classically these effects will in principle propagate causally and not affect region II, it is likely that deviations from Minkowski spacetime are reflected in the \emph{non-local} state of the field from a semiclassical-gravity viewpoint, in such a way that the quantum state of the field cannot in principle be the Minkowski vacuum state to begin with. Moreover, the stress-energy of Alice and her apparatus must be considered in the constraints of the theory, and the spacetime will have a non-vanishing ADM mass, reflected in the asymptotic behavior in all spacetime directions. Another possibility is to consider that the model used for Alice's detector, although standard in the literature, is only a local model (in region I) of the detector. Treating Alice's detector as a quantum field yields a detector model with support in both regions I and II. In the Heisenberg picture, we know that the existence of a field-like detector for Alice will only affect the dynamics in the causal future of the coupling region between Alice's detector (or probe) and the quantum field considered as a system. This can also be formalised in the algebraic QFT language \cite{Fewster:2018qbm}. However, as emphasised in \cite{Fewster:2018qbm}, a local and covariant measurement scheme is only able to describe the \emph{probe-system measurement chain} under the assumption that somewhere, someone knows how to measure something. Thus, it seems that the issue of sending a faster-than-light signal by the ``act of measurement" cannot be resolved by simply changing the detector model. Sorkin has pointed out in his ``impossible measurements" protocol \cite{Sorkin Imposible} that the existence of spatially extended detectors would make faster-than-light signaling almost unavoidable. See however \cite{Bostelmann:2020unl}. (iii) This possibility could arise for a variety of reasons. 
For instance, it might be that the appropriate versions of semiclassical gravity that will hold are such that the expectation value of the stress-energy tensor at any point is taken in states associated with hypersurfaces which would not incorporate the change in the state resulting from a collapse or a measurement in region I. One such option would be to require the expectation value of the stress-energy tensor appearing on the right-hand side of Einstein's semiclassical equation at $x$ to be computed using the state corresponding to something like the hypersurface $\partial J^{-} (x)$ (the boundary of the causal past of $ x$), and, at the same time, that the detailed theory characterising the measurement (i.e., something like the spontaneous collapse theories considered in \cite{Collapse5, Collapse6, Collapse7}) is such that the state associated with $\partial J^{-} (x)$ is unaffected. A scheme of that kind would ensure that the measurement of Alice would have no consequences, from semiclassical gravity, for spacelike-separated events. Such a proposal is not without difficulties, one of which is the fact that in general $\partial J^{-} (x)$ is not a Cauchy surface, and another is that it is not smooth at $x$. The first difficulty might be resolved if one has some good reason to consider that there is a certain initial state associated with an initial Cauchy hypersurface $ \Sigma_{\rm in}$, which might be considered as the initially ``{\it prepared}" state, or perhaps {\it the initial state of the universe} or something like that, and then to compute the expectation values of quantities of interest at $x$ using the state associated with the surface $ ( I^{+} (\Sigma_{\rm in}) \cap \partial J^{-} (x) )\cup (\Sigma_{\rm in} \setminus J^{-} (x)) $, while the second problem might be dealt with by an adjustment of the recipe, based on taking appropriate limits of a succession of smooth Cauchy hypersurfaces that have as their limit (in a suitable sense) the hypersurface indicated above. The issue is however quite delicate, as illustrated by the discussions in \cite{Hypersurface}, the pursuit of which lies outside the scope of the present work. An argument in favour of the impossibility of detecting the signal is the following. One can assume that stress-energy conservation must hold \emph{on average}, i.e., for successive measurements, which prevents one from extracting arbitrarily large amounts of energy from the quantum field by detector measurements, as argued in \cite{Unruh-Wald}. Thus, an arbitrarily efficient detector must produce arbitrarily small changes in the stress-energy tensor when it detects a Rindler particle. The issue now becomes whether there exists an ``engineering window" where the efficiency of the detector and the amplitude of the signal can be balanced, such that a faster-than-light signal is measurable by Bob and such that the signal can be sent with sufficient certainty by Alice. On this point, it follows from the inequality $\Delta E \Delta T \geq \hbar $ that to detect such a small signal Bob requires a large amount of time. On the other hand, in principle, Bob has an infinite amount of time to measure an arbitrarily small signal, for he can move along a boost Killing orbit in the left Rindler wedge, and one could argue that such an engineering window can always be found. 
However, any physical signal sent by Alice must decay as it approaches $\mathscr{I}^+$. Thus, it is expected that the signal will become weaker as Bob's proper time elapses, and this might render the signal ``effectively undetectable". (iv) Finally, there could be aspects of the set-up that would make it simply unfeasible. One possibility we should consider is the following. In order for Bob to be sure that the change in the energy-momentum he observes corresponds indeed to a signal that was sent by Alice, he must be sure that similar signals are not reaching him from elsewhere. That is, he must be quite sure that the state of the quantum field prior to Alice's manipulation of her detector is the Minkowski vacuum, and, as he must be ready to receive Alice's signal at any point in his world-line, he must be sure that such a characterization of the state of the quantum field is the appropriate one up to arbitrarily distant regions of ``space". So there must be a preparation of the initial state of the quantum field on an extremely large spatial region, occurring well before the whole protocol is even started (and if we want to ensure Bob has in effect an arbitrarily long time to make the detection, it seems the preparation of the state ought to encompass a full Cauchy hypersurface). However, as already noted, the simultaneous (in some reference frame) measurements of observables associated with such extended spatial regions -- say, as a means of state preparation -- are known to be rather problematic in their own right, as argued by Sorkin \cite{Sorkin Imposible} (again, see however \cite{Bostelmann:2020unl}). In any case, such arguments must be considered with care because, in principle, even a very unreliable communication protocol which allows faster-than-light communication would be quite problematic. We think these ideas deserve much deeper exploration. \section{Final remarks} \label{sec:Conclusions} We have revisited the issue of the detection of a Rindler particle by an accelerated detector confined to the right Rindler wedge, focusing attention on the implications of the reduction of the state associated with the actual detection, or what is often referred to as the measurement part of the process. We have noted that the concomitant modification of the state of the quantum field implies changes in the (expectation value of the renormalised) stress-energy tensor in both Rindler wedges. Concerning the right Rindler wedge, it is interesting that the resulting state and stress-energy tensor expectation values become non-thermal, which is, in a sense, easy to understand as a result of the disruption brought about by the interaction with the detector. Of course, one expects that in the long run further interaction between the field and detector, or among the field modes themselves, will bring the system to a new state of thermal equilibrium. Concerning the left Rindler wedge, however, the change in the stress-energy tensor expectation value is more problematic, as it seems to open a possibility for faster-than-light communication. We have noted some of what we see as the most natural possibilities to avoid such a conclusion, but further studies of these issues are required in order to arrive at more definite and solid conclusions. 
For the moment, we can only stress that the fuzziness in the theoretical characterization of the act of measuring in quantum theory can bring about complications even in apparently innocuous circumstances, such as in the context of the detection of a Rindler particle in Minkowski spacetime. Although the measurement problem is overlooked in many practical applications of physics, the literature on it is quite large, and the positions taken in its face are quite varied. We do not intend to discuss them in detail. It suffices for us to mention that the paths to overcome the measurement problem have been classified by Maudlin \cite{TimTopoi} as follows: It is internally inconsistent to hold simultaneously the following three propositions about a quantum theory: \begin{enumerate}[(i)] \item The description of a physical system as provided by the quantum state or wave function is complete. \item The evolution of the quantum state is always dictated by the Schr\"odinger equation (or its relativistic generalisations). \item Individual experiments produce definite (even if often unpredictable) results. \end{enumerate} Thus, one must negate at least one of (i)-(iii) above. Negating (i) leads down, in general, the path of the so-called {\it hidden variable theories}, of which the de Broglie-Bohm theory is the best known example \cite{Bohm}. Negating (ii) implies that the collapse or reduction of the wave function plays a central r\^ole, as in the Copenhagen textbook interpretation, as well as in the so-called {\it spontaneous collapse} or {\it dynamical state reduction} theories, such as GRW, CSL, Penrose-Di\'osi, etc. \cite{Collapse1,Collapse2, Collapse3, Collapse4, Collapse5, Collapse6, Collapse7}. The negation of (iii) leads to {\it many-worlds}- or {\it many-minds}-type interpretations \cite{ManyWorlds1, ManyWorlds2}. In any case, we should restate that the superluminal tension relies on the application of two central hypotheses in this work. The first one is that measurements induce the collapse of the wavefunction, and that such a collapse occurs on a Cauchy surface of spacetime. The second one is that the problem studied in (nearly) flat spacetime is within the regime of applicability of semiclassical gravity. It is quite possible that either one of the two hypotheses fails, or that they cannot be taken to hold together. If this is the case, it would seem that there exist apparently innocuous situations, such as the one studied here, for which a correct theoretical description of the evolution of the system must resort to quantum gravity. This seems to be in agreement with conclusions drawn from \cite{Belenchia:2019gcc, Wald:2020jow}. However, if that is the case, it seems likely that other situations that are often discussed using semiclassical language, for instance Hawking radiation and the backreaction in black holes (even at early times), might require reconsideration as well. Finally, it is our hope to draw the attention of the community to these delicate issues that afflict our understanding of quantum theory in general and of QFT in particular, especially in gravitational contexts. \section*{Acknowledgements} We gladly thank Prof. George Matsas for his careful reading of a previous version of this work and very useful comments, and Prof. Stephen A. Fulling for very stimulating discussions, as well as Ruth E. Kastner for pointing out a few mistakes in the original version of this work. B.A.J.-A. is supported by a CONACYT postdoctoral fellowship. D.S. 
acknowledges partial financial support from PAPIIT-UNAM, Mexico (Grant No. IG100120); the Foundational Questions Institute (Grant No. FQXi-MGB-1928); and the Fetzer Franklin Fund, a donor-advised fund of the Silicon Valley Community Foundation. B.A.J.-A. and D.S. acknowledge the support of CONACYT grant FORDECYT-PRONACES no. 140630. This article has been prepared as a contribution to the Topical Collection ``Acceleration and Radiation: Classical and Quantum, Electromagnetic and Gravitational" of the journal Symmetry, edited by Prof. Stephen A. Fulling.
{ "timestamp": "2022-04-26T02:47:35", "yymm": "2105", "arxiv_id": "2105.01831", "language": "en", "url": "https://arxiv.org/abs/2105.01831" }
\section{Momentum-based Loss (MBL) Functions} As stated in the main text, the weights for the different losses, $\lambda_i,~i=1,2,3,4$, are treated as hyperparameters. To automate the procedure of hyperparameter search, we design momentum-based weights, which are adjusted adaptively during the training process. This module is primarily proposed for the automatic selection of the four hyperparameters, not for performance improvement. {\em Momentum-based Weights.} The key advantage of using momentum is to combine current exploration with historical information. For each training epoch, we maintain a mean value for each type of loss, $\bar{L}_i^{(k)}$, which is calculated over the first $k$ training samples ($i=1,2,3,4$ stand for the {\em rec, ddi, bce, multi} parts, respectively). We use $L_i^{(k)}$ to denote the actual loss at the $k$-th training point, and the momentum-based weights, $\lambda_i^{(k)}$, are calculated from the relative difference from the mean, {\footnotesize \begin{align} \vspace{-1mm} \mbox{diff}_i^{(k)} &= \frac{L_i^{(k)} - \bar{L}_i^{(k-1)}}{\bar{L}_i^{(k-1)}}, \\ \vspace{-1mm} \lambda_i^{(k)} &= \gamma\cdot\frac{\exp(\mbox{diff}_i^{(k)})}{\sum_{j}\exp(\mbox{diff}_j^{(k)})} + (1-\gamma)\cdot\lambda_i^{(k-1)}, \vspace{-1mm} \end{align} }% where we use the same $\gamma$ as in Eqn.~\eqref{eq:loss} to balance current exploration and historical inertia. Meanwhile, the running means are also updated by \begin{equation}\footnotesize \bar{L}_i^{(k)} = \frac{L_i^{(k)} + (k-1)\cdot\bar{L}_i^{(k-1)}}{k}. \end{equation} The momentum weights are in the same spirit as the negative control. They enable our model to automatically identify and focus on the under-optimized parts of the loss while utilizing historical loss information, and empirically provide stable results. Note that in the model implementation we further employ a DDI threshold, $\eta\in(0,1)$: if the resulting DDI rate of the $k$-th sample is below the threshold, we directly set $\lambda_2^{(k)}=0$. (A minimal code sketch of this update rule is given at the end of this appendix.) We provide an ablation study on the momentum-based loss in Table~\ref{tb:MBL}. \begin{table}[!htbp] \small \caption{Ablation Study on Momentum-based Loss (for MIMIC-III / DS2)} \vspace{-2mm} \centering\begin{tabular}{c|c|c} \toprule Metrics & \texttt{MICRON}\xspace & \texttt{MICRON}\xspace with MBL \\ \midrule DDI & 0.0695 / 0.0143 & 0.0690 / 0.0147 \\ Jaccard & 0.5234 / 0.3634 & 0.5252 / 0.3479 \\ F1 & 0.6778 / 0.4544 & 0.6807 / 0.4536 \\ Err(add) & 6.090 / 2.088 & 6.255 / 2.115 \\ Err(remove) & 5.853 / 1.213 & 5.623 / 1.157 \\ \bottomrule \end{tabular} \label{tb:MBL} \vspace{-1mm} \end{table} \section{Smart Inference} This module can potentially improve the efficiency and interpretability of our model. Our model can indicate when each medicine is added or dropped and why (based on which diagnoses or procedures appear or disappear). Typically, during the inference phase, $\mathbf{r}^{(t)}$ is calculated by {\small \begin{align} \mathbf{r}^{(t)}&=\mathbf{h}^{(t)}-\mathbf{h}^{(t-1)} \notag\\ &{ =\mbox{NET}_{health}\left(\left[\mathbf{d}^{(t)}_e~\|~\mathbf{p}^{(t)}_e\right]\right) - \mbox{NET}_{health}\left(\left[\mathbf{d}^{(t-1)}_e~\|~\mathbf{p}^{(t-1)}_e\right]\right)}\label{eq:affine} . 
\section{Smart Inference} This module can potentially improve the efficiency and interpretability of our model: the model can report when each medicine is added or dropped and why (i.e., based on the appearance or disappearance of which diagnoses or procedures). Typically, during the inference phase, $\mathbf{r}^{(t)}$ is calculated by {\small \begin{align} \mathbf{r}^{(t)}&=\mathbf{h}^{(t)}-\mathbf{h}^{(t-1)} \notag\\ &=\mbox{NET}_{health}\left(\left[\mathbf{d}^{(t)}_e~\|~\mathbf{p}^{(t)}_e\right]\right) - \mbox{NET}_{health}\left(\left[\mathbf{d}^{(t-1)}_e~\|~\mathbf{p}^{(t-1)}_e\right]\right). \label{eq:affine} \end{align}} However, we can trace the cause back to the very beginning if $\mbox{NET}_{health}$ is \textit{affine} (the bias term cancels in the difference), so that \begin{equation} \mbox{NET}_{health}(\mathbf{x}-\mathbf{y}) = \mbox{NET}_{health}(\mathbf{x}) - \mbox{NET}_{health}(\mathbf{y}),~\forall \mathbf{x},\mathbf{y}, \notag \end{equation} in which case Eqn.~\eqref{eq:affine} can be written as {\small \begin{align} \mathbf{r}^{(t)} &= \mbox{NET}_{health}\left(\left[\mathbf{d}^{(t)}_e-\mathbf{d}^{(t-1)}_e~\|~\mathbf{p}^{(t)}_e-\mathbf{p}^{(t-1)}_e\right]\right) \notag \\ &= \mbox{NET}_{health}\left(\left[(\mathbf{d}^{(t)}-\mathbf{d}^{(t-1)})\cdot\mathbf{E}_d~\|~(\mathbf{p}^{(t)}-\mathbf{p}^{(t-1)})\cdot\mathbf{E}_p\right]\right). \label{eq:fast} \end{align}} This equation reveals the causal link between updates in the diagnosis and procedure measurements and the update in the patient health representation. For example, suppose a patient receives an extra X-ray at the current visit while all other diagnosis and procedure measurements are unchanged from the previous visit. The model input then reduces to the X-ray code alone, which will promote some medicines and discourage others based solely on the X-ray information. Eqn.~\eqref{eq:fast} is also practically efficient, since the clinical update $\mathbf{d}^{(t)}-\mathbf{d}^{(t-1)}$ (or $\mathbf{p}^{(t)}-\mathbf{p}^{(t-1)}$) is much sparser than the original diagnosis vectors $\mathbf{d}^{(t-1)}$ and $\mathbf{d}^{(t)}$ (or procedure vectors $\mathbf{p}^{(t-1)}$ and $\mathbf{p}^{(t)}$), and it requires only a one-time computation of $\mbox{NET}_{health}$ for the residual health representation.
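As a sketch of how Eqn.~\eqref{eq:fast} plays out in code, assuming, as stated above, that $\mbox{NET}_{health}$ is a single linear layer, and noting that any bias cancels in the difference so only the weight matrix is applied to the sparse update; the helper name and the shapes are illustrative only.
\begin{verbatim}
import torch
import torch.nn as nn

D, P, s = 1958, 1430, 64                 # |D|, |P| and embedding size
E_d = nn.Embedding(D, s)                 # diagnosis table (rows = codes)
E_p = nn.Embedding(P, s)                 # procedure table
net_health = nn.Linear(2 * s, s)         # affine NET_health

def residual_health(d_prev, d_cur, p_prev, p_cur):
    """Compute r^(t) from the sparse code updates (Eqn. eq:fast).

    d_*, p_* are float multi-hot vectors of shape (|D|,) / (|P|,).
    """
    dd = (d_cur - d_prev) @ E_d.weight   # (d^(t) - d^(t-1)) E_d
    dp = (p_cur - p_prev) @ E_p.weight   # (p^(t) - p^(t-1)) E_p
    x = torch.cat([dd, dp], dim=-1)
    # Bias-free application: the bias of NET_health cancels in
    # h^(t) - h^(t-1), so only the weight matrix is needed here.
    return x @ net_health.weight.T
\end{verbatim}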
\section{Datasets and Hyperparameter Settings} Here are the details of the two datasets used in the paper. \begin{itemize} \item (i) {\em MIMIC-III} \cite{johnson2016mimic} is a benchmark inpatient dataset that collects electronic ICU records of the Beth Israel Deaconess Medical Center between 2001 and 2012. We use the diagnosis, procedure, and medication data and filter out patients with only one visit, so every remaining patient has at least 2 visits (and up to 29) in this data. From the original MIMIC-III files ``DIAGNOSES\_ICD.csv", ``PROCEDURES\_ICD.csv" and ``PRESCRIPTIONS.csv", we extract the diagnosis, procedure, and medication code lists, respectively. These three files are then merged by patient id and ``HADM\_ID", which is also used as the visit id. After merging, the diagnoses and procedures are ICD-9 coded. \item (ii) {\em IQVIA PharMetrics Plus} is a private outpatient dataset. Patients in this database are generally representative of the under-65 commercially insured population in the U.S. who received treatment without being admitted to a hospital from 2015 to 2019. We keep visit records with at least $3$ medications. In this dataset, patients usually have around 10 visits. Similar to Figure~1 in the main text, we provide the Jaccard coefficient distribution for the {\em IQVIA} dataset in Figure~\ref{fig:jaccard_distribution2}. Again, we observe that medications have larger overlaps than diagnoses. \end{itemize} \begin{figure}[thbp!] \centering \includegraphics[width=3.3in]{figure/Jaccard2.pdf} \vspace{-3mm} \caption{Frequency Histogram of Jaccard Coefficients on {\em IQVIA}.} \vspace{-3mm} \label{fig:jaccard_distribution2} \end{figure} The experimental settings are largely adopted from GAMENet \cite{shang2019gamenet} with minor adjustments. For both datasets, we extract the DDI information for the Top-40 severity types from TWOSIDES \cite{tatonetti2012data}. We use ATC-3 level drug coding for all medications; we use ICD-9 codes for diagnoses and procedures in {\em MIMIC-III}, while for {\em IQVIA} we use ICD-10-CM codes for diagnoses and CPT/HCPCS codes for procedures. In MIMIC, a ``visit" is identified by the field ``HADM\_ID" in the raw data files. When running the experiments, we use multi-hot encodings for the diagnosis and procedure vectors. The two datasets are randomly split into training, validation, and test sets by $60\% : 20\% : 20\%$. For {\em MIMIC-III}, we set the embedding size $s=64$ for the diagnosis and procedure tables, $\mathbf{E}_d,\mathbf{E}_p$. The health representation network, $\mbox{NET}_{health}$, is an affine transformation, implemented as one linear layer without an activation function. The prescription network, $\mbox{NET}_{med}$, is a two-layer feed-forward neural network with $256$ hidden units. The hyperparameters, $\gamma=0.75$ and $\eta=0.08$, are selected on the validation set. We choose RMSprop as the optimizer, with a learning rate of $2e^{-4}$ and weight decay of $1e^{-5}$. For {\em IQVIA}, we set $s=32$ and choose a three-layer $\mbox{NET}_{med}$ with $128$ hidden units and the hyperparameters $\gamma=0.75$ and $\eta=0.03$, while the learning rate is $5e^{-4}$. For both datasets, the hyperparameters for the loss function are $\lambda_i=0.25,~i=1,2,3,4$. The implementations of the baseline methods follow RETAIN\footnote{https://github.com/mp2893/retain}, LEAP, and GAMENet\footnote{https://github.com/sjy1203/GAMENet}. For the MIMIC-III dataset, the hyperparameters of the baselines are mainly selected by grid search. \begin{itemize} \item For RETAIN, we choose two 64-dim GRUs as the implementation of the two-level RNN, with a dropout rate of 0.5 on the output embedding. RMSprop is used as the optimizer with a learning rate of $5\times 10^{-4}$ for RETAIN and the other deep learning baselines. \item LEAP is also implemented with a 64-dim GRU as the feature encoder. However, we choose a dropout rate of 0.3 between the two layers, as 0.3 works better than 0.5 on the validation set. In particular, since LEAP is a sequence-based model, we set 20 as the maximum drug combination size. \item For GAMENet, we use the same set of hyperparameters reported in the original paper \cite{shang2019gamenet}: an expected DDI rate of 0.05, an initial annealing temperature of 0.85, mixture weights $\pi=[0.9, 0.1]$, and likewise 64-dim embedding tables and a 64-dim GRU as the RNN. We find that these hyperparameters work well on the validation set, so we keep them for testing. \end{itemize} For {\em IQVIA}, the embedding size and hidden layers are reduced to 32-dim. In both datasets, we can only observe a window of a patient's visits, rather than a complete visit history. Since the evaluation is on medication change prediction, we treat the earliest visit appearing in the window as the ``first" visit of a patient $j$, and the evaluation of all models starts from the ``second" visit. All experiments are implemented with {\em PyTorch 1.4.0} on an Ubuntu server with 64GB memory, 32 CPUs, and two GTX 2080Ti GPUs. The reported results are averaged over 5 independent experiments with different random seeds.
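The merge step described above for MIMIC-III can be sketched with pandas as follows; the column names follow the standard MIMIC-III schema, but the exact filtering and the medication normalization to ATC-3 (omitted here) differ in the actual pipeline.
\begin{verbatim}
import pandas as pd

# Load the three raw MIMIC-III tables mentioned above.
cols = ["SUBJECT_ID", "HADM_ID", "ICD9_CODE"]
diag = pd.read_csv("DIAGNOSES_ICD.csv")[cols]
proc = pd.read_csv("PROCEDURES_ICD.csv")[cols]
med = pd.read_csv("PRESCRIPTIONS.csv")[["SUBJECT_ID", "HADM_ID", "DRUG"]]

# Collapse each table to one code list per (patient, visit).
key = ["SUBJECT_ID", "HADM_ID"]
diag = diag.groupby(key)["ICD9_CODE"].apply(list).rename("diag")
proc = proc.groupby(key)["ICD9_CODE"].apply(list).rename("proc")
med = med.groupby(key)["DRUG"].apply(list).rename("med")

# Merge on patient id and HADM_ID; HADM_ID doubles as the visit id.
visits = pd.concat([diag, proc, med], axis=1, join="inner").reset_index()

# Filter out patients with only one visit.
n_visits = visits.groupby("SUBJECT_ID")["HADM_ID"].transform("nunique")
visits = visits[n_visits >= 2]
\end{verbatim}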
\section{Details about Metrics} This section provides the definition of each metric used in the experiment section. We have already presented the calculation of {\em Err(add)} in the main text; here, we show the calculation of {\em Err(remove)}, the DDI rate, the Jaccard coefficient (Jaccard), and the F1-score (F1). All metrics are calculated from the observed ``second" visits onward. \paragraph{Err(remove).} The {\em Err(remove)} metric for a particular patient $j$ is calculated by {\small\begin{align} \small \mathcal{O}_{target, j}^{(t)} &= \tilde{\mathcal{M}}_j^{(t-1)} \setminus \mathcal{M}^{(t)}_j\notag\\ \mbox{Err(remove)}_j &= \frac{1}{V(j)}\sum_{t=2}^{V(j)} \left(|\mathcal{O}_{target, j}^{(t)}\setminus \mathcal{O}_j^{(t)}| + |\mathcal{O}_j^{(t)}\setminus \mathcal{O}_{target, j}^{(t)}|\right)\notag \end{align}} where $\mathcal{O}_{target}^{(t)}$ is the target removal set, $\mathcal{O}^{(t)}$ is the predicted removal set, $\mathcal{A}\setminus\mathcal{B}$ is the set-minus operation, $V(j)$ is the total number of visits of patient $j$, $|\cdot|$ denotes set cardinality, and $|\mathcal{O}_{target,j}^{(t)}\setminus \mathcal{O}_j^{(t)}|$ and $|\mathcal{O}_j^{(t)}\setminus \mathcal{O}_{target,j}^{(t)}|$ are the false negative and false positive counts, in terms of removal, for the $t_{th}$ visit. The final {\em Err(remove)} is obtained by averaging over all patients. \paragraph{DDI rate.} This metric reflects the safety of drug combinations. As stated in the main text, $\mathcal{M}_j^{(t)}$ is the target medication set and $\tilde{\mathcal{M}}_j^{(t)}$ is the estimated medication set for patient $j$ at the $t_{th}$ visit. The DDI rate for patient $j$ is calculated by \begin{equation} DDI_j = \frac{\sum^{V(j)}_{t=2}\sum_{m, n \in \tilde{\mathcal{M}}_j^{(t)}} \mathbf{1}\{\mathbf{A}_{mn}=1\}}{\sum^{V(j)}_{t=2}\sum_{m,n\in \tilde{\mathcal{M}}_j^{(t)}}1}, \notag \end{equation} where $V(j)$ is the total number of visits of the $j$-th patient, $\mathbf{A}$ is the DDI matrix, and $\mathbf{1}\{\cdots\}$ is an indicator function that returns 1 when the expression in $\{\cdots\}$ is true and 0 otherwise. The reported DDI rate is obtained by averaging over all patients. \paragraph{Jaccard.} The Jaccard coefficient for patient $j$ is calculated by \begin{equation} Jaccard_j = \frac{1}{V(j)}\sum_{t=2}^{V(j)}\frac{|\tilde{\mathcal{M}}_j^{(t)}\cap {\mathcal{M}}_j^{(t)}|}{|\tilde{\mathcal{M}}_j^{(t)}\cup {\mathcal{M}}_j^{(t)}|}, \notag \end{equation} and the overall Jaccard coefficient on the test data is obtained by further averaging over all patients. In the main text, we report only the overall Jaccard coefficient. \paragraph{F1.} The F1 score is the harmonic mean of precision and recall. For patient $j$ at the $t$-th visit, the precision, recall, and F1 are calculated by \begin{equation} Precision_j^{(t)} = \frac{|\tilde{\mathcal{M}}_j^{(t)}\cap {\mathcal{M}}_j^{(t)}|}{ |\tilde{\mathcal{M}}_j^{(t)}|}, \notag \end{equation} \begin{equation} Recall_j^{(t)} = \frac{|\tilde{\mathcal{M}}_j^{(t)}\cap {\mathcal{M}}_j^{(t)}|}{ |{\mathcal{M}}_j^{(t)}|}, \notag \end{equation} \begin{equation} F1_j^{(t)} = \frac{2}{\frac{1}{Precision_j^{(t)}}+\frac{1}{Recall_j^{(t)}}}. \notag \end{equation} The F1 score for one patient $j$ is obtained by averaging over their visits, \begin{equation} F1_j = \frac{1}{V(j)}\sum_{t=2}^{V(j)}F1_j^{(t)}, \notag \end{equation} and the overall F1-score on the test data is obtained by further averaging over all patients. In the main text, we report only the overall F1-score.
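A minimal Python sketch of these per-visit metrics, operating on plain sets of medication indices, is given below; the function names are our own, and the per-patient and overall averaging follows the formulas above.
\begin{verbatim}
def visit_jaccard_f1(pred, target):
    """Jaccard and F1 for one visit; pred/target are sets of drug codes."""
    inter = len(pred & target)
    jaccard = inter / len(pred | target) if (pred | target) else 0.0
    prec = inter / len(pred) if pred else 0.0
    rec = inter / len(target) if target else 0.0
    f1 = 2 * prec * rec / (prec + rec) if (prec + rec) > 0 else 0.0
    return jaccard, f1

def patient_ddi_rate(pred_sets, A):
    """DDI rate over a patient's predicted sets; A is the 0/1 DDI matrix."""
    hits, pairs = 0, 0
    for meds in pred_sets:            # visits t = 2 .. V(j)
        for m in meds:
            for n in meds:
                hits += A[m][n]
                pairs += 1
    return hits / pairs if pairs else 0.0

def err_remove(pred_removals, target_removals, V):
    """Err(remove) for one patient: visit-wise FN + FP, averaged by V(j)."""
    total = sum(len(t - p) + len(p - t)
                for p, t in zip(pred_removals, target_removals))
    return total / V
\end{verbatim}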
\section{Standard Deviation and $p$-value} We provide the standard deviation and $p$-value results for the performance comparison on MIMIC-III (Table~\ref{eq:mimic-iii-result}) and IQVIA (Table~\ref{eq:ds2-result}). \begin{table*}[!htbp] \caption{Performance Comparison on {\em MIMIC-III}} \vspace{-2mm} \resizebox{\textwidth}{!}{\centering\begin{tabular}{c|ccccc} \toprule \centering Method & DDI &Jaccard & F1 & Err(add) & Err(remove)\\ \midrule SimNN & 0.0837 $\pm$ 0.0005 (2.5518e-11) & 0.4658 $\pm$ 0.0004 (6.6613e-16) & 0.6235 $\pm$ 0.0004 (4.4408e-16) & 6.856 $\pm$ 0.0054 (9.3702e-14) & 7.329 $\pm$ 0.0123 (1.5543e-15) \\ DualNN & 0.0925 $\pm$ 0.0006 (6.9011e-13) & 0.4880 $\pm$ 0.0004 (8.0380e-14) & 0.6447 $\pm$ 0.0004 (5.5733e-14) & 6.261 $\pm$ 0.0119 (7.6331e-07) & 7.987 $\pm$ 0.0112 (0.0) \\ RETAIN & 0.0932 $\pm$ 0.0016 (1.8352e-09) & 0.4796 $\pm$ 0.0006 (2.9309e-14) & 0.6389 $\pm$ 0.0005 (2.1316e-14) & 8.552 $\pm$ 0.0396 (2.4424e-15) & 6.338 $\pm$ 0.0369 (1.8950e-08) \\ LEAP & 0.0880 $\pm$ 0.0028 (3.2323e-06) & 0.4331 $\pm$ 0.0011 (4.4408e-16) & 0.5953 $\pm$ 0.0010 (4.4408e-16) & 9.105 $\pm$ 0.0356 (2.2204e-16) & 5.939 $\pm$ 0.0335 (0.0124) \\ GAMENet & 0.0928 $\pm$ 0.0007 (1.8409e-12) & 0.4980 $\pm$ 0.0012 (2.4894e-10) & 0.6549 $\pm$ 0.0011 (2.5575e-10) & 8.810 $\pm$ 0.1021 (4.8889e-12) & 5.854 $\pm$ 0.0450 (0.9780) \\ \midrule \texttt{MICRON}\xspace & {0.0695} $\pm$ 0.0004 & {0.5234 } $\pm$ 0.0008& {0.6778 } $\pm$ 0.0007& {6.090} $\pm$ 0.0189& {5.853} $\pm$ 0.0219\\ \bottomrule \end{tabular}} \label{eq:mimic-iii-result} * $p$-values are shown in parentheses. \end{table*} \begin{table*}[!htbp] \caption{Performance Comparison on {\em IQVIA}} \vspace{-2mm} \resizebox{\textwidth}{!}{\centering\begin{tabular}{c|ccccc} \toprule \centering Method & DDI &Jaccard & F1 & Err(add) & Err(remove)\\ \midrule SimNN & 0.0152 $\pm$ 0.0002 (0.0054) & 0.192 $\pm$ 0.0003 (0.0) & 0.2383 $\pm$ 0.0005 (0.0) & 2.906 $\pm$ 0.0031 (0.0) & 1.679 $\pm$ 0.0056 (1.0043e-12) \\ DualNN & 0.0156 $\pm$ 0.0003 (0.0015) & 0.112 $\pm$ 0.0003 (0.0) & 0.1783 $\pm$ 0.0003 (0.0) & 3.406 $\pm$ 0.0069 (0.0) & 2.579 $\pm$ 0.0051 (0.0) \\ RETAIN & 0.0279 $\pm$ 0.0008 (6.7339e-10) & 0.332 $\pm$ 0.0002 (1.5543e-15) & 0.4215 $\pm$ 0.0003 (4.4408e-16) & 2.353 $\pm$ 0.0229 (3.7266e-08) & 0.927 $\pm$ 0.0169 (better) \\ LEAP & 0.0134 $\pm$ 0.0013 (better) & 0.1871 $\pm$ 0.0007 (0.0) & 0.2742 $\pm$ 0.0008 (0.0) & 3.663 $\pm$ 0.0205 (2.2204e-16) & 1.286 $\pm$ 0.0153 (0.0005) \\ GAMENet & 0.0166 $\pm$ 0.0003 (1.7060e-05) & 0.2025 $\pm$ 0.0008 (0.0) & 0.3016 $\pm$ 0.0007 (0.0) & 3.016 $\pm$ 0.0589 (8.6569e-10) & 2.179 $\pm$ 0.0205 (9.7255e-14) \\ \midrule \texttt{MICRON}\xspace & 0.0143 $\pm$ 0.0003 & {0.3634} $\pm$ 0.0005& {0.4544} $\pm$ 0.0004 & {2.088} $\pm$ 0.0104& 1.213 $\pm$ 0.0141\\ \bottomrule \end{tabular}} \label{eq:ds2-result} * $p$-values are shown in parentheses. \\ * On this dataset, RETAIN is better than our \texttt{MICRON}\xspace on the Err(remove) metric, and LEAP is better than our \texttt{MICRON}\xspace on DDI. \end{table*} \section{Conclusion} This paper tackles the medication change prediction problem and proposes a recurrent residual learning model named \texttt{MICRON}\xspace . We compare our model with state-of-the-art approaches and show its effectiveness and efficiency on the inpatient {\em MIMIC-III} dataset and a proprietary outpatient {\em IQVIA} dataset. Like previous medication recommendation works, this paper uses the existing prescriptions as the gold standard: the efficacy of a recommendation is evaluated by comparing it with the prescriptions given in the dataset, which might be a limitation. In the future, we will perform a clinical user study to evaluate our results further.
Also, we would like to consider three practical extensions: (i) enhancing the model by utilizing relations between medications, e.g., similarities; (ii) considering medicine dosages during recommendation; and (iii) penalizing different medications based on patient-specific conditions. \section{Experiments} \vspace{-0.3mm} We evaluate \texttt{MICRON}\xspace against several baselines on both inpatient and outpatient datasets. We focus on answering the following questions: \begin{itemize} \item How does \texttt{MICRON}\xspace perform against the baselines in medication recommendation and change prediction? \item How does \texttt{MICRON}\xspace perform in terms of model efficiency? \item How do the different components of \texttt{MICRON}\xspace contribute to accurate recommendations? \end{itemize} \vspace{-2mm} \subsection{Experimental Setup} \noindent\textbf{Dataset}. We consider a benchmark inpatient dataset, {\em MIMIC-III} \cite{johnson2016mimic}, and a private outpatient dataset, {\em IQVIA PharMetrics Plus} (see the processed statistics in Table~\ref{tb:dataset}). Details of the dataset descriptions, preprocessing, and hyperparameter selection can be found in the Appendix. \begin{table}[h!] \small \caption{Statistics of Datasets} \vspace{-2.5mm}\centering \begin{tabular}{l|cc} \toprule \textbf{Items} & \textbf{MIMIC-III} & \textbf{IQVIA}\\ \midrule \# of visits & 14,960 & 30,794 \\ \# of patients & 6,335 & 3,023 \\ \# of diagnosis codes & 1,958 & 1,744 \\ \# of procedure codes & 1,430 & 1,250 \\ \# of medication codes & 131 & 155 \\ \bottomrule \end{tabular} \label{tb:dataset} \end{table} \noindent\textbf{Baselines}. We consider the following baselines (SimNN and DualNN are designed by ourselves). \begin{itemize} \item {\bf SimNN} uses the same patient representation $\mathbf{h}^{(t)}$ as \texttt{MICRON}\xspace and then learns a simple 3-way classifier for each medicine (add, remove, or remain) with the cross-entropy loss. \item {\bf DualNN} also starts from the patient representation $\mathbf{h}^{(t)}$ and then diverges into two different neural networks, the first for addition and the second for removal. Each neural network classifier uses the binary cross-entropy loss. \item{\bf LEAP} \cite{zhang2017leap} is an instance-based approach that uses a sequence-to-sequence model with reinforcement-learning-based fine-tuning. This method generates a list of medications based on the diagnoses in the same visit. \item{\bf RETAIN} \cite{choi2016retain} is a longitudinal predictive model that designs a specialized attention model over an RNN. It learns the temporal dependencies between clinical visits and makes medication recommendations. \item{\bf GAMENet} \cite{shang2019gamenet} is also a longitudinal model, which uses an RNN, a memory network, and a graph neural network. It predicts medications using historical prescriptions as references. \end{itemize} \noindent\textbf{Evaluation Strategy and Metrics}. We use the DDI rate, Jaccard similarity, and F1-score to evaluate the overall recommended medications, as in other related works \cite{shang2019gamenet,zhang2017leap}. We also design new error metrics to evaluate the accuracy of medication changes: {\em Err(add)} and {\em Err(remove)}. {\em Err(add)} computes the sum of the false positive and false negative parts (the white region in Fig.~\ref{fig:error_metric}) between the predicted addition set $\mathcal{N}^{(t)}$ and the target addition set $\mathcal{N}_{target}^{(t)}$.
The target addition set is calculated by \begin{equation} \mathcal{N}_{target}^{(t)} = \mathcal{M}^{(t)} \setminus \tilde{\mathcal{M}}^{(t-1)}, \notag \end{equation} where $\mathcal{M}^{(t)}$ is the target medication set at the $t_{th}$ visit and $\tilde{\mathcal{M}}^{(t-1)}$ is the predicted medication set at the $(t-1)_{th}$ visit. For a particular patient $j$, this metric is calculated from the second visit, where the medication changes start, {\footnotesize \begin{align} \mbox{Err(add)}_j &= \frac{1}{V(j)}\sum_{t=2}^{V(j)} \left(|\mathcal{N}_{target}^{(t)}\setminus \mathcal{N}^{(t)}| + |\mathcal{N}^{(t)}\setminus \mathcal{N}_{target}^{(t)}|\right), \notag \end{align} } \noindent where $\mathcal{A}\setminus\mathcal{B}$ is the set subtraction and $V(j)$ is the total number of visits of patient $j$. Finally, we average over all patients to obtain {\em Err(add)}. Similarly, we define {\em Err(remove)} for the removal errors. We also report model size and training/inference time. \begin{figure}[t!] \centering \includegraphics[width=2.7in]{figure/Error_metric.png} \caption{Illustration of the Err(add) Metric at the $t$-th Visit} \label{fig:error_metric} \end{figure} \vspace{-0.5mm} Since the evaluation is on medication change prediction, we treat the earliest visit appearing in the window as the ``first" visit of a patient $j$, from which we extract the initial medication set, $\tilde{\mathcal{M}}_j^{(1)}=\mathcal{M}_j^{(1)}$, and the initial medication vector, $\tilde{\mathbf{m}}_j^{(1)}=\hat{\mathbf{m}}_j^{(1)}$. For a fair comparison, the evaluation of all models starts from the ``second" visit. The definitions of the other metrics can be found in the Appendix. \subsection{Experimental Results} \begin{table*}[!h] \caption{Performance Comparison (on {\em MIMIC-III / IQVIA})} \vspace{-2mm} \resizebox{\textwidth}{!}{\centering\begin{tabular}{c|ccccccc} \toprule \centering Method & DDI &Jaccard & F1 & Err(add) & Err(remove) & Model Size & Train(epoch) / Test\\ \midrule SimNN & \underline{0.0837} / {0.0152} & 0.4658 / 0.1920 & 0.6235 / 0.2383 & {6.856} / 2.906 & 7.329 / 1.679 & 1.5MB (376,009 params) & 27.53s / 6.67s \\ DualNN & 0.0925 / {0.0156} & 0.4880 / 0.1120 & 0.6447 / 0.1783 & \underline{6.261} / 3.406 & 7.987 / 2.579 & 1.3MB (325,702 params) & 27.90s / 5.76s \\ RETAIN & 0.0932 / 0.0279 & 0.4796 / \underline{0.3320} & 0.6389 / \underline{0.4215} & {8.552} / \underline{2.353} & 6.338 / \underline{\bf 0.927} & 1.2MB (287,940 params) & 34.78s / 6.97s\\ LEAP & {0.0880} / \underline{\bf 0.0134} & 0.4331 / 0.1871 & 0.5953 / 0.2742 & 9.105 / 3.663 & 5.939 / 1.286 & 1.7MB (433,286 params) & 199.94s / 26.45s \\ GAMENet & 0.0928 / 0.0166 & \underline{0.4980} / 0.2025 & \underline{0.6549} / 0.3016 & 8.810 / 3.016 & \underline{5.854} / 2.179 & 1.8MB (449,092 params) & 55.31s / 8.84s \\ \midrule \texttt{MICRON}\xspace & {\bf 0.0695} / 0.0143 & {\bf 0.5234 / 0.3634} & {\bf 0.6778 / 0.4544} & {\bf 6.090} / {\bf 2.088} & {\bf 5.853} / 1.213 & { 1.1MB (275,395 params)} & { 38.83s / 4.41s}\\ $\Delta$ Improve. & $\downarrow17.0$\% / $\uparrow6.7$\% & $\uparrow5.1$\% / $\uparrow9.5$\% & $\uparrow3.5$\% / $\uparrow7.8$\% & $\downarrow2.7$\% / $\downarrow11.3$\% & $\downarrow0.02$\% / $\uparrow30.9$\% & --- & --- \\ \bottomrule \end{tabular}} {\\~*The results on {\em MIMIC-III} and {\em IQVIA} are reported together, separated by ``/". The last two metrics, Model Size and Train(epoch)/Test, are based on the {\em MIMIC-III} dataset.
For the other five metrics, we mark the best results in {\em bold font} and {\em underline} the best baseline. We also report the improvement $\Delta$ of our \texttt{MICRON}\xspace over the best baseline.} \label{tb:performance} \end{table*} \begin{table*}[!h] \centering \caption{Ablation Study for Different Model Components (on {\em MIMIC-III})} \vspace{-2mm} \resizebox{0.8\textwidth}{!}{\centering\begin{tabular}{l|ccccc} \toprule \centering Variant & DDI &Jaccard & F1 & Err(add) & Err(remove) \\ \midrule \texttt{MICRON}\xspace w/o $L_{rec}$ & 0.0618 $\pm$ 0.0002& 0.4449 $\pm$ 0.0138& 0.6050 $\pm$ 0.0014& 7.143 $\pm$ 0.3753& 6.224 $\pm$ 0.0598\\ \texttt{MICRON}\xspace w/o $\tilde{\mathbf{m}}^{(t)}$ & 0.0696 $\pm$ 0.0004& 0.4509 $\pm$ 0.0046& 0.6096 $\pm$ 0.0368& 8.496 $\pm$ 0.1401& 6.463 $\pm$ 0.1468\\ \texttt{MICRON}\xspace w/o $L_{multi}$ & 0.0780 $\pm$ 0.0015& 0.5020 $\pm$ 0.0072& 0.6590 $\pm$ 0.0288& 6.544 $\pm$ 0.2172& 5.509 $\pm$ 0.0732\\ \texttt{MICRON}\xspace w/o $L_{ddi}$ & 0.0931 $\pm$ 0.0005& 0.5248 $\pm$ 0.0006& 0.6793 $\pm$ 0.0081& 6.402 $\pm$ 0.3020& 5.897 $\pm$ 0.0397\\ \texttt{MICRON}\xspace w/o $\delta_1,\delta_2$ & 0.0628 $\pm$ 0.0018& 0.5074 $\pm$ 0.0016& 0.6635 $\pm$ 0.0026& 7.216 $\pm$ 0.1335& 5.084 $\pm$ 0.2706\\ \texttt{MICRON}\xspace & {0.0695} $\pm$ 0.0004 & {0.5234} $\pm$ 0.0008& {0.6778 } $\pm$ 0.0007& {6.090} $\pm$ 0.0189& {5.853} $\pm$ 0.0219\\ \bottomrule \end{tabular}} \label{tb:ablation} \\ For the {\em DDI}, {\em Err(add)}, and {\em Err(remove)} metrics, lower is better, while for the {\em Jaccard} and {\em F1} metrics, higher is better. \vspace{-2mm} \end{table*} We conduct the experimental comparison with five different random seeds and show the mean metric values in Table~\ref{tb:performance}. \texttt{MICRON}\xspace outperforms all baselines in both inpatient and outpatient settings, especially on the Jaccard and F1 metrics. LEAP gives a relatively good DDI measure on the two datasets; however, its performance is weaker than the other baselines in terms of accuracy. Although SimNN, DualNN, and RETAIN are implemented from very different perspectives (the former two are instance-based while the latter uses sequence modeling), they show neck-and-neck performance on {\em MIMIC-III}. For outpatient medication change prediction (on {\em IQVIA}), RETAIN shows strong performance while some recent state-of-the-art baselines, such as GAMENet, fail. We hypothesize that the time spans between two visits can be much longer for outpatients, and thus the stored memory can be less trustworthy in GAMENet. By learning an effective residual representation, \texttt{MICRON}\xspace provides more accurate and safer medication recommendations in both inpatient and outpatient settings. \texttt{MICRON}\xspace also requires far fewer parameters than the state-of-the-art approaches, making it more efficient. Due to space limitations, the standard deviation results are reported in the Appendix. We also test the model's stability and perform a $t$-test for \texttt{MICRON}\xspace on each metric. In summary, most of the $p$-values are less than $0.001$ (mean $p$-value of 6.2e-5), except in two cases on {\em IQVIA}: the DDI rate compared to LEAP and the Err(remove) compared to RETAIN. \subsection{Ablation Study on Model Components} In this section, we verify the effectiveness of the different components of \texttt{MICRON}\xspace . Specifically, we conduct ablation studies on {\em MIMIC-III} and test the following variants: \begin{itemize} \item (i) \texttt{MICRON}\xspace {\em w/o $L_{rec}$}.
We remove the unsupervised loss during training and train solely on the supervised losses. \item (ii) \texttt{MICRON}\xspace {\em w/o $\tilde{\mathbf{m}}^{(t)}$}. We do not maintain the medication vector, $\tilde{\mathbf{m}}^{(t)}$, and only utilize the update information, $\mathbf{r}^{(t)}$, between two visits; \item (iii) \texttt{MICRON}\xspace {\em w/o $L_{multi}$}. We remove $L_{multi}$, which makes the thresholds, $\delta_1$ and $\delta_2$, less reliable; \item (iv) \texttt{MICRON}\xspace {\em w/o $L_{ddi}$}. We remove the DDI loss, so the model is likely to produce high-DDI combinations; \item (v) \texttt{MICRON}\xspace {\em w/o $\delta_1,\delta_2$}. We set $\delta_1=\delta_2=\delta$, which means that medications with scores greater than or equal to $\delta$ are added, and medications with scores below $\delta$ are removed. This is a common strategy in previous works: $\delta=0.5$ in \cite{shang2019gamenet} and $\delta=0.3$ in \cite{shang2019pre} (the latter model requires ontology information, so it is not included as a baseline). We use $\delta=0.5$ for this variant. \end{itemize} The comparison results with variances (after $\pm$) are shown in Table~\ref{tb:ablation}. Overall, all other variants perform better than variants (i) and (ii), highlighting that the reconstruction design and the initial medication vector are essential to the model. Without the medication vector $\tilde{\mathbf{m}}^{(t)}$, the model cannot retain longitudinal information, so variant (ii) gives poor results. We also notice that without the DDI loss, variant (iv) outputs a significantly higher DDI rate, and \texttt{MICRON}\xspace shows slightly better results than variant (iii) without $L_{multi}$. By integrating all components, \texttt{MICRON}\xspace achieves a more balanced and stable performance across all metrics. \section{Introduction} In recent years, deep learning has demonstrated initial success in potentially assisting clinical decision-making~\cite{almirall2012designing,choi2017using,xiao2018opportunities,mao2019medgcn}. Among other tasks, medication recommendation has drawn considerable research interest~\cite{wang2017safe,wang2019order,shang2019pre,shang2019gamenet,zhang2017leap,killian2019learning,wang2018supervised}. The common strategy for medication recommendation is to learn representations of medical entities (e.g., diagnoses, medications) from electronic health records and to use the learned representations to predict medications that fit the patient's health condition while avoiding adverse drug interactions. Many existing works focus on recommending the full set of medications at each visit~\cite{zhang2017leap,shang2019pre,shang2019gamenet,xiao2018opportunities}, which can be quite redundant given previous visits, because medications often remain stable over time, with large overlaps between consecutive visits. For example, we investigate the Jaccard coefficient over consecutive visits on the {\em MIMIC-III} data \cite{johnson2016mimic} (Fig.~\ref{fig:jaccard_distribution}). Among patients with multiple visits, although most are diagnosed with different conditions during consecutive visits (mean Jaccard below 0.2), their sets of medications remain stable (mean Jaccard around 0.5). \begin{figure}[tbp!] \centering \includegraphics[width=3.3in]{figure/Jaccard.pdf} \vspace{-3mm} \caption{Histogram of Jaccard Coefficients between consecutive visits. We observe relatively weaker overlaps in diagnoses but much stronger overlaps in medications over consecutive visits.
It implies that the medication change is potentially more meaningful to predict. } \vspace{-3mm} \label{fig:jaccard_distribution} \end{figure} However, such medication patterns have rarely been explored and leveraged to augment medication recommendation. The challenges mainly arise from (1) how to accurately characterize the changes in a patient's health condition at each time step, and (2) how to correctly identify the medication changes based on those health changes. To fill this gap, we propose a new recurrent residual learning approach, named MedicatIon Change pRedictiON (\texttt{MICRON}\xspace ), to predict medication changes while simultaneously modeling the longitudinal medical history. \texttt{MICRON}\xspace is enabled by the following technical contributions. \begin{itemize} \item {\bf Efficient representation of changing health conditions}. \texttt{MICRON}\xspace uses a residual health representation to sequentially track changes in patient health conditions, which provides more efficient model inference than RNN-based alternatives. \item {\bf Explicit change-set prediction:} \texttt{MICRON}\xspace decomposes the medication change prediction task into predicting two sets: (1) the {\em removal set}, which removes previous medicines that are no longer needed, and (2) the {\em addition set}, which brings in new medicines for newly developed diseases. Both sets are derived from a recurrently updated medication vector, with addition and removal thresholds selected in the high-confidence region of the validation ROC curve, thus providing a reliable inclusion-exclusion criterion. \end{itemize} We evaluate \texttt{MICRON}\xspace against state-of-the-art models on two real-world patient datasets: the inpatient {\em MIMIC-III} dataset and an outpatient dataset. \texttt{MICRON}\xspace outperforms the best baseline, GAMENet \cite{shang2019gamenet}, with $3.5$\% and $7.8$\% relative improvements in the F1 measure, respectively. In addition, \texttt{MICRON}\xspace achieves a $1.5\times$ speed-up in training and inference compared with GAMENet. \section{Method} \subsection{\texttt{MICRON}\xspace Method} \begin{figure*}[!ht] \centering \includegraphics[width=6.3in]{figure/framework.png} \vspace{-1mm} \caption{\texttt{MICRON}\xspace Framework. To represent a patient's health condition, the model first feeds on diagnosis and procedure information and then generates a compact patient health representation via the {\em health representation network}, an affine function. During training, the model uses a feed-forward network as the {\em prescription network} and learns the residual representation under a novel reconstruction loss. During inference, our model feeds the health update into the same {\em prescription network} and then generates addition/removal sets to update the current prescription.} \label{fig:framework} \vspace{-1mm} \end{figure*} {\bf Overview.} As shown in Fig.~\ref{fig:framework}, \texttt{MICRON}\xspace has three modules: (1) a {\em patient representation module} that embeds diagnosis and procedure codes into a latent health representation; (2) a {\em prescription reconstruction module (training phase)}, in which \texttt{MICRON}\xspace trains on consecutive pairs of visits and learns residual medication representations under a new reconstruction design; and (3) a {\em medication updating module (inference phase)} for model inference, in which \texttt{MICRON}\xspace is initialized with the previous medication information.
For each subsequent visit, \texttt{MICRON}\xspace only requires an update of the patient's health status and then predicts the changes to the existing medications. The key difference between \texttt{MICRON}\xspace and existing medication recommendation models \cite{shang2019gamenet,choi2016retain} is that while those models learn global sequential patterns using RNNs, \texttt{MICRON}\xspace learns sequential information locally (from every two consecutive visits) and propagates it visit-by-visit to preserve the longitudinal patient information. \subsection{Patient Representation} Patient representation aims to learn a compact and indicative vector that represents a patient's status. In a clinical visit, doctors recommend medications based on diagnosis and procedure information; our module likewise feeds on these two features. Since \texttt{MICRON}\xspace is proposed for generic patients, we omit the patient subscript in the following. \smallskip \noindent\textbf{Diagnosis and Procedure Encoders.} For the $t_{th}$ visit, the input features, $\mathbf{d}^{(t)} \in\mathbb{R}^{|\mathcal{D}|}$ and $\mathbf{p}^{(t)} \in\mathbb{R}^{|\mathcal{P}|}$, can be extracted from the clinical record; $\mathbf{d}^{(t)}$ is the multi-hot diagnosis vector, while $\mathbf{p}^{(t)}$ is the procedure vector. Following a strategy similar to \cite{zhang2017leap,shang2019gamenet}, we transform these two vectors into the embedding space using the mapping matrices $\mathbf{E}_d\in\mathbb{R}^{s\times |\mathcal{D}|}$ and $\mathbf{E}_p\in\mathbb{R}^{s\times |\mathcal{P}|}$ ($s$ is the size of the embedding space), \begin{align} \mathbf{d}^{(t)}_{e} = \mathbf{E}_d \mathbf{d}^{(t)}~~~~\mbox{and}~~~~\mathbf{p}^{(t)}_{e} = \mathbf{E}_p \mathbf{p}^{(t)}. \end{align} During training, these two tables are shared across all visits and patients. The results, $\mathbf{d}^{(t)}_{e}$ and $\mathbf{p}^{(t)}_{e}$, have the same dimension $\mathbb{R}^s$. \smallskip \noindent\textbf{Patient Hidden Representation.} To obtain one compact health representation, $\mathbf{d}^{(t)}_{e}$ and $\mathbf{p}^{(t)}_{e}$ are concatenated and parametrized by a {\em health representation network}, $\mbox{NET}_{health}$, \begin{equation} \label{eq:patient_rep} \mathbf{h}^{(t)} = \mbox{NET}_{health}~\left(\left[\mathbf{d}^{(t)}_{e}~\|~\mathbf{p}^{(t)}_{e}\right]\right), \end{equation} which outputs an integrated health representation $\mathbf{h}^{(t)}\in\mathbb{R}^s$. In this paper, we use an affine function (a one-layer neural network without activation) for $\mbox{NET}_{health}$. Unlike previous works \cite{shang2019gamenet,zhang2017leap}, this paper does not use recurrent neural networks (RNNs) to capture a patient's health history. Starting from $\mathbf{h}^{(t)}$, the model architecture differs between training and inference. Next, we elaborate on these two phases.
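The patient representation module is compact enough to sketch directly; the following PyTorch snippet mirrors the two equations above (the class name and the initialization scale are our own choices, and the embedding tables are stored with codes as rows).
\begin{verbatim}
import torch
import torch.nn as nn

class PatientRepresentation(nn.Module):
    """Maps multi-hot diagnosis/procedure vectors to h^(t)."""

    def __init__(self, n_diag, n_proc, s=64):
        super().__init__()
        self.E_d = nn.Parameter(0.01 * torch.randn(n_diag, s))  # E_d
        self.E_p = nn.Parameter(0.01 * torch.randn(n_proc, s))  # E_p
        # NET_health: one linear layer, no activation (affine)
        self.net_health = nn.Linear(2 * s, s)

    def forward(self, d, p):
        d_e = d @ self.E_d        # d_e^(t) = E_d d^(t)
        p_e = p @ self.E_p        # p_e^(t) = E_p p^(t)
        return self.net_health(torch.cat([d_e, p_e], dim=-1))
\end{verbatim}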
\subsection{Training: Prescription Reconstruction Module} \texttt{MICRON}\xspace trains on pairs of consecutive visits, e.g., the $(t-1)_{th}$ and the $t_{th}$ visits. Given the health representations $\mathbf{h}^{(t-1)}$ and $\mathbf{h}^{(t)}$, a straightforward way to recommend medications is to learn a mapping, i.e., a {\em prescription network}, $\mbox{NET}_{med}: \mathbb{R}^s \mapsto \mathbb{R}^{|\mathcal{M}|}$, from the hidden embedding space to the medication space for the two visits separately, \begin{align} \hat{\mathbf{m}}^{(t-1)} &= \mbox{NET}_{med}~(\mathbf{h}^{(t-1)}), \label{eq:complete1}\\ \hat{\mathbf{m}}^{(t)} &= \mbox{NET}_{med}~(\mathbf{h}^{(t)}), \label{eq:complete2} \end{align} where $\hat{\mathbf{m}}^{(t-1)},\hat{\mathbf{m}}^{(t)}\in\mathbb{R}^{|\mathcal{M}|}$ are medication representations and each entry quantifies a real value for the corresponding medicine. In this paper, $\mbox{NET}_{med}$ is implemented as a fully connected neural network. To obtain the actual recommendations, a common approach \cite{shang2019pre,shang2019gamenet} is to apply a {\em medication output layer}, consisting of a {\em Sigmoid} function $\sigma(\cdot)$ followed by a pre-defined threshold $\delta$, picking the medicines with larger activation values. In this paper, however, we want to utilize and emphasize the dependency between $\mathbf{h}^{(t-1)}$ and $\mathbf{h}^{(t)}$ in the model. \smallskip \noindent\textbf{Residual Medication Representation.} Formally, the difference between $\mathbf{h}^{(t-1)}$ and $\mathbf{h}^{(t)}$, i.e., $\mathbf{r}^{(t)} = \mathbf{h}^{(t)}-\mathbf{h}^{(t-1)}$, is called the \textit{residual health representation}; it encodes the changes in clinical health measurements, indicating an update in the patient's health condition. Naturally, the health update $\mathbf{r}^{(t)}$ will cause an update, $\mathbf{u}^{(t)}$, in the resulting medication representation. Our motivation is that if $\mbox{NET}_{med}$ can map a complete health representation (e.g., $\mathbf{h}^{(t)}$) into a complete medication representation (e.g., $\hat{\mathbf{m}}^{(t)}$), then a residual health representation should also be mapped into an update in the same representation space through $\mbox{NET}_{med}$. In other words, $\mathbf{r}^{(t)}$ and $\mathbf{u}^{(t)}$ should follow the same mapping function, \begin{equation} \mathbf{u}^{(t)} = \mbox{NET}_{med}~(\mathbf{r}^{(t)}). \label{eq:residual} \end{equation} To learn Eqn.~\eqref{eq:complete1} and \eqref{eq:complete2}, we can use the medication combinations in the dataset as supervision; however, it is hard to formulate direct supervision for Eqn.~\eqref{eq:residual}. A simple alternative is to model the addition and removal medication sets separately (but as we show in the experiments, such separate modeling, DualNN, does not work well). Therefore, we reconstruct ${\mathbf{u}}^{(t)}$ from $\hat{\mathbf{m}}^{(t-1)}$ and $\hat{\mathbf{m}}^{(t)}$ with both unsupervised and supervised regularization. \smallskip \noindent\textbf{Unsupervised Residual Reconstruction.} To model the medication changes, we design a reconstruction loss. In Eqn.~\eqref{eq:complete1}, \eqref{eq:complete2}, and \eqref{eq:residual}, the inputs follow a residual relation: $\mathbf{h}^{(t-1)} + \mathbf{r}^{(t)} = \mathbf{h}^{(t)}$. Naturally, we impose a similar relation on the {\em medication output layer} by introducing an {\em unsupervised reconstruction loss} ($\sigma(\cdot)$ is the {\em Sigmoid} function), \begin{equation} L^{(t)}_{rec} = \|\sigma(\hat{\mathbf{m}}^{(t-1)}+{\mathbf{u}}^{(t)}) - \sigma(\hat{\mathbf{m}}^{(t)})\|_2, \end{equation} calculated with the $L_2$ norm. This reconstruction loss enforces the recommendation reconstructed from $\hat{\mathbf{m}}^{(t-1)}$ and the residual ${\mathbf{u}}^{(t)}$ to be close to the recommendation given by $\hat{\mathbf{m}}^{(t)}$. We show in the experiments that $L^{(t)}_{rec}$ is essential for learning the residual.
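In code, this loss is essentially a one-liner; the sketch below assumes the three medication representations have already been produced by $\mbox{NET}_{med}$.
\begin{verbatim}
import torch

def reconstruction_loss(m_prev, u, m_cur):
    """Unsupervised residual reconstruction loss L_rec^(t).

    m_prev = NET_med(h^(t-1)), u = NET_med(r^(t)), m_cur = NET_med(h^(t)).
    Enforces sigmoid(m_prev + u) ~ sigmoid(m_cur) in the L2 norm.
    """
    return torch.norm(torch.sigmoid(m_prev + u) - torch.sigmoid(m_cur), p=2)
\end{verbatim}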
\smallskip \noindent\textbf{Supervised Multi-label Classification.} To jointly model a low-DDI output, we introduce three differentiable loss functions that improve $\hat{\mathbf{m}}^{(t-1)}$ and $\hat{\mathbf{m}}^{(t)}$, so as to achieve a better reconstruction ${\mathbf{u}}^{(t)}$. \begin{itemize} \item {\em Drug-Drug Interaction Loss.} Since adverse drug-drug interactions (DDIs) are a leading cause of morbidity and mortality in clinical treatment \cite{percha2013informatics}, we penalize the presence of DDIs in the output medication representation, $\hat{\mathbf{m}}^{(t)}$. First, we transform it with the {\em Sigmoid} function, $\hat{\mathbf{o}}^{(t)} = \sigma(\hat{\mathbf{m}}^{(t)})$, and then define the DDI loss as \begin{equation} L^{(t)}_{ddi} = \sum_{i=1}\sum_{j=1} \mathbf{A}_{ij} \cdot \hat{\mathbf{o}}^{(t)}_i \cdot \hat{\mathbf{o}}^{(t)}_j, \end{equation} where $\mathbf{A}$ is the binary DDI matrix, extracted externally \cite{tatonetti2012data}, and $\mathbf{A}_{ij}$ indicates whether medicines $i$ and $j$ interact. The term $\mathbf{A}_{ij} \cdot \hat{\mathbf{o}}^{(t)}_i \cdot \hat{\mathbf{o}}^{(t)}_j$ is a scalar product, the interaction penalty for medicines $i$ and $j$, and $\hat{\mathbf{o}}^{(t)}_i$ is the $i$-th element of the vector. Since we care about the DDI rate of the reconstructed representation, this loss applies only to the current visit $t$. \item {\em Binary Cross-entropy Loss.} In addition, we extract the real medication set as supervision. Let the multi-hot vector ${\mathbf{m}}^{(t)}\in\{0,1\}^{|\mathcal{M}|}$ be the vectorization of the target medication set $\mathcal{M}^{(t)}$. We adopt the {binary cross-entropy (BCE) loss}, \begin{equation} \footnotesize L^{(t)}_{bce} = -\sum_{i=1} {\mathbf{m}}^{(t)}_i \log (\hat{\mathbf{o}}^{(t)}_i)+(1-{\mathbf{m}}^{(t)}_i)\log (1-\hat{\mathbf{o}}^{(t)}_i), \end{equation} where the subscript $i$ indexes the elements of the vectors. We compute this loss on both $\hat{\mathbf{m}}^{(t-1)}$ and $\hat{\mathbf{m}}^{(t)}$. \item {\em Multi-Label Margin Loss.} We then employ a margin-based loss to enlarge the gap between the recommended medications and the unselected ones. Since $\hat{\mathbf{o}}^{(t)}_i\in(0,1)$, the margin is set to 1 in our paper, \begin{equation}\vspace{-1mm} \footnotesize L^{(t)}_{multi} = \sum_{i,j:~\mathbf{m}^{(t)}_i=1,\mathbf{m}^{(t)}_j\neq 1} \frac{\mbox{max}(0,1-(\hat{\mathbf{o}}^{(t)}_i-\hat{\mathbf{o}}^{(t)}_j))}{|\mathcal{M}|}. \end{equation} We penalize both visits with this loss, i.e., we calculate both $L^{(t-1)}_{multi}$ and $L^{(t)}_{multi}$. \end{itemize} These three losses use external supervision to optimize the {\em prescription network}, so that during inference our \texttt{MICRON}\xspace predicts medication changes more accurately. \smallskip \noindent\textbf{Overall Loss Function.} During training, we seek optimal values for the embedding tables, $\mathbf{E}_d$ and $\mathbf{E}_p$, and for the parameter matrices in $\mbox{NET}_{health}$ and $\mbox{NET}_{med}$. The loss functions are combined by a weighted sum, \begin{align} L_{total} &= \lambda_1 L^{(t)}_{rec} + \lambda_2 L^{(t)}_{ddi} + \lambda_3\left(\gamma L^{(t)}_{bce} + (1-\gamma)L^{(t-1)}_{bce}\right) \notag\\ & + \lambda_4\left(\gamma L^{(t)}_{multi} + (1-\gamma)L^{(t-1)}_{multi}\right), \label{eq:loss} \end{align} where $\lambda_i,~i=1,2,3,4$, are the weights for the four types of loss functions, and $\gamma$ balances the two consecutive visits.
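Putting the four terms together, a sketch of the total training loss in PyTorch might look as follows; the function interface and reduction choices are ours, and the targets are assumed to be float multi-hot tensors.
\begin{verbatim}
import torch
import torch.nn.functional as F

def total_loss(o_prev, o_cur, m_prev, m_cur, l_rec, A,
               lam=(0.25, 0.25, 0.25, 0.25), gamma=0.75):
    """Weighted sum of L_rec, L_ddi, L_bce and L_multi (Eqn. eq:loss).

    o_prev/o_cur: sigmoid outputs for visits t-1 and t, shape (|M|,)
    m_prev/m_cur: float multi-hot targets; l_rec: precomputed L_rec^(t)
    A: |M| x |M| binary DDI matrix (float tensor)
    """
    # DDI loss: sum_ij A_ij * o_i * o_j, current visit only
    l_ddi = (A * (o_cur.unsqueeze(1) * o_cur.unsqueeze(0))).sum()
    # BCE losses (summed over medicines), on both visits
    l_bce = (gamma * F.binary_cross_entropy(o_cur, m_cur, reduction="sum")
             + (1 - gamma) * F.binary_cross_entropy(o_prev, m_prev,
                                                    reduction="sum"))

    def margin(o, m):
        # Pairwise hinge between prescribed and unselected medicines
        pos, neg = o[m.bool()], o[~m.bool()]
        if pos.numel() == 0 or neg.numel() == 0:
            return o.new_zeros(())
        gap = 1.0 - (pos.unsqueeze(1) - neg.unsqueeze(0))
        return torch.clamp(gap, min=0).sum() / o.numel()

    l_multi = (gamma * margin(o_cur, m_cur)
               + (1 - gamma) * margin(o_prev, m_prev))
    return (lam[0] * l_rec + lam[1] * l_ddi
            + lam[2] * l_bce + lam[3] * l_multi)
\end{verbatim}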
During training, one batch contains all visits of one patient, and the loss is back-propagated after each batch. In the paper, we treat the weights as hyperparameters; in the Appendix, we also prototype a momentum-based method to select the weights automatically. \vspace{1mm} \subsection{Inference: Medication Updating Module} To predict medication changes, it is essential to maintain a {\em medication combination}. We hope that, for each subsequent visit, it suffices to derive an update to the combination based on the new diagnosis or procedure information. However, some medicines, such as Risperdal for treating schizophrenia and bipolar disorder, will not be prescribed based on only one or two visits and may require long-term clinical observation. We therefore also maintain a {\em medication vector}, in which each element quantifies the cumulative effect of a medicine. After each clinical visit, every element of the vector increases or decreases based on the update of the patient's health status. Essentially, the {\em medication vector} acts like the memory cell in RNNs, refreshed visit-by-visit. Once an element rises above or falls below certain thresholds, the corresponding medicine is added to or removed from the current set. More concretely, the medication change prediction follows three steps. \smallskip \noindent\textbf{Step 1: Medication Vector Update.} For the $t_{th}$ visit of a patient, the medication change starts from a medication vector, $\tilde{\mathbf{m}}^{(t-1)}\in\mathbb{R}^{|\mathcal{M}|}$, and a medication set, $\tilde{\mathcal{M}}^{(t-1)}\subset\mathcal{M}$. The model first updates the vector based on the residual health representation, $\mathbf{r}^{(t)}$, \begin{align} \tilde{\mathbf{m}}^{(t)} &= \tilde{\mathbf{m}}^{(t-1)} + {\mathbf{u}}^{(t)} \notag\\ & = \tilde{\mathbf{m}}^{(t-1)} + \mbox{NET}_{med}~(\mathbf{r}^{(t)}), \end{align} where $\mathbf{r}^{(t)}$ is calculated as $\mathbf{h}^{(t)}-\mathbf{h}^{(t-1)}$ (defined via Eqn.~\eqref{eq:patient_rep}, with $\mbox{NET}_{health}$ implemented as an affine function). We use an efficient {\em smart inference} module to calculate $\mathbf{r}^{(t)}$ when only the updates to the medical codes (e.g., diagnoses and procedures) are accessible; we specify it in the Appendix. \smallskip \noindent\textbf{Step 2: Addition and Removal.} Based on the updated medication vector, $\tilde{\mathbf{m}}^{(t)}\in\mathbb{R}^{|\mathcal{M}|}$, we then identify which medicines are ready to be added or removed. We design two thresholds to control the size of the changes ($\delta_1$ for the addition set and $\delta_2$ for the removal set, with $1\geq\delta_1\geq \delta_2\geq0$). Specifically, we first apply a {\em Sigmoid} function $\sigma(\cdot)$, and then the addition and removal sets are generated by applying the thresholds $\delta_1$ and $\delta_2$ element-wise, \begin{align} \mathcal{N}^{(t)} &= \{i\mid \sigma(\tilde{\mathbf{m}}^{(t)}_i) \geq\delta_1\}, \\ \mathcal{O}^{(t)} &= \{i\mid \sigma(\tilde{\mathbf{m}}^{(t)}_i) \leq \delta_2\}, \end{align} where $\mathcal{N}^{(t)}$ ($\mathcal{O}^{(t)}$) is the addition (removal) set and the subscript $i$ enumerates the indices of $\tilde{\mathbf{m}}^{(t)}$. Note that $\mathcal{N}^{(t)}\cap \mathcal{O}^{(t)} = \varnothing$. For the two thresholds, if $\delta_1=1$ and $\delta_2=0$, then $\mathcal{N}^{(t)} = \mathcal{O}^{(t)} = \varnothing$; in the other extreme, $\delta_1=\delta_2$, ``medication change prediction" becomes ``full medication prediction".
The thresholds $\delta_1$ and $\delta_2$ are selected based on the receiver operating characteristic (ROC) curve of each medicine. Specifically, we load the pre-trained \texttt{MICRON}\xspace and run it on the validation set. For each medicine, we collect the cut-off thresholds of the ROC curve in descending order. $\delta_1$ is the average of the $5$-percentile thresholds over all medications, while $\delta_2$ is based on the $95$-percentile. Essentially, $\delta_1$ provides a low false negative (FN) rate, while $\delta_2$ ensures a low false positive (FP) rate. \smallskip \noindent\textbf{Step 3: Medication Set Update.} Next, we apply the changes (addition and removal) to the existing medication set. The new combination is generated by set operations, \begin{equation} \tilde{\mathcal{M}}^{(t)} = (\tilde{\mathcal{M}}^{(t-1)} \cup {\mathcal{N}}^{(t)}) \setminus {\mathcal{O}}^{(t)}, \end{equation} using set union and subtraction. ${\mathcal{N}}^{(t)}$ and $\tilde{\mathcal{M}}^{(t-1)}$ may overlap, and ${\mathcal{O}}^{(t)}$ may contain medications that are not in $\tilde{\mathcal{M}}^{(t-1)}$; such overlaps do not affect the final recommendation, thanks to the set operations. To sum up, the model begins with the medication vector, $\tilde{\mathbf{m}}^{(t-1)}$, and the medication set, $\tilde{\mathcal{M}}^{(t-1)}$, provided by the previous visit. During the current visit, \texttt{MICRON}\xspace takes the update of the patient's status as input and walks through the above three steps to complete one round of medication change, as well as to update $\tilde{\mathbf{m}}^{(t)}$ and $\tilde{\mathcal{M}}^{(t)}$ for the next visit.
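The three inference steps translate directly into code; here is a minimal sketch, in which the threshold values are illustrative placeholders rather than the ROC-derived ones described above.
\begin{verbatim}
import torch

def medication_update(m_vec_prev, meds_prev, r, net_med,
                      delta1=0.8, delta2=0.2):
    """One inference round (Steps 1-3).

    m_vec_prev: medication vector from the previous visit, shape (|M|,)
    meds_prev : previous medication set (set of medicine indices)
    r         : residual health representation r^(t)
    """
    # Step 1: update the medication vector with u^(t) = NET_med(r^(t))
    m_vec = m_vec_prev + net_med(r)
    scores = torch.sigmoid(m_vec)
    # Step 2: threshold element-wise into addition / removal sets
    additions = set(torch.nonzero(scores >= delta1).flatten().tolist())
    removals = set(torch.nonzero(scores <= delta2).flatten().tolist())
    # Step 3: apply the set operations to obtain the new medication set
    meds = (meds_prev | additions) - removals
    return m_vec, meds
\end{verbatim}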
\section{Method} \subsection{Problem Formulation} \label{sec:notations} \begin{definition}[Patient EHR Records] Patient EHR records are usually represented as an ordered sequence of tuples. For a patient $j$, we denote his/her clinical record as $\mathbf{X}_{j}=[\mathbf{x}_{j}^{(1)},\mathbf{x}_{j}^{(2)},\mathbf{x}^{(3)}_j,\dots]$, where the $t_{th}$ entry, $\mathbf{x}^{(t)}_j$, records the information of the $t_{th}$ visit, such as the diagnoses, procedures, and prescriptions. In this paper, $\mathbf{x}^{(t)}_j=[\mathbf{d}^{(t)}_j,\mathbf{ p}^{(t)}_j,\mathcal{M}^{(t)}_j]$, where $\mathbf{d}^{(t)}_j\in\{0,1\}^{|\mathcal{D}|}$ and $\mathbf{p}^{(t)}_j\in\{0,1\}^{|\mathcal{P}|}$ are the multi-hot diagnosis and procedure vectors, $\mathcal{D}$ and $\mathcal{P}$ are the overall diagnosis and procedure sets, $\mathcal{M}^{(t)}_j\subset\mathcal{M}$ is the medication set of the $t_{th}$ visit, and $\mathcal{M}$ is the set of all possible medicines. We denote the visit-wise medication addition (new) and removal (old) sets as $\mathcal{N}^{(t)}_{target},\mathcal{O}^{(t)}_{target}\subset \mathcal{M}$, respectively, which naturally satisfy ${\mathcal{M}}^{(t)} = ({\mathcal{M}}^{(t-1)} \cup \mathcal{N}^{(t)}_{target})\setminus \mathcal{O}^{(t)}_{target}$. \end{definition} \vspace{-0.5mm} \begin{problem}[Medication Change Prediction] Medication change prediction aims to determine the medication addition set $\mathcal{N}^{(t)}$ and the removal set $\mathcal{O}^{(t)}$ at visit $t$, given the last prescription, $\tilde{\mathcal{M}}^{(t-1)}$, and the patient health history $[\mathbf{d}^{(1)},\dots,\mathbf{d}^{(t)}]$ and $[\mathbf{p}^{(1)},\dots,\mathbf{p}^{(t)}]$. The model aims to minimize the gap between the current estimate $\tilde{\mathcal{M}}^{(t)}= (\tilde{\mathcal{M}}^{(t-1)} \cup \mathcal{N}^{(t)})\setminus \mathcal{O}^{(t)}$ and the real prescriptions ${\mathcal{M}}^{(t)}$, while also controlling the incidence of DDIs, as encoded by $\mathbf{A}\in\{0, 1\}^{|\mathcal{M}|\times |\mathcal{M}|}$, where $\mathbf{A}_{ij}=1$ implies that medicines $i$ and $j$ may interact. \end{problem} \section{Related Works} \noindent \textbf{Rule-based models} \cite{almirall2012designing,chen2016physician} typically rely on human-designed clinical guidelines, which require huge effort from clinicians. For example, \cite{lakkaraju2017learning} optimizes a sequence of if-then-else rules that maps the patient status to the prescription decision. \smallskip \noindent \textbf{Instance-based methods} extract patient features only from the current visit. \cite{zhang2017leap} formulated medication recommendation as a multi-instance multi-label (MIML) task and proposed a sequence-to-sequence model with a content-attention mechanism. \cite{wang2017safe} jointly embedded diseases, medicines, patients, and their corresponding relations into a shared space via a knowledge graph, which requires multiple external data sources. \smallskip \noindent \textbf{Longitudinal approaches} \cite{wang2018supervised,wang2019order,xiao2018opportunities,bhoi2020premier} are a popular family of methods that capture the sequential dependency in the patient treatment history. \cite{choi2016retain,bajor2016predicting} modeled the longitudinal patient history with RNNs for various clinical predictive tasks. \cite{shang2019gamenet} and \cite{le2018dual} adopted memory-based networks with RNNs to handle the dependency among longitudinal medical codes. Compared with existing works, \texttt{MICRON}\xspace takes a new perspective that focuses on predicting the changes. This is more realistic, since clinicians usually update only a small proportion of a patient's prescription at a new visit.
{ "timestamp": "2021-05-06T02:10:15", "yymm": "2105", "arxiv_id": "2105.01876", "language": "en", "url": "https://arxiv.org/abs/2105.01876" }
\section{Introduction} Weight regularization is a process of adding information to the model to avoid overfitting \cite{deeplearningbook, l2regularization}. In this paper, we explore weight compression as a form of weight regularization, since it severely restricts the search space of the weights (i.e., the weights are regularized into compressed forms). Moreover, model compression shrinks the effective model size, which is an important regularization principle \cite{deeplearningbook} (note that improved model accuracy from model compression has been reported \cite{lotteryscale}). Weights can be regularized in numerous ways by model compression. For example, each weight can be pruned (e.g., \cite{SHan_2015}) or quantized (e.g., \cite{Greedy_Quan}) to yield a sparse model representation or to reduce the number of bits representing each weight. While model compression can be performed without a training dataset, compression-aware training can improve model accuracy by reflecting the impact of model compression on the loss function at every mini-batch update \cite{binaryconnect, suyog_prune, ternary2017, Greedy_Quan}. For such methods, the regularization strength is mainly determined by the compression ratio. Note that for typical regularization schemes, adjusting the regularization strength, such as the dropout rate or the weight decay factor, is a crucial step in maximizing the generalization capability of DNNs. To improve model accuracy at a given target compression ratio, we therefore need an additional way to control the regularization strength. In this paper, we introduce the regularization frequency, which represents how many mini-batches are trained without regularization. For weight decay and weight noise insertion, we show that decreasing the regularization frequency allows higher weight decay factors or larger amounts of noise per regularization step. In other words, the overall regularization strength is affected by the regularization frequency as well as by the weight decay factor (or the amount of weight noise). As a result, a similar amount of average weight update (determined jointly by the regularization frequency and the weight decay factor) is associated with a similar regularization strength and, hence, similar model accuracy. We demonstrate that the same principle holds for model compression: the regularization strength is affected not only by the compression ratio but also by the regularization frequency. We verify that our simple model compression techniques (which do not modify the underlying training procedures), based on occasional weight regularization, can achieve higher compression ratios and higher model accuracy than previous techniques that demand substantial modifications to the training process. Our proposed compression-aware training algorithm enables the following: \begin{itemize} \item Our compression-aware training technique does not require any modifications to the original training algorithm other than an additional regularization method in which the weights are occasionally transformed into compressed forms. \item The computational overhead of compression is not noticeable, because this additional regularization is performed infrequently. Hence, complex compression algorithms can be used without concerns about increased training time. \item We propose an additional regularization hyperparameter, the regularization frequency, which provides a larger hyperparameter search space. \item Our proposed training method can serve as a platform supporting various kinds of compression techniques, including future ones.
Model compression designers can thus focus on developing new compression architectures without worrying about the design of a particular associated training algorithm. \end{itemize} \section{Batch Size Selection} Since the unit of measurement of the regularization frequency is highly correlated with the batch size, let us discuss the batch size considered in our work. To overcome some practical issues of gradient descent on non-convex optimization problems, several enhancements have been proposed, such as learning rate scheduling and adaptive update schemes using momentum and update history \cite{overview_optimization}. Optimizing the batch size is another way to obtain efficient gradient descent. Note that a large batch size has the advantage of enhancing the parallelism of the training system in order to speed up training, which is critical for DNN research \cite{large_scale_distributed}. Despite such advantages, a small batch size is often preferred, because it improves generalization, associated with flat-minima search \cite{largebatch}, and makes other hyperparameter explorations more convenient \cite{small_batch}. A small batch size also affects weight regularization if weight updates for gradient descent and weight regularization are supposed to happen at every mini-batch. For example, for weight decay conducted at every mini-batch, if the batch size is modified, then the weight decay factor should also be adjusted accordingly \cite{decoupledweightdecay}. In this paper, we assume a reasonably small batch size. \section{Weight Updates by Model Compression} \begin{figure}[t] \centering \begin{subfigure}[t]{.45\textwidth} \includegraphics[width=1\textwidth]{EPS/svd_err.eps} \caption{Low-rank approximation via SVD} \end{subfigure} \begin{subfigure}[t]{.45\textwidth} \includegraphics[width=1\textwidth]{EPS/quant_err.eps} \caption{Quantization} \end{subfigure} \caption{Distribution of weight noise $\epsilon$, where $R$ is the rank and $Q$ is the number of quantization bits.} \label{fig:weight_compression_noise} \end{figure} Before investigating the effects of regularization frequency, we first study the relationship between the model compression ratio and the weight regularization strength, using quantization and singular-value decomposition (SVD) as model compression techniques. We assume a popular quantization method based on binary codes, in which a weight vector ${\bm{w}}$ is approximated as $\sum_{i=1}^{q} \alpha_i {\bm{b}}_i$ for $q$-bit quantization, where $\alpha_i$ is a scaling factor, ${\bm{b}}_i \in \{-1,+1\}^n$ is a binary vector, and $n$ is the vector size. The quantization error $\| {\bm{w}} - \sum_{i}\alpha_i {\bm{b}}_i\|^2$ is minimized by the method proposed by \cite{xu2018alternating} to compute $\alpha_i$ and ${\bm{b}}_i$. For SVD, a weight matrix ${\bm{W}} \in\mathbb{R}^{m\times n}$ is approximated by ${\bm{W}}' \in\mathbb{R}^{m\times n}$ that minimizes $\|{\bm{W}} - {\bm{W}}'\|$ subject to rank$({\bm{W}}') \le R$, where $R$ is the target rank. For our experiments, we use a synthetic ($2048\times2048$) weight matrix whose elements are randomly generated from the Gaussian distribution $\mathcal{N}(\mu\!=\!0, \sigma^2 \!=\!1)$. We are then interested in the amount of change of each weight after quantization and SVD. Expressing the weight noise introduced by compression as $\epsilon$ in the form $w' \!=\! w(1+\epsilon)$, Figure~\ref{fig:weight_compression_noise} shows the distribution of $\epsilon$ for various numbers of quantization bits and target ranks.
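The SVD half of this experiment is easy to reproduce in a few lines; the following NumPy sketch (our own illustration, with an arbitrary rank and seed) generates the matrix described above and computes the relative noise $\epsilon$.
\begin{verbatim}
import numpy as np

# Compress a random Gaussian weight matrix to rank R and inspect the
# relative noise epsilon, defined by w' = w * (1 + epsilon).
rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0, size=(2048, 2048))

R = 512  # target rank (the paper sweeps several values)
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W_approx = (U[:, :R] * S[:R]) @ Vt[:R, :]

eps = (W_approx - W) / W  # element-wise relative change
print("mean(eps) =", eps.mean(), " std(eps) =", eps.std())
\end{verbatim}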
From the $\epsilon$ distributions skewed toward negative values, it is clear that weights tend to decay more at higher compression ratios, along with a wider range of random noise. Reasonable explanations of Figure~\ref{fig:weight_compression_noise} would include: 1) weights generated from the Gaussian distribution are uncorrelated, such that an approximation step (by compression) using multiple weights results in noise for each weight, 2) in the case of SVD, elements associated with small eigenvalues are eliminated, and 3) averaging effects in quantization reduce the magnitude of large weights. For weight pruning, $\epsilon$ becomes $-1$ or $0$ (i.e., weight decay for selected weights). Correspondingly, we study weight decay and weight noise insertion in the next section as a first step toward understanding improved training for model compression, even though actual model compression would demand much more complicated weight noise models. \begin{figure*}[t] \centering \includegraphics[width=0.5\linewidth]{EPS/pNR_scheme.eps} \caption{Our proposed scheme for modulating regularization frequency when the NR period is given as a multiple of batches. } \label{fig:loss_surface} \end{figure*} \section{Non-Regularization Period Study on Weight Decay and Weight Noise Insertion} Since weight regularization cannot precede gradient-descent updates, an available option for controlling the frequency of weight regularization is to skip regularization for a number of batches. In this paper, we propose a new hyper-parameter, called the ``Non-Regularization period'' or NR period, to enable occasional regularization and to define the interval between two consecutive regularization events as shown in Figure~\ref{fig:loss_surface}. The NR period is an integer expressed as a multiple of batches (from now on, we thus use `NR period' or $pN\!R$ to represent regularization frequency). Weight decay is one of the most well-known regularization techniques \cite{three_mecha} and differs from $L_2$ regularization in the sense that weight decay is separated from the loss function calculation \cite{decoupledweightdecay}. Weight decay is performed as \begin{equation} \label{eq:weight_decay} {\bm{w}}_{t+1} = (1 - \gamma \theta) {\bm{w}}_t - \gamma \nabla_{{\bm{w}}_t} \mathcal{L}({\bm{w}}_t), \end{equation} where $\theta$ is a constant weight decay factor. Weight noise insertion is another regularization technique aiming at reaching flat minima \cite{deeplearningbook, flat_minima}. For our experiments, we assume that random Gaussian noise is added to weights such that ${\bm{w}}' \!= \!{\bm{w}} \!+ \!\bm{\epsilon}$, where $\bm{\epsilon} \!\sim\! \mathcal{N}(0,\eta I)$. \begin{figure}[t] \begin{center} \begin{minipage}[t]{.40\textwidth} \centering \includegraphics[width=1\linewidth]{EPS/res_wd.eps} \end{minipage} \begin{minipage}[t]{.40\textwidth} \centering \includegraphics[width=1\linewidth]{EPS/res_uni.eps} \end{minipage} \caption{Model accuracy of ResNet-32 on CIFAR-10 using various NR periods and amounts of weight decay or noise for regularization (original model accuracy without regularization is 92.6\%). (Left): Weight decay.
(Right): Uniform weight noise insertion.} \label{fig:resnet32_noise} \end{center} \end{figure} \begin{figure}[t] \begin{center} \begin{minipage}[t]{.40\textwidth} \centering \includegraphics[width=1\linewidth]{EPS/ptb_wd.eps} \end{minipage} \begin{minipage}[t]{.40\textwidth} \centering \includegraphics[width=1\linewidth]{EPS/ptb_uni.eps} \end{minipage} \caption{Perplexity of the LSTM model on the PTB dataset using various NR periods and amounts of weight decay or noise. (Left): Weight decay. (Right): Uniform weight noise insertion.} \label{fig:ptb_noise} \end{center} \end{figure} We study the impact of the NR period on weight decay and weight noise insertion using a ResNet-32 model on CIFAR-10 \cite{resnet} and a long short-term memory (LSTM) model on the PTB dataset \cite{PTB_google}. For the LSTM model, we use 2 layers with 200 hidden units and the hyper-parameter set introduced by \cite{PTB_google}. For the weight noise model, we plug $\theta \sim \mathcal{U}(-a, +a)$ (uniform distribution) into Eq.~(\ref{eq:weight_decay}) to simplify the experiments. Figure~\ref{fig:resnet32_noise} shows the model accuracy of ResNet-32 given different NR periods and weight decay factors. For both weight decay and weight noise insertion, the choice of $\theta$ (representing the amount of weight regularization applied at each regularization event) has a clear correlation with the NR period (refer to the Appendix for training and test accuracy graphs). If we wish to apply a larger $\theta$, then weight regularization should be conducted less frequently (i.e., a larger weight decay factor requires a longer NR period) to optimize the regularization effect and achieve high model accuracy. For similar model accuracy, the weight decay factor can be approximately 1,000 times larger with $pN\!R \approx 1,000$ in Figure~\ref{fig:resnet32_noise} and Figure~\ref{fig:ptb_noise}. Similar observations are made with the LSTM model on PTB, as shown in Figure~\ref{fig:ptb_noise}. Lower perplexity (indicating better generalization) is obtained when the NR period increases as $\theta$ becomes larger for each regularization event. For weight decay, increasing $\theta$ via a longer $pN\!R$ may not be significant because model accuracy remains similar. On the other hand, \textbf{increasing the model compression ratio via a longer $pN\!R$ is significant, as we show in the next section.} \section{NR Period for Model Compression} As discussed, weight compression incurs a much more complicated weight regularization model than weight decay or uniform weight noise insertion because 1) as shown in Figure~\ref{fig:weight_compression_noise}, diversified noise models need to be combined to describe weight regularization after model compression, and 2) compression-aware training methods reduce the strength of weight regularization as training proceeds over more epochs and weights converge to a compressed form. Nonetheless, we can conjecture that the best training scheme for model compression may require the condition $pN\!R \neq 1$, which can be justified empirically.
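The correlation between $\theta$ and $pN\!R$ observed above can be sanity-checked with a few lines of arithmetic. The sketch below is a minimal illustration that ignores the interaction with gradient updates and only compares the net multiplicative shrinkage of a weight over a fixed number of batches; the specific numbers are illustrative, not from the paper's experiments.
\begin{verbatim}
T, pNR, theta = 10_000, 1_000, 0.1   # total batches, NR period, decay per event

w_occasional = (1 - theta) ** (T / pNR)   # decay applied once every pNR batches
w_every_batch = (1 - theta / pNR) ** T    # ~1000x smaller factor, every batch
print(w_occasional, w_every_batch)        # ~0.349 vs ~0.368: similar shrinkage
\end{verbatim}
A roughly 1,000 times larger decay factor applied 1,000 times less often yields a comparable average weight update, consistent with Figures~\ref{fig:resnet32_noise} and \ref{fig:ptb_noise}.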
\begin{figure*}[t] \centering \begin{subfigure}{0.98\textwidth} \centering \includegraphics[width=0.92\linewidth]{EPS/ptb_svd.eps} \caption{Perplexity when the weights are compressed by SVD.} \label{fig:ptb_svd} \end{subfigure} \centering \begin{subfigure}{0.98\textwidth} \centering \includegraphics[width=0.92\linewidth]{EPS/ptb_quant.eps} \caption{Perplexity when the weights are compressed by quantization based on binary codes.} \label{fig:ptb_quant} \end{subfigure} \centering \begin{subfigure}{0.98\textwidth} \centering \includegraphics[width=0.92\linewidth]{EPS/ptb_pruning.eps} \caption{Perplexity when the weights are compressed by magnitude-based pruning.} \label{fig:app_ptb_pruning} \end{subfigure} \caption{Model accuracy of an LSTM model on PTB compressed by quantization, low-rank approximation, or pruning. Original perplexity without model compression is 114.6. For more details, refer to the Appendix.} \label{fig:ptb_pnr_weight_relationship} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=1.0\textwidth]{EPS/ptb_wd_15_err.eps} \caption{Weight Decay ($e_{opt}{=}0.0007$)} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=1.0\textwidth]{EPS/ptb_svd_err.eps} \caption{SVD ($e_{opt}{=}0.08$)} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=1.0\textwidth]{EPS/ptb_pruning_err.eps} \caption{Pruning ($e_{opt}{=}0.035$)} \end{subfigure} \caption{Model accuracy and regularization error (defined as the difference between $e_w/pN\!R$ and $e_{opt}$) using the PTB LSTM model when weights are regularized by weight decay, SVD, or pruning.} \label{fig:ptb_pnr_err} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=1.0\textwidth]{EPS/res_wd_err.eps} \caption{Weight Decay ($e_{opt}{=}$9.46e-06)} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=1.0\textwidth]{EPS/res_quant_err.eps} \caption{Quantization ($e_{opt}{=}$9.38e-06)} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=1.0\textwidth]{EPS/res_pruning_err.eps} \caption{Pruning ($e_{opt}{=}$4.25e-06)} \end{subfigure} \caption{Test accuracy drop and regularization error using ResNet-32 on CIFAR-10 when weights are regularized by weight decay, quantization, or pruning.} \label{fig:resnet_pnr_err} \end{figure*} We apply weight quantization, low-rank approximation (SVD), and pruning to the LSTM model on PTB selected in the previous section. We do not modify the underlying training principles and use the following simple strategy: \framebox{\parbox{0.97\linewidth}{ \begin{enumerate}[noitemsep,topsep=0pt,parsep=1pt,partopsep=0pt] \item [(1)] Train the model for $pN\!R$ batches \\(as if model compression were not being considered). \item [(2)] Perform weight compression in the form of \\${\bm{w}}' = h({\bm{w}} )$. \item [(3)] With the new full-precision weights ${\bm{w}}'$, \\repeat the above two steps. \end{enumerate} }} $h({\bm{w}})$ can be magnitude-based pruning (i.e., $h(w)$=$w$ if $|w|$ is larger than a certain threshold and $h(w)$=$0$ otherwise), $\sum_{i}\alpha_i {\bm{b}}_i$ for quantization, an SVD-based function, or even an as-yet undiscovered function. Figure~\ref{fig:ptb_pnr_weight_relationship} shows the model accuracy associated with a number of different sets of $pN\!R$ and model compression strength (i.e., the target rank for low-rank approximation and the number of quantization bits).
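As a concrete rendering of the boxed strategy, the PyTorch sketch below uses magnitude-based pruning as $h(\cdot)$. The toy model, random data, and hyper-parameters are placeholders, not the paper's experimental setup.
\begin{verbatim}
import torch

# Toy stand-ins for a real model and data loader (assumptions for illustration).
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loader = [(torch.randn(32, 64), torch.randint(0, 10, (32,))) for _ in range(2000)]

pNR, prune_rate = 100, 0.9  # NR period in batches, target pruning rate

def h(w, rate):
    # Magnitude-based pruning: zero out the smallest-|w| entries (Step 2).
    k = int(w.numel() * rate)
    thresh = w.abs().flatten().kthvalue(k).values
    return torch.where(w.abs() > thresh, w, torch.zeros_like(w))

for step, (x, y) in enumerate(loader):
    # Step 1: ordinary training, as if compression were not being considered.
    opt.zero_grad()
    torch.nn.functional.cross_entropy(model(x), y).backward()
    opt.step()
    # Steps 2-3: once every pNR batches, replace w with its compressed form;
    # training continues in full precision, so pruned weights may recover.
    if (step + 1) % pNR == 0:
        with torch.no_grad():
            for p in model.parameters():
                if p.dim() > 1:          # prune weight matrices only
                    p.copy_(h(p, prune_rate))
\end{verbatim}
Because compression happens only once per $pN\!R$ batches, even an expensive $h(\cdot)$ (e.g., an iterative SVD or quantization routine) adds little to the training time.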
Notice that the optimal $pN\!R$ for the best model accuracy is definitely larger than $1$. To explain how Figure~\ref{fig:ptb_pnr_weight_relationship} is aligned with the previous section, we investigate the relationship between model accuracy and the average of $e_w/pN\!R$ ($e_w = \mathbb{E} [| {\bm{w}} - {\bm{w}}' |]$, where ${\bm{w}}'$ is a weight vector after weight decay, SVD, or pruning) throughout the entire training. We first optimize $e_w/pN\!R$ $(=e_{opt})$ to achieve the best model accuracy. Following Figure~\ref{fig:resnet32_noise} and \ref{fig:ptb_noise}, we assume $e_{opt}$ to be constant regardless of the compression ratio or decay factor; it is obtained by finding the hyper-parameter sets associated with the maximum model accuracy in Figure~\ref{fig:ptb_noise} and Figure~\ref{fig:ptb_pnr_weight_relationship} and taking the average of the corresponding $e_w$ values. When the regularization error is defined as $| e_w/pN\!R - e_{opt}|$, Figure~\ref{fig:ptb_pnr_err} shows the test perplexity and regularization error of the PTB LSTM model with different $pN\!R$. The regularization error so defined is affected by $pN\!R$ as shown in Figure~\ref{fig:ptb_pnr_err}, and indeed, when the regularization error approaches the minimum (i.e., zero), we gain improved model accuracy. Unlike weight decay, where $e_w$ is directly computed from decay factors, for model compression techniques $e_w$ is not directly related to compression-related hyper-parameters (such as ranks and pruning rates). As a result, while Figure~\ref{fig:ptb_noise} shows a clear correlation between decay factors and $pN\!R$ for the best model accuracy, Figure~\ref{fig:ptb_pnr_weight_relationship} suggests that the compression ratio and $pN\!R$ are weakly correlated. Hence, $pN\!R$ is a hyper-parameter to be determined empirically for model compression. Nonetheless, the optimal $pN\!R$ is definitely larger than 1, as shown in Figure~\ref{fig:ptb_pnr_err}, and is decoupled from batch size selection. That means weight regularization for model compression needs to be conducted much less frequently than gradient descent, since batch size selection considers the generalization ability of gradient descent, not regularization effects. For ResNet-32 on CIFAR-10, we also find $e_{opt}$ and investigate the relationship between model accuracy and $pN\!R$ as shown in Figure~\ref{fig:resnet_pnr_err}. Similar to the case of the PTB LSTM model, ResNet-32 presents a particular $pN\!R$ that minimizes the regularization error. It is clear that for both the PTB LSTM model and ResNet-32, the optimal $pN\!R$ is definitely larger than `1' despite some variation in model accuracy. We summarize our empirical observations as follows: \begin{itemize} \item Unlike conventional wisdom, a wide range of weight decay factors is allowed since we can adjust $pN\!R$ to optimize the regularization strength. \item For each weight decay factor selected, there is an optimal $pN\!R$ that maximizes model accuracy. \item Similarly, for each compression ratio, a particular $pN\!R$ presents the best model accuracy. \item Such a $pN\!R$ is a hyper-parameter that must be searched empirically. \end{itemize} From our extensive experiments, the optimal $pN\!R$ is usually found in the range of 10 to 1000 for model compression. A large $pN\!R$ has the benefit of requiring less computation for model compression. Especially when the compression method is based on iterative mathematical principles (such as SVD \cite{SVD2013} or quantization \cite{xu2018alternating}), a large $pN\!R$ can significantly reduce training time.
\section{Comparison with Previous Model Compression Techniques} In this section, we compare some previous model compression techniques with our compression scheme, which introduces $pN\!R$ and obviates special training algorithm modifications. Due to the space limit, please refer to the Appendix for more experimental results with ImageNet and PTB. \subsection{Fine-Grained Weight Pruning} The initial attempts at weight pruning located redundant weights by computing the Hessian to estimate the sensitivity of the loss function to each weight \cite{optimalbrain}. However, such a technique has not been considered practical due to the significant computational overhead of computing the Hessian. Magnitude-based pruning \cite{SHan_2015} has become popular because one can quickly find redundant weights by simply measuring the magnitude of weights. Since then, numerous researchers have realized higher compression ratios largely by introducing Bayesian inference models of weights accompanied by supplementary hyper-parameters. For example, dynamic network surgery (DNS) \cite{DNS} permits weight splicing when a separately stored full-precision weight becomes larger than a certain threshold. Optimizing splicing threshold values, however, necessitates extensive search space exploration, and thus, longer training time. The variational dropout method \cite{sparseVD} introduces an explicit Bayesian inference model for a prior distribution of weights, which also induces various hyper-parameters and increased computational complexity. We perform magnitude-based pruning once every $pN\!R$ batches. As a result, even though weights are pruned and replaced with zero at $pN\!R$ steps, pruned weights are still updated in full precision during the NR period. If the amount of update of a pruned weight grows large enough between two consecutive regularization steps, then the weight pruned at the last $pN\!R$ step may not be pruned at the next $pN\!R$ step. Such a feature (i.e., pruning decisions are not fixed) is also utilized for weight splicing in DNS \cite{DNS}. Weight splicing in DNS relies on a hysteresis function (demanding a sophisticated fine-tuning process with associated hyper-parameters) to switch pruning decisions. Pruning decisions through our scheme, on the other hand, are newly determined at every $pN\!R$ step. We present experimental results with LeNet-5 and LeNet-300-100 models on the MNIST dataset, which are also reported by \cite{DNS, sparseVD}. LeNet-5 consists of 2 convolutional layers and 2 fully connected layers, while 3 fully connected layers construct LeNet-300-100. We train both models for 20,000 steps using the Adam optimizer with a batch size of 50. All the layers are pruned at the same time and the pruning rate increases gradually \cite{suyog_prune}. We exclude dropout to improve the accuracy of LeNet-300-100 and LeNet-5 since pruning already works as a regularizer \cite{SHan_2015, dropconnect}. We keep the original learning schedule and the total number of training steps (no additional training time for model compression). \begin{table}[t] \small \caption{Pruning rate and accuracy comparison using LeNet-300-100 and LeNet-5 models on the MNIST dataset.
DC (Deep Compression) and Sparse VD represent a magnitude-based technique \cite{deepcompression} and the variational dropout method \cite{sparseVD}, respectively.} \label{table:lenet_pruning_comparison} \begin{center} \setlength\tabcolsep{3.0pt} \begin{tabular}{c c c c c c c} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Layer} & Weight & \multicolumn{4}{c}{Pruning Rate (\%)}\\ \cline{4-7} & & Size & DC & DNS & SparseVD & Ours \\ \hline & FC1 & 235K & 92 & 98.2 & 98.9 & 98.9 \\ LeNet & FC2 & 30K & 91 & 98.2 & 97.2 & 96.0 \\ -300-100 & FC3 & 1K & 74 & 94.5 & 62.0 & 62.0 \\ \cline{2-7} & Total & 266.2K & 92 & 98.2 & 98.6 & 98.4 \\ \hline \multirow{5}{*}{LeNet-5} & Conv1 & 0.5K & 34 & 85.8 & 67 & 60.0 \\ & Conv2 & 25K & 88 & 96.9 & 98 & 97.0 \\ & FC1 & 40K & 92 & 99.3 & 99.8 & 99.8 \\ & FC2 & 5K & 81 & 95.7 & 95 & 95.0 \\ \cline{2-7} & Total & 430K & 92 & 99.1 & 99.6 & 99.5 \\ \hline \\ \end{tabular} \begin{tabular}{c c c c c} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c}{Accuracy (\%)} \\ \cline{2-5} & DC & DNS & Sparse VD & Ours \\ \hline LeNet-300-100 & 98.4 & 98.0 & 98.1 & 98.1 \\ LeNet-5 & 99.2 & 99.1 & 99.2 & 99.1 \\ \hline \end{tabular} \end{center} \end{table} \begin{table*}[t] \caption{Test accuracy (average of 10 runs for each choice of $pN\!R$) of the pruned LeNet-5 model. Pruning rates are described in Table \ref{table:lenet_pruning_comparison}.} \label{table:lenet_step_exp} \begin{center} \begin{tabular}{c | c c c c c c c c} \hline NR Period ($pN\!R$) & 1 & 2 & 5 & 10 & 50 & 100 & 200 & 500 \\ \hline Accuracy (\%) & 99.00 & 99.06 & 99.06 & 99.11 & 99.05 & 98.98 & 98.72 & 96.52 \\ \hline \end{tabular} \end{center} \end{table*} Table~\ref{table:lenet_pruning_comparison} presents the comparison of pruning rates (see the Appendix for test accuracy, which is almost the same among all selected schemes). Despite its simplicity, our pruning scheme produces a higher pruning rate than DNS and a similar rate to the variational dropout technique, which involves much higher computational complexity. For Table \ref{table:lenet_pruning_comparison}, we use $pN\!R$=10 for LeNet-5 and $pN\!R$=5 for LeNet-300-100. We investigate how sensitive the test accuracy is to $pN\!R$ when the other parameters (such as pruning rates, learning rate, and total training time) are fixed. As shown in Table~\ref{table:lenet_step_exp}, for a wide range of $pN\!R$, the test accuracy shows negligible fluctuation\footnote{Even though we cannot show such a sensitivity study for all of the remaining experiments in this paper, the accuracy has also shown low sensitivity to $pN\!R$ for other models and compression techniques.}. Too large a $pN\!R$ would result in 1) too little weight distortion, 2) coarse-grained gradual pruning, and 3) unnecessarily large updates for correctly pruned weights. On the other hand, too small a $pN\!R$ may yield excessive amounts of weight distortion and reduce the opportunity for pruned weights to recover. We apply $pN\!R$-based pruning to an RNN model to verify the effectiveness of $pN\!R$. We choose an LSTM model \citep{PTB_google} on the PTB dataset \citep{Marcus_1993}. Following the model structure given in \cite{PTB_google}, our model consists of an embedding layer, 2 LSTM layers, and a softmax layer. The number of LSTM units in a layer can be 200, 650, or 1500, depending on the model configuration (referred to as the small, medium, and large models, respectively). The accuracy is measured by Perplexity Per Word (PPW), denoted simply by perplexity in this paper.
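The gradual increase of the pruning rate mentioned above can follow, for example, the cubic ramp of \cite{suyog_prune}; the exact schedule used in our experiments is not spelled out here, so the sketch below should be read as one plausible instantiation.
\begin{verbatim}
def pruning_rate(step, total_steps, final_rate, ramp_frac=0.8):
    # Cubic sparsity ramp in the style of gradual pruning (an assumption,
    # not necessarily the exact schedule used in the experiments).
    t = min(step / (ramp_frac * total_steps), 1.0)
    return final_rate * (1.0 - (1.0 - t) ** 3)

# Example: the rate grows from 0 toward 90% over the first 80% of training.
print([round(pruning_rate(s, 20000, 0.9), 3) for s in (0, 5000, 10000, 16000)])
\end{verbatim}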
$pN\!R$-based pruning for the PTB models is performed gradually using $pN\!R =100$; the initial learning rate is 2.0 for the medium model (1.0 for pre-training) and 1.0 for the large model (1.0 for pre-training), while the learning policy remains the same as in \cite{PTB_google}. \begin{table*}[t!] \caption{Comparison of perplexity using various pruning rates. $p_f$ denotes the target pruning rate for the embedding layer, LSTM layers, and softmax layer.} \label{table:ptb_pruning} \begin{center} \begin{tabular}{c c c c c c c c c} \hline \multirow{2}{*}{Model Size} & \multirow{2}{*}{Pruning Method} & & \multicolumn{6}{c}{Perplexity} \\ \cline{3-9} & & $p_f$=& 0\% & 80\% & 85\% & 90\% & 95\% & 97.5\% \\ \hline Medium & \citep{suyog_prune} & & 83.37 & 83.87 & 85.17 & 87.86 & 96.30 & 113.6 \\ (19.8M) & Ours & & 83.78 & 81.54 & 82.62 & 84.64 & 93.39 & 110.4 \\ \hline Large & \citep{suyog_prune} & & 78.45 & 77.52 & 78.31 & 80.24 & 87.83 & 103.20 \\ (66M) & Ours & & 78.07 & 77.39 & 77.73 & 78.28 & 84.69 & 99.69 \\ \hline \end{tabular} \end{center} \end{table*} \begin{figure}[t] \begin{minipage}{.5\textwidth} \centering \includegraphics[width=0.8\linewidth]{EPS/old_pruning_ptb.eps} \end{minipage} \begin{minipage}{.5\textwidth} \centering \includegraphics[width=0.8\linewidth]{EPS/new_pruning_ptb.eps} \end{minipage} \caption{Weight distribution of LSTM layer 1 of the medium PTB model after retraining with (Left) magnitude-based pruning and (Right) $pN\!R$-based pruning with a 90\% pruning rate. Our compression scheme incurs a sharp drop in the count of near-zero weights.} \label{fig:weight_dist_ptb} \end{figure} For all of the pruning rates selected, Table \ref{table:ptb_pruning} shows that our compression scheme achieves better perplexity than the technique in \cite{suyog_prune}, which is based on \cite{SHan_2015}. The superiority of $pN\!R$-based pruning is partly supported by the observation that non-zero weights successfully avoid becoming small through retraining, while conventional pruning still keeps near-zero (unmasked) weights, as depicted in Figure~\ref{fig:weight_dist_ptb}. \subsection{Low-Rank Approximation} We apply our proposed occasional regularization algorithm integrated with Tucker decomposition \cite{tucker} to convolutional neural network (CNN) models and demonstrate the superiority of the $pN\!R$-based scheme over conventional training methods. In CNNs, the convolution operation requires a 4D kernel tensor $\mathcal{K} \in \mathbb{R}^{d \times d \times S \times T}$, where each kernel is of size $d\times d$, $S$ is the number of input feature maps, and $T$ is the number of output feature maps. Then, following the Tucker decomposition algorithm, $\mathcal{K}$ is decomposed into three components as \begin{equation} \label{eq:eq_tucker} \mathcal{\Tilde{K}}_{i,j,s,t} = \sum_{r_s=1}^{R_s}\sum_{r_t=1}^{R_t}\mathcal{C}_{i,j,r_s,r_t}{\bm{P}}_{s,r_s}^S{\bm{P}}_{t,r_t}^T , \end{equation} where $\mathcal{C}_{i,j,r_s,r_t}$ is the reduced kernel tensor, $R_s$ is the rank for the input feature map dimension, $R_t$ is the rank for the output feature map dimension, and ${\bm{P}}^S$ and ${\bm{P}}^T$ are 2D filter matrices mapping $\mathcal{C}_{i,j,r_s,r_t}$ to $\mathcal{K}_{i,j,s,t}$. As a result, one convolution layer is divided into three convolution layers, specifically, a $(1 {\times} 1)$ convolution for ${\bm{P}}^S$, a $(d {\times} d)$ convolution for $\mathcal{C}_{i,j,r_s,r_t}$, and a $(1 {\times} 1)$ convolution for ${\bm{P}}^T$ \cite{tucker_samsung}.
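A minimal NumPy sketch of the Tucker-2 approximation in Eq.~(\ref{eq:eq_tucker}) is shown below. It uses one-shot HOSVD-style truncation of the mode-$S$ and mode-$T$ unfoldings, which is a simplification of (not identical to) the full iterative Tucker algorithm used in our experiments.
\begin{verbatim}
import numpy as np

def tucker2(K, Rs, Rt):
    # K: (d, d, S, T) kernel tensor; returns core C (d, d, Rs, Rt) and
    # factor matrices P_S (S, Rs), P_T (T, Rt) via HOSVD-style truncation.
    d, _, S, T = K.shape
    # Leading left singular vectors of the mode-S and mode-T unfoldings.
    P_S = np.linalg.svd(K.transpose(2, 0, 1, 3).reshape(S, -1),
                        full_matrices=False)[0][:, :Rs]
    P_T = np.linalg.svd(K.transpose(3, 0, 1, 2).reshape(T, -1),
                        full_matrices=False)[0][:, :Rt]
    C = np.einsum('ijst,sr,tu->ijru', K, P_S, P_T)   # reduced kernel tensor
    return C, P_S, P_T

K = np.random.randn(3, 3, 64, 64)
C, P_S, P_T = tucker2(K, 32, 32)                     # R_c = 0.5
K_tilde = np.einsum('ijru,sr,tu->ijst', C, P_S, P_T) # reconstruction
print(np.linalg.norm(K - K_tilde) / np.linalg.norm(K))
\end{verbatim}
In the occasional-regularization loop, $\mathcal{K}$ is simply overwritten with $\mathcal{\Tilde{K}}$ at every $pN\!R$-th batch (Step 3 below), and the decomposed three-layer structure is extracted only at the end.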
In prior tensor decomposition schemes, model training is performed as a fine-tuning procedure after the model is restructured and fixed \cite{cp_decomposition, tucker_samsung}. On the other hand, our training algorithm is conducted for Tucker decomposition as follows: \begin{enumerate} \item Perform normal training for $pN\!R$ (batches) without considering Tucker decomposition \item Calculate $\mathcal{C}$, ${\bm{P}}^S$, and ${\bm{P}}^T$ using Tucker decomposition to obtain $\mathcal{\Tilde{K}}$ \item Replace $\mathcal{K}$ with $\mathcal{\Tilde{K}}$ \item Go to Step 1 with the updated $\mathcal{K}$ \end{enumerate} After repeating the above steps a number of times toward convergence, the entire training process should stop at Step 2, and then the final decomposed structure is extracted for inference. Because the model is not restructured except in the last step, Steps 2 and 3 can be regarded as special steps that encourage wide search space exploration so as to find a compression-friendly local minimum where the weight noise caused by decomposition does not noticeably degrade the loss function. Using the pre-trained ResNet-32 model on the CIFAR-10 dataset \cite{resnet, tensorly}, we compare two training methods for Tucker decomposition: 1) typical training with a decomposed model, and 2) $pN\!R$-based training, which maintains the original model structure and occasionally injects weight noise through decomposition. Using an SGD optimizer, both training methods follow the same learning schedule: the learning rate is 0.1 for the first 100 epochs, 0.01 for the next 50 epochs, and 0.001 for the last 50 epochs. Except for the first layer, which is much smaller than the other layers, all convolution layers are compressed by Tucker decomposition with ranks $R_s$ and $R_t$ set to $S$ and $T$ multiplied by a constant $R_c$ ($0.3{\le}R_c{\le}0.7$ in this experiment). Then, the compression ratio of a convolution layer is $d^2ST/(SR_s+d^2R_sR_t+TR_t)$ $=d^2ST/(S^2R_c+d^2R_c^2ST+T^2R_c)$, which can be approximated to be $1/R^2_c$ if $S=T$ and $d\gg R_c$. $pN\!R$ is chosen to be 200. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{EPS/tucker_cr.eps} \caption{Test accuracy comparison on ResNet-32 using CIFAR-10 trained by the typical training method and the proposed training method with various compression ratios. For the proposed scheme, test accuracy is measured only at Step 3, which allows a decomposed structure to be extracted, and $pN\!R$ is 200.} \label{fig:tucker1} \end{figure} \begin{figure}[t] \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=0.85\textwidth]{EPS/resnet32_loss_1.eps} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=0.85\textwidth]{EPS/resnet32_loss_2.eps} \end{minipage} \caption{Training loss (Left) and test accuracy (Right) of ResNet-32 using CIFAR-10. For the proposed scheme, training loss and test accuracy are only monitored right before or after weight regularization for compression (Pre-Reg. or Post-Reg.). The compression ratio is 2.8 with $R_c$=0.5.} \label{fig:tucker2} \end{figure} Figure~\ref{fig:tucker1} shows test accuracy after Tucker decomposition\footnote{https://github.com/larry0123du/Decompose-CNN} by the two different training methods. Note that test accuracy results are evaluated only at Step 3, where the training process can stop to generate a decomposed structure.
In Figure~\ref{fig:tucker1}, across a wide range of compression ratios (determined by $R_c$), the proposed scheme yields higher model accuracy compared to typical training. Note that even higher model accuracy than that of the pre-trained model can be achieved by our method if the compression ratio is small enough. In fact, Figure~\ref{fig:tucker2} shows that our technique improves training loss and test accuracy throughout the entire training process. Initially, the gap in training loss and test accuracy between pre-regularization and post-regularization is large. Such a gap, however, is quickly reduced over training epochs. Overall, ResNet-32 converges successfully through the entire training process with lower training loss and higher test accuracy compared with the typical training method. To investigate the effect of the NR period on local minima exploration with ResNet-32 on CIFAR-10, Figure~\ref{fig:delta_values} presents the changes in the loss function and weight magnitude values incurred by occasional regularization. In Figure~\ref{fig:delta_values}(left), $\Delta \mathcal{L}/\mathcal{L}$ is given as the loss function increase $\Delta\mathcal{L}$ (due to weight regularization at $pN\!R$ steps) divided by $\mathcal{L}$, which is the loss function value right before weight regularization. In Figure~\ref{fig:delta_values}(right), $\Delta {\bm{w}}$ is defined as $||{\bm{w}} - \Tilde{{\bm{w}}}||^2_{\mathcal{F}}$ $/N({\bm{w}})$, where ${\bm{w}}$ is the entire set of weights to be compressed, $\Tilde{{\bm{w}}}$ is the set of weights regularized by Tucker decomposition, $N({\bm{w}})$ is the number of elements of ${\bm{w}}$, and $||{\bm{X}}||^2_{\mathcal{F}}$ is the Frobenius norm of ${\bm{X}}$. Initially, ${\bm{w}}$ fluctuates with large corresponding $\Delta \mathcal{L}$. Then, both $\Delta \mathcal{L}$ and $\Delta {\bm{w}}$ decrease, and Figure~\ref{fig:delta_values} shows that occasional regularization successfully finds flatter local minima (from the viewpoint of Tucker decomposition). When the learning rate is reduced at the 100th and 150th epochs, $\Delta \mathcal{L}$ and $\Delta {\bm{w}}$ decrease significantly because the local-minima exploration space is greatly reduced. In other words, occasional regularization helps the optimizer find a local minimum where Tucker decomposition does not alter the loss function value noticeably. \begin{figure*}[h] \begin{center} \includegraphics[width=0.8\linewidth]{EPS/loss.eps} \caption{Change in the training loss and in the average Frobenius norm of weight values caused by weight updates for model compression. $R_c$ $=0.5$ and $pN\!R$ $=200$ are used.} \label{fig:delta_values} \end{center} \end{figure*} For ResNet-18 experiments on ImageNet and VGG19 on CIFAR-10 (including additional compression techniques), refer to the Appendix. \section{NR Period for Convergence} \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth]{EPS/pNR_convergence.eps} \caption{Gradient descent and weight regularization when the NR period is given as a multiple of batches. Depending on the loss surface and/or strength of regularization, regularization leads to step 2 (escaping from a local minimum) or step 5 (returning to a local minimum).} \label{fig:app_loss_surface} \end{figure} The NR period influences convergence in training. Strong weight regularization increases the chance of escaping a local minimum (depicted as step 2 in Figure~\ref{fig:app_loss_surface}) or requires a longer NR period to return to a local minimum (described as step 5 in Figure~\ref{fig:app_loss_surface}).
Let us estimate the desirable NR period ($pN\!R$) considering the convergence of training, even though $pN\!R$ is supposed to be searched empirically. Given a parameter set ${\bm{w}}$ (that is assumed to be close enough to a local minimum) and a learning rate $\gamma$, the loss function of a model $\mathcal{L}({\bm{w}})$ can be approximated as \begin{equation} \label{eq:eq_loss_approxi} \mathcal{L}({\bm{w}}) \simeq \mathcal{L}({\bm{w}}_0) + ({\bm{w}} - {\bm{w}}_0)^\top (H({\bm{w}}_0)/2) ({\bm{w}} - {\bm{w}}_0) \end{equation} using a local quadratic approximation, where $H$ is the Hessian of $\mathcal{L}$ and ${\bm{w}}_0$ is a set of parameters at a local minimum. After regularization is performed at step $t$, ${\bm{w}}$ can be updated by gradient descent as follows: \begin{equation} \label{eq:eq_1} {\bm{w}}_{t+1} = {\bm{w}}_{t} - \gamma \frac{\partial \mathcal{L}}{\partial {\bm{w}}}\big|_{{\bm{w}}={\bm{w}}_t} \simeq {\bm{w}}_{t} - \gamma H({\bm{w}}_0)({\bm{w}}_t - {\bm{w}}_0). \end{equation} Thus, after $pN\!R$, we obtain \begin{equation} \label{eq:eq_w_pNR} {\bm{w}}_{t+pN\!R} = {\bm{w}}_0 + (I-\gamma H({\bm{w}}_0))^{pN\!R} ({\bm{w}}_t - {\bm{w}}_0), \end{equation} where $I$ is an identity matrix. If $H({\bm{w}}_0)$ is positive semi-definite and all eigenvalues of $I - \gamma H({\bm{w}}_0)$ have magnitude less than 1, then ${\bm{w}}_{t+pN\!R}$ converges to ${\bm{w}}_0$ for a sufficiently long $pN\!R$, which should be longer for larger $({\bm{w}}_t - {\bm{w}}_0)$ (i.e., stronger weight regularization) or smaller $\gamma H({\bm{w}}_0)$. \section{Related Work} Periodic compression has been introduced in the literature to gradually improve the compression ratio or to automate the hyper-parameter search process. DropPruning repeatedly drops weights randomly and retrains the model, while some previously dropped weights are unpruned, until the pruning rate reaches a target value \cite{jia2018droppruning}. Weights can be incrementally quantized to improve model accuracy \cite{zhou2017incremental}, or the number of quantization bits can be controlled differently for each layer by a loop based on reinforcement learning \cite{elthakeb2018releq}. Structured pruning and fine-tuning processes can be iterated to increase the pruning rate \cite{molchanov2016pruning, liu2017learning}. All of these previous works assume $pN\!R=1$ (i.e., performing compression for every mini-batch), while the goal is increasing the compression ratio slowly or finding a set of hyper-parameters through iterative fine-tuning stages. Our proposed compression technique can be combined with such periodic compression methods (incremental compression and automatic hyper-parameter selection are also applicable to our proposed method). In the work by \cite{he2018soft}, soft filter pruning is conducted with a $pN\!R$ of one epoch, without an analysis of why such occasional pruning improves model accuracy. \section{Conclusion} In this paper, we introduce a new hyper-parameter called the non-regularization (NR) period, during which weights are updated only by gradient descent. The NR period (or equivalently the regularization frequency) has a critical impact on the overall regularization strength. For example, if the weight decay factor becomes larger, then the NR period can be longer to maintain the regularization strength. Using such a property, we demonstrate that during compression-aware training, the NR period can control the regularization strength given a target compression ratio such that model accuracy is improved compared to the case of compression for every mini-batch.
Across various experiments, we show that there is a particular NR period (with occasional weight compression applied accordingly) that maximizes model accuracy.
{ "timestamp": "2021-05-06T02:10:05", "yymm": "2105", "arxiv_id": "2105.01875", "language": "en", "url": "https://arxiv.org/abs/2105.01875" }
\section{Introduction} The model size of deep neural networks (DNNs) is rapidly growing to support various complex target applications with increasing accuracy goals. Hence, numerous model compression techniques are being actively studied to enable DNN inference within a given service response time when computing resources are limited. Such compression techniques include parameter pruning \cite{DNS, SHan_2015}, low-rank approximation \cite{SVD2013, SVD_Projection}, knowledge distillation \cite{distillation, distillation2018}, and quantization \cite{binaryconnect, Hubara2016}. In this paper, we discuss a quantization method specifically designed to preserve the model accuracy of DNNs. Since quantization plays a major role in determining not only the (expensive) off-chip memory bandwidth but also the basic design principles of the core arithmetic units performing DNN operations \cite{Hubara2016, lin2016fixed}, researchers are paying considerable attention to advancing DNN-dedicated quantization methods. For example, the Straight-Through Estimator (STE) \cite{binaryconnect} is widely used for quantization-aware training to enable binary neural networks in which each weight and activation can be represented by one bit. In order to improve model accuracy after quantization, hyper-parameters for the quantization format can be designed to be differentiable and trainable \cite{sait_uniform, adaptive, ternary2017}. Recently, even encryption techniques and training algorithms have been introduced to implement sub-1-bit quantization with negligible accuracy drop \cite{flexor}. While quantization-aware training is effective in improving model accuracy, a wide range of post-training quantization schemes are also being considered because 1) DNN designers may not have enough expertise to consider model compression, and 2) model compression engineers may not be able to access the whole training dataset. As a result, numerous sophisticated post-training quantization algorithms have been introduced \cite{bit-split, OCS} and are supported by various DNN model development tools \cite{tensorflow2015-whitepaper, pytorch}. Most existing post-training quantization algorithms rely on convex-like optimization, essentially because fine-tuning or retraining as non-convex optimization is not available. For example, quantization errors on weights \cite{Greedy_Quan, OCS, balanced} or layer outputs \cite{adaround, facebook_quan, bit-split} are mainly minimized. When such minimization is an NP-hard problem, quadratic approximations can be adopted to simplify the minimization process \cite{limitquant2017, adaround}. Note that when the amount of weight perturbation through quantization is large, minimizing the quantization error may not be a convex-like optimization, as shown in Figure~\ref{fig:loss_landscape}. In addition, if the loss surface of a pre-trained model is not smooth enough \cite{loss_surface}, then even post-training 8-bit quantization can be translated into a non-convex optimization problem. Thus, a comprehensive approach to post-training quantization must take non-convexity into account.
\begin{figure} \vskip 0.2in \begin{center} \includegraphics[width=0.9\linewidth]{eps/convex_like.eps} \end{center} \caption{A loss landscape example where 8-bit quantization can be a convex-like optimization while optimal 4-bit quantization needs to be achieved by a non-convex optimization.} \label{fig:loss_landscape} \vskip -0.2in \end{figure} Intuitively, a thorough non-convex analysis of post-training quantization would be challenging. In this paper, based on the recognition that post-training quantization inherently requires non-convex approaches, we propose a new post-training quantization method, called Q-Rater, that is especially useful for low-bit quantization. Instead of minimizing quantization errors, Q-Rater searches for (rather than computes) hyper-parameters for quantization that minimize the task loss. The contributions of this paper can be summarized as follows: \begin{itemize} \item We present examples suggesting that minimizing quantization error may not be tightly correlated with minimizing the training loss function. \item We propose new methods to find hyper-parameters determining clipping threshold values and rounding schemes. Such hyper-parameters are obtained by evaluating the training loss through a per-layer grid search. The searched hyper-parameters are then fixed before exploring the next layer. \item We show that bias correction needs to be selectively performed per layer. \item Experimental results show that Q-Rater yields higher model accuracy compared to previous techniques relying on convex-like optimizations, especially when the number of quantization bits is low. \end{itemize} \section{Weight quantization strategy} Let ${\bm{W}}$ and ${\bm{x}}$ represent the weights and inputs of a layer, respectively. By and large, post-training weight quantization strategies can be categorized into three schemes, as shown in Figure~\ref{fig:quant_strategy}, depending on the selection of the target variable to be minimized. Note that Figure~\ref{fig:quant_strategy}(a) and \ref{fig:quant_strategy}(b) represent convex optimizations to calculate quantized ${\bm{W}}$. \begin{figure} \vskip 0.2in \begin{center} \includegraphics[width=0.9\linewidth]{eps/quant_strategy.eps} \end{center} \caption{Comparison of three post-training quantization strategies. A quantization algorithm can be designed to minimize (a) quantization error on weights, (b) reconstruction error by quantization, or (c) task loss error by quantization.} \label{fig:quant_strategy} \vskip -0.2in \end{figure} \paragraph{Weight only} As the simplest method in Figure~\ref{fig:quant_strategy}, weights can be quantized in a local optimization manner, i.e., weight quantization considers neither other layers nor activations. Various objective functions using the difference between ${\bm{W}}$ and quantized weights ${\bm{W}}'$ have been suggested. For example, the mean squared error (MSE) using a histogram \cite{shin2016MSE} or a predetermined distribution model \cite{ACIQ} can be minimized to obtain quantized weights. Minimizing the KL divergence between the full-precision and quantized weight distributions has also been proposed \cite{tensorRT}. Since input data is not utilized, the quantization process can be simple and fast \cite{DFQ}, even though the correlation between weight quantization and task loss is not deeply investigated. \paragraph{Layer output objective} Given a specific domain of input data, quantized weights obtained only by using weights would not produce the best-quantized layer outputs \cite{facebook_quan}.
Correspondingly, to take into account the statistical properties of input domains, samples of inputs can be fed into the network and the quantization error on layer outputs can be minimized \cite{facebook_quan, bit-split}. Suppose that ${\bm{X}}$ is a set of input samples and the objective function is given as $\min ||{\bm{W}}{\bm{X}} - {\bm{W}}'{\bm{X}}'||^2$. Then, compared to the case of Figure~\ref{fig:quant_strategy}(a), the computational complexity (to solve $\min ||{\bm{W}}{\bm{X}} - {\bm{W}}'{\bm{X}}'||^2$) may increase significantly because ${\bm{X}}$ is large. For example, the feature size can be larger than the weight size in convolutional neural networks (CNNs) \cite{resnet}. In addition, the number of features (i.e., input sets) needs to be much larger than 1. For instance, in the case of ResNet-101 on ImageNet, the number of elements in ${\bm{X}}$ can be about 80M with 100 input samples (56$\times$56 output size and 256 channels). \paragraph{Task loss objective} Weight quantization to produce the minimum task loss can be the most effective strategy if we find a sophisticated relationship between weight manipulation for quantization and the corresponding task loss change. Unfortunately, because the task loss function of DNNs is non-convex \cite{deeplearningbook}, there is no analytical solution to find the form of post-training quantized weights. As a result, various approximations are being suggested, mainly quadratic approximations that imply quantization is performed in a convex-like regime \cite{adaround, loss-aware}. Note that such approximations may hold only for a large number of quantization bits, as we demonstrate in Section 4. \section{Activation quantization strategy} Unlike weights, which can be quantized and fixed in advance, activations should be quantized on the fly during inference. In other words, weights are static data while activations are dynamic data that change during inference. Hence, quantization techniques dedicated to static data cannot be adopted for activation quantization. Accordingly, there are relatively fewer studies on (post-training) activation quantization compared to weight quantization. Hyper-parameters regarding activation quantization are thus usually obtained by sampling activations and estimating their distribution. For example, the moving average of activation values obtained by feeding input samples can serve as a clipping threshold \cite{jacob2018quantization}. \section{Issues on the previous methods} Some major issues with previous techniques for post-training quantization include that 1) non-convex properties are not exploited; 2) activations are somewhat overlooked; and 3) consequently, the impact of quantization on task loss is only loosely coupled with the objective function being minimized. \begin{figure}[t] \centering \vskip 0.2in \includegraphics[width=1.0\linewidth]{eps/loss_surface_32_interp_8.eps} \caption{A simple 1-D trajectory investigation through interpolations of weight sets quantized from the same pre-trained ResNet-32 model on CIFAR-10.} \label{fig:1d_trajectory} \vskip -0.2in \end{figure} Let us first show that quantization needs to be considered as a non-convex optimization. When ${\bm{W}}_{q1}$ and ${\bm{W}}_{q2}$ are two quantized weight sets with different numbers of bits, the interpolated ${\bm{W}}_q$ is given as \begin{equation} {\bm{W}}_q = (1-\alpha){\bm{W}}_{q1} + \alpha {\bm{W}}_{q2}, \; \mathrm{for} \; \alpha \in [0,1].
\end{equation} Even though a thorough trajectory study between different parameter sets is highly complicated, evaluating a loss function $L({\bm{W}}_q)$ while sweeping $\alpha$ can provide a 1-D interpolated cross-section of the trajectory to seek a counter-example of the convex-like regime \cite{1d_plot_trajectory}. Figure~\ref{fig:1d_trajectory} traces the training loss function when weight sets are interpolated between full-precision and quantized weights. Note that if a function $f(x)$ is convex, then for any two points $x_1$ and $x_2$, we obtain \begin{equation}\label{eq:convexity} f(\lambda x_1 + (1-\lambda )x_2) \le \lambda f(x_1) + (1-\lambda) f(x_2), \end{equation} for all $\lambda \in [0,1]$. In Figure~\ref{fig:1d_trajectory}, we find that \textbf{the training loss function is non-convex}: the training loss can be even lower at interpolated weight sets, such that Eq.~\ref{eq:convexity} does not hold (e.g., the training loss is non-convex between ${\bm{W}}_{i1}$ and ${\bm{W}}_{i2}$ in Figure~\ref{fig:1d_trajectory}). Hence, Figure~\ref{fig:1d_trajectory} supports our claim that convex-like optimizations could miss better quantization schemes. \begin{figure*}[t] \centering \vskip 0.2in \includegraphics[width=\linewidth]{eps/cvpr_scatter_resnet32_w5a32.eps} \caption{Correlation between quantization error and training loss of ResNet-32 (on CIFAR-10) when the weights of the 1st, the 16th, or the 31st layer are quantized using 4 bits.} \label{fig:resnet32_qerr_and_loss_4bit} \vskip -0.2in \end{figure*} The non-convexity of quantization can also be confirmed by investigating the relationship between quantization error and training loss. Suppose that $n$-bit quantization schemes lie in a convex-like regime. If that were the case, increasing quantization error would increase training loss with a high correlation. As a case study, we apply 4-bit weight quantization to a few layers in ResNet-32 on CIFAR-10 using various quantization schemes that we discuss in the next section. Figure~\ref{fig:resnet32_qerr_and_loss_4bit} illustrates that the correlation between quantization error (on weights) and training loss is low, so low-bit quantization should be regarded as a non-convex problem. Similar observations are also reported in \cite{adaround, facebook_quan} (where reconstruction error is minimized as a practical solution). Analytical solutions based on the first two strategies in Figure~\ref{fig:quant_strategy} may not be appropriate for post-training quantization of high non-convexity. In this work, we show that there is a way to find quantization schemes that directly reduce the accuracy drop without minimizing quantization error on weights or layer outputs. \section{Q-Rater: holistic non-convex quantization} Unlike most previous methods proposing convex optimizations to obtain hyper-parameters for quantization, Q-Rater explores various hyper-parameters and evaluates the corresponding training loss values using a per-layer grid search. Then, Bayesian optimization is performed to fine-tune the set of hyper-parameters in a layer. Once the fine-tuned hyper-parameters are obtained for a layer, they are frozen, and Q-Rater proceeds to the next target layer. For Q-Rater, we consider rounding and clipping as the underlying quantization operations. The bias correction method is also discussed in the context of non-convexity.
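A minimal sketch of this layer-by-layer procedure is shown below. The helpers \texttt{quantize\_layer(...)} and \texttt{eval\_train\_metric()} are hypothetical placeholders, and the Bayesian fine-tuning of the best grid point is omitted for brevity.
\begin{verbatim}
def q_rater(layers, grid, eval_train_metric):
    # Layer-by-layer Q-Rater search: grid-search hyper-parameters for one
    # layer, freeze them, then proceed to the next layer.
    # quantize_layer(...) and eval_train_metric() are hypothetical helpers.
    frozen = {}
    for layer in layers:
        best_hp, best_m = None, float('-inf')
        for hp in grid:                   # e.g., dicts of gamma_n/gamma_s/gamma_c
            quantize_layer(layer, **hp)   # trial quantization of this layer
            m = eval_train_metric()       # training metric with the layer quantized
            if m > best_m:
                best_hp, best_m = hp, m
        quantize_layer(layer, **best_hp)  # freeze the best setting and move on
        frozen[layer] = best_hp
    return frozen
\end{verbatim}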
In this section, Q-Rater operations (i.e., rounding, clipping, and bias correction) are individually introduced and then combined to present the overall impact on model accuracy. Throughout this paper, even though Q-Rater does not rely on particular quantization formats, we assume a \textbf{layer-wise} and \textbf{symmetric} quantization structure for both weights and activations. Such a quantization structure is highly practical because 1) a floating-point scaling factor is shared by all elements in a layer, and thus, whole matrix multiplications (or convolutions) can be performed in fixed-point formats, and 2) zero-point computations are not necessary \cite{jacob2018quantization, bit-split}. We show that Q-Rater with such simple and computationally efficient quantization formats can maintain reasonable model accuracy for low-bit quantization. \subsection{Rounding scheme of Q-Rater} Assume that a (layer-wise) clipping threshold is given as $Th_c (>0)$. As the first step of symmetric quantization, a weight is clipped to be $w_c = \max (\min(w, Th_c ),-Th_c)$. Rounding is then performed to map a continuous value $w_c$ into one of the discrete values that are pre-determined for uniform quantization. Note that rounding-to-nearest (RTN) has been the dominant choice for uniform quantization since it minimizes the per-weight difference after mapping. To be more specific, since we consider symmetric quantization, the scaling factor $s$ is given as $s=Th_c/(2^{q-1}-1)$, where $q$ is the number of quantization bits. Then an integer $w_r$ can be obtained by RTN as $w_r = \left \lfloor w_c / s + 0.5 \right \rfloor$. Training loss can be further reduced when a weight is rounded up or down depending on the interaction between the perturbed weight and the task loss. Recently proposed adaptive rounding, AdaRound \cite{adaround}, investigates such an interaction using quadratic approximations and obtains rounding results by minimizing an asymmetric reconstruction formulation using gradient descent. The rounding principles of AdaRound, however, cannot be applied to activations since its rounding needs to be a fixed hyper-parameter for each weight. To allow rounding up and down for activations as well, Q-Rater takes into account (unequal) ranges of the variable to be quantized. In other words, Q-Rater divides the range of weights or activations into unequal parts, where each part is mapped to a single discrete value. Specifically, we propose the following two rounding schemes to achieve a quantized weight $w_q$, where $f_r(\cdot)$ decides the rounding (activations follow the same procedure). \paragraph{1st-order rounding scheme ($\gamma_n$):} \begin{equation} w_q = s \cdot \left \lfloor \frac{w_c}{s} + 0.5 + f_r(w_c, \gamma_n ) \right \rfloor, \label{eq:1st-order-eq1} \end{equation} \begin{equation} f_r(w_c, \gamma_n) = 0.5 \cdot \sign(w_c\gamma_n) \cdot |\gamma_n |^{|w_r|}, \label{eq:1st-order-eq2} \end{equation} where $\gamma_n \in [-1,1]$ is a hyper-parameter. If $\gamma_n=0$, Eq.~\ref{eq:1st-order-eq1} is equivalent to RTN.
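A minimal NumPy rendering of symmetric quantization with the 1st-order rounding scheme (Eqs.~\ref{eq:1st-order-eq1} and \ref{eq:1st-order-eq2}) is given below; setting \texttt{gamma\_n = 0} recovers plain RTN. This is a sketch for clarity, not the experimental code.
\begin{verbatim}
import numpy as np

def quantize_1st_order(w, Th_c, q, gamma_n):
    # Symmetric q-bit quantization with the 1st-order rounding scheme.
    s = Th_c / (2 ** (q - 1) - 1)                  # scaling factor
    w_c = np.clip(w, -Th_c, Th_c)                  # clipping
    w_r = np.floor(w_c / s + 0.5)                  # RTN integer grid
    f_r = 0.5 * np.sign(w_c * gamma_n) * np.abs(gamma_n) ** np.abs(w_r)
    return s * np.floor(w_c / s + 0.5 + f_r)       # gamma_n = 0 -> plain RTN

w = np.random.randn(1000)
print(np.unique(quantize_1st_order(w, Th_c=2.0, q=3, gamma_n=0.4)).size)
\end{verbatim}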
\paragraph{2nd-order rounding scheme ($\gamma_n, \gamma_s$):} \begin{equation} w_q = s \cdot \left \lfloor \frac{w_c}{s} + 0.5 + f_r(w_c, \gamma_n, \gamma_s ) \right \rfloor, \label{eq:2nd-order-eq1} \end{equation} \begin{equation} \begin{split} f_r(w_c, \gamma_n, \gamma_s) = 0.5 \cdot \sign(w_c\gamma_n (\gamma_s \cdot 2^{q-1} - |w_r|)) \\ \cdot |\gamma_n |^{|||w_r| - \gamma_s \cdot 2^{q-1}| - \beta|}, \end{split} \label{eq:2nd-order-eq2} \end{equation} where $\beta = 2^{q-2}$, and $\gamma_n \in [-1,1]$ and $\gamma_s \in [0,1]$ are hyper-parameters. $\gamma_n=0$ produces RTN regardless of $\gamma_s$. \begin{figure*}[t] \centering \vskip 0.2in \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{eps/mapping.eps} \caption{Proposed mapping method by unequal range division of a weight or activation.} \label{fig:rounding_left_figure} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{eps/nonlinear_2nd_exp_s0.50.eps} \caption{Examples of unequal range division by the 2nd-order rounding scheme when $\gamma_s=0.5$ with various $\gamma_n$.} \label{fig:rounding_right_figure} \end{subfigure} \caption{Proposed rounding scheme based on unequally divided ranges to be mapped to quantized values.} \vskip -0.2in \label{fig:rounding_example} \end{figure*} \bigskip The proposed rounding scheme is illustrated in Figure~\ref{fig:rounding_example} (due to the space limit, all illustrations of 1st-order rounding are provided in Appendix A). Since $f_r(\cdot)$ has the range $[-0.5,0.5]$ for both proposed 1st- and 2nd-order rounding schemes, Eq.~\ref{eq:1st-order-eq1} and \ref{eq:2nd-order-eq1} can select one of three rounding behaviors, namely RTN, round up, and round down. $\gamma_n$ controls the amount of inequality among mapping ranges, while $\gamma_s$ (for 2nd-order rounding) decides from where the mapping ranges start to increase or decrease. \begin{figure*}[t] \centering \vskip 0.2in \includegraphics[width=0.95\linewidth]{eps/resnet32_w3a3_nonlinear_2.eps} \caption{Test accuracy comparison using the ResNet-32 model (on CIFAR-10) when weights and/or activations are rounded by RTN or the proposed (2nd-order) rounding scheme that evaluates train accuracy with various $\gamma_n$ and $\gamma_s$. 3-bit quantization is performed from the second layer to the last one incrementally and the clipping method follows MSE minimization.} \label{fig:comparison_non_linear_2} \vskip -0.2in \end{figure*} Now let us explain how to find hyper-parameters for rounding. First, for a target layer, we find a clipping threshold $Th_c$. Then, we sweep $\gamma_n$ and $\gamma_s$, and find the particular set of $\gamma_n$ and $\gamma_s$ that corresponds to the best training loss. Once the optimal $\gamma_n$ and $\gamma_s$ are obtained through such a grid search, we quantize the weights of the target layer. Those quantized weights are fixed and we proceed to the next layer. We apply different rounding methods to activations and/or weights of the ResNet-32 model on CIFAR-10 while quantization is performed incrementally from the second layer to the last one (note that the first layer is too small in ResNet-32). To focus on the impact of the new rounding scheme, we use MSE minimization to compute the clipping threshold $Th_c$, as verified to be effective in \cite{OCS}, while we propose a new clipping technique in the next subsection. The sweep interval is 0.1 for both $\gamma_n$ and $\gamma_s$.
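The 2nd-order scheme of Eqs.~\ref{eq:2nd-order-eq1} and \ref{eq:2nd-order-eq2} can be rendered analogously to the 1st-order sketch above; again, this is an illustrative transcription of the equations, not the experimental code.
\begin{verbatim}
import numpy as np

def quantize_2nd_order(w, Th_c, q, gamma_n, gamma_s):
    # Symmetric q-bit quantization with the 2nd-order rounding scheme;
    # gamma_n = 0 recovers RTN regardless of gamma_s.
    s = Th_c / (2 ** (q - 1) - 1)
    w_c = np.clip(w, -Th_c, Th_c)
    w_r = np.floor(w_c / s + 0.5)
    beta = 2 ** (q - 2)
    pivot = gamma_s * 2 ** (q - 1)     # where ranges start to grow or shrink
    f_r = (0.5 * np.sign(w_c * gamma_n * (pivot - np.abs(w_r)))
           * np.abs(gamma_n) ** np.abs(np.abs(np.abs(w_r) - pivot) - beta))
    return s * np.floor(w_c / s + 0.5 + f_r)
\end{verbatim}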
Figure~\ref{fig:comparison_non_linear_2} compares quantization results of the ResNet-32 model on CIFAR-10 using RTN or the 2nd-order Q-Rater rounding scheme ($q=3$ for both weights and activations). When $\gamma_s=0.2$, sweeping $\gamma_n$ to quantize the weights of the 31st layer yields high variation in train accuracy. Note that for both weights and activations, RTN (i.e., $\gamma_n=0$) does not provide the best train accuracy, and the optimal $\gamma_n$ is far from zero. A set of $\gamma_n$ values optimized separately for each layer leads to significantly improved test accuracy as layers are quantized incrementally, as shown on the left of Figure~\ref{fig:comparison_non_linear_2}. For the experimental results using ResNet-18 and MobileNetV2 on ImageNet (with only 20K input samples), refer to Appendix A. Across comprehensive experiments, the 2nd-order rounding scheme offers slightly better results than the 1st-order rounding scheme (shown in Appendix A). Thus, for the remainder of this paper, we choose the 2nd-order rounding scheme. \subsection{Clipping method of Q-Rater} \label{sec:clipping} Most DNN models exhibit a bell-shaped distribution for weights and activations \cite{OCS}. Hence, a few outliers in the distribution can decisively affect the overall quality of uniform quantization. Clipping, therefore, is widely used for post-training quantization \cite{ACIQ, jacob2018quantization, OCS} as a trade-off between quantization resolution and the quantization error of outliers. \begin{figure*}[t] \centering \vskip 0.2in \includegraphics[width=0.95\linewidth]{eps/resnet32_w3a3_clip.eps} \caption{Test accuracy comparison using MSE, ACIQ, KL, or our proposed clipping method with the RTN rounding scheme for weights and activations of the ResNet-32 model on CIFAR-10 ($q=3$ for both weights and activations). For each layer, Q-Rater sweeps $\gamma_c$ and evaluates the corresponding train accuracy.} \label{fig:clipping_exp} \vskip -0.2in \end{figure*} Recently proposed clipping methods include MSE minimization (i.e., the L2-norm between the full-precision values and quantized values is minimized \cite{shin2016MSE, sung2016_mse_clip}), ACIQ (i.e., minimizing MSE between a pre-determined distribution model and the full-precision model \cite{ACIQ}), and KL divergence minimization \cite{tensorRT}. An additional method to control outliers is outlier channel splitting (OCS), which duplicates channels containing outliers and then halves their activations or weights without modifying the functionality of the model \cite{OCS}. OCS becomes more effective when combined with existing clipping methods \cite{OCS}. Note that none of these previous clipping techniques recognizes non-convexity. For Q-Rater, we introduce a hyper-parameter $\gamma_c \in (0.0, 1.0)$ to determine $Th_c = \max({\bm{W}}) \cdot \gamma_c$, where $\max({\bm{W}})$ denotes the maximum element of the set of full-precision weights ${\bm{W}}$ (clipping for activations follows the same structure). Similar to our rounding schemes, we sweep $\gamma_c$ from 0 to 1 and investigate the corresponding training accuracy. For the experiments, after the optimal $\gamma_c$ (producing the best training accuracy) is obtained through the sweep, RTN is applied to each weight and activation (RTN is used to study the impact of the clipping method independently). We perform incremental quantization, and Figure~\ref{fig:clipping_exp} presents the results using the ResNet-32 model on CIFAR-10 with different clipping methods.
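The clipping sweep itself is a one-dimensional search. The sketch below is a minimal illustration in which \texttt{eval\_train\_metric(...)} is a hypothetical callback returning the training metric, and $\max(|{\bm{W}}|)$ is used for the threshold as an assumption appropriate for symmetric quantization.
\begin{verbatim}
import numpy as np

def search_clipping(W, q, eval_train_metric, step=0.1):
    # Q-Rater clipping: sweep gamma_c, set Th_c = gamma_c * max(|W|),
    # quantize with RTN, and keep the threshold with the best metric.
    best_gc, best_m = None, -np.inf
    for gamma_c in np.arange(step, 1.0 + 1e-9, step):
        Th_c = gamma_c * np.abs(W).max()
        s = Th_c / (2 ** (q - 1) - 1)
        W_q = s * np.floor(np.clip(W, -Th_c, Th_c) / s + 0.5)   # RTN
        m = eval_train_metric(W_q)   # hypothetical training-metric callback
        if m > best_m:
            best_gc, best_m = gamma_c, m
    return best_gc
\end{verbatim}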
It is clear that for low-bit quantization (e.g., $q=3$ in Figure~\ref{fig:clipping_exp}), conventional techniques may not produce the best training accuracy. When such $\gamma_c$ optimization for a layer is conducted incrementally through all target layers, Q-Rater presents significantly higher test accuracy compared to MSE, ACIQ, and KL. Experimental results using the ResNet-18 model on ImageNet are presented in Appendix B. \begin{figure} \centering \vskip 0.2in \includegraphics[width=0.8\linewidth]{eps/resnet32_w3a3_bc.eps} \caption{Test accuracy of the ResNet-32 model on CIFAR-10 when bias correction is always applied, never applied, or selectively applied by Q-Rater. Weights and activations are quantized (using $q=3$) incrementally with RTN and MSE clipping.} \label{fig:comparison_bias_corr} \vskip -0.2in \end{figure} \subsection{Bias correction of Q-Rater} Bias correction is an operation to compensate for the biased error in output activations after quantization. The amount of shift induced by quantization is diminished by adjusting the bias parameters of the neurons or channels, because output activations shifted by quantization may degrade the quantization quality of the next layer \cite{fighting_bias, DFQ}. The amount of shift can be calculated as the expected error on the output activations, which can be expressed as \begin{equation} \mathbb{E}[{\bm{y}}] -\mathbb{E}[{\bm{y}}'] = \mathbb{E}[{\bm{W}}{\bm{X}}] - \mathbb{E}[{\bm{W}}'{\bm{X}}']. \label{eq:bias_corr} \end{equation} Then, the expected error (or shift) of the output activations is subtracted from the corresponding layer's bias terms. \begin{table*}[t] \small \caption{Top-1 test accuracy (\%) of various models quantized by previous methods or by Q-Rater. Model names are annotated with the dataset and the full-precision test accuracy. All results are obtained with \textbf{layer-wise} and \textbf{symmetric} quantization.} \vskip 0.15in \label{table:ablation_study_final_results} \centering \begin{threeparttable} \begin{tabular}{cc|ccc|ccccc} \hline \mr{2}{Model} & \mr{2}{W/A\\bits} & \multicolumn{3}{c|}{Clip (+RTN)\tnote{1}} & \multicolumn{5}{c}{Q-Rater (Proposed)} \\ \cline{3-10} & & MSE & ACIQ & KL & \ding{202}(Round)\tnote{2} & \ding{203}(Clip) & \ding{204}(Bias) & \ding{202}+\ding{203} & \ding{202}+\ding{203}+\ding{204} \\ \hline \mr{3}{\textbf{ResNet-32}\\\textit{on CIFAR-10}\\(92.63)} & 5/5 & 90.64 & 75.68 & 90.45 & 91.70 & 91.68 & 91.64 & 91.93 & 91.74 \\ & 4/4 & 84.46 & 24.57 & 71.32 & 89.03 & 89.60 & 88.47 & 89.71 & 90.24 \\ & 3/3 & 16.86 & 11.23 & 58.48 & 75.74 & 72.61 & 67.63 & 77.35 & 79.69 \\ \hline \mr{3}{\textbf{ResNet-18}\\\textit{on ImageNet}\\(69.69)} & 8/8 & 69.25 & 68.15 & 68.39 & 69.27 & 69.35 & 69.50 & 69.33 & 69.43 \\ & 6/6 & 66.75 & 59.78 & 65.64 & 67.34 & 67.91 & 67.42 & 67.95 & 68.46 \\ & 4/4 & 32.16 & 2.10 & 11.27 & 45.18 & 52.59 & 51.59 & 54.87 & 59.17 \\ \hline \mr{3}{\textbf{MobileNetV2}\\\textit{on ImageNet}\\(71.78)} & 8/8 & 69.81 & 69.98 & 69.98 & 71.01 & 70.73 & 71.34 & 70.92 & 71.25 \\ & 6/6 & 36.82 & 11.08 & 38.63 & 64.15 & 61.04 & 64.89 & 65.33 & 67.64 \\ & 4/4 & 0.12 & 0.39 & 0.22 & 7.08 & 4.67 & 6.58 & 15.58 & 25.35 \\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item[1] Reference code: https://github.com/cornell-zhang/dnn-quant-ocs \item[2] The 2nd-order rounding scheme is applied. \end{tablenotes} \end{threeparttable} \vskip -0.1in \end{table*}
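As a rough sketch of this compensation step, the shift of Eq.~\ref{eq:bias_corr} can be estimated per output channel on a calibration batch and folded into the bias terms; the array names below are hypothetical.
\begin{verbatim}
import numpy as np

def corrected_bias(bias, y_fp, y_q):
    # y_fp, y_q: output activations (batch x channels) computed with
    # full-precision and quantized weights, respectively.
    # Per-channel estimate of E[y] - E[y'], added back to the bias.
    shift = y_fp.mean(axis=0) - y_q.mean(axis=0)
    return bias + shift
\end{verbatim}
Whether this correction is actually kept for a layer is decided by the accuracy-based criterion described next.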
Bias correction has been a supplementary and optional technique for quantization. For example, it is not introduced in \cite{OCS}, while it plays a key role in enhancing model accuracy in \cite{fighting_bias, DFQ}. In the context of non-convexity, Q-Rater compares two model accuracy values evaluated with and without bias correction for a layer. Only when bias correction turns out to improve model accuracy for a given layer does Q-Rater compensate the bias terms of the output activations. Such selective application of bias correction implies that reducing quantization error on layer outputs (described in Figure~\ref{fig:quant_strategy}) does not necessarily reduce the drop in model accuracy. Figure~\ref{fig:comparison_bias_corr} presents incremental quantization results using ResNet-32 on CIFAR-10 with different bias correction schemes (when weights and activations are quantized by using RTN and MSE clipping). When combined with RTN and MSE clipping, bias correction is effective for the initial layers in Figure~\ref{fig:comparison_bias_corr}. Then, model accuracy drops sharply for the last few layers. On the other hand, interestingly, selective bias correction by Q-Rater offers significantly improved test accuracy across all layers. Hence, searching for quantization schemes layer by layer based on training-accuracy monitoring is effective for bias correction as well. Interestingly, as shown in Appendix C, more layers require bias correction as $q$ decreases, while the locations of such layers seem to be random. It is also interesting that MobileNetV2 demands bias correction for layers close to the inputs and outputs. Refer to Appendix C for experimental results using ResNet-18 and MobileNetV2 on ImageNet. \subsection{Combination effects on Q-Rater operations} So far, we have studied the Q-Rater operations (i.e., rounding, clipping, and bias correction) individually. Now, we combine those operations to examine their combined effects. For the experiments, as a reference quantization scheme, weights and activations are quantized by using RTN and MSE-based clipping without bias correction. Then each quantization operation is replaced with that of Q-Rater. We perform a grid search to find $\gamma_n \in [-1,1]$, $\gamma_s \in [0,1]$, and $\gamma_c \in [0,1]$ (by monitoring the training task loss), with search resolutions of 0.1, 0.25, and 0.1, respectively. The entire Q-Rater procedure is described in Algorithm 1 (Appendix E). When the Q-Rater operations are combined, clipping is performed first, followed by rounding. After weights and activations are quantized by $\gamma_n, \gamma_s$, and $\gamma_c$, the selective bias correction of Q-Rater is conducted. Note that we restrict our attention to layer-wise and symmetric quantization, which is efficient for inference but challenging in terms of maintaining test accuracy. In the case of the ImageNet dataset, we use only 20K samples for fast model evaluations. Table~\ref{table:ablation_study_final_results} presents the top-1 test accuracy of ResNet-32 (on CIFAR-10), ResNet-18 (on ImageNet), and MobileNetV2 (on ImageNet) when quantized by selected previous methods and by Q-Rater. The previous methods include three different clipping methods (i.e., MSE, ACIQ, and KL) and RTN without bias correction (as introduced in \cite{OCS}). Note that all three individual Q-Rater operations outperform the previous methods. We also observe that combining multiple Q-Rater operations can substantially improve test accuracy for low-bit quantization. \begin{table*}[h] \small \caption{Top-1 test accuracy (\%) comparison results.
Q-Rater performs clipping (using $\gamma_c$), rounding (2nd-order, using $\gamma_n$ and $\gamma_s$), and selective bias correction sequentially for each layer.} \label{table:final_experimental_results} \vskip 0.15in \centering \begin{threeparttable} \begin{tabular}{ccc|ccc|cc} \hline \mr{2}{Dataset} & \mr{2}{Model\\(Full Acc.)} & \mr{2}{W/A\\bits} & \mr{2}{MSE} & \mr{2}{OCS\tnote{1}\\(+MSE)} & \mr{2}{Bit-Split\tnote{2}} & \multicolumn{2}{c}{Q-Rater (\ding{202}+\ding{203}+\ding{204})} \\ \cline{7-8} & & & & & & Grid Search & Bayesian Opt. \\ \hline \mr{9}{ImageNet} & \mr{3}{ResNet-18\\(69.69)} & 8/8 & 69.31 & 69.38 & 69.48 & \textbf{69.43} & \textbf{69.67} \\ & & 6/6 & 66.75 & 67.51 & 68.59 & \textbf{68.46} & \textbf{68.60} \\ & & 4/4 & 32.16 & 34.85 & 55.18 & \textbf{59.17} & \textbf{60.77} \\ \cline{2-8} & \mr{3}{ResNet-101\\(77.79)} & 8/8 & 77.02 & 77.18 & 77.20 & \textbf{77.10} & \textbf{77.16} \\ & & 6/6 & 74.20 & 75.36 & 0.08\tnote{3} & \textbf{75.97} & \textbf{76.21} \\ & & 4/4 & 19.48 & 29.76 & 0.06\tnote{3} & \textbf{64.49} & \textbf{68.24} \\ \cline{2-8} & \mr{3}{MobileNetV2\\(71.78)} & 8/8 & 69.81 & N/A & 70.60\tnote{4} & \textbf{71.25} & \textbf{71.10} \\ & & 6/6 & 36.82 & N/A & 53.88\tnote{4} & \textbf{67.64} & \textbf{68.20} \\ & & 4/4 & 0.12 & N/A & 0.28\tnote{4} & \textbf{25.35} & \textbf{22.95} \\ \hline \mr{3}{CIFAR-10} & \mr{3}{ResNet-32\\(92.63)} & 5/5 & 90.64 & 91.14 & 91.68 & \textbf{91.74} & \textbf{91.84} \\ & & 4/4 & 84.46 & 86.87 & 89.50 & \textbf{90.24} & \textbf{90.15} \\ & & 3/3 & 16.86 & 35.97 & 76.07 & \textbf{79.69} & \textbf{79.71} \\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item[1] Reference code: https://github.com/cornell-zhang/dnn-quant-ocs \item[2] Reference code: https://github.com/wps712/BitSplit \item[3] Numerical instability is observed, probably because of excessively large matrix dimensions. \item[4] While depthwise convolution is not discussed in the reference, for our experiments we revised the reference code so that depthwise convolution layers are quantized in the same way as fully-connected layers. \end{tablenotes} \end{threeparttable} \vskip -0.1in \end{table*} \section{Experimental results} The success of Q-Rater largely depends on the search quality of the hyper-parameters $\gamma_n$, $\gamma_s$, and $\gamma_c$. Even though a coarse-grained grid search can produce impressive results, as shown in Table~\ref{table:ablation_study_final_results}, as a way of fine-tuning the hyper-parameters we adopt Bayesian Optimization\footnote{We use a publicly available code in \cite{Nogueira2014Bayesian}.} (BO), an automated technique for searching parameter spaces. After a brief introduction of the BO technique, we introduce outlier channel splitting (OCS) \cite{OCS} and bit-split \cite{bit-split} as examples of minimizing quantization error on weights and on layer outputs, respectively. Then, we show comparison results on model accuracy. \paragraph{Bayesian optimization} For a given dataset $D$ and a given parameter vector ${\bm{h}}{=}\{\gamma_c\}$ or ${\bm{h}}{=}\{\gamma_n, \gamma_s\}$, BO aims to find an optimal ${\bm{h}}^{*}$ which maximizes an evaluation function $f({\bm{h}}, D)$ for a network. Such a search process is performed four times for each layer in our suggested algorithm (refer to Algorithm 1 in the Appendix). To avoid over-fitting the quantization parameters to the test dataset, the training dataset or a subset of it is used for evaluation at each step.
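For illustration, a minimal sketch using the publicly available package of \cite{Nogueira2014Bayesian} could look as follows; \texttt{eval\_train\_acc} is again a hypothetical callback returning the training accuracy for a candidate rounding configuration of the current layer.
\begin{verbatim}
from bayes_opt import BayesianOptimization  # package of [Nogueira2014Bayesian]

def bo_search_rounding(eval_train_acc, n_iter=50):
    # Maximize training accuracy over the 2nd-order rounding parameters.
    optimizer = BayesianOptimization(
        f=eval_train_acc,  # f(gamma_n, gamma_s) -> training accuracy
        pbounds={"gamma_n": (-1.0, 1.0), "gamma_s": (0.0, 1.0)},
        random_state=0,
    )
    optimizer.maximize(init_points=5, n_iter=n_iter)
    return optimizer.max["params"]
\end{verbatim}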
Using the grid search results as the initial search space, in our experiments, BO explores an additional 50 hyper-parameter sets. \paragraph{Outlier channel splitting} OCS represents recent efforts to minimize quantization error on weights. By halving outlier weights without modifying the model structure, OCS enables aggressive clipping, which plays a pivotal role in uniform quantization. It is reported that OCS improves MSE-, ACIQ-, and KL-based clipping noticeably, especially for low-bit quantization of weights \cite{OCS}. \paragraph{Bit-split} We introduce the bit-split technique as one of the attempts to minimize reconstruction error (i.e., quantization error on layer outputs) for post-training uniform quantization. Bit-split formulates quantization as an optimization problem in which quantization-related parameters are computed in an iterative fashion to solve a complicated objective function. While partial analytical solutions are obtained during the iterative procedures, the matrix multiplications can entail large ${\bm{X}}$ matrices whose dimensions increase with the number of input samples and the size of input features. \paragraph{Comparison results} Table~\ref{table:final_experimental_results} presents comparison results on the top-1 test accuracy of ResNet-18, ResNet-101 (representing large-depth models), and MobileNetV2 (challenging to quantize, mainly due to depthwise layers) on the ImageNet dataset, and ResNet-32 on the CIFAR-10 dataset. \textbf{For ImageNet, Q-Rater does not need the entire dataset.} Instead, we use only 20K random samples to estimate the loss function (see Appendix D for the relationship between the number of samples and quantization quality). For Q-Rater with grid search, we use the same search resolutions as in the previous section (i.e., 0.1 for $\gamma_n$, 0.25 for $\gamma_s$, and 0.1 for $\gamma_c$). We observe that, even for the layer-wise and symmetric quantization chosen for this work, OCS indeed improves on MSE clipping. Bit-split then outperforms OCS except for the ResNet-101 model, where we observe numerical instability during computations (probably because of overly complicated analytical equations, which could potentially be resolved with higher precision such as double precision). Note that since we assume layer-wise quantization, bit-split entails considerably more complicated computations than the channel-wise quantization chosen in \cite{bit-split}. For all models in the table, Q-Rater presents the best test accuracy, while BO slightly exceeds grid search in most cases. \section{Conclusion} In this paper, we propose a new post-training uniform quantization technique, called Q-Rater, which does not depend on convex optimization. While previous works mainly compute hyper-parameters for quantization so as to minimize quantization errors on weights or layer outputs, we suggest performing a grid search of quantization hyper-parameters by evaluating the corresponding training loss. For each layer, the hyper-parameters for clipping and rounding are searched, and then bias correction is conducted selectively. Through layer-wise incremental optimization of the hyper-parameters, Q-Rater presents significantly higher model accuracy for various CNN models, even assuming a simple layer-wise and symmetric quantization format.
{ "timestamp": "2021-05-06T02:09:50", "yymm": "2105", "arxiv_id": "2105.01868", "language": "en", "url": "https://arxiv.org/abs/2105.01868" }
\section{Introduction} \label{sec:1} Various exotic symmetry breakings, such as violations of the rotational, time-reversal, and inversion symmetries, have been discovered in many strongly correlated metals thanks to recent experimental progress. For example, electronic nematic states (= rotational symmetry breaking) without magnetization commonly emerge in Fe-based and cuprate superconductors. These exotic symmetry-breaking states are generally called ``quantum liquid crystal states'', and they are totally different from the conventional local spin/charge density waves (SDW/CDW) studied so far. These exotic orders are ``hidden'' owing to the difficulty of their experimental detection, yet they are fundamental states of metals because their transition temperatures are frequently higher than those of conventional SDW/CDW orders. In this article, we investigate the rich variety of exotic orderings in a unified way, in terms of non-$A_{1g}$ symmetry breaking in the self-energy, which is represented by the form factor $f_{\k,\q}$. Figure \ref{fig1-1} (a) shows a schematic phase diagram of cuprate superconductors. Below $T_{\rm CDW}\sim 200$ K, a smectic $p$-orbital charge-density-wave ($p$O-CDW) emerges at the finite wavevector $\q\approx(\pi/2,0)$ in many compounds \cite{Y-Xray1,Bi-Xray1,STM-Kohsaka,STM-Fujita}. The discovery of this smectic $p$O-CDW has triggered significant progress in theoretical studies. A natural candidate for the order parameter behind the $p$O-CDW is the $d$-symmetry bond order (BO) shown in Fig.~\ref{fig1-1}(b), where $\delta t$ represents the modulation of the hopping integrals. Various spin-fluctuation-driven BO mechanisms have been proposed \cite{Davis:2013ce,Metlitski:2010gf,Husemann:2012eb,Efetov:2013ib,Sachdev:2013bo,Mishra:2015fb,Yamakawa-CDW,Orth:2017,Yamase}. Also, pair-density-wave scenarios have been proposed in Refs. \onlinecite{Berg:2009gt,Fradkin:2015co,Wang:2015iq,Lee:2014ka,Agterberg}. \begin{figure}[htb] \includegraphics[width=8.5cm]{phase2.eps} \caption{(a) Schematic phase diagram of hole-doped cuprates. (b) Smectic $d$-symmetry bond order at $\q=(\pi/2,0)$ for $T<T_{\rm CDW}$ and (c) the nematic one at $\q={\bm0}$ for $T<T^*$. (d) Spin loop-current order pattern at $\q=(\pi/2,\pi/2)$. (e) Intra-unit-cell charge loop current proposed in Ref. \onlinecite{Varma}. } \label{fig1-1} \end{figure} Another important unsolved issue is the origin of the pseudogap in the density of states (DoS) below $T^*$. At present, it is an open problem whether the pseudogap is a distinct phase or a continuous crossover. In the latter case, short-range spin fluctuations at $T\sim T^*$ can induce the pseudogap through large quasiparticle damping \cite{TPSC,Kotliar,Maier}. As for the former case, experimental evidence of a phase transition at $T^*$ has been accumulating \cite{Shekhter:2013eh,ARPES-Science2011,Fujimori-nematic,Y-Sato,Hg-Murayama,Shibauchi-nematic}, {\it e.g.}, from ARPES \cite{ARPES-Science2011,Fujimori-nematic}, magnetic torque \cite{Y-Sato,Hg-Murayama}, polarized neutron diffraction (PND) \cite{TRSB-neutron1,TRSB-neutron2}, and nematic susceptibility \cite{Shibauchi-nematic} measurements. The presence or absence of time-reversal symmetry (TRS) in the pseudogap phase has remained unresolved for years. We first discuss candidates for a TRS-preserving order parameter at $T^*$: considering the $C_4$ symmetry breaking below $T^*$ \cite{Y-Sato} and the enhancement of the nematic susceptibility above $T^*$ \cite{Ishida-nematic}, the $d$-wave nematic ($\q={\bm0}$) order in
Fig.~\ref{fig1-1}(c) would naturally be expected. A similar nematic transition is realized in many Fe-based superconductors \cite{Onari:2012jb,YYamakawa-PRX2016,Onari-FeSe,Onari-B2g,Onari-AFBO}. On the other hand, a pseudogap is not induced by intra-unit-cell orders. Another candidate for a TRS-preserving order is the staggered ($\q=(\pi/2,\pi/2)$) spontaneous spin loop-current (sLC) order shown in Fig.~\ref{fig1-1}(d). The sLC order is ``hidden'' in that neither an internal magnetic field nor a charge density modulation is induced, whereas the predicted sLC with finite wavenumber naturally gives the Fermi arc structure and the pseudogap in the DoS. We also discuss candidates for a TRS-breaking order parameter at $T^*$: Figure \ref{fig1-1} (e) depicts the intra-unit-cell ($\q={\bm0}$) charge loop-current (cLC) order proposed by Varma \cite{Varma}, which accompanies a magnetic field. Recently, a number of experimental reports of the cLC order have accumulated. For instance, in quasi-1D two-leg ladder cuprates, PND measurements reveal broken time-reversal symmetry \cite{cLC-2leg} and conclude that a cLC appears. cLCs in the spin-disordered phase are also reported in cuprates \cite{TRSB-neutron1,TRSB-neutron2} and iridates \cite{TRSB-iridate}, and their existence is also supported by optical second harmonic generation (SHG) \cite{SHG-cuprate,SHG-iridate} and Kerr effect \cite{Kerr-cuprate} measurements. Theoretically, the quantum phase transitions in metals depicted in Figs. \ref{fig1-1} (b)-(e) are described by the form factor $\delta t_{i,j}^\s$, which corresponds to spontaneous symmetry breaking in the self-energy. Here, $i$, $j$ represent the sites and $\s$ is the spin index. Hereafter, we focus on the exotic nature of the ``non-local form factor $i\ne j$''. The original hopping integral between sites $i$ and $j$ is modified to $t_{i,j}+\delta t_{i,j}^\s$. Hermiticity leads to the relation $\delta t_{i,j}^\s = (\delta t_{j,i}^\s)^*$. Here, we set $\delta t_{i,j}^{c(s)}\equiv (\delta t_{i,j}^\uparrow+(-)\delta t_{i,j}^\downarrow)/2$. For example, the BO is given by a real, even-parity $\delta t_{i,j}^c$, as shown in Fig. \ref{fig1-2} (a) \cite{Bulut,Chubukov,Sachdev,Metzner,Davis:2013ce,Berg:2009gt,Yamakawa-CDW,Tsuchiizu-CDW,Kawaguchi-CDW}. In contrast, the cLC order is given by a pure-imaginary, odd-parity form factor, $\delta t_{i,j}^c=-\delta t_{j,i}^c = {\rm imaginary}$ \cite{Varma,Affleck,FCZhang,Schultz}. In this case, $\delta t_{i,j}^c$ represents a fictitious Peierls phase, and the spontaneous cLC is induced as shown in Fig. \ref{fig1-2} (b). The cLC order causes a real magnetic field. In contrast, a spin current flows if the pure-imaginary order parameter is odd under both spatial and spin inversion, $\delta t_{i,j}^{s}=-\delta t_{j,i}^{s}= {\rm imaginary}$. Then, $\delta t_{i,j}^s$ represents a spin-dependent fictitious Peierls phase, and therefore the spontaneous sLC order in Fig. \ref{fig1-2} (c) is induced \cite{Schultz,Nersesyan,Ozaki,Ikeda,Fujimoto,Sr2IrO4}. \begin{figure}[htb] \includegraphics[width=9cm]{formfactor.eps} \caption{Form factor $\delta t_{i,j}^{c,s}$ for each nonlocal order: (a) BO ($\delta t_{i,j}^{c}=\delta t_{j,i}^{c}=a$), (b) cLC ($\delta t_{i,j}^{c}=-\delta t_{j,i}^{c}=ia$), and (c) sLC ($\delta t_{i,j}^{s}=-\delta t_{j,i}^{s}=ia$). Here, $a$ is a real quantity.
} \label{fig1-2} \end{figure} From the microscopic viewpoint, however, the mechanism of these exotic nonlocal orders is highly nontrivial, since local ($i=j$) SDW/CDW orders occur within the mean-field approximation (MFA). For example, if the Coulomb interaction is local, the induced order is always local within the MFA. One may consider that non-local form factors are realized in extended Hubbard models with non-local Coulomb interactions. However, within the MFA, the realization conditions for nonlocal orders are severe even in the extended $U$-$V$-$J$ Hubbard model \cite{Nersesyan}. These facts indicate the importance of non-local effective interactions due to beyond-mean-field many-body effects, called vertex corrections (VCs). This is the main issue of the present article. Recently, the important roles of the VCs in nonlocal orders have been revealed step by step. The nematic order in Fe-based superconductors is induced by the Aslamazov-Larkin (AL) VCs \cite{Onari:2012jb,YYamakawa-PRX2016,Onari-FeSe,Onari-B2g,Onari-AFBO}, which are significant near the magnetic quantum critical point (QCP). The physical meaning of the AL-VCs is the ``quantum interference'' between different spin fluctuations at ${\bm Q}$ and ${\bm Q}'$, which is depicted in Fig. \ref{fig-interference}. Through this mechanism, a non-local order at ${\bm Q}-{\bm Q}'$ is established. This mechanism has been applied to many strongly correlated metals to explain various hidden orders \cite{Onari:2012jb,YYamakawa-PRX2016,Onari-FeSe,Onari-B2g,Onari-AFBO}. In cuprate superconductors, the emergence of the BO and the sLC order has been discussed based on the paramagnon-interference (AL-VC) mechanism \cite{Yamakawa-CDW,Kawaguchi-CDW,Tsuchiizu-CDW,Tsuchiizu:2016ix}. In addition, other spin-fluctuation-driven mechanisms have been successfully applied \cite{Davis:2013ce,Metlitski:2010gf,Husemann:2012eb,Efetov:2013ib,Sachdev:2013bo,Mishra:2015fb,Orth:2017}. The renormalization group (RG) theory is a very powerful method for studying the quantum interference, because huge numbers of parquet-type diagrams are generated by solving the RG differential equation. Although the conventional $N$-patch RG \cite{Zanchi:1996ul,Zanchi:1998ua,Zanchi:2000ua} is applicable to models with simple band dispersions, this constraint is alleviated by combining the RG with the constrained random-phase approximation (cRPA). Using this RG+cRPA method, we can calculate general charge (spin) susceptibilities with non-local form factors, $\chi^{c(s)}_f(\q)$, by including higher-order VCs. The realized order with form factor $f$ at wavevector $\q$ is determined by maximizing the function $\chi^{c(s)}_f(\q)$. \begin{figure}[htb] \includegraphics[width=4.5cm]{interference.eps} \caption{Interference mechanism due to spin fluctuations. It causes exotic $d$- and $p$-wave quantum liquid crystal phases such as the nematic/smectic BO, sLC, and cLC orders.} \label{fig-interference} \end{figure} In this article, we perform the RG analysis of exotic ``quantum liquid crystal states'' described by non-$A_{1g}$, non-local form factors in cuprate superconductors, $\kappa$-(BEDT-TTF)$_2$X, and coupled-chain Hubbard models. It is clarified that the paramagnon-interference mechanism causes rich quantum phase transitions with $d$-wave and $p$-wave form factors in typical low-dimensional Hubbard models. This paper is organized as follows: in Sect.
\ref{sec:2}, we explain the formalism of the RG method, based on which we derive the general charge- (spin-) channel susceptibility with a form factor, $\chi_f^{c(s)}(\q)$. We also explain how to derive the optimized form factor $f$. In Sects. \ref{sec:3} and \ref{sec:4}, we discuss the BO and sLC order formation in cuprate superconductors and $\kappa$-(BEDT-TTF)$_2$X, respectively. In Sect.~\ref{sec:5}, we discuss the cLC order induced in the quasi-1D Hubbard model with geometrical frustration. The BO, sLC, and cLC orders are induced by the paramagnon-interference mechanism, which is totally dropped in the MFA. The discussion and summary are presented in Sect. \ref{sec:6}. \section{renormalization group theory} \label{sec:2} ``How to treat many-body effects'' is one of the long-standing problems in strongly correlated electron systems. Until now, a number of theoretical approaches have been developed \cite{George-rev,Yokoyama,Varma2,various1,various2}, such as dynamical mean-field theory (DMFT) and variational Monte Carlo (VMC) studies. One fundamental approach is perturbation theory based on the diagrammatic expansion. However, it is hopeless to consider all possible diagrams, which continue to infinite order. Thus, low-order perturbation theory fails to explain strongly correlated systems, such as the Mott transition in cuprates. For a long time, the random-phase approximation (RPA), in which the infinite series of particle-hole loop diagrams is considered, has been used to explain various magnetic transitions. On the other hand, the RPA cannot explain the recently discovered exotic non-local transitions, since these phenomena originate from mode coupling among spin, charge, and orbital degrees of freedom. Therefore, it is necessary to consider the vertex corrections (VCs) neglected in the conventional RPA. The renormalization group (RG) is a powerful tool to calculate the VCs accurately in an unbiased way. Thanks to recent theoretical improvements, exotic non-local susceptibilities $\chi^{c(s)}_f (\q)$ with form factors $f_{\k,\q}$ can be analyzed by the RG with high accuracy. For instance, in this article, we calculate the non-local susceptibilities in cuprates for the nematic/smectic $d$-symmetry BO as well as the $p$-symmetry sLC and cLC ordered phases. Hereafter, we explain the optimization of form factors within the RG scheme. \subsection{RG formalism} Here, we explain the RG method based on one-orbital Hubbard models whose interaction part is given by \begin{eqnarray} \hat{H}_{I}=&&\sum_{\k_{i},\sigma_i} \Gamma^{\sigma_1 \sigma_2 \sigma_3 \sigma_4}_{\k_1 \k_2\k _3 \k_4} c^{\dagger}_{\k_1\sigma_1} c_{\k_2\sigma_2} c^{\dagger}_{\k_4\sigma_3} c_{\k_3\sigma_4}, \label{eq:34} \end{eqnarray} where $c^{\dagger}_{\k\sigma}$ is the creation operator of the electron, and $\hat \Gamma$ is the fully antisymmetrized four-point bare vertex. In the case of the on-site Coulomb repulsion $U$, the non-zero components of $\hat \Gamma$ are given as $\Gamma^{\sigma \sigma \bar{\sigma} \bar{\sigma}}= -\Gamma^{\sigma \bar{\sigma} \bar{\sigma} \sigma}=U/4$, so that the requirement of the Pauli exclusion principle is satisfied. When the system has SU(2) symmetry in spin space, the following relation is satisfied, \begin{eqnarray} \Gamma^{\sigma \sigma \sigma \sigma}_{\k_1\k_2\k_3\k_4}-\Gamma^{\sigma \sigma \sigma \sigma}_{\k_1\k_3\k_2\k_4}=\Gamma^{\sigma \sigma \bar{\sigma} \bar{\sigma}}_{\k_1\k_2\k_3\k_4}-\Gamma^{\sigma \sigma \bar{\sigma} \bar{\sigma}}_{\k_1\k_3\k_2\k_4}.
\label{eqn:SU2} \end{eqnarray} Note that $\Gamma^{\sigma \sigma \sigma \sigma}$ and $\Gamma^{\sigma \sigma \bar{\sigma} \bar{\sigma}}$ stand for the four-point vertex functions with parallel ($g^{\parallel}$) and anti-parallel ($g^{\perp}$) spins in $g$-ology theory \cite{Emery,Bourbonnais,Kishine1,Kishine4, Suzumura,Suzumura2,SSolyom}. Thus, Eq. (\ref{eqn:SU2}) is equivalent to the relation $g^{\parallel}_{1}-g^{\parallel}_{2}=g^{\perp}_{1}-g^{\perp}_{2}$, which we will use in Sect.~\ref{sec:5}. Also, the tensor $\Gamma$ is uniquely decomposed into spin and charge channels as \begin{eqnarray} \Gamma_{\k_1\k_2\k_3 \k_4}^{\sigma \sigma' \rho \rho'} =\frac{1}{2}\Gamma^{s}_{\k_1\k_2\k_3 \k_4} \vec{\sigma}_{\sigma \sigma'} \cdot \vec{\sigma}_{\rho' \rho} +\frac{1}{2}\Gamma^{c}_{\k_1\k_2\k_3 \k_4} \delta_{\sigma \sigma'}\delta_{\rho' \rho}, \label{eqn:Gamma2} \end{eqnarray} where $\vec{\sigma}$ is the vector of Pauli matrices. \begin{figure}[htb] \includegraphics[width=9cm]{RGpluscRPA.eps} \caption{(a) Diagrammatic explanation of the RG+cRPA method. (b) RG equations of the susceptibility $\chi^{c(s)}_f(\q)$ and the three-point vertex function $R_{\q\k\k'}$ with the form factor $f_{\k,\q}$.} \label{fig:RG} \end{figure} The RG concept is based on gradually changing the energy scale at which the electron system is viewed. For this purpose, the logarithmic energy cutoff is introduced as \begin{eqnarray} \Lambda=\Lambda_{0}e^{-l} \hspace{10pt} (l\ge0). \label{eqn:logene} \end{eqnarray} Also, the Green function with the energy cutoff is defined as \begin{eqnarray} G(k)\equiv (i\e_n-\xi_\k)^{-1} \Theta(\Lambda-|\xi_{\k}|), \end{eqnarray} where $\xi_{\k}$ is the energy dispersion of the electron, and $k=(\k,\e_n)$ with wavevector $\k$ and fermionic Matsubara frequency $\e_n$. $\Theta$ is the Heaviside step function reflecting the high-energy cutoff within the RG framework. Based on the path-integral RG formulation, the fermionic field operators on the energy shell $\Lambda -d \Lambda < |\xi_{\k}| < \Lambda$ are integrated out. Then, the effective low-energy four-point vertex with $|\xi_{\k}| < \Lambda- d \Lambda$ is obtained. This complex procedure is automatically performed by solving the following differential equation, the so-called RG equation. Within the one-loop approximation, it is given by \begin{eqnarray} \hspace{-5pt}\frac{d}{d\Lambda} \Gamma_{\k_1 \k_2 \k_3 \k_4} &=& -\frac{T}{N}\sum_{\k,\k',\e_n,\e_{m}} \frac{d}{d\Lambda} \left[ G(\k,\e_n) \, G(\k',\e_m) \right] \nonumber \\ & &\! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \times \Bigl\{ \Bigl[ \Gamma_{\k_1\k_2\k\k'} \Gamma_{\k\k'\k_3\k_4} -\Gamma_{\k_1\k_3\k\k'} \Gamma_{\k\k'\k_2\k_4} \Bigr]\delta_{\e_n,\e_m} \nonumber \\ & &\! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! + \frac{1}{2} \Gamma_{\k_1\k\k'\k_4}\Gamma_{\k\k_2\k_3\k'} \delta_{\e_n,-\e_{m}} \Bigr\}. \label{eqn:S-RG} \end{eqnarray} The first and second terms on the right-hand side originate from the Peierls-channel scattering, while the third one corresponds to the Cooper-channel scattering. Here, we employ the Wick-ordered scheme \cite{Wick}, in which the cutoff function $\Theta_<= \Theta(\Lambda-|\xi_{\k}|)$ is used for the Green function \cite{Metzner}. Thus, the VCs due to the higher-energy processes are included more accurately than in the Kadanoff-Wilson scheme with $\Theta_>= \Theta(|\xi_{\k}|-\Lambda)$ in Ref. \onlinecite{Tsuchiizu:2016ix}.
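As a toy numerical illustration of these definitions (a sketch of our own, not part of the formalism), the logarithmic cutoff of Eq. (\ref{eqn:logene}) and the cutoff Green function can be coded directly:
\begin{verbatim}
import numpy as np

LAMBDA_0 = 3.0  # initial cutoff, of the order of the bandwidth (assumed value)

def cutoff(l):
    # Logarithmic energy scale: Lambda_l = Lambda_0 * exp(-l), l >= 0.
    return LAMBDA_0 * np.exp(-l)

def green(eps_n, xi_k, lam):
    # Matsubara Green function with the sharp shell cutoff
    # Theta(lam - |xi_k|), as used in the Wick-ordered scheme.
    return np.where(np.abs(xi_k) < lam, 1.0 / (1j * eps_n - xi_k), 0.0)
\end{verbatim}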
The renormalization procedure of the four-point vertex function is summarized as follows: (i) at the starting point $\Lambda=W$ (the bandwidth), $\Gamma$ takes the bare value $U$; (ii) $\Gamma$ is gradually renormalized toward $\Lambda \rightarrow 0$ by following the RG equation; (iii) finally, we obtain the effective low-energy $\Gamma$ including higher-order many-body effects. The final $\Gamma$ is essentially equal to the one obtained by solving the parquet equation introduced by Abrikosov \cite{parquet}. In general, the best way to include the many-body processes is to apply the RG method (i)-(iii) to the full energy region of the electron states by putting $\Lambda_0=W$. However, in this way, the numerical accuracy is not ensured, since it is impossible to fully consider the $\k$- and $\e_n$-dependences of $\Gamma$. To improve the numerical accuracy, the $N$-patch RG has been invented \cite{Halboth:2000vm,Metzner,Zanchi:1996ul,Zanchi:1998ua,Zanchi:2000ua}, in which the momentum space of the electron system is divided into a finite number $N$ of patches. Even so, it can fail to describe two-dimensional electron systems, in which the $\k$-dependence of the band structure plays an important role. Therefore, a more reliable RG framework is required to study two-dimensional strongly correlated electron systems such as cuprate superconductors. \subsection{RG+cRPA with form factor} In an effort to establish a more reliable RG applicable to two-dimensional electrons, the RG+cRPA method has recently been developed \cite{Penc:1994uh,Tsuchiizu:2002eg,Tsuchiizu:2004ct, Tsuchiizu:2013gu,Tsuchiizu:2015cs,Tsuchiizu:2016ix,Tazai-FRG}. We show a diagrammatic explanation of the RG+cRPA method in Fig. \ref{fig:RG} (a). In the RG+cRPA method, the energy scale of the electron system is divided into two regions by setting $\Lambda_0<W$. The higher-energy region with $\Lambda_0<\Lambda<W$ is treated by the RPA on a fine $\k$-mesh, dropping the VCs. On the other hand, the lower-energy region with $\Lambda<\Lambda_0$ is treated by the RG scheme. This hybrid method is based on the intuitive idea that higher-order many-body effects become significant only in the low-energy region. Thanks to the RG+cRPA, the numerical accuracy of the susceptibilities is drastically improved even in the weak-coupling region. Here, we consider the important roles of the form factor $f_{\k,\q}$ within the RG+cRPA scheme to explain non-local symmetry breaking such as the nematic/smectic $d$- and $p$-symmetry orders. The charge- (spin-) channel static susceptibility with a form factor is given by \begin{eqnarray} \chi^{c(s)}_{ff'}(\bm q) &=& \frac{1}{2} \int_0^{\beta} \! d\tau \, \left\langle O_f^{c(s)}(\bm q,\tau)O_{f'}^{c(s)}(-\bm q,0) \right\rangle, \label{eqn:chif} \\ O_{f}^{c(s)}(\bm q) & \equiv & \sum_\k f_{\k,\q} \left\{ c_{\k_+ \uparrow}^\dagger c_{\k_- \uparrow} +(-)c_{\k_+ \downarrow}^\dagger c_{\k_- \downarrow}\right\}, \end{eqnarray} where $\k_\pm = \k \pm \q/2$. We denote $\chi^{c(s)}_{f}\equiv \chi^{c(s)}_{ff}$ hereafter.
The RG equations of the charge- (spin-) channel susceptibilities with respect to the form factor $f_{\k,\q}$ are given as \begin{eqnarray} \frac{d}{d\Lambda}\chi^{c(s)}_f(\q) &=&\frac{T}{N}\sum_{\k,\k'\e_n} \frac{d}{d\Lambda}\left[ G(\k,\e_n) G(\k',\e_n) \right] \delta_{\k',\k+\q} \nonumber \\ &&\times R^{c(s)}_{f, \q \k \k'} R^{c(s)}_{f, -\q \k' \k}, \label{eq:chi-RG} \\ \frac{d}{d\Lambda}R^{c(s)}_{f, \q \k \k'} &=&\frac{T}{N}\sum_{\p,\p',\e_n} \frac{d}{d\Lambda}\left[G(\p,\e_n) G(\p',\e_n) \right] \delta_{\p',\p+\q} \nonumber \\ &&\times R^{c(s)}_{f, \q \p \p'}\Gamma^{c(s)}_{\p \p' \k \k'}, \label{eq:R-RG} \end{eqnarray} where $R^{c(s)}_f$ is the three-point vertex function of the charge (spin) channel with form factor $f$, and its initial value is \begin{eqnarray} R^{c(s)}_{f, \q \k \k'}(l=0)=f_{(\k+\k')/2, \q} + \mbox{[cRPA correction]}. \label{eq:R0} \end{eqnarray} The diagrammatic RG flows of $\chi^{c(s)}_f$ and $R^{c(s)}$ are given in Fig.\ \ref{fig:RG} (b). Based on the Lagrange multipliers method, we optimize the form factor $f_{\k, \q}$ so as to maximize the susceptibility. For this purpose, we introduce the Fourier expansion of the form factor as \begin{eqnarray} f_{\k, \q}=\sum_{n,m=1}^{7}2 a_{n m}^{\q} h_{n}(k_x) h_m(k_y), \end{eqnarray} where $h_{n}(k)=\{ \frac{1}{\sqrt{2}}, \cos k, \cos 2k, \cos 3k, \sin k, \sin 2k, \sin 3k \}$ for $n=1,\dots,7$, respectively. The coefficient $a_{n m}^{\q}$ is optimized under the condition $\frac{1}{N}\sum_\k|f_{\k,\q}|^2=1$ by solving the following eigenvalue equation, \begin{eqnarray} \sum_M \, \chi_{LM}^{c(s)}(\q) a^{\q}_{M} =\lambda \,a_{L}^{\q}, \end{eqnarray} where the composite indices $M \equiv (m,m')$ and $L \equiv (l,l')$ each run over $7^2$ values. Here, $\chi_{LM}^{c(s)}$ is the susceptibility with respect to the form factors $f=2h_m(k_x)h_{m'}(k_y)$ and $f'=2h_l(k_x)h_{l'}(k_y)$ in Eq. (\ref{eqn:chif}). The eigenvalue $\lambda$ corresponds to the undetermined multiplier in the Lagrange multipliers method. In the following sections, we discuss the exotic non-local orders in cuprates based on the present improved RG+cRPA method with the form factor $f_{\k, \q}$. \section{$d/p$-wave bond/sLC order in cuprates} \label{sec:3} To understand the origin of the exotic phase transitions in cuprate superconductors shown in Fig. \ref{fig1-1}, we perform the RG+cRPA analysis by focusing on the importance of the quantum interference (see Fig. \ref{fig-interference}) \cite{Tsuchiizu-CDW,Tsuchiizu:2016ix}. We investigate the three-orbital $d$-$p$ Hubbard model shown in Fig.\ \ref{cuprate1} (a) for YBCO \cite{Bulut,Yamakawa-CDW,Thomson:2015ie}. Its Hamiltonian is given by \begin{equation} H_{dp} = \sum_{\bm k, \sigma} \bm c_{\bm k, \sigma}^\dagger \, \hat h_0(\bm k) \, \bm c_{\bm k, \sigma}^{} +U\sum_{\bm j} n_{d, \bm j,\uparrow} n_{d, \bm j,\downarrow}, \end{equation} where $\bm c_{\bm k, \sigma}^\dagger= (d_{\bm k,\sigma}^\dagger, p_{x,\bm k,\sigma}^\dagger, p_{y,\bm k,\sigma}^\dagger)$ is the creation operator for the electron on the $d_{x^2-y^2}$, $p_x$, and $p_y$ orbitals with wavevector $\bm k$ and spin $\sigma$. $U$ is the Coulomb interaction on the $d$-orbital, and $n_{d,\bm j,\sigma}=d^\dagger_{{\bm j}\sigma} d_{{\bm j}\sigma}$. In the kinetic term $\hat h_0(\bm k)$, we introduce a third-nearest $d$-$d$ hopping of $-0.1$ eV into the first-principles $d$-$p$ model for La$_2$CuO$_4$ \cite{Hansmann:2014ib} in order to reproduce the YBCO-like Fermi surface (FS) depicted in Fig.\ \ref{cuprate1} (b) \cite{Yamakawa-CDW}.
The electron filling is set to $n = n_d + n_p = 4.9$, corresponding to the hole number $x = 0.1$. \begin{figure}[htb] \includegraphics[width=9cm]{cuprate-chis.eps} \caption{ (a) $d$-$p$ Hubbard model. (b) Fermi surface (FS) of the present YBCO model. The lower-energy region ($|\xi_{\k}|<\Lambda_0=0.5$ eV) is denoted by the shaded area. The $N$-patch discretization for $N =64$ is shown, whereas we set $N = 128$ in the present numerical study. (c) Obtained spin susceptibility $\chi^{s}(\bm q)$. (d) $U$ dependences of $\chi^{s}_\mathrm{max} [\equiv \chi^s(\bm Q_{\rm S})]$ given by the RG+cRPA method and by the RPA. The initial spin susceptibility given by the cRPA is very small. } \label{cuprate1} \end{figure} Figure~\ref{cuprate1} (c) shows the spin susceptibility $\chi^{s}(\bm q)$ obtained by the RG+cRPA method. The obtained strong spin fluctuations at $\bm Q_\mathrm{S} =(\pi-\delta_\mathrm{s},\pi)$ and $\bm Q_\mathrm{S}' =(\pi,\pi-\delta_\mathrm{s})$ are consistent with inelastic neutron scattering measurements. With increasing $U$, $\chi^{s}_\mathrm{max} \equiv \chi^{s}({\bm Q_{\rm S}})$ develops monotonically and diverges at $U=U^\mathrm{cr}(\approx 4.5 \mbox{ eV})$, as shown in Fig.\ \ref{cuprate1} (d). Thanks to the numerical accuracy of the RG+cRPA method, $\chi^s_\mathrm{max}$ perfectly follows the RPA result over a wide weak-coupling region ($U<4$ eV). As seen from Figs.\ \ref{cuprate1} (c) and (d), the cRPA contribution to the initial value for $\Lambda_0=0.5$ eV is small but very important for the RG analysis \cite{Tsuchiizu:2013gu,Tsuchiizu:2015cs}. We verified in Ref. \onlinecite{Tsuchiizu-CDW} that the numerical results of the RG+cRPA method are \textit{qualitatively} similar to those of the conventional patch-RG method, while the numerical accuracy is much improved. Next, we investigate the following $B_{1g}$-symmetry ($d$-symmetry) charge susceptibility for $p$-electrons, \begin{eqnarray} \chi^{p\mbox{-}\mathrm{orb}}_{d}(\bm q) &=& \frac{1}{2} \int_0^{\beta} \! d\tau \, \left\langle n^{p\mbox{-}\mathrm{orb}}_d(\bm q,\tau) n^{p\mbox{-}\mathrm{orb}}_d(-\bm q,0) \right\rangle, \nonumber \\ n^{p\mbox{-}\mathrm{orb}}_d(\bm{q}) &\equiv & n_{x}(\bm{q}) - n_{y}(\bm{q}), \end{eqnarray} where $n_{x(y)} (\bm q ) = \sum_{\bm k, \sigma} p_{x(y),\bm{k}\sigma}^\dagger p_{x(y),\bm{k+q},\sigma}$, so that $n^{p\mbox{-}\mathrm{orb}}_d$ is the $p$-orbital charge-density-wave ($p$O-CDW) operator. Figures\ \ref{cuprate2} (a) and (b) show the $\chi^{p\mbox{-}\mathrm{orb}}_{d}(\bm q)$ obtained by the RG+cRPA method for $U=4.32$ eV at $T=0.1$ eV. The obtained large peaks at $\bm q=\bm 0$, $\bm Q_\mathrm{a}$, and $\bm Q_\mathrm{d}$ originate from the VCs, since the RPA result is small and non-singular, as seen in Fig.\ \ref{cuprate2} (b). We see that the highest peak is located at $\bm q=\bm 0$. This is consistent with the experimental uniform nematic transition at $T^* \ (>T_{\rm CDW})$ \cite{Y-Sato}. The second-highest peak is located at $\bm q=\bm Q_\mathrm{a}$, which naturally explains the CDW phase below $T_{\rm CDW}$ in Fig. \ref{fig1-1} (a). Note that the temperature $T=0.1$ eV is comparable to $T^*\sim300$ K if the mass-enhancement factor $m^*/m_{\rm band}\sim 3$ is taken into account. The enhancement of $\chi^{p\mbox{-}\mathrm{orb}}_{d}(\bm q)$ in Figs. \ref{cuprate2} (a) and (b) is also obtained in the single-orbital Hubbard model, as the enhancement of the $d$-wave bond susceptibility.
To explain this fact, we investigate the $d$-electron charge susceptibility with a form factor, $\chi^c_f(\q)$, in the $d$-$p$ Hubbard model, as introduced in Eq. (\ref{eqn:chif}). We optimize the form factor by following Sect. \ref{sec:2}. The numerically optimized $f_{\k,\q}$ at $\q=\bm 0$ is shown in Fig.\ \ref{cuprate2}(c); it has $B_{1g}$ symmetry. Its Fourier transform gives the modulation of the effective hopping integrals, called the $d_{x^2-y^2}$-wave BO. The $\k$-dependence of $f_{\k,\bm0}$ in Fig.\ \ref{cuprate2}(c) is similar to that of the expectation value of the operator $n^{p\mbox{-}\mathrm{orb}}_d({\bm0})$ on the FS, which is proportional to $|u_x(\k)|^2-|u_y(\k)|^2$. Here, $|u_{x(y)}(\k)|^2$ is the weight of the $p_{x(y)}$ orbital at the Fermi momentum $\k$. Thus, the $p$O-CDW obtained in the $d$-$p$ Hubbard model is essentially equivalent to the $d$-wave BO in the single-orbital Hubbard model. Next, we discuss the physical picture of the origin of the BO obtained by the RG study. To identify the dominant quantum process, it is useful to perform diagrammatic calculations and compare the results with those of the RG. For this purpose, we develop the density-wave (DW) equation method \cite{Onari-FeSe,Kawaguchi-CDW,Onari-B2g,Onari-AFBO}. Using this method, we can obtain the most divergent susceptibility with the optimized form factor, by including higher-order VCs. It is clarified that the Aslamazov-Larkin (AL) type VCs shown in Fig.\ \ref{cuprate2} (d) are the origin of the enhancement of the $p$O-CDW susceptibility. Here, the red wavy lines represent the dynamical spin susceptibility, and the paramagnon interference given by the convolution of $\chi^s$'s, $C_\q\equiv \sum_\p\chi^s(\p+\q)\chi^s(\p)$, becomes large at $\q\approx \Q_{\rm S}-\Q_{\rm S}= {\bm 0}$ and $\q\approx \Q_{\rm S}-\Q_{\rm S}'\approx \Q_\mathrm{d}$. This quantum interference gives rise to the BO formation (see Fig. \ref{fig-interference}). (Note that the moderate peak at $\bm Q_\mathrm{d}$ is caused by the single-fluctuation-exchange processes called the Maki-Thompson (MT) VC \cite{Sachdev:2013bo,Mishra:2015fb}.) Both the RG+cRPA method and the DW equation method conclude the emergence of the $d$-wave BOs at $\q={\bm0}$ and $\Q_\mathrm{a}$ in several single-orbital Hubbard models. Thus, the nematic ($\q={\bm0}$) and smectic ($\q\ne{\bm0}$) BO formations due to paramagnon interference are expected to be general in many strongly correlated metals. The DW equation method was originally developed to explain the electronic nematic order in Fe-based superconductors \cite{Onari:2012jb,YYamakawa-PRX2016,Kontani:2014ws}. \begin{figure}[htb] \includegraphics[width=8cm]{cuprate-bond.eps} \caption{ (a), (b) Obtained $p$O-CDW susceptibility $\chi^{p\mbox{-}\mathrm{orb}}_d(\bm q)$. The RPA result is also shown for comparison in (b). The obtained peak at $\q={\bm Q_\mathrm{a}}$, which corresponds to the nesting vector in Fig.\ \ref{cuprate1} (b), is consistent with the experimental CDW wavevector. (c) The optimized form factor $f_{\k,\q=\bm0}$ on the FS, which has $d$-symmetry. (d) Example of the AL-type vertex corrections that give a large $p$O-CDW susceptibility. } \label{cuprate2} \end{figure} It is noteworthy that the DW equation method also predicts the emergence of the sLC order described by the spin-channel form factor \cite{HKontani-sLC}. The predicted transition temperature $T_{\rm sLC}$ is higher than $T_{\rm CDW}$. The obtained sLC in real space is shown in Fig. \ref{fig1-1} (d).
The sLC order is ``hidden'' in that neither an internal magnetic field nor a charge density modulation is induced, whereas the predicted sLC naturally explains the pseudogap in the DoS at $T^*$. It is an important future issue to study the general spin-channel susceptibility with a nonlocal form factor based on the RG+cRPA method. \section{$d$-wave bond order in $\kappa$-(BEDT-TTF)$_2$X} \label{sec:4} The layered organic superconductor $\kappa$-(BEDT-TTF)$_2$X has been attracting great attention as a substance similar to the cuprate superconductors. A schematic $P$-$T$ phase diagram is depicted in Fig. \ref{fig-BEDT1} (a) \cite{Raman}. Under pressure, unconventional superconductivity ($T_{\rm c}\gtrsim10$ K) appears next to the antiferromagnetic (AFM) phase \cite{Kanoda-rev,Kanoda-rev2}. (In X=Cu[N(CN)$_2$]Br and X=Cu(NCS)$_2$, metallicity and superconductivity appear even at ambient pressure.) $T^\rho_{\rm max}$ is the metal-insulator crossover temperature observed in the resistivity. $T^*$ is the pseudogap temperature, below which the NMR relaxation rate $1/T_1T$ \cite{Kanoda-rev,Kanoda-rev2} and the DoS measured by STM \cite{Nomura-STM} exhibit gap-like behaviors. The nature of the pseudogap and its relation to the superconductivity have been a central mystery in the unconventional metallic states of $\kappa$-(BEDT-TTF)$_2$X. To study the origin of the pseudogap, we introduce the anisotropic-triangular-lattice dimer Hubbard model shown in Fig. \ref{fig-BEDT1} (b). Each site in the dimer model is composed of the anti-bonding molecular orbital of a BEDT-TTF molecule dimer. This is the simplest effective model for $\kappa$-(BEDT-TTF)$_2$X: $\hat{H}=\hat{H}_{0}+\hat{H}_{I}$ \cite{Kino-Fukuyama}. The kinetic term is given by $\hat{H}_{0}=\sum_{\k\sigma}\xi_{\k}c^{\dagger}_{\k\sigma}c_{\k\sigma}$ with $\xi_{\k}=2t(\cos k_x+\cos k_y)+2t'\cos(k_x+k_y)$. Here, we set the hopping integrals in Fig. \ref{fig-BEDT1} (b) as $(t,t')=(-1,-0.5)$. We verified that similar numerical results are obtained for $t'/t=0.5\sim 0.8$, which is realized in many $\kappa$-(BEDT-TTF) families \cite{McKenzie}. In this dimer Hubbard model, both the RPA and the FLEX approximation predict the emergence of spin fluctuations at $\Q_{\rm S}\approx(\pi ,\pi)$, consistent with the experimentally observed staggered AFM order \cite{Schmalian-ET,Kino-ET,Kondo-ET,Kontani-ET}. These spin fluctuations mediate the $d_{x^2-y^2}$-wave superconductivity \cite{Schmalian-ET,Kino-ET,Kondo-ET,Kontani-ET}. (Based on more realistic four-site Hubbard models, the $d_{xy}$-wave state can be obtained in the case of weak spin fluctuations at $\q\sim(\pi,0)$ \cite{Kuroki-ET}.) \begin{figure}[htb] \includegraphics[width=8cm]{BEDTTTF.eps} \caption{ (a) Schematic $P$-$T$ phase diagram of $\kappa$-(BEDT-TTF)$_2$X \cite{Raman}. (b) Anisotropic triangular dimer Hubbard model. (c) FS and (d) band structure of the dimer Hubbard model at half filling with $t'/t=0.5$. } \label{fig-BEDT1} \end{figure} In the following numerical study, we set the energy unit $|t|=1$, and put the temperature $T=0.05$ and the electron filling $n=1$ ($\mu=0.55$). The FS and the band structure are presented in Figs. \ref{fig-BEDT1} (c) and (d), respectively. The patch indices ($1\sim64$) are shown on the elliptical electron pockets. The total bandwidth is $W\sim 10$ (in units of $|t|=1$), and $|t|$ corresponds to 0.05 eV since $W\sim 0.5$ eV experimentally \cite{Kino-Fukuyama,Kino-ET}. From now on, we analyze the dimer Hubbard model by applying the RG+cRPA method \cite{RTazai-PRR2021}.
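As a quick numerical illustration of this setup (a sketch of our own, with the $-\mu$ term written explicitly), the dispersion and a rough estimate of the filling can be tabulated directly:
\begin{verbatim}
import numpy as np

t, t_prime, mu = -1.0, -0.5, 0.55   # hoppings and chemical potential (text values)

def xi(kx, ky):
    # Dimer-model dispersion: 2t(cos kx + cos ky) + 2t' cos(kx + ky) - mu
    return (2 * t * (np.cos(kx) + np.cos(ky))
            + 2 * t_prime * np.cos(kx + ky) - mu)

ks = np.linspace(-np.pi, np.pi, 201)
KX, KY = np.meshgrid(ks, ks)
# T = 0 Fermi-step estimate of the filling; should come out close to n = 1
n_est = 2.0 * np.mean(xi(KX, KY) < 0)
print(f"estimated filling n = {n_est:.2f}")
\end{verbatim}
The zero contour of \texttt{xi} on this grid traces the elliptical electron pockets of Fig. \ref{fig-BEDT1} (c).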
The RG+cRPA method is an efficient hybrid method between the RG and the RPA \cite{Tsuchiizu:2013gu,Tsuchiizu:2016ix,Tsuchiizu-CDW,Tazai-FRG}. Here, we introduce the higher-energy cutoff $\Lambda_0 \ (=2)$. The RG flow will stop for $\Lambda_l\lesssim \w_c$ with $\w_c={\rm max}\{ T,\gamma\}$, where $\gamma \ (\propto|{\rm Im}\Sigma|)$ is the quasi-particle damping rate. Considering the large $\gamma$ in $\kappa$-(BEDT-TTF)$_2$X, we introduce the low-energy cutoff $\w_c=\pi T$ in the RG equation of the four-point vertex $\Gamma$ when calculating Fig. \ref{fig:RG} (a), following Refs. \onlinecite{Tsuchiizu:2016ix,Tsuchiizu-CDW}. First, we calculate various spin and charge susceptibilities with the form factor $f_{\k,\q}$, i.e., $\chi^{c(s)}_f(\q)$ introduced in Eq. (\ref{eqn:chif}). By analyzing the form factors $f=1$, $\sqrt{2}\sin k_x$, $\sqrt{2}\sin k_y$, $\cos k_x - \cos k_y$, and $2\sin k_x \sin k_y$, we find that the conventional spin susceptibility $\chi^{s}(\q)$ ($=\chi_f^{s}(\q)$ with $f=1$) and the $d$-wave bond susceptibility $\chi^{\rm BO}(\q)$ ($=\chi_f^{c}(\q)$ with $f=\cos k_x - \cos k_y$) strongly develop. The other susceptibilities remain small in the present study.
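When the form factor is instead optimized unbiasedly as in Sect.~\ref{sec:2}, the task at each $\q$ reduces to diagonalizing the $7^2\times 7^2$ susceptibility matrix $\chi^{c(s)}_{LM}(\q)$. A minimal numerical sketch, assuming this matrix has already been computed by the RG+cRPA flow and is symmetric:
\begin{verbatim}
import numpy as np

def optimal_form_factor(chi_LM):
    # chi_LM: (49 x 49) susceptibility matrix in the basis
    # 2 * h_m(kx) * h_{m'}(ky), with h_n in
    # {1/sqrt(2), cos k, cos 2k, cos 3k, sin k, sin 2k, sin 3k}.
    # The leading eigenvector gives the coefficients a_{nm}^q under
    # the normalization (1/N) sum_k |f_{k,q}|^2 = 1, and the largest
    # eigenvalue corresponds to the Lagrange multiplier lambda.
    eigvals, eigvecs = np.linalg.eigh(chi_LM)
    return eigvals[-1], eigvecs[:, -1].reshape(7, 7)
\end{verbatim}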
The BO parameter causes the pseudogap in the DoS: Figure \ref{fig-BEDT2} (d) shows the Fermi arc structure obtained for $f^{\rm max}\equiv \max_\k \{f_{\k,\Q_{\rm B}}\} =0.1$. Here, the folded band structure under the BO at $\q=\Q_{\rm B}$ is ``unfolded'' into the original Brillouin zone \cite{Ku} to make a comparison with ARPES experiment. The resultant pseudogap in the DoS is shown in the inset of Fig. \ref{fig-BEDT2} (d), which is consistent with the STM study \cite{Nomura-STM}. The BO leads to significant reduction of the spin fluctuation strength, so the $1/T_1T$ will exhibit kink-like pseudogap behavior. To show that, we calculate the value of $1/T_1T$ in Fig. \ref{fig-BEDT2} (e), which is defined as \begin{eqnarray} \frac{1}{T_1T} \propto \sum_{\q,\a,\b} {\rm Im}\left. \chi^s_{\a,\b}(\q,\w)/\w \right|_{\w=0}, \end{eqnarray} where $\a,\b$ represent the sites in the unit cell under the presence of the BO. We set $f^{\rm max}=0.2\times{\rm tanh}(1.74\sqrt{(1-T/T^*)}$ below the BO transition temperature $T^*=0.1$. (Here, $2f^{\rm max}(T=0)/T^*=4$.) The obtained pseudogap behaviors in $1/T_1T$ and DoS are consistent with phase-transition-like experimental behaviors \cite{Kanoda-rev,Kanoda-rev2,Raman}. Finally, we discuss that the physical origin of the large ferro-BO ($\q=\bm{0}$) and incommensurate-BO ($\q=\Q_{\rm B}$) instabilities in $\kappa$-(BEDT-TTF)$_2$X model depicted in Fig. \ref{fig-BEDT2}(a). The obtained large BO instabilities are very similar to those of cuprate and Fe-based superconductors given by the RG+cRPA method and the DW equation analysis. Therefore, the main origin of $\q=\bm{0}$ BO and $\q=\Q_{\rm B}$ BO would be the paramagnon-interference mechanism. On the other hand, $\chi^{\rm BO}(\bm{0})$ is significantly smaller than $\chi^{\rm BO}(\bm{Q}_B)$ in the DW analysis for $\kappa$-(BEDT-TTF)$_2$X model \cite{RTazai-PRR2021}. This result indicates that large peak at $\q={\bm0}$ in the present RG study in Fig. \ref{fig-BEDT2}(a) originates from the spin and SC fluctuations cooperatively, because the AL process by SC fluctuations can cause the ferro-BO instability as revealed in the previous RG study \cite{Tsuchiizu:2013gu}. \section{TRS broken $p$-wave order: charge loop current} \label{sec:5} In previous sections, we explained that exotic TRS preserving nonlocal orders are induced by the quantum interference mechanism. The obtained nematic ($\q={\bm0}$) and smectic ($\q\ne{\bm0}$) BOs are widely observed not only in cuprates and iridates, but also in Fe-based superconductors. In addition, the sLC naturally explain the pseudogap in the DoS in cuprates. Also, the emergence of TRS breaking charge-current orders has been actively studied in cuprates and iridates \cite{TRSB-iridate}. Especially, the intra-unit-cell cLC order along the nearest Cu-O-O triangles \cite{TRSB-neutron1,TRSB-neutron2} shown in Fig. \ref{fig1-1} (e) has been actively discussed recently. In this Varma cLC order, $p$ orbitals on O atoms contribute to the current order, so extended Hubbard models with O-site and off-site (Cu-O) Coulomb interactions given by $U_p$ and $V_{\rm Cu-O}$ may have to be analyzed. Within the MFA, however, very huge $V_{\rm Cu-O}$ is required to explain the cLC. Thus, it is important to find the general mechanism of the cLC order by going beyond the MFA. To find a general driving force of the cLC order, it is useful to study simple theoretical models accurately using reliable theoretical method. 
Here, we study the quasi-one-dimensional (q1D) Hubbard model at half-filling ($n=1$) by applying the RG theory, which becomes more reliable in q1D systems rather than 2D systems. As a result, we reveal that the spin-fluctuation-driven cLC mechanism, which is expected to be general in low-dimensional Hubbard models with geometrical frustrations. Below, we study unconventional orders in a geometrically frustrated Hubbard model \cite{RTazai-PRB2021}. The Hamiltonian is $\hat{H}=\hat{H}_{0}+\hat{H}_{I}$, which is schematically shown in Fig. \ref{fig:cLC1} (a). The energy dispersion is simply putted by $\xi_{\k}=-2t\cos k_{x}-2t^{\perp} \{ \cos k_{y}+\cos(k_{x}+k_{y})\}-\mu$ with $t=1$ and the chemical potential $\mu$. The inter-chain hopping $t^{\perp} (\ll 1)$ controls the dimensionality; $t^{\perp}=0$ corresponds to complete 1D system. The on-site Coulomb interaction is $\hat{H}_{I}=\sum_{i}U n_{i\uparrow}n_{i\downarrow}$ where $i$ is the site index. The Fermi surface of the present model is shown in Fig. \ref{fig:cLC1} (b) In the numerical calculation, each left (L) and right (R) Brillouin zone is divided into $24$ patches. The logarithmic energy scale for performing the RG is given by $\Lambda_{l}=\Lambda_0 e^{-l} \,\, (0\leq l \leq l_c )$ for $\Lambda_0=3$ and $\Lambda_{l_c}=T/100$ ($l_c=4.6$). We consider the half-filling case and $(t^{\perp},T,U)=(0.2,0.05,2.01)$ is used. \begin{figure}[htb] \includegraphics[width=8cm]{cLC1-1.eps} \caption{(a) Present model and obtained charge loop current pattern by RG. (b) FS composed of left ($L$) and right ($R$) branches. (c) Obtained charge-channel susceptibility $\chi^{c}_f(\q)$ with the form factor. (d) Optimized charge-channel form factor at $\q={\bm0}$, which corresponds to the uniform charge-loop current. } \label{fig:cLC1} \end{figure} Based on the RG, we calculate the charge- (spin-) channel susceptibilities with the form factor introduced in Eq. (\ref{eqn:chif}). The form factor $f_{\k,\q}$ is optimized unbiasedly to maximize $\chi^{c(s)}_f(\q)$ at each $\q$-point using the Lagrange multipliers method in Sect.\ref{sec:2}. Figure \ref{fig:cLC1} (c) shows the obtained susceptibility. The strong charge-channel fluctuations develop at $\q=\bm{0}$, while the spin fluctuations remain small even at the nesting vector $\Q_{\rm{S}}=(\pi,\pi/2)$. Figure \ref{fig:cLC1} (d) shows the $\k$-dependence of the charge-channel form factor at $\q=\bm{0}$. For a fixed $k_y$, the obtained result shows $p$-wave symmetry as \begin{eqnarray} f_{k_x, k_y,\bm{0}}\simeq -f_{-k_x,k_y,\bm{0}} \hspace{5pt} \propto \sin k_x+b\sin 3k_x . \label{eq:fkx_cLC} \end{eqnarray} Then, the real-space order parameter is $\delta t_{ij}=-\delta t_{ji}$, which leads to the emergence of ferro-type cLC order. Thus, we conclude that the TRS broken $p$-wave cLC phase is strongly stabilized at $(t,t^{\perp})=(0.1,0.2)$. From the obtained form factor, we calculate the current from $0$-site ($\bm{r}=\bm{0}$) to $i$-site ($\bm{r}=\bm{r}_i$) written by \begin{eqnarray} j_i= 2i e \left\{ (t_{i0} + \delta t_{i0})G (-\bm{r}_{i})- (t_{0i} + \delta t_{0i})G (\bm{r}_{i}) \right\}, \end{eqnarray} where $-e$ is the charge of an electron. $\delta t_{i0}$ is the Fourier transformation of charge-channel $f_{\k,\q}$ multiplied by the energy scale $\Delta t$. Note that $\delta t_{i0}$ is pure imaginary and $\delta t_{i0}=-\delta t_{0i}$ holds. 
The equal-time Green function $G (\bm{r}_i)$ in the real space is defined by \begin{eqnarray} G (\bm{r}_i)=T\sum_{n,\k} \frac{1}{i\e_n-\xi_{\k}-\Delta t f_{\k,\bm{0}}} e^{i\k\cdot\bm{r}_i}. \end{eqnarray} Figures \ref{fig:cLC12} (a) and (b) show the values of the intra- and inter-chain current, respectively. Here, we put $(e,\Delta t)=(1,0.05)$. We find that the third-nearest intra-chain form factor is important for obtaining the charge loop current. In addition, we verified that the macroscopic current is zero due to the cancellation between intra- and inter-chain currents. Thus, the present result is consistent with Bloch's theorem, which forbids macroscopic currents in infinite periodic systems \cite{Bohm}. In Fig. \ref{fig:cLC1} (a), we show the schematic picture of the cLC, which is a magnetic-octupole-toroidal order. \begin{figure}[htb] \includegraphics[width=8cm]{cLC1-1-2.eps} \caption{(a) Obtained intra-chain current $j(x,0)$ and (b) inter-chain one $j(x,1)$. (c) Obtained phase diagram. The charge-loop current order appears between the antiferromagnetic and $d$-wave superconducting phases. } \label{fig:cLC12} \end{figure} Figure \ref{fig:cLC12} (c) shows the obtained phase diagram in the $T$-$t^{\perp}$ plane. The cLC phase appears around $t^{\perp}\simeq 0.2$ as an intertwined order between the antiferromagnetic and $d$-wave superconducting states. Note that the dark shaded area is the 1D Mott insulating phase, which is beyond the scope of the present study \cite{Kishine1,Kishine4}. As a result, the cLC phase is stabilized in the Fermi liquid region with $t^{\perp} \gg T$. To understand the origin of the cLC, we analyze the charge- (spin-) channel four-point vertex function based on the $g$-ology theory, defined as \begin{eqnarray} g^{c(s)}_{a a'}(\q) \equiv \max_{\p\in a,\p' \in a'}\Gamma^{c(s)}_{\p, \p+\q, \p', \p'+\q}, \end{eqnarray} where $a,a'$ are indices of the branch of the FS and take $R$ ($p_x>0$) or $L$ ($p_x<0$) as defined in Fig. \ref{fig:cLC1} (b). Based on the $g$-ology theory, $g^{c(s)}$ is classified into backward ($g_1$), forward ($g_2,g_4$) and umklapp ($g_3$) scattering as defined in Fig. \ref{fig:cLC2} (a) \cite{Emery,Bourbonnais,Kishine1,Kishine4, Suzumura,Suzumura2,SSolyom}. There is a one-to-one correspondence between $g^{c(s)}$ and $g_{i=1-4}$ as diagrammatically shown in Fig. \ref{fig:cLC2} (b), which is described as \begin{eqnarray} &\!\!\!\!\!\! g^{c}_{RR}(\bm{0})\approx 2\pi v_F g_4^{\perp}, \hspace{3pt} &\!\!\! g^{c}_{LR}(\bm{0})\approx 2\pi v_F(2g_2^{\perp} -g_1^{\perp}) \nonumber \\ &\!\!\!\!\!\! g^{s}_{RR}(\bm{Q}_{\rm{S}})\approx -2\pi v_F g_2^{\perp}, \hspace{3pt} &\!\!\! g^{s}_{LR}(\bm{Q}_{\rm{S}})\approx -2\pi v_F g_3^{\perp}, \label{eq:golo1} \end{eqnarray} where $g^{\perp (\parallel)}$ stands for the four-point vertex function with parallel (anti-parallel) spin. To derive Eq. (\ref{eq:golo1}), we use the SU(2) symmetry and the anti-commutation relation of fermions, which lead to \begin{eqnarray} g_{1}^{\perp}-g_{2}^{\perp}=g_1^{\parallel}-g_2^{\parallel}\label{eq:golosu2}. \end{eqnarray} This relation is equivalent to that of Eq. (\ref{eqn:SU2}). Note that $g^{\parallel}_{3}$ and $g^{\parallel}_{4}$ do not affect physical quantities due to the anti-commutation relation. Thus, all physical quantities can be written in terms of $g^{\perp}$ alone. \begin{figure}[htb] \includegraphics[width=9cm]{cLC1-2.eps} \caption{(a) Definition of the four-point vertex function $g_{i}$ in the $g$-ology theory.
(b) One-to-one correspondence between $g^{c(s)}_{aa'}(\q)$ and $g_i^{\parallel (\perp)}$. } \label{fig:cLC2} \end{figure} Both $\chi^{c}_f(\bm{0})$ and $\chi^{s}_f(\bm{Q}_{\rm{S}})$ are derived from the RG equations (\ref{eq:chi-RG})-(\ref{eq:R0}). For a qualitative analysis, we introduce the following simplified expressions: \begin{eqnarray} \hspace{-10pt}\chi^{c}_f(\bm{0})&\sim& -\left\{ (f_{R,\bm{0}})^2 g^{c}_{R R}(\bm{0})+f_{L,\bm{0}} f_{R,\bm{0}} g^{c}_{L R}(\bm{0})\right\} \nonumber \\ &&\times (W_{R,R}^{+}(l^*))^2, \label{eq:chic-simple} \\ \hspace{-10pt}\chi^{s}_f(\bm{Q}_{\rm{S}})&\sim& -\left\{ (f_{R,\bm{0}})^2 g^{s}_{R R}(\bm{Q}_{\rm{S}})+f_{L,\bm{0}} f_{R,\bm{0}} g^{s}_{L R}(\bm{Q}_{\rm{S}}) \right\} \nonumber \\ &&\times (W_{R,L}^{+}(l^*))^2, \label{eq:chis-simple} \end{eqnarray} where $l^*$ is an appropriate scaling parameter with $T\ll \Lambda_{l^*} \ll E_F$, and \begin{eqnarray} W_{\p,\p'}^{\pm}(l)= T\sum_{\k,\k',n}G(\k,\e_n)G(\k',\pm\e_{n}) \Omega_{\p}(\k)\Omega_{\p'}(\k'), \end{eqnarray} where $G(\k,\e_n)= (i\e_n-\xi_{\k})^{-1} \theta(\Lambda_{l}-|\xi_{\k}|)$, and $\Omega_{\p}(\k)= 1 \ (0)$ if the momentum $\k$ is inside (outside) the $\p$-patch. Figure \ref{fig:cLC3} (a) represents the diagrammatic expression of Eqs. (\ref{eq:chic-simple}) and (\ref{eq:chis-simple}). In the odd-parity case for $f_{R,\bm{0}}=-f_{L,\bm{0}}$, the cLC susceptibility is derived from ${\chi}^{c}_f(\bm{0})$ as \begin{eqnarray} {\chi}^{\rm cLC}(\bm{0})\propto (f_{R,\bm{0}})^2 (-g_{4}^{\perp} +2g_{2}^{\perp}-g_1^{\perp}). \end{eqnarray} Thus, the uniform cLC phase appears due to the coupling constant $-g_4^{\perp}+2g_2^{\perp}-g_1^{\perp}$. On the other hand, in the even-parity case for $f_{R,\bm{0}}=f_{L,\bm{0}}$, the AFM susceptibility is derived from ${\chi}^{s}_f(\bm{Q}_{\rm{S}})$ as \begin{eqnarray} {\chi}^{\rm AFM}(\bm{Q}_{\rm{S}}) \propto (f_{R,\bm{0}})^2 (g_2^{\perp}+g_3^{\perp}). \end{eqnarray} Thus, the AFM susceptibility is enhanced by $g_2^{\perp}+g_3^{\perp}$. The classification of general instabilities for $\q=0,\Q_{\rm S}$ is summarized in Fig. \ref{fig:cLC3} (b). Figure \ref{fig:cLC3} (c) shows the obtained RG flow of $g_i^{\perp}$. We find that $g_4^{\perp}$ ($g_2^{\perp}$) has a large negative (positive) value as $l$ increases, while $g_1^{\perp}$ and $g_3^{\perp}$ remain quite small. Thus, the strong enhancement of the cLC susceptibility originates from $g_2^{\perp}$ and $g_4^{\perp}$. \begin{figure}[htb] \includegraphics[width=9cm]{cLC1-3.eps} \caption{(a) Diagrammatic expression of the susceptibility with form factor. The shaded box is the four-point vertex function by RG. (b) Classification of the charge- and spin-channel instabilities. (c) Obtained RG flow of $g_i^{\perp}$. (d) $\Lambda_l$-dependence of $I_{P}=-I_{C}$ (red solid line) and $I_{L}=-I_{C+}$ (green dotted line). (e) Maximum values of $\chi^{c(s)}_{f} (\q)$ without the geometrical frustration.
} \label{fig:cLC3} \end{figure} To understand the large positive (negative) value of $g_2^{\perp} \ (g_4^{\perp})$, we consider the classical one-loop RG equation for $g_i^{\perp}$, which is given as \begin{eqnarray} \frac{dg_{1}^{\perp}}{dl}&=&2g_1^{\perp} g_2^{\perp} I_{C}+2g_1^{\perp}( g_2^{\parallel}-g_1^{\parallel}) I_{P}+2g_1^{\perp} g_4^{\perp} I_{L}, \nonumber \\ \frac{dg_2^{\perp}}{dl}&=&(g_2^{\perp}g_2^{\perp} +g_1^{\perp}g_1^{\perp}) I_{C} +2 g_4^{\perp}(g_1^{\parallel}-g_2^{\parallel}) I_{L}\nonumber \\ &+& (g_2^{\perp} g_2^{\perp} +g_3^{\perp} g_3^{\perp} ) I_{P}, \nonumber \\ \frac{dg_3^{\perp}}{dl}&=&2g_3^{\perp} g_4^{\perp} I_{C+}+2g_3^{\perp}(g_2^{\parallel}- g_1^{\parallel})I_{P}+2g_3^{\perp}g_2^{\perp}I_{P},\nonumber \\ \frac{dg_4^{\perp}}{dl}&=&(g_4^{\perp} g_4^{\perp}+g_3^{\perp}g_3^{\perp}) I_{C+}\nonumber \\ &+& 2g_2^{\perp}(g_1^{\parallel}-g_2^{\parallel})I_{L}+(g_4^{\perp} g_4^{\perp}+g_1^{\perp}g_1^{\perp}) I_{L}. \label{eq:goloperp} \end{eqnarray} Their diagrammatic expressions are given in Fig. \ref{fig:cLC5}. Here, $I_P$ and $I_L$ denote the Peierls- and Landau-channel terms due to particle-hole loop diagrams, and $I_{C}$ and $I_{C+}$ are the Cooper- and Cooper+-channel ones \cite{Emery,Bourbonnais,Kishine1,Kishine4,Suzumura,Suzumura2,SSolyom}. They are expressed as \begin{eqnarray} I_{P(L)} &\equiv &2\pi v_F \cdot dW_{\bm{R},\bm{L}(\bm{R})}^{+}/dl, \\ I_{C(C+)}& \equiv &2\pi v_F \cdot dW_{\bm{R},\bm{L}(\bm{R})}^{-}/dl. \end{eqnarray} In the present cLC mechanism, we find that the Cooper-channel scatterings are negligible, since the same cLC phase is obtained even if we neglect $I_{C}$ and $I_{C+}$. Thus, the RG equations become simpler: \begin{eqnarray} \frac{dg_2}{dl}&=&(g^2_2 +g^2_3) I_{P} +(-2g_2 g_4+2g_1g_4) I_{L}, \label{eq:goloperp2-2} \\ \frac{dg_4}{dl}&=& (g^2_4 +g^2_1-2g^2_2 +2g_1g_2) I_{L}, \label{eq:goloperp2} \end{eqnarray} where $g_i$ stands for $g^{\perp}_i$ for simplicity and the SU(2) condition is used. Then, $g_2$ is enhanced by the Peierls-channel term $(g^2_2 +g^2_3) I_{P}$, while it is suppressed by $-2g_2 g_4 I_{L}$ \cite{Bourbonnais}. On the other hand, $g_4$ becomes negative due to the Landau-channel term $-2g^2_2 I_{L}$. In the 1D region, it is well known that only $g_2$ is dominant. Therefore, the enhancement of the Landau-channel scattering is key to obtaining the cLC phase. Moreover, the Landau-channel scattering becomes important in the presence of geometrical frustration. In fact, finite $t^{\perp}$ violates the perfect nesting condition, and therefore $g_2$ is suppressed relative to the 1D system for $\Lambda_l < t^{\perp}$ \cite{Emery,Bourbonnais,Kishine1,Kishine4,Suzumura,Suzumura2}. Then, the Landau-channel scattering can be enhanced at low energies ($\Lambda_l < T$) without being prohibited by the SDW. Figure \ref{fig:cLC3} (d) exhibits the $\Lambda_{l}$-dependence of $I_{L(P)}$ in the linear dispersion model, which is given by \begin{eqnarray} I_{P}&=&\tanh(\Lambda_l/2T), \\ I_{L}&=&(\Lambda_l/2T)\cosh^{-2}(\Lambda_l/2T). \end{eqnarray} Thus, the Landau-channel scattering becomes as important as the Cooper (Peierls) scattering in the low-energy region \cite{fuseya_g4}. To verify the importance of geometrical frustration, we calculate $\chi^{c(s)}_{f}$ by dropping the frustration ($t^{\perp}=0$) in Fig. \ref{fig:cLC3} (e). In this case, only the spin susceptibility develops, while the charge one is quite small. Thus, the cLC order can overcome the SDW order in the presence of geometrical frustration.
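To illustrate this flow, the following is a minimal Python sketch that integrates the simplified one-loop equations (\ref{eq:goloperp2-2}) and (\ref{eq:goloperp2}) with the analytic $I_P$ and $I_L$ above, holding $g_1$ and $g_3$ fixed at small values (they remain small in the full flow of Fig. \ref{fig:cLC3} (c)). The initial couplings are illustrative, not the values used in the main calculation.
\begin{verbatim}
import numpy as np

T, Lambda0 = 0.05, 3.0     # temperature and initial cutoff, as in the text
g1, g3 = 0.1, 0.1          # held fixed: both stay small in the full flow
g2, g4 = 0.2, 0.2          # illustrative initial forward couplings
dl, l_c = 1e-3, 4.6        # Euler step and final scaling parameter

for l in np.arange(0.0, l_c, dl):
    x = Lambda0 * np.exp(-l) / (2.0 * T)
    I_P = np.tanh(x)                 # Peierls channel
    I_L = x / np.cosh(x) ** 2        # Landau channel, active at low energies
    dg2 = (g2**2 + g3**2) * I_P + (-2.0*g2*g4 + 2.0*g1*g4) * I_L
    dg4 = (g4**2 + g1**2 - 2.0*g2**2 + 2.0*g1*g2) * I_L
    g2, g4 = g2 + dg2 * dl, g4 + dg4 * dl

# g2 has grown (positive) while g4 has been driven negative by -2*g2^2*I_L,
# reproducing the qualitative behavior of Fig. cLC3 (c).
print(g2, g4)
\end{verbatim}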
\begin{figure*}[htb] \includegraphics[width=17cm]{cLC1-4.eps} \caption{One-loop RG equation for the four-point vertex function $g^{\perp}_{i}$ in the $g$-ology theory.} \label{fig:cLC5} \end{figure*} Next, we verify the spin-fluctuation-driven cLC mechanism based on the Fermi liquid theory valid for two-dimensional systems. The important roles of spin fluctuations in $d$-wave superconductivity and non-Fermi-liquid-type behaviors (such as $\rho\propto T$ and $R_H\propto T^{-1}$) have been revealed previously \cite{Kontani-rev,Moriya,Yamada,Sato-RH}. Here, in order to clarify the important role of spin fluctuations in the cLC formation, we solve the DW equation for the charge-channel form factor \cite{RTazai-PRB2021}: \begin{eqnarray} \lambda_{\q} f_{\k,\q}=\sum_{\k'} f_{\k',\q} L_{\k',\q} \left(-\frac{3}{2}V^{s}_{\k-\k'} -\frac{1}{2}V^{c}_{\k-\k'} \right),\label{eq:cdw} \end{eqnarray} where $\lambda_{\q}$ is the eigenvalue and $L_{\k,\q}\equiv (n_{\k_{-}}-n_{\k_{+}})/(\xi_{\k_{+}}- \xi_{\k_{-}}) >0$ with the Fermi distribution function $n_{\k}$. The interaction $V^{c(s)}_{\q}\equiv -(+)U+U^2 \chi^{c(s)}(\q)$ is calculated by the RPA. We find that the cLC solution $f_{\k,\q}\propto \sin k_x$ at $\q={\bm 0}$ gives the largest eigenvalue due to the large $V^{s}_{\k-\k'}$ at $\k-\k'\approx\pm\Q_{\rm S}$ \cite{RTazai-PRB2021}. In solving the DW equation, higher-order fluctuation-exchange processes with respect to $\chi^s(\bm{Q}_{\rm S})$ are generated. The even (odd)-order processes give the inter-branch repulsion $g_2>0$ (intra-branch attraction $g_4<0$), consistent with the $g$-ology analysis in Fig. \ref{fig:cLC3} (c). In conclusion, we proposed the microscopic origin of the cLC phase based on the RG theory with the optimized form factor. By virtue of this method, the ferro-type cLC order is obtained with high accuracy in a simple frustrated chain Hubbard model. In particular, the geometrical frustration promotes the strong enhancement of the forward scattering ($g_2$ and $g_4$) via the Landau channel. The present study indicates that the cLC can emerge in various metals near the magnetic quantum criticality with geometrical frustration. Thus, the proposed mechanism can be an essential origin of the cLC phases reported in cuprates, iridates, and related materials. \section{Summary} \label{sec:6} Exotic symmetry breaking phenomena, such as the nematic/smectic BOs and charge/spin current orders, have been recently reported in many strongly correlated metals. In this article, we discussed the variety of exotic orderings in terms of the symmetry breaking in the self-energy $\delta t_{i,j}^{c,s}$ ($i\ne j$) in a unified way. (Its Fourier transform gives the form factor $f_{\k,\q}^{c,s}$.) Since these exotic orders cannot be explained within mean-field-level approximations, we analyzed beyond-mean-field electron correlations by applying the RG theory. Based on the RG theory, we found that various types of exotic orders originate from the quantum interference shown in Fig. \ref{fig-interference}. Due to this mechanism, nematic ($\q={\bm0}$) and smectic ($\q\ne{\bm0}$) bond orders with the $d$-wave form factor $f_{\k,\q} \propto\cos k_x-\cos k_y$ appear in both cuprates and $\kappa$-(BEDT-TTF)$_2$X. The derived bond order naturally explains the pseudogap behaviors in these compounds. The quantum interference mechanism also causes the sLC order, which naturally explains the pseudogap in the DoS in cuprates.
Recently, the emergence of the three-dimensional CDW phase under magnetic fields \cite{cuprate-3D,cuprate-3D2} and uniaxial stress \cite{cuprate-3D3} has been reported by resonant x-ray measurements in cuprates. It is an important future problem to explain these experiments by using the present fRG method. In addition, we discussed the emergence of TRS-breaking charge-current orders by performing a precise RG analysis of the q1D Hubbard model. We revealed the spin-fluctuation-driven charge loop-current mechanism, which is expected to be general in low-dimensional Hubbard models with geometrical frustration. Thus, rich quantum phase transitions with $d$- and $p$-wave form factors are driven by the paramagnon interference in cuprates and their related materials. Finally, we comment that the quantum interference (Fig. \ref{fig-interference}) is significant in $f$-electron systems with strong spin-orbit interaction. Based on this mechanism, the multipole-fluctuation-pairing mechanism has been discussed in Refs. \onlinecite{RTazai-CeCu2Si2-1,RTazai-CeCu2Si2-2}, and the fully-gapped $s$-wave superconductivity without sign reversal in CeCu$_2$Si$_2$ is satisfactorily explained. Quadrupole and hexadecapole ordering in CeB$_6$ has also been studied \cite{RTazai-CeB6}. \acknowledgements We are grateful to S. Onari for useful discussions. This work is supported by Grants-in-Aid for Scientific Research (KAKENHI) (No. JP20K22328, No. JP20K03858, No. JP19H05825, No. JP18H01175, and No. JP16K05442) from MEXT of Japan. \input{reference_cLC_07} \end{document}
{ "timestamp": "2021-05-06T02:09:59", "yymm": "2105", "arxiv_id": "2105.01872", "language": "en", "url": "https://arxiv.org/abs/2105.01872" }
\section{Introduction}\label{sec-intro} Noisy low-rank matrix completion is concerned with the recovery of a low-rank matrix when only a fraction of noisy entries are observed. This topic has received much attention in the past decade, as a result of its vast applications in practical contexts such as collaborative filtering \citep{goldberg1992}, system identification \citep{liu2010} and sensor localisation \citep{biswas2006}. While the majority of the literature considers the completion of real-valued observations \citep{candes2009exact, candes2010power, keshavan2010matrix, koltchinskii2011, negahban2012, chen2020noisy}, many practical problems involve categorical-valued matrices, such as the famous Netflix challenge. Several works have been done on the completion of categorical matrices, including \cite{davenport20141} and \cite{bhaskar20151} for 1-bit matrices, and \cite{klopp2015adaptive} and \cite{bhaskar2016probabilistic} for categorical matrices, whose entries can take multiple discrete values. In these works, low-dimensional nonlinear probabilistic models are assumed to handle the categorical data. Despite the importance of uncertainty quantification to matrix completion, most of the matrix completion literature focuses on point estimation and prediction, while statistical inference has received attention only recently. Specifically, \cite{chen2019inference} and \cite{xia2021statistical} considered statistical inference under linear models and derived asymptotic normality results. The statistical inference for categorical matrices is more challenging due to the involvement of nonlinear models. To the best of our knowledge, no work has been done to provide statistical inference for the completion of categorical matrices. In addition to nonlinearity, another challenge in the modern theoretical analysis of matrix completion concerns the double asymptotic regime where both the numbers of rows and columns are allowed to grow to infinity. Under this asymptotic regime, both the dimension of the parameters and the number of observable entries grow with the numbers of rows and columns. However, the existing theory on statistical inference with a diverging number of parameters \citep{portnoy1988asymptotic,he2000parameters,wang2011gee} is not directly applicable, as the dimension of the parameter space in the current problem grows faster than is typically allowed for asymptotic normality; see Section~\ref{sec-asym-properties} for further discussions. In this paper, we move one step further towards statistical inference for the completion of categorical matrices. Specifically, we consider the inference for 1-bit matrix completion under a unidimensional nonlinear factor analysis model with the logit link. Such a nonlinear factor model is one of the most popular models for multivariate binary data, having received much attention from the theoretical perspective \citep{andersen1970asymptotic,haberman1977maximum,lindsay1991semiparametric,rice2004equivalence}, as well as wide applications in various areas, including educational testing \citep{van2013handbook}, word acquisition analysis \citep{kidwell2011statistical}, syntactic comprehension \citep{gutman2011rasch}, and analysis of health outcomes \citep{hagquist2017recent}. It is also referred to as the Rasch model \citep{rasch1960studies} in the psychometrics literature.
Despite the popularity and extensive research of the model, its use for 1-bit matrix completion and the related statistical inference for the latent factors and model parameters have not been explored. The considered nonlinear factor model is also closely related to the Bradley-Terry model \citep{bradley1952rank,simons1999asymptotics,han2020asymptotic} for directed random graphs and the $\beta$-model \citep{chatterjee2011random,rinaldo2013maximum} for undirected random graphs. In fact, the considered model can be viewed as a Bradley-Terry model or $\beta$-model for bipartite graphs \citep{rinaldo2013maximum}. However, the analysis of bipartite graphs is more involved, for which the results and proof strategies in the existing works no longer apply and new technical tools are needed. Specifically, we introduce a likelihood-based estimator under the nonlinear factor analysis model for 1-bit matrix completion. Under a very flexible missing-entry setting that does not require a random sampling scheme, asymptotic normality results are established that allow us to draw statistical inference. These results suggest that our estimator is asymptotically efficient and optimal, in the sense that the Cramer-Rao lower bound is achieved for model parameters. The proposed method and theory are applied to two real-world problems including (1) linking two forms of a college admission test that have common items and (2) linking the voting records from multiple years in the United States senate. In the first application, the proposed method allows us to answer the question ``for examinees A and B who took different test forms, would examinee A perform significantly better than examinee B, if they had taken the same test form?''. In the second application, it can answer questions such as ``Is Republican senator Marco Rubio significantly more conservative than Republican senator Judd Gregg?''. Note that Marco Rubio and Judd Gregg had not served in the United States senate at the same time. We point out that the entry missingness in these applications does not satisfy the commonly assumed random sampling schemes for matrix completion.
The rest of the paper is organized as follows. In Section~\ref{sec-parameter-est}, we introduce the considered factor model and discuss its application to 1-bit matrix completion. In Section~\ref{sec-asym-properties}, we establish the asymptotic normality for the maximum likelihood estimator. A simulation study is given in Section~\ref{sec-simulation} and two real-data applications are presented in Section~\ref{sec-real-data}. We conclude with discussions on the limitations of the current work and future directions in Section~\ref{sec-discussion}. \section{Model and Estimation}\label{sec-parameter-est} Let $Y$ be a 1-bit matrix with $N$ rows and $J$ columns and $Y_{ij} \in \{0,1\}$ be the entries of $Y$, $i = 1, ..., N$, and $j = 1, ..., J$. Some entries of $Y$ are not observable. We use $z_{ij}$ to indicate the missing status of entry $Y_{ij}$, where $z_{ij} = 1$ indicates that $Y_{ij}$ is observed and $z_{ij} = 0$ otherwise. We let $Z=(z_{ij})_{N\times J}$ be the indicator matrix for data missingness. The main goal of 1-bit matrix completion is to estimate $E(Y_{ij}\vert z_{ij} = 0)$. This problem is typically tackled under a probabilistic model \citep[see e.g.,][]{cai2013max, davenport20141,bhaskar20151}, which assumes that $Y_{ij}$, $i = 1, ..., N$, $j = 1, ..., J$, are independent Bernoulli random variables, with success probability $\exp(m_{ij})/\{1 + \exp(m_{ij})\}$ or $\Phi(m_{ij})$, where $m_{ij}$ is a real-valued parameter and $\Phi$ is the cumulative distribution function of the standard normal distribution. It is further assumed that the matrix $M = (m_{ij})_{N\times J}$ is either exactly low-rank or approximately low-rank, where the approximate low-rankness is measured by the nuclear norm of $M$. Finally, a random sampling scheme is typically assumed for $z_{ij}$. For example, \cite{davenport20141} considered a uniform sampling scheme where $z_{ij}$ are independent and identically distributed Bernoulli random variables, and \cite{cai2013max} considered a non-uniform sampling scheme. Under such a random sampling scheme, $Z$ and $Y$ are assumed to be independent and thus data missingness is ignorable. It is of interest to draw statistical inference on linear forms of $M$, including the inference of individual entries of $M$. This is a challenging problem under the above general setting for 1-bit matrix completion, largely due to the presence of a non-linear link function. In particular, the existing results on the inference for matrix completion as established in \cite{xia2021statistical} and \cite{chen2019inference} are under a linear model in which $m_{ij} + \epsilon_{ij}$ is observed for the non-missing entries, where $\epsilon_{ij}$ are mean-zero independent errors. Their analyses cannot be directly applied to non-linear models. As the first work on inference for 1-bit matrix completion with non-linear models, we start with a basic setting in which we assume the success probability takes a logistic form of $M$ and each $m_{ij}$ depends on a row effect and a column effect only. Asymptotic normality results are then established for the inference of $M$.
Specifically, this model assumes that \begin{itemize} \item[(1)] given $M$, $Y_{ij}$, $i = 1, ..., N$, $j = 1, ..., J$, are independent Bernoulli random variables whose distributions do not depend on the missing indicators $Z$, \item[(2)] the success probability for $Y_{ij}$ is $\exp(m_{ij})/\{1 + \exp(m_{ij})\}$, i.e., a logistic link is adopted, \item[(3)] $M$ has the model parameterization $m_{ij} = \theta_i - \beta_j$. \end{itemize} In the rest of the paper, $\theta_i$ and $\beta_j$ will be referred to as the row and column parameters, respectively. This parameterization allows the success probability of each entry to depend on both a row effect and a column effect. We now introduce two real-world applications and discuss the interpretations of the row and column parameters in these applications. \begin{example}\label{example:linking} The introduced model is also referred to as the Rasch model, one of the most popular item response theory models \citep{embretson2013item} for modeling item-level response data in educational testing and psychological measurement. In educational testing, each row of the data matrix represents an examinee and each column represents an item (i.e., exam question). Each binary entry $Y_{ij}$ records whether examinee $i$ correctly answers item $j$. The row parameter $\theta_i$ is interpreted as the ability of examinee $i$, which is an individual-specific latent factor, and the column parameter $\beta_j$ is interpreted as the difficulty of item $j$, as the probability of correctly answering an item increases with one's ability $\theta_i$ and decreases with the difficulty level $\beta_j$ of the item. In Section~\ref{sec-real-data}, we apply the considered model to link two forms of an educational test, an important practical issue in educational assessment \citep{kolen2014test}. That is, consider two groups of examinees taking two different forms of an educational test, where the two forms share some, but not all, of their items, resulting in missingness of the data matrix. As the two test forms may have different difficulty levels, it is usually not fair to directly compare the total scores of two students who take different forms. The proposed method allows us to compare examinees' performance as if they had taken the same test form and to also quantify the estimation uncertainty. \end{example} \begin{example}\label{example:voting} Consider senators' roll call voting records in the United States senate; in this application, each row of the data matrix corresponds to a senator and each column corresponds to a bill voted on in the senate. Each binary response $Y_{ij}$ records whether the senator voted for or against the bill. It has been well recognized in the political science literature \citep{poole1991dimensionalizing,poole1991patterns} that senate voting behavior is essentially unidimensional, though slightly different latent variable models are used in that literature. That is, it is believed that senators' voting behavior is driven by a unidimensional latent factor, often interpreted as the conservative-liberal political ideology. Moreover, it is a consensus that the Republican senators tend to lie on the conservative side of the factor and the Democratic senators tend to lie on the liberal side, though there are sometimes a very small number of exceptions. To apply our method to senators' roll call voting records, we pre-process the data as follows.
If bill $j$ is more supported by the Republican party than the Democratic party and senator $i$ voted for the bill, then we let $Y_{ij} = 1$. If bill $j$ is more supported by the Democratic party and senator $i$ voted against the bill, we let $Y_{ij} = 1$. Otherwise, $Y_{ij} = 0$. More details about this data pre-processing can be found in Section~\ref{sec-real-data}. Under the considered model, the row parameter $\theta_i$ may be interpreted as the conservativeness score of senator $i$. That is, the higher the conservativeness score of a senator, the higher the chance for him/her to support a bill favored by the Republican party and to vote against a bill favored by the Democratic party. The column parameter characterises the bill effect. In Section~\ref{sec-real-data}, we apply the model to link the roll call voting records from multiple years, where different senators have different terms in the senate, resulting in missingness of the data matrix. The model allows us to compare senators in terms of their conservative-liberal political ideology, even if they have not served in the senate at the same time. \end{example} As mentioned previously, the considered nonlinear factor model can be viewed as a Bradley-Terry model \citep{bradley1952rank} for directed graphs that is commonly used for modeling pairwise comparisons. In Remark~\ref{rmk:BT} below, we discuss this connection and explain why existing results such as those in \cite{han2020asymptotic} do not apply to the current setting. \begin{remark}\label{rmk:BT} Data $Y$ under our model setting can be viewed as a bipartite graph with $N+J$ nodes. Its adjacency matrix takes the form \begin{equation}\label{eq:bipartite} \left(\begin{array}{cc} \text{NA}_{N,N} & Y \\ (1_{N,J}-Y)^T & \text{NA}_{J,J} \end{array}\right), \end{equation} where $\text{NA}_{N,N}$ and $\text{NA}_{J,J}$ are two matrices whose entries are missing and $1_{N,J}$ is a matrix with all entries being 1. We let the value of $1 - Y_{ij}$ be missing if $Y_{ij}$ is missing (i.e., $z_{ij} = 0$). Such a directed graph can be modeled by the Bradley-Terry model; see \cite{bradley1952rank}. In \cite{han2020asymptotic}, asymptotic normality results are established for $n$-by-$n$ adjacency matrices that follow the Bradley-Terry model when the graph size $n$ grows to infinity. However, \cite{han2020asymptotic} only consider a uniform missingness setting. That is, the probability that the edges between two nodes are missing is assumed to be the same for all pairs of nodes. This assumption is not satisfied for the adjacency matrix \eqref{eq:bipartite}, due to the two missing matrices on the diagonal.
In fact, the asymptotic analysis under the current setting is more involved, due to the need to simultaneously consider the two indices $N$ and $J$ and the increased complexity in approximating the asymptotic variance of model parameters. \end{remark} Given data $\{Y_{ij}:z_{ij} = 1, i = 1, ..., N, j = 1, ..., J\}$, the log-likelihood function for parameters $\theta = (\theta_1, ..., \theta_N)^T$ and $\beta = (\beta_1, ..., \beta_J)^T$ takes the form \begin{equation}\label{eq: likelihood-M} l(\theta, \beta) = \sum_{i,j:z_{ij}=1} \left[ Y_{ij}(\theta_i - \beta_j)-\log\{1+\exp(\theta_i - \beta_j)\}\right]. \end{equation} The identifiability of parameters $\theta$ and $\beta$ is subject to a location shift. That is, the distribution of the data remains unchanged if we add a common constant to all the $\theta_i$ and $\beta_j$, as the likelihood function in \eqref{eq: likelihood-M} only depends on the differences $\theta_i - \beta_j$. To avoid ambiguity, we require $\sum_{i=1}^N\theta_i=0$ in the rest of the paper. We point out that this requirement does not play a role when we draw inference about any linear form of $M$, as the location shift of $\theta$ and $\beta$ does not affect the value of $M$. We estimate $\theta$ and $\beta$ by the maximum likelihood estimator \begin{equation}\label{eq:mle} (\hat \theta, \hat \beta) = \arg\max_{\theta, \beta}~l(\theta, \beta), \quad \mbox{s.t. } \sum_{i=1}^N\theta_i=0. \end{equation} The maximum likelihood estimator of $\theta$ and $\beta$ further leads to the maximum likelihood estimator of $M$, $\hat m_{ij} = \hat \theta_i - \hat \beta_j$. It is easy to see that \eqref{eq:mle} is a convex optimization problem. Thanks to the low-rank structure of $M$, this problem can be efficiently solved by performing alternating maximization, as often used for estimating low-rank matrices \citep{chen2019joint,chen2019structured,udell2016generalized}. Such an algorithm is implemented for the numerical experiments, whose details are provided in Algorithm \ref{alg} below.
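As a complement to the pseudocode in Algorithm~\ref{alg}, the following is a minimal Python sketch of the log-likelihood and the alternating gradient-ascent updates; the function names, step size, and stopping rule are illustrative rather than part of the proposed method.
\begin{verbatim}
import numpy as np

def log_lik(theta, beta, Y, Z):
    # log-likelihood l(theta, beta), summed over observed entries (z_ij = 1)
    M = theta[:, None] - beta[None, :]
    return np.sum(Z * (Y * M - np.logaddexp(0.0, M)))

def fit(Y, Z, gamma=0.01, tol=1e-8, max_iter=10000, seed=0):
    N, J = Y.shape
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-1.0, 1.0, N)   # illustrative initialization range
    beta = rng.uniform(-1.0, 1.0, J)
    old = -np.inf
    for _ in range(max_iter):
        P = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        theta = theta + gamma * np.sum(Z * (Y - P), axis=1)  # ascent in theta
        P = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        beta = beta + gamma * np.sum(Z * (P - Y), axis=0)    # ascent in beta
        shift = theta.mean()            # enforce sum(theta) = 0; shifting
        theta -= shift                  # beta by the same constant keeps
        beta -= shift                   # m_ij = theta_i - beta_j unchanged
        new = log_lik(theta, beta, Y, Z)
        if new - old <= tol:
            break
        old = new
    return theta, beta
\end{verbatim}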
\begin{algorithm}[h] \SetAlgoLined \caption{Alternating Gradient Ascent Algorithm} \KwIn{Partially observed data matrix $Y$, learning rate $\gamma$, tolerance threshold $tol$, initialization range $c$.} \KwOut{Estimates $\hat{\theta}$, $\hat{\beta}$.} Initialize $\theta^{(0)}=\{\theta_i^{(0)}:i=1,...,N\}, \beta^{(0)}=\{\beta_j^{(0)}:j=1,...,J\}$ \\with $\theta_i^{(0)}, \beta_j^{(0)}\sim $ Uniform$(-c, c)$, $i=1,...,N, j=1,...,J$,\\ JML$^{(0)}=-\infty$, JML$^{(1)}={\sum}_{i,j:z_{ij}=1}\Big( y_{ij}\big\{\theta_i^{(0)} - \beta_j^{(0)}\big\}-\log\big[1+\exp\big\{\theta_i^{(0)} - \beta_j^{(0)}\big\}\big]\Big)$\; \While{JML$^{(1)}-$JML$^{(0)} > tol$} { JML$^{(0)}=$JML$^{(1)}$\; \For{$i = 1, \dots, N$}{ $\theta_i^{(1)} \leftarrow \theta_i^{(0)}+\gamma\Big(\sum_{j: z_{ij}=1}\Big[y_{ij}-\Big\{e^{\theta_i^{(0)}-\beta_j^{(0)}}\Big\}/\Big\{1+e^{\theta_i^{(0)}-\beta_j^{(0)}}\Big\}\Big]\Big)$\; } \For{$j = 1, \dots, J$}{ $\beta_j^{(1)} \leftarrow \beta_j^{(0)}+\gamma\Big(\sum_{i: z_{ij}=1}\Big[-y_{ij}+\Big\{e^{\theta_i^{(1)}-\beta_j^{(0)}}\Big\}/\Big\{1+e^{\theta_i^{(1)}-\beta_j^{(0)}}\Big\}\Big]\Big)$\; } $\theta^{(1)}=\theta^{(1)}-N^{-1}\sum_{i=1}^N\theta_i^{(1)}$\; $\beta^{(1)}=\beta^{(1)}-N^{-1}\sum_{i=1}^N\theta_i^{(1)}$\; JML$^{(1)}={\sum}_{i,j:z_{ij}=1}\Big( y_{ij}\big\{\theta_i^{(1)} - \beta_j^{(1)}\big\}-\log\big[1+\exp\big\{\theta_i^{(1)} - \beta_j^{(1)}\big\}\big]\Big)$\; $\theta^{(0)}=\theta^{(1)}$\; $\beta^{(0)}=\beta^{(1)}$; } \label{alg} \end{algorithm} \section{Statistical Inference}\label{sec-asym-properties} In this section, we consider the statistical inference of any linear form of $M$. Specifically, we use $g: \mathbb R^{N\times J} \mapsto \mathbb R$ to denote a linear function of $M$ that takes the form \begin{equation}\label{eq:g} g(M)=\sum_{i=1}^{N}\sum_{j = 1}^J w_{ij}m_{ij}, \end{equation} where the weights $w_{ij}$ are pre-specified. It is straightforward that a point estimate of $g(M)$ is given by $g(\hat M)= \sum_{i=1}^{N}\sum_{j = 1}^J w_{ij}\hat m_{ij}$. Our goal is to establish the asymptotic normality for $g(\hat M)$, based on which we can test hypotheses about $g(M)$ or construct confidence intervals. We provide two examples of $g(M)$ that may be of interest in practice. \begin{example} Consider $g(M)=m_{ij}$ for an entry $(i,j)$ that is not observed, i.e., $z_{ij} = 0$. The asymptotic normality of $\hat m_{ij}$ allows us to quantify the uncertainty in our prediction $\exp(\hat m_{ij})/\{1+\exp(\hat m_{ij})\}$ of the unobserved entry. \end{example} \begin{example} Consider $g(M)= \sum_{j=1}^J (m_{ij} - m_{i'j})/J = \theta_i - \theta_{i'}$, which is of interest in both educational testing and ranking. If we interpret the model as the Rasch model in educational testing, then $\theta_i$ can be regarded as examinee $i$'s ability level. Examinee $i$ is more likely to answer any question correctly than examinee $i'$ if $\theta_i > \theta_{i'}$, and vice versa. Therefore, even when two examinees do not take the same test form, the statistical inference of this quantity allows us to compare their performance and further quantify the uncertainty in this comparison. On the other hand, if we draw connections to the Bradley-Terry model in ranking, then $\theta_i$ can be interpreted as subject $i$'s ranking score. The statistical inference on $(\theta_i - \theta_{i'})$ for any combination of $i, i'$ would allow us to quantify the uncertainty in the rankings of all $N$ subjects. \end{example} We first establish the existence and consistency of the maximum likelihood estimators of $M$, $\theta$, and $\beta$.
We denote $$J_{*}=\min\Big\{\sum_{j=1}^{J}z_{ij} : i = 1, ..., N \Big\} \mbox{ and } J^{*}=\max\Big\{\sum_{j=1}^{J}z_{ij}: i = 1, ..., N \Big\}$$ as the minimum and maximum numbers of observed entries per row, respectively. Similarly, we denote $$N_{*}=\min\Big\{\sum_{i=1}^{N}z_{ij}: j = 1, ..., J \Big\} \mbox{ and } N^{*}=\max\Big\{\sum_{i=1}^{N}z_{ij}: j = 1, ..., J \Big\}$$ as the minimum and maximum numbers of observed entries per column, respectively. Let $\Vert x \Vert_{\infty} = \max\{|x_i|: i = 1, ..., n\}$ be the infinity norm of a vector $x = (x_1, ..., x_n)^T$. Let $\theta^*$, $\beta^*$ and $M^*$ be the true values of $\theta$, $\beta$ and $M$, respectively. Without loss of generality, we assume $N\geq J.$ For simplicity, we also assume $N_{*}>J_{*}$ and $N^*>J^*$. The following conditions are required. \begin{condition}\label{cond:speed} As $N$ and $J$ grow to infinity, the following are satisfied: \begin{itemize} \item[$(a)$] There exists a constant $k>0$, such that $N_{*} \geq k N^{2/3}$ and $J_{*} \geq k J^{2/3}$; \item[$(b)$] $(J_{*})^{-1} (\log N)$ converges to 0; \item[$(c)$] There exist positive constants $k_1$ and $k_2$ such that $k_1 J_{*} \leq J^{*} \leq k_2 J_{*}$. \end{itemize} \end{condition} \begin{condition}\label{cond:bound} There exists a constant $c<\infty$ such that $\Vert \theta^*\Vert_{\infty} < c$ and $\Vert \beta^*\Vert_{\infty} < c$. \end{condition} \begin{condition} \label{cond:connect} For any $(i, j)$, there exist $1\leq i_1, i_2,...,i_k \leq N$ and $1\leq j_1,j_2, ...,j_k \leq J$ such that $z_{ij_1}=z_{i_1j_1}=z_{i_1 j_2}=z_{i_2 j_2}=...=z_{i_kj_k}=z_{i_kj}=1.$ \end{condition} We provide some discussions on Conditions~\ref{cond:speed} and \ref{cond:connect}. Condition~\ref{cond:speed}(a) requires the number of observations for each parameter to grow to infinity at a suitable rate. Under this requirement, the proportion of observable entries is allowed to decay to zero at the rate $(NJ)^{-\frac{1}{3}}$. Condition~\ref{cond:speed}(b) is a very mild technical condition that requires $J_*$ to grow faster than $\log N$. Condition~\ref{cond:speed}(c) requires that $J_*$ and $J^*$ are of the same order. This assumption essentially requires a balanced missing data pattern, in a similar spirit to the random sampling regimes for missingness adopted in \cite{cai2013max} and \cite{davenport20141}. Condition~\ref{cond:connect} is necessary and sufficient for the identifiability of $\theta$ and $\beta$; see Proposition~\ref{prop:connect} for a formal statement. \begin{proposition}\label{prop:connect} If Condition~\ref{cond:connect} holds, then $\theta$ and $\beta$ are uniquely determined by the equations $\sum_{i=1}^N\theta_i=0$ and $\theta_i - \beta_j = m_{ij}$, $i = 1, ..., N, j = 1, ..., J$, for which $z_{ij} = 1$. If Condition~\ref{cond:connect} does not hold, then there exists $(\tilde \theta, \tilde \beta) \neq (\theta, \beta)$, such that $\sum_{i=1}^N \tilde\theta_i=0$, $\sum_{i=1}^N \theta_i=0$, and $\theta_i - \beta_j = \tilde \theta_i - \tilde \beta_j$, $i = 1, ..., N, j = 1, ..., J, z_{ij} = 1$. \end{proposition} Theorem~\ref{thm-existence} below guarantees the existence and consistency of the maximum likelihood estimator, when both $N$ and $J$ grow to infinity. \begin{theorem}\label{thm-existence} Assume Conditions \ref{cond:speed}, \ref{cond:bound} and \ref{cond:connect} hold. Then, as $N, J$ grow to infinity, the maximum likelihood estimator $(\hat{\theta},\hat{\beta})$ exists with probability tending to 1.
Furthermore, as $N$ and $J$ grow to infinity, we have $$\Vert \hat \theta - \theta^*\Vert_{\infty} = O_p\big\{(\log N)^{\frac{1}{2}}J_{*}^{-\frac{1}{2}} \big\}, \quad \Vert \hat \beta - \beta^*\Vert_{\infty} = O_p\big\{(\log J)^{\frac{1}{2}}N_{*}^{-\frac{1}{2}} \big\},$$ and $$\max_{i,j} \vert \hat m_{ij} - m_{ij}^*\vert = O_p\big\{(\log J)^{\frac{1}{2}}N_{*}^{-\frac{1}{2}} + (\log N)^{\frac{1}{2}}J_{*}^{-\frac{1}{2}} \big\}.$$ \end{theorem}
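In practice, Condition~\ref{cond:connect} can be checked directly: it holds if and only if the bipartite graph with $N$ row nodes and $J$ column nodes, linked whenever $z_{ij}=1$, is connected. The following is a minimal Python sketch of this check, assuming the missingness mask is given as a 0/1 numpy array; the function name is ours and purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix, bmat
from scipy.sparse.csgraph import connected_components

def satisfies_condition_3(Z):
    """Check Condition 3: the bipartite graph with N row nodes and J
    column nodes, joined whenever z_ij = 1, must be connected."""
    Zs = csr_matrix(np.asarray(Z))
    adj = bmat([[None, Zs], [Zs.T, None]])   # (N+J) x (N+J) adjacency
    n_components, _ = connected_components(adj, directed=False)
    return n_components == 1
\end{verbatim}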
To state the asymptotic normality result for $g(\hat M)$, we reexpress $$g(M) = w_g^T \theta +\tilde w_g^T \beta,$$ where $w_g=(w_{g1}, \cdots, w_{gN})^T$ and $\tilde w_g=(\Tilde w_{g1}, \cdots, \Tilde w_{gJ})^T$. Note that this expression always exists, by letting $w_{gi}=\sum_{j=1}^Jw_{ij}$ and $\Tilde w_{gj}=-\sum_{i=1}^Nw_{ij}$. We introduce some notation. Let $\sigma_{ij}^2 =$ var$(Y_{ij}) = \exp(\theta_i^* -\beta_j^*)/\{1+\exp(\theta_i^* -\beta_j^*)\}^2$, $\sigma_{i+}^2= \sum_{j = 1}^J z_{ij}\sigma_{ij}^2$, and $\sigma_{+j}^2= \sum_{i = 1}^N z_{ij}\sigma_{ij}^2.$ Further denote $\hat \sigma_{ij}^2 = \exp(\hat \theta_i -\hat \beta_j)/\{1+\exp(\hat \theta_i -\hat \beta_j)\}^2$, $\hat \sigma_{i+}^2= \sum_{j = 1}^J z_{ij}\hat \sigma_{ij}^2$, and $\hat \sigma_{+j}^2= \sum_{i = 1}^N z_{ij}\hat \sigma_{ij}^2$ to be the corresponding plug-in estimates. We use $\Vert \cdot\Vert_1$ to denote the $L_1$ norm of a vector. The result is summarized in Theorem~\ref{thm-3-sufficient-condition} below. \begin{theorem}\label{thm-3-sufficient-condition} Assume Conditions \ref{cond:speed}, \ref{cond:bound} and \ref{cond:connect} hold and $J_{*}^{-2}N_{*}(\log N)^2\to 0$ as $N \to \infty$. Consider a linear function $g(M) = w_g^T \theta +\tilde w_g^T \beta$ with $g(M) \neq 0$. Further suppose that there exists a constant $C > 0$ such that $\| w_g\|_1 < C$ and $\| \tilde w_g\|_1 < C$. Then $$\tilde \sigma(g)^{-1}\big\{g(\hat{M})-g({M^*})\big\} \to N(0,1)~ \text{in distribution},$$ where $\tilde{\sigma}^2(g)=\sum_{i=1}^{N}w_{gi}^2(\sigma_{i+}^2)^{-1}+\sum_{j=1}^{J}\Tilde w_{gj}^2(\sigma_{+j}^2)^{-1}.$ Moreover, $\tilde{\sigma}(g)$ can be replaced by its plug-in estimator, i.e., \begin{equation}\label{eq:normality} \hat \sigma(g)^{-1}\big\{g(\hat{M})-g({M^*})\big\} \to N(0,1) \text{ in distribution}, \end{equation} where $\hat{\sigma}^2(g)=\sum_{i=1}^{N}w_{gi}^2(\hat\sigma_{i+}^2)^{-1}+\sum_{j=1}^{J}\Tilde w_{gj}^2(\hat\sigma_{+j}^2)^{-1}.$ \end{theorem} We now discuss the implications of Theorem~\ref{thm-3-sufficient-condition}. For each $\theta_i$, var$(\hat{\theta}_i)=(\sigma_{i+}^2)^{-1}\big\{1+ o(1)\big\}$. It is worth noting that, by the classical theory for maximum likelihood estimation, $(\sigma_{i+}^2)^{-1}$ is the Cramer-Rao lower bound for the estimation of $\theta_i$ when the column parameters $\beta$ are known.
Thus, the result of Theorem~\ref{thm-3-sufficient-condition} implies that $\hat \theta_i$ is an asymptotically optimal estimator of $\theta_i$. Similarly, for each $\beta_j$, var$(\hat{\beta}_j)=(\sigma_{+j}^2)^{-1}\big\{1+ o(1)\big\}$, which also achieves the Cramer-Rao lower bound asymptotically when the row parameters $\theta$ are known. Moreover, var$(\hat{m}_{ij}) =$ var$(\hat \theta_i - \hat \beta_j) =\big\{(\sigma_{i+}^2)^{-1}+(\sigma_{+j}^2)^{-1}\big\}\big\{1+ o(1)\big\}$. We end this section with a remark. \begin{remark} The derived asymptotic theory is different from that for non-linear regression models with a diverging number of parameters studied in \cite{portnoy1988asymptotic}, \cite{he2000parameters} and \cite{wang2011gee}. To achieve asymptotic normality under the setting of these works, one at least requires the number of observations to grow faster than the square of the number of parameters. Under the setting of the current work, the model has $N+J-1$ free parameters, while the number of observed entries is allowed to grow as slowly as $O((NJ)^{\frac{2}{3}})$, which is much slower than $(N+J-1)^2$. Even when there are no missing entries, the number of observed entries is $NJ$, which does not grow faster than $(N+J-1)^2$. \end{remark} \section{Simulation Study}\label{sec-simulation} We study the finite-sample performance of the likelihood-based estimator. We consider two settings: (1) $N=5000$ and $J=200$, and (2) $N=10000$ and $J=400$. Missing data are generated under a block-wise design. That is, we split the rows into five equal-sized clusters and the columns into four equal-sized clusters. We let each row cluster correspond to the columns from a distinct combination of two column clusters. Rows from the same cluster have the same missing pattern. Specifically, their entries are observable, and only observable, on the columns that this row cluster corresponds to. This missing data pattern can be illustrated by a five-by-four block-wise matrix $\{(1,0,0,1,0)^T, (1,1,0,0,1)^T, (0,1,1,1,0)^T, (0,0,1,0,1)^T\}$, where 1 and 0 represent a submatrix with $z_{ij} = 1$ and 0, respectively. The missing pattern $Z$ is illustrated in Figure \ref{fig:simulation_data1}. Under the first setting, $N_{*}=2000, N^{*}=3000$, and $J_{*}=J^{*}=100$. Under the second setting, $N_{*}=4000, N^{*}=6000$, and $J_{*}=J^{*}=200$. For each setting, $\theta$ is simulated from a uniform distribution over the space $\{x = (x_1, ..., x_N)^T: \sum_{i=1}^N x_i = 0, -2 \leq x_i \leq 2 \}$, and $\beta$ is obtained by simulating $\beta_j$ independently from the uniform distribution over the interval $[-2,2]$. For each setting, 2000 independent datasets are generated from the considered model. \begin{figure}[ht] \centering \includegraphics[scale=1.25]{plots/missing.jpg} \caption{A heat map of $Z$. The black and white regions correspond to $z_{ij}=1$ and 0, respectively.} \label{fig:simulation_data1} \end{figure} Under setting (1), the mean squared estimation errors for $M$, $\theta$ and $\beta$ are 0.067, 0.064 and 0.0028, respectively, across all relevant entries and all 2000 independent samples. Under setting (2), these values read 0.033, 0.031 and 0.0013, respectively. Unsurprisingly, increasing the sample sizes improves the estimation accuracy. We then examine the variance approximation in Theorem~\ref{thm-3-sufficient-condition}.
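Concretely, the plug-in variance $\hat{\sigma}^2(g)$ of Theorem~\ref{thm-3-sufficient-condition}, which is used throughout the comparisons below, can be computed directly from the fitted parameters; the following is a minimal Python sketch (the function name and array conventions are ours, not part of the original procedure).
\begin{verbatim}
import numpy as np

def plugin_variance(theta_hat, beta_hat, Z, w, w_tilde):
    # hat-sigma^2(g) = sum_i w_{gi}^2 / hat-sigma_{i+}^2
    #                + sum_j w~_{gj}^2 / hat-sigma_{+j}^2
    M = theta_hat[:, None] - beta_hat[None, :]
    P = 1.0 / (1.0 + np.exp(-M))
    S = Z * P * (1.0 - P)      # hat-sigma_ij^2 = p(1-p) on observed entries
    return (np.sum(w ** 2 / S.sum(axis=1))
            + np.sum(w_tilde ** 2 / S.sum(axis=0)))

# Example: for g(M) = theta_1 - theta_2, take w = e_1 - e_2 and w_tilde = 0.
\end{verbatim}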
We compare $\hat{\sigma}^2(g)$, $\tilde{\sigma}^2(g)$ and $s^2(g)$, where $s^2(g)$ denotes the sample variance of $g(\hat M)$ calculated based on the 2000 simulations. As $\hat{\sigma}^2(g)$ varies across the datasets, we calculate $\bar{\sigma}^2(g)$ as the average of $\hat{\sigma}^2(g)$ over the 2000 simulated datasets. We consider the functions $g(M) = m_{ij}, \theta_i, \beta_j$, $i=1, ..., N, j = 1, ..., J$. The results are given in Figure~\ref{fig:var-pairs}, where panels (a)-(c) show the scatter plots of ${s}^2(g)$ against $\bar{\sigma}^2(g)$ and panels (d)-(f) show those of $s^2(g)$ against $\tilde{\sigma}^2(g)$. These plots suggest that $\bar{\sigma}^2(g)$, $\tilde{\sigma}^2(g)$, and $s^2(g)$ are close to each other, for the specific forms of $g$ that are examined. \begin{figure}[ht] \centering \begin{subfigure}{\linewidth} \centering \includegraphics[scale=0.45]{plots/varPair_sampleVsEst_corrected1.pdf} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[scale=0.45]{plots/varPair_estVstrue_corrected1.pdf} \end{subfigure} \caption{Panels (a)-(c) plot $s^2(g)$ against $\bar{\sigma}^2(g)$ for $g(M) = m_{ij}$, $\theta_i$, and $\beta_j$, respectively. Panels (d)-(f) plot $s^2(g)$ against $\tilde{\sigma}^2(g)$ for $g(M) = m_{ij}$, $\theta_i$ and $\beta_j$, respectively. Each panel shows 100 randomly sampled $m_{ij}$, $\theta_i$, or $\beta_j$ under each setting. The line $y = x$ is given as a reference. } \label{fig:var-pairs} \end{figure} To validate the asymptotic normality, we compare the empirical densities of the 2000 sample estimates of $m_{11}$, $\theta_1$ and $\beta_1$ against their respective theoretical normal density curves in Figure \ref{fig:density} for illustration. We can observe from Figure \ref{fig:density} that the empirical distributions of the estimates agree well with their corresponding theoretical distributions. \begin{figure}[ht] \centering \begin{subfigure}{\linewidth} \centering \includegraphics[scale=0.45]{plots/density_plots_corrected.pdf} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[scale=0.45]{plots/density_plots_case2_corrected.pdf} \end{subfigure} \caption{Panels (a)-(c) present the empirical densities (histograms) of $\hat{m}_{11}$, $\hat{\theta}_{1}$ and $\hat{\beta}_1$ under setting (1), respectively, out of 2000 simulations.
Panels (e)-(g) present the empirical densities of $\hat{m}_{11}$, $\hat{\theta}_{1}$ and $\hat{\beta}_1$ under setting (2), respectively, out of 2000 simulations. The curves are the theoretical density curves of N$(m_{11}, \Tilde \sigma^2(m_{11}))$, N$(\theta_{1}, \Tilde \sigma^2(\theta_{1}))$ and N$(\beta_{1}, \Tilde \sigma^2(\beta_{1})),$ respectively, included as references.} \label{fig:density} \end{figure} Furthermore, for each $m_{ij}$, $\theta_i$, and $\beta_j$, we construct its 95\% Wald interval based on \eqref{eq:normality}, for which the empirical coverage based on the 2000 independent replications is computed. This result is shown in Figure~\ref{fig:coverage_prob}, where the two panels correspond to the two simulation settings, respectively. In each panel, the three box-plots show the empirical coverage probabilities for the entries of $M$, $\theta$, and $\beta$, respectively. As we can see, all these empirical coverage probabilities are close to the nominal level of 95\%. \begin{figure}[ht] \centering \includegraphics[scale=0.45]{plots/coverage_prob_corrected1.pdf} \caption{Panels (a) and (b) show the empirical coverage rates for the 95\% Wald intervals under settings (1) and (2), respectively.} \label{fig:coverage_prob} \end{figure}
\section{Real-data Applications}\label{sec-real-data} In what follows, we consider two real-data applications. \subsection{Application to Educational Testing}\label{sec-real-data1} We first apply the proposed method to link two forms of an educational test that share common items. The dataset is a benchmark dataset for studying linking methods for educational testing \citep{gonzalez2017applying}. It contains binary responses from two forms of a college admission test. Each form has 120 items and is answered by 2000 examinees, there are 40 common items shared by the two test forms, and there is no missing data within each test form. Thus, $N = 4000$ and $J = 200$, and since each examinee responds to only 120 of the 200 distinct items, 40\% of the data entries are missing. We apply the proposed method to this dataset. Making use of Theorem~\ref{thm-3-sufficient-condition}, 95\% confidence intervals are obtained for both the row (i.e., person) parameters and the column (i.e., item) parameters. The results allow us to compare students who took different test forms, as well as non-common items from the two forms; a minimal numerical illustration of such a comparison is given below. For illustration, we randomly choose 100 row parameters and 100 column parameters and show their 95\% confidence intervals in Figure~\ref{fig: real-data-CI}. Such uncertainty quantification can be vital for colleges when making admission decisions.
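To make the cross-form comparison concrete, the sketch below computes a 95\% confidence interval for the difference between two person parameters estimated from different forms. The estimates and standard errors are hypothetical placeholders rather than values from the dataset, and the variances are summed on the assumption that the two estimators are asymptotically independent to leading order, the same device used for the senator comparison in Section~\ref{sec-real-data2}.
\begin{verbatim}
import math

# Hypothetical estimates for two examinees who took different forms;
# placeholder values, not from the dataset.
theta_a, se_a = 1.12, 0.21
theta_b, se_b = 0.64, 0.23

diff = theta_a - theta_b
se_diff = math.sqrt(se_a**2 + se_b**2)   # variances add to leading order
lo, hi = diff - 1.96 * se_diff, diff + 1.96 * se_diff
print(f"95% CI for theta_a - theta_b: ({lo:.2f}, {hi:.2f})")
\end{verbatim}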
\begin{figure}[ht] \centering{\includegraphics[scale=0.45]{plots/real_CI_corrected.pdf}} \caption{(a) 95\% confidence intervals of 100 row parameters, with 50 randomly selected from each group. (b) 95\% confidence intervals of the 100 column parameters, with 40 randomly chosen from each of group 1 and group 2 and 20 randomly selected from the anchor items (i.e., common items).} \label{fig: real-data-CI} \end{figure} \subsection{Application to Senate Voting}\label{sec-real-data2} We now apply the proposed method to the United States senate roll call voting data. The data are from the 111th through the 113th Congresses, covering the voting records from January 11, 2009 to December 16, 2014. Quite a few senators did not serve for the entire period. To apply our method to senators' roll call voting records with $\theta_i$ interpreted as the conservativeness score of senator $i$, we pre-process the data as follows. First, five senators who did not serve for more than half a year during the period are removed from the dataset, namely Edward M. Kennedy, Joe Biden, Hillary Clinton, Ken Salazar and Carte Goodwin. Second, 191 bills are removed, as all the observed votes on each of these bills are identical, and consequently the corresponding maximum likelihood estimates do not exist. After these two steps, the resulting dataset contains $N = 139$ senators and $J = 1648$ bills. Finally, for each bill $j$ that has a higher percentage of support within the Republican party than within the Democratic party, we let $Y_{ij} = 1$ if senator $i$ voted for the bill and $Y_{ij} = 0$ if senator $i$ voted against it. For each bill $j$ that has a higher percentage of support within the Democratic party than within the Republican party, we let $Y_{ij} = 1$ if senator $i$ voted against the bill and $Y_{ij} = 0$ if he/she voted for it. The value of $Y_{ij}$ is missing if the senator chose not to vote or was not in the senate when the bill was voted on. For the final data being analyzed, the proportion of missing entries is 26.1\% and the connectedness Condition~\ref{cond:connect} is satisfied. The missingness pattern of the dataset is given in Figure \ref{fig: heat-map-Z-voting}. Note that in this example $N<J$; however, our asymptotic results still apply if we simply switch the roles of $N$ and $J$ in the required conditions. Our asymptotic results allow us to compare senators' ideological positions, even if they did not serve in the senate at the same time. For example, Judd Gregg served in the senate between January 3, 1993 and January 3, 2011, while Marco Rubio started his first term as a senator on January 3, 2011. In our model, Judd Gregg ($\theta_i$) and Marco Rubio ($\theta_k$) have estimated conservativeness scores of 2.59 and 4.25, respectively. Applying our asymptotic results, we have $\hat{\theta}_i-\hat{\theta}_k = -1.66$ with a standard error of 0.169. If we test $H_0: \theta_i=\theta_k$ against $H_1: \theta_i\neq \theta_k$, the $z$-statistic is $-1.66/0.169 \approx -9.8$, which yields an extremely small p-value of $9.0\times10^{-23}.$ Therefore, we conclude that senator Marco Rubio is significantly more conservative than senator Judd Gregg.
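This test can be reproduced directly from the reported estimate and standard error; a quick numerical check (using SciPy for the normal tail probability) is:
\begin{verbatim}
from scipy.stats import norm

diff, se = -1.66, 0.169        # theta_i - theta_k and its standard error
z = diff / se                  # z-statistic, approximately -9.8
p = 2 * norm.sf(abs(z))        # two-sided p-value
print(f"z = {z:.2f}, p = {p:.1e}")   # p is about 9e-23
\end{verbatim}
The printed p-value agrees with the one in the text up to rounding of the reported numbers.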
In addition, we present in Tables \ref{table: voting-ranking-conservative} and \ref{table: voting-ranking-liberal} the ten senators with the largest row parameter estimates and the ten senators with the smallest row parameter estimates. These results align well with the public perceptions of these senators. For example, Jim DeMint, who is ranked the most conservative senator in this dataset by our method, was also identified by Salon as one of the most conservative members of the senate \citep{Kornacki2011why}. Our method ranks Mike Lee second, though his conservativeness score is not significantly different from that of DeMint. In fact, in 2017, the New York Times used the NOMINATE system \citep{poole2001d} to arrange Republican senators by ideology and ranked Lee as the most conservative member of the Senate \citep{Parlapiano2017how}. As another example, Brian Schatz, who is ranked the most liberal senator by our method, is well known as a liberal Democrat. During his time in the senate, he voted with the Democratic party on most issues. Finally, the 95\% confidence intervals for all the row parameters are shown in Figure \ref{fig:real-data-CI1}, and a full list of rankings for all the 139 senators is given in the Appendix, where the corresponding row parameter estimates and their standard errors are also presented. \begin{figure}[ht] \centering{\includegraphics[height=3.5cm, width=12cm]{plots/heatPlot4.png}} \caption{A heat map of $Z$.
The black and white regions correspond to $z_{ij}=1$ and 0, respectively.} \label{fig: heat-map-Z-voting} \end{figure} \begin{table}[h] \centering \caption{Ranking of the top 10 most conservative senators predicted by the model. Rep and Dem represent the Republican party and the Democratic party, respectively.} \label{table: voting-ranking-conservative} \begin{tabular}{lllc} \hline Rank & Senator (party) & State & Conservativeness Score (s.e.$(\hat{\theta})$)\\ \hline 1 & Jim DeMint (Rep) & South Carolina & 5.87 (0.157) \\ 2 & Mike Lee (Rep) & Utah & 5.73 (0.138) \\ 3 & Ted Cruz (Rep) & Texas & 5.65 (0.195) \\ 4 & Tom Coburn (Rep) & Oklahoma & 5.25 (0.114) \\ 5 & Rand Paul (Rep) & Kentucky & 5.24 (0.129) \\ 6 & Tim Scott (Rep) & South Carolina & 5.17 (0.176) \\ 7 & Jim Bunning (Rep) & Kentucky & 4.92 (0.204) \\ 8 & Ron Johnson (Rep) & Wisconsin & 4.84 (0.119) \\ 9 & James Risch (Rep) & Idaho & 4.81 (0.102) \\ 10 & Jim Inhofe (Rep) & Oklahoma & 4.69 (0.103) \\ \hline \end{tabular} \end{table} \begin{table}[h] \centering \caption{Ranking of the top 10 most liberal senators predicted by the model. Rep and Dem represent the Republican party and the Democratic party, respectively.} \label{table: voting-ranking-liberal} \begin{tabular}{lllc} \hline Rank & Senator (party) & State & Conservativeness Score (s.e.$(\hat{\theta})$) \\ \hline 1 & Brian Schatz (Dem) & Hawaii & -4.74 (0.468) \\ 2 & Roland Burris (Dem) & Illinois & -4.43 (0.297) \\ 3 & Mazie Hirono (Dem) & Hawaii & -4.17 (0.383) \\ 4 & Cory Booker (Dem) & New Jersey & -4.14 (0.572) \\ 5 & Tammy Baldwin (Dem) & Wisconsin & -3.90 (0.352) \\ 6 & Sherrod Brown (Dem) & Ohio & -3.89 (0.168) \\ 7 & Tom Udall (Dem) & New Mexico & -3.85 (0.165) \\ 8 & Dick Durbin (Dem) & Illinois & -3.83 (0.164) \\ 9 & Ben Cardin (Dem) & Maryland & -3.82 (0.163) \\ 10& Sheldon Whitehouse (Dem) & Rhode Island & -3.74 (0.163) \\ \hline \end{tabular} \end{table} \begin{figure}[ht] \centering{\includegraphics[scale=0.21]{plots/CI_conservativeness_score1.png}} \caption{95\% confidence intervals of 139 row (i.e. senator) parameters. } \label{fig:real-data-CI1} \end{figure} \section{Discussion}\label{sec-discussion} This note considers statistical inference for 1-bit matrix completion under a unidimensional nonlinear factor model, the Rasch model. Asymptotic normality results are established. Our results suggest that the maximum likelihood estimator is statistically efficient, even though the number of parameters diverges. Our simulation study shows that the developed asymptotic results provide a good approximation in finite samples, and two real-data examples demonstrate their usefulness in the areas of educational testing and political science.
The current results can be readily extended to matrix completion problems with quantized measurements that have a similar natural exponential family form. Admittedly, the model considered here may be too simple for complex applications, for example, certain collaborative filtering problems in which the rank of the underlying matrix $M$ may be higher than considered here and the underlying latent factors may be multidimensional. The extension of the current results to more flexible models is left for future investigation. We believe the current results, as the first inference results for 1-bit matrix completion, will shed light on statistical inference for more general matrix completion problems.
{ "timestamp": "2021-05-06T02:04:57", "yymm": "2105", "arxiv_id": "2105.01769", "language": "en", "url": "https://arxiv.org/abs/2105.01769" }
\section{Introduction}\label{sec:Introduction} Ever since the discovery of neutrinos, it has been a great challenge to determine the electromagnetic (EM) properties of neutrinos \cite{Pauli30,Reines57,LeeTD63}, a challenge that continued into the gauge theory era \cite{Clark73,Kim74,Kim78,Okun86,Barbieri87}. To bring the EM properties of neutrinos to an observable level, particles beyond the standard model (BSM) had to be introduced \cite{Kim76,Yanagida88,Giunti15,KimMoscow19}. Two relevant EM properties of neutrinos are the magnetic moment (MM) and the charge radius (CR). The MM, coupling to the EM field strength, is gauge invariant. On the other hand, the CR, coupling to the EM field itself, is not gauge invariant. Gauge invariance is needed to maintain renormalizability. Therefore, if the probing wavelength in the effective theory is much larger than the cutoff length scale in the renormalization, as in the Fermi weak interaction much above the electroweak length scale $10^{-16\,}$cm, the CR can also be considered a useful physical parameter. The cutoff energy scale for the CR is the mass of a heavy BSM particle whose exchange produces the relevant charge radius. In this paper, we take this viewpoint in considering the electromagnetic properties of neutrinos. The published experimental limit on the MM of $\nu_e$ in units of the electron Bohr magneton is \cite{BdNuMagMom17} $ |f|\lesssim 2.8\times 10^{-11}$, and the limit on the squared CR $\tilde{r}^2$ is \cite{BdNuChargeR10} $ \tilde{r}^2= [-2.1, +3.3] \times 10^{-32}{\rm cm^2} $. The standard model (SM) prediction \cite{Fujikawa80} for the neutrino MM is much smaller than the upper limit presented above, by a factor of O($10^{-8}$). Note that the millicharge limit of the neutron is O($10^{-21}\,e$) \cite{ChargeNeutron88}, which however cannot be directly used as a limit on the millicharges of neutrinos. Recently, the XENON group considered the possibility of MMs of solar neutrinos as the source of the excess events in their data, for an exposure of 0.65 tonne-year with an extremely low background rate of $76\pm 2(\rm stat.)\,$(Events)/${\rm (t\cdot y\cdot keV)}$ \cite{Xenon1T20}. The plot of these excess events starts around an electron recoil energy of 2--3\,keV and ends around 30\,keV, as shown in Fig. \ref{fig:FittoEvents}\,(a). Out of the fitted 42,179.4 events, the estimated SM background from the solar neutrinos is 220.8 events \cite{Xenon1T20}, only 0.52\%. For estimating non-vanishing electromagnetic properties of solar neutrinos, therefore, one may neglect the SM background and associate the bulk of the 42,179.4 events with the electromagnetic properties of neutrinos in the scattering process. In this way, Ref. \cite{Xenon1T20} obtained the bound on the neutrino MM, $ [1.4,2.9]\times 10^{-11}$ times the electron Bohr magneton. However, the cross section assumed by the XENON group has a reciprocal dependence on the recoil electron energy. This reciprocal dependence is obtained by assuming two-body scattering \cite{Vogel89}, $\nu_e+e$, which may not be the correct method. The reason is that, in kicking out an electron from a heavy atom, the atomic nucleus can carry away some momentum, so the momentum conservation used in $\nu_e+e_{\rm in\,atom}\to \nu_e'+e'$ is not exact. In Sec. \ref{sec:XenonData}, we express the XENON1T unit ``${\rm Events/t\cdot y\cdot keV}$'' in terms of the cross section unit ``$\,\mathrm{MeV}^{-3}$'' to compare with the theoretical prediction. In Sec. \ref{sec:Cross}, we discuss the possibility of including electromagnetic properties of neutrinos in the scattering process.
Here, the three-body phase space with a non-relativistic atom is discussed. In Subsec. \ref{subsec:Mili}, we briefly comment on the possibility of neutrino millicharges. In Sec. \ref{sec:Fitting}, we obtain the CR and millicharge bounds for the electron-type neutrino from the XENON1T data. Section \ref{sec:Conclusion} is a brief summary of the EM properties of neutrinos. In Appendix \ref{AppA}, we estimate the parameters used in the calculation. \section{Event Rate and Cross Section in the XENON1T Data}\label{sec:XenonData} \begin{figure}[!h] \includegraphics[height=0.27\textwidth]{fig1Xe1TaC.pdf} \hskip 0.5cm \includegraphics[height=0.27\textwidth]{fig1Conversion.pdf} \\~~ \\ ~\hskip 1cm(a)\hskip 9cm (b) \caption{(a) Cross section vs. the data curve for $\mu_\nu=7\times 10^{-11}\mu_{\rm B}$ reproducing the RHS panel of Fig. 1 of \cite{Xenon1T20}, and (b) a conversion of the event rate to the cross section.} \label{fig:FittoEvents} \end{figure} The XENON group presents data in units of ${\rm Events/t\cdot y\cdot keV}$. To convert this to a cross section, we use the right-hand side (RHS) panel of Fig. 1 of \cite{Xenon1T20} and follow their analysis using the neutrino magnetic moment $\mu_\nu=7\times 10^{-11}\mu_{\rm B}$ and the solar $pp$ neutrino flux of \cite{Bahcall04}. They used the inverse-$T_e$ rule presented in \cite{Vogel89}. In our paper, the $pp$ neutrino energy flux of \cite{Bahcall04} in the region $E_\nu\ge 4\,\,\mathrm{MeV}$ is fitted by $F^{\rm solar}_{\rm pp}(E)$. The analytic form of the $pp$-flux function is presented in \cite{Vitagliano17}, \begin{equation} F^{\rm solar}_{pp}(E) = P_{0} E^{2} (Q + m_{e} - E) \sqrt{(Q + m_{e} - E)^{2} - m_{e}^{2}}, \end{equation} where $Q \approx 423.41\,\textrm{keV}$ is the endpoint energy of the neutrino, $m_{e} \approx 0.511\,\textrm{MeV}$ is the electron mass, and $P_{0} = 188.143\,\textrm{MeV}^{-5}$ is the normalization constant. From this form, some averages of energy functions are \begin{equation} \begin{split} \langle E^{-2} \rangle =&~ 30.0256\,\textrm{MeV}^{-2}, \\ \langle E^{-1} \rangle =&~ 4.46821\,\textrm{MeV}^{-1}, \\ \langle E^{1} \rangle =&~ 0.267946\,\textrm{MeV}^{1}, \\ \langle E^{2} \rangle =&~ 0.0794926\,\textrm{MeV}^{2}, \\ \langle E^{3} \rangle =&~ 0.0251769\,\textrm{MeV}^{3}, \\ \langle \ln(E/Q_{\rm min}) \rangle =&~ -0.528546. \end{split} \end{equation} Figure \ref{fig:FittoEvents}\,(a) is the overlap of the cross section and the RHS panel data of Fig. 1 of Ref. \cite{Xenon1T20}. Here, we used $Q_{\rm min}$ as the minimum of $|{\bf p}_e'|^2/(2m_e)$. Figure \ref{fig:FittoEvents}\,(b) is the cross section $d\sigma/dT_e$ in MeV$^{-3}$ units, obtained from the XENON1T data using {\it the inverse-$T_e$ rule}. Then, our calculation of $d\sigma/dT_e$ resulting from possible electromagnetic properties of neutrinos, {\it not using the inverse-$T_e$ rule but using our formula}, will be compared with Fig. \ref{fig:FittoEvents}\,(b).
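The energy moments listed above can be verified by direct numerical integration of the $pp$ spectrum. The short Python check below is our own cross-validation of the quoted numbers, not part of the original analysis; the $\langle \ln(E/Q_{\rm min})\rangle$ entry is omitted since it additionally depends on the choice of $Q_{\rm min}$.
\begin{verbatim}
import math
from scipy.integrate import quad

Q, me, P0 = 0.42341, 0.511, 188.143   # MeV, MeV, MeV^-5

def flux(E):
    w = Q + me - E                    # spectral factor of the pp spectrum
    return P0 * E**2 * w * math.sqrt(max(w**2 - me**2, 0.0))

norm_, _ = quad(flux, 0.0, Q)         # normalization, close to 1
for k in (-2, -1, 1, 2, 3):
    mk, _ = quad(lambda E: E**k * flux(E), 0.0, Q)
    print(f"<E^{k}> = {mk / norm_:.6f}")   # e.g. <E^1> ~ 0.267946 MeV
\end{verbatim}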
\section{Cross Sections with Electromagnetic Form Factors of Neutrino}\label{sec:Cross} The photon vertex does not change the chirality. For the mass and the MM, therefore, a chirality change must take place, so the MM can arise by attaching a photon vertex to the mass-generating diagrams. For the CR, it is even simpler, because the CR has the same chiral property as the EM coupling itself. For the neutrino, the gauge coupling is expressed through the form factor $F^\nu_{1}(q^2)$ with $F^\nu_{1}(0)=0$. The CR is defined by the first term in the expansion of $F^\nu_{1}(q^2)$ in terms of the momentum transfer $q^2$. As noted in the beginning, we use the charge radius for the scale $q^2\ll$(cutoff scale)$^2$. In addition, if an (almost) massless extra photon (ex-photon) is present in the BSM sector, there is a possibility that neutrinos carry millicharges \cite{Holdom86,ParkJC07}. The chiral property due to millicharges is the same as that due to the charge radius. The vector couplings of the charge radii and millicharges of neutrinos can mix with the $G_F$-order SM couplings; however, the interference between these vector and axial-vector interactions and the SM amplitude does not play an important role here, because the SM contribution to the XENON1T data from solar neutrinos is tiny. \subsection{SM cross section and beyond}\label{subsec:SM} The effective interaction in the SM for electron-type neutrino scattering on an electron is given by \cite{tHooft71,KimRMP81}, \begin{eqnarray} \frac{G_F}{\sqrt2}\bar{\nu}_e\gamma^\mu(1+\gamma_5)\nu_e\, \bar{e}\gamma_\mu(g_V+g_A\gamma_5)e.\label{eq:SMcoupling} \end{eqnarray} When we consider neutrino scattering that kicks an electron out of an atom with atomic number $Z$, the following six-fermion interaction with parameter $M_{\rm eff}$ can be considered, \dis{ \frac{1}{M_{\rm eff}^3}\cdot\frac{G_F}{\sqrt2}\bar{\nu}_e\gamma^\mu(1+\gamma_5)\nu_e\, \bar{e}\gamma_\mu(g_V+g_A\gamma_5)e\,\overline{A} A,\label{eq:Effatom} } where $A$ is treated as a quantum field with $Z-1$ atomic electrons.\footnote{$A$ may be considered as a boson, which gives the same result.} A strategy to estimate $M_{\rm eff}$ for the four-fermion interaction is given in Appendix \ref{AppA}. We interpret $M_{\rm eff}$ as representing the electromagnetic process occurring in the atomic orbits. For the four-fermion interaction, this amounts to a definition of the atomic process; for the six-fermion interaction of (\ref{eq:Effatom}), we use the same $M_{\rm eff}$ determined in Appendix \ref{AppA}. In the SM, we have \dis{ g_V=\frac12+2\sin^2\theta_W,~g_A=\frac12, } where $\sin^2\theta_W\simeq 0.231$ \cite{sin2LHC18,KimRMP81} is the weak mixing angle, and $e^2=g^2\sin^2\theta_W$. The EM couplings of neutrinos are \dis{ \overline{\nu}_\beta\gamma^\mu e F_{1\,\alpha\beta}^\nu \nu_\alpha A^{\rm em}_\mu + \overline{\nu}_\beta\sigma^{\mu\nu} \frac{ef_{\alpha\beta}}{2M}\nu_\alpha (q_\mu A^{\rm em}_{\nu}-q_\nu A^{\rm em}_{\mu})\label{eq:EMcoupling} } where $F_{1\,\alpha\beta}$ and $\frac{f_{\alpha\beta}}{2M}$ are matrices between the weak eigenstates of neutrinos, $\nu_\beta(k_\nu')$ and $\nu_\alpha(k_\nu)$, and $f_{\alpha\beta}$ is the MM of the neutrino in units of the Bohr magneton of the mass-$M$ particle. For $\alpha=\beta$, Eqs. (\ref{eq:SMcoupling}) and (\ref{eq:EMcoupling}) combine; in particular, for the electron-type neutrino we have \dis{ \frac{G_F}{\sqrt2}\bar{\nu}_e\gamma^\mu(1+\gamma_5)\nu_e\, \bar{e}\left(\left[g_V+ \frac{F_1^{\nu_e}}{\sqrt{2}G_F q^2}\right]\gamma_\mu+g_A\gamma_\mu\gamma_5\right)e.\label{eq:Combine} } The SM neutrino $\nu_\alpha$ is a two-component spinor. Therefore, in the cross section calculation the spin sum for a Majorana neutrino is $\frac12$ of that for a Dirac neutrino, which will be taken into account.
For calculating the scattering cross section on electrons bound in the orbits of a nucleus at rest, the wave function of the electrons in the electron cloud around the nucleus must be considered. Let the uncertainty of the outgoing electron position be $\Delta r$. Then, the uncertainty of the momentum is $\Delta p\approx 1/\Delta r$. We consider recoil energies greater than 2.5 keV (see Fig. \ref{fig:FittoEvents}\,(a)), corresponding to $\Delta r <3.9\times 10^{-9}$cm with $\Delta p=\sqrt{2m_e T_e}$ for the electron recoil energy $T_e$. But, in the neutrino scattering with the incoming solar neutrino energy less than $ 0.45\,\,\mathrm{MeV}$ \cite{Bahcall04}, the uncertainty of the neutrino position is less than $4.38\times 10^{-11}$cm, which is much shorter than the orbit radius of the $n$-th level of Xenon, $10^{-10}n$\,cm. So, in the neutrino scattering we can safely use the two-body scattering formula as described in Appendix \ref{AppA}. The $3\to n$ transition rate is \dis{ Rate=(2\pi)^{4-3n}\frac{d^3{\bf p}_1'\cdots d^3{\bf p}_n' }{2E_12E_2 2E_32E_1'2E_2'\cdots 2E_n'} \frac{\delta^4(p_1+p_2+p_3-p_1'\cdots-p_n')}{V^2} |\langle p_1'\cdots p_n'|T|p_1p_2p_3\rangle |^2\label{eq:3nRate} } from which we calculate the three-body scattering cross section. A flux of incoming particles 1 (the neutrino in our case) scatters on two particles, 2 (the electron in our case) and 3 (the atom in our case). Thus, the cross section consists of two parts, one with a flux factor $1/|{\bf v}_1-{\bf v}_2|$ and the other with $1/|{\bf v}_1-{\bf v}_3|$. Taking ${\bf v}_3=0$, the neutrino-electron scattering cross section is \dis{ d\sigma=\frac{(2\pi)^{4-9}}{|{\bf v}_1-{\bf v}_2|}\frac{d^3{\bf p}_1'd^3{\bf p}_2' d^3{\bf p}_3' }{2E_12E_2 2E_32E_1'2E_2'2E_3'} \frac{\delta^4(\sum p_i-\sum p_f')}{V} |\langle p_1'p_2' p_3'|T|p_1p_2p_3\rangle |^2\label{eq:3scattT} } where we can take $V$ as the atomic volume through which the flux of neutrinos sweeps. For $1/V$, the sum $B$ in Eq. (\ref{eq:Sumsn}) is used. The $T$-matrix squared in Eq. (\ref{eq:3scattT}) is summed over the spins, \dis{ \frac{1}{(2s_e+1)(2s_\nu+1)}\sum_{s_\nu,s_e}\sum_{s'_\nu,s'_e}|{\rm Amp}|^2= (|T|^2_{\rm Q}). } If we consider the MM interaction also, then we will have the factor $(|T|^2_{\rm MM} +|T|^2_{\rm Q})$ instead of $(|T|^2_{\rm Q})$. For the volume $V$, we use the Xenon volume for each principal quantum number $n$ separately. The $n$-th shell volume is $V=(4\pi /3)(na_Z)^3$, where $a_Z$ is the K-shell radius of the Xenon atom. Then, for the $\nu_\alpha(k_\nu)+e(p_e) +A(P_A)\to \nu'_\beta(k_\nu')+e'(p_e') +A'(P_A') $ scattering, Eq. (\ref{eq:3scattT}) becomes \dis{ d\sigma=\frac{3Z^3\alpha^3_{\rm em}}{2^{12}\pi^4}\,\frac{m_e^2|{\bf p}_e'|^2d|{\bf p}_e'| E_\nu' dE_\nu' }{(m_e+T_e)E_\nu M_A^2} \delta(E_\nu+m_e-\delta_B+M_A-E_\nu'-(m_e+T_e)-M_A) |\langle p_1'p_2' p_3'|T|p_1p_2p_3\rangle |^2\,x\label{eq:Master} } where $\delta_B$ is the binding energy, $T_e$ is the kinetic energy of the final electron $e'$, and $x$ is given in Appendix \ref{AppA} for the process in consideration.
The following scalar products will be useful, \dis{ & q^2=-2E_\nu E_\nu'(1-\cos\theta),~q=k_\nu-k_\nu',\\ &k_\nu\cdot k_\nu'=-k_\nu\cdot q=k_\nu'\cdot q=-\frac{q^2}{2},\\ &p_e\cdot p_e'=m_e^2-\frac{q^2}{2}~\textrm{assuming atom at rest},\\ &k_\nu'\cdot p_e=m_e E_\nu'=k_\nu\cdot p_e'=E_\nu E_e'-E_\nu |{\bf p}_e'|\cos\theta_e, \label{eq:kinvar} } where we used massless neutrinos, $\theta_e$ is the polar angle of the outgoing electron direction relative to the incoming neutrino direction,\footnote{The polar angle made by the outgoing neutrino direction relative to the incoming neutrino direction will be denoted as $\theta$.} and the non-relativistic approximation $E_e'\simeq m_e+\frac{|{\bf p}_e'|^2}{2m_e}=m_e+T_e$ is used. Firstly, let us derive the inverse-$T_e$ rule used in Sec. \ref{sec:XenonData}. Note the kinematic variables for $\nu_e(k_\nu)+e(p_e)\to \nu_e'(k'_\nu)+e'(p'_e)$, for (almost) massless neutrinos and in the limit $M_A=\infty$, \dis{ &E_\nu'=|{\bf k}_\nu'|=\sqrt{E_e'^2+E_\nu^2-m_e^2},\\ &2k_\nu\cdot k_\nu'=2E_\nu E_\nu'(1-\cos\theta), } where $\theta$ is the angle of the outgoing neutrino relative to the incoming neutrino. It is possible to express $\theta_e$ in terms of $E_e'$ \cite{Vogel89}, $\cos\theta_e=(E_\nu+m_e)E_\nu^{-1}T_e^{1/2}(T_e+2m_e)^{-1/2}$, such that $|{\bf p}'_\nu|\simeq E_\nu'$ becomes \dis{ E_\nu'\simeq \sqrt{E_\nu^2+2m_e T_e-2E_\nu \sqrt{2m_eT_e}(E_\nu+m_e)E_\nu^{-1}T_e^{1/2}(T_e+2m_e)^{-1/2}}.\label{eq:TeDep} } Now, performing the $|{\bf k}_\nu'|d|{\bf k}_\nu'|=E_\nu'dE_\nu'=-E_\nu' dT_e$ integration with the energy conservation delta function $\delta(f(T_e))$, with the approximation $T_e\ll m_e$ and $E_\nu'\simeq E_\nu$ in the final formula, we obtain \dis{ \frac{d\sigma}{dT_e}\simeq \frac{1}{2^5\pi m_e^2 T_e }. \label{eq:StiffPh} } The reason for the $1/T_e$ dependence is the specific powers of $T_e$ in Eq. (\ref{eq:TeDep}); if this relation is slightly violated, the $1/T_e$ dependence no longer results. Thus, we anticipate that the inverse-$T_e$ rule is not applicable once the momentum carried by the scattered Xenon atom is taken into account; a numerical check of the free-electron relation is given below.
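The free-electron relation can be verified numerically: with $\cos\theta_e$ as above, momentum balance must return $E_\nu' = E_\nu - T_e$ exactly. The short check below (our own, illustrative) confirms this over the kinematically allowed recoil range.
\begin{verbatim}
import numpy as np

me = 0.511                                  # electron mass in MeV
for E in (0.10, 0.30, 0.42):                # incoming neutrino energies (MeV)
    Tmax = 2 * E**2 / (me + 2 * E)          # endpoint of the recoil spectrum
    for T in np.linspace(1e-4, 0.99 * Tmax, 5):
        pe = np.sqrt(T * (T + 2 * me))      # outgoing electron momentum
        cth = (E + me) / E * np.sqrt(T / (T + 2 * me))   # cos(theta_e)
        Ep = np.sqrt(E**2 + pe**2 - 2 * E * pe * cth)    # |k'| from momentum
        assert abs(Ep - (E - T)) < 1e-12    # energy conservation holds
print("free-electron kinematics check passed")
\end{verbatim}
Once the recoiling atom carries a nonzero share of the momentum, this one-to-one relation between $T_e$ and the scattering angle is lost, which is precisely why the inverse-$T_e$ rule fails for bound electrons.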
Let us now proceed to the three-body scattering given in Eq. (\ref{eq:3scattT}). The four-momentum conservation is implemented by the delta functions \dis{ \delta^{(3)}({\bf k}_\nu-{\bf k}'_\nu-{\bf p}'_e-{\bf P}'_A) \delta(E_\nu-E_\nu'-\Delta-\frac{|{\bf p}_e'|^2}{2m_e}-\frac{|{\bf P}_A'|^2}{2M_A} ) } where ${\bf p}_e'$ and ${\bf P}_A'$ are the momenta of the outgoing electron $e'$ and the outgoing atom $A'$ with masses $m_e$ and $M_A$, respectively, and $E_\nu-E_\nu'= \Delta=T_e+\delta_B$ with the binding energy $\delta_B$ of the electron in the atom. The energy transfer to the atom is neglected in the energy-conservation $\delta$ function, since it is so small. The averaged spin sum of $|T|^2$ from Eq. (\ref{eq:Effatom}) is \dis{ &\frac{G_F^2}{2M_{\rm eff}^{6}}{\rm Tr\,}\gamma^0(1+\gamma_5)\gamma^\rho\gamma^0k\hskip -0.15cm\slash' \gamma^\alpha(1+\gamma_5)k\hskip -0.15cm\slash\cdot {\rm Tr\,} \gamma^0(g_V^*+g_A^*\gamma_5)\gamma_\rho\gamma_0(p\hskip -0.15cm\slash'+m_e)\gamma^0\gamma_\alpha(g_V+g_A\gamma_5)(p\hskip -0.15cm\slash+m_e)\cdot{\rm Tr\,}(P\hskip -0.22cm\slash'+M_A)(P\hskip -0.22cm\slash+M_A)\\ &~=\frac{G_F^2}{2M_{\rm eff}^{6}}{\rm Tr\,}2(1-\gamma_5)\gamma^0\gamma^\rho\gamma^0k\hskip -0.15cm\slash'\gamma^\alpha k\hskip -0.15cm\slash \cdot {\rm Tr\,}\gamma^0\gamma_\rho\gamma^0(p\hskip -0.15cm\slash'+m_e)\gamma_\alpha\\ &\qquad\qquad\qquad\big[(|g_V|^2+|g_A|^2)p\hskip -0.15cm\slash- (g_Vg_A^*+g_V^*g_A)p\hskip -0.15cm\slash\gamma_5 +(|g_V|^2-|g_A|^2)m_e-(g_Vg_A^*-g_V^*g_A)m_e\gamma_5\big]\cdot(8M_A^2)\\ &~=\frac{G_F^2}{2M_{\rm eff}^{6}}2\cdot 4(k'^{\rho}k^\alpha -k\cdot k' g^{\rho\alpha}+ k'^{\alpha}k^\rho -i\varepsilon^{\rho\mu\alpha\nu}k'_\mu k_\nu) \times 4\big((|g_V|^2+|g_A|^2)(p'_\rho p_\alpha-p\cdot p'g_{\rho\alpha}+p'_\alpha p_\rho)\\ &\qquad~+(|g_V|^2-|g_A|^2)m_e^2g_{\rho\alpha}-i(g_Vg_A^*+g_V^*g_A)\varepsilon_{\rho\kappa\alpha\eta}p'^\kappa p^\eta \big)\cdot(8M_A^2)\\ &~=\frac{G_F^2}{2M_{\rm eff}^{6}}2\cdot 4\Big( 4(|g_V|^2+|g_A|^2)(2k\cdot p k'\cdot p'+2k\cdot p' k'\cdot p)+4\big[(|g_V|^2-|g_A|^2)m_e^2(-2 k\cdot k')\\ &\qquad-2{\rm Re\,}(g_V g_A^*) (k\cdot p k'\cdot p'-k\cdot p' k'\cdot p)\big] \Big)\cdot(8M_A^2) \\ &~=\frac{2^8G_F^2M_A^2m_e^2 E_\nu^2}{M_{\rm eff}^{6}}\Big( (|g_V|^2+|g_A|^2)\big[(1-\frac{|{\bf p}_e'|^2}{2m_eE_\nu}) +(\frac{E_e'}{m_e} -\frac{|{\bf p}_e'|}{m_e}\cos\theta_e)^2 \big]+ (|g_V|^2-|g_A|^2)m_e^2(-q^2/2E_\nu^2)\\ &\qquad-{\rm Re\,}(g_V g_A^*)\big[(1-\frac{|{\bf p}_e'|^2}{2m_eE_\nu}) +(\frac{E_e'}{m_e} -\frac{|{\bf p}_e'|}{m_e}\cos\theta_e)^2 \big]\Big) \label{eq:Trace} } where we will use $M_{\rm eff}^6$ given in Eq. (\ref{eq:M6eff}) of Appendix \ref{AppA} and $\sum_s u_{e_{\rm atom}}(p)\bar{u}_{e_{\rm atom}}(p)\simeq p\hskip -0.15cm\slash+m_e$. Here, the three-momentum conservation is treated exactly. In contrast, the usual two-body treatment requires the probability of finding a prospective bound electron in the atom and a subsequent integration over the probability distribution of the electron cloud, which would be very complicated; such a calculation is not found in the literature.
\subsection{Magnetic moment contribution $|T|^2_{\rm MM}$}\label{subsec:MM} If neutrinos have a magnetic moment, we consider the second term in Eq. (\ref{eq:EMcoupling}). In this case, summing over the spins, viz. Eq. (\ref{eq:Combine}), we have \dis{ |T|^2_{\rm MM}&=\frac14\left( \frac{e^2 f_M}{2MM_{\rm eff}^3}\right)^2\frac{q_\mu q_\alpha}{q^4}\,{\rm Tr}\,\sigma^{\mu\nu}(\frac{k\hskip -0.15cm\slash_i}{2})\sigma^{\alpha\beta} (\frac{k\hskip -0.15cm\slash_f}{2})\cdot {\rm Tr}\,\gamma_\nu(p\hskip -0.15cm\slash_i+m_e)\gamma_\beta (p\hskip -0.15cm\slash_f +m_e)\cdot{\rm Tr\,}(P\hskip -0.22cm\slash'+M_A)(P\hskip -0.22cm\slash+M_A)\\ &= \frac{1}{64}\left( \frac{e^2 f_M}{2MM_{\rm eff}^3}\right)^2\frac{1}{q^4}{\rm Tr} \,(q\hskip -0.15cm\slash\gamma^\nu-\gamma^\nu q\hskip -0.15cm\slash)k\hskip -0.15cm\slash_i(q\hskip -0.15cm\slash\gamma^x-\gamma^xq\hskip -0.15cm\slash) k\hskip -0.15cm\slash_f \cdot{\rm Tr}\gamma_\nu(p\hskip -0.15cm\slash_i+m_e)\gamma_\beta (p\hskip -0.15cm\slash_f +m_e)\cdot (8M_A^2)\\ &= \frac{1}{64}\left( \frac{e^2 f_M}{2MM_{\rm eff}^3}\right)^2\frac{1}{q^4}\,Q^{\nu x}L_{\nu x}\cdot (8M_A^2) } where we neglected the neutrino mass. We obtain\footnote{The trace of six gamma matrices, $ (1/4) {\rm Tr}\, \gamma^\rho a\hskip -0.15cm\slash b\hskip -0.15cm\slash\gamma^\sigma c\hskip -0.15cm\slash d\hskip -0.15cm\slash= g^{\rho\sigma} (a\cdot b \, c\cdot d- a\cdot c \, b\cdot d+a\cdot d \, b\cdot c) +a\cdot c(b^\rho d^\sigma+b^\sigma d^\rho) +b\cdot d (a^\rho c^\sigma+a^\sigma c^\rho) \\ -a\cdot d(b^\rho c^\sigma+b^\sigma c^\rho) -b\cdot c(a^\rho d^\sigma+a^\sigma d^\rho) +a\cdot b(-c^\rho d^\sigma+c^\sigma d^\rho) +c\cdot d(a^\rho b^\sigma-a^\sigma b^\rho) $, is used.} \dis{ Q^{\nu x} &\equiv {\rm Tr} (q\hskip -0.15cm\slash\gamma^\nu-\gamma^\nu q\hskip -0.15cm\slash)(k\hskip -0.15cm\slash_i+m_\nu)(q\hskip -0.15cm\slash\gamma^x-\gamma^xq\hskip -0.15cm\slash) (k\hskip -0.15cm\slash_f+m_\delta) \\ &=4g^{\nu x} (2 q\cdot k_i\, q\cdot k_f)+ 4k_i\cdot k_f ( q^\nu q^x-q^2 \,g^{\nu x}) +4q^2(k_i^\nu k_f^x+k_i^x k_f^\nu ) \\ &\quad -4q^\nu( k_f^x\,q\cdot k_i+k_i^x\, q\cdot k_f) -4 q^x k_f^\nu\,q\cdot k_i +4q^x k_i^\nu\, q\cdot k_f , \\ L_{\nu x} &\equiv 4(p^i_\nu p^f_x+p^f_\nu p^i_x -p^i\cdot p^f g_{\nu x}+m_e^2 g_{\nu x}),\label{eq:Qnux} } from which we obtain, neglecting O($m_e^2/E_\nu^2$) and using Eq. (\ref{eq:kinvar}), \dis{ \frac{1}{q^4}Q^{\nu x} L_{\nu x} =&\frac{1}{q^4}\Big( 32k_i\cdot k_f\,q\cdot p_i\, q\cdot p_f +16 q^2(-k_i\cdot k_f\, p_i\cdot p_f+ 2k_i\cdot p_i\, k_f\cdot p_f +2k_i\cdot p_f \, k_f\cdot p_i )\\ &-32q\cdot k_i (q\cdot p_i\, k_f\cdot p_f + q\cdot p_f\, k_f\cdot p_i)-32q\cdot k_f (q\cdot p_i\, k_i\cdot p_f + q\cdot p_f\, k_i\cdot p_i)\\ &+16 m_e^2(4 q\cdot k_i \,q\cdot k_f -q^2 k_i\cdot k_f ) \Big) \\ =&\frac{1}{q^4}\Big(-4q^6 +16 q^2( k_i\cdot p_i+ k_f\cdot p_i)(k_i\cdot p_f+k_f\cdot p_f)\Big)\\ & =4Q^2-\frac{16}{Q^2}(k_i\cdot p_i+k_f\cdot p_f)(k_i\cdot p_f+k_f\cdot p_i)=8E_\nu E_\nu'(1-\cos\theta)-\frac{8m_e^2}{ (1-\cos\theta)} \label{eq:MMterms} } where $q^2=-Q^2$, and $|T|^2_{\rm MM}$ is \dis{ |T|^2_{\rm MM}=\frac{ \pi^2}{2}\left( \frac{ f_{\alpha\beta}}{m_e M_{\rm eff}^3}\right)^2\alpha^2_{\rm em} \Big(E_\nu E_\nu'(1-\cos\theta) -\frac{m_e^2}{ (1-\cos\theta)}\Big) (8M_A^2).\label{eq:MMspins} }
Using Eq. (\ref{eq:Master}), we obtain the cross section for an incident neutrino $\nu_\alpha$ going to $\nu_\beta$ with a MM (including the transition MM), $f_{\alpha\beta}$ (in units of the electron Bohr magneton),\footnote{We use the incoherent cross section for $\Delta p\gg 2.5\,\,\mathrm{eV}$.} \dis{ \frac{d\sigma}{d T_e d\cos\theta} &=\frac{3Z^3\alpha_{\rm em}^3m_e^3E_\nu' |{\bf p}_e'|}{2^{10}\pi^4 E_\nu (m_e+T_e)} 4\pi^2\left( \frac{ f_{\alpha\beta}}{m_eM_{\rm eff}^3 }\right)^2\alpha^2_{\rm em} \Big(E_\nu E_\nu'(1-\cos\theta) -\frac{m_e^2}{ (1-\cos\theta)}\Big)\\ &=\frac{3Z^3\alpha_{\rm em}^5m_e^3}{2^{8}\pi^2 M_{\rm eff}^6 }\frac{\sqrt{2T_e/m_e}}{ (1+T_e/m_e)^{5/2}} (1-\frac{\delta_B}{m_e}-\frac{T_e}{m_e})^2 \Big((1-\cos\theta) -\frac{m_e^2}{ E_\nu E_\nu'(1-\cos\theta)}\Big)\left( f_{\alpha\beta}\right)^2\\ &=0.883\times 10^{-5}\,\mathrm{MeV}^{-3}\frac{\sqrt{2T_e/m_e}}{ (1+T_e/m_e)^{5/2}}(1-\frac{\delta_B}{m_e}-\frac{T_e}{m_e})^2 \Big((1-\cos\theta) -\frac{m_e^2}{ E_\nu E_\nu'(1-\cos\theta)}\Big)\left( f_{\alpha\beta}\right)^2\label{eq:Sc3MM} } leading to \dis{ \frac{d\sigma}{d T_e}=0.883\times 10^{-5}\,\mathrm{MeV}^{-3}\frac{\sqrt{2T_e/m_e}}{ (1+T_e/m_e)^{5/2}}(1-\frac{\delta_B}{m_e}-\frac{T_e}{m_e})^2 \Big(2 -\frac{2m_e^2}{ E_\nu E_\nu'}\ln\frac{1}{\delta_{\rm inf}}\Big)\left( f_{\alpha\beta}\right)^2 \label{eq:ScMM} } where $\theta_{\rm min}$ near 0 is used, {\it i.e.~} $\cos\theta|_{\rm max}=\frac12-\frac{\sqrt{2m_e T_e}}{\langle E_\nu'\rangle}$, which is integrated over $T_e$ and the energy flux of solar $pp$ neutrinos \cite{Bahcall04} to give \dis{ \textrm{For MM}:~\frac{d\sigma}{dT_e } &\simeq 1.071\times 10^{-5} \,\mathrm{MeV}^{-3} \left( 2 -8.288 \right)f_{\alpha\beta}^2.\label{eq:sigMMnum} } The infrared divergence in the last term of Eq. (\ref{eq:Sc3MM}) is cured by soft photon emission processes \cite{Bloch37}, and Eq. (\ref{eq:sigMMnum}) is used for $T_e\ge 2.5\,\mathrm{keV}$. Equation (\ref{eq:sigMMnum}) gives $-6.73\times 10^{-5} \,\mathrm{MeV}^{-3}f_{\alpha\beta}^2$. Comparing with the almost flat central region ($T_e=15-20\,\,\mathrm{keV}$) of Fig. \ref{fig:FittoEvents}\,(b), we obtain $|f_{\alpha\beta}|\le 0.86\times 10^{-7}$, which is a much weaker bound than the $2.8\times 10^{-11}$ obtained by the Borexino collaboration \cite{BdNuMagMom17}. The huge difference between our estimate and the previous ones appears to come from the overestimation of cross sections in the previous analyses, which use the inverse-$T_e$ rule and coherent scattering. \subsection{Contributions from vector and axial-vector charges $|T|^2_Q$} The electromagnetic $F_1^\nu$ form factor of a neutrino is identified from \cite{Giunti15} \dis{ &\Lambda_\mu (q) =f_Q(q^2) \gamma_\mu -f_M(q^2) \,i\sigma_{\mu\nu}q^\nu +f_E(q^2)\sigma_{\mu\nu}q^\nu \gamma_5 +f_A(q^2)(q^2\gamma_\mu-q_\mu q\hskip -0.15cm\slash)\\ &F_1^\nu (q^2)=f_Q(q^2) =F_1^\nu (0) +q^2\frac{dF_1^\nu (q^2)}{dq^2}\Big|_{q^2=0}+\cdots. } The charge radius is the $q^2$-dependent part, \dis{ \tilde{r}^2\equiv \langle r^2\rangle=6 \frac{dF_1^\nu (q^2)}{dq^2}\Big|_{q^2=0}.\label{eq:ChRad} } The elastic cross section includes the interference of the SM weak interaction and the vector and axial-vector couplings of neutrinos, with \dis{G_V=\frac12+2\sin^2\theta_W+e^2\left(\frac{{\tilde r}^2/6}{\sqrt2 G_F}-\frac{\varepsilon}{\sqrt2 G_F Q^2}\right) , \textrm{ and} ~ g_A=\frac12,\label{eq:onGV} } where $Q^2=-q^2>0$ for the $t$-channel scattering. However, we consider the 3-body scattering to take into account the momentum of the recoiling atom.
The parameters describing the atomic electromagnetic effects are given in Appendix \ref{AppA} in terms of $B$ and $C$. Inserting (\ref{eq:Trace}) into (\ref{eq:Master}), we obtain \dis{ \frac{d\sigma}{dT_e} &=\frac{3Z^3\alpha_{\rm em}^3G_F^2 m_e^4 }{16\pi^4}\cdot \frac{ \sqrt{2m_eT_e} E_\nu(E_\nu-\delta_B-T_e) }{M_{\rm eff}^{6} (1+T_e/m_e)} \approx 1.22\times 10^{-23}\cdot\frac{ \sqrt{2m_eT_e} E_\nu(E_\nu-\delta_B-T_e) }{ (1+T_e/m_e)}\cdot\frac{1}{M_{\rm eff}^{6}}\\ &=6.33\times 10^{-6}\,\mathrm{MeV}^{-3} \cdot\frac{ \big[\frac{E_\nu}{m_e}(\frac{E_\nu}{m_e}-\frac{\delta_B+T_e}{m_e})\big] }{\big(1+\frac{T_e}{m_e}\big)^4} \sqrt{\frac{2T_e}{m_e}} \label{eq:SigDit} } with $B\simeq 1.144\times 10^4$, where $|T|^2_Q$, with the SM couplings \cite{tHooft71}, is \dis{ |T|^2_Q=\frac{8G_F^2m_e^2}{ M_{\rm eff}^2} \,\left\{ (G_V+g_A)^2 E_\nu^2 +(G_V-g_A)^2 E_\nu E_\nu' +m_e E_e' (g_A^2-G_V^2)\right\},\label{eq:TQ2} } and we used $1/M_{\rm eff}^6$ given in Eq. (\ref{eq:M6eff}). \subsection{Millicharge}\label{subsec:Mili} \begin{figure}[!h] \includegraphics[height=0.25\textwidth]{fig2ExPhoton.pdf} \caption{The neutrino coupling to the photon through the kinetic mixing with the ex-photon $\gamma_{\rm ex}$.} \label{fig:ExPho} \end{figure} There can be another tiny EM property of neutrinos if there exists another (almost) massless U(1) gauge boson. If neutrinos couple to this ex-photon, $\gamma_{\rm ex}$, neutrinos can carry millicharges via the kinetic mixing \cite{Holdom86}, for which the Feynman diagram is shown in Fig. \ref{fig:ExPho}. Since the SM neutrino is in an SM doublet, the left-handed electron $e_L$ must carry the same millicharge under the ex-photon gauge group. Then, the magnitude of the $\nu_\mu$ coupling to the ex-photon is limited by the $\nu_\mu+e$ scattering data (through the neutral current coupling to $Z_\mu$), \dis{ (\delta_{\rm ex} e)_{\rm from\,\nu\,vertex}\frac{1}{q^2} (\delta_{\rm ex}e)_{\rm from\,e\,vertex}\le \frac{1.167\times 10^{-5}}{\sqrt2 \,\textrm{GeV}^2} \varepsilon(e_L)=0.413\times 10^{-5}/\,\textrm{GeV}^2 } where $ \varepsilon(e_L)(=\varepsilon(\nu_L))$ is the neutral current coupling of $e_L$ to the $Z$ boson in the SM \cite{KimRMP81}, {\it i.e.~} $0.5$. Since there is no way to pinpoint $ \delta_{\rm ex}$, we set it to 1, transferring its uncertainty to the kinetic mixing value $\varepsilon=F_1^{\nu}(0)$. \section{Fitting}\label{sec:Fitting} The contributions from the MM and the CR of neutrinos add incoherently. For the MM contribution, we already obtained Eq. (\ref{eq:ScMM}). For the CR contribution, we should consider the interference with the weak amplitude in the SM, which is given in Eq. (\ref{eq:SigDit}). From Fig. \ref{fig:FittoEvents}\,(b), the cross section is less than $ 5\times 10^{-19} \,\,\mathrm{MeV}^{-3}$ for $T_e>17$\,keV.
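The size of the charge-radius shift of $G_V$ in Eq. (\ref{eq:onGV}) can be cross-checked numerically; the following minimal evaluation (our own check, not part of the original analysis) reproduces the coefficient $0.9265\times 10^{9}$ that multiplies $\tilde{r}^2\,\mathrm{MeV}^2$ in the expression for $\Gamma$ below.
\begin{verbatim}
import math

alpha_em = 1 / 137.036        # fine-structure constant, e^2 = 4*pi*alpha_em
GF = 1.1664e-11               # Fermi constant in MeV^-2

coef = 4 * math.pi * alpha_em / (6 * math.sqrt(2) * GF)
print(f"{coef:.4e}")          # ~9.265e+08; multiplies r~^2 expressed in MeV^-2
\end{verbatim}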
(\ref{eq:SigDit}) gives, with $\langle E\rangle =0.2668\,\mathrm{MeV}$ and $\langle E^2\rangle = 0.0788\,\mathrm{MeV}^2$ for solar $pp$ neutrinos, at ${\tilde r}^2=\varepsilon =0$, \dis{ \textrm{For CR}: ~ \simeq & 6.33\times 10^{-6} \,\mathrm{MeV}^{-3}\cdot \Big\{\big(0.3018-1.02\, T_{e,\rm MeV}\big)\Big[1.462+ \Gamma\Big]^2\\ &+\big(0.3018-1.02\, T_{e,\rm MeV} \big)\Big[0.462+ \Gamma\Big]^2\Big\}\frac{2.768\, T_{e,\rm MeV}^{1/2} }{\big(1+1.957\, T_{e,\rm MeV} \big)^4} ,\label{eq:sigQnum} } where $T_{e,\rm MeV}=T_e/\,\mathrm{MeV}$, which becomes at $T_e=17$\,keV \dis{ & 0.5703\times 10^{-6} \,\mathrm{MeV}^{-3}\cdot \Big\{\Big[1.462+ \Gamma\Big]^2 +\Big[0.462+ \Gamma\Big]^2\Big\} \to 1.341\times 10^{-6} \,\mathrm{MeV}^{-3}{\rm ~for~}\Gamma=0.\label{eq:Gammais0} } The ratio of $ 5\times 10^{-19} \,\,\mathrm{MeV}^{-3}$ and (\ref{eq:Gammais0}) is $\approx 0.790\times 10^{-13}$. For solar neutrino energies of order 400\,keV, $Q^2$ given in (\ref{eq:kinvar}) is of order $(0.3-0.4\,\,\mathrm{MeV})^2$. Taking $\langle Q^2\rangle\simeq 0.1\,\mathrm{MeV}^2 $, the charge radius limit from Eq. (\ref{eq:ChRad}) is $|\tilde{r}|\le 2.18\times 10^{-6}\,\mathrm{MeV}^{-1}$, or $4.30\times 10^{-17}\,$cm. Note that $\Gamma$ in Eq. (\ref{eq:sigQnum}) is \dis{ \Gamma &=4\pi\alpha_{\rm em}\left(\frac{{\tilde r}^2/6}{\sqrt2 G_F}-\frac{\varepsilon}{\sqrt2 G_F Q^2}\right)= 0.9265\times 10^{9}\left( \tilde{r}^2\,\mathrm{MeV}^2-6\varepsilon_{\rm MeV}\right)\\ &=2.387\times 10^{30}\left( \tilde{r}^2{\rm cm}^{-2}-2.329\times 10^{-21}\varepsilon_{\rm MeV}\right), } with $\varepsilon_{\rm MeV}=\varepsilon/Q_{\rm MeV}^2$ (where $Q_{\rm MeV}$ is $Q$ in units of MeV). \begin{figure}[!t] \includegraphics[height=0.35\textwidth]{fig3Data.pdf} \caption{The green bars are the $1\sigma_{\rm stat}$-level backgrounds. The systematic error is a factor 3 smaller than the statistical error \cite{Aprile20}. } \label{fig:Xenon1} \end{figure} If we allow millicharges, the $1\sigma$ error (taken as the $\pm 2$ events out of 76 in Fig. \ref{fig:FittoEvents}\,(a)) is shown as the yellow band in the $\tilde{r}^2-\varepsilon$ plane of Fig. \ref{fig:EMbounds}. There already exist bounds on the millicharges of light dark matter from stellar evolution data, viz. Fig. 12 of \cite{CMBeps1} and Fig. 1 of \cite{CMBeps2}, giving $\varepsilon<10^{-12}$ for dark matter masses below $10^4$\,eV; these bounds, however, cannot be applied to the SM neutrinos considered here. The difference is that this paper studies the (almost) massless SM neutrinos, which cannot constitute the dark matter of the Universe, and probes their EM properties using solar neutrinos only. \begin{figure}[!h] \includegraphics[height=0.44\textwidth]{fig4EMQ.pdf} \caption{The bounds on the charge radius (vertical bar) at $\varepsilon=0$ and the $1\sigma$ yellow band in the charge radius vs. millicharge plane, where $\varepsilon_{\rm MeV}=\varepsilon/Q_{\rm MeV}^2$ (where $Q_{\rm MeV}$ is $Q$ in units of MeV) depends on the effective momentum transfer in the process.}\label{fig:EMbounds} \end{figure} \section{Conclusion}\label{sec:Conclusion} We obtained bounds on the electromagnetic properties of neutrinos implied by the XENON1T data: a bound on the magnetic moment $|f_{\alpha\beta}|\le 0.86\times 10^{-7}$ (in units of the electron Bohr magneton), a bound on the charge radius $|\tilde{r}| < 4.30\times 10^{-17}\,{\rm cm}$, and a millicharge bound (Fig. \ref{fig:EMbounds}) valid if there exists a massless extra photon.
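As a quick arithmetic cross-check of these numbers (a sketch using only the flat-region bound $5\times 10^{-19}\,\mathrm{MeV}^{-3}$ quoted above and the conversion $1\,\mathrm{MeV}^{-1}=1.973\times 10^{-11}\,$cm), \dis{ |f_{\alpha\beta}|\le\sqrt{\frac{5\times 10^{-19}\,\mathrm{MeV}^{-3}}{6.73\times 10^{-5}\,\mathrm{MeV}^{-3}}}\simeq 0.86\times 10^{-7},\qquad |\tilde{r}|\le 2.18\times 10^{-6}\,\mathrm{MeV}^{-1}\times 1.973\times 10^{-11}\,{\rm cm}\cdot\mathrm{MeV} \simeq 4.30\times 10^{-17}\,{\rm cm}. }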
\begin{appendix} \section{Estimate of probability amplitude}\label{AppA} If an electron is kicked out of an atomic orbit, an electron in an outer orbit will fill the vacancy immediately. The transition occurs electromagnetically via the $E1$ or $M1$ transition. The $E1$ transition rate between the eigenstates $|k\rangle$ and $|n\rangle$ was given long ago \cite{Schiff55,Gottfried03}, \dis{ \frac{(2\pi)^2\alpha_{\rm em}}{3}\omega_{kn}^3|\langle k|{\bf r}|n\rangle|^2 , } where ${\bf r}$ is the position operator of the electron. We can estimate $\langle k|{\bf r}|n\rangle$ for the ground state ($k=n=1$) as $ a_B/Z$, with $a_B=0.529\times 10^{-8}\,$cm $=268.1\,$MeV$^{-1}$, and for the excited state $n$ as $ na_B/Z$. We use $a_Z=14.48\,\mathrm{keV}^{-1}$ for $Z=54$. Then, the rest of the matrix element involves only the spherical harmonics and gives selection rules between angular-momentum eigenfunctions. Here we consider kicking the electron out of the atom, which the selection rules always permit. So we take $\omega_{nn'}=\omega_{n\infty}=|E_{n}/\hbar|$. The transition rate to kick an electron out of the atom is then \cite{Gottfried03} \dis{ \frac{4\alpha_{\rm em}k^3}{3} \frac{1}{2j_i+1}|\langle n_f j_f|| D ||n_i j_i\rangle|^2 .\label{eq:ProbE1} } Considering the hydrogenic radial wave function $C_n r^{n-1}e^{-r/na_Z}$ with $C_n=(2/n)^{2n+1}/\sqrt{4\pi a_Z^3}$, where $a_Z=a_B/Z$, we can estimate the last factor, between the plane wave and the bound state $|n\rangle$, as $|2^{2n}(n+1)!\,a_Z^4/n^{n-1}\sqrt{\pi a_Z^3V}|^2$. Thus, the transition rate to kick an electron out of the atom by electromagnetic interactions is \dis{ \frac{4\alpha_{\rm em}k^3}{3}\frac{a_Z^5}{\pi V}\left(\frac{2^{2n+1} [(n+1)!]}{n^{n-1}}\right)^2. \label{eq:PrE1} } Let us introduce a parameter $M_{\rm eff}$ reproducing Eq. (\ref{eq:PrE1}), using the following effective interaction \dis{ \frac{1}{M^2_{\rm eff}}\bar{e}\gamma^\mu e\bar{A}\gamma_\mu A,\label{eq:4ferInt} } and we use the particle physicists' $2\to n$ transition rate, \dis{ Rate=(2\pi)^{4-3n}\frac{d^3{\bf p}_1'\cdots d^3{\bf p}_n' }{2E_12E_2 2E_1'2E_2'\cdots 2E_n'} \frac{\delta^4(p_1+p_2-p_1'\cdots-p_n')}{V} |\langle p_1'\cdots p_n'|T|p_1p_2\rangle |^2.\label{eq:2nRate} } The two-body scattering rate from Eq. (\ref{eq:2nRate}), using (\ref{eq:4ferInt}) and averaging over a flux of incoming particles on a target at rest, is \dis{ &(2\pi)^{-2}\frac{d^3{\bf p}_e'}{ 2^4m_eE_e'M_A^2}\frac{\delta(\sum_iE_i-\sum_fE_f)}{V} \frac{2^6}{M_{\rm eff}^4}m_eE_e'M_A^2\\ &=4|{\bf p}_e'|^2 d|{\bf p}_e'|\, \frac{\delta( E_\nu'+m_e+\frac{|{\bf p}_e'|^2}{2m_e}+M_A-E_\nu -m_e+\delta_B-M_A)}{\pi V} \frac{1}{M_{\rm eff}^4} =4\pi^{-1}\frac{m_e|{\bf p}_e'| }{M_{\rm eff}^4 V} } where we used \dis{ 16(p_e^\mu {p'_e}^\nu -p'_e\cdot p_e g^{\mu\nu}+p_e^\nu {p'_e}^\mu+m_e^2 g^{\mu\nu}) (P_A^\mu {P'_A}_\nu -P'_A\cdot P_A g_{\mu\nu}+P_A^\nu {P'_A}_\mu+M_A^2 g_{\mu\nu})=\\ 16(2p_e\cdot P_Ap_e'\cdot P_A' -2p'_e\cdot p_eP'_A\cdot P_A +2p_e\cdot P_A'p_e'\cdot P_A+2m_e^2P_A\cdot P_A' \\ -(P_A\cdot P_A')(-2p_e\cdot p_e'+4m_e^2) +M_A^2(-2p_e\cdot p_e'+4m_e^2))\\ \to\frac{32}{M_{\rm eff}^4} (p_e\cdot P_Ap_e'\cdot P_A' +p_e\cdot P_A'p_e'\cdot P_A -M_A^2p_e\cdot p_e'+m_e^2M_A^2)\\ \simeq \frac{32}{M_{\rm eff}^4} \left(3 m_e^2M_A^2-m_e E_e'M_A^2\right)\simeq \frac{2^6}{M_{\rm eff}^4}m_eE_e'M_A^2 .
} This can be compared to (\ref{eq:PrE1}), and we obtain \dis{ \frac{1}{M_{\rm eff}^2}=\sqrt{\frac{\alpha_{\rm em}}{3}}\frac{a_Z^{5/2}}{\sqrt{m_e (m_e+T_e)} } k_n^{3/2}\left(\frac{2^{2n+1} [(n+1)!]}{n^{n-1}}\right) ,\label{eq:Fix} } where we use $k_n$ for the threshold value, $k_n\simeq |E_n({\rm Xe})|/\hbar$. In practice, let us use Eq. (\ref{eq:PrE1}) summing over the electrons from each orbit of the Xenon atom with $k_n^3$, \dis{ & A\equiv\sum_{n=1}^5\,\frac{1}{n^3}f(n)\left(\frac{2^{2n+1} [(n+1)!]}{n^{n-1}}\right)^2=7.028\times10^5,\\ & B\equiv\sum_{n=1}^5\,\frac{1}{n^6}f(n)\left(\frac{2^{2n+1} [(n+1)!]}{n^{n-1}}\right)^2\simeq 1.144\times 10^4, \label{eq:Sumsn} } where $f(n)=2,8,18,18,8$ for $n=1,2,\cdots,5$, respectively. This takes into account the number of electrons $Z$ in the Xenon atom, corresponding to the incoherent process. $B$ appears when we take into account $V$ in the denominator, which is the case in the 3-body scattering. Thus, we estimate, for the Xenon atom of $Z=54$ and $M_A\simeq 7.028\times 10^5\,\mathrm{MeV}$, \dis{ \frac{1}{M_{\rm eff}^2}=\sqrt{\frac{\alpha_{\rm em}}{3}}\frac{\sqrt{A}}{Z^{5/2}m_e^2\sqrt{1+T_e/m_e}} =0.150 (1+T_e/m_e)^{-1/2}\,\,\mathrm{MeV}^{-2} .\label{eq:FixMeff} } If we use (\ref{eq:FixMeff}), then \dis{ \frac{1}{M_{\rm eff}^6}=5.69\times 10^{4}\frac{\,\mathrm{keV}^{-15/2}}{[m_e(m_e+T_e)]^{3/2}} \sum_nk_n^{9/2}\left(\frac{2^{2n+1}[(n+1)!]}{n^{n-1}} \right)^3\equiv 2.27\times 10^{5}\frac{\,\mathrm{MeV}^{-3}}{[m_e(m_e+T_e)]^{3/2}}\,C\label{eq:M6eff} } where $C$ becomes\footnote{If we use $Z_{\rm eff}(n)$, including electron interactions, we obtain $C=5.136\times 10^{12}$, and ${1}/{M_{\rm eff}^6}= 1.17\times 10^{18}[m_e(m_e+T_e)]^{-3/2}\,\,\mathrm{MeV}^{-3}$. } \dis{ C\simeq 2.287\times 10^{12}. } The $3\to n$ transition rate is \dis{ Rate=(2\pi)^{4-3n}\frac{d^3{\bf p}_1'\cdots d^3{\bf p}_n' }{2E_12E_2 2E_32E_1'2E_2'\cdots 2E_n'} \frac{\delta^4(p_1+p_2+p_3-p_1'\cdots-p_n')}{V^2} |\langle p_1'\cdots p_n'|T|p_1p_2p_3\rangle |^2\label{eq:3nRate} } from which we calculate the three-body scattering cross section. A flux of incoming particles 1 (the neutrino in our case) scatters on two particles, 2 (the electron in our case) and 3 (the atom in our case). Thus, the cross section consists of two parts, one with a flux factor $1/|{\bf v}_1-{\bf v}_2|$ and the other with $1/|{\bf v}_1-{\bf v}_3|$. Taking ${\bf v}_3=0$, the neutrino--electron scattering cross section is \dis{ d\sigma=\frac{(2\pi)^{4-9}}{|{\bf v}_1-{\bf v}_2|}\frac{d^3{\bf p}_1'd^3{\bf p}_2' d^3{\bf p}_3' }{2E_12E_2 2E_32E_1'2E_2'2E_3'} \frac{\delta^4(\sum p_i-\sum p_f')}{V} |\langle p_1'p_2' p_3'|T|p_1p_2p_3\rangle |^2\label{eq:3scattF} } where we can take $V$ as the atomic volume through which the flux of neutrinos sweeps. For $1/V$, the sum $B$ in Eq. (\ref{eq:Sumsn}) will be used below. The $T$-matrix squared in Eq. (\ref{eq:3scattF}) is summed over the spins, \dis{ \frac{1}{(2s_e+1)(2s_\nu+1)}\sum_{s_\nu,s_e}\sum_{s'_\nu,s'_e}|{\rm Amp}|^2= (|T|^2_{\rm Q}). } If we consider the MM interaction also, then we will have the factor $(|T|^2_{\rm MM} +|T|^2_{\rm Q})$ instead of $(|T|^2_{\rm Q})$. For the volume $V$, we use the Xenon shell volume for each $n$ separately; for the $n$-th shell we used $V=(4\pi /3)(na_Z)^3$. \end{appendix} \acknowledgments{\noindent We thank Kyungwhan Ahn, Elena Aprile, Ki-Young Choi, Paul Frampton, Wonho Jhe, Junu Jeong, Sin-Kyu Kang, Sejin Kim, and Myungbo Shim for useful discussions. J.E.K. thanks the APCTP for the hospitality extended to him during his visit, where this work was started. J.E.K.
is supported in part by the National Research Foundation (NRF) grant NRF-2018R1A2A3074631, and S.Y. is supported in part by the Institute for Basic Science (IBS-R017-D1-2020-a00/IBS-R017-Y1-2020-a00).}
{ "timestamp": "2021-05-06T02:08:36", "yymm": "2105", "arxiv_id": "2105.01842", "language": "en", "url": "https://arxiv.org/abs/2105.01842" }
\section{The Triangle Counting Algorithm} Let $G = (V,E)$ be a graph on $n$ vertices, received as a stream of undirected edges, adversarially ordered. Let $m$ be the number of edges in the stream. We write the stream as $\sigma = (\sigma_i)_{i=1}^m$, with each $\sigma_i \in E$. We use $T$ to refer to the number of triangles in $G$, $\Delta_E$ to refer to the maximum number of them sharing a single edge, and $\Delta_V$ the maximum number sharing a single vertex. \begin{remark} As with all streaming triangle counting algorithms, our algorithm will need to be parametrized by statistics of the graph that cannot be known exactly without trivializing the problem---in our case $T$, $\Delta_E$, and $\Delta_V$. However, it will not be necessary to know these exactly---upper bounds on $\Delta_E$ and $\Delta_V$ and a lower bound on $T$ will suffice. If these bounds are tight up to a constant, the complexity of our algorithm will be unchanged; otherwise, replace the parameters $T$, $\Delta_E$, $\Delta_V$ with the respective upper and lower bounds. \end{remark} \subsection{Description of the Algorithm} We begin by choosing two hash functions $\mathbf{f} : V \rightarrow {\{0, 1\}}$ and $\mathbf{g} : E \rightarrow {\{0, 1\}}$, which will serve as our ``vertex sampling'' and ``edge sampling'' functions, respectively. We choose $\mathbf{f}$ to be pairwise independent. $\mathbf{g}$ will only be evaluated at most once for each edge, and so we may choose it to be fully independent. We pick the two functions $\mathbf{f},\mathbf{g}$ such that \[ \E{\mathbf{f}(v)} = p \] for each $v \in V$ and \[ \E{\mathbf{g}(e)} = q \] for each $e \in E$, where $p,q$ are parameters to be set later. Such a hash function $\mathbf{f}$ can be generated by taking a pairwise-independent function $\mathbf{h}: V \to [M]$, where $M = \poly(n)$ is a sufficiently large multiple of $1/p$, and setting $\mathbf{f}(v) = 1$ whenever $\mathbf{h}(v) \leq pM$ (one can construct $\mathbf{g}$ similarly using a four-wise independent hash function). Such functions can be generated and stored in at most $O(\log n)$ bits of space \cite{carter1979universal}. The algorithm will be simple: sample vertices with probability $p$, and sample incident edges with probability $q$. The formal description is given below in Algorithm \ref{alg:octc}. \begin{algorithm}[H] \caption{Triangle Counting Algorithm} \label{alg:octc} \begin{algorithmic}[1] \Procedure{TriangleCounting}{$p,q$} \State $S \gets \emptyset$ \State $\mathbf{\overline{T}} \gets 0$ \For{each update $wv$} \For{$u \in V$} \If{$\mathbf{f}(u) > 0 \wedge uv, uw \in S$} \State $\mathbf{\overline{T}} \mathrel{+}= 1/pq^2$ \EndIf \EndFor \If{$\mathbf{g}(wv)(\mathbf{f}(w) + \mathbf{f}(v)) > 0$} \State $S \gets S \cup \set{wv}$ \EndIf \EndFor \State \textbf{return}~ $\mathbf{\overline{T}}$. \EndProcedure \end{algorithmic} \end{algorithm} \subsection{Analysis of the Algorithm} \begin{lemma} \label{lm:octcsp} This algorithm uses $\bO{mpq\log n}$ bits of space. \end{lemma} \begin{proof} Besides an $\bO{\log n}$-sized counter and the hash function $\mathbf{f}$ ($\mathbf{g}$ is never evaluated more than once for an edge and thus does not need to be stored), the algorithm maintains a set of edges. Each edge will be kept with probability at most $2pq$ and takes $O(\log n)$ space to store, so the result follows.
\end{proof} We will write $T_{uvw}$ for the variable that is $1$ if $uvw$ is a triangle in $G$ with its edges arriving in the order $(uv,uw,vw)$, and 0 otherwise, and so \[ T = \sum_{(u,v,w) \in V^3} T_{uvw}\text{.} \] We will write $\mathbf{\overline{T}}_{uvw}$ for the random variable that is $1/pq^2$ if $T_{uvw} = 1$ and $\mathbf{f}(u)\mathbf{g}(uv)\mathbf{g}(uw) = 1$, and $0$ otherwise. We will therefore have \[ \mathbf{\overline{T}} = \sum_{(u,v,w) \in V^3} \mathbf{\overline{T}}_{uvw}. \] \begin{lemma} \label{lm:octcex} \[ \E{\mathbf{\overline{T}}} = T\text{.} \] \end{lemma} \begin{proof} For any $(u,v,w)$, $\mathbf{f}(u)\mathbf{g}(uv)\mathbf{g}(uw) = 1$ with probability $pq^2$, so $\E{\mathbf{\overline{T}}_{uvw}} = T_{uvw}$. Therefore, \begin{align*} \E{\mathbf{\overline{T}}} &= \sum_{(u,v,w) \in V^3} \E{\mathbf{\overline{T}}_{uvw}}\\ &= \sum_{(u,v,w) \in V^3}T_{uvw}\\ &= T \end{align*} \end{proof} \begin{lemma} \label{lm:octcvar} \[ \Var{\mathbf{\overline{T}}} \le T/pq^2 + T\Delta_E/pq + T\Delta_V/p\text{.} \] \end{lemma} \begin{proof} Consider any (ordered) pair of triples $(u,v,w), (x,y,z) \in V^3$ such that $T_{uvw}T_{xyz} = 1$. If $(u,v,w) = (x,y,z)$, $\mathbf{\overline{T}}_{uvw}\mathbf{\overline{T}}_{xyz} = 1/p^2q^4$ with probability $pq^2$ and $0$ otherwise, so \[ \E{\mathbf{\overline{T}}_{uvw}\mathbf{\overline{T}}_{xyz}} = \E{\mathbf{\overline{T}}_{uvw}^2} = 1/pq^2. \] At most $T$ such pairs of triples can exist. Now, if $\abs{\set{uv,uw} \cap \set{xy,xz}} = 1$, then $u = x$ and so $\mathbf{\overline{T}}_{uvw}\mathbf{\overline{T}}_{xyz} = 1/p^2q^4$ iff $\mathbf{f}(u) = 1$ and $\mathbf{g}(e) = 1$ for all $e$ in the size-3 set $\set{uv,uw,xy,xz}$, which happens with probability $pq^3$, and so \[ \E{\mathbf{\overline{T}}_{uvw}\mathbf{\overline{T}}_{xyz}} = 1/pq. \] Each triangle has at most $\Delta_E$ other triangles it shares an edge with, so there are at most $T\Delta_E$ such pairs. If $\set{uv,uw} \cap \set{xy,xz} = \emptyset$ but $u = x$, then $\mathbf{\overline{T}}_{uvw}\mathbf{\overline{T}}_{xyz} = 1/p^2q^4$ iff $\mathbf{f}(u) = 1$ and $\mathbf{g}(e) = 1$ for all $e$ in the size-4 set $\set{uv,uw,xy,xz}$, which happens with probability $pq^4$, and so \[ \E{\mathbf{\overline{T}}_{uvw}\mathbf{\overline{T}}_{xyz}} = 1/p. \] Each triangle has at most $\Delta_V$ other triangles it shares a vertex with, so there are at most $T\Delta_V$ such pairs. Finally, if $\set{u,v,w} \cap \set{x,y,z} = \emptyset$, then $\mathbf{\overline{T}}_{uvw}\mathbf{\overline{T}}_{xyz} = 1/p^2q^4$ iff $\mathbf{f}(u) = 1$, $\mathbf{f}(x) = 1$, and $\mathbf{g}(e) = 1$ for all $e$ in the size-4 set $\set{uv,uw,xy,xz}$, which happens with probability $p^2q^4$, and so \[ \E{\mathbf{\overline{T}}_{uvw}\mathbf{\overline{T}}_{xyz}} = 1. \] At most $T^2$ such pairs can exist. Therefore, \begin{align*} \E{\mathbf{\overline{T}}^2} &= \sum_{(u,v,w) \in V^3}\sum_{(x,y,z) \in V^3} \E{\mathbf{\overline{T}}_{uvw}\mathbf{\overline{T}}_{xyz}}\\ &= \sum_{(u,v,w) \in V^3} \E{\mathbf{\overline{T}}_{uvw}^2} + \sum_{(u,v,w) \in V^3} \left(\sum_{\substack{(x,y,z) \in V^3\\\abs{\set{uv, uw} \cap \set{xy,xz}} = 1}} \E{\mathbf{\overline{T}}_{uvw}\mathbf{\overline{T}}_{xyz}} +\right.\\ &\phantom{=}\left. 
\sum_{\substack{(x,y,z) \in V^3\\\set{uv, uw} \cap \set{xy,xz} = \emptyset\\ u = x}} \E{\mathbf{\overline{T}}_{uvw}\mathbf{\overline{T}}_{xyz}} + \sum_{\substack{(x,y,z) \in V^3\\\set{u,v,w} \cap \set{x,y,z} = \emptyset}} \E{\mathbf{\overline{T}}_{uvw}\mathbf{\overline{T}}_{xyz}} \right)\\ &\le T/pq^2 + T\Delta_E/pq + T\Delta_V/p + T^2 \end{align*} by adding the previously established bounds for all four kinds of pair. The lemma then follows from the fact that $\Var{\mathbf{\overline{T}}} = \E{\mathbf{\overline{T}}^2} - \E{\mathbf{\overline{T}}}^2 = \E{\mathbf{\overline{T}}^2} - T^2$. \end{proof} We may now prove Theorem~\ref{thm:optimaltcalg}. \optimaltcalg* \begin{proof} We may assume $\Delta_V$ (more specifically, the upper bound we have on it) is at least 1, as otherwise we already know $G$ to be triangle-free. By Lemmas~\ref{lm:octcex} and~\ref{lm:octcvar}, we can set $p = \Delta_V/T$, $q = \max\set*{\Delta_E/\Delta_V, 1/\sqrt{\Delta_V}}$ and run Algorithm~\ref{alg:octc} to obtain an estimator with expectation $T$ and variance at most $3T^2$. (These will give valid probabilities, as $\Delta_V \le T$ by definition, and $\Delta_E$ is at most $\Delta_V$, as any pair of triangles sharing an edge also share a vertex.) By Lemma~\ref{lm:octcsp}, this will take $\bO{\frac{m}{T}\paren*{\Delta_E + \sqrt{\Delta_V}}\log n}$ space. Repeating this $36/\varepsilon^2$ times and taking the mean will give an estimator with expectation $T$ and variance at most $\varepsilon^2 T^2/12$, which by Chebyshev's inequality is within $\varepsilon T$ of $T$ with probability at least $11/12$. We can then repeat \emph{this} $\bO{\log \frac{1}{\delta}}$ times and take the median to get an estimator that will be within $\varepsilon T$ of $T$ with probability $1 - \delta$. \end{proof} \section{Conclusion} We resolve the complexity of triangle counting in the insertion-only streaming model, in terms of the well-studied natural graph parameters $m, T, \Delta_E, \Delta_V$. The results of~\cite{KKP18} resolved this problem for the \emph{linear sketching} model, and a result of~\cite{LNW14} states that, under certain conditions, turnstile streaming algorithms are equivalent to linear sketches, suggesting that the algorithm of~\cite{KP17} is optimal for turnstile streams as well. However,~\cite{KP20} showed that an insertion-only algorithm of~\cite{JG05} can be converted into a turnstile streaming algorithm provided that, for instance, the length of the stream is reasonably constrained (with the number of insertions and deletions no more than $O(1)$ times the final size of the graph). It remains open whether our algorithm can be converted into a turnstile algorithm under such constraints, or whether the bounded-stream turnstile complexity of triangle counting lies somewhere between insertion-only and linear sketching. Another natural question concerns the choice of parameters---the algorithm of \cite{PT12} is optimal in terms of $m, T$, and $\Delta_E$, but not when the parameter $\Delta_V$ is considered. Are there natural extensions of the parametrization that allow for better results? The results of \cite{KP17} include a proof of instance-optimality for a restricted subclass of non-adaptive sampling algorithms, but for more general algorithms it is clear that there are at least \emph{unnatural} extensions of the parametrization that help. For instance, if all the edges of a graph are guaranteed to belong to high-degree vertices, but all the triangles belong to low-degree vertices, a simple filtering strategy allows an improvement.
In particular, the lower bound instances of \cite{BOV13, KP17} are both sparse graphs, and so cannot be constructed if $n$ is constrained to be small relative to $m$ or $T$. For the densest graphs (with $\Theta\paren*{n^2}$ edges and $\Theta\paren*{n^3}$ triangles) our algorithm and the algorithm of~\cite{KP17} are already trivially optimal up to log factors, since they use only $\polylog(n)$ bits. However, the complexity landscape for more general dense graphs remains open. \section{Introduction} Triangle counting is a fundamental problem in the study of graph algorithms, and one of the best studied in the field of graph streams. It arises in the analysis of social networks~\cite{BHLP11}, web graphs~\cite{EM02}, and spam detection~\cite{BBCG08}, among other applications. From a theoretical perspective, it is of particular interest as the simplest subgraph counting problem that cannot be solved by considering only \emph{local} information about individual vertices. In other words, counting triangles requires one to aggregate information between pairs of \textit{non-incident} edges. In this paper, we present an optimal algorithm for counting triangles in the \emph{graph streaming} setting, settling a long line of work on this problem. \paragraph{Graph Streaming.} In the (insertion-only) graph streaming setting, a graph $G = (V,E)$ is received as a stream of edges $(\sigma_t)_{t=1}^m$ from its edge set $E$ in an arbitrary order, and an algorithm is required to output the answer to some problem at the end of the stream, using as little space as possible\footnote{Other properties, such as update time, are also of interest, but space has been the primary object of study in the theory of streaming.}. Variants on this model include turnstile streaming (in which edges may be deleted as well as inserted), and models that restrict what kind of state the algorithm may maintain. \paragraph{Triangle Counting in Graph Streams.} The theoretical study of graph streaming was initiated by \cite{BKS02}, who studied the problem of triangle counting---estimating the number of three-cliques in a graph. They demonstrated that, in general, sublinear space algorithms cannot exist for this problem; namely, in the worst case any algorithm for triangle counting in a stream must use $\bOm{n^2}$ bits of space. On the other hand, they also showed that, if one parameterizes in terms of the number of triangles $T$, one can often beat this pessimistic lower bound. In particular, they gave an algorithm that uses $\bOt{\paren{\frac{mn}{T}}^3}$ space to count triangles in a graph with $m$ edges, $n$ vertices, and $T$ triangles, based on streaming algorithms for approximating frequency moments.\footnote{Here we assume the desired approximation is a multiplicative $(1 \pm \varepsilon)$ with success probability $\delta$ for some positive constants $\varepsilon, \delta$. For most algorithms mentioned here, including our own, the dependence on non-constant $\varepsilon, \delta$ will go as $\varepsilon^{-2}\log\delta^{-1}$. We use $\bOt{\cdot}$ to suppress logarithmic or polylogarithmic factors in the argument.} Of course, it is unreasonable to assume that an algorithm knows the number of triangles $T$ in advance, as this would make counting superfluous. Instead, it will suffice to have constant-factor bounds on the parameters in question.\footnote{One might hope to use these parameters adaptively, giving an algorithm that uses more space the smaller $T$ is without needing a lower bound at the start.
However, this is in general impossible, as a graph stream with few triangles and a graph stream with many triangles may be indistinguishable until the last few updates.} Several years later, the upper bound for this problem was improved to $\bOt{\frac{mn}{T}}$ by \cite{BFLMS06}, while \cite{JG05} gave a (non-comparable) algorithm that samples edges and stores neighborhoods of their endpoints in order to find triangles, achieving $\bOt{\frac{md^2}{T}}$ space in graphs with maximum degree $d$. Both algorithms were later subsumed by the $\bOt{\frac{md}{T}}$ space algorithm of \cite{PTTW13}. \paragraph{Additional Graph Parameters for Triangle Counting.} Despite the large strides made by the aforementioned algorithms, none of them can achieve sublinear space, even for graphs guaranteed to have as many as $\Omega(m)$ triangles, without bounding parameters of the graph other than $m$ and $T$. This feature was shown to be necessary by \cite{BOV13}, who constructed a family of graphs with either $0$ or $\bOm{m}$ triangles such that distinguishing between the two requires $\bOm{m}$ space. However, this ``hard instance'' is an unusual graph---every triangle in it shares a single edge. This motivated the introduction of a new graph parameter $\Delta_E$, defined as the maximum number of triangles which share a single edge in $G$. When one parameterizes in terms of $\Delta_E$, the lower bound of \cite{BOV13} becomes $\bOm{\frac{m\Delta_E}{T}}$. As it happens, the maximum degree of graphs in this family is also $\Delta_E$, so in particular this proves \cite{PTTW13} to be optimal among algorithms parametrized by only $m, d$, and $T$. The first algorithm to directly take advantage of the new parameter $\Delta_E$ was given by \cite{TKMF09}. Their algorithm is simple: keep each edge in the stream independently with probability $p$, count the number of triangles $T'$ in the resulting graph, and output $T'p^{-3}$. They show that setting $p = \bO{\frac{1}{T^{1/3}} + \frac{\Delta_E}{T}}$ suffices for an accurate count, and thereby achieve $\bOt{m\paren*{\frac{1}{T^{1/3}} + \frac{\Delta_E}{T}}}$ space. This algorithm has another important feature: it is a \emph{non-adaptive sampling} algorithm---whether it keeps an edge it sees does not depend on the contents of the stream before the edge arrives. This means it can naturally handle \emph{turnstile streams}, streams in which edges may be deleted as well as inserted. In fact, through the use of sketches for $\ell_0$ sampling (see e.g.~\cite{CJ19}) such algorithms may be converted into \emph{linear sketches}, which are algorithms that store only a linear function of their input (when considered as a vector in ${\{0, 1\}}^{|V| \choose 2}$). An improved non-adaptive sampling algorithm was given in \cite{PT12}, which used the technique of coloring vertices with one of $k$ colors and keeping all monochromatic edges. This improved the space usage of the algorithm to $\bOt{m\paren*{\frac{1}{\sqrt{T}} + \frac{\Delta_E}{T}}}$. In \cite{KP17}, it was shown (in combination with the existing lower bound of \cite{BOV13}) that this is optimal, even for insertion-only algorithms---for every $T$ up to $\Omega(m)$, a family of graphs exists with $\Delta_E \le 1$ and either $0$ or $T$ triangles, such that $\bOm{\frac{m}{\sqrt{T}}}$ space is required to distinguish the two. However, as with the lower bound of \cite{BOV13}, the hard instance from \cite{KP17} is a rather strange graph: this time every triangle shares a single \emph{vertex}.
Also similarly to the lower bound of \cite{BOV13}, the bound from \cite{KP17} weakens as the maximum number of triangles sharing a single vertex, a parameter denoted by $\Delta_V$, is restricted. In this case, when parameterized by $\Delta_V$, the lower bound becomes $\bOm{\frac{m\sqrt{\Delta_V}}{T}}$. This was accompanied in \cite{KP17} by an algorithm that achieves $\bOt{m\paren*{\frac{1}{T^{2/3}} + \frac{\sqrt{\Delta_V}}{T} + \frac{\Delta_E}{T}}}$ space, improving on \cite{PT12} for graphs with $\Delta_V = o(T)$. Subsequently, it was shown in \cite{KKP18} that any linear sketching algorithm for counting triangles requires $\bOm{\frac{m}{T^{2/3}}}$ space, even if every triangle is disjoint from every other and therefore $\Delta_E = \Delta_V \le 1$, and so the \cite{KP17} algorithm is optimal among linear sketches. By the turnstile streaming-linear sketching equivalence of \cite{LNW14}, this suggests that \cite{KP17} is also optimal among turnstile streaming algorithms.\footnote{However, the \cite{LNW14} equivalence depends on rather stringent conditions that a turnstile algorithm must satisfy. In \cite{KP20}, it was shown that relaxing these conditions allows turnstile streaming algorithms for triangle counting that are closer to the result of \cite{JG05}.} However, this leaves open the question of how hard triangle counting is for algorithms that are \emph{not} required to handle deletions (i.e., the standard ``insertion-only'' model). We resolve this question (up to a log factor, as with previous optimality results), by giving an optimal algorithm for triangle counting in insertion-only streams. \begin{figure} \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{|l|l|l|} \hline Paper & Space & Model\\ \hline\hline \cite{PTTW13} & $\bOt{\frac{md}{T}}$ & Insertion-only\\ \cite{BOV13} & $\bOm{\frac{m\Delta_E}{T}}$ & Insertion-only\\ \cite{KP17} & $\bOm{\frac{m\sqrt{\Delta_V}}{T}}$ & Insertion-only\\ \hline \cite{PT12} & $\bOt{m\paren*{\frac{1}{\sqrt{T}} + \frac{\Delta_E}{T}}}$ & Linear Sketching\\ \cite{KP17} & $\bOt{m\left(\frac{1}{T^{2/3}} + \frac{\sqrt{\Delta_V}}{T} + \frac{\Delta_E}{T}\right)}$ & Linear Sketching\\ \cite{KKP18} & $\bOm{\frac{m}{T^{2/3}}}$ & Linear Sketching\\ \hline This work & $\bOt{\frac{m}{T}\paren{\sqrt{\Delta_V} + \Delta_E}}$ & Insertion-only\\ \hline \end{tabular} \caption{Best known upper and lower bounds for triangle counting for insertion-only and linear sketching algorithms. $m$ is the number of edges, $T$ the number of triangles, $d$ the maximum degree, and $\Delta_E$, $\Delta_V$ are the maximum number of triangles sharing an edge or a vertex respectively. Note that linear sketching upper bounds imply insertion-only upper bounds, while lower bounds are the opposite.} \label{fig:existing} \end{figure} \paragraph{Our Algorithm.} We give a new algorithm for counting triangles in insertion-only graph streams. \begin{restatable}{thm}{optimaltcalg} \label{thm:optimaltcalg} For every $\varepsilon, \delta \in (0,1)$, there is an algorithm for insertion-only graph streams that approximates the number of triangles in a graph $G$ to $\varepsilon T$ accuracy with probability $1 - \delta$, using \[ \bO{\frac{m}{T}\paren*{\Delta_E + \sqrt{\Delta_V}}\log n\frac{\log\frac{1}{\delta}}{\varepsilon^2}} \] bits of space, where $m$ is the number of edges in $G$, $T$ the number of triangles, $\Delta_E$ the maximum number of triangles which share a single edge, and $\Delta_V$ the maximum number of triangles which share a single vertex. 
\end{restatable} \noindent This matches, up to a log factor (and for constant $\varepsilon, \delta$), the lower bounds of \cite{BOV13} and \cite{KP17}. It subsumes both the algorithm of \cite{KP17} and the $\bOt{\frac{md}{T}}$ algorithm of \cite{PTTW13}, as in any graph with max degree $d$, we have $\Delta_E \le d$ and $\Delta_V \le {d \choose 2}$. This closes the line of work discussed above on the complexity of triangle counting in insertion-only streams. \paragraph{Other Related Work} In the \emph{multi-pass} streaming setting, an algorithm is allowed to pass over the input stream more than once. \cite{CJ14} shows that multi-pass algorithms take $\bTht{m/\sqrt{T}}$ space for arbitrary graphs, giving an algorithm for two passes and a lower bound for a constant number of passes. \cite{KMPT12} shows a three-pass streaming algorithm using $O(\sqrt{m} + m^{3/2}/T)$ space. \cite{BC17} gave a $\bO{m^{3/2}/T}$ four-pass algorithm. In the \emph{adjacency-list} model, in which each vertex's list of neighbors is received as a block (and so in particular every edge is seen twice), \cite{MVV16} gave a $\bO{m/\sqrt{T}}$ space one-pass algorithm, while~\cite{KMPV19} gave a $\bO{m/T^{2/3}}$ space two-pass algorithm, as well as tight (but conditional on open communication complexity conjectures) lower bounds for both. The problem has also been studied in the query model, where the concern is minimizing time or query count rather than space. While this is a very different setting, similar concerns around mitigating the impact of ``heavy'' vertices or edges arise. \cite{ELRS15} considered triangle counting in this setting, which was extended by~\cite{ERS18} to general cliques and~\cite{AKK19} to arbitrary constant-size subgraphs. \section{Overview of the Algorithm} At a high level, many triangle counting algorithms in the literature adhere to the following template: \textbf{(1)} design a sampling scheme to sample triangles, \textbf{(2)} count the number of triangles which survive after this sampling process, \textbf{(3)} rescale the number of empirically sampled triangles by the expected fraction of surviving triangles to obtain an unbiased estimator for $T$. As an example, one could sample each edge uniformly with probability $q$ (this is the approach taken in \cite{TKMF09}). Since for a triangle to survive all three of its edges must be sampled, the expected number of triangles that survive is $Tq^3$. Thus, rescaling the number of empirically sampled triangles by $1/q^3$ yields an unbiased estimator. How large must $q$ be to make this estimator accurate? In order to sample even a single triangle we need $Tq^3 \geq 1$, so clearly $q$ must be at least $1/T^{1/3}$. Moreover, if $\Delta_E$ is the largest number of triangles that share an edge, there might be as few as $T/\Delta_E$ ``heavy'' edges such that sampling a triangle requires sampling at least one of them, and so $q$ must be at least $\Delta_E/T$. It turns out that, up to constant factors, this is also sufficient, and so the space needed by this algorithm is $\bOt{m\paren*{\frac{1}{T^{1/3}} + \frac{\Delta_E}{T}}}$ bits; a minimal sketch of this estimator is given below.
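The sketch is an illustration only (the function and variable names are ours, not the paper's); it stores just the sampled edges, matching the $\bOt{mq}$ space usage of this scheme:

\begin{lstlisting}[language=Python]
import random

def naive_triangle_estimate(edges, q):
    """Edge-sampling estimator described above: keep each edge
    independently with probability q, count surviving triangles,
    and rescale by 1/q^3."""
    kept = [e for e in edges if random.random() < q]
    adj = {}
    for u, v in kept:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    survivors = 0
    for u, v in kept:
        survivors += len(adj[u] & adj[v])  # triangles through edge uv
    return (survivors / 3) / q ** 3        # each triangle is counted 3 times
\end{lstlisting}

The starting point for our algorithm is the following simple observation, which can be seen as an optimization to the sampling algorithm above.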
Given three edges $uv,vw,wu \in E$ arriving in a stream in that order, once the first two edges $uv,vw$ have been sampled and stored, upon seeing the ``completing'' edge $wu$, we will know that the triangle $uvw$ exists in $G$, and may count it immediately---we get the closing edge of each triangle ``for free''. Now for a single triangle to be sampled, we only need to sample the first two edges, and so the probability of finding any given triangle improves to $q^2$, allowing a space complexity of $\bOt{m\paren*{\frac{1}{\sqrt{T}} + \frac{\Delta_E}{T}}}$. However, when $\Delta_V = o(T)$, this is still weaker than allowed by the $\bOm{\frac{m}{T}\paren{\sqrt{\Delta_V} + \Delta_E}}$ lower bound that results from combining the results of \cite{BOV13,KP17}. While the aforementioned algorithm is sub-optimal in general, notice that it does match the lower bounds in the extreme case when $\Delta_V =T$, and all triangles share a single vertex. On the other hand, when $\Delta_V$ is smaller, there are more ``fully disjoint'' triangles in the graph. Consequently, we can afford to subsample by \emph{vertices}, as now dropping a single vertex cannot lose too large a fraction of our triangles. We may sample vertices uniformly with some probability $p$, and deterministically store all edges adjacent to at least one sampled vertex, again counting a triangle whenever we observe an edge $wu$ closing a sampled pair $uv$, $vw$. Each such triangle will be counted iff the ``first'' vertex $v$ of the triangle is sampled, and these may be divided among as few as $T/\Delta_V$ ``heavy'' vertices, so $p$ must be at least $\Delta_V/T$. This again turns out to be sufficient, for a space usage of $\bOt{\frac{m \Delta_V}{T}}$ (note that any pair of triangles sharing an edge also share a vertex, so $\Delta_E \le \Delta_V$, and thus this does not violate the known lower bounds). While this is an improvement on the aforementioned adaptive edge-sampling scheme for small $\Delta_V$, it becomes worse once $\Delta_V > \sqrt{T}$. The crucial insight behind our algorithm is to merge the two aforementioned algorithms with a careful choice of parameterization. Specifically, we sample both edges \textit{and} vertices, before counting triangles that we see closing our sampled wedges. Concretely, we sample vertices $v \in V$ in the graph with probability $p \in (0,1\rbrack$, and then ``activate'' each edge $e \in E$ with probability $q \in (0,1\rbrack$. When an edge $uv \in E$ arrives in the stream, we store it iff $uv$ is active \textit{and} at least one of the vertices $u$ or $v$ was sampled. We denote by $S$ the set of all edges stored by the algorithm. Finally, when a closing edge $wu$ arrives that completes a triangle with edges $uv,vw$ that were previously added to $S$, we check if the vertex $v$ at the center of the wedge $uv,vw$ was sampled, and if so we deterministically increment a counter $\mathbf{C}$. Now observe that, for any given triangle $uvw$, the probability that $uvw$ causes $\mathbf{C}$ to be incremented is exactly $pq^2$. Thus, if we output the quantity $\mathbf{C}/(pq^2)$ at the end of the stream, we obtain an unbiased estimator for the number of triangles in $G$. Notice that when $p=1$ our algorithm reduces to the simpler edge-sampling algorithm stated above. At the other extreme, when $q =1$ our algorithm reduces to the vertex-sampling algorithm. Intuitively, our choice of the parameters $p$ and $q$ is subject to the same constraints faced by the aforementioned edge- and vertex-sampling algorithms.
Firstly, $p$ must be at least $\Delta_V/T$; otherwise the algorithm could miss a ``heavy'' vertex. Furthermore, the product $pq$ must be at least $\Delta_E/T$, to avoid missing ``heavy'' edges, and $pq^2$ must be at least $1/T$ to find any triangles at all. Putting these bounds together, it follows that $q$ must be at least $\max\set*{\frac{\Delta_E}{\Delta_V}, \frac{1}{\sqrt{\Delta_V}}}$. As with all the algorithms discussed so far, this turns out to also be sufficient---we demonstrate that by fixing the sampling parameters\footnote{As mentioned earlier, $\Delta_E \le \Delta_V$, while $\Delta_V \le T$ holds trivially. Thus $p,q$ are valid probabilities.} \[ p = \frac{\Delta_V}{T}, \quad \quad\quad \quad q\geq \max\left\{\frac{\Delta_E}{\Delta_V}, \frac{1}{\sqrt{\Delta_V}} \right\}\] we obtain an algorithm using space $\bO{\frac{m}{T}\paren*{\Delta_E + \sqrt{\Delta_V}}\log n}$ which yields an estimator with $\bO{T^2}$ variance. We may therefore obtain a $(1 \pm \varepsilon)$ multiplicative estimate with probability $1 - \delta$ by using $\bO{\frac{1}{\varepsilon^2}\log\frac{1}{\delta}}$ copies of this algorithm. Consequently, one obtains an algorithm matching, up to a log factor, the lower bounds of \cite{BOV13,KP17}, with optimal space usage in terms of $m, T,\Delta_E,\Delta_V$. A compact single-estimator sketch of this combined sampler is given below.
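The sketch is an illustration only: true random bits stand in for the pairwise- and four-wise independent hash functions $\mathbf{f}$ and $\mathbf{g}$, an adjacency map replaces Algorithm~\ref{alg:octc}'s scan over $V$, and all names are ours:

\begin{lstlisting}[language=Python]
import random
from collections import defaultdict

def triangle_estimate(stream, p, q):
    """One copy of the combined estimator: sample vertices w.p. p,
    activate edges w.p. q, store an arriving edge iff it is active and
    touches a sampled vertex, and credit 1/(p*q^2) for each stored
    wedge it closes."""
    f = defaultdict(lambda: random.random() < p)  # stand-in for hash f
    adj = defaultdict(set)   # adjacency over the stored edge set S
    estimate = 0.0
    for (w, v) in stream:    # edges arrive in adversarial order
        # Count triangles closed by wv: wedges uw, uv already in S
        # whose center u is sampled (the f(u) = 1 check).
        for u in adj[w] & adj[v]:
            if f[u]:
                estimate += 1.0 / (p * q * q)
        # g is evaluated once per edge, so a fresh coin suffices here.
        if random.random() < q and (f[w] or f[v]):
            adj[w].add(v)
            adj[v].add(w)
    return estimate
\end{lstlisting}

Averaging $\bO{1/\varepsilon^2}$ independent copies and taking the median of $\bO{\log(1/\delta)}$ such averages then yields the guarantee of Theorem~\ref{thm:optimaltcalg}.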
{ "timestamp": "2021-07-16T02:03:57", "yymm": "2105", "arxiv_id": "2105.01785", "language": "en", "url": "https://arxiv.org/abs/2105.01785" }
\section{Introduction} \label{sec:intro} Deep neural networks (DNNs) have gained major interest in recent years due to their robust ability to learn based on large amounts of data. DNN-based approaches have been applied to computer vision~\cite{alexnet, resnet, yolo}, machine translation~\cite{sutskever2014sequence, transformer}, audio synthesis~\cite{wavenet}, recommendation models~\cite{naumov2019dlrm, gupta-dlrm-hpca2020}, autonomous driving~\cite{drivenet} and many other fields. Motivated by the high computational requirements of DNNs, there have been exciting developments in both research and commercial spaces in building specialized DNN accelerators for both edge\cite{eyeriss-isca2016, diannao, shidiannao-isca2015, cambricon, scnn, tpu_edge, nvdla-hotchips, samsung} and cloud applications~\cite{dadiannao, scaledeep, tpu-isca2016, brainwave-isca-2018, aws-inferentia, centaur-isca2020}. State-of-the-art DNN accelerators typically incorporate large arrays of processing elements to boost parallelism, together with a deep multi-level memory hierarchy and a flexible network-on-chip (NoC) to improve data reuse. While these architectural structures can improve the performance and energy efficiency of DNN execution, they also expose a large number of scheduling parameters to programmers who must decide when and where each piece of computation and data movement is mapped onto the accelerators both spatially and temporally. Here, we use \textit{schedule} to describe how a DNN layer is partitioned spatially and temporally to execute on specialized accelerators. Given a target DNN layer and a specific hardware architecture, there could be millions, or even billions, of valid schedules with a wide range of performance and energy efficiency~\cite{timeloop2019-ispass}. Considering the vast range of DNN layer dimensions and hardware architectures, there is a significant demand for a generalized framework to quickly produce efficient scheduling options for accelerators of varying hardware configurations. Achieving high performance on a spatially distributed architecture requires several factors to be carefully considered, including tiling for good hardware utilization, pipelining data movement with compute, and maximizing data re-use. Previous scheduling frameworks have attempted to reflect these considerations by formulating an analytical cost model, pruning the scheduling space with known hardware constraints, and then exhaustively searching for the best candidate based on their cost models~\cite{timeloop2019-ispass, interstellar-asplos2020, chatarasi2020marvel, dave2019dmazerunner}. However, navigating the scheduling space in such a brute-force fashion can easily become intractable for larger DNN layers and more complex hardware architectures. Other notable efforts have employed feedback-driven approaches, such as black-box tuning, beam search, and other machine learning algorithms with iterative sampling~\cite{tvm2018-osdi, jia2019beyond, adams2019learning}. However, these schedulers typically require massive training datasets and large-scale simulations to learn performance models, making it infeasible to extend them to other types of hardware accelerators, especially those still under development. Hence, there is a clear need for efficient scheduling mechanisms to \textit{quickly} navigate the search space and produce \textit{performant} scheduling options. In this work, we demonstrate \sys{}, a constrained-optimization-based approach to schedule DNN accelerators. 
In contrast to prior work that requires either exhaustive brute-force search or expensive feedback-driven approaches, \sys expresses DNN accelerator scheduling as a constrained-optimization problem that can be deterministically solved using today's mathematical optimization libraries in one pass. In particular, \sys leverages the regularities in both DNN layers and spatial hardware accelerators, where the algorithmic and hardware parameters can be clearly defined as scheduling constraints. Specifically, \sys formulates the DNN scheduling problem as a prime-factor allocation problem that determines 1) tiling sizes for different memory levels, 2) relative loop ordering to exploit reuse, and 3) how computation should be executed spatially and temporally. \sys constructs the scheduling constraints by exposing both the algorithmic behaviors, e.g., layer dimensions, and hardware parameters, e.g., memory and network hierarchies. Together with clearly defined and composable objective functions, \sys can solve the DNN scheduling problem in one shot without expensive iterative search. Our evaluation demonstrates that \sys-generated schedules outperform state-of-the-art approaches by $2.5\times$ across different DNN network layers, while requiring $90\times$ less scheduling time, since no iterative search is needed. \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{figs/tiling_characterization.pdf} \vspace{-5pt} \caption{\small Execution latency histogram of 40K valid scheduling choices for a ResNet-50 layer on a spatial accelerator.} \label{fig:tilling_characterization} \end{figure} In summary, this work makes the following contributions: \begin{itemize} \item We formulate DNN accelerator scheduling as a constrained-optimization problem that can be solved in a single pass. To the best of our knowledge, \sys is the first constrained-optimization-based approach to tackle major DNN scheduling decisions in one shot. \item We take a communication-oriented approach in the \sys formulation that highlights the importance of data transfer across different on-chip memories and exposes the cost through clearly defined objective functions. \item We demonstrate that \sys can quickly generate high-performance schedules outperforming state-of-the-art approaches for different DNN layers across different hardware architectures. \end{itemize} \section{Background and Motivation} \label{sec:background} In this section, we discuss the complexity of the DNN scheduling space and the state-of-the-art schedulers that navigate it. \subsection{DNN Scheduling Space} \label{sec:bg_motivation} Scheduling is a crucial decision-making process for compilers to effectively assign work to compute resources. With the emergence of numerous DNN accelerators with diverse architectures, there is a need for a fast, performant, and explainable approach to scheduling. Our work focuses on operator-level scheduling, which aims to optimize the performance of each operator, i.e., DNN layer, on specific hardware. Operator-level scheduling typically comprises three key loop optimizations: \textit{loop tiling}, \textit{loop permutation}, and \textit{spatial mapping}. \textit{Loop tiling} describes which loops are mapped to which level of the memory hierarchy and the corresponding tile sizes.
\textit{Loop permutation} determines the relative order of the loops, while \textit{spatial mapping} binds one or more loop dimensions to spatial hardware resources, such as parallel processing elements, instead of mapping them to temporal (i.e., sequential) execution. Each optimization can have a significant impact on performance, and all three need to be considered together to achieve the best results. Consider scheduling a 3$\times$3 convolution layer in ResNet50~\cite{resnet} with 256 input and output channels, and an output dimension of 14$\times$14, on an accelerator with five levels of memory. If we split each individual loop bound into its prime factors and assign each one to a memory level, we would have billions of schedules to consider. Among schedules randomly sampled from all possible loop tilings, half fail to satisfy the buffer capacity constraints (e.g., a schedule is invalid if it requires a 4KB buffer when the available buffer size is only 2KB). \figureautorefname{}~\ref{fig:tilling_characterization} shows the performance distribution of the valid schedules. We observe a wide performance difference among the valid schedules, with the best one outperforming the worst one by $7.2\times$. In addition, we observe clusters of schedules with similar latencies in \figureautorefname{}~\ref{fig:tilling_characterization}, revealing structure in the solution space. \subsection{State-of-the-art Schedulers} \begin{table}[t] \footnotesize \adjustbox{width=\linewidth}{ \begin{tabular}{cc} \toprule Scheduler & Search Algorithm \\ \midrule \midrule \multicolumn{2}{l}{\textit{Brute-force Approaches:} } \vspace{3pt}\\ Timeloop~\cite{timeloop2019-ispass} & Brute-force \& Random \\ dMazeRunner~\cite{dave2019dmazerunner} & Brute-force \\ Triton~\cite{tillet2019triton} &{Brute-force over powers of two}\\ Interstellar~\cite{interstellar-asplos2020} & Brute-force \\ Marvel~\cite{chatarasi2020marvel} & Decoupled Brute-force \\ \midrule \multicolumn{2}{l}{\textit{Feedback-based Approaches:} } \vspace{3pt}\\ AutoTVM~\cite{tvm2018-osdi} & ML-based Iteration \\ Halide~\cite{ragan2013halide} & Beamsearch~\cite{adams2019learning}, OpenTuner~\cite{ansel2014opentuner, mullapudi2016automatically} \\ FlexFlow~\cite{jia2019beyond} & MCMC \\ Gamma~\cite{gamma-iccad2020} & Genetic Algorithm \\ Mind Mapping~\cite{hegde2021mind} & Gradient-based Search \\ \midrule \multicolumn{2}{l}{\textit{Constrained Optimization Approaches:} } \vspace{3pt}\\ Polly+Pluto~\cite{grosser2011polly, bondhugula2008practical, bondhugula2016pluto+} & \\ Tensor Comprehension~\cite{vasilache2018tensor} & Polyhedral Transformations \\ Tiramisu~\cite{bagehadi2019tiramisu} & \\ \midrule \textbf{CoSA} & \bf{Mixed-Integer Programming (MIP)} \\ \bottomrule \end{tabular} \caption{\small State-of-the-art DNN accelerator schedulers.\label{table:related_work}} } \end{table} \begin{figure*}[t] \begin{minipage}{0.67\textwidth} \centering \includegraphics[width=0.95\linewidth]{figs/target_problem.pdf} \caption{\small DNN scheduling problem formulation with \sys. \sys takes 1) DNN layer dimensions and 2) DNN accelerator parameters and expresses the scheduling problem as a constrained optimization problem to produce a performant schedule in one shot.
} \label{fig:overview} \end{minipage} \hfill \begin{minipage}{0.29\textwidth} \begin{lstlisting}[language=C++, caption={\small An example schedule using the loop nest representation for a DNN layer of dimension $R=S=3, P=Q=28, C=8, K=4, N=3$. Same variable prefix indicates tiles from the same problem dimension. },captionpos=b,floatplacement=t,label={lst:example_schedule}] //DRAM level for q2 = [0 : 2) : // Global Buffer level for p2 = [0 : 7) : for q1 = [0 : 7) : for n0 = [0 : 3) : spatial_for r0 = [0 : 3) : spatial_for k1 = [0 : 2) : // Input Buffer level spatial_for k0 = [0 : 2) : // Weight Buffer level for c1 = [0 : 2) : for p1 = [0 : 2) : // Accumulation Buffer level for s0 = [0 : 3) : for p0 = [0 : 2) : spatial_for c0 = [0 : 8) : // Register for q0 = [0 : 2) : \end{lstlisting} \end{minipage} \vspace{-10pt} \end{figure*} \smallskip Given that the scheduling space for a DNN layer can have billions of valid schedules, finding a good schedule through exhaustive search can become an intractable problem. Table~\ref{table:related_work} shows some recent efforts to tackle this complexity. \subsubsection{Brute-force Approaches} Recent efforts combine exhaustive search with heuristics to manually prune the scheduling space~\cite{timeloop2019-ispass, dave2019dmazerunner, interstellar-asplos2020, tillet2019triton,chatarasi2020marvel}. To lower the cost of exhaustive search, schedulers in this category typically use a lightweight analytical model to estimate latency, throughput, and power consumption, comparing all valid mappings of a given layer to find the best schedule. The disadvantages of this approach are twofold. First, such a brute-force search tends to be exceedingly expensive for complex hardware architectures, making it infeasible to find a good schedule quickly. Second, the generated schedules often do not perform optimally, since analytical models may fail to consider the communication latency across the spatial hardware. \subsubsection{Feedback-based Approaches} Other recent efforts use feedback-driven approaches along with machine learning or other statistical methods~\cite{tvm2018-osdi, ragan2013halide, adams2019learning, jia2019beyond, gamma-iccad2020, hegde2021mind} to improve the accuracy of the cost model and search for the solution using black-box or gradient-based search. Although such approaches can potentially learn the distribution of the scheduling space, they typically require a large amount of training data due to their feedback-driven nature. As a result, these approaches are mainly applicable to post-silicon hardware, where large-scale measurement is possible, but are not feasible for hardware still under development. \subsubsection{Constrained-optimization Approaches} Constrained-optimization problems, in which objective functions are maximized or minimized subject to given sets of constraints, have demonstrated the ability to solve many complex large-scale problems in a reasonable time. Such methods have been widely used in architecture and systems research for instruction scheduling~\cite{nowatzki2013general,nowatzki2018hybrid,chin2018architecture}, high-level synthesis~\cite{cong2006efficient}, memory partitioning~\cite{autotm-asplos2020, ilp-cases2001, cong2011automatic}, algorithm selection~\cite{ilp-multiprocessor, janus-cgo2019}, and program synthesis~\cite{phothilimthana2014chlorophyll,sketching-pldi2005,superoptimizers-asplos2006,search-ps-cacm,swizzle-asplos2019}.
In particular, polyhedral transformations have leveraged constrained-optimization-based approaches for auto-vectorization and loop tiling~\cite{bondhugula2008practical, grosser2011polly, kong2013polyhedral,park2013predictive, baghdadi2015pencil, acharya2018polyhedral}. Prior work targets general-purpose CPUs and GPUs that run with fine-grained instructions and hardware-managed caches, as opposed to the software-managed spatial accelerators that we target. In addition, existing polyhedral-based approaches~\cite{bondhugula2008practical, baghdadi2015pencil, bagehadi2019tiramisu} lack direct support for tile-size optimization. Instead, they take the tile size as input and apply a transformation based on the given tile size. Due to this limitation, the tile-size decision cannot be co-optimized with other loop transformations, e.g., loop permutation, in one pass, leading to sub-optimal schedules. To address the drawbacks of existing approaches and leverage the regularities of DNN workloads and accelerator designs, \sys employs constrained optimization to tackle the DNN scheduling problem in one pass. \sys presents a unique domain-specific representation for DNN scheduling that better captures the utilization and communication cost and encodes the different loop transformations, i.e., tiling size, loop permutation, and spatial mapping decisions, in one formulation. This unified representation enables us to solve for all three optimizations in one pass and produce efficient schedules for a complex accelerator system with a multi-level memory hierarchy. \section{\sys{} Framework}\label{sec:framework} \subsection{\sys Overview} \sys{} optimizes operator-level schedules for mapping DNN layers onto spatial DNN accelerators. Specifically, \sys formulates the scheduling problem as a constrained-optimization problem with \textit{variables} representing the schedule, \textit{constraints} representing DNN dimensions and hardware parameters, and \textit{objective} functions representing goals, such as maximizing buffer utilization or achieving better parallelism. \figureautorefname~\ref{fig:overview} shows the target problem space of \sys{}. \sys{} takes the specifications of the DNN layers and the underlying spatial accelerator as input constraints and generates a valid and high-performance schedule based on the objective functions in one pass. \subsubsection{Target Workload} This work targets DNN operators that can be expressed as a nested loop with 7 variables as loop bounds: $R, S, P, Q, C, K, N$. $R$ and $S$ refer to the convolution kernel width and height, $P$ and $Q$ refer to the output width and height, $C$ refers to the input channel size, $K$ refers to the output channel size, and $N$ refers to the batch size, as illustrated in \figureautorefname~\ref{fig:overview}. The convolution operation computes the dot product of an $R\times S \times C$-sized block of inputs and the weights to generate one point in the output; a naive reference implementation of this loop nest is sketched at the end of this subsection. Matrix multiplications can be expressed in this scheme as well. \subsubsection{Target Architecture} \sys targets spatial architectures with an array of processing elements (PEs) connected via an on-chip network and with multiple levels of memory hierarchy, a commonly adopted architecture template in today's DNN accelerator designs~\cite{scaledeep, dadiannao, tangram-asplos19, tetris-asplos17, interstellar-asplos2020, shao2019-micro, chen2019eyeriss, maeri-asplos2018, centaur-isca2020, sigma-hpca2020, plasticine-isca2017}.
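For concreteness, the following is a minimal, unoptimized reference implementation of the seven-loop operator described above (a sketch with our own naming conventions; stride-1 and no padding are assumed). Every schedule \sys{} considers is a tiled, reordered, and spatially mapped execution of exactly this loop nest:

\begin{lstlisting}[language=Python]
import numpy as np

def conv_naive(inputs, weights, R, S, P, Q, C, K, N):
    # inputs: (N, C, P+R-1, Q+S-1), weights: (K, C, R, S).
    # Seven loops, one per bound R, S, P, Q, C, K, N.
    out = np.zeros((N, K, P, Q))
    for n in range(N):
        for k in range(K):
            for p in range(P):
                for q in range(Q):
                    for c in range(C):
                        for r in range(R):
                            for s in range(S):
                                out[n, k, p, q] += (
                                    inputs[n, c, p + r, q + s]
                                    * weights[k, c, r, s])
    return out
\end{lstlisting}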
\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figs/abls_perm.pdf} \vspace{-7pt} \caption{\small Performance comparison of schedules with different loop permutations for a convolution operator with the layer dimensions of $R=S=3$, $P=Q=8$, $C=32$, $K=1024$. The leftmost schedule (\texttt{CKP}) refers to a relative ordering where the input channel dimension (\texttt{C}) is the outermost loop and the output height dimension (\texttt{P}) is the innermost loop. Since this layer is weight-heavy, loop permutations that emphasize weight reuse, e.g., \texttt{PCK} and \texttt{PKC}, are more efficient.} \vspace{-3pt} \label{fig:abls_perm} \end{figure} \begin{figure}[t] \centering \includegraphics[trim=0 10 0 10, width=\linewidth ]{figs/abls_spatial_perm.pdf} \vspace{-17pt} \caption{\small Performance comparison of schedules with different spatial mappings for a convolution operator with the layer dimensions of $R=S=1$, $P=Q=16$, $C=256$, $K=1024$. Factors in \textit{s} list are for spatial mapping, and factors in \textit{t} list are for temporal mapping. For example, \texttt{s:P4C4,t:K4} represents a mapping where a factor 4 of the \texttt{P} dimension and a factor 4 of the \texttt{C} dimension are mapped to spatial execution in a system with 16 PEs, leaving \texttt{K}'s factor 4 to temporal mapping.} \label{fig:abls_spatial_perm} \vspace{-3pt} \end{figure} \subsubsection{Target Scheduling Decisions} \sys-generated schedules describe how a specified DNN layer is executed on a given spatial architecture. Listing~\ref{lst:example_schedule} shows an example of a schedule. Here, we use a loop-nest representation~\cite{timeloop2019-ispass} to explicitly describe how the computation of a convolution layer is mapped to levels of memory hierarchies. We highlight three aspects of the schedule: 1) \textbf{loop tiling}, which describes which loops are mapped to which memory level and the values of the loop bounds; 2) \textbf{loop permutation}, which handles the relative ordering between loops in the same memory hierarchy; and 3) \textbf{spatial mapping}, which defines which loops are mapped to parallel spatial resources (shown as \textcolor{blue}{\texttt{spatial\_for}} loops in Listing~\ref{lst:example_schedule}). All three factors play a key role in the efficiency of the scheduling choice. Next, we highlight the implications of loop permutation and spatial mapping, both of which are less explored than the well-studied loop tiling. \figureautorefname~\ref{fig:abls_perm} illustrates the impact of \textbf{loop permutation} for a convolution layer on a given hardware design. All the schedules use the same loop tiling and spatial mapping except the loop ordering at the global-buffer level, as indicated in the labels of the X-axis, where \texttt{CKP} means the input channel dimension (\texttt{C}) is the outermost loop, and the output height dimension (\texttt{P}) is the innermost loop. In this case, selecting P as the outermost loop, i.e. \texttt{PCK} and \texttt{PKC}, can lead to a 1.7$\times$ speedup for this layer, motivating the need to consider the implications of loop permutation in the scheduling problem. ~\figureautorefname~\ref{fig:abls_spatial_perm} shows the impact of \textbf{spatial mapping} on DNN execution. We notice that there is a 4.3$\times$ gap between best (rightmost) and worst (leftmost) schedules for the layer in consideration. The fundamental reason for the differences is the different communication traffic generated by different spatial mapping options. 
The best schedule, i.e., the rightmost schedule in the figure (\texttt{s:P2C4K2, t:P2K2}), is obtained when factors $P=2$, $C=4$, $K=2$ are mapped to the spatial loops, which cannot be achieved by simply choosing either model or data parallelism in the spatial partition. As a result, a systematic evaluation of different spatial mapping choices is required to find a good schedule. The rest of the section discusses how \sys formulates the scheduling variables, constraints, and objectives to solve the DNN scheduling problem. \subsection{\sys{} Variables and Constants} This section discusses the variables and constants, summarized in Table~\ref{table:notation}, used in the \sys{} formulation. \subsubsection{Variable Representation} \label{sec:variable_representation} \begin{table}[h] \resizebox{1\columnwidth}{!} { \begin{tabular}{cc|cc|cc} \toprule \multicolumn{2}{c|}{ \textbf{\sys Variables} } & \multicolumn{2}{c|}{ \textbf{\sys Constants} } & \multicolumn{2}{c}{ \textbf{Indices} } \\ \midrule $\mathbf{X}$ & \multirow{4}{*}{ \pbox{20cm}{ binary matrix\\to represent\\ a schedule}} & $\mathbf{A}$ & \multirow{2}{*}{\pbox{20cm}{layer dimension to \\ data tensor mapping}} & $i$ & memory level \\ & & & & $j$ & layer dimension \\ & & $\mathbf{B}$ & \multirow{2}{*}{\pbox{20cm}{ memory level to \\ data tensor mapping} } & $n$ & prime factor index \\ & & & & $k$ & mapping choice \\ & & & & $z$ & permutation level \\ & & & & $v$ & data tensor\\ \bottomrule \end{tabular} } \vspace{-2pt} \caption{\small \sys Notations.} \label{table:notation} \end{table} \begin{table} \small \begin{tabular}{c} \textbf{DNN Layer:} $R=3, S=1, P=1, Q=1, C=1, K=4, N=3$ \\ $\xrightarrow[]{} $ \textbf{Prime Factors:} [[3],[1],[1],[1],[1],[2,2],[3]] \end{tabular} \large \resizebox{\columnwidth}{!} { \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \toprule Idx & \multicolumn{2}{c|}{ } & Perm & \multicolumn{9}{c}{ Schedule } \\ \hline $j$ & \multicolumn{2}{c|}{ Layer Dim. } & \multirow{3}{*}{ } & \multicolumn{2}{c|}{ R = 3 } & ... & \multicolumn{4}{c|}{ K = 4 } & \multicolumn{2}{c}{ N = 3 } \\ \cline{1-3} \cline{4-13} $n$ & \multicolumn{2}{c|}{ Prime Factors} & & \multicolumn{2}{c|}{ 3 } & ... & \multicolumn{2}{c|}{ 2 } & \multicolumn{2}{c|}{ 2 } & \multicolumn{2}{c}{ 3 } \\ \cline{1-3} \cline{4-13} $k$ & \multicolumn{2}{c|}{ s / t Mapping } & &s&t& & s & t & s & t & s & t \\ \midrule \cline{3-13} \multirow{8}{*}{ $i$} & \multirow{8}{*}{ \rotatebox[origin=c]{90}{Memory Levels} } & Register & ... & & & & & & & & & \\ \cline{3-13} & ... & ... & & & & & & & & & \\ \cline{3-13} & & InputBuf & ... & & & &\cmark& & & & & \\ \cline{3-13} & & \multirow{5}{*}{ GlobalBuf } & $O_0$ & & & & & & & & & \\ \cline{4-13} && & $O_1$ & & & & & & & & &\cmark\\ \cline{4-13} && & $O_2$ & & & & & &\cmark& & & \\ \cline{4-13} && & ... & & & & & & & & & \\ \cline{4-13} && & $O_{Z}$ &\cmark& & & & & & & & \\ \cline{2-13} \bottomrule \end{tabular} } \caption{\small Example binary matrix $\mathbf{X}$ representing a schedule. A checkmark in s, t indicates spatial or temporal mapping. A checkmark in $O_{0},...,O_{Z}$ indicates the rank for loop permutation. In this schedule, the loop tile of size $3$ from problem dimension $N$ is allocated within the GlobalBuf at the innermost loop level, assigned for temporal execution. Both loop tiles from $K$ are mapped to spatial resources.} \label{table:schedule} \end{table} We devise a mathematical representation for DNN schedules and formulate the scheduling problem as a prime-factor allocation problem.
Given a layer specification, we first factorize each loop bound into its $prime\_factors$. If a loop bound is itself a large prime number, we can pad it and then factorize. We assign each prime factor to a {\em scheduling configuration} that is composed of a combination of three decisions: 1) the mapped memory level, 2) the permutation order, and 3) the spatial mapping. Each prime factor has exactly one scheduling configuration. Here, we use a binary matrix $\mathbf{X}$ to represent the prime-factor allocation, i.e., the scheduling space, shown in Table~\ref{table:schedule}. The four dimensions of $\mathbf{X}$ are: 1) the layer dimension variables (indexed by $j$), 2) the prime factors of the loop bounds (indexed by $n$), 3) whether it is a spatial or temporal mapping (indexed by $k$), and 4) the memory and the permutation levels (indexed by $i$). With the prime-factor decomposition, \sys{}'s encoding can represent all possible schedules and guarantees that the optimization solves for the full search space. Table~\ref{table:schedule} shows an example binary matrix $\mathbf{X}$ that represents the schedule shown in Listing~\ref{lst:example_schedule}. First, \sys{} performs the \textit{tiling} optimizations by assigning the prime factors to different memory levels. For example, dimension $K$ is split into two tiles, where the inner tile of size 2 is allocated to the input buffer, and the outer tile of size 2 is allocated in the global buffer. Second, mapping a prime factor to \textit{spatial} execution is indicated by whether the factor is mapped to a spatial column $s$ or a temporal column $t$ in the table. In this example, both prime factors for $K$ are spatially mapped. Finally, for loop \textit{permutation}, we add rank indices $O_0, O_1, ..., O_Z$ to the memory level of interest, where only one prime factor can be mapped to each rank. The lowest-ranked factor is allocated to the innermost loop, while the highest-ranked factor is allocated to the outermost loop. In the example shown in Table~\ref{table:schedule}, the problem dimension N is mapped at the $O_1$ level in the global buffer for temporal mapping, which means the factor $N=3$ is assigned rank 1 at the global-buffer level. With no other temporal factors at the global-buffer level, the factor $N=3$, holding the smallest rank, becomes the innermost loop in the permutation. For the ranking of the permutation, we reserve enough slots for all prime factors at all memory levels. Not all the slots need to be filled, since a prime factor can only be allocated to one memory level.
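As a concrete illustration of this encoding, the sketch below factorizes the loop bounds of the example layer from Table~\ref{table:schedule} and records the scheduling assignments that the text spells out; the dictionary-based encoding (rather than the binary matrix itself) and the helper names are our own.
\begin{lstlisting}[language=Python]
def prime_factors(n):
    # Return the multiset of prime factors of n, e.g. 12 -> [2, 2, 3].
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Example layer: R=3, S=1, P=1, Q=1, C=1, K=4, N=3.
bounds = {'R': 3, 'S': 1, 'P': 1, 'Q': 1, 'C': 1, 'K': 4, 'N': 3}
pf = {d: (prime_factors(b) if b > 1 else [1]) for d, b in bounds.items()}
# pf == {'R': [3], 'S': [1], ..., 'K': [2, 2], 'N': [3]}

# One scheduling configuration per prime factor: (memory level,
# spatial/temporal, permutation rank).  Only the assignments stated
# in the text are shown; factors of size 1 are trivial, and the rank
# of the outer K tile is not spelled out in the prose.
schedule = {
    ('K', 0): ('InputBuf',  'spatial',  None),  # inner tile of K
    ('K', 1): ('GlobalBuf', 'spatial',  None),  # outer tile of K
    ('N', 0): ('GlobalBuf', 'temporal', 1),     # rank O_1, innermost
}
\end{lstlisting}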
\subsubsection{Constant Parameters} \label{sec:framework_constants} \begin{table}[t] \footnotesize \begin{minipage}{.4\linewidth} \centering \begin{tabular}{c|c|c|c|c} \toprule \multirow{2}{*}{ } & \multicolumn{3}{c|}{ Related } & Idx \\ \cline{2-5} & W & IA & OA & $v$ \\ \midrule \hline R & \cmark & - & & \multirow{7}{*}{ $j$ } \\ \cline{1-4} S & \cmark & - & & \\ \cline{1-4} P & & \cmark & \cmark& \\ \cline{1-4} Q & & \cmark & \cmark & \\ \cline{1-4} C & \cmark & \cmark & & \\ \cline{1-4} K & \cmark & & \cmark & \\ \cline{1-4} N & & \cmark & \cmark & \\ \hline \bottomrule \end{tabular} \end{minipage} \qquad \begin{minipage}{.45\linewidth} \centering \begin{tabular}{c|c|c|c|c} \toprule \multirow{2}{*}{ } & \multicolumn{3}{c}{ Related } & Idx \\ \cline{2-5} & W & IA & OA & $v$ \\ \midrule \hline Register & \cmark & \cmark & \cmark & \multirow{6}{*}{ $i$ } \\ \cline{1-4} AccBuf & & & \cmark & \\ \cline{1-4} WBuf & \cmark & & & \\ \cline{1-4} InputBuf & & \cmark & & \\ \cline{1-4} GlobalBuf & & \cmark & \cmark & \\ \cline{1-4} DRAM & \cmark & \cmark & \cmark & \\ \hline \bottomrule \end{tabular} \end{minipage} \caption{\small Constant binary matrices $\mathbf{A}$ (left) and $\mathbf{B}$ (right). $\mathbf{A}$ encodes how different layer dimensions associate with data tensors. $\mathbf{B}$ encodes which data tensor can be stored in which memory hierarchy.} \label{tab:constant-matrix} \end{table} \smallskip In addition to the loop-related variables, there are intrinsic relations across different components of the architecture and layer specifications that must be encoded as constant parameters. \sys uses two constant binary matrices to encode these unique relations in the DNN scheduling space, shown in Table~\ref{tab:constant-matrix}. The first binary constant matrix, $\mathbf{A}$, encodes the association between layer dimensions (i.e., rows of the matrix) and data tensors (i.e., columns of the matrix). For each input (IA), weight (W), and output (OA) tensor, matrix $\mathbf{A}$ indicates which layer dimensions, i.e., $R, S, P, Q, C, K, N$, should be used to calculate the data transaction size as well as the multicast and reduction traffic on the accelerator. In addition, we introduce another binary matrix $\mathbf{B}$ to represent which memory hierarchy level can be used to store which data tensor. DNN accelerators typically deploy a multi-level memory hierarchy, where each memory level can be used to store different types of data tensors. For example, matrix $\mathbf{B}$ shown in Table~\ref{tab:constant-matrix} represents an architecture that has dedicated input and weight buffers for input activations and weights, respectively, while providing a shared global buffer to store input and output activations. \subsection{\sys{} Constraints} \label{sec:constraints} This section discusses the constraints derived from the target accelerator architecture that must be satisfied in \sys{} and shows how to express them with \sys{} variables and constants. \subsubsection{Buffer Capacity Constraint} \label{sec:buf_util} To generate a valid schedule in a software-managed memory system, a key constraint is to ensure that the size of the data sent to a buffer does not exceed the buffer capacity. The hardware memory hierarchy is represented by the binary constant matrix $\mathbf{B}$ discussed earlier. For each memory buffer, based on the tensor-dimension correlation matrix $\mathbf{A}$, we calculate the tiling size of each tensor by multiplying together the relevant prime factors indicated by $\mathbf{X}$.
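Since the capacity constraint is driven entirely by $\mathbf{A}$ and $\mathbf{B}$, it is worth writing the two matrices out in full; the sketch below transcribes Table~\ref{tab:constant-matrix} (the list-of-lists layout is ours, and the table's partial '-' entries for the kernel dimensions versus the input window are treated as zeros for simplicity).
\begin{lstlisting}[language=Python]
dims    = ['R', 'S', 'P', 'Q', 'C', 'K', 'N']              # index j
tensors = ['W', 'IA', 'OA']                                 # index v
levels  = ['Register', 'AccBuf', 'WBuf', 'InputBuf',
           'GlobalBuf', 'DRAM']                             # index i

# A[j][v] = 1 iff layer dimension j contributes to tensor v.
A = [[1, 0, 0],   # R
     [1, 0, 0],   # S
     [0, 1, 1],   # P
     [0, 1, 1],   # Q
     [1, 1, 0],   # C
     [1, 0, 1],   # K
     [0, 1, 1]]   # N

# B[i][v] = 1 iff memory level i may hold tensor v.
B = [[1, 1, 1],   # Register
     [0, 0, 1],   # AccBuf (partial sums)
     [1, 0, 0],   # WBuf
     [0, 1, 0],   # InputBuf
     [0, 1, 1],   # GlobalBuf (shared by input/output activations)
     [1, 1, 1]]   # DRAM
\end{lstlisting}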
Both spatial and temporal factors should be included in the buffer utilization. Let $N_j$ be the number of prime factors of the layer dimension $j$. The utilization of buffer level $I$ for tensor $v$ can then be expressed as: \begin{equation} \begin{aligned} \prod^{I-1}_{i=0}\prod^{6,\, N_j}_{j=0,\, n=0}\prod^{1}_{k=0} \begin{cases} prime\_factor_{j,n},& X_{(j,n),i,k}\, A_{j,v}\, B_{I,v} = 1\\ 1, & \text{otherwise} \end{cases}\\ \end{aligned} \end{equation} We then upper-bound the buffer utilization by the corresponding buffer capacity, represented using $M_{I,v}$. However, a problem with this utilization constraint is that it involves products of the decision variables $\mathbf{X}$, making it nonlinear and infeasible to solve with standard constraint solvers. To address this limitation, we take the logarithm of both sides of the constraint to obtain a linear expression for the utilization and encode the if-else statement as: \begin{equation} \begin{aligned} U_{I,v} & =\sum^{I-1}_{i=0}\sum^{6,\, N_j}_{j=0,n=0}\sum^{1}_{k=0} \log(prime\_factor_{j,n})A_{j,v}B_{I,v}X_{(j,n),i,k} \\ & \le \log(M_{I,v}), \forall I, v \end{aligned} \end{equation} To encode different precisions for different data tensors, we add the logarithm of the datatype size $precision_v$ to $U_{I,v}$. \subsubsection{Spatial Resource Constraint} Another set of \sys{} constraints stems from the limited number of spatial resources. At the chip level, there is a limited number of PEs. At the PE level, there is a limited number of multiply-and-accumulate (MAC) units. In~\sys{}, once a factor is assigned to spatial mapping in its configuration, it needs to satisfy: 1) each problem factor can only be mapped to either spatial or temporal execution, and 2) factors that map to spatial execution do not exceed the resource limit of the architecture. These two constraints can be expressed in the equations below: \begin{equation} \begin{aligned} \sum_{i}\sum^{1}_{k=0} X_{(j,n),i,k} = 1, \forall (j,n) \end{aligned} \end{equation} \noindent \begin{equation} \begin{aligned} \sum^{6,\,N_j}_{j=0,n=0}\log(prime\_factor_{j,n})X_{(j,n),I,0} \le \log(S_{I}), \forall I \end{aligned} \end{equation} \noindent where $S_{I}$ is the number of available spatial resources at level $I$. \subsection{Objective Functions} \label{sec:framework_obj} In this section, we describe the objective functions of \sys. Each objective can be used either individually, to optimize a single aspect of performance, e.g., utilization, compute, or communication, or in combination with the others. \subsubsection{Utilization-Driven Objective} High on-chip buffer utilization improves data-reuse opportunities. As demonstrated in prior work~\cite{dinh2020communicationoptimal}, communication lower bounds can be achieved when the tiling block size is optimized for buffer utilization in a system with a one-level cache. In this work, we formulate a utilization objective that aims to maximize the buffer utilization of all tensors, so that the overall communication is minimized. We use the same formulation for the buffer utilization as in Section~\ref{sec:buf_util} and maximize the following linear utilization function: \begin{equation} \begin{aligned} \hat{Util}= \sum^{I-1}_{i=0}\sum^{2}_{v=0}U_{i,v} \end{aligned} \end{equation} Here, maximizing the sum of the utilization over all buffer levels and all tensors in logarithmic form is equivalent to maximizing the geometric mean of the buffer utilization.
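Putting the preceding constraints and this utilization objective together, the fragment below sketches the formulation in gurobipy (the MIP solver interface that \sys uses); the factor list, buffer capacities, and spatial fan-outs are illustrative stand-ins rather than values from the paper.
\begin{lstlisting}[language=Python]
import math
import gurobipy as gp
from gurobipy import GRB

# Constant matrices A (7 dims x 3 tensors) and B (6 levels x 3 tensors)
# as written out in the earlier sketch.
A = [[1,0,0],[1,0,0],[0,1,1],[0,1,1],[1,1,0],[1,0,1],[0,1,1]]
B = [[1,1,1],[0,0,1],[1,0,0],[0,1,0],[0,1,1],[1,1,1]]
L, V = len(B), 3

# Prime factors as (dim index j, factor value) pairs, e.g. K=4, N=3, R=3.
pf = [(5, 2), (5, 2), (6, 3), (0, 3)]
M = [[64] * V for _ in range(L)]    # illustrative capacities (words)
S_lim = [64, 1, 1, 1, 16, 1]        # illustrative spatial fan-out per level

m = gp.Model('cosa_sketch')
X = m.addVars(len(pf), L, 2, vtype=GRB.BINARY, name='X')  # k=0: spatial

# Each prime factor receives exactly one scheduling configuration.
m.addConstrs(X.sum(f, '*', '*') == 1 for f in range(len(pf)))

# Log-linear buffer-capacity constraint and per-tensor utilization U.
U = {}
for I in range(L):
    for v in range(V):
        U[I, v] = gp.quicksum(math.log(val) * A[j][v] * B[I][v] * X[f, i, k]
                              for f, (j, val) in enumerate(pf)
                              for i in range(I) for k in range(2))
        if B[I][v]:
            m.addConstr(U[I, v] <= math.log(M[I][v]))

# Spatial-resource constraint: spatial factors fit the fan-out per level.
for I in range(L):
    m.addConstr(gp.quicksum(math.log(val) * X[f, I, 0]
                            for f, (j, val) in enumerate(pf))
                <= math.log(S_lim[I]))

# Utilization-driven objective in log form.
m.setObjective(gp.quicksum(U.values()), GRB.MAXIMIZE)
m.optimize()
\end{lstlisting}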
Users can also attach weights to different buffer levels or data tensors if they want to optimize for the utilization of a specific memory level. \subsubsection{Compute-Driven Objective} \label{sec:comput_obj} The total number of compute cycles is another factor that affects the quality of schedules. In this formulation, we multiply all the temporal factors to estimate the compute cycles in each PE. Intuitively, this objective allows the constraint solver to exploit the parallelism in the system by mapping more iterations to the spatial resources than to temporal iterations. With the logarithm taken, the objective can again be expressed as a linear function: \begin{equation} \begin{aligned} \hat{Comp} = \sum^{I}_{i=0}\sum^{6,\,N_j}_{j=0,n=0}\log(prime\_factor_{j,n})X_{(j,n),i,1} \end{aligned} \end{equation} \noindent \subsubsection{Traffic-Driven Objective} Communication latency is a key contributing factor to the performance of spatial architectures. \sys{} therefore also includes a traffic-driven objective to capture the communication cost. Specifically, communication traffic can be decomposed into three terms: 1) the data size per transfer, 2) the spatial factors of multicast and unicast traffic, and 3) the temporal iterations. Multiplying these three factors yields the total amount of traffic in the network. Next, we discuss how we capture each of these factors using \sys{}'s representation. First, similar to the buffer utilization expression, the data size per transfer can be computed using the allocated prime factors in matrix $\mathbf{X}$, together with the dimension-tensor correlation matrix $\mathbf{A}$, as shown in the equation below: \begin{equation} \begin{aligned} D_v = \sum^{I-1}_{i=0}\sum^{6,\,N_j}_{j=0,n=0}\sum^{1}_{k=0} \log(prime\_factor_{j,n})A_{j,v}X_{(j,n),i,k} \end{aligned} \end{equation} Second, different spatial factors incur different multicast, unicast, and reduction patterns. The dimension-tensor correlation matrix $\mathbf{A}$ discussed in Section~\ref{sec:framework_constants} can be used to indicate the different traffic patterns. Specifically, depending on whether the spatial dimension, indicated by the binary matrix $\mathbf{X}$, is related to the specific tensor in consideration, represented by the constant matrix $\mathbf{A}$, different traffic patterns, e.g., multicast vs. unicast or reduction vs. unicast, occur. \begin{figure}[t] \centering \includegraphics[trim=110 10 490 10, width=0.98\linewidth]{figs/noc_traffic_pattern.pdf} \caption{\small Different traffic patterns based on the constant matrix $\mathbf{A}$. The two figures (top) show how the constant $\mathbf{A}$ encodes the traffic types (multicast, unicast, reduction) for different data tensors from the global buffer to the PEs. The figures on the bottom show the implication for output-tensor reduction traffic. } \label{fig:noc_traffic_patterns} \end{figure} \figureautorefname~\ref{fig:noc_traffic_patterns} shows how the intrinsic tensor-dimension correlation matrix $\mathbf{A}$ can be used to calculate the different traffic patterns for different variables. For example, as shown in \figureautorefname~\ref{fig:noc_traffic_patterns}a, if the dimension $P$ is mapped spatially, $A_{P, \text{W}}=0$ implies multicast traffic for the weight tensor W. Since the weights are not related to $P$, when we send weights from the global buffer to the PEs, the weight traffic is multicast to the destination PEs.
If the dimension $C$ is mapped spatially, $A_{C, \text{W}}=1$ (\figureautorefname~\ref{fig:noc_traffic_patterns}b) implies unicast traffic for the weight tensor W, as the weights are related to $C$. Similarly, if the dimension $C$ is mapped spatially, $A_{C, \text{OA}}=0$ (\figureautorefname~\ref{fig:noc_traffic_patterns}c) implies reduction traffic for the output tensor OA, where partial sums need to be reduced across $C$ before being sent back to the global buffer. If the dimension $P$ is mapped spatially, $A_{P, \text{OA}}=1$ (\figureautorefname~\ref{fig:noc_traffic_patterns}d) indicates unicast traffic for the output tensor OA, as each transfer contributes to a different region of the output. \sys{} formulates this relationship in the following equation: \begin{equation} \begin{aligned} L_v =\sum^{6,\,N_j}_{j=0,n=0}\log(prime\_factor_{j,n})X_{(j,n),I,0}A_{j,v} \end{aligned} \end{equation} The third term, the temporal iteration count, is used to calculate the number of data transfers at the NoC level. We introduce a traffic iteration factor $\mathbf{Y}$ that is a function of $\mathbf{X}$ at the permutation level, $\mathbf{A}$, and $\mathbf{B}$. $\mathbf{Y}$ indicates whether the outer NoC loop bound should be used for the different variables. With $\mathbf{Y}$, we ensure that, for each variable, if a relevant factor term is seen inside the current loop level, the current loop level's factor is used to compute the traffic iteration count, regardless of whether it is related to the data tensor of the variable of interest. This is the term that drives the reuse optimization. Mathematically, $\mathbf{Y}$ is constrained as: \begin{equation} \begin{aligned} &Y_{v,z} \ge \sum^{6,\,N_j}_{j=0,n=0}X_{(j,n),z,1}A_{j,v}B_{I,v}, \forall z, \forall v \\ & Y_{v,z} \ge Y_{v,z-1}, \forall z > 0, \forall v \end{aligned} \end{equation} \noindent where $z$ represents the position index for the permutation and $Z$ equals the total number of valid levels for the permutation. The traffic iteration term can thus be expressed as: \begin{equation} \begin{aligned} T_v = \sum^{Z-1}_{z=0} \sum^{6,N_j}_{j=0,n=0}\log(prime\_factor_{j,n})Y_{v,z}X_{(j,n),z,1} \end{aligned} \end{equation} This turns the linear objective into a quadratic one, as we multiply $\mathbf{Y}$ with $\mathbf{X}$ to indicate whether there is a factor at the current permutation level. After calculating each individual term, we combine them for each tensor that contributes to the total traffic in the network. Similar to the logarithmic transformation performed earlier, instead of multiplying these three terms together, we take the logarithm of both sides to get a linear expression for the traffic, as shown in the equation below: \begin{equation} \begin{aligned} \hat{Traf} = \sum^{2}_{v=0} ( D_v + L_v + T_v ) \end{aligned} \end{equation} \noindent \subsubsection{Overall Objective} One can construct a composite objective comprising a linear combination of $\hat{Util}$, $\hat{Comp}$, and $\hat{Traf}$, where we want to minimize the compute and communication latency while maximizing the on-chip buffer utilization: \begin{equation} \begin{aligned} \hat{O}= - w_U \hat{Util} + w_C\hat{Comp} + w_T \hat{Traf} \end{aligned} \label{eqn:obj} \end{equation} where $w_U, w_T, w_C$ are user-selected parameters controlling the importance of each objective. For a system with double-buffering optimization, $w_T$ can be set to map the traffic sizes to the cycles for memory accesses.
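Stepping back to the traffic-pattern analysis, the four cases in \figureautorefname~\ref{fig:noc_traffic_patterns} reduce to a small lookup over the constant matrix $\mathbf{A}$; the sketch below encodes that lookup (the function name and the explicit case checks are ours).
\begin{lstlisting}[language=Python]
dims, tensors = ['R','S','P','Q','C','K','N'], ['W','IA','OA']
A = [[1,0,0],[1,0,0],[0,1,1],[0,1,1],[1,1,0],[1,0,1],[0,1,1]]

def traffic_type(dim, tensor):
    # Traffic pattern when `dim` is mapped spatially.  For tensors sent
    # to the PEs (W, IA): an unrelated dim means every PE receives the
    # same data (multicast); a related dim means each PE needs its own
    # slice (unicast).  For the output (OA): an unrelated dim means
    # partial sums must be combined across PEs (reduction).
    related = A[dims.index(dim)][tensors.index(tensor)] == 1
    if tensor == 'OA':
        return 'unicast' if related else 'reduction'
    return 'unicast' if related else 'multicast'

assert traffic_type('P', 'W')  == 'multicast'   # case (a)
assert traffic_type('C', 'W')  == 'unicast'     # case (b)
assert traffic_type('C', 'OA') == 'reduction'   # case (c)
assert traffic_type('P', 'OA') == 'unicast'     # case (d)
\end{lstlisting}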
Setting $w_T$ this way brings $w_T \hat{Traf}$ to the same level of importance as $w_C \hat{Comp}$ in the optimization. Another formulation of the overall objective function, which balances the memory access and compute cycles, is to minimize the difference between the two terms: $\hat{D} = w_T\hat{Traf} - w_C \hat{Comp}$. The weights of the different objectives can be determined by using a set of micro-benchmarks that characterize the compute, memory, and communication latencies of the target architecture. \subsection{Limitations of \sys} \sys leverages the regularity of both the problem and the architecture space: it assumes a dense CNN workload and does not exploit the sparsity of the data. It is also best suited to hardware systems with deterministic behavior and explicitly managed scratchpads. This is because, in systems with non-deterministic behaviors, it can be challenging to construct optimization objectives that capture the impact of such behaviors. However, \sys can be augmented with an iterative search over the objective functions and their corresponding hyperparameters to approximate the unknown hardware performance model and directly prune the invalid points from the search space. \section{The \sys{} Framework} To navigate the large scheduling space of DNN accelerators, we develop \sys, a constrained-optimization-based DNN scheduler that automatically generates high-performance schedules for spatially distributed accelerators. \sys{} not only deterministically solves for a good schedule in one pass, without the need for exhaustive search or iterative sampling, but can also easily be applied to different network layers and hardware architectures. This section discusses the \sys framework and how \sys formulates the DNN scheduling problem with mixed-integer programming (MIP). \section{Methodology} \label{sec:methodology} \begin{table*}[t] \centering \resizebox{0.7\linewidth}{!} { \begin{tabular}{cc|cc|cc} \toprule \multicolumn{2}{l|}{ \textit{Arithmetic :}} & \multicolumn{2}{l|}{ \textit{Storage :} } & \multicolumn{2}{l}{ \textit{Network :} } \\ \cmidrule(lr){1-2}\cmidrule(lr){3-4} \cmidrule(lr){5-6} \textbf{MACs} & 64 / PE & \textbf{Registers} & 64B / PE & \textbf{Dimension} & 4$\times$4 \\ \textbf{Weight/Input} & \multirow{2}{*}{ 8bit } & \textbf{Accum. Buffer} & 3KB / PE & \textbf{Router} & Wormhole \\ \textbf{Precision} & & \textbf{Weight Buffer} & 32KB / PE & \textbf{Flit Size} & 64b \\ \textbf{Partial-Sum} & \multirow{2}{*}{ 24bit } &\textbf{ Input Buffer} & 8KB / PE & \textbf{Routing} & X-Y \\ \textbf{Precision} & &\textbf{ Global Buffer} & 128KB & \textbf{Multicast} & Yes \\ \bottomrule \end{tabular} } \caption{\small The baseline DNN accelerator architecture.} \label{table:arch} \end{table*} This section discusses the evaluation platforms we use, followed by the experimental setup for the \sys evaluation. \subsection{Evaluation Platforms} \label{sec:infra} We evaluate the schedules generated by \sys{} on two platforms: 1) Timeloop for cycle performance and energy consumption, and 2) our cycle-exact NoC simulator for overall latency performance. The latter more accurately captures the communication overhead and concurrent hardware behaviors of a spatial architecture. \textbf{Timeloop} provides microarchitecture and technology-specific energy models for estimating the performance and energy of DNN accelerators.
Timeloop reports the performance in terms of the maximum number of cycles required for each processing element to complete the workload and to perform memory accesses, assuming perfect latency hiding with double buffering. The energy consumption in Timeloop is calculated by multiplying the access count of each hardware component with the energy per access and summing up the products. The access count is inferred from the schedule, and the energy per access is provided by an energy reference table in Timeloop. \textbf{NoC Simulator} augments the Timeloop analytical compute model for PEs with a synthesizable NoC implementation to reflect the communication cost. Communication is one of the key contributing factors to latency in a NoC-based system, especially for communication-bound schedules. The NoC simulator is transaction-based and cycle-exact for modeling the on-chip traffic. Leveraging the synthesizable SystemC router design from Matchlib~\cite{khailany2018modular} that supports unicast and multicast requests, we construct a resizable 2-D mesh network and implement an X-Y routing scheme. The simulator captures both computation and communication latencies by concurrently modeling data transfers in the NoC, the PE executions, and off-chip DRAM accesses based on the DRAMSim2 model~\cite{rosenfeld2011dramsim2}, where the impact of traffic congestion on the NoC is also captured. \subsection{Baseline Schedulers} We evaluate \sys against two other scheduling schemes: 1) a \textbf{Random} scheduler that searches for five different valid schedules, from which we choose the one with the best result for the target metric, and 2) the \textbf{Timeloop Hybrid} mapper in Timeloop~\cite{timeloop2019-ispass} that randomly selects a tiling factorization, prunes superfluous permutations, and then linearly explores the pruned subspace of mappings before it proceeds to the next random factorization. For this mapper, we keep the default termination condition, where each thread self-terminates after visiting 500 consecutive mappings that are valid yet sub-optimal. The mapper is run with 32 threads, each of which independently searches the scheduling space until its termination condition is met. Once all threads have terminated, Timeloop returns the best schedule obtained from all 16,000+ valid schedules. \subsection{Experiment Setup} \textbf{Mixed-Integer Program (MIP) Solver:} \sys uses Gurobi~\cite{gurobi}, a general-purpose optimization solver for MIP and other constrained programming, as its solver. We specify the \sys variables, constraints, and objective functions before we invoke the solver. The solver takes at most a few seconds to return a schedule for a DNN layer. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{figs/cycle/plot_cycle_4x4_row0.pdf} \includegraphics[width=\linewidth]{figs/cycle/plot_cycle_4x4_row1.pdf} \caption{\small Speedup of different schedules relative to Random search on the baseline 4$\times$4 NoC architecture. X-axis labels follow the naming convention \texttt{R\_P\_C\_K\_Stride}, where $S=R$ and $Q=P$ in all workloads.
\sys achieves $5.2\times$ and $1.5\times$ higher geomean speedup across the four DNN workloads compared to the Random and Timeloop Hybrid searches, respectively.} \label{fig:4x4results_cycle} \end{figure*} \vspace{1ex} \textbf{DNN workloads:} We measure the performance of \sys-generated schedules over a wide range of DNN workloads targeting different DNN tasks with diverse layer dimensions, including: ResNet-50~\cite{resnet}, ResNeXt-50 (32x4d)~\cite{Xie2016resnext}, and DeepBench~\cite{deepbench} (OCR and Face Recognition). The precision used for the benchmarks is 8-bit for the inputs and weights and 24-bit for the partial sums. We do not pad the dimensions to be multiples of 2, as the padding overhead outweighs the benefit of the additional scheduling options it enables. \textbf{Baseline architecture:} We consider a spatial-array architecture like Simba~\cite{shao2019simba} as our baseline. Detailed specifications of the hardware constructs are summarized in Table~\ref{table:arch}. We demonstrate that the \sys framework is general enough to be applied to different architecture parameters while delivering high-performance scheduling options in one shot. \section{Evaluation} \label{sec:eval} In this section, we demonstrate the improved time-to-solution, performance, and energy of \sys compared to the baseline schedulers, across different evaluation platforms and different hardware architectures, on a diverse set of DNN layers. \subsection{Time to Solution} We compare the average time for \sys and the baseline schedulers to generate the schedule of each layer from the four target DNN workloads. Table~\ref{tab:time-to-solution} shows that \sys's optimization-driven approach offers a more than 90$\times$ time-to-solution advantage (4.2s vs. 379.9s) over the Timeloop Hybrid search strategy. Timeloop Hybrid search sampled 67 million schedules per layer and evaluated more than 16 thousand valid ones among them, leading to a long runtime. With Random search, a random sampling of 20K candidates in 4.6 seconds resulted in only five valid schedules, further demonstrating the need for a constraint-based strategy that prunes the invalid search space directly. In the following section, we show that \sys not only shortens the time-to-solution but also generates high-quality schedules. \begin{table}[h] \centering \resizebox{\linewidth}{!} { \begin{tabular}{cccc} \toprule & \sys{} & Random (5$\times$) & Timeloop Hybrid \\ \midrule Avg. Runtime / Layer & \textbf{4.2s} & 4.6s & 379.9s \\ Avg. Samples / Layer & \textbf{1} & 20K & 67M \\ Avg. Evaluations / Layer & \textbf{1} & 5 & 16K+ \\ \bottomrule \end{tabular} } \caption{\small Time-to-solution comparison. \sys outputs only one valid schedule per layer. \sys's runtime is $1.1\times$ and $90\times$ shorter than the Random and Timeloop Hybrid searches, respectively.} \label{tab:time-to-solution} \end{table} \subsection{Evaluation on Timeloop Performance and Energy Models} We compare the performance of the Random search, the Timeloop Hybrid mapper, and the \sys scheduler for four different DNN workloads. The evaluations are based on our baseline architecture described in Table~\ref{table:arch} and the Timeloop evaluation platform described in Section~\ref{sec:infra}. \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{figs/energy/plot_energy_4x4_geomean.pdf} \caption{\small Improvements in total network energy reported by the Timeloop energy model.
Energy estimates are normalized to the results from Random search and are evaluated on the baseline 4$\times$4 NoC.} \label{fig:4x4results_energy} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/plot_obj_breakdown_one.pdf} \caption{\small Objective function breakdown for ResNet-50 layer $3\_7\_512\_512\_1$. The goal is to minimize the total objective in Eq.~\ref{eqn:obj}. \sys achieves the lowest values for all objective functions on this layer among all approaches.} \label{fig:obj_breakdown} \end{figure} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.98\columnwidth} \includegraphics[width=\linewidth]{figs/plot_latency_4x4_geomean_large_pe.pdf} \caption{\small $8\times8$ PEs} \label{fig:larger_pe} \end{subfigure} \begin{subfigure}[b]{0.98\columnwidth} \includegraphics[width=\linewidth]{figs/plot_latency_4x4_geomean_large_gb.pdf} \caption{\small Larger Buffers} \label{fig:larger_buffer} \end{subfigure} \caption{\small Speedup relative to Random search reported by the Timeloop model on different hardware architectures. \sys's performance generalizes across different hardware architectures with different computing and on-chip storage resources.} \label{fig:diffarch} \end{figure*} \begin{figure*}[h] \centering \includegraphics[width=\linewidth]{figs/latency/plot_latency_4x4_row0.pdf} \includegraphics[width=\linewidth]{figs/latency/plot_latency_4x4_row1.pdf} \caption{\small Speedup reported by the NoC simulator relative to Random search on the baseline 4$\times$4 NoC architecture. \sys achieves $3.3\times$ and $2.5\times$ higher geomean speedup across the four DNN workloads compared to the Random and Timeloop Hybrid searches on the more communication-sensitive NoC simulator. } \label{fig:4x4results_latency} \end{figure*} \subsubsection{Performance} \figureautorefname~\ref{fig:4x4results_cycle} shows the speedup reported by Timeloop for the different scheduling schemes relative to Random search. It demonstrates that the \sys-generated schedules are not only valid but also outperform the ones generated by both Random search and Timeloop Hybrid search. The geometric means of the speedups of the \sys schedules relative to the Random and Timeloop Hybrid schedules are $5.2\times$ and $1.5\times$, respectively, across the four DNNs. In the few layers where Timeloop Hybrid search slightly outperforms \sys, we find a higher iteration count at the DRAM level in the Timeloop Hybrid schedules, which helps to reduce the size of each DRAM transaction and balance the pipeline. Fine-tuning the weights of the objective functions could further improve the \sys-generated schedules. A more exhaustive Timeloop Hybrid search (32K valid schedules) results in an improvement of only 7.5\% in latency while increasing the runtime by $2\times$. We find that even with $2\times$ more valid samples evaluated, Timeloop Hybrid search still cannot generate schedules of similar efficiency to \sys's. \subsubsection{Energy} We use the Timeloop energy model to evaluate the energy of the different schedules. Because the energy cost is highly correlated with the access count of each hardware component, our traffic objective in \sys is used for schedule optimization targeting energy efficiency. \figureautorefname~\ref{fig:4x4results_energy} demonstrates that \sys, using no simulation feedback, can generate schedules 22\% more energy-efficient than the best Timeloop Hybrid solutions selected from 16,000+ valid schedules when optimizing for energy.
\subsubsection{Objective Breakdown} A detailed breakdown of the \sys objective function for ResNet-50 layer $3\_7\_512\_512\_1$ is included in \figureautorefname~\ref{fig:obj_breakdown}. Our overall objective function captures an optimization heuristic that maximizes the utilization and minimizes the compute and traffic costs at the same time, using a weighted sum of the three. \figureautorefname~\ref{fig:obj_breakdown} shows that \sys{} achieves the lowest total objective among all approaches and optimizes all three sub-objectives simultaneously. This observation on the objective values aligns with our empirical results in \figureautorefname~\ref{fig:4x4results_cycle}, where the \sys schedule runs $7\times$ faster than the ones generated by Random and Timeloop Hybrid search. \subsubsection{Different HW Architectures} We further explore the performance of \sys with different hardware architecture parameters, such as different PE array sizes and different SRAM buffer sizes. We apply the same weights for the evaluation on the same architecture and customize the objective weights in Eq.~\ref{eqn:obj} using a micro-benchmark for each different architecture. \figureautorefname~\ref{fig:diffarch} shows the geomean speedup of \sys{} across all networks on two different hardware architectures. \textbf{PE Array Dimension}. We scale the number of PEs up by $4\times$ and increase both the on-chip communication and DRAM bandwidth by $2\times$ correspondingly. Both of these modifications significantly impact the compute and communication patterns of DNN layer executions. With a larger spatial array of arithmetic units, this case study presents a scheduling problem where decisions about spatial and temporal mapping are especially crucial to attaining high performance. \figureautorefname~\ref{fig:larger_pe} shows that \sys{} achieves $4.4\times$ and $1.1\times$ speedup compared to Random and Timeloop Hybrid search, respectively, across the four networks. This shows that the performance of our scheduler can scale and generalize to NoCs with more PEs, which tend to be more affected by communication costs. \textbf{SRAM Size}. We also increase the sizes of the local and global buffers to demonstrate that \sys can achieve consistently good schedules across different architectures. The sizes of the local buffers, i.e., the accumulation, weight, and input buffers, are doubled, and the global buffer size is increased $8\times$. Modified memory capacities, at the PE and global-buffer levels, are likely to impact the optimal strategy for data reuse and NoC communication traffic reduction. With \sys{}, we show a $5.7\times$ speedup over Random and a $1.4\times$ speedup over Timeloop Hybrid search in \figureautorefname~\ref{fig:larger_buffer}, demonstrating \sys's capability across different architectures. \subsection{Evaluation on NoC Simulator} To further compare the quality of the schedules generated by the different scheduling schemes, we evaluate them on our NoC simulation platform, which more accurately captures the communication overhead of the on-chip network compared to the Timeloop models. \figureautorefname~\ref{fig:4x4results_latency} shows the speedup relative to the Random baseline. We observe that the \sys-generated schedules outperform the baseline schedules for all four DNN workloads, with the greatest performance gains occurring for convolutional layers, e.g., the DeepBench layers.
Intriguingly, for these same layers, the Timeloop Hybrid scheduler actually underperforms Random search, as its internal analytical model does not accurately capture the communication traffic in the network. On the other hand, there is no significant performance difference among the different schedules for the FC layers, as the FC layers are heavily memory-bound with low PE utilization. DRAM access time dominates in these layers, even under the schedules with the best reuse of buffered data. Overall, \sys achieves a geometric-average speedup of up to $3.3\times$ relative to the best Random search solutions and $2.5\times$ relative to the Timeloop Hybrid search schedules across the four networks. Furthermore, unlike the iteratively searched Random and Timeloop Hybrid schedules, \sys schedules are consistently performant with a one-shot solution. \subsection{Evaluation on GPU} \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{figs/plot_4x4_tvm_resnet50.pdf} \caption{\small Speedup relative to TVM reported on a K80 GPU. } \label{fig:tvm} \end{figure} To show the potential use of \sys for general-purpose hardware, we also formulate GPU scheduling as a constrained-optimization problem using \sys. We evaluate the performance of \sys on a GPU and compare it against TVM~\cite{tvm2018-osdi}. \textbf{Target GPU.} We target the NVIDIA K80 GPU with 2496 CUDA cores and a 1.5MB L2 cache. This GPU has 48KB of shared memory and 64KB of local registers, shared by a maximum of 1024 threads in each CUDA thread block. The thread block is a programming abstraction that represents a group of threads that can be run serially or in parallel in CUDA. The maximum dimension of a thread block is (1024, 1024, 64). Violating these constraints in the CUDA kernel results in invalid schedules. \textbf{Constraints.} \sys expresses the hardware constraints for GPU thread groups and shared/local memory similarly to how we specify the spatial resource and buffer capacity constraints in Section~\ref{sec:constraints}. Each thread group can be seen as a spatial level with a specific size. The product of all three thread-group sizes is constrained to be at most 1024. The shared-memory utilization is calculated in the same way as the buffer-capacity constraints, and the register utilization is calculated by multiplying the total number of threads with the inner-loop register utilization. \textbf{Objective Functions.} In \sys, we compute the compute objective by discounting the total compute cycles by the total number of threads on the GPU, to reflect the performance gain from thread-level parallelism. We then adjust the weights of the other objectives using a micro-benchmark. We run TVM with the XGBoost tuner for 50 trials per layer as the baseline. \sys generates valid schedules in one shot, with a time-to-solution $2,500\times$ shorter than TVM's (0.02s vs. 50s per layer). The \sys-generated schedules achieve a $1.10\times$ geomean speedup compared to the TVM schedules on ResNet-50, as shown in \figureautorefname~\ref{fig:tvm}. \section{Conclusion} \label{sec:conclusion} In this paper, we present~\sys{}, an optimization-driven approach to DNN scheduling. Harnessing the regularities of DNN workloads and target accelerator designs, we formulate scheduling as a constrained-optimization problem that can be solved directly, without incurring the high cost of iterative scheduling.
We devise a single mathematical formulation that simultaneously solves for all three key optimizations in scheduling: loop tiling, loop permutation, and spatial mapping. Compared to schedules generated by state-of-the-art work, our approach achieves up to a $2.5\times$ speedup and $22\%$ better energy efficiency, with a $90\times$ shorter time-to-solution. \section{Case Study: On-chip Memory Partitioning with~\sys{}} \label{sec:allocation} In addition to scheduling for given hardware, \sys{}, thanks to its fast, one-shot nature, can also be applied to the hardware and scheduling co-design problem. In particular, on-chip memory partitioning is a critical design decision that greatly impacts not only the overall area budget but also the scheduling decisions, especially in architectures with multi-level private buffers for different data tensors. Given an on-chip memory budget, the memory partitioning algorithm determines the portion of memory to assign to each local buffer. Current work on the design-space exploration of accelerators for resource allocation~\cite{timeloop2019-ispass, interstellar-asplos2020, kao20confuciux, choi2020dance, zhang2020dna} relies on iterative scheduling schemes that are computationally expensive and can yield sub-optimal solutions. Ours is the first work that formulates both the scheduling decisions and the on-chip memory partitioning problem as a single optimization problem. \subsubsection{Formulation} To co-optimize the scheduling and memory-partitioning decisions, we modify the formulations in Section~\ref{sec:framework} to also include the memory sizes as \sys variables. Instead of treating the log capacities $\log(M_{I,v})$ of the different buffers as constants in Section~\ref{sec:buf_util}, we turn them into MIP variables $m_{I,v}$ that represent the base-2 logarithm of the actual buffer size $w_{I,v}$. Assuming we have $H$ on-chip buffers, we then add another constraint to ensure that the total size of all buffers does not exceed the allocated budget $G$: \begin{equation} \begin{aligned} \sum^{H-1}_{I=0}\sum^{2}_{v=0} w_{I,v} = \sum^{H-1}_{I=0}\sum^{2}_{v=0} 2^{m_{I,v}} \le G \end{aligned} \end{equation} \subsubsection{Evaluation on On-chip Memory Partitioning} \begin{figure}[t] \begin{subfigure}[t]{0.48\columnwidth} \includegraphics[width=1\linewidth]{figs/sram_part_alexnet.pdf} \caption{\small \sys-generated memory partitions for AlexNet layers. } \label{fig:sram_part} \end{subfigure} \hspace{1pt} \begin{subfigure}[t]{0.48\columnwidth} \includegraphics[width=1\linewidth]{figs/sram_part_sim_cycle.pdf} \caption{\small Speedup on hardware with \sys-generated memory partitions for AlexNet layers. } \label{fig:sram_sim_cycle} \end{subfigure} \end{figure} Co-optimizing the hardware and the schedule opens up new opportunities to balance different tradeoffs, not only in the scheduling space but also in the hardware design. As a starting point, we demonstrate how \sys{} can be extended to capture these design and scheduling tradeoffs. In particular, Figure~\ref{fig:sram_part} shows the on-chip memory partitions solved by \sys for each layer in AlexNet. We observe significantly different preferred partitions for different layers, where some partitions lead to up to an 85\% reduction in the total SRAM size. At the same time, Figure~\ref{fig:sram_sim_cycle} shows the corresponding performance of running each layer with \sys-generated schedules on the co-optimized hardware design.
We see an 11\% improvement in the geomean speedup on the co-optimized hardware. This case study shows a promising application of \sys to hardware-software co-design. \sys can be further extended to co-optimize other hardware design decisions by relaxing the architectural constraints in a manner similar to the one illustrated in this case study. \section*{Acknowledgements} The authors would like to thank Lianmin Zheng for providing the TVM tuning scripts and scheduling templates, and Kostadin Ilov for the computing system support. This work was supported in part by the CONIX Research Center, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA, the Berkeley Wireless Research Center, the ADEPT Lab industrial sponsors (Intel, Apple, Futurewei, Google, Qualcomm, Seagate, Western Digital), and a Facebook Faculty Research Award. \section{Motivation} Recent advances in Deep Neural Networks (DNNs) have led to the active development of specialized DNN accelerators, many of which feature a large number of processing elements laid out spatially, together with a multi-level memory hierarchy and a flexible interconnect. While DNN accelerators improve the peak throughput and data-reuse opportunities, they also expose a large number of runtime parameters that programmers need to manage explicitly: how computation is scheduled both \textit{spatially} and \textit{temporally}. In fact, different scheduling choices can lead to widely varying performance and efficiency, motivating the need for a fast and effective search strategy to navigate the vast scheduling space. \section{Limitations of the State of the Art} Given that the mapspace of a single DNN layer can be extremely large ($\sim$billions of valid schedules), finding the optimal schedule can quickly become an intractable problem. State-of-the-art schedulers tackle this complexity mainly using one of the following two approaches. First, many schedulers resort to exhaustive search~\cite{parashar2019timeloop, dave2019dmazerunner,yang2020interstellar,chatarasi2020marvel}. Such an approach is rendered viable by first pruning the mapspace based on both the hardware and convolutional-layer constraints. The disadvantages of this approach are two-fold: not only does such a brute-force search tend to be exceedingly expensive for more complex hardware architectures, but the generated schedules often do not perform optimally, since the analytical models fail to accurately capture the communication latency across the NoC. Another class of current schedulers uses machine learning (ML) algorithms or other statistical methods~\cite{chen2018tvm, ragan2013halide, jia2019_flexflow} to either improve the accuracy of the cost model or directly solve for the solution using blackbox tuning. However, their computational costs can significantly outweigh the improvements they yield in the generated schedule. To date, none of the schedulers following this approach have been able to address all three loop transformations we consider. \section{Key Insights} To address the large search space of the scheduling problem and the inefficiency of existing search strategies, we present \sys, a constraint-solver-based approach for scheduling DNN accelerators.
Different from existing approaches that either rely on designers' heuristics or on expensive iterative methods to prune the search space, the key idea of \sys is to formulate the scheduling decisions as a constraint-satisfaction problem that can be deterministically solved using constraint solvers. In particular, \sys leverages the regularities in DNN operators and hardware to formulate the DNN scheduling space as a mixed-integer programming problem with algorithmic and architectural constraints, from which it can automatically generate a highly efficient schedule in a single pass. We demonstrate that \sys-generated schedules significantly outperform state-of-the-art approaches by 2$\times$ across a wide range of DNN networks while reducing the time-to-solution by 10$\times$. \section{Framework} \sys{} targets scheduling optimizations at the operator level for mapping DNN workloads onto spatial accelerators. Similar to other standard constraint-solving-based frameworks, it consists of: \begin{enumerate}[leftmargin=.4in] \item program- and architecture-dependent constraints \item decision variables to solve for \item objective functions \end{enumerate} It then optimizes the objective function subject to the constraints. \noindent \subsection{\sys{} Variables} \sys{} takes the specifications of the DNN layers and the underlying spatial accelerator as input constraints and generates a valid schedule that is optimized towards the objective function. \sys{} makes three optimization decisions during the scheduling process: \begin{enumerate} \item \textbf{Loop Tiling} is a critical optimization for exploiting the parallelism and data locality in a schedule. In Fig.~\ref{fig:tilling_characterization} of Section~\ref{sec:bg_motivation}, we show that varying only the tiling factors of a schedule can lead to up to a 7$\times$ difference in performance. \item \textbf{Loop Permutation} affects the ordering of the outer temporal loop dimensions and changes the reuse patterns for different tensors. Loop permutation on a particular layer schedule alone can cause a $\sim$30\% performance difference on our SystemC-based NoC simulator. \item \textbf{Spatial Mapping} determines whether to leverage spatial resources to parallelize the workload. Parallelization reduces the overall compute latency; however, there is always a communication cost involved when mapping to spatial resources on a NoC-based accelerator. Our framework aims to expose the search space for choosing a spatial dimension, as well as to determine to what extent to parallelize each dimension, as these choices also affect the traffic patterns of the NoC-based system. \end{enumerate} \noindent These three optimizations should be co-optimized in one pass, as they are interdependent. Decisions made in each optimization prune the search space of the others, and the final performance gain is not a linear combination of the individual optimizations. \subsection{\sys{} Constraints} \begin{enumerate} \item \textbf{Buffer Utilization} \item \textbf{Spatial Resource Usage} \end{enumerate} \subsection{\sys{} Objective Functions} \begin{enumerate} \item \textbf{Utilization-Driven Objective} \item \textbf{Traffic-Driven Objective} \item \textbf{Compute-Driven Objective} \end{enumerate} \end{document}
{ "timestamp": "2021-05-06T02:11:34", "yymm": "2105", "arxiv_id": "2105.01898", "language": "en", "url": "https://arxiv.org/abs/2105.01898" }
\section{Introduction} Deep learning (DL) establishes state-of-the-art performance in a number of learning tasks, such as image recognition~\cite{russakovsky2015imagenet,NIPS2012_4824}, speech recognition~\cite{alex2018speech,chiu2018state}, and natural language processing~\cite{Bahdanau2014nlp,DBLP:journals/corr/abs-1810-04805}. In recent years, DL approaches were also shown to outperform humans in classic games such as Go~\cite{Silver_2016}. The performance of DL models depends on three factors: the model's architecture, the data set, and the training infrastructure, which together constitute the artificial intelligence (AI) trinity framework~\cite{Trinity}. It has been observed empirically~\cite{chen2019data} that increasing the size of the training data is an extremely effective way of improving the performance of DL models, more so than modifying the network's architecture or training infrastructure. The generalization ability of a network is defined as the difference between the training and test performance of the network. The relationship between the generalization ability of a DL model and the size of the training data set constitutes a fundamental characteristic of the AI trinity framework, yet it is poorly described in the literature. Addressing this problem is crucial for safety-critical applications, where it is desired to understand the sample complexity of the model, or, in other words, to estimate the amount of data needed to achieve a certain acceptable level of performance. This paper aims at describing this relationship, in the context of a supervised learning setting, through a set of mixed mathematical-empirical tools. Classic approaches in statistical learning theory for analyzing sample complexity rely on measures of the capacity of a classification model, such as the Vapnik-Chervonenkis (VC) dimension~\cite{vapnik2013nature} or the Rademacher complexity~\cite{bartlett2002rademacher}, which are potentially prohibitive to extend to practical DL models and may lead to loose estimates (for example, in the case where these measures approach infinity). The aim of this work is to develop a framework for modeling the sample complexity of DNNs that is free from such measures and straightforward for practitioners to use. We propose to model the probability of a DNN making an error on a test sample as a function of the distance in the feature space between that sample (we denote the feature vector of the test sample as $\hat{x}$) and its nearest training sample (we denote the feature vector of this training sample as $x(\hat{x})$). This probability takes the following form: \vspace{-10pt} \begin{equation} \label{eq:phi_of_x} \Phi(\hat{x}) \coloneqq \min\Big(1, \frac{||\hat{x} - x(\hat{x})||_2}{\delta}\Big), \end{equation} where $\delta$ is a positive constant that we call the radius\footnote{Our empirical results were not sensitive to the choice of distance measure. We obtained similar results for the squared distance.}. Thus, when a test example is farther than $\delta$ from the training examples in the feature space, the test example will be misclassified according to our model. The choice of the form of $\Phi(\hat{x})$ is also intuitive, since as we increase the amount of training data, it becomes more reasonable to expect a test point to lie within a certain distance of a training data point.
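For concreteness, the error model in Equation~\ref{eq:phi_of_x} can be transcribed directly; the sketch below evaluates $\Phi$ for a batch of test points, with the brute-force nearest-neighbor computation as our simplification for illustration.
\begin{lstlisting}[language=Python]
import numpy as np

def error_probability(test_feats, train_feats, delta):
    # Phi(x_hat) = min(1, ||x_hat - x(x_hat)||_2 / delta), where
    # x(x_hat) is the nearest training point in the feature space.
    # test_feats: [n_test, d]; train_feats: [n_train, d].
    d2 = ((test_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(-1)
    nearest = np.sqrt(d2.min(axis=1))
    return np.minimum(1.0, nearest / delta)
\end{lstlisting}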
Our approach is motivated by the fact that, fundamentally, DNNs can be interpreted as methods of non-linear dimensionality reduction that cluster together input data with similar features while pushing dissimilar data points further apart~\cite{amjad2019learning,Yang_2016_CVPR}. Therefore, a test point far from the closest training point in the feature space is more likely to be incorrectly classified. In addition to assuming the aforementioned error mechanism for DNNs, we also assume that the DNNs under consideration can learn perfectly and achieve zero error on the training data set. This assumption is in practice non-restrictive, as it has been observed empirically that DNNs of sufficient size can fit accurately labelled training data.\footnote{We assume that the labeling of the training data is consistent and does not contain any mistakes.} Finally, in order to estimate the generalization error as a function of the training data size ($N$), we define the notion of the model-dependent effective dimensionality of the data, $d$. It is the minimum dimensionality to which one can compress the feature vector without affecting the performance of the model. We find that this dimensionality is very low in practice. Under the above framework, we estimate that the generalization error of a DNN follows a power law and scales inversely with $\delta N^{1/d}$. To the best of our knowledge, our work is distinct from existing work in the literature on the sample complexity of DNNs in a number of ways: i) it provides generalization error estimates, rather than mathematically rigorous error bounds, ii) our result is free from complex capacity measures and relies on quantities ($d$ and $\delta$) that can be easily obtained empirically, iii) we perform an exhaustive experimental evaluation of our theoretical result, and iv) our estimates can be easily adopted by practitioners to assess the amount of training data needed to meet the performance requirements of their applications. The paper is organized as follows: Section~\ref{sec:related_work} reviews the related work, Section~\ref{sec:theoretical_framework} provides the mathematical derivations, Section~\ref{sec:experiments} reports the experimental results, and finally Section~\ref{sec:conclusion} concludes the paper. The Supplement contains additional mathematical derivations and empirical evidence. \section{Background and Related Works} \label{sec:related_work} Statistical learning theory typically bounds the generalization error~\cite{bousquet2003introduction,doi:10.1002/wics.179,jiang2020fantastic} using concentration inequalities, e.g., Hoeffding's inequality~\cite{hoeffding1994probability}. The error bounds depend on a measure of the complexity (capacity) of the hypothesis class that can be learned by a statistical classification algorithm. Early bounds for simple learning algorithms, such as the histogram classifier, computed the complexity as the cardinality of the hypothesis class~\cite{ruicastrolecture1}. The corresponding bounds are inapplicable to problems involving an infinite class of functions, for which they become very loose. This led to the development of a new capacity measure, the VC dimension~\cite{Vapnik:1982:EDB:1098680,Vapnik2015,vapnik2013nature,sauer1972density,svm,ehrenfeucht1989general}, which is defined as the cardinality of the largest set of points that the algorithm can shatter and thus does not scale with the size of the hypothesis class.
Resolving the VC bounds for neural networks~\cite{JMLR:v20:17-612,anthony_bartlett_1999} leads to theoretical guarantees that depend on the number of network parameters. Such bounds are not useful for practical networks. The aforementioned error bounds are distribution-free and often loose in practice. This motivated work on distribution-dependent capacity measures such as the VC entropy~\cite{Vapnik1998}, covering numbers~\cite{alon1997scale,zhang2002covering}, and the Rademacher complexity~\cite{koltchinskii2001rademacher,bartlett2002rademacher,luxburg2004distance}. Bounds based on covering numbers were derived for a limited family of classifiers, such as linear functional classes or neural networks with identity activation functions~\cite{zhang2002covering}. The Rademacher complexity measures the ability of functions in the hypothesis space to fit random labels. It has recently been observed that DNNs are powerful enough to fit any set of random labels~\cite{DBLP:conf/iclr/ZhangBHRV17}, thus rendering the Rademacher-complexity-based bounds inadequate. Other capacity measures for neural networks, not mentioned before, include unit-wise capacities~\cite{DBLP:journals/corr/abs-1805-12076}, which led to generalization bounds for two-layer ReLU networks. An excellent comparison of existing DNN generalization measures can be found in~\cite{Fan2020}. Estimating generalization bounds using PAC-Bayesian approaches and margin-based analysis~\cite{mcallester1999pac,mcallester2003simplified,langford2003pac} is still an active area of research. More recently, several bounds based on the PAC-Bayes approach have been presented for stochastic and compressed networks~\cite{dziugaite2017computing, arora2018stronger, zhou2018non}. These bounds are computational in nature and explore modifications of the standard training procedure to obtain tighter (non-vacuous) generalization guarantees. However, they are still too loose ($\gg 0$) to practically study the sample complexity of DNNs. There also exist works that study the generalization phenomenon in DL from the perspective of the behavior of the optimization algorithm that minimizes the training loss. They focus on the convergence properties of the optimizers and are therefore outside the focus of this paper, with the exception of~\cite{neyshabur2014search}, which argues for the existence of an ``inductive bias'' imposed by the optimizer, such as SGD, that restricts neural networks to a simple class of functions. This idea is linked with the notion of a network's capacity, though it is unclear how to use it to obtain sample complexity guarantees. Above we discussed works that aim at proving theoretical bounds on the generalization error. Existing bounds that hold only for simplified DNNs typically scale with the training data size as $\mathcal{O}(1/\sqrt{N})$. An empirical family of approaches, which we discuss next, instead studies ways of extrapolating the learning curves, i.e., the dependence of the error on the amount of training data, using parametric models. Among these works are linear, logarithmic, exponential, and power-law parametric models~\cite{DBLP:conf/aistats/FreyF99} that were applied to decision trees. A subsequent paper~\cite{gu2001modelling} explored a vapor-pressure model, the Morgan-Mercer-Flodin (MMF) model, and the Weibull model to predict learning curves for classification algorithms such as decision trees and logistic discrimination.
Empirical results obtained for a $2$-layer neural network on the MNIST data set showed that the learning curve decays following a power law with a decay factor in the range $[1,2]$~\cite{cortes1994learning,cortes1995limits}. This behavior of the learning curve was also observed in other applications~\cite{DBLP:journals/corr/abs-1712-00409}. These parametric modeling approaches are not supported by theoretical arguments. Finally, the research works most closely related to our approach present asymptotic estimates of learning curves for Gaussian processes~\cite{sollich2002gaussian,williams2000upper}, kernel methods~\cite{spigler2019asymptotic}, and wide neural networks trained in the regime of neural tangent kernels~\cite{cohen2019learning}. These works do not apply to a practical deep learning setting, but provide useful insights into the mathematical modeling of complex learning phenomena. \section{Generalization error estimation} \label{sec:theoretical_framework} In this section we derive the estimates for the generalization error of a DL model. Our analysis is performed under the assumptions that the model can learn the training data set with perfect accuracy and that the probability of making an error on a test example takes the form given in Equation~\ref{eq:phi_of_x}. \subsection{Effective dimensionality} Practical DNNs are over-parameterized, i.e., the number of parameters far exceeds the number of training data samples. This over-parameterization induces redundancy in network weights~\cite{denton2014exploiting,DBLP:conf/iclr/MolchanovTKAK17,han2015learning}, which particularly manifests itself at the output of the feature extractor of the network (the feature extractor typically precedes the fully connected layers of the model). The data representation there has to be simple enough that the last layers of the network, which constitute a shallow classifier, can perform accurate prediction. It has been noted in past works that this feature vector is low-dimensional~\cite{ravichandran2019using,DBLP:journals/corr/abs-1804-07090,DBLP:journals/corr/abs-1906-00443,DBLP:journals/corr/abs-1905-12784}. We next describe how we define and find the effective dimensionality of the feature space. We introduce a bottleneck network consisting of two linear layers, each followed by a ReLU non-linearity, before the output layer of the network. The bottleneck takes a $D$-dimensional feature vector as input, projects it down to dimensionality $d'$, and then projects it back up to the input dimension $D$. We insert the bottleneck into the trained model and fine-tune the entire network. The effective dimensionality $d$ is the smallest value of $d'$ for which the accuracy of the model with the bottleneck does not differ significantly from the accuracy of the original model without it. An empirical evaluation of $d$ for different networks and data sets is presented in the experimental section. The existence of a small effective dimensionality of the feature space has been observed before in various works. Specifically,~\cite{ravichandran2019using} defines the effective dimensionality of the feature maps in terms of the singular values of their covariance matrix. They observe that, as we move from the input to the output layer of the network, the effective dimensionality first increases and then drops. They report the effective dimensionality at the final layer of the network and show that it is as low as 2 for the Tiny ImageNet and CIFAR-10 data sets.
Furthermore, they also observe a much sharper decline in effective dimensionality for large networks compared to small ones. Similarly,~\cite{DBLP:journals/corr/abs-1804-07090} observed that $<10$ singular values of the matrix of vectorized representations are enough to explain $>99\%$ of the variance. They noted that enforcing an even stronger low-rank structure on the feature covariance matrix can lead to better performance and robustness to adversarial examples. \cite{DBLP:journals/corr/abs-1906-00443,DBLP:journals/corr/abs-1905-12784} utilize an ``ID estimator'' previously introduced in~\cite{facco2017estimating}, which relies on the ratio of distances to the nearest and second-nearest neighbor of a data point, to analyze the intrinsic network dimensionality. These authors also observe that the neural network first increases and then decreases its intrinsic dimensionality, to as low as $10$, when moving towards the network's output. Another work~\cite{goldfeld2018estimating} reports similar behavior of the mutual information, which was found to drop below $4$ nats closer to the final layer of the neural network. Finally, numerous network compression approaches implicitly rely on the existence of a small effective dimensionality of the feature space when pruning network connections; they achieve compression rates of $\approx 90\%$~\cite{DBLP:conf/iclr/ZhuG18} with a negligible loss of model accuracy. We next move to our mathematical modeling of the generalization error. To simplify the analysis, we first consider the case where the effective dimensionality of the feature space is one and then extend the analysis to the general case of arbitrary dimensionality. Let $f_{train}$ and $f_{test}$ denote the probability density functions of the train and test feature distributions. Then, under the proposed error model defined in Equation~\ref{eq:phi_of_x}, the overall probability of making an error on the test set is given by the expectation $\mathbb{E}_{f_{\text{test}}}[\Phi]$. \vspace{-5pt} \subsection{Generalization error estimates for the one-dimensional case ($d$ = 1)} Let $\hat{x}$ be a given test point in the feature space whose immediate nearest training points in the feature space are $x_{i}$ and $x_{j}$, such that $\hat{x} \in (x_{i}, x_{j})$, and let $\rho(\hat{x}) = |x_{j} - x_{i}|$. Intuitively, as we increase the number of training data points sampled from $f_{train}$, the distance between two neighboring training samples, i.e., $\rho(\hat{x})$, decreases. Assume that $f_{test}$ is close to a uniform distribution, denoted as $u$, in the interval $(x_{i}, x_{j})$. This is a realistic assumption: at the tail of the distribution training samples are observed rarely, but the training distribution there is flat, whereas in high-concentration regions, where the training distribution changes quickly, the training samples are observed close to each other. Thus, in the latter case, the dynamics of the changes of the distribution are compensated by the small distance between samples. Moreover, the more data we have, which is the regime we are mostly interested in analyzing, the more accurate this assumption is. Since the test point $\hat{x}$ is uniformly distributed in the interval $(x_{i}, x_{j})$, the distance from the test point to its closest training point (denoted as $\psi(\hat{x})$) is also uniformly distributed in the range $[0, \frac{\rho(\hat{x})}{2}]$.
We can compute the expectation of $\psi(\hat{x})$ as (see Derivations for Equation~\ref{eq:exp_psi_1d} in the Supplement), \begin{equation} \label{eq:exp_psi_1d} \mathbb{E}_u^{\langle x_{i}, x_{j} \rangle}[\psi(\hat{x})] = \frac{|x_j - x_i|}{4} = \frac{\rho(\hat{x})}{4}. \end{equation} In the large data regime, we can approximate the distance between two training points ($\rho(\hat{x})$) as the limit of the ratio of the length of the interval to the number of points lying in the interval: \begin{align*} & \rho(\hat{x}) \approx \lim_{\Delta \to 0} \frac{\Delta}{\int_{\hat{x} - \frac{\Delta}{2}}^{\hat{x} + \frac{\Delta}{2}}Nf_{\text{train}}(x)dx} \\ &= \lim_{\Delta\to 0}\frac{\Delta}{N[F_{\text{train}}(\hat{x} + \frac{\Delta}{2}) - F_{\text{train}}(\hat{x} -\frac{\Delta}{2})]} = \frac{1}{Nf_{\text{train}}(\hat{x})} \end{align*} The above approximation does not include the local variance of $\rho(\hat{x})$. The effect of local variance results from the fact that neighboring training intervals should have roughly the same length, but in practice they do not. Including this effect is crucial in the experiments. Thus we correct $\rho(\hat{x})$ by taking this local variance into account. We denote the corrected $\rho(\hat{x})$ as $\rho'(\hat{x})$. Since neighboring intervals usually follow the same density function, we calculate $\rho'(\hat{x})$ using the $K$ left and $K$ right neighboring intervals of the training interval $\langle x_{i},x_{j}\rangle$. We refer to the lengths of these intervals as $\rho_{-K}(\hat{x}), \rho_{-K+1}(\hat{x}), \dots, \rho_{K}(\hat{x})$. Note that \begin{align*} \mathbb{E}^{\langle \rho_{-K},\rho_K\rangle}[\rho(\hat{x})]\coloneqq \frac{\sum_{i=-K}^K\rho_i(\hat{x})}{2K+1} = \rho(\hat{x}) \end{align*} and \begin{align*} \text{Var}^{\langle \rho_{-K},\rho_K\rangle}[\rho(\hat{x})] \coloneqq \frac{\sum_{i=-K}^K\rho_i^2(\hat{x})}{2K+1} - \left(\mathbb{E}^{\langle \rho_{-K},\rho_K\rangle}[\rho(\hat{x})]\right)^2. \end{align*} Weighting the length of each interval by the probability that the test point falls into it, we obtain \begin{align*} \rho'(\hat{x}) &= \sum_{i=-K}^K\underbrace{\frac{\rho_i(\hat{x})}{\sum_{j=-K}^K\rho_j(\hat{x})}}_{\text{prob. of falling into the interval}}\cdot\underbrace{\rho_i(\hat{x})}_{\text{interval length}} = \frac{\sum_{i=-K}^K\rho_i(\hat{x})^2}{\sum_{j=-K}^K\rho_j(\hat{x})} \\ &= \frac{\text{Var}^{\langle \rho_{-K},\rho_K\rangle}[\rho(\hat{x})]}{\mathbb{E}^{\langle \rho_{-K},\rho_K\rangle}[\rho(\hat{x})]} +\mathbb{E}^{\langle \rho_{-K},\rho_K\rangle}[\rho(\hat{x})] \end{align*} \noindent For the $1$-dimensional case, we empirically verified that $\mathbb{E}^{\langle \rho_{-K},\rho_K\rangle}[\rho(\hat{x})] \approx \frac{\text{Var}^{\langle \rho_{-K},\rho_K\rangle}[\rho(\hat{x})]}{\mathbb{E}^{\langle \rho_{-K},\rho_K\rangle}[\rho(\hat{x})]}$, thus: \begin{align} \label{eq:exp_rho_1d} \rho'(\hat{x}) = 2 \rho(\hat{x}) = \frac{2}{Nf_{\text{train}}(\hat{x})} \end{align} Using Equations~\ref{eq:phi_of_x},~\ref{eq:exp_psi_1d}, and~\ref{eq:exp_rho_1d} we can derive the probability of an error on the test set, $\mathbb{E}_{f_{\text{test}}}[\Phi]$, as follows: \begin{align} &\mathbb{E}_{f_{\text{test}}}[\Phi] = \int_{-\infty}^{+\infty} \Phi(\hat{x})f_{\text{test}}(\hat{x})d\hat{x} \nonumber\\ &= \int_{-\infty}^{+\infty} \min\left(1, \frac{\psi(\hat{x})}{\delta}\right)f_{\text{test}}(\hat{x})d\hat{x}\nonumber\\ &\approx \int_{-\infty}^{+\infty} \min\left(1, \frac{\rho'(\hat{x})}{4\delta}\right)f_{\text{test}}(\hat{x})d\hat{x} \nonumber\\ &= \int_{-\infty}^{+\infty} \min\left(1, \frac{1}{2Nf_{\text{train}}(\hat{x})\delta}\right)f_{\text{test}}(\hat{x})d\hat{x} \label{eq:exp_phi_test_1d} \end{align} The integral in Equation~\ref{eq:exp_phi_test_1d} can be computed in closed form for many standard distributions, such as the Gaussian or the uniform; otherwise, it can be computed using the Monte Carlo method~\cite{caflisch_1998}. \subsection{Generalization error estimates for the multi-dimensional case} \begin{figure*}[ht] \centering \includegraphics[width=0.32\textwidth]{results/bfe_dim_1_delta_1_M_10000_Nx_1000_pw_1_rx_20_sigma_100.png} \includegraphics[width=0.32\textwidth]{results/bfe_dim_2_delta_1_M_1000_Nx_100_pw_1_rx_20_sigma_1.png} \includegraphics[width=0.32\textwidth]{results/bfe_dim_4_delta_1_M_1000_Nx_100_pw_1_rx_20_sigma_1.png}\\ \caption{Monte Carlo simulation results (blue curve) confronted with theoretical derivations (red curve) for \textbf{(left)} $d=1$, \textbf{(center)} $d=2$, and \textbf{(right)} $d=4$. M = 1K, $f_{train}$ = $f_{test}$ = $\mathcal{N}_d(\mu=0, \Sigma=I)$. The error bars capture $2$ standard deviations.} \label{fig:brute_force_results} \vspace{-15pt} \end{figure*} Now we consider multi-dimensional feature distributions. Let $\hat{x}$ be a test point in the feature space whose immediate $2^d$ nearest training points in the feature space form a set $\bar{X}$, and let $\mathcal{P}$ be the convex hull spanned by these training points. Assume $\mathcal{P}$ contains $\hat{x}$. For ease of further derivations, we assume the training points from $\bar{X}$, sampled from the distribution $f_{train}$, lie on the vertices of a $d$-dimensional hyper-cube $\mathcal{P}$ with side length $a(\hat{x})$. The side length of the hyper-cube $\mathcal{P}$ depends on the position of the test point $\hat{x}$: in places with a higher density of training data points we can construct a tighter convex hull around the test point $\hat{x}$, and hence the side length of the hyper-cube $\mathcal{P}$ decreases there. Furthermore, similarly to the 1-dimensional case, let the test feature distribution be close to uniform, denoted as $u$, in $\mathcal{P}$.
In the large data regime, we approximate the distance of $\hat{x}$ to its closest training feature vector (denoted as $\psi(\hat{x})$) by the expected value of the distance of $\hat{x}$ to the closest training point in $\mathcal{P}$ (depending on the position in $\mathcal{P}$, the closest training point is one of the vertices of the hyper-cube $\mathcal{P}$). For ease of computation we assume $x(\hat{x})$ lies at the origin of the $d$-dimensional feature space; hence, we can compute $\mathbb{E}_u^{\bar{X}}[\psi(\hat{x})]$ as \begin{align} \label{eq:psi_hd} &\mathbb{E}_u^{\bar{X}}[\psi(\hat{x})] = \int_{\mathcal{P}}\|\hat{x}-x(\hat{x})\|_2u(\hat{x})d\hat{x} \nonumber \\ &= \int_0^\frac{a(\hat{x})}{2}\dots\int_0^\frac{a(\hat{x})}{2}\|\hat{x}\|_2\frac{1}{(\frac{a(\hat{x})}{2})^d}d\hat{x}_1\dots d\hat{x}_d \nonumber \\ &= \frac{1}{(\frac{a(\hat{x})}{2})^d}\int_0^\frac{a(\hat{x})}{2}\dots\int_0^\frac{a(\hat{x})}{2}\sqrt{\sum_{i=1}^d\hat{x}_i^2}d\hat{x}_1\dots d\hat{x}_d \end{align} In the large data regime, we can approximate the distance between two training data points, or in other words the side of the hyper-cube $\mathcal{P}$, $a(\hat{x})$, as the limit of the ratio of the volume of the hyper-cube $\mathcal{P}$ to the number of points lying in $\mathcal{P}$: \begin{align} \label{eq:a_hd} a(\hat{x}) &\approx \left(\lim_{\text{Volume}(\mathcal{P}) \to 0} \frac{\text{Volume}(\mathcal{P})} {\int_{\mathcal{P}}Nf_{\text{train}}(x)dx}\right)^{1/d} = \frac{1}{(Nf_{\text{train}}(\hat{x}))^{1/d}}, \end{align} where the last equality holds because $f_{\text{train}}$ is approximately constant over $\mathcal{P}$ as its volume shrinks to zero. In dimensions higher than $1$, we empirically verified that no correction to $a(\hat{x})$ is required. Similarly to the 1-dimensional case, we can use Equations~\ref{eq:phi_of_x},~\ref{eq:psi_hd}, and~\ref{eq:a_hd} to derive the probability of an error on the test set, $\mathbb{E}_{f_{\text{test}}}[\Phi]$, as follows: \begin{align} \label{eqn:Exp_Phi_test_hd} \mathbb{E}_{f_{\text{test}}}[\Phi] =& \int\limits_{-\infty}^{+\infty}\!\!\dots\!\!\int\limits_{-\infty}^{+\infty}\min\left(1, \frac{\psi(\hat{x})}{\delta}\right)f_{\text{test}}(\hat{x})d\hat{x} \end{align} where, \vspace{-0.7cm} \begin{align*} \psi(\hat{x}) = & \frac{1}{(\frac{a(\hat{x})}{2})^d} \int\limits_0^\frac{a(\hat{x})}{2}\!\!\dots\!\!\int\limits_0^\frac{a(\hat{x})}{2}||\hat{x}||_{2}d\hat{x}_1 \dots d\hat{x}_d \end{align*} and \vspace{-0.7cm} \begin{align*} a(\hat{x}) =& \frac{1}{(Nf_{train}(\hat{x}))^{\frac{1}{d}}}. \end{align*} The obtained integral cannot be computed in closed form; however, it can be computed using Monte Carlo methods (note that $d$ in our experiments is very small, i.e., it does not exceed $4$, which enables accurate Monte Carlo approximations). \section{Experiments}\label{sec:experiments} We conduct two types of experiments. First, we verify our derivations for the generalization error estimator using Monte Carlo simulations on toy data sets generated from the Gaussian distribution. We then move to the main experiments, which are performed on real data and involve classification and regression problems. The classification task is performed on the following data sets: MNIST~\cite{lecun-mnisthandwrittendigit-2010}, CIFAR~\cite{cifar}, and ImageNet~\cite{imagenet_cvpr09}. Our experiments utilize popular DNN architectures: LeNet~\cite{lecun-mnisthandwrittendigit-2010}, VGG16~\cite{simonyan2014very}, ResNet18, and ResNet50~\cite{DBLP:journals/corr/HeZRS15}.
We used cross entropy loss functions and stochastic gradient descent during training. The regression task is performed on the Udacity~\cite{udacitydata} data set, which is typically used in autonomous driving applications. It contains images from the left, center, and right cameras mounted on the vehicle, together with additional vehicle logs such as speed, steering commands, etc. The data set is imbalanced and contains mostly samples corresponding to driving straight; we sub-sampled those to balance the data. The final balanced data set contains $38936$ training examples, $6552$ validation examples, and $8190$ test examples. For the Udacity experiments we utilize the network described in Table~\ref{tab:CovNet} in the Supplement, which takes a single image as input and predicts the appropriate steering command. The network was trained using the mean squared error loss and the Adam optimizer~\cite{DBLP:journals/corr/KingmaB14}. \begin{figure*}[!ht] \centering \includegraphics[width=0.32\textwidth]{results/bottleneck_cifar10_vgg16_cifar10_vgg16_bottleneck-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/bottleneck_cifar10_resnet18_cifar10_resnet18_bottleneck-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/bottleneck_mnist_mnist_bottleneck-eps-converted-to.pdf} \\ \vspace{5pt} \includegraphics[width=0.32\textwidth]{results/bottleneck_cifar100_vgg16_cifar100_vgg16_bottleneck-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/bottleneck_cifar100_resnet18_cifar100_resnet18_bottleneck-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/bottleneck_imagenet_imagenet_bottleneck-eps-converted-to.pdf} \caption{Test accuracy for different classification data sets with the bottleneck network inserted before the output layer of the model. \textbf{Top Row:} CIFAR-10 trained on \textbf{(Left)} VGG16, \textbf{(Center)} ResNet-18, and \textbf{(Right)} MNIST trained on LeNet. \textbf{Bottom Row:} CIFAR-100 trained on \textbf{(Left)} VGG16, \textbf{(Center)} ResNet-18, and \textbf{(Right)} ImageNet trained on ResNet-50.} \label{fig:bottleneck} \vspace{-15pt} \end{figure*} \subsection{Monte Carlo simulations}\label{sec:bfexp} Here we assume that both train and test data points are $d$-dimensional vectors (we explored $d=1,2,4$) drawn from a known Gaussian distribution $\mathcal{N}_d(\mu, \Sigma)$. We set $\delta = 1$. We generate train and test sets containing $N$ and $M$ data points, respectively. Since in this paper we are interested in examining how the error scales with $N$, our experiments are performed on training data sets of growing size ($N$). In the simulation, for each test point we find the closest point in the training data set and count the test point as a failure with the probability obtained using Equation~\ref{eq:phi_of_x}. The error rate is computed as the number of failures divided by the size of the test data set (blue curve in Figure~\ref{fig:brute_force_results}). We run the simulation for each value of $N$ twenty times with different seeds. We confront the error rate obtained from the simulation with the theoretical one obtained using Equations~\ref{eq:exp_phi_test_1d} and~\ref{eqn:Exp_Phi_test_hd} (red curve in Figure~\ref{fig:brute_force_results}); we use the Monte Carlo method to compute the integrals in these equations. The results are captured in Figure~\ref{fig:brute_force_results}. The experiment shows that the simulated and theoretical curves match, which confirms the correctness of our theoretical derivations.
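For completeness, the following minimal Python sketch (our illustration, under the stated Gaussian assumptions; for $d=1$ the paper additionally applies the correction of Equation~\ref{eq:exp_rho_1d}, which this sketch omits) reproduces both the simulation and the Monte Carlo evaluation of Equation~\ref{eqn:Exp_Phi_test_hd}:
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def simulated_error(d, N, M, delta):
    # A test point fails with probability min(1, nearest-train-distance/delta).
    train = rng.normal(size=(N, d))
    test = rng.normal(size=(M, d))
    nearest, _ = cKDTree(train).query(test)
    phi = np.minimum(1.0, nearest / delta)
    return (rng.random(M) < phi).mean()

def theoretical_error(d, N, delta, n_mc=100_000):
    # Monte Carlo evaluation of Eq. (7): psi(x) = c_d * a(x) / 2, where
    # c_d = E||U||_2 for U uniform on the unit cube and a(x) = (N f(x))^(-1/d).
    f = multivariate_normal(mean=np.zeros(d), cov=np.eye(d))
    c_d = np.linalg.norm(rng.random((n_mc, d)), axis=1).mean()
    x = f.rvs(size=n_mc).reshape(n_mc, d)  # x ~ f_test = f_train
    a = (N * f.pdf(x)) ** (-1.0 / d)
    return np.minimum(1.0, c_d * a / (2.0 * delta)).mean()

for N in [100, 1000, 10000]:
    print(N, simulated_error(2, N, 1000, 1.0), theoretical_error(2, N, 1.0))
\end{verbatim}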
\subsection{Real data experiments} \label{sec:exps_main} \subsubsection{Finding effective dimensionality} \label{sec:d_eff} \begin{figure*}[!t] \centering \includegraphics[width=0.32\textwidth]{results/cifar10_vgg16_cifar_bestloglog-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/cifar10_resnet18_cifar_bestloglog-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/mnist_6_16_bestloglog-eps-converted-to.pdf}\\ \vspace{5pt} \includegraphics[width=0.32\textwidth]{results/cifar100_vgg16_cifar_bestloglog-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/cifar100_resnet18_cifar_bestloglog-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/imagenet_bestloglog-eps-converted-to.pdf} \vspace{5pt} \caption{Theoretical and empirical learning curves for classification experiments. \textbf{Top Row:} CIFAR-10 trained on (\textbf{left}) VGG16, (\textbf{center}) ResNet-18, and (\textbf{right}) MNIST trained on LeNet. \textbf{Bottom Row:} CIFAR-100 trained on \textbf{(left)} VGG16 and \textbf{(center)} ResNet-18, and \textbf{(right)} ImageNet trained on ResNet-50.} \label{fig:class_results} \vspace{-10pt} \end{figure*} \begin{table}[h] \centering \begin{tabular}{|p{1.5cm}|M{1cm}|M{1cm}|M{1cm}|} \hline \multirow{2}{1.5cm}{\# filters in Conv1} & \multicolumn{3}{c|}{\# filters in Conv2} \\ \cline{2-4} & 4 & 8 & 16 \\ \hline 2 & $3$ & $3$ & $3$ \\ \hline 4 & $2$ & $2$ & $2$ \\ \hline 6 & $2$ & $2$ & $2$ \\ \hline \end{tabular} \caption{$d$ values for networks of varying capacity (i.e., a varying number of filters in the first (Conv1) and second (Conv2) convolutional layer of the LeNet model).} \label{tab:mnist_lenet} \end{table} \begin{table}[h] \centering \begin{tabular}{|c||c|c|c|c|c|c|} \hline Width & 10 & 20 & 50 & 100 & 200 & 300\\ \hline \hline d & 5 & 4 & 2 & 2 & 2 & 2\\ \hline \end{tabular} \caption{$d$ values for networks of varying capacity (MLP with a single hidden layer and varying width).} \label{tab:mnist_fcnet} \vspace{-15pt} \end{table} In Figure~\ref{fig:bottleneck} and Table~\ref{tab:sup_bottleneck} in the Supplement we show the experiment capturing the selection of the effective dimensionality via the injection of the bottleneck into the network (described in Section~\ref{sec:theoretical_framework}) for the MNIST, CIFAR-10, CIFAR-100, and ImageNet data sets. The effective dimensionality $d$ is chosen as the size of the bottleneck at which we start observing saturation. We empirically found (see Figure \ref{fig:supp_class_results}) that this choice of $d$ allows us to accurately estimate the learning curve, even when the accuracies of the bottleneck models do not reach the accuracies of the original models, as is the case for the CIFAR-100 and ImageNet data sets. Note that the accuracy of the model with the bottleneck saturates at $d=2$ for MNIST and CIFAR-10, between $d=2$ and $d=3$ for the CIFAR-100 data set, and between $d=3$ and $d=4$ for the ImageNet data set. Furthermore, we also extracted feature vectors of different dimensions from the bottleneck model and performed nearest neighbor classification on the low-dimensional features. We found that the performance of the nearest neighbor classifier saturates for the same values of $d$ as described above. These results are highlighted in Figure~\ref{fig:supp_knn} and Table~\ref{tab:supp_knn} in the Supplement. This assures us that the dimensionality found using the bottleneck indeed captures enough variety in the data to perform accurate prediction; a minimal sketch of the bottleneck procedure is given below.
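The following minimal PyTorch sketch (our illustration; the module, names, and tolerance are ours, and the toy accuracies mirror the MNIST row of the bottleneck table in the Supplement) shows the bottleneck described in Section~\ref{sec:theoretical_framework} together with the selection rule for $d$:
\begin{verbatim}
import torch.nn as nn

class Bottleneck(nn.Module):
    # Projects D-dimensional features down to d' and back up to D,
    # with a ReLU after each linear layer, as described in Section 3.
    def __init__(self, D, d_prime):
        super().__init__()
        self.down = nn.Sequential(nn.Linear(D, d_prime), nn.ReLU())
        self.up = nn.Sequential(nn.Linear(d_prime, D), nn.ReLU())

    def forward(self, x):
        return self.up(self.down(x))

def effective_dim(accuracies, baseline, tol=0.03):
    # Smallest bottleneck width d' whose fine-tuned accuracy is within
    # `tol` of the accuracy of the original model without the bottleneck.
    return min(dp for dp, acc in accuracies.items() if acc >= baseline - tol)

# MNIST/LeNet numbers from the Supplement: saturation starts at d' = 2.
print(effective_dim({1: 0.407, 2: 0.971, 3: 0.986, 4: 0.988},
                    baseline=0.992))
\end{verbatim}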
Finally, for the Udacity data set, the effective dimensionality we found was equal to $1$. Apart from the training data set, the effective dimensionality of the feature space indirectly depends on the capacity of the neural network, which in turn depends on the network design. We verify this claim by training multiple LeNet and MLP models with varying capacity on the MNIST data set and computing the effective dimensionality for each model. The LeNet model consists of two convolutional layers with $6$ and $16$ filters, respectively. We control the capacity of the network by decreasing the number of filters in each convolutional layer. For the MLP, we use a single hidden layer and vary its width. As the capacity of the network decreases, we observe an increase in the effective dimensionality. The results are highlighted in Tables~\ref{tab:mnist_lenet} and~\ref{tab:mnist_fcnet}. \subsubsection{Learning curves} \begin{figure}[h] \centering \includegraphics[width=0.35\textwidth]{results/udacity_bestloglog-eps-converted-to.pdf} \caption{Theoretical and empirical learning curves for the Udacity data set.} \label{fig:reg_results} \vspace{-15pt} \end{figure} \begin{table*}[!ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline &&\multicolumn{2}{c|}{$f_{train}$} & \multicolumn{2}{c|}{$f_{test}$} && \\ \cline{3-6} Data set & Model & $\mu$ & diag($\Sigma$) & $\mu$ & diag($\Sigma$)& $d$ & $\delta$ \\ \hline MNIST & LeNet & \bm{0.020}{0.012} & \bm{377.855}{264.029} & \bm{-0.061}{-0.036} & \bm{384.357}{270.399} & 2 & 17.375 \\[2ex] \hline \multirow{2}*[-0.9em]{CIFAR-10} & VGG-16 & \bm{-0.004}{0.024} & \bm{98.192}{80.676} & \bm{0.009}{-0.060} & \bm{87.691}{76.163} & 2 & 0.890 \\[2ex] \cline{2-8} & ResNet-18 & \bm{-0.003}{-0.001} & \bm{3.717}{2.789} & \bm{0.007}{0.003} & \bm{3.726}{2.662}& 2 & 0.262 \\[2ex] \hline \multirow{2}*[-0.9em]{CIFAR-100} & VGG-16 & \bm{0.023}{0.0464} & \bm{123.572}{121.108} & \bm{-0.0584}{-0.116} & \bm{102.838}{97.932} & 2 & 0.265 \\[2ex] \cline{2-8} & ResNet-18 & \bm{-0.001}{-0.010} & \bm{3.648}{3.449} & \bm{0.003}{0.0260} & \bm{2.914}{2.811} & 2 & 0.056 \\[2ex] \hline ImageNet & ResNet-50& \bmm{-0.002}{-0.007}{-0.020} &\bmm{19.940}{14.308}{12.331} &\bmm{ -0.002}{-0.007}{-0.020} &\bmm{19.940}{14.308}{12.331} & 3 & 0.340 \\[3.5ex] \hline Udacity & CovNet & $-0.0111$ & $4.3415$ & $0.0265$ & $6.1000$ & $1$ & $0.003$ \\ \hline \end{tabular} \caption{The effective dimensionality $d$ and the $\delta$ parameter for different data sets and model architectures. The train and test feature distributions are denoted as $f_{train}$ and $f_{test}$. $\mu$ denotes the mean of the distribution and $diag(\Sigma)$ denotes the diagonal elements of the co-variance matrix (off-diagonal elements are equal to $0$).} \label{tab:sup_all_results} \vspace{-10pt} \end{table*} The empirical learning curves were obtained by testing DNNs trained on increasingly larger data sets. Thus we sampled the MNIST, CIFAR, and Udacity data sets to obtain training data sets of size equal to $3.125\%$, $6.25\%$, $12.5\%$, $25\%$, $50\%$, $75\%$, and $100\%$ of the entire data set. For ImageNet we obtained training data sets of size equal to $3.90\%$, $5.85\%$, $7.80\%$, $11.70\%$, $15.61\%$, $19.51\%$, $31.22\%$, $39.02\%$, $58.54\%$, $62.44\%$, $78.05\%$, and $100\%$ of the entire data set. In the obtained training data sets, all classes are equally well represented (they are balanced); a minimal sketch of this subsampling step is shown below.
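The class-balanced subsets can be drawn, for instance, as in the following minimal sketch (our illustration; names are ours):
\begin{verbatim}
import numpy as np

def balanced_subset(labels, fraction, seed=0):
    # Indices of a random subset containing `fraction` of every class.
    rng = np.random.default_rng(seed)
    idx = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        k = max(1, int(round(fraction * len(members))))
        idx.append(rng.choice(members, size=k, replace=False))
    return np.concatenate(idx)

# Toy CIFAR-10-like label array and the training fractions used above.
labels = np.repeat(np.arange(10), 5000)
for frac in [0.03125, 0.0625, 0.125, 0.25, 0.5, 0.75, 1.0]:
    print(frac, balanced_subset(labels, frac).size)
\end{verbatim}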
For classification problems, we plot the experimental learning curve by computing the test error rate, i.e., the fraction of test samples incorrectly classified by the model (see Figure~\ref{fig:class_results} and Table~\ref{tab:sup_all_results}; theoretical curves for various settings of the effective dimensionality are reported in Figure~\ref{fig:supp_class_results} in the Supplement). For the regression task, we count a test sample as misclassified if the predicted steering command deviates from the label by more than $0.1$ (in the Udacity data set the steering command is typically in the range $(-0.5, 0.5)$; see Figure~\ref{fig:reg_results} for the results). The theoretical learning curves were obtained according to Equations~\ref{eq:exp_phi_test_1d} and~\ref{eqn:Exp_Phi_test_hd}, where the integrals were computed using the Monte Carlo method. Note that our generalization error estimate depends on the train and test feature distributions. In order to obtain the features for the distribution estimation, we train the DNN on a subset of the training data, i.e., $50\%$. Next, we process this subset as well as a subset of the test data with this DNN. The obtained features are then projected via PCA to the effective dimensionality $d$. We assume a single $d$-dimensional Gaussian distribution for both the train and test data, whose parameters (mean and covariance) we estimate via the maximum likelihood approach (it has been previously observed that the feature space learned by DNNs exhibits a simple clustering structure~\cite{goldfeld2018estimating}). Finally, we treat $\delta$ as a hyperparameter of the error estimate. It is obtained in the small data regime ($\leq50\%$ of the training data) by minimizing the distance between the theoretical and empirical curves (see the sketch at the end of this section). Therefore, after training the network on a small amount of data, which is computationally much faster than training on the entire corpus, we estimate $\delta$ and predict the behavior of the learning curve in the large data regime. Tables~\ref{tab:sup_all_results} and~\ref{tab:my_label} in the Supplement summarize the choice of hyperparameters for the different data sets and architectures. As can be seen in Table~\ref{tab:sup_all_results}, $\delta$ heavily depends on the considered combination of data set and architecture (differences are often of orders of magnitude). Figures~\ref{fig:class_results} and~\ref{fig:reg_results} and Table~\ref{tab:sup_all_results} report the results confronting the theoretical and empirical learning curves. Note that, among all our data sets, only ImageNet does not satisfy the assumption of zero training error (see Figures~\ref{fig:mnist},~\ref{fig:cifar10},~\ref{fig:cifar100},~\ref{fig:imagenet}; for the Udacity data set the training error is close to zero, as can be seen in Figure~\ref{fig:udacity_curves}); nevertheless, even for this data set we could model the behavior of the learning curve well using our theoretical framework. According to~\cite{DBLP:journals/corr/abs-1712-00409}, the learning curve can be broken down into three regions: the low data region, the power law region, and the saturation region. In our experiments we observe the first two regions. In the low data regime we observe over-fitting, and in this case there is a mismatch between the theoretical and empirical curves (recall that our estimates of the generalization error become more accurate with increasing $N$). In the power law region, as we increase the amount of training data, the performance of the network consistently improves, and our theoretical framework estimates the empirical learning curve in this region very well.
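The $\delta$-fitting step mentioned above can be sketched as follows (our illustration; it reuses the hypothetical \texttt{theoretical\_error} routine from the simulation sketch in Section~\ref{sec:bfexp}, and the grid bounds are of our own choosing):
\begin{verbatim}
import numpy as np

def fit_delta(train_sizes, empirical_errors, d, theoretical_error,
              grid=np.logspace(-3, 2, 200)):
    # Pick the delta whose theoretical curve best matches (in squared
    # error) the empirical errors measured in the small-data regime.
    errs = np.asarray(empirical_errors)
    def sq_loss(delta):
        theory = np.array([theoretical_error(d, N, delta)
                           for N in train_sizes])
        return np.sum((theory - errs) ** 2)
    return min(grid, key=sq_loss)

# Usage: fit on runs with <= 50% of the data, extrapolate to larger N.
# delta = fit_delta([1563, 3125, 6250, 12500, 25000], errs_small,
#                   d=2, theoretical_error=theoretical_error)
\end{verbatim}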
\section{Conclusion}\label{sec:conclusion} In this paper we address the problem of describing the behavior of the generalization error of DL models with the growing size of the training data. We attempt to reconcile the dichotomy between existing theoretical approaches, which rely on capacity measures that are potentially impossible to obtain for practical DNNs, and existing empirical approaches, which model the behavior of the error by fitting a parameterized curve and lack any theoretical description. Our error estimates stem from a simple model of a DL machine that we propose and analyze. Our approach relies on modeling assumptions which are, however, not unrealistic, and it gives rise to estimates of the generalization error curves that closely resemble the ones empirically observed. We verify our approach on several learning tasks involving various realistic architectures and data sets. \clearpage \appendix \section{Derivations for Equation~\ref{eq:exp_psi_1d}} \begin{align*} &\mathbb{E}_u^{\langle x_{i}, x_{j} \rangle}[\psi(\hat{x})] = \int\displaylimits_{x_{i}}^{x_{j}}|\hat{x}-x(\hat{x})|u(\hat{x})d\hat{x} = \int\displaylimits_{x_{i}}^{\frac{x_{i} + x_{j}}{2}} (\hat{x}-x_{i})u(\hat{x})d\hat{x} + \int\displaylimits_{\frac{x_{i} + x_{j}}{2}}^{x_{j}}(x_{j} - \hat{x})u(\hat{x})d\hat{x} \\ &= \int\displaylimits_{x_{i}}^{x_{i} + \frac{\rho(\hat{x})}{2}}(\hat{x}-x_{i})u(\hat{x})d\hat{x} + \int\displaylimits_{x_{i} + \frac{\rho(\hat{x})}{2}}^{x_{i} + \rho(\hat{x})}(x_{i} + \rho(\hat{x})-\hat{x})u(\hat{x})d\hat{x} \\ &= \frac{1}{\rho(\hat{x})}\Bigg(\int\displaylimits_{x_{i}}^{x_{i} + \frac{\rho(\hat{x})}{2}}(\hat{x}-x_{i})d\hat{x} + \int\displaylimits_{x_{i} + \frac{\rho(\hat{x})}{2}}^{x_{i} + \rho(\hat{x})}(x_{i} + \rho(\hat{x})-\hat{x})d\hat{x} \Bigg) \\ &= \frac{1}{\rho(\hat{x})}\Bigg(\Bigg[\frac{\hat{x}^2}{2}-x_{i}\hat{x}\Bigg]_{x_{i}}^{x_{i} + \frac{\rho(\hat{x})}{2}} + \Bigg[x_{i}\hat{x} + \rho(\hat{x})\hat{x} - \frac{\hat{x}^2}{2}\Bigg]_{x_{i} + \frac{\rho(\hat{x})}{2}}^{x_{i} + \rho(\hat{x})}\Bigg) \\ &= \frac{1}{\rho(\hat{x})}\Bigg(-\frac{x_{i}\rho(\hat{x})}{2} + \frac{x_{i}\rho(\hat{x})}{2} + \frac{\rho(\hat{x})^2}{2} + \frac{1}{2} \Big(x_{i} + \frac{\rho(\hat{x})}{2} \Big)^2 - \frac{x_{i}^2}{2} - \frac{(x_{i} + \rho(\hat{x}))^2}{2} + \frac{1}{2}\Big(x_{i} + \frac{\rho(\hat{x})}{2}\Big)^2\Bigg) \\ &= \frac{1}{\rho(\hat{x})}\Bigg(\frac{\rho(\hat{x})^2}{2} + \frac{x_{i}\rho(\hat{x})}{2} + \frac{\rho(\hat{x})^2}{8} - \frac{x_{i}^2}{2} - x_{i}\rho(\hat{x}) - \frac{\rho(\hat{x})^2}{2} + \frac{x_{i}^2}{2} + \frac{x_{i}\rho(\hat{x})}{2} +
\frac{\rho(\hat{x})^2}{8}\Bigg) \\ &= \frac{\rho(\hat{x})}{4} \end{align*} \section{Real Data Experiments} \subsection{Finding effective dimensionality} \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Data set} & \multirow{2}{*}{Architecture} & \multirow{2}{*}{Baseline} & \multicolumn{6}{c|}{Bottleneck width ($d'$)} \\ \cline{4-9} & & & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline MNIST & LeNet & 0.992 & 0.407 & 0.971 & 0.986 & 0.988 & 0.989 & 0.989 \\ \hline \multirow{2}{*}{CIFAR-10} & ResNet18 & 0.955 & 0.842 & 0.945 & 0.950 & 0.954 & 0.948 & 0.953 \\ \cline{2-9} & VGG16 & 0.930 & 0.886 & 0.926 & 0.926 & 0.928 & 0.925 & 0.927 \\ \hline \multirow{2}{*}{CIFAR-100} & ResNet18 & 0.791 & 0.172 & 0.590 & 0.699 & 0.729 & 0.739 & 0.735 \\ \cline{2-9} & VGG16 & 0.712 & 0.161 & 0.610 & 0.672 & 0.683 & 0.689 & 0.695 \\ \hline ImageNet & ResNet-50 & 0.763 & 0.018 & 0.272 & 0.626 & 0.674 & 0.686 & 0.697 \\ \hline Udacity & CovNet & 0.950 & 0.9786 & 0.983 & 0.992 & 0.991 & 0.989 & 0.992 \\ \hline \end{tabular} \caption{Test accuracy for the baseline model (the model without the bottleneck trained on the full training data set) and for models with different bottleneck widths ($d' = \{1,2,3,4,5,6\}$).} \label{tab:sup_bottleneck} \end{table} We perform nearest neighbor classification on the low-dimensional features extracted from the bottleneck model described in Section~\ref{sec:d_eff}. The effective dimensionality we find using the nearest neighbor framework is as follows: $d=2$ for CIFAR-10 and MNIST, between $d=2$ and $d=3$ for CIFAR-100, and between $d=3$ and $d=4$ for ImageNet; this is consistent with the effective dimensionality found in the bottleneck experiments. \begin{table*}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Data set} & \multirow{2}{*}{Model} & \multirow{2}{*}{Baseline} & \multicolumn{6}{c|}{Feature vector size} \\ \cline{4-9} & & & 1 & 2 & 3 & 4 & 5 & 6\\ \hline MNIST & LeNet & 0.992 & 0.3922 & 0.970 & 0.986 & 0.987 & 0.988 & 0.989 \\ \hline \multirow{2}{*}{CIFAR-10} & ResNet18 & 0.955 & 0.838 & 0.944 & 0.950 & 0.952 & 0.951 & 0.952 \\ \cline{2-9} & VGG16 & 0.930 & 0.798 & 0.923 & 0.924 & 0.921 & 0.923 & 0.927 \\ \hline \multirow{2}{*}{CIFAR-100} & ResNet18 & 0.791 & 0.205 & 0.579 & 0.664 & 0.688 & 0.709 & 0.718 \\ \cline{2-9} & VGG16 & 0.712 & 0.169 & 0.575 & 0.643 & 0.645 & 0.668 & 0.677 \\ \hline ImageNet & ResNet-50 & 0.763 & 0.0153 & 0.311 & 0.585 & 0.626 & 0.642 & 0.649 \\ \hline \end{tabular} \caption{Nearest neighbor classification accuracy for different sizes of feature vectors ($d'=1,2,3,4,5,6$). The baseline model is the original DNN without the bottleneck trained on the full training data set.} \label{tab:supp_knn} \end{table*} \begin{figure}[H] \centering \includegraphics[width=0.32\textwidth]{results/cifar10_knn_vgg16-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/cifar10_knn_resnet18-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/mnist_knn-eps-converted-to.pdf} \\ \includegraphics[width=0.32\textwidth]{results/cifar100_knn_vgg16-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/cifar100_knn_resnet18-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/imagenet_knn_resnet50-eps-converted-to.pdf} \caption{Nearest neighbor classification accuracy for different sizes of feature vectors ($d'=1,2,3,4,5,6$). The baseline model is the original DNN without the bottleneck trained on the full training data set and is marked with a dashed line on the plots.
\textbf{Top Row, from left to right:} VGG16 trained on CIFAR-10, ResNet-18 trained on CIFAR-10, and LeNet trained on MNIST. \textbf{Bottom Row, from left to right:} VGG16 trained on CIFAR-100, ResNet-18 trained on CIFAR-100, and ResNet-50 trained on ImageNet.} \label{fig:supp_knn} \end{figure} \subsection{Learning Curves} \begin{table}[H] \centering \begin{tabular}{|l|l|l|c|c|c|p{1.6cm}|p{1cm}|} \hline Data & Model & Loss & BS & Opt & LR & LR decay ($\times 0.1$ at ep) & Weight decay \\[2ex] \hline MNIST & LeNet & CE & 128 & SGD & 0.01 & - & 0.0 \\[0.4ex] \hline CIFAR-10 & ResNet18 & CE & 128 & SGD & 0.1 & [150, 250] & $0.0005$ \\[0.4ex] \hline CIFAR-10 & VGG16 & CE & 128 & SGD & 0.01 & [150, 250] & $0.0005$ \\[0.4ex] \hline CIFAR-100 & ResNet18 & CE & 128 & SGD & 0.1 & [150, 250] & $0.0005$ \\[0.4ex] \hline CIFAR-100 & VGG16 & CE & 128 & SGD & 0.01 & [150, 250] & $0.0005$ \\[0.4ex] \hline ImageNet & ResNet-50 & CE & 256 & SGD & 0.1 & [30,60,90] & $0.0001$ \\[0.4ex] \hline Udacity & CovNet & MSE & 64 & Adam & 0.0002 & [150] & $0.0$ \\[0.4ex] \hline \end{tabular} \caption{Hyper-parameters of the different experiments. CE - Cross Entropy, MSE - Mean Square Error, BS - Batch Size, Opt - Optimizer, and LR - Learning Rate.} \label{tab:my_label} \end{table} \clearpage \subsubsection{CIFAR-10} \vspace{-0.3cm} \begin{figure}[H] \centering \includegraphics[width=0.35\textwidth]{results/cifar10_resnet18_train_error-eps-converted-to.pdf} \includegraphics[width=0.35\textwidth]{results/cifar10_resnet18_test_error-eps-converted-to.pdf} \includegraphics[width=0.35\textwidth]{results/cifar10_vgg16_train_error-eps-converted-to.pdf} \includegraphics[width=0.35\textwidth]{results/cifar10_vgg16_test_error-eps-converted-to.pdf} \caption{\textbf{(Left:)} Train and \textbf{(Right:)} test error vs number of epochs for \textbf{(Top:)} ResNet-18 and \textbf{(Bottom:)} VGG16 models trained on increasingly larger subsets of the CIFAR-10 data set.} \label{fig:cifar10} \end{figure} \subsubsection{CIFAR-100} \begin{figure}[H] \centering \includegraphics[width=0.35\textwidth]{results/cifar100_resnet18_train_error-eps-converted-to.pdf} \includegraphics[width=0.35\textwidth]{results/cifar100_resnet18_test_error-eps-converted-to.pdf} \includegraphics[width=0.35\textwidth]{results/cifar100_vgg16_train_error-eps-converted-to.pdf} \includegraphics[width=0.35\textwidth]{results/cifar100_vgg16_test_error-eps-converted-to.pdf} \caption{\textbf{(Left:)} Train and \textbf{(Right:)} test error vs number of epochs for \textbf{(Top:)} ResNet-18 and \textbf{(Bottom:)} VGG16 models trained on increasingly larger subsets of the CIFAR-100 data set.} \label{fig:cifar100} \end{figure} \subsubsection{ImageNet} \begin{figure}[H] \centering \includegraphics[width=0.42\textwidth]{results/imagenet_train_error_epoch-eps-converted-to.pdf} \includegraphics[width=0.42\textwidth]{results/imagenet_test_error_epoch-eps-converted-to.pdf} \caption{\textbf{(Left:)} Train and \textbf{(Right:)} test error vs number of epochs for the ResNet-50 model trained on increasingly larger subsets of the ImageNet data set.} \label{fig:imagenet} \end{figure} \subsubsection{MNIST} \begin{figure}[H] \centering \includegraphics[width=0.42\textwidth]{results/lenet_train_error-eps-converted-to.pdf} \includegraphics[width=0.42\textwidth]{results/lenet_test_error-eps-converted-to.pdf} \caption{\textbf{(Left:)} Train and \textbf{(Right:)} test error vs number of epochs for LeNet models trained on increasingly larger subsets of the MNIST data set.} \label{fig:mnist} \end{figure} \subsubsection{Udacity} \begin{table}[H]
\centering \begin{tabular}{|c|c|c|c|c|} \hline Layer & Output Size & Kernel & Stride & Padding \\ \hline Conv & $32 \times 64 \times 64$ & $3\times3$ & 1 & 1 \\ \hline Conv & $64 \times 32 \times 32$ & $3\times3$ & 1 & 1 \\ \hline Conv & $128 \times 16 \times 16$ & $3\times3$ & 1 & 1 \\ \hline Conv & $128 \times 8 \times 8$ & $3\times3$ & 1 & 1 \\ \hline Linear & $ 1024$ & - & - & - \\ \hline Linear & $ 1$ & - & - & - \\ \hline \end{tabular} \caption{CNN architecture used in the Udacity experiments. Each convolutional layer is followed by ReLU, MaxPool, and Dropout layers, and each Linear layer is followed by ReLU and Dropout, except the last linear layer.} \label{tab:CovNet} \end{table} \begin{figure}[H] \centering \includegraphics[width=0.45\textwidth]{results/udacity_train_loss-eps-converted-to.pdf} \includegraphics[width=0.45\textwidth]{results/udacity_val_loss-eps-converted-to.pdf} \caption{\textbf{(Left:)} Train and \textbf{(Right:)} test error vs number of epochs for CNN models trained on increasingly larger subsets of the Udacity data set.} \label{fig:udacity_curves} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.32\textwidth]{results/bottleneck_cifar10_vgg16_cifar_bestloglog-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/bottleneck_cifar10_resnet18_cifar_bestloglog-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/bottleneck_mnist_6_16_loglog-eps-converted-to.pdf}\\ \includegraphics[width=0.32\textwidth]{results/bottleneck_cifar100_vgg16_cifar_loglog-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/bottleneck_cifar100_resnet18_cifar_loglog-eps-converted-to.pdf} \includegraphics[width=0.32\textwidth]{results/bottleneck_imagenet_best_loglog-eps-converted-to.pdf} \\ \includegraphics[width=0.32\textwidth]{results/bottleneck_udacity_best_loglog-eps-converted-to.pdf} \caption{Empirical and theoretical learning curves (the latter obtained for different values of the effective dimensionality).} \label{fig:supp_class_results} \end{figure}
\begin{document} \title{Fair and Truthful Mechanism with Limited Subsidy} \author[H. Goko, A. Igarashi, Y. Kawase, K. Makino, H. Sumita, A. Tamura, Y. Yokoi, M. Yokoo]{Hiromichi Goko, Ayumi Igarashi, Yasushi Kawase, Kazuhisa Makino, Hanna Sumita, Akihisa Tamura, Yu Yokoi, Makoto Yokoo} \begin{abstract} The notion of \emph{envy-freeness} is a natural and intuitive fairness requirement in resource allocation. With indivisible goods, such fair allocations are unfortunately not guaranteed to exist. Classical works have avoided this issue by introducing an additional divisible resource, i.e., money, to subsidize envious agents. In this paper, we aim to design a truthful allocation mechanism of indivisible goods that achieves both fairness and efficiency criteria with a limited amount of subsidy. Following the work of Halpern and Shah, our central question is as follows: to what extent do we need to rely on the power of money to accomplish these objectives? For general valuations, the impossibility theorem of combinatorial auctions translates to our setting: even if an arbitrarily large amount of money is available for use, no mechanism can achieve truthfulness, envy-freeness, and utilitarian optimality simultaneously when agents have general monotone submodular valuations. By contrast, we show that, when agents have matroidal valuations, there is a truthful allocation mechanism that achieves envy-freeness and utilitarian optimality by subsidizing each agent with at most $1$, the maximum marginal contribution of each item for each agent. The design of the mechanism rests crucially on the underlying M-convexity of the Lorenz dominating allocations. \end{abstract} \begin{titlepage} \maketitle \end{titlepage} \section{Introduction} Consider a group of employees with preferences over their shifts; some may prefer to work in the morning, whereas others may prefer to work in the afternoon. All employees are willing to work, but they may differ in the extent to which they like each time slot. How can shifts be scheduled such that the resulting allocation is fair among the employees? This question falls under the realm of the fair division problem, whereby indivisible resources are distributed among heterogeneous participants. The notion of fairness that has been extensively studied in the literature is \emph{envy-freeness} \cite{Foley}. It requires that no agent want to swap their bundle with that of another agent. When the resource to be allocated is divisible, a classical result ensures the existence of an envy-free allocation \cite{Varian}; when the resource is indivisible, envy-freeness is no longer a reasonable goal. A relevant example is the case of one item and two agents: no matter how we allocate the single item, the agent who gets nothing envies the other. Hence, the only ``fair'' solution is to give nothing to both agents. One way to circumvent this issue is monetary compensation. Money is a powerful tool for incentivizing people and can be deployed to restore fairness. In the preceding example, the employer may attempt to balance the inequality, e.g., by compensating employees who are assigned to night shifts. Another example is a governmental body that subsidizes health workers in rural and remote areas.
In mechanism design with money, envy-freeness can indeed be achieved by the well-known Vickrey-Clarke-Groves (VCG) auction mechanism \cite{clarke,groves:econometrica:1973,vickrey:1961} when each agent's valuation is superadditive \cite{papai:scw:2003}; in principle, this mechanism is guaranteed to be envy-free, truthful, and utilitarian optimal if one allocates a sufficient amount of money to the participants\footnote{We will formalize this argument in Section \ref{sec:superadditive}.}. In several applications, however, the resulting outcome of VCG may be unsatisfactory in the following three respects. First, the social planner may have a limited amount of money that can be used to subsidize the participants; for example, employees are usually paid additional compensation only up to some limit. Second, the allocation itself only aims to maximize the utilitarian social welfare, ignoring the requirement of fairness; in an extreme case, one agent who happens to have higher utilities may get all the resources, whereas the others get nothing. Third, when some agent has a non-superadditive valuation, VCG fails to satisfy envy-freeness. In this paper, we study allocation mechanisms for indivisible items with limited subsidies. Formally, we work in the setting of Halpern and Shah \cite{HalpernShah2019}, where a set of indivisible items together with subsidies is to be distributed among agents who have quasi-linear preferences. The objective is to bound the amount of subsidy necessary to accomplish envy-freeness, assuming that the value of the whole set of items is at most the number $m$ of items for each agent.\footnote{Halpern and Shah \cite{HalpernShah2019} dealt with additive valuations and assumed that the value of each single item is at most $1$; our work deals with valuations that are not necessarily additive and assumes a more general condition, i.e., that $v_i(M) \leq m$ for each agent $i$.} Although Halpern and Shah \cite{HalpernShah2019} and subsequent works \cite{Brustle2020,caragiannis2020computing,Aziz2020} are mostly concerned with fairness criteria, we take the mechanism-design perspective: in practice, agents may behave strategically rather than truthfully when reporting their preferences. The goal of this paper is to analyze the amount of subsidy required to accomplish the three basic desiderata of a mechanism: truthfulness, envy-freeness, and utilitarian optimality. \subsection{Our contributions} We first focus on a class of valuations that exhibit \emph{substitutability}, an essential characteristic that is common in many practical situations. For example, employees would like to work some time slots, but working all day long is not preferable because of overwork. To capture such a phenomenon, it is natural to consider the class of submodular valuations. Although the impossibility result of combinatorial auctions immediately applies to these valuations, our question is whether there is any \emph{well-structured} subclass of submodular valuations that admits a desired mechanism. A subclass of monotone submodular valuations that arises in several applications is that of matroidal valuations, i.e., submodular functions with dichotomous marginals.
For example, suppose that employees are allocated to tasks of various types; they may either approve or disapprove of each task depending on their abilities, and can perform certain combinations of tasks under hierarchical constraints (e.g., the number of tasks that can be assigned to each employee is determined per morning/afternoon or day/week/month). One can naturally express such situations by the matroidal valuations associated with laminar matroids (see, e.g., \cite{fife2017laminar}). Another application is when a social planner desires to allocate public housing to people in a way that is fair across different social/ethnic groups \cite{benabbou}; this situation can be modeled by binary assignment valuations, which belong to the class of matroidal valuations: each group's satisfaction is set to be the optimal value of an assignment of items to group members. The class of matroidal valuations turns out to be fruitful in the standard setting of fair division without subsidy \cite{babaioff2020fair,benabbou2020,halpern2020fair}; in particular, these valuations admit an allocation rule that is truthful, approximately fair, and efficient. Babaioff et al.~\cite{babaioff2020fair} very recently designed such a mechanism, called the prioritized egalitarian (PE) mechanism. With ties broken according to a prefixed ordering over the agents, the mechanism returns a \emph{clean Lorenz dominating allocation}, i.e., an allocation whose valuation vector (weakly) Lorenz dominates those under the other allocations and whose bundles include no redundant items\footnote{Items that can be removed without decreasing the agents' valuations.}. Now, returning to our setting, is it possible to design a desired mechanism with a limited amount of subsidy? In Section \ref{sec:matroidal}, we show that, when agents have matroidal valuations, there is a polynomial-time implementable truthful mechanism that achieves both envy-freeness and utilitarian optimality, with a subsidy of at most $1$ for each agent. Note that a mere extension of the previously known mechanisms \cite{halpern2020fair,babaioff2020fair} does not achieve these properties; informally, when distributing a commonly desirable good, some agents may be incentivized to pretend that they envy others although they do not.\footnote{We will formally discuss the details in Example \ref{ex:lying}.} To prevent agents from misreporting, we design a polynomial-time implementable mechanism, the so-called \emph{subsidized egalitarian} (SE) mechanism, which resembles the classic VCG in the sense that it \emph{punishes} agents who may potentially decrease others' valuations. At a high level, the mechanism hypothetically distributes $1$ dollar to each agent upfront and implements the auction over the set of clean Lorenz dominating allocations ($\cLD$). In contrast with the PE mechanism, the actual allocation can be taken \emph{arbitrarily} from these allocations; then, each agent who benefits from the allocation pays $1$ dollar back to the mechanism designer. Note that the total amount of subsidy required by the SE mechanism is at most $n-1$, which cannot be improved as a worst-case guarantee.\footnote{Consider one item and $n$ agents. If all agents desire the single item, a subsidy of $1$ must be given to every agent but the one who gets the item to achieve envy-freeness.} A further, perhaps surprising, remark is that, in the output of the SE mechanism, the utility of each agent \emph{does not} change depending on the choice of the allocation.
Technically, we build an essential connection between discrete convex analysis and fair division. Indeed, our result crucially rests on the underlying structural property of clean Lorenz dominating allocations: when agents have matroidal valuations, the valuation vectors associated with the allocations in $\cLD$ enjoy matroidal M-convexity, allowing one to obtain a new clean Lorenz dominating allocation from another via natural exchange operations. In Section~\ref{sec:withoutfreedisposal}, we further discuss the setting without the free-disposal assumption. It is often assumed that the mechanism can throw away any part of the resource and may thus leave some items unallocated; however, there are several practical scenarios wherein this is not ideal. Examples include allocating shifts to medical workers and assigning tasks to employees. Unfortunately, we observe that, even when agents have binary additive valuations, no truthful and envy-free mechanism allocates all items and returns a Lorenz dominating allocation with each agent being subsidized by at most $1$. However, dropping the truthfulness requirement, we show that, for matroidal valuations, there is a polynomial-time algorithm that accomplishes envy-freeness and utilitarian optimality while each agent is subsidized by at most $1$ and all items are allocated to some agent. Of independent interest, we also prove in the Appendix that the resulting allocation of the algorithm satisfies an approximate fairness notion, called envy-freeness up to any good (EFX). If agents are broadly expressive (i.e., the family of agents' valuations satisfies the so-called \emph{convexity} condition\footnote{See Definition $1$ of \cite{holmstrom:econometrica:1979}.}), Groves mechanisms are known to be the unique family of mechanisms that satisfy truthfulness and utilitarian optimality \cite{holmstrom:econometrica:1979}. Hence, our hopes are centered on such mechanisms for a rich class of valuations. Although VCG fails to satisfy envy-freeness for monotone submodular valuations \cite{Feldman2012}, P\'{a}pai \cite{papai:scw:2003} showed that it is envy-free when agents have superadditive valuations, i.e., when agents' preferences do not exhibit substitutability. These results have immediate implications for our setting. In Section \ref{sec:superadditive}, we show that, for superadditive valuations, there is a truthful mechanism that achieves envy-freeness and utilitarian optimality, with each agent receiving a subsidy of at most $m$; furthermore, we show that the amount $m$ is necessary even when agents have additive valuations. In Section \ref{sec:general}, we observe that, even if an arbitrarily large amount of money is available for use, no mechanism can achieve truthfulness, envy-freeness, and utilitarian optimality simultaneously. Figure~\ref{fig:valuation} illustrates the relationship among the classes of valuation functions. \vspace{2em} \begin{figure}[ht] \centering \begin{tikzpicture}[xscale=0.8, yscale=0.8, transform shape,every node/.style={minimum size=6mm, inner sep=1pt}] \draw[rotate=-5,fill=gray!10] (-2,0) ellipse (3.8cm and 2.5cm); \draw[gray,ultra thick,fill=gray, fill opacity=0.6] (1,-0.2) ellipse (2.6cm and 1.2cm); \draw[rotate=5] (1,0) ellipse (3.5cm and 2.2cm); \node[opacity=0.75] at (0,-0.5){\bf Binary Additive}; \node at (0,1.5){\bf Additive}; \node at (1.2,0.5){\Large \bf Matroidal}; \node at (2.6,1.6){\bf Submodular}; \node at (-3.8,1.6){\bf Superadditive}; \end{tikzpicture} \caption{Classes of valuation functions with which we deal in this paper.
The SE mechanism applies to the class of matroidal valuations (dark gray area), whereas VCG applies to the class of superadditive valuations (light gray area). } \label{fig:valuation} \end{figure} \subsection{Related work} The idea of compensating an indivisible resource allocation with money has been prevalent in the classical economics literature \cite{Alkan1991,Maskin1987,Klijn,moulin2004fair,SunYang2003,Svensson1983,Tadenuma1993}; much of the classical work has focused on the unit-demand case in which each agent is allocated at most one good. Examples include the famous rent-division problem of assigning rooms to several housemates and dividing the rent among them \cite{SuRentalHarmony}. It is known that, for a sufficient amount of subsidy, an envy-free allocation exists \cite{Maskin1987} and can be computed in polynomial time \cite{Aragones,Klijn}. Most of the classical literature, however, has not considered a situation in which the number of items to be allocated exceeds the number of agents, in contrast to the rich body of recent literature on the multi-demand fair division problem. Halpern and Shah \cite{HalpernShah2019} recently extended the model to the multi-demand setting wherein each agent can be assigned multiple items. Despite the existence of numerous related papers, \cite{HalpernShah2019} is the first work to study the asymptotic bounds on the amount of subsidy required to achieve envy-freeness. Halpern and Shah \cite{HalpernShah2019} showed that, for binary additive valuations, an allocation that maximizes the Nash welfare (MNW) can be made envy-free with a subsidy of at most $1$ for each agent. They further conjectured that, for general additive valuations in which the value of each item is at most $1$, giving at most $1$ to each agent is sufficient to eliminate envy. Brustle et al.~\cite{Brustle2020} affirmatively settled this conjecture by designing an algorithm that iteratively solves a maximum-matching instance; note that our work is the first to show that, for valuations that are not necessarily additive, envy-freeness and completeness can be accomplished by giving each agent a subsidy of at most $1$. Caragiannis and Ioannidis \cite{caragiannis2020computing} studied the computational complexity of approximating the minimum amount of subsidies required to achieve envy-freeness. Aziz \cite{Aziz2020} considered another fairness requirement, the so-called \emph{equitability}, in conjunction with envy-freeness and characterized the allocations that can be made both equitable and envy-free with money. Closely related to the present study are the works of \cite{babaioff2020fair,benabbou2020}, which study the fair allocation of indivisible items with matroidal valuations. The PE mechanism of \cite{babaioff2020fair} returns an allocation that maximizes the Nash welfare and achieves envy-freeness up to any good (EFX) and utilitarian optimality. Benabbou et al.~\cite{benabbou2020} focused more upon the balance between efficiency and fairness. They showed that, when agents have matroidal valuations, leximin allocations are equivalent to MNW allocations, which, together with the result of Babaioff et al.~\cite{babaioff2020fair}, implies that MNW allocations are Lorenz dominating. The case in which (possibly) multiple items can be allocated to each agent while the agents pay some amount of money to the mechanism designer (or a special agent called the seller) is extensively studied in combinatorial auctions \cite{cramton:2005}.
A representative mechanism is the well-known VCG mechanism, which is truthful and utilitarian optimal. Envy-freeness is not a central issue in combinatorial auctions, with a notable exception presented by P\'{a}pai \cite{papai:scw:2003}. The results obtained by P\'{a}pai \cite{papai:scw:2003} can be applied to cases wherein each agent receives a non-negative amount of money (subsidy). Our paper is at the intersection of discrete convex analysis and economics. Recent advances in discrete convex analysis have found a variety of applications in economics, including exchange economies with indivisible goods \citep{Murota:SIAM:2003,MT:dca:2003,SY:dca:2013}, inventory management \citep{Zipkin:Inventory:2008,Huh:Inventory:2010}, auctions \citep{MSY:auction:2013}, and two-sided matching \citep{murota:metr:2013,kty:2018}.\footnote{See the survey paper by Murota~\cite{murota:dca:2016}.} As this long but incomplete list suggests, techniques from this literature are useful for a wide variety of problems. We add fair division problems with a limited subsidy to this list. \section{Model}\label{sec:model} We model fair division with a subsidy as follows. For $k \in \mathbb{N}$, we denote $[k]=\{1,\ldots,k\}$. Let $N=[n]$ be the set of the $n$ given agents and let $M=[m]$ be the set of the $m$ given indivisible goods. Each agent $i$ has a \emph{valuation function} $v_i : 2^M \rightarrow \mathbb{R}_+$ with $v_i(\emptyset)=0$, where $\mathbb{R}_+$ is the set of non-negative reals. For notational simplicity, we write $v_i(e)$ instead of $v_i(\{e\})$ for all $e \in M$. In this paper, we assume that valuation functions are \emph{monotone}: $v_i(X) \leq v_i(Y)$ for any $X \subseteq Y \subseteq M$. \subsubsection*{Valuation functions} We focus upon the following classes of valuation functions: \begin{description} \item[General:] We assume that the maximum valuation is bounded, i.e., $v_i(M) \leq m$ holds for all $i \in N$; \item[Superadditive:] A special case of the general valuations, where $v_i(X) + v_i(Y) \leq v_i(X\cup Y)$ holds for any $i \in N$ and any $X, Y \subseteq M$ with $X \cap Y = \emptyset$; \item[Additive:] A special case of the superadditive valuations, where $v_i(X)=\sum_{e\in X} v_i(e)$ holds for any $X \subseteq M$, $i\in N$; \item[Binary additive:] A special case of the additive valuations, where $v_i(e)\in \{0,1\}$ for any $e \in M$ and $i \in N$; \item[Matroidal:] A superclass of the binary additive valuations, where (i) the marginal contribution $v_i(X\cup \{e\})-v_i(X)$ is either 0 or 1 for all $X \subsetneq M$ and $e\in M\setminus X$, and (ii) $v_i$ is submodular, i.e., $v_i(X)+v_i(Y) \geq v_i(X\cup Y)+v_i(X\cap Y)$ holds for all $X, Y \subseteq M$. \end{description} We remark that a matroidal valuation function is a rank function of a matroid\footnotemark. For a matroidal valuation $v_i$, each set $X \subseteq M$ such that $v_i(X)=|X|$ is called an \emph{independent set}. \footnotetext{Let $E$ be a finite set and let $r: 2^E \rightarrow \mathbb{Z}_+$. A set system $(E, r)$ is a \emph{matroid} if for all $X, Y \subseteq E$, (i) $X \subseteq Y \Rightarrow$ $r(X) \leq r(Y) \leq |Y|$, and (ii) $r(X)+r(Y) \geq r(X\cup Y)+r(X\cap Y)$. If $(E, r)$ is a matroid, then $r$ is called a \emph{rank} function.} \subsubsection*{Allocations} An allocation of goods is an ordered subpartition of $M$ into $n$ bundles. We denote an allocation by $A=(A_1, \ldots, A_n)$ such that $A_i \subseteq M$ for all $i \in N$ and $A_i \cap A_j = \emptyset$ for any $i \neq j$.
In allocation $A$, agent $i$ receives a bundle $A_i$ of goods. We will deal with two types of allocation: (1) a \emph{complete} allocation (that is, every good must be allocated to some agent), and (2) an incomplete allocation (that is, we can leave some goods unallocated). We now introduce the notions of efficiency used in this paper. We say that $A$ is \emph{Pareto optimal} if there exists no allocation $A'$ such that $v_i(A_i)\leq v_i(A'_i)$ for all $i\in N$ and $v_i(A_i) < v_i(A'_i)$ for some $i\in N$. The {\em utilitarian social welfare} of an allocation $A$ is $\sum_{i\in N} v_i(A_i)$, and $A$ is a \emph{utilitarian optimal} allocation if it maximizes the utilitarian social welfare among all allocations. A refinement of utilitarian optimality is Lorenz dominance: given allocations $A$ and $B$, we say that $A$ \emph{Lorenz dominates} $B$ if, for every $k\in[n]$, the sum of the smallest $k$ values in $(v_1(A_1),\ldots, v_n(A_n))$ is at least as large as that of $(v_1(B_1),\ldots, v_n(B_n))$; that is, if $v_{i_1}(A_{i_1})\leq \dots \leq v_{i_n}(A_{i_n})$ and $v_{j_1}(B_{j_1})\leq \dots \leq v_{j_n}(B_{j_n})$ (where $\{i_1,\dots,i_n\}=\{j_1,\dots,j_n\}=[n]$), then $\sum_{\ell=1}^k v_{i_\ell}(A_{i_\ell}) \geq \sum_{\ell=1}^k v_{j_\ell}(B_{j_\ell})$ holds for each $k$. For instance, the valuation vector $(1,2,2)$ Lorenz dominates $(0,2,3)$, as the prefix sums of the sorted values are $(1,3,5)$ and $(0,2,5)$, respectively. A \emph{Lorenz dominating allocation} is an allocation that Lorenz dominates every other allocation. The following proposition follows from the definition of Lorenz dominance with $k=n$. \begin{proposition} Every Lorenz dominating allocation is utilitarian optimal. \end{proposition} Lorenz dominance is also an egalitarian fairness notion in the sense that the least happy agent is made as happy as possible. Another solution concept that often achieves the sweet spot of efficiency and fairness is a {maximum Nash welfare (MNW)} allocation \cite{CKM+16a,babaioff2020fair,benabbou2020}. We say that $A$ is a \emph{maximum Nash welfare (MNW)} allocation if it maximizes the number of agents receiving positive utility and, subject to that, maximizes the product of the positive utilities, i.e., $\prod_{i\in N:\, v_i(A_i)>0} v_i(A_i)$. It is known that for matroidal valuations, MNW allocations coincide with Lorenz dominating allocations\footnote{More generally, the set of Lorenz dominating allocations is equivalent to that minimizing a symmetric strictly convex function~\cite{FM19,benabbou2020}.} \cite{benabbou2020,babaioff2020fair}. Note that a Lorenz dominating allocation always exists for matroidal valuation functions, whereas it may not exist in general. To find efficient allocations, it is often necessary to avoid redundancy in allocations. An allocation $A$ is called {\em clean} if $v_i(A_i\setminus\{e\})<v_i(A_i)$ for any $i\in N$ and $e\in A_i$. Note that any allocation can be transformed into a clean one without changing valuations by removing items of zero marginal gain from the respective agents' bundles. For matroidal valuations, $A$ is clean if and only if $v_i(A_i)=|A_i|$ for each $i \in [n]$ (see also~\cite{benabbou2020}); thus, we see that for matroidal valuations, an allocation $A$ is clean Lorenz dominating if and only if for every clean allocation $B$, the total size of the smallest $k$ bundles in $A$ is at least as large as that of $B$ for each $k\in[n]$. This fact will be used in Section~\ref{sec:matroidal}.
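To make these definitions concrete, the following minimal Python sketch implements a matroidal valuation as the rank function of a partition matroid and tests Lorenz dominance via prefix sums of sorted values; the blocks, capacities, and valuation vectors below are a hypothetical illustration rather than an instance taken from any result in this paper.

\begin{verbatim}
from itertools import accumulate

def partition_matroid_rank(blocks, capacities):
    """Rank function of a partition matroid: a bundle's value counts
    its items within each block, capped at the block's capacity.
    Such a function has 0/1 marginal gains and is submodular,
    hence matroidal."""
    def v(bundle):
        return sum(min(len(bundle & B), c)
                   for B, c in zip(blocks, capacities))
    return v

def lorenz_dominates(u, w):
    """True iff valuation vector u (weakly) Lorenz dominates w, i.e.,
    every prefix sum of the sorted values of u is at least that of w."""
    pu, pw = accumulate(sorted(u)), accumulate(sorted(w))
    return all(a >= b for a, b in zip(pu, pw))

# Hypothetical instance: items 1..4, blocks {1,2} and {3,4}, capacity 1.
v1 = partition_matroid_rank([{1, 2}, {3, 4}], [1, 1])
assert v1({1, 2}) == 1 and v1({1, 3}) == 2  # value capped within a block
assert lorenz_dominates([1, 2, 2], [0, 2, 3])
\end{verbatim}

Valuations of this form arise, for instance, from task types with capacity constraints as in the applications discussed above.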
\subsubsection*{Fairness with a subsidy} Our goal is to achieve an envy-free allocation of indivisible goods using a limited amount of \emph{subsidy}, which is an additional divisible good. We denote by $p=(p_1, \ldots, p_n)\in \mathbb{R}_+^n$ a subsidy vector, whose $i$th entry $p_i$ is the amount of subsidy received by agent $i$. For an allocation $A$ and a subsidy vector $p$, we call $(A,p)$ an allocation with a subsidy; we assume that each agent has a standard quasi-linear utility, i.e., the utility of agent $i$, who obtains a bundle $X$ and subsidy $p_i$, is equal to $v_i(X) + p_i$. Envy-freeness for an allocation with a subsidy is defined as follows: \begin{definition} An allocation with a subsidy $(A, p)$ is \emph{envy-free} if $v_i(A_i)+p_i \geq v_i(A_j)+p_j$ for all agents $i,j\in N$. \end{definition} An allocation $A$ is called {\em envy-freeable} if there exists a subsidy vector $p$ such that $(A, p)$ is envy-free. Halpern and Shah \cite{halpern2020fair} characterized envy-freeable allocations using envy graphs, defined as follows. For an allocation $A$, its envy graph $G_A$ is the complete weighted directed graph whose node set is the agent set $N$; for each $i,j\in N$, the arc $(i,j)$ has weight $w(i,j)=v_i(A_j)-v_i(A_i)$, which represents the amount of envy that $i$ has towards $j$. This value can be negative if $i$ prefers their own bundle to $j$'s bundle. A {\em walk} $Q$ in $G_A$ is a sequence of nodes $(i_1,i_2,\dots,i_k)$, and its weight is defined as $w(Q)=\sum_{t=1}^{k-1}w(i_t,i_{t+1})$. A walk is a {\em path} if all nodes are distinct, and a {\em cycle} if $i_1,i_2,\dots,i_{k-1}$ are all distinct and $i_1=i_k$. The following theorem combines Theorems~1 and 2 of \cite{halpern2020fair}. \begin{theorem}[Halpern and Shah \cite{halpern2020fair}]\label{thm:envy-graph} For any allocation $A=(A_1,\dots,A_n)$ and any $q\in \mathbb{R}_+$, the following two statements are equivalent: \begin{itemize} \item[(a)] $A$ is envy-freeable with a subsidy of at most $q$ for each agent. \item[(b)] $G_A$ has neither a positive-weight cycle nor a path with a weight larger than $q$. \end{itemize} When (b) holds, if we set $p_i$ as the maximum weight of any path starting at $i$ in $G_A$ for each $i\in N$, then $(A,p)$ is envy-free. \end{theorem} The equivalence of envy-freeability and the nonexistence of positive-weight cycles in $G_A$ is shown in Theorem~1 of Halpern and Shah~\cite{halpern2020fair}. The relationship between the bound of subsidy and the maximum weights of paths in $G_A$ follows from their Theorem~2. \subsubsection*{Mechanisms} In each subsequent section, we assume that the valuation function of each agent is taken from some specified function class $V$. For example, in Section~\ref{sec:matroidal}, we let $V$ be the set of all matroidal functions on $M$. A {\em valuation profile}, or just a {\em profile}, is a tuple $(v_1,\dots,v_n)\in V^N$ of the valuation functions of all the agents in $N$. For resource allocation with a subsidy, a {\em mechanism} is a mapping from a valuation profile to an allocation with a subsidy. A mechanism first asks each agent to report a valuation function and then outputs an allocation with a subsidy on the basis of the reported valuations. Note that the reported valuations may differ from the true ones: some agents may have an incentive to report a false valuation function to obtain a larger utility. To prevent such manipulation, truthfulness is a standard requirement for mechanisms. A mechanism is \emph{truthful} if reporting the true valuation function maximizes the agent's utility, given the fixed reports of the other agents.
A more precise definition is as follows: for every agent $i$, every profile $(v_1,\dots,v_n)\in V^N$, and every $v'_i\in V$, if we denote by $(A, p)$ and $(A',p')$ the outputs of the mechanism for the profiles $(v_1,\dots,v_i,\dots,v_n)$ and $(v_1,\dots, v'_i,\dots,v_n)$, respectively, then $v_i(A_i)+p_i\geq v_i(A'_i)+p'_i$. We say that a mechanism satisfies property P if it outputs an allocation satisfying P. For example, a mechanism satisfies envy-freeness if it outputs an envy-free outcome, and similarly for other properties such as MNW, completeness, and utilitarian optimality. \section{Matroidal valuations}\label{sec:matroidal} In the applications explained previously, such as shift scheduling, goods usually have substitute properties; therefore, we are interested in the setting with submodular valuation functions. For such a setting, can we design a mechanism that simultaneously achieves truthfulness, efficiency, and fairness with a small amount of subsidy? In general, the impossibility result for combinatorial auctions applies to monotone submodular valuations \cite{Feldman2012}; we are, however, able to answer this question affirmatively for the class of matroid rank valuations, i.e., submodular functions with dichotomous marginals. By setting the domain $V$ of the valuation functions to be the class of matroidal functions, we can show that giving a subsidy of at most $1$ to each agent suffices to accomplish these goals. Note that such a mechanism has not been shown to exist even for binary additive valuations. Our main theorem in this section is stated as follows: \begin{theorem}\label{thm:matroidrank:truthful} For matroidal valuations, there is a polynomial-time implementable mechanism that is truthful, utilitarian optimal, and envy-free with each agent receiving subsidy $0$ or $1$, and the total subsidy being at most $n-1$. \end{theorem} Before presenting a mechanism, let us illustrate how the problem can become tricky even when agents have binary additive valuations: the following simple example shows that we have to give some subsidy to agents who want nothing. \begin{example}\label{ex:lying} Consider two agents $N=\{1,2\}$ and one item $M=\{e\}$ with each agent either wanting the item or not (i.e., the valuation for the item is either $0$ or $1$). Suppose that there is a mechanism that is truthful, envy-free, and utilitarian optimal. Consider two profiles $P_1$ and $P_2$. In $P_1$, both agents desire the single item. In this case, the outcome must be such that one agent receives the item and the other receives nothing. Without loss of generality, we assume that agent $1$ receives the item. By envy-freeness, agent $2$ must obtain at least $1$ subsidy. In $P_2$, agent $1$ reports that she wants the item but agent $2$ does not; then, by utilitarian optimality, the item must be allocated to agent $1$, who desires it. Now, it appears that no subsidy is needed in $P_2$ because the agents do not envy each other. However, it turns out that we \emph{do} have to subsidize the agent who wants nothing; otherwise, agent $2$ would benefit by misreporting that she wants the item. \end{example} Note that, for a Lorenz dominating allocation, we can easily compute the amount of subsidy required to make it envy-free by Theorem~\ref{thm:envy-graph}; however, as we observed in Example~\ref{ex:lying}, the mechanism would have to account for an exponential number of profiles if it aimed to compute the minimum amount of additional subsidy needed to achieve truthfulness. Rather, the mechanism ``generously'' distributes subsidies.
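As an aside, the subsidy vector of Theorem~\ref{thm:envy-graph} is indeed easy to compute for a given envy-freeable allocation. The following minimal Python sketch (the two-agent instance mirrors Example~\ref{ex:lying} and is otherwise hypothetical) builds the envy graph from the values $v_i(A_j)$ and sets $p_i$ to the maximum weight of a path starting at $i$; a Floyd--Warshall-style longest-path computation suffices because envy-freeability rules out positive-weight cycles, so maximum-weight walks and maximum-weight paths coincide.

\begin{verbatim}
def envy_graph_subsidies(values):
    """values[i][j] = v_i(A_j).  Assuming the allocation is
    envy-freeable (no positive-weight cycle in its envy graph),
    return p with p_i = max weight of a path starting at i."""
    n = len(values)
    # arc weight w(i, j) = v_i(A_j) - v_i(A_i): envy of i towards j
    w = [[values[i][j] - values[i][i] for j in range(n)]
         for i in range(n)]
    best = [row[:] for row in w]   # best[i][j]: max-weight i-j walk
    for k in range(n):             # Floyd-Warshall on (max, +)
        for i in range(n):
            for j in range(n):
                best[i][j] = max(best[i][j],
                                 best[i][k] + best[k][j])
    # the trivial single-node path has weight 0, so p_i >= 0
    return [max(0, max(best[i])) for i in range(n)]

# Both agents value the single item at 1 and agent 1 receives it:
print(envy_graph_subsidies([[1, 0], [1, 0]]))   # -> [0, 1]
\end{verbatim}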
Our mechanism, which we refer to as \emph{subsidized egalitarian} (SE), proceeds as follows. First, it arbitrarily chooses a clean Lorenz dominating allocation; such an allocation coincides with a clean MNW allocation and is thus guaranteed to exist under matroidal valuations~\cite{babaioff2020fair,benabbou2020}. Then, it gives a subsidy of $1$ to exactly those agents whose allocated bundle has a value that is (i) the same as in the worst (clean) Lorenz dominating allocation for that agent and (ii) not the largest among the agents. The mechanism thus ensures that the utility of agent $i$ is equal to the valuation of the worst clean Lorenz dominating allocation plus $1$ whenever she is not one of the agents receiving the largest bundle. Recall that, for matroidal valuations, an allocation $A$ is clean if and only if $v_i(A_i)=|A_i|$ for any $i\in N$. For a profile $P=(v_1,\dots,v_n)$, let $\cLD[P]$ be the set of clean Lorenz dominating allocations. To ease notation, we often omit the argument $P$ when no confusion can arise. Formally, our mechanism is summarized as follows. \begin{tcolorbox}[title=Subsidized Egalitarian, left=0mm] \begin{enumerate}[label=\textbf{Step \arabic*.},leftmargin=*] \item Allocate items according to an arbitrarily chosen $A \in \cLD$. \item Give $1$ subsidy to each $i\in N$ if (i)~$|A_i|=\min_{B\in\cLD}|B_i|$ and (ii)~$|A_i|<\max_{j\in N}|A_j|$. \end{enumerate} \end{tcolorbox} The mechanism returns a utilitarian optimal allocation because every Lorenz dominating allocation is utilitarian optimal. Clearly, the subsidy for each agent is $0$ or $1$. The total subsidy is at most $n-1$ since at least one agent (who receives $\max_{j\in N}|A_j|$ items) gets no subsidy. Remarkably, we observe that the difference between the valuations of the best and the worst Lorenz dominating allocations is at most one for every agent (Proposition~\ref{prop:01}) and that the utility of each agent does not change with the choice of an allocation in Step~1 (Proposition~\ref{prop:SE_uu}). Hence, the utility of each agent is at least the valuation of the best clean Lorenz dominating allocation. Here, we note that the SE mechanism imposes the condition (ii)~$|A_i|<\max_{j\in N}|A_j|$ in Step 2 to avoid giving all agents subsidy $1$. In fact, a variant of the SE mechanism in which condition (ii)~is removed fulfills all properties required by Theorem~\ref{thm:matroidrank:truthful} except that the total subsidy is at most $n$, instead of $n-1$. It is also worth noting that, without subsidy, simply picking an arbitrary allocation in $\cLD$ does not guarantee truthfulness \cite[Example 4]{babaioff2020fair}. We will prove that the SE mechanism satisfies the desired properties in Theorem~\ref{thm:matroidrank:truthful} through the following steps. First, we will provide the structural properties of $\cLD$ in Section~\ref{subsec:structure} and prove that the SE mechanism is polynomial-time implementable in Lemma~\ref{lem:SE_poly}. We will further show that the mechanism is envy-free and truthful in Lemmas~\ref{lem:SE_EF} and~\ref{lem:SE_truthful}, respectively. Throughout, we assume that all agents have matroidal valuations.
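The following brute-force Python sketch may help to make the mechanism concrete. It enumerates all $(n+1)^m$ assignments and is therefore usable only on tiny instances, and the encoding of valuations as Python callables is our own illustrative choice; the polynomial-time implementation is the subject of Lemma~\ref{lem:SE_poly}.

\begin{verbatim}
from itertools import product, accumulate

def subsidized_egalitarian(vals, m):
    """Brute-force sketch of the SE mechanism (tiny instances only).
    vals: matroidal valuations on frozensets of items 0..m-1."""
    n = len(vals)
    clean = []
    for assign in product(range(n + 1), repeat=m):  # n: unallocated
        A = [frozenset(e for e in range(m) if assign[e] == i)
             for i in range(n)]
        if all(vals[i](A[i]) == len(A[i]) for i in range(n)):
            clean.append(A)                         # cleanness check
    # Clean Lorenz dominating allocations: prefix sums of sorted
    # bundle sizes are componentwise maximal (these exist for
    # matroidal valuations).
    def prefix(A):
        return tuple(accumulate(sorted(len(B) for B in A)))
    top = tuple(max(c) for c in zip(*map(prefix, clean)))
    cLD = [A for A in clean if prefix(A) == top]
    A = cLD[0]                                      # Step 1: arbitrary
    largest = max(len(B) for B in A)
    p = [int(len(A[i]) == min(len(B[i]) for B in cLD)
             and len(A[i]) < largest)
         for i in range(n)]                         # Step 2
    return A, p

# Example-style instance: both agents want the single item 0.
want = lambda S: (lambda X: len(X & S))   # binary additive valuation
print(subsidized_egalitarian([want({0}), want({0})], 1))
# -> one agent gets the item; the other gets subsidy 1
\end{verbatim}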
\subsection{Structure of clean Lorenz dominating allocations}\label{subsec:structure} As a preparation for the proof of Theorem~\ref{thm:matroidrank:truthful}, we introduce some notation. For an allocation $A$, let $\mathrm{sv}(A)$ be the size vector $(|A_1|,\dots,|A_n|)$ and let $\mathrm{sv}^\uparrow(A)$ be the vector obtained from $\mathrm{sv}(A)$ by rearranging its components in increasing order. Recall that a clean allocation $A$ is Lorenz dominating if and only if for each clean allocation $B$, it holds that $\sum_{i=1}^k\mathrm{sv}^\uparrow(A)_i\ge \sum_{i=1}^k\mathrm{sv}^\uparrow(B)_i$ for each $k\in [n]$. Note that $\mathrm{sv}^\uparrow(A)$ is unique across all $A\in\cLD[P]$ according to the definition of $\cLD[P]$. For any finite set $E$ and any $i\in E$, the characteristic vector $\chi_{i}$ is the $E$-dimensional vector whose $i$-th entry is $1$ and whose other entries are all $0$. For two vectors $x, y\in \mathbb{Z}^E$, we define $\supp^+(x-y)\coloneqq \{i\in E\mid x(i)>y(i)\}$ and $\supp^-(x-y)\coloneqq\{i\in E\mid x(i)<y(i)\}$. For a valuation function $v_i$ and $X \subseteq M$, the set function $\restr{v_i}{X}$ defined as $\restr{v_i}{X}(Y)=v_i(X\cap Y)$ for all $Y\subseteq M$ is called the restriction of $v_i$ to $X$. Recall that, for a matroidal valuation function $v_i$, a subset $X\subseteq M$ is called {\em independent} if $v_i(X)=|X|$. The family of independent sets of any matroidal function is known to satisfy the following {\em augmentation property}: if both $X$ and $Y$ are independent and $|X|<|Y|$, then there exists an item $e\in Y\setminus X$ such that $X\cup\{e\}$ is also independent. A maximal independent set is called a {\em base}; by the augmentation property, all bases have the same cardinality. \smallskip We first present the following lemma, shown in the proof of \cite[Lemma 17]{babaioff2020fair}, concerning an operation that moves an allocation closer to another allocation in terms of size vectors. Note that this operation can be interpreted as an augmenting path in the exchange graph of a matroid intersection (see, e.g., \cite{Schrijver2003} for details).
For an allocation $A$, we use $A_0$ to denote the set of unallocated items $M\setminus\bigcup_{i\in N}A_i$. \begin{lemma}\label{lem:transfer} Let $A$ and $B$ be two clean allocations, and let $i$ be an agent. If $|A_i|>|B_i|$, there exists a sequence of clean allocations $C^0,C^1,\dots,C^r$ with the following properties: \begin{enumerate}[label={\rm(\roman*)}] \item $C^0=B$, $k^0=i$, \item $e^{t}$ is an item such that $e^{t}\in A_{k^{t-1}}\setminus C^{t-1}_{k^{t-1}}$ and $C^{t-1}_{k^{t-1}}\cup\{e^t\}$ is independent for $v_{k^{t-1}}$ ($t=1,\dots,r$),\label{condition:item} \item $k^t\in N\cup\{0\}$ is the index such that $e^t\in C^{t-1}_{k^t}$ ($t=1,\dots,r$), \item $C^{t}$ is the allocation that is obtained from $C^{t-1}$ by transferring $e^{t}$ from $k^{t}$ to $k^{t-1}$ ($t=1,\dots,r$), \item $|A_{k^r}|<|B_{k^r}|$. \end{enumerate} \end{lemma} \begin{proof} We can find the sequence $C^0,C^1,\dots,C^r$ by iteratively selecting arbitrary items $e^1,e^2,\dots,e^r$ satisfying condition \ref{condition:item} of Lemma~\ref{lem:transfer}. In fact, if $|A_{k^{t-1}}|\ge |B_{k^{t-1}}|$, we have $|A_{k^{t-1}}|>|C^{t-1}_{k^{t-1}}|$ because $|B_{k^{t-1}}| > |C^{t-1}_{k^{t-1}}|$ for $t\ge 2$ (and $C^0_{k^0}=B_i$ with $|A_i|>|B_i|$ for $t=1$); hence, there exists $e^{t}$ such that $e^{t}\in A_{k^{t-1}}\setminus C^{t-1}_{k^{t-1}}$ and $C^{t-1}_{k^{t-1}}\cup\{e^{t}\}$ is independent for $v_{k^{t-1}}$ by the matroid augmentation property. This procedure terminates in a finite number of steps because $\sum_{i\in N}|A_i\bigtriangleup C_i^t|$ is strictly monotone decreasing with respect to $t$. \end{proof} Note that $\mathrm{sv}(C^t)=\mathrm{sv}(C^0)+\chi_{k^0}-\chi_{k^t}$ if $k^t\in N$; additionally, if allocation $B$ is utilitarian optimal, then $k^r$ must be in $N$. A key structure of $\cLD$ is the \emph{M-convex} structure of size vectors.
A non-empty set $S\subseteq\mathbb{Z}^E$ is said to be \emph{M-convex} if it satisfies the following (simultaneous) \emph{exchange property}: \begin{description} \item[(B-EXC)] For any $x,y\in S$ and for any $i\in\supp^+(x-y)$, there exists some $j\in\supp^-(x-y)$ such that $x-\chi_i+\chi_j\in S$ and $y+\chi_i-\chi_j\in S$. \end{description} It is known that M-convex sets are also characterized in terms of the following (seemingly weaker but actually equivalent) exchange property~\cite{Murota1998}: \begin{description} \item[(B-EXC${}_+$)] For any $x,y\in S$ and for any $i\in\supp^+(x-y)$, there exists some $j\in\supp^-(x-y)$ such that $y+\chi_i-\chi_j\in S$. \end{description} An M-convex set $S$ is said to be \emph{matroidal M-convex} if $|x_e-y_e|\le 1$ for any $x,y\in S$ and any $e\in E$.\footnote{In other words, an M-convex set is matroidal if it is obtained from some matroid on $E$ by translating the characteristic vectors of the bases by the same integral vector.} Lemma~\ref{lem:transfer} implies that the sets of size vectors of the clean allocations and of the clean utilitarian optimal allocations satisfy (B-EXC${}_+$). \begin{lemma}\label{lem:M} The following sets are M-convex: \begin{align} &S_1=\bigl\{(|A_0|,|A_1|,\dots,|A_n|)\in\mathbb{Z}^{N\cup\{0\}}\mid \text{$A$ is a clean allocation}\bigr\} \quad\text{and} \label{eq:M0}\\ &S_2=\bigl\{(|A_1|,\dots,|A_n|)\in\mathbb{Z}^{N}\mid \text{$A$ is a clean utilitarian optimal allocation}\bigr\}.\label{eq:M1} \end{align} \end{lemma} Note that, for each of the above M-convex sets $S_i$ ($i=1,2$), the following problems are solvable in polynomial time via matroid intersection~\cite{Edmonds1970}: \begin{description} \item[(Initialization)] computing an element of $S_i$, and \item[(Membership)] deciding whether a given size vector is in $S_i$. \end{description} Also, for a given vector $x$ in $S_1$ or $S_2$, there is a polynomial-time algorithm that finds an allocation whose size vector is equal to $x$; indeed, we can find such an allocation by computing a clean utilitarian optimal allocation for the profile $P'=(v_1',\dots,v_n')$ such that $v_i'(X)=\min\{v_i(X),\,x_i\}$ for each $i\in N$ and $X\subseteq M$. Frank and Murota~\cite[Theorem 5.7]{FM19} proved that the set of Lorenz dominating elements\footnotemark of an M-convex set is a matroidal M-convex set. \footnotetext{For a given set of vectors, a Lorenz dominating element is an element such that the smallest entry is as large as possible; within this, the next smallest entry is as large as possible; and so on.} Furthermore, they showed that, in a matroidal M-convex set, a Lorenz dominating element that minimizes a linear function can be found in polynomial time if (Initialization) and (Membership) for the M-convex set can be solved in polynomial time. By combining this with the fact that $S_2$ is an M-convex set, we obtain the following lemma. \begin{lemma}\label{lem:Mdmin} The set of size vectors corresponding to clean Lorenz dominating allocations $S^*\coloneqq\{\mathrm{sv}(A)\mid A\in\cLD\}$ is a matroidal M-convex set. Additionally, for a given weight $w\in\mathbb{R}^N$, a minimum-weight clean Lorenz dominating allocation $\argmin_{s\in S^*}\sum_{i\in N}w_is_i$ can be found in polynomial time. \end{lemma}
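Exchange properties of this kind are easy to test by brute force on small instances. The following Python sketch checks (B-EXC) for an explicitly given finite set of integer vectors; the three size vectors at the end are a hypothetical illustration of the structure described in Lemma~\ref{lem:Mdmin}, not an instance from this paper.

\begin{verbatim}
def is_m_convex(S):
    """Brute-force check of the simultaneous exchange property
    (B-EXC) for a finite set S of equal-length integer vectors."""
    S = set(S)
    def move(x, i, j):            # the vector x - chi_i + chi_j
        y = list(x); y[i] -= 1; y[j] += 1
        return tuple(y)
    for x in S:
        for y in S:
            for i in (k for k in range(len(x)) if x[k] > y[k]):
                if not any(move(x, i, j) in S and move(y, j, i) in S
                           for j in range(len(x)) if x[j] < y[j]):
                    return False
    return True

# Hypothetical size vectors of clean Lorenz dominating allocations;
# each coordinate varies by at most one, as in a matroidal M-convex set.
print(is_m_convex({(1, 2, 2), (2, 1, 2), (2, 2, 1)}))   # -> True
\end{verbatim}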
Since $S^*$ is a matroidal M-convex set, the difference between the values of the best and the worst clean Lorenz dominating allocations for each agent is at most one. \begin{proposition}\label{prop:01} \begin{align} \max_{B\in\cLD}|B_i|-\min_{C\in\cLD}|C_i|\in\{0,1\} \quad \text{for any $i\in N$}. \end{align} \end{proposition} Combining this with the fact that $\mathrm{sv}^\uparrow(A)$ is unique across all $A\in\cLD$, we obtain the following essential property of the SE mechanism. \begin{proposition}\label{prop:SE_uu} The utility of each agent $i$ does not change according to the choice of allocation at Step~$1$ in the SE mechanism. \end{proposition} \begin{proof} Let $A$ be the allocation chosen at Step 1. Each agent $i$ with $|A_i|<\max_{j \in N}|A_j|$ obtains the final utility $\min_{B \in \cLD}|B_i| +1$, which does not depend on the choice of $A$ (note that if the difference in Proposition~\ref{prop:01} is $1$, then $\max_{B \in \cLD}|B_i|= \min_{B \in \cLD}|B_i| +1$). Each agent $i$ with $|A_i|=\max_{j \in N}|A_j|$ gets no subsidy, and the final utility is $|A_i|=\max_{j \in N}|A_j|=\max_{j \in N}|B_j|$ for any $B$ in $\cLD$ (because $\mathrm{sv}^\uparrow$ is identical across all allocations in $\cLD$); again, this final utility does not depend on the choice of $A$. \end{proof} Note that $\min_{B\in\cLD}|B_i|$ can be computed in polynomial time for each $i$ using Lemma~\ref{lem:Mdmin}, e.g., by setting the weight $w$ as $w_i=1$ and $w_j=0$ for all $j\in N\setminus\{i\}$. Hence, the outcome of the SE mechanism can be computed in polynomial time. \begin{lemma}\label{lem:SE_poly} The SE mechanism is polynomial-time implementable. \end{lemma} Now, we show three other important properties of $\cLD$. First, we show that the minimum valuation realized by each agent in clean Lorenz dominating allocations is monotone with respect to a restriction of their valuation function. Note that this property can also be proved via the monotonicity of the PE mechanism~\cite[Lemma~21]{babaioff2020fair} with respect to the priority ordering for which $i$ has the lowest priority. \begin{lemma}\label{lem:min_monotone} For any $i\in N$ and sets $X\subseteq Y\subseteq M$, \begin{align} \min_{B\in\cLD[P]}|B_i|&\le\min_{B'\in\cLD[P']}|B'_i|,\label{eq:monotone1} \end{align} where $P=(v_1,\dots,\restr{v_i}{X},\dots,v_n)$ and $P'=(v_1,\dots,\restr{v_i}{Y},\dots,v_n)$. \end{lemma} \begin{proof} It is sufficient to prove the case when $|Y|=|X|+1$ (since we can apply induction). Let $a$ be the item that is in $Y$ but not in $X$ (i.e., $Y=X\cup\{a\}$). Let $A\in\argmin_{B\in\cLD[P]}|B_i|$ and $A'\in\argmin_{B'\in\cLD[P']}|B'_i|$ be allocations for which $\sum_{j\in N}j\cdot|A_j|$ and $\sum_{j\in N}j\cdot|A'_j|$ are minimized, respectively. Suppose, to the contrary, that $|A_i|>|A'_i|$. Let $R$ be the set of agents $s$ that minimize $|A_s|$ subject to $|A_s|>|A'_s|$. If $R=\{i\}$, then let $s=i$; otherwise, let $s$ be the agent with the smallest index in $R\setminus\{i\}$. As $A'$ allocates at least as many items as $A$, there exists an agent $j$ such that $|A_j|<|A'_j|$. Let $t$ be the agent with $|A_t|<|A'_t|$ such that $|A_t|$ is minimized; if there are multiple such agents, choose the one with the smallest index. Consider the case where $|A_s|<|A'_t|$, or $|A_s|=|A'_t|$ and $i\ne s<t$. By the exchange property for $A$ and $A'$ with $|A_s|>|A'_s|$ (recall that \(\{(|B'_0|,|B'_1|,\dots,|B'_n|)\mid \text{$B'$ is clean under $P'$}\}\) is M-convex by \eqref{eq:M0}), there exists a clean allocation $C$ under $P'$ such that $\mathrm{sv}(C)=\mathrm{sv}(A')+\chi_s-\chi_k$ for some $k\in N$ with $|A'_k|>|A_k|$. Note that $|A'_s|<|A_s|\le|A'_t|\le|A'_k|$.
If $|A'_s|\le |A'_k|-2$, then $A'$ does not Lorenz dominate $C$, a contradiction. Otherwise, i.e., if $|A'_s|+1=|A_s|=|A'_t|=|A'_k|$, we have $C\in\cLD[P']$ and $|C_i|=|A'_i|$ by $s\ne i$. By $s<t\le k$, we have $\sum_{j\in N}j\cdot |C_j|<\sum_{j\in N}j\cdot|A'_j|$, a contradiction. Finally, consider the other case, i.e., (i) $|A_s|>|A'_t|$, (ii) $|A_s|=|A'_t|$ and $i\ne s>t$, or (iii) $|A_s|=|A'_t|$ and $s=i$. Let $A''=(A'_1,\dots,A'_i\setminus\{a\},\dots,A'_n)$. By the exchange property for $A$ and $A''$ with $|A_t|<|A'_t|$ (recall that \(\{(|B_0|,|B_1|,\dots,|B_n|)\mid \text{$B$ is clean under $P$}\}\) is M-convex by \eqref{eq:M0}), there exists a clean allocation $D$ under $P$ such that $\mathrm{sv}(D)=\mathrm{sv}(A)+\chi_t-\chi_\ell$ for some $\ell\in N$ with $|A''_\ell|<|A_\ell|$ (here, $\ell\ne 0$ by $|A''_0|=|M|-\sum_{j\in N}|A''_j|\ge |M|-\sum_{j\in N}|A_j|=|A_0|$). Note that $|A_t|<|A'_t|\le |A_s|\le |A_\ell|$. If $|A_t|\le |A_\ell|-2$, then $A$ does not Lorenz dominate $D$, a contradiction. Otherwise, i.e., if $|A_t|+1=|A'_t|=|A_s|=|A_\ell|$, we have $D\in\cLD[P]$. If $\ell\ne i$, we have $\sum_{j\in N}j\cdot |D_j|<\sum_{j\in N}j\cdot|A_j|$ by $t<s\le \ell$, a contradiction. If $\ell=i$, we have $|D_i|<|A_i|$, which contradicts the assumption that $A\in\argmin_{B\in\cLD[P]}|B_i|$. \end{proof} We next show that, if an agent receives a bundle $A_i$ and changes her report to the restriction $\restr{v_i}{X}$ for some $X \subseteq A_i$, then she would be allocated exactly the set $X$ in some allocation in $\cLD$. Again, this property can be proved via the strong faithfulness of the PE mechanism \cite[Lemma~22]{babaioff2020fair} with respect to appropriate priority orders. \begin{lemma}\label{lem:sfaithful0} Let $P=(v_1,\dots,v_n)$, $i\in N$, $A\in\cLD[P]$, $X\subseteq A_i$, and $P'=(v_1,\dots,\restr{v_i}{X},\dots,v_n)$. Then, we have \begin{itemize} \item $B_i=X$ for some $B\in\cLD[P']$, and \item if $|A_i|=\min_{A'\in\cLD[P]}|A'_i|$, then $B_i=X$ for any $B\in\cLD[P']$. \end{itemize} \end{lemma} \begin{proof} Let $P''=(v_1,\dots,\restr{v_i}{A_i},\dots,v_n)$. Note that $A\in\cLD[P'']\subseteq\cLD[P]$ since $A$ is clean under $P''$. In addition, $B''_i=A_i$ for any $B''\in\cLD[P'']$ if $|A_i|=\min_{A'\in\cLD[P]}|A'_i|$ (since otherwise $B''_i\subsetneq A_i$ for some $B''\in\cLD[P'']\subseteq\cLD[P]$, and hence $|B''_i|<|A_i|=\min_{A'\in\cLD[P]}|A'_i|\le \min_{A''\in\cLD[P'']}|A''_i|$, a contradiction). To prove the lemma, we show that \begin{align} \text{$C_i=Y$ for all $C\in\cLD[Q]$ if $C'_i=Z$ for some $C'\in\cLD[Q']$} \end{align} for any $Y\subsetneq Z\subseteq M$, where $Q=(v_1,\dots,\restr{v_i}{Y},\dots,v_n)$ and $Q'=(v_1,\dots,\restr{v_i}{Z},\dots,v_n)$. It is sufficient to prove the case when $|Z|=|Y|+1$ (since we can apply induction). Let $a$ be the item that is in $Z$ but not in $Y$ (i.e., $Z=Y\cup\{a\}$). Assume towards a contradiction that $C_i\subsetneq Y$ for some $C\in\cLD[Q]$ but $C'_i=Z$ for some $C'\in\cLD[Q']$. Let $C\in\cLD[Q]$ be an allocation with $|C_i|=\min_{A'\in\cLD[Q]}|A'_i|$ such that $\sum_{j\in N}j\cdot|C_j|$ is minimized. Similarly, let $C'\in\cLD[Q']$ be an allocation with $C'_i=Z$ such that $\sum_{j\in N}j\cdot|C'_j|$ is minimized. In addition, let $C''=(C'_1,\dots,C'_i\setminus\{a\},\dots,C'_n)$. Note that $C''_i=Y$ and $C''$ is clean under $Q$. Let $R$ be the set of agents $s$ that minimize $|C''_s|$ subject to $|C_s|<|C''_s|$. If $R=\{i\}$, then let $s=i$; otherwise, let $s$ be the agent with the smallest index in $R\setminus\{i\}$.
As $C$ allocates at least as many items as $C''$, there exists an agent $j$ such that $|C_j|>|C''_j|$. Let $t$ be the agent with $|C_t|>|C''_t|$ such that $|C_t|$ is minimized; if there are multiple such agents, choose the one with the smallest index. Consider the case where $|C''_s|<|C_t|$, or $|C''_s|=|C_t|$ and $i\ne s<t$. By the exchange property for $C$ and $C''$ with $|C_s|<|C''_s|$ (recall that \(\{(|A'_0|,|A'_1|,\dots,|A'_n|)\mid \text{$A'$ is clean under $Q$}\}\) is M-convex by \eqref{eq:M0}), there exists a clean allocation $D$ under $Q$ such that $\mathrm{sv}(D)=\mathrm{sv}(C)+\chi_s-\chi_k$ for some $k\in N$ with $|C_k|>|C''_k|$. Note that $|C_s|<|C''_s|\le|C_t|\le|C_k|$. If $|C_s|\le |C_k|-2$, then $C$ does not Lorenz dominate $D$, a contradiction. Otherwise, i.e., if $|C_s|+1=|C''_s|=|C_t|=|C_k|$, we have $D\in\cLD[Q]$ and $|D_i|=|C_i|$ by $s\ne i$. By $s<t\le k$, we have $\sum_{j\in N}j\cdot |D_j|<\sum_{j\in N}j\cdot|C_j|$, a contradiction. Finally, consider the other case, i.e., (i) $|C''_s|>|C_t|$, (ii) $|C''_s|=|C_t|$ and $i\ne s>t$, or (iii) $|C''_s|=|C_t|$ and $s=i$. By the exchange property for $C$ and $C'$ with $|C_t|>(|C''_t|=)|C'_t|$ (recall that \(\{(|A'_0|,|A'_1|,\dots,|A'_n|)\mid \text{$A'$ is clean under $Q'$}\}\) is M-convex by \eqref{eq:M0}), there exists a clean allocation $D'$ under $Q'$ such that $\mathrm{sv}(D')=\mathrm{sv}(C')+\chi_t-\chi_\ell$ for some $\ell\in N$ with $|C'_\ell|>|C_\ell|$. Note that $|C'_t|<|C_t|\le|C''_s|\le|C''_\ell|\le |C'_\ell|$. If $|C'_t|\le |C'_\ell|-2$, then $C'$ does not Lorenz dominate $D'$, a contradiction. Otherwise, i.e., if $|C'_t|+1=|C_t|=|C''_s|=|C''_\ell|=|C'_\ell|$, we have $D'\in\cLD[Q']$ and $\ell\ne i$. Thus, we have $\sum_{j\in N}j\cdot |D'_j|<\sum_{j\in N}j\cdot|C'_j|$ by $t<s\le\ell$, a contradiction. \end{proof} Finally, we analyze the effect of restriction upon the number of items allocated to each agent. \begin{lemma}\label{lem:maximp} Fix agent $i$. Let $P=(v_1,\dots,\restr{v_i}{X},\dots,v_n)$ and $P'=(v_1,\dots,\restr{v_i}{Y},\dots,v_n)$ for subsets $X\subseteq Y\subseteq M$. Suppose that for some $A'\in\cLD[P']$, $A'_i=Y$, and $i$'s bundle has a strictly smaller size than the largest bundle, i.e., $|A'_i|<\max_{j\in N}|A'_j|$. Then, $|A_i|<\max_{j\in N}|A_j|$ for any $A\in\cLD[P]$. \end{lemma} \begin{proof} It is sufficient to prove the case when $|Y|=|X|+1$ because we can apply induction if $|Y|>|X|$. Let $a$ be the item that is in $Y$ but not in $X$, i.e., $Y=X\cup\{a\}$. Suppose that $|A'_i|<\max_{j\in N}|A'_j|$ and $A'_i=Y$ for some $A'\in\cLD[P']$. Consider any allocation $A\in\cLD[P]$ and let $A''=(A'_1,\dots,A'_i\setminus\{a\},\dots,A'_n)$. As $A$ is clean under $P'$ and $A''$ is clean under $P$, we have \begin{align} \sum_{j=1}^k\mathrm{sv}^\uparrow(A'')_j\le \sum_{j=1}^k\mathrm{sv}^\uparrow(A)_j\le \sum_{j=1}^k\mathrm{sv}^\uparrow(A')_j\label{eq:maximp} \end{align} for any $k\in\{1,\dots,n\}$. Let $j^*$ be the index such that $\mathrm{sv}^\uparrow(A')_{j^*-1}<|X|$ and $\mathrm{sv}^\uparrow(A')_{j^*}\ge |X|$. Note that $|A'_i|$ is placed on or after position $j^*$ because $|A'_i|=|X|+1$. Then, for any $j<j^*$, we see that $\mathrm{sv}^\uparrow(A')_j=\mathrm{sv}^\uparrow(A'')_j$, and hence $\mathrm{sv}^\uparrow(A)_j=\mathrm{sv}^\uparrow(A')_j$. Here, $\sum_{j\in N}|A_j|\ge \sum_{j\in N}|A''_j|=\sum_{j\in N}|A'_j|-1$ by \eqref{eq:maximp} with $k=n$. As $|X|<|Y|=|A'_i|<\max_{j\in N}|A'_j|$, we have $|X|+2\le\max_{j\in N}|A'_j|$.
This, together with $\mathrm{sv}^\uparrow(A')_j\ge |X|$ for each $j\ge j^*$, yields \begin{align} \sum_{j=j^*}^n \mathrm{sv}^\uparrow(A)_j \ge \sum_{j=j^*}^n \mathrm{sv}^\uparrow(A')_j-1 \ge \Bigl(\sum_{j=j^*}^n |X|\Bigr)+1. \end{align} Thus, $\max_{j\in N}|A_j|\ge |X|+1$. As $A_i\subseteq X$, we obtain $|A_i|\le |X|<\max_{j\in N}|A_j|$. \end{proof} \subsection{Envy-freeness of the SE mechanism} We are now ready to show that the SE mechanism is envy-free. \begin{lemma}\label{lem:SE_EF} The SE mechanism is envy-free. \end{lemma} \begin{proof} Let $(A,p)$ be the clean allocation with a subsidy returned by the SE mechanism. To obtain a contradiction, suppose that $i$ envies $j$, i.e., $v_i(A_i)+p_i<v_i(A_j)+p_j$. We separately consider three cases: $v_i(A_i)>v_i(A_j)$, $v_i(A_i)<v_i(A_j)$, and $v_i(A_i)=v_i(A_j)$. \smallskip \noindent\textbf{Case 1.} Suppose that $v_i(A_i)>v_i(A_j)$. This case is impossible since $v_i(A_i)+p_i<v_i(A_j)+p_j$ and $p_i,p_j\in\{0,1\}$. \smallskip \noindent\textbf{Case 2.} Suppose that $v_i(A_i)<v_i(A_j)$. By the matroid augmentation property, there exists an item $e\in A_j$ such that $v_i(A_i\cup\{e\})=v_i(A_i)+1$. Let $B$ be the allocation that is obtained from $A$ by moving item $e$ from $j$ to $i$. As $|A_i|<|A_j|$ and $A$ Lorenz dominates $B$, we have that $|B_i|=|A_i|+1=|A_j|=|B_j|+1$. Hence, $B$ is also a clean Lorenz dominating allocation. Thus, $\max_{C\in\cLD}|C_i|=|A_i|+1$ and $\min_{C\in\cLD}|C_j|=|A_j|-1$, which implies $p_i=1$ and $p_j=0$ by Proposition~\ref{prop:01}. This contradicts the assumption that $i$ envies $j$ because $v_i(A_i)+p_i=|A_i|+1=|A_j|=v_i(A_j)+p_j$. \smallskip \noindent\textbf{Case 3.} Suppose that $v_i(A_i)=v_i(A_j)$. Note that $|A_i|=v_i(A_i)=v_i(A_j) \leq |A_j|$. As $v_i(A_i)+p_i<v_i(A_j)+p_j$, the subsidies must be $p_i=0$ and $p_j=1$. Then $|A_j|=\min_{A'\in\cLD}|A'_j|<\max_{k\in N}|A_k|$. We observe that $\min_{A'\in\cLD}|A'_i|=|A_i|-1$: otherwise $\min_{A'\in\cLD}|A'_i|=|A_i|$, and then $p_i=0$ forces $|A_i|=\max_{k\in N}|A_k| > |A_j|$, and hence $v_i(A_i)=|A_i|>|A_j| \geq v_i(A_j)$, which is a contradiction. As the set of size vectors of $\cLD$ is M-convex, there is a clean Lorenz dominating allocation $B$ such that $\mathrm{sv}(B)=\mathrm{sv}(A)-\chi_i+\chi_k$ for some $k\in N$. As $A$ and $B$ are both in $\cLD$ and hence $\mathrm{sv}^\uparrow(A)=\mathrm{sv}^\uparrow(B)$, we have that $|B_i|+1=|A_i|=|B_k|=|A_k|+1$. Note that $k\ne j$ because $|A_i|\le |A_j|$ by $v_i(A_i)=v_i(A_j)$. By applying Lemma~\ref{lem:transfer} to $B$ and $A$ (note that the roles are interchanged), we obtain a sequence of clean allocations $C^0,C^1,\dots,C^r$ with $k^0,k^1,\dots,k^r$ and $e^1,\dots,e^r$ where $C^0=A$, $k^0=k$, $k^r=i$, and $\mathrm{sv}(C^r)=\mathrm{sv}(C^0)+\chi_{k^0}-\chi_{k^r}=\mathrm{sv}(B)$. If $k^t=j$ for some $t$, then $\mathrm{sv}(C^t)=\mathrm{sv}(A)+\chi_k-\chi_j$ and $|A_k|+1=|A_i|\leq|A_j|$, and hence $C^t$ is a clean Lorenz dominating allocation with $|C^t_j|<|A_j|$. This implies $p_j=0$, which is a contradiction. Otherwise, i.e., if $k^t\ne j$ for all $t$, we have $C^r_j=A_j$. Then, there exists an element $e\in C^r_j$ such that $v_i(C^r_i\cup\{e\})=|A_i|$ by $v_i(C^r_i)=|A_i|-1<|A_i|=v_i(C^r_j)$ and the matroid augmentation property. Thus, the allocation that is obtained from $C^r$ by transferring $e$ from $j$ to $i$ is a clean Lorenz dominating allocation. This also implies that $p_j=0$, which is a contradiction. \end{proof} \subsection{Truthfulness of the SE mechanism} Finally, we prove that the SE mechanism is truthful.
In a setting without money, Babaioff et al.~\cite{babaioff2020fair} proved that a mechanism is truthful if it satisfies \emph{monotonicity} and \emph{strong faithfulness}. We introduce two similar properties that can be applied to a setting with subsidies. \begin{definition}[\smonotone] We say that a mechanism is \emph{\smonotone} if the utility of an agent is monotone with respect to the restriction, i.e., \[v_i(A_{i})+p_{i}\le v_i(A'_{i})+p'_{i}\] for any valuation profile $(v_1,\dots,v_n)$, agent $i\in N$, and subsets $X\subseteq Y\subseteq M$, where $(A,p)$ and $(A',p')$ are the allocations with subsidies returned by the mechanism when agents report $P=(v_1,\ldots,\restr{v_i}{X},\ldots,v_n)$ and $P'=(v_1,\ldots,\restr{v_i}{Y},\ldots,v_n)$, respectively. \end{definition} \begin{definition}[\sfaithful] We say that a mechanism is \emph{\sfaithful} if \[v_i(X)+p_i\le v_i(A'_i)+p'_i\] for any valuation profile $(v_1,\dots,v_n)$, agent $i\in N$, and subset $X\subseteq A_i$, where $(A,p)$ and $(A',p')$ are the allocations with subsidies returned by the mechanism when agents report $P=(v_1,\ldots,v_i,\ldots,v_n)$ and $P'=(v_1,\ldots,\restr{v_i}{X},\ldots,v_n)$, respectively. \end{definition} \begin{lemma}\label{lem:ss} A mechanism is truthful if it is \smonotone and \sfaithful. \end{lemma} \begin{proof} Let $(A,p)$ be the allocation with a subsidy returned by the mechanism when agents report $(v_1,\ldots,v_{i},\ldots,v_n)$ truthfully; let $v'_i$ be any matroidal valuation function with $v'_i \neq v_i$; let $(A',p')$ be the allocation with a subsidy returned by the mechanism when agents report $(v_1,\ldots,v'_{i},\ldots,v_n)$. We will show that agent $i$ will not benefit from misreporting $v'_i$, i.e., \begin{align}\label{eq:truthful} v_i(A'_i) +p'_i \leq v_i(A_i) + p_i. \end{align} To this end, let $X$ be a minimum subset of $A'_i$ such that $v_i(X)=v_i(A'_i)$; equivalently, $X$ is a maximum-size independent set contained in $A'_i$ under the valuation $v_i$. Let $(A'',p'')$ be the allocation with the subsidy returned by the mechanism when agents report $(v_1,\ldots,\restr{v'_i}{X},\ldots,v_n)$. By the \sfaithful~property, \begin{align}\label{eq:faithful} v_i(A'_i)+p'_i=|X|+p'_i=v'_i(X)+p'_i \leq v'_i(A''_i) + p''_i=|A''_i|+p''_i. \end{align} Further, since $\restr{v'_{i}}{X}= \restr{v_{i}}{X}$, agent $i$ obtains $A''_i$ together with $p''_i$ when $i$ reports $\restr{v_i}{X}$. Now, by the \smonotone~property, \begin{align} |A''_i|+p''_i = v_i(A''_i) + p''_i \leq v_i(A_i) +p_i, \end{align} which, together with \eqref{eq:faithful}, implies inequality \eqref{eq:truthful}. \end{proof} Below, we prove that the SE mechanism is \smonotone and \sfaithful. \begin{lemma}\label{lem:smonotone} The SE mechanism is \smonotone. \end{lemma} \begin{proof} Consider any agent $i$ and sets $X \subseteq Y \subseteq M$. It is sufficient to prove the case when $|Y|=|X|+1$ since we can apply induction. Let $a$ be the item that is in $Y$ but not in $X$ (i.e., $Y=X\cup\{a\}$); let $(A,p)$ and $(A',p')$ be the allocations with the subsidies returned by the mechanism when agents report $P=(v_1,\ldots,\restr{v_i}{X},\ldots,v_n)$ and $P'=(v_1,\ldots,\restr{v_i}{Y},\ldots,v_n)$, respectively. Without loss of generality, we assume that $|A_i|=\min_{B\in\cLD[P]}|B_i|$ and $|A'_i|=\min_{B'\in\cLD[P']}|B'_i|$ (recall that the utility of every agent does not change with the choice of clean Lorenz dominating allocation). By Lemma~\ref{lem:min_monotone}, we have $|A_i|\le |A'_i|$.
If $|A_i|<|A'_i|$ or $(p_i,p'_i)\ne (1,0)$, then \begin{align} v_i(A_i)+p_i=|A_i|+p_i\le |A'_i|+p'_i=v_i(A'_i)+p'_i. \end{align} Hence, we only need to prove that $|A_i|=|A'_i|$ and $(p_i,p'_i)=(1,0)$ cannot be satisfied simultaneously. To the contrary, suppose that $|A_i|=|A'_i|$ and $(p_i,p'_i)=(1,0)$. Let $\gamma=|A_i|$. If $a$ is not in $A'_i$, the allocation $A'$ must be in $\cLD[P]$ and hence $\mathrm{sv}^\uparrow(A')=\mathrm{sv}^\uparrow(A)$. By $(p_i,p'_i)=(1,0)$, we have $\gamma<\max_{j\in N}|A_j|=\max_{j\in N}|A'_j|=\gamma$, a contradiction. Thus, we assume that $a\in A'_i$. Let $A''=(A'_1,\dots,A'_i\setminus\{a\},\dots,A'_n)$ and $s=|\{j\in N\mid |A'_j|=\gamma\}|$. Then, $A$ Lorenz dominates $A''$ since $A''$ is clean under $P$. Also, $A'$ Lorenz dominates $A$ since $A$ is clean under $P'$. Hence, we obtain \begin{align} \sum_{j=1}^{k}\mathrm{sv}^\uparrow(A'')_j\le \sum_{j=1}^{k}\mathrm{sv}^\uparrow(A)_j\le\sum_{j=1}^{k}\mathrm{sv}^\uparrow(A')_j \end{align} for any $k=1,\dots,n$. For $j=1,2,\dots,n-s$, we have $\mathrm{sv}^\uparrow(A')_j=\mathrm{sv}^\uparrow(A'')_j$, and hence $\mathrm{sv}^\uparrow(A)_j=\mathrm{sv}^\uparrow(A')_j=\mathrm{sv}^\uparrow(A'')_j$. Note that $\mathrm{sv}^\uparrow(A)_n>\gamma$ by $p_i=1$. As $A'$ allocates at least as many items as $A$, we have $\mathrm{sv}^\uparrow(A)=\mathrm{sv}^\uparrow(A')-\chi_{n-s+1}+\chi_n$ (see Figure~\ref{fig:smonotone}). \input{sizevector} Thus, $A$ and $A'$ allocate the same number of items, i.e., $\sum_{j\in N}|A_j|=\sum_{j\in N}|A'_j|$. By the exchange property for $A''$ and $A$ with $|A''_0|>|A_0|$ (recall that \(\{(|B_0|,|B_1|,\dots,|B_n|)\mid \text{$B$ is clean under $P$}\}\) is M-convex by \eqref{eq:M0}), there exists a clean allocation $C$ under $P$ such that $\mathrm{sv}(C)=\mathrm{sv}(A'')+\chi_\ell$ for some $\ell\in N$. As $A$ Lorenz dominates $C$ (and hence $\sum_{j=1}^{n-1}\mathrm{sv}^\uparrow(A)_j\ge\sum_{j=1}^{n-1}\mathrm{sv}^\uparrow(C)_j$), $\sum_{j\in N}|A_j|=\sum_{j\in N}|C_j|$, and $\mathrm{sv}^\uparrow(A)_n>\gamma$, we have $|C_\ell|=\gamma+1$ and $\ell\ne i$ (recall that $|A''_i|=\gamma-1$ and $|A''_j|\le\gamma$ for all $j\in N$). Then, $\mathrm{sv}^\uparrow(C)=\mathrm{sv}^\uparrow(A)$ and hence $C\in\cLD[P]$. This implies that $|A_i|=\gamma>\gamma-1=|A''_i|=|C_i|\ge \min_{B\in\cLD[P]}|B_i|$, which contradicts the assumption that $|A_i|=\min_{B\in\cLD[P]}|B_i|$. \end{proof} \begin{lemma}\label{lem:sfaithful} The SE mechanism is \sfaithful. \end{lemma} \begin{proof} Consider any agent $i$. Let $(A,p)$ be the allocation with a subsidy returned by the SE mechanism when agents report $P=(v_1,\ldots,v_{i},\ldots,v_n)$ and fix any $X \subseteq A_i$. Let $(A',p')$ and $(A'',p'')$ be the allocations with the subsidies returned by the SE mechanism when agents report $P'=(v_1,\ldots,\restr{v_i}{A_i},\ldots,v_n)$ and $P''=(v_1,\ldots,\restr{v_i}{X},\ldots,v_n)$, respectively. If $p_i=0$, we have $v_i(X)+p_i=|X|\le \max_{B''\in\cLD[P'']}|B''_i|\le v_i(A''_i)+p''_i$ by Lemma~\ref{lem:sfaithful0}. In what follows, we assume $p_i=1$, i.e., $|A_i|=\min_{B\in\cLD[P]}|B_i|<\max_{j\in N}|A_j|$. By Lemma~\ref{lem:sfaithful0}, $B'_i=A_i$ for any $B'\in\cLD[P']$; thus, $A'_i=A_i$ and $|A'_i|=|A_i|<\max_{j\in N}|A_j|=\max_{j\in N}|A'_j|$ (the last equality holds since $\mathrm{sv}^\uparrow(A)=\mathrm{sv}^\uparrow(A')$ by $A,A'\in\cLD[P']$). By Lemma~\ref{lem:sfaithful0}, $B''_i=X$ for any $B''\in\cLD[P'']$, and in particular $A''_i=X$. Also, by Lemma~\ref{lem:maximp}, $|A''_i|<\max_{j\in N}|A''_j|$. 
Therefore, $p''_i=1$ and hence $v_i(X)+p_i=|X|+1=v_i(A''_i)+p''_i$. \end{proof} By combining Lemmas~\ref{lem:smonotone} and~\ref{lem:sfaithful} and using Lemma~\ref{lem:ss}, we obtain the desired truthfulness. \begin{lemma}\label{lem:SE_truthful} The SE mechanism is truthful. \end{lemma} \subsection{Without the free-disposal assumption}\label{sec:withoutfreedisposal} In Theorem~\ref{thm:matroidrank:truthful}, we presented the so-called SE mechanism, which simultaneously attains truthfulness, utilitarian optimality, and envy-freeness with each agent receiving a subsidy of $0$ or $1$. In the mechanism's output, however, the allocation may not be complete (i.e., some items may be left unallocated). In some situations, this disposal of items is not allowed. For example, when we consider shift scheduling at a call center or a production factory, we must allocate all shifts to employees in order not to halt the operation, even if no one finds value in a given time slot. It would be ideal to have a mechanism that outputs a complete allocation while attaining the nice properties of the SE mechanism (i.e., truthfulness, utilitarian optimality, and envy-freeness with each agent receiving a subsidy of at most $1$). However, as shown below, the amount of subsidy that needs to be paid can be proportional to the number of items if we aim to attain truthfulness, envy-freeness, and completeness while using Lorenz dominating allocations. \begin{theorem}\label{thm:impossibility2} If a truthful mechanism is envy-free and returns a complete Lorenz dominating allocation, then it requires a subsidy of $\Omega(m)$, even when there are two agents with binary additive valuations. \end{theorem} \begin{proof} The following proof is inspired by the proof of \cite[Theorem 5]{halpern2020fair}, which shows the nonexistence of a mechanism without money that satisfies truthfulness, EFX, and completeness while returning a Lorenz dominating allocation. Fix a positive integer $k$; let $N=\{1,2\}$ and $M=\{e_1,\dots,e_{6k}\}$. Consider two profiles, $P_1$ and $P_2$. In $P_1$, both agents report that they want the $2k$ items $\{e_1,\dots,e_{2k}\}$. In this case, each agent receives exactly $k$ of these items by Lorenz dominance; additionally, by completeness, one agent receives at least half of the items in $\{e_{2k+1},\dots,e_{6k}\}$. Without loss of generality, agent $1$ gets a set of items including $\{e_1,\dots,e_k,e_{2k+1},\dots,e_{4k}\}$. In $P_2$, agent $1$ reports that she wants $\{e_1,\dots,e_{4k}\}$ and agent $2$ reports that she wants $\{e_1,\dots,e_{2k}\}$. In this case, the items $e_{1},\dots,e_{2k}$ are allocated to agent $2$ and the items $e_{2k+1},\dots,e_{4k}$ are allocated to agent $1$. If $P_2$ is the true valuation profile, agent $1$ has an incentive to report that she wants $\{e_1,\dots,e_{2k}\}$ unless she gets a subsidy of at least $k~(=\Omega(m))$, because she receives $2k$ items that she truly values in $P_2$ but $3k$ of them in $P_1$. \end{proof} Here, we provide an algorithm that returns a Lorenz dominating allocation and simultaneously attains completeness and envy-freeness with each agent receiving a subsidy of at most $1$, while tolerating a violation of truthfulness. Note that an allocation that is both complete and envy-freeable with a subsidy of at most $1$ (where $1$ is the maximum marginal value) for each agent was previously known to exist only for additive valuations \cite{halpern2020fair, Brustle2020}. The following theorem guarantees the existence of such an allocation for matroidal valuations, which are not necessarily additive.
\begin{theorem}\label{thm:matroidrank:complete} For matroidal valuations, there is a polynomial-time algorithm for computing an allocation with a subsidy that is complete, utilitarian optimal, and envy-free, with each agent receiving a subsidy of $0$ or $1$ and the total subsidy being at most $n-1$. \end{theorem} We construct the allocation required in the theorem by extending an arbitrary clean Lorenz dominating allocation $A=(A_1, A_2,\dots,A_n)$; that is, we initialize $A$ to be the one computed in Step 1 of the SE mechanism. By Theorem~\ref{thm:matroidrank:truthful}, $A$ then maximizes the utilitarian social welfare $\sum_{i\in N}v_i(A_i)$ and is envy-freeable with a subsidy of at most 1 for each agent. Therefore, we can obtain a desired allocation if we can allocate the items in $M\setminus \bigcup_{i\in N}A_i$ while preserving the utilitarian optimality and the bound 1 of the subsidy for each agent. Note that, for binary additive valuations, this task is trivial because an item unallocated in $A$ has a value of $0$ for all agents by the utilitarian optimality; hence allocating it to any agent does not cause envy. However, a similar argument does not apply to matroidal valuations, as shown by the following example. \begin{example} Let $N=\{1,2,3\}$ and $M=\{e_1,e_2,e_3,e_4,e_5\}$ and define the matroidal valuations $v_1,v_2,v_3$ by $v_1(X)=|X\cap\{e_1,e_2\}|$, $v_2(X)=|X\cap \{e_1,e_2,e_3\}|$, and $v_3(X)=|X\cap \{e_1,e_2,e_3\}|+\min\bigl\{1,\,|X\cap \{e_4,e_5\}|\bigr\}$. Then $A=(A_1,A_2,A_3)=\bigl(\{e_1,e_2\},\{e_3\},\{e_4\}\bigr)$ is a clean Lorenz dominating allocation. It is not difficult to see that we cannot increase the utility of any agent by allocating $e_5$, which is currently unallocated. However, if we allocate $e_5$ to agent $2$, the amount $w(3,2)=v_3(A_2)-v_3(A_3)$ of envy that agent $3$ has towards $2$ changes from $0$ to $1$. To eliminate envy in the resulting allocation $A'=(A'_1,A'_2,A'_3)=\bigl(\{e_1,e_2\},\{e_3,e_5\},\{e_4\}\bigr)$, we need to pay at least one dollar to agent 2 because her envy towards agent 1 is $v_2(A'_1)-v_2(A'_2)=1$. Then $v_3(A'_2)+p_2\geq 3$ while $v_3(A'_3)=1$, and to eliminate the envy of agent $3$ towards agent 2, we must pay at least $2$ dollars to agent $3$. \end{example} Here, we present the {\em subsidized egalitarian with completion} (SEC) algorithm, which extends any clean Lorenz dominating allocation to a complete allocation while preserving the property that each agent requires a subsidy of at most 1. Recall that, as defined in Section~\ref{sec:model}, the envy graph $G_A$ for an allocation $A$ is a complete directed graph with node set $N$ in which the arc weights represent the amounts of envy with respect to $A$. Since matroidal valuations are integer-valued, each arc weight is an integer. \begin{tcolorbox}[title=Subsidized Egalitarian with Completion, left=0mm] \begin{enumerate}[label=\textbf{Step \arabic*.},leftmargin=*] \item Allocate items according to an arbitrarily chosen $A \in \cLD$. \item For each unallocated item $e\in M\setminus\bigcup_{i\in N}A_i$, do the following. \begin{itemize} \setlength{\leftskip}{3mm} \item[(a)] Take an agent $i$ arbitrarily. \item[(b)] Let $A^{i,e}\coloneqq(A_1,\dots,A_i\cup\{e\},\dots,A_n)$. If $G_{A^{i,e}}$ has a positive-weight path ending at $i$, then take such a path $P_i$ arbitrarily, update $i$ to the initial agent of $P_i$, and go to (b). Otherwise, go to (c). \item[(c)] Update $A\gets A^{i,e}$ (i.e., $A_i\gets A_i\cup\{e\}$).
\end{itemize} \item Give a subsidy of $1$ to each agent $i\in N$ such that the envy graph $G_{A}$ has a path of weight $1$ starting at $i$. \end{enumerate} \end{tcolorbox}
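The following is a minimal Python sketch of the completion loop in Step 2, assuming the Step-1 allocation $A$ (a list of sets) and valuation oracles \texttt{v[i]} are given; both names are illustrative. The positive-path test uses a Bellman-Ford-style relaxation, which is exact here because, as shown below, the envy graph never acquires a positive-weight cycle; that the while-loop terminates is precisely the content of Lemma~\ref{lem:complement} below.
\begin{verbatim}
# Sketch of Step 2 of the SEC algorithm.  `A` is the clean Lorenz
# dominating allocation from Step 1 (a list of sets) and `v[i](S)`
# are valuation oracles; both are assumed given.

def envy_weight(v, A, i, j):
    # w(i, j): agent i's envy toward agent j under allocation A.
    return v[i](A[j]) - v[i](A[i])

def positive_path_end(v, A, i):
    """Initial agent of some positive-weight path ending at i in the
    envy graph, or None.  Bellman-Ford-style relaxation; exact when
    the envy graph has no positive-weight cycle, as the invariants
    established below guarantee."""
    n = len(A)
    best = {i: 0}              # best[j]: max weight of a j -> i path
    for _ in range(n - 1):     # simple paths have fewer than n arcs
        for j in range(n):
            for k in list(best):
                w = envy_weight(v, A, j, k) + best[k]
                if w > best.get(j, float("-inf")):
                    best[j] = w
    starts = [j for j, w in best.items() if j != i and w > 0]
    return starts[0] if starts else None

def complete_allocation(v, A, unallocated):
    for e in unallocated:
        i = 0                          # Step 2(a): arbitrary agent
        while True:                    # Step 2(b)
            A[i] = A[i] | {e}          # tentatively form A^{i,e}
            j = positive_path_end(v, A, i)
            A[i] = A[i] - {e}
            if j is None:
                break
            i = j                      # jump to the path's initial agent
        A[i] = A[i] | {e}              # Step 2(c)
    return A
\end{verbatim}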
From the description, even the finite termination of the SEC algorithm is not obvious. We will show this later; here, we first provide the invariants preserved throughout the algorithm. \begin{lemma}\label{lem:complement2} The following conditions hold throughout the SEC algorithm. \begin{itemize} \item[\rm (i)] $A=(A_1,\dots,A_n)$ is utilitarian optimal. \item[\rm (ii)] $G_{A}$ has neither a path of weight more than $1$ nor a positive-weight cycle. \end{itemize} \end{lemma} \begin{proof} Just after Step 1, conditions (i) and (ii) follow from Theorems~\ref{thm:envy-graph} and \ref{thm:matroidrank:truthful}. Since each $A_i$ only grows in Step 2 and the valuations are monotone, the utilitarian social welfare never decreases; as it is maximum after Step 1, condition (i) is preserved throughout the algorithm. Consequently, $G_{A}$ has no positive-weight cycle, since otherwise we could increase the utilitarian social welfare by exchanging bundles along that cycle, contradicting (i). We now show that the nonexistence of a path of weight greater than $1$ is preserved whenever $A$ is updated. Let $i\in N$ be an agent who receives an item $e$ in Step 2 (c). By Step 2 (b), $G_{A^{i,e}}$ has no positive-weight path ending at $i$. By condition (i), we have $v_{i}(A_{i}\cup\{e\})=v_{i}(A_{i})$. Hence, the weights of the arcs leaving $i$ are the same in $G_{A}$ and $G_{A^{i,e}}$. Also, arcs not incident to $i$ clearly do not change their weights. On the other hand, each $j\in N\setminus\{i\}$ satisfies $v_{j}(A_{i}\cup\{e\})\in\{v_{j}(A_{i}), v_{j}(A_{i})+1\}$. Take any path $P$ in $G_{A^{i,e}}$. If it does not contain $i$, then its weight is the same as that in $G_{A}$ and is at most $1$. If $P$ contains $i$, divide $P$ into $P'$ and $P''$, where the former is the subpath from the initial node to $i$ and the latter is the subpath from $i$ to the last node. Since $G_{A^{i,e}}$ has no positive-weight path ending at $i$, the weight of $P'$ is non-positive. Moreover, the weight of $P''$ is the same in $G_A$ and $G_{A^{i,e}}$, and hence is at most $1$ also in $G_{A^{i,e}}$. Therefore, the weight of $P$ is at most $1$ in $G_{A^{i,e}}$. \end{proof} By condition (ii) in Lemma~\ref{lem:complement2} and Theorem~\ref{thm:envy-graph}, the allocation with a subsidy returned by the SEC algorithm is envy-free, with each agent receiving a subsidy of $0$ or $1$. Furthermore, there is at least one agent $i\in N$ such that $G_A$ has no path of weight $1$ starting at $i$ (since otherwise there would exist a positive-weight cycle in $G_A$, contradicting (ii)). Thus, the total subsidy is at most $n-1$. By the algorithm and condition (i) in Lemma~\ref{lem:complement2}, the allocation is complete and utilitarian optimal. To complete the proof of Theorem~\ref{thm:matroidrank:complete}, we show the following claim, which is needed to demonstrate that the algorithm does not fall into an infinite loop in Step 2 (b). \begin{lemma}\label{lem:complement} In Step 2, for each item $e$, any agent is chosen as $i$ in (b) at most once. Hence, (b) is repeated at most $n$ times. \end{lemma} \begin{proof} For any agent $i$ chosen in Step 2 (b), the weight of the path $P_i$ is at least $1$ in $G_{A^{i,e}}$. By the argument in the proof of Lemma~\ref{lem:complement2}, its weight in $G_A$ is then at least $0$ (i.e., we have $w(P_i)\geq 0$, where $w$ denotes the weight function with respect to $G_A$). Moreover, note that $G_A$ has no positive-weight cycle by (ii) in Lemma~\ref{lem:complement2}. Suppose, to the contrary, that some agent is chosen in Step 2 (b) multiple times. Without loss of generality, let $1$ be such an agent and suppose that $2,3,\dots,k$ appear in this order between the first and the second appearance of $1$. Then each $P_i~(i=1,2,\dots,k-1)$ is a path from agent $i+1$ to agent $i$, and $P_k$ is a path from $1$ to $k$. By connecting the paths $P_i~(i=1,2,\dots,k)$, we obtain a directed walk $Q$ that starts and ends at $1$. If some arcs are used in multiple paths, we replace them with multi-arcs so that each arc is used exactly once in $Q$; each copy has the same weight as the original arc. Since $w(P_i)\geq 0~(i=1,2,\dots,k)$, we have $w(Q)=\sum_{i=1}^k w(P_i)\geq 0$. Moreover, for each node, the indegree and outdegree in $Q$ coincide, so the walk $Q$ can be partitioned into a family $\mathcal{C}$ of directed cycles satisfying $\sum_{C\in \mathcal{C}}w(C)=w(Q)\geq 0$. Since $G_A$ has no positive-weight cycle, each cycle in $\mathcal{C}$ has non-positive weight; combined with $\sum_{C\in\mathcal{C}}w(C)\geq 0$, every cycle in $\mathcal{C}$ has weight $0$. Hence $w(Q)=0$, and since each $w(P_i)$ is non-negative, we obtain $w(P_i)=0~(i=1,2,\dots,k)$. Fix any $i^*\in \{1,2,\dots,k\}$ and let $j^*$ be the second-to-last node in $P_{i^*}$; that is, $(j^*,i^*)$ is the last arc in $P_{i^*}$. Since $P_{i^*}$ has a positive weight in $G_{A^{i^*,e}}$ while $w(P_{i^*})=0$ in $G_A$, the weight of $(j^*,i^*)$ in $G_{A^{i^*,e}}$ is larger than its weight in $G_{A}$ by $1$, i.e., $v_{j^*}(A_{i^*}\cup\{e\})-v_{j^*}(A_{i^*})=1$. Among the cycles in $\mathcal{C}$, let $C^*$ be the one containing $(j^*,i^*)$. Let us define $A'\coloneqq(A'_1, A'_2,\dots,A'_n)$ as follows: $A'_{j^*}\coloneqq A_{i^*}\cup\{e\}$, $A'_j\coloneqq A_i~(\forall(j,i)\in C^*:j\neq j^*)$, and $A'_j\coloneqq A_j$ for every $j$ not on $C^*$. Since $C^*$ has weight $0$ in $G_A$, we have $\sum_{(j,i)\in C^*}(v_j(A_i)-v_j(A_j))=0$. Using this, we obtain \begin{align*} \sum_{j\in N}v_j(A'_j)-\sum_{j\in N}v_j(A_j) =&~\sum_{j\in C^*}\bigl(v_j(A'_j)-v_j(A_j)\bigr)\\ =&\sum_{(j,i)\in C^*}\bigl(v_j(A_i)-v_j(A_j)\bigr)+\bigl(v_{j^*}(A_{i^*}\cup\{e\})-v_{j^*}(A_{i^*})\bigr) =1. \end{align*} Then, the utilitarian social welfare of $A'$ is strictly larger than that of $A$, contradicting condition~(i) in Lemma~\ref{lem:complement2}. \end{proof} We now show that the SEC algorithm runs in polynomial time. Step 1 of this algorithm is the same as that of the SE mechanism and hence can be computed in polynomial time by Lemma~\ref{lem:SE_poly}. Moreover, Steps 2 and 3 can be computed in polynomial time by the method of Halpern and Shah \cite{halpern2020fair}, i.e., by applying the Floyd-Warshall algorithm to the graph obtained by negating all arc weights in the envy graph; a short sketch of this computation follows the corollary below. We remark that, as shown in the Appendix (Proposition~\ref{prop:EFX}), the allocation returned by the SEC algorithm is EFX. Thus, we obtain the following result: \begin{corollary} For matroidal valuations, there exists a complete allocation that is utilitarian optimal and EFX. \end{corollary}
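To make the subsidy computation concrete, here is a minimal Python sketch of the Floyd-Warshall step, with \texttt{v} and \texttt{A} as hypothetical valuation oracles and bundles as before. Each agent's subsidy is the maximum weight of a path starting at her node in the envy graph, obtained as a shortest-path computation after negating all arc weights.
\begin{verbatim}
# Sketch: subsidies from the envy graph via Floyd-Warshall on negated
# arc weights (following Halpern and Shah).  `v[i](S)` and `A` are
# the same hypothetical oracles/bundles as in the earlier sketches.

def subsidies(v, A):
    n = len(A)
    # dist[i][j]: minimum, over i -> j paths, of negated envy weights,
    # i.e. minus the maximum-weight i -> j path in the envy graph.
    dist = [[-(v[i](A[j]) - v[i](A[i])) for j in range(n)]
            for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    # Agent i's subsidy: the largest path weight leaving i (0 if no
    # positive path starts at i); for SEC outputs this is 0 or 1.
    return [max(0, max(-dist[i][j] for j in range(n)))
            for i in range(n)]
\end{verbatim}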
\section{Superadditive valuations}\label{sec:superadditive} In this section, we consider a class of valuations that do not possess the substitution property, namely, superadditive valuations. Holmstr\"{o}m \cite{holmstrom:econometrica:1979} proved that when the set $V$ of valuations satisfies the \emph{convexity} condition, the Groves mechanisms are the only utilitarian optimal and truthful mechanisms. For superadditive valuations, some instances of the Groves mechanisms, including the VCG mechanism, satisfy envy-freeness \cite{papai:scw:2003}. The class of superadditive valuations also satisfies convexity; thus, by Holmstr\"{o}m's result \cite{holmstrom:econometrica:1979}, the Groves mechanisms are the only utilitarian optimal and truthful mechanisms for such valuations. We require the subsidy for each agent to be non-negative; to this end, we can use the following mechanism: \begin{tcolorbox}[title=VCG with an upfront subsidy $m$, left=0mm] \begin{enumerate}[label=\textbf{Step \arabic*.},leftmargin=*] \item Allocate items according to an arbitrarily chosen $A^* \in \arg\max_{A} \sum_{j \in N} v_j(A_j)$. \item Give $m - \bigl(\max_{A} \sum_{j \neq i} v_j(A_j) - \sum_{j\neq i} v_j(A^*_j)\bigr)$ subsidy to each $i \in N$. \end{enumerate} \end{tcolorbox} \begin{theorem} For superadditive valuations, the VCG with an upfront subsidy $m$ is truthful, utilitarian optimal, and envy-free, and each subsidy is in $[0,m]$. \end{theorem} \begin{proof} By definition, the resulting allocation is utilitarian optimal. Note that the second term of the subsidy (i.e., $\max_{A} \sum_{j \neq i} v_j(A_j) - \sum_{j\neq i} v_j(A^*_j)$) is equal to the standard VCG payment. Thus, this mechanism is equivalent to the following: first, each agent obtains an upfront subsidy of $m$; then, items are allocated using the standard VCG mechanism, where each agent pays the VCG payment out of the upfront subsidy. Because every agent receives the same upfront subsidy, the overall mechanism still satisfies envy-freeness and truthfulness. Moreover, the standard VCG payment is non-negative and at most $v_i(A^*_i)$. Since we assume that $v_i(M) \leq m$ holds, the subsidy is non-negative and at most $m$. \end{proof} We note that, for additive valuations, one can compute a utilitarian optimal allocation in polynomial time.\footnote{This can be done by allocating each item to an agent who values it the most.} Hence, the above mechanism is polynomial-time implementable for additive valuations; a minimal sketch for this case is given below. In general, however, the welfare maximization problem is NP-hard for superadditive valuations (see, e.g., Proposition 11.5 of \cite{Nisan}). Now, can the amount of subsidy for each agent be reduced while achieving envy-freeness and utilitarian optimality? The next theorem shows that the required subsidy can in fact approach $m$ even when there are two agents with additive valuations. \begin{theorem} \label{thm:impossibility1} For any $\epsilon>0$, if a mechanism is envy-free and utilitarian optimal, it requires a subsidy of $m-\epsilon$ for each agent, even when there are only two agents with additive valuations such that the value of each item is at most $1$. \end{theorem} \begin{proof} Suppose that there are two agents $N=\{1,2\}$ with valuation functions $v_1(X)=|X|$ and $v_2(X)=(1-\epsilon/m)|X|$ for each $X\subseteq M$. Then, the unique utilitarian optimal allocation is $A=(M,\emptyset)$, and the mechanism must pay a subsidy of at least $m-\epsilon$ to agent $2$. \end{proof}
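Since welfare maximization is greedy for additive valuations, the mechanism above admits a short computational sketch. The following Python fragment is a minimal illustration for that case, with the representation of reported values (\texttt{values[i][e]}) hypothetical; it assumes at least two agents.
\begin{verbatim}
# Sketch of "VCG with an upfront subsidy m" for additive valuations
# (n >= 2).  `values[i][e]` is agent i's reported value for item e;
# this representation is illustrative.

def vcg_with_upfront_subsidy(values, items, m):
    n = len(values)
    alloc = {i: set() for i in range(n)}
    for e in items:  # utilitarian optimum: item to a highest bidder
        winner = max(range(n), key=lambda i: values[i][e])
        alloc[winner].add(e)

    def max_welfare_without(excluded):
        return sum(max(values[i][e] for i in range(n) if i != excluded)
                   for e in items)

    subsidy = {}
    for i in range(n):
        others_now = sum(values[j][e] for j in range(n) if j != i
                         for e in alloc[j])
        vcg_payment = max_welfare_without(i) - others_now
        subsidy[i] = m - vcg_payment  # in [0, m] when v_i(M) <= m
    return alloc, subsidy
\end{verbatim}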
\section{General monotone valuations}\label{sec:general} In this section, we consider the class of general monotone valuations. P\'{a}pai \cite{papai:scw:2003} showed that for general monotone valuations, no instance of the Groves mechanisms \cite{groves:econometrica:1973} satisfies envy-freeness; hence, for monotone valuations, there exists no mechanism satisfying utilitarian optimality, truthfulness, and envy-freeness. This negative result has been strengthened by Feldman and Lai~\cite{Feldman2012} to hold even for the class of monotone submodular valuations.\footnote{Specifically, Theorem \ref{thm:impossibility:general} applies to a subclass of monotone submodular valuations called \emph{capacitated valuations} \cite{Feldman2012}.} \begin{theorem}[Feldman and Lai~\cite{Feldman2012}]\label{thm:impossibility:general} No mechanism satisfies truthfulness, envy-freeness, and utilitarian optimality, even when all agents have monotone submodular valuations. \end{theorem} If we require completeness instead of utilitarian optimality, we can construct a mechanism that satisfies truthfulness and envy-freeness; in fact, Caragiannis and Ioannidis~\cite{caragiannis2020computing} pointed out that the following mechanism satisfies these properties: allocate all items to an agent $i^*$ who values the whole bundle $M$ the most and pay a subsidy of $v_{i^*}(M)$ to every other agent. Note that the subsidy for each agent is at most $m$ by the assumption that the maximum valuation is bounded by $m$. On the other hand, a complete and envy-free mechanism may need to pay a subsidy of $m$ to some agent.\footnote{Note that we do not know whether a similar example exists under the assumption that the maximum marginal contribution of each item is $1$ for each agent.} \begin{theorem}\label{thm:general:complete} If a mechanism satisfies completeness and envy-freeness, then it requires a subsidy of $m$ for each agent, even when there are two agents. \end{theorem} \begin{proof} Suppose that there are $m$ items $M=\{e_1,\dots,e_m\}$ and two agents $N=\{1,2\}$ with valuation functions \begin{align} v_1(X)=v_2(X)= \begin{cases} m & \text{if }e_1\in X,\\ 0 & \text{if }e_1\not\in X. \end{cases} \end{align} By completeness, one agent receives $e_1$ and the other does not. Without loss of generality, we assume that agent $1$ receives $e_1$; then, by envy-freeness, agent $2$ must be subsidized by at least $m$. \end{proof} \section{Conclusion} We have studied mechanism design for allocating indivisible items with a limited amount of subsidy. Although it is difficult in general to provide theoretical guarantees, we identified that the class of matroidal valuations admits a desirable mechanism using a subsidy of at most $1$ for each agent. For superadditive valuations, we showed that there is a truthful mechanism that is both envy-free and utilitarian optimal and that requires a subsidy of $m$ for each agent. Several questions remain open. Although our work is primarily concerned with utilitarian optimality as an efficiency criterion, it would be interesting to study the compatibility of truthfulness and fairness with other efficiency requirements, such as completeness and non-wastefulness. As a specific question, the VCG mechanism with an upfront subsidy can allocate all items and achieves the bound of $m$ for additive valuations. An obvious direction would be to study whether a subsidy of $m$ is necessary to achieve a truthful, envy-free, and complete mechanism when agents have additive valuations, assuming that the maximum value of each item is $1$. Another important topic is to understand what truthful and complete mechanisms with a limited subsidy look like. In mechanism design without money, Amanatidis et al.~\cite{Amanatidis} characterized such mechanisms for the case of two agents. It remains a challenge to extend the result of \cite{Amanatidis} to the setting with a subsidy.
We also highlight that, for general monotone valuations beyond matroidal and additive ones, the asymptotically minimal amount of subsidy required to make some allocation envy-free remains an open question. Brustle et al.~\cite{Brustle2020} showed that, for monotone valuations, an envy-free allocation with subsidy $2(n-1)$ for each agent exists, assuming that the maximum marginal contribution of each item is $1$ for each agent; however, it is unclear whether this bound is tight. \clearpage \bibliographystyle{plain}
{ "timestamp": "2021-05-06T02:06:58", "yymm": "2105", "arxiv_id": "2105.01801", "language": "en", "url": "https://arxiv.org/abs/2105.01801" }
\titlespacing{\section}{0pt}{12pt plus 4pt minus 2pt}{8pt plus 4pt minus 2pt} \titleformat{\section}[block]{\scshape\filcenter}{}{1em}{} \renewcommand{\figurename}{\textbf{Fig.}} \renewcommand\thefigure{\textbf{\arabic{figure}}} \begin{document} \twocolumn[{ \begin{center} \large\textbf{Unusual magnetotransport in twisted bilayer graphene} \end{center} \begin{center} \small{Joe Finney$^{1,2}$, Aaron L. Sharpe$^{2,3}$, Eli J. Fox$^{1,2}$, Connie L. Hsueh$^{2,3}$, Daniel E. Parker$^4$, Matthew Yankowitz$^{5,6}$, Shaowen Chen$^{4,7,8}$, Kenji Watanabe$^{9}$, Takashi Taniguchi$^{10}$, Cory R. Dean$^7$, Ashvin Vishwanath$^4$, Marc Kastner$^{1,2,11}$, David Goldhaber-Gordon$^{1,2,12}$} \end{center} \begin{center} \footnotesize{$^1$\textit{Department of Physics, Stanford University, 382 Via Pueblo Mall, Stanford, CA 94305, USA}}\\ \footnotesize{$^2$\textit{Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA}}\\ \footnotesize{$^3$\textit{Department of Applied Physics, Stanford University, 348 Via Pueblo Mall, Stanford, CA 94305, USA}}\\ \footnotesize{$^4$\textit{Department of Physics, Harvard University, Cambridge, MA 02138, USA}}\\ \footnotesize{$^5$\textit{Department of Physics, University of Washington, Seattle, WA, USA}}\\ \footnotesize{$^6$\textit{Department of Materials Science and Engineering, University of Washington, Seattle, WA, USA}}\\ \footnotesize{$^7$\textit{Department of Physics, Columbia University, New York, NY 10027, USA}}\\ \footnotesize{$^8$\textit{Department of Applied Physics and Applied Mathematics, Columbia University, New York, NY 10027, USA}}\\ \footnotesize{$^9$\textit{Research Center for Functional Materials, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan}}\\ \footnotesize{$^{10}$\textit{International Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan}}\\ \footnotesize{$^{11}$\textit{Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA}}\\ \footnotesize{$^{12}$To whom correspondence should be addressed; E-mail: \texttt{goldhaber-gordon@stanford.edu}} \end{center} \begin{abstract} We present transport measurements of bilayer graphene with 1.38° interlayer twist and apparent additional alignment to its hexagonal boron nitride cladding. As with other devices with twist angles substantially larger than the magic angle of 1.1°, we do not observe correlated insulating states or band reorganization. However, we do observe several highly unusual behaviors in magnetotransport. For a large range of densities around half filling of the moiré bands, magnetoresistance is large and quadratic. Over these same densities, the magnetoresistance minima corresponding to gaps between Landau levels split and bend as a function of density and field. We reproduce the same splitting and bending behavior in a simple tight-binding model of Hofstadter’s butterfly on a square lattice with anisotropic hopping terms. These features appear to be a generic class of experimental manifestations of Hofstadter’s butterfly and may provide insight into the emergent states of twisted bilayer graphene. \end{abstract} }] \section{Introduction} \noindent The mesmerizing Hofstadter butterfly spectrum arises when electrons in a two-dimensional periodic potential are immersed in an out-of-plane magnetic field.
When the magnetic flux $\Phi$ through a unit cell is a rational multiple $p/q$ of the magnetic flux quantum $\Phi_0=h/e$, each Bloch band splits into $q$ subbands \cite{hofstadterEnergyLevelsWave1976}. The carrier densities corresponding to gaps between these subbands follow straight lines when plotted as a function of normalized density $n/n_s$ and magnetic field \cite{wannierResultNotDependent1978}. Here, $n_s$ is the density of carriers required to fill the (possibly degenerate) Bloch band. These lines can be described by the Diophantine equation $n/n_s=t(\Phi/\Phi_0)+s$ for integers $s$ and $t$. For example, the gap with $(s,t)=(0,4)$ follows the line $n/n_s=4\,\Phi/\Phi_0$, which appears in a fan diagram as the $\nu=4$ Landau level emanating from charge neutrality. In experiments, these gaps appear as minima or zeroes in longitudinal resistivity coinciding with Hall conductivity quantized at $\sigma_{xy}=te^2/h$ \cite{stredaQuantisedHallEffect1982,thoulessQuantizedHallConductance1982}. Hofstadter originally studied magnetosubbands emerging from a single Bloch band on a square lattice. In the following decades, other authors considered different lattices \cite{claroMagneticSubbandStructure1979,hasegawaStabilizationFluxStates1990,liTightbindingElectronsTriangular2011}, the effect of anisotropy \cite{barelliMagneticFieldInducedDirectionalLocalization1999,hasegawaStabilizationFluxStates1990,powellDensityWaveStates2019,sunPossibilityQuenchingIntegerquantumHall1991}, next-nearest-neighbor hopping \cite{claroSpectrumTightBinding1981,gvozdikovEnergySpectrumBloch1994,hanCriticalBicriticalProperties1994,hatsugaiEnergySpectrumQuantum1990,ohEnergySpectrumTriangular2000}, interactions \cite{barelliTwoInteractingHofstadter1997,mishraEffectsInteractionHofstadter2016}, density wave states \cite{powellDensityWaveStates2019}, and graphene moirés \cite{bistritzerMoirButterfliesTwisted2011,moonEnergySpectrumQuantum2012}. It took considerable ingenuity to realize clean systems with unit cells large enough to allow conventional superconducting magnets to reach $\Phi/\Phi_0\sim 1$. The first successful observation of the butterfly in electrical transport measurements was in GaAs/AlGaAs heterostructures with lithographically-defined periodic potentials \cite{albrechtEvidenceHofstadterFractal2001,geislerDetectionLandauBand2004,schlosserLandauSubbandsGenerated1996}. These experiments demonstrated the expected quantized Hall conductance in a few of the largest magnetosubband gaps. In 2013, three groups mapped out the full butterfly spectrum in both density and field in heterostructures based on monolayer \cite{huntMassiveDiracFermions2013,ponomarenkoCloningDiracFermions2013} and bilayer \cite{deanHofstadterButterflyFractal2013} graphene. In all three cases, the authors made use of the 2\% lattice mismatch between their graphene and its encapsulating hexagonal boron nitride (hBN) dielectric. With these layers rotationally aligned, the resulting moiré pattern was large enough in area that gated structures studied in available high-field magnets could simultaneously approach normalized carrier densities and magnetic flux ratios of 1. Later work on hBN-aligned bilayer graphene showed that, likely because of electron-electron interactions, the gaps could also follow lines described by fractional $s$ and $t$ \cite{spantonObservationFractionalChern2018}. In twisted bilayer graphene (TBG), a slight interlayer rotation creates a similar-scale moiré pattern. Unlike with graphene-hBN moirés, in TBG there is a gap between the lowest and next moiré subbands \cite{caoSuperlatticeInducedInsulatingStates2016}.
As the twist angle approaches the magic angle of 1.1°, the isolated moiré bands become flat \cite{bistritzerMoireBandsTwisted2011,liObservationVanHove2010}, and strong correlations lead to fascinating insulating \cite{caoCorrelatedInsulatorBehaviour2018,caoUnconventionalSuperconductivityMagicangle2018,polshynLargeLinearintemperatureResistivity2019,saitoIndependentSuperconductorsCorrelated2020,sharpeEmergentFerromagnetismThreequarters2019,stepanovCompetingZerofieldChern,yankowitzTuningSuperconductivityTwisted2019,zondinerCascadePhaseTransitions2020}, superconducting \cite{caoUnconventionalSuperconductivityMagicangle2018,polshynLargeLinearintemperatureResistivity2019,saitoIndependentSuperconductorsCorrelated2020,stepanovCompetingZerofieldChern,yankowitzTuningSuperconductivityTwisted2019,zondinerCascadePhaseTransitions2020}, and magnetic \cite{serlinIntrinsicQuantizedAnomalous2019,sharpeEmergentFerromagnetismThreequarters2019,stepanovCompetingZerofieldChern} states. The strong correlations tend to cause moiré subbands within a four-fold degenerate manifold to move relative to each other as one tunes the density, leading to Landau levels that extend only toward higher magnitudes of density from charge neutrality and integer filling factors \cite{wongCascadeElectronicTransitions2020,zondinerCascadePhaseTransitions2020}. This correlated behavior obscures the single-particle Hofstadter physics that would otherwise be present. In this work, we present measurements from a TBG device twisted to 1.38° with apparently aligned hBN. When we apply a perpendicular magnetic field, a complicated and beautiful fan diagram emerges. In a broad range of densities on either side of charge neutrality, the device displays large, quadratic magnetoresistance. Within the magnetoresistance regions, each Landau level associated with $\nu=\pm 8, \pm 12, \pm 16, ...$ appears to split into a pair, and these pairs follow complicated paths in field and density, very different from those predicted by the usual Diophantine equation. Phenomenology similar in all qualitative respects appears in measurements on several regions of this same device with similar twist angles, and in a separate device at 1.59° (see Supplementary Materials for details). We can reproduce the unusual features of the Landau levels in a simple tight-binding model on a square lattice with anisotropy and a small energetic splitting between two species of fermions. This is at first glance surprising, since that model does not represent the symmetries of the experimental moiré structure. We speculate that the unusual LL features we experimentally observe can generically emerge from spectra of Hofstadter models that include the same ingredients we added to the square lattice model. With further theoretical work it may be possible to use our measurements to gain insight into the underlying Hamiltonian of TBG near the magic angle. \begin{figure}[t!] \includegraphics[width=\columnwidth]{fig1.pdf} \captionof{figure}{\textbf{Low-field magnetotransport.} (\textbf{A}) Optical micrograph of the device showing contacts and top gate in gold and hBN in green. We use the large top and bottom contacts to source and drain current. The channel width is 1 \textmu m, and all longitudinal contact pairs are separated by 3 squares. The white line indicates the contact pair that we study throughout this work. Scale bar: 5 \textmu m. (\textbf{B}) Longitudinal resistivity of the device as density is tuned from empty to full moiré cell at several fixed magnetic fields (in tesla).
The peak at $n=0$ is charge neutrality, and the peaks at the edges of the plot are full filling/emptying of the moiré unit cell. At nonzero fields, there are regions on either side of charge neutrality with large, positive magnetoresistance. (\textbf{C}) Magnetoresistance ratio as a function of field for several fixed densities on a log-log plot. Each trace is offset vertically for clarity. The black dashed line is a quadratic.} \end{figure} \begin{figure*}[t!] \includegraphics[width=\textwidth]{fig2.pdf} \captionof{figure}{\textbf{Unusual Landau fan diagram}. (\textbf{A}) Landau fan diagram taken at 26 mK. Landau level gaps are observed as minima in longitudinal resistivity. (\textbf{B}) Schematic fan diagram corresponding to (A). Red shaded regions are regions with large magnetoresistance at low field. Solid (dotted) lines are symmetry-preserving (symmetry-broken) LLs coming from either charge neutrality or a band edge. Dashed lines are resistance minima corresponding to non-zero $s$ and $t$. The light grey dashed boxes indicate regions reproduced in Fig. 3.} \end{figure*} \section{Measurements} \noindent We fabricated this TBG device using the “tear-and-stack” dry transfer method along with standard lithographic techniques \cite{caoSuperlatticeInducedInsulatingStates2016,kimVanWaalsHeterostructures2016}. We encapsulated the device in hBN and included both a graphite back gate and a Ti/Au top gate. When stacking, we attempted to crystallographically align the top layer of graphene to the top layer of hBN. Based on optical micrographs taken during stacking, we appear to have succeeded to within 1°. Using both gates, we could independently tune density $n$ and perpendicular displacement field $D$ \cite{oostingaGateinducedInsulatingState2008}. Fig. 1a shows an optical micrograph of the completed device: a standard Hall bar with nine voltage probes on each side. The conduction channel is 1 µm wide and all contact pairs are separated by three squares. In this work, we focus on measurements from only one contact pair with twist angle 1.38°±0.01°; the supplemental material has more information on the other contact pairs. In summary, the twist angle for most contact pairs varies between 1.29° and 1.45°, with the magnetotransport effects that are the focus of this work peaking around 1.36°. Curiously, two sets of contact pairs near 1.33° display superconductivity (see the supplemental material for details); this is far outside the range of twists around the magic angle where superconductivity has been previously reported for twisted bilayer graphene. Upon tuning the top gate at fixed magnetic field, we do not observe correlated insulating states at partial fillings of the flat bands (Fig. 1b). This behavior is consistent with reports of samples similarly far above the magic angle \cite{saitoIndependentSuperconductorsCorrelated2020}. Nor do we observe the opening of a gap at charge neutrality or any signatures of ferromagnetism, behaviors which are associated with aligned hBN near the magic angle \cite{serlinIntrinsicQuantizedAnomalous2019,sharpeEmergentFerromagnetismThreequarters2019,stepanovCompetingZerofieldChern}. Instead, in a broad range of densities near half filling, we observe large positive magnetoresistance for both electron and hole doping. The magnetoresistance ratio $[\rho(B)-\rho(0)]/\rho(0)$ is approximately quadratic at low field, reaches over 300, and appears to saturate above 5 T (Fig. 1c).
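As an aside on how such scaling can be quantified: on a log-log plot, $\mathrm{MR}\propto B^{\alpha}$ appears as a line of slope $\alpha$, so a linear fit of $\log(\mathrm{MR})$ against $\log(B)$ at low field tests the quadratic behavior. The following minimal sketch assumes hypothetical NumPy arrays \texttt{B} and \texttt{rho} from a field sweep at fixed density; it is an illustration, not the analysis pipeline used here.
\begin{verbatim}
# Sketch: estimating the low-field magnetoresistance exponent.
# `B` (tesla, with B[0] == 0) and `rho` (longitudinal resistivity)
# are hypothetical arrays from a single field sweep.

import numpy as np

def mr_exponent(B, rho, bmax=1.0):
    mr = (rho - rho[0]) / rho[0]            # magnetoresistance ratio
    mask = (B > 0) & (B < bmax) & (mr > 0)  # low-field window
    slope, _ = np.polyfit(np.log(B[mask]), np.log(mr[mask]), 1)
    return slope                            # ~2 for quadratic MR
\end{verbatim}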
As we tune both field and density, a complicated series of quantum oscillations originates at the charge neutrality point and $B = 0$ and propagates outwards (Fig. 2a). Near charge neutrality, the Landau levels look similar to those of ordinary magic-angle TBG devices \cite{caoUnconventionalSuperconductivityMagicangle2018,saitoIndependentSuperconductorsCorrelated2020,yankowitzTuningSuperconductivityTwisted2019}, with filling fractions $\nu = \pm 4, \pm 8, \pm 12, ...$ being the most prominent. To within experimental precision, these have zero longitudinal resistance and quantized Hall resistance. As we tune the density into the regions with large magnetoresistance, the Landau levels $\nu = \pm 6, \pm 10, ...$ disappear. Each fourfold degenerate Landau level appears to split into a pair with slopes roughly corresponding to $\nu = \pm 8\pm 0.5, \pm 12\pm 0.5$, and so on (our field range does not allow tracking the $\nu = \pm 4$ levels into the magnetoresistance regions). These split levels do not have zero longitudinal resistance, reaching a minimum of a few hundred ohms. Nor do they follow exactly straight lines. Instead, they bend when approaching other levels. For lack of a better term, we will continue to refer to them as Landau levels. Landau levels also propagate inward from full filling/emptying of the isolated moiré bands toward lower electron/hole filling, respectively, and these behave similarly to those originating from charge neutrality. We can determine $\Phi/\Phi_0$ by considering the points where these levels cross those originating at charge neutrality. For instance, the level with $\nu = +8$ originating at $n/n_s = -4$ must intersect the level with $\nu = -12$ originating at charge neutrality at $\Phi/\Phi_0=1/5$, since setting $8(\Phi/\Phi_0)-4=-12(\Phi/\Phi_0)$ gives $\Phi/\Phi_0=1/5$. In the following discussion, we refer to fields by their values of $\Phi/\Phi_0$. \begin{figure}[t!] \includegraphics[width=\columnwidth]{fig3.pdf} \captionof{figure}{\textbf{Split Landau level overlap behavior in experiment and computation}. (\textbf{A}) Detail of the crossing of split LLs $-12$ from charge neutrality ($s$, $t$ = $0$, $-12$) and $+8$ from $n/n_s=-4$ ($s$, $t$ = $-4$, $8$), and (\textbf{B}) $-8$ from charge neutrality and $+8$ from $n/n_s=-4$. The horizontal lines are at the indicated $\Phi/\Phi_0$, and the lines with steep slopes are the average ($s$, $t$) of the crossing LLs. For the case of panel A, this is the average of ($0$, $-12$) and ($-4$, $8$), which is ($-2$, $-2$), as indicated. Stars indicate the ends of the faint “extra” LLs originating from the intersections of lower (upper) with upper (lower) split LLs. (\textbf{C}) Computed inverse density of states for $q=1999$ near the crossing of the split levels $s$, $t$ = $-2$, $4$ and $4$, $-6$ for $a_x=1$, $a_y=2$, and $V=0.2$. The nearly vertical dotted line is the average of the two levels, $s$, $t$ = $1$, $-1$, and the horizontal line is $\Phi/\Phi_0= 3/5$. (\textbf{D}) Computed inverse density of states for the crossing of the split levels $s$, $t$ = $0$, $6$ and $2$, $-4$ for $a_x=1$, $a_y=2$, and $V=0.3$. The nearly vertical dotted line is now $s$, $t$ = $1$, $1$, and the horizontal line is $\Phi/\Phi_0= 1/5$. The color scale is as in Fig. 4. The difference in slopes of the crossing diamond shape in model vs.
experiment reflects different values of $s$ and $t$ based on a combination of computational convenience and (likely) differences in the underlying Hamiltonian.} \end{figure} The phenomenology near the intersection of split Landau levels travelling in opposite directions follows a consistent pattern throughout the fan diagram. The example of the $+8$ and $-12$ levels from the previous paragraph is shown in Fig. 3a. As mentioned above, each Landau level splits into a lower and upper level. When a lower (upper) level overlaps with a lower (upper) level moving in the other direction, it changes direction to follow a line originating from half filling ($s = \pm 2$, steeply sloped dashed lines in Fig. 3) with slope equal to the average of the two intersecting levels, which is $-2$ in this case. Within the overlap, the resistivity minima tend to be deeper. Two crossings of a lower with an upper level occur at the same field at which the unsplit Landau levels would have intersected (horizontal dashed lines in Fig. 3), which is $1/5$ for this example. There is no drop in resistivity where these two intersect. Instead, they appear to displace horizontally by the width of the level that they are crossing before resuming their previous slope. The result of these changes in direction is that in between the overlaps the split LLs are shifted slightly toward each other. The overlap of the split LLs around $+8$ and $-8$ originating from $n/n_s = -4$ and charge neutrality, respectively, shows the same phenomena (Fig. 3b). In this case, the intersection is at $\Phi/\Phi_0 = 1/4$, and the average slope is $0$, so we see a vertical line of low resistivity. In addition, there are faint additional levels emanating outwards from the two intersections of lower with upper levels. \begin{figure*}[t!] \includegraphics[width=\textwidth]{fig4.pdf} \captionof{figure}{\textbf{Replication of unusual magnetotransport features in Hofstadter’s butterfly}. (\textbf{A-D}) Energy spectra for the indicated parameters for $q = 1999$, discussed in the main text. Zero energy is the rightmost side of each panel, and they are symmetric about that line. (\textbf{E-H}) Inverse density of states corresponding to the spectra in the above panels. The dashed red line bounds one of the high density-of-states regions where the two butterflies overlap, which correspond to where we see large magnetoresistance in transport.} \end{figure*} \section{Discussion} \noindent Surprisingly, we find that we can reproduce the basic phenomenology observed in the Landau fan diagram with a single-particle calculation that is a simple extension of Hofstadter’s butterfly model. Rather than starting with the standard continuum model of twisted bilayer graphene \cite{bistritzerMoirButterfliesTwisted2011,koshinoEffectiveContinuumModel2020} and attempting to introduce the effects of interactions, we make our calculation on a simple rectangular lattice. We do not expect that the exact details of our calculation match the details in our TBG device, including but not limited to the degeneracies arising from spin and valley. Rather, the replication of several distinctive behaviors in such a different lattice suggests that they are generic features of Hofstadter-like models with anisotropy.
The ordinary Hofstadter butterfly is the result of applying a magnetic field to the tight-binding Hamiltonian \begin{equation} H = \sum_{\langle i,j \rangle}a_{ij}c_i^\dagger c_j + \mathrm{h.c.} \end{equation} where $i$ and $j$ index lattice sites, $a_{ij}=a_x=1$ for neighboring sites in the $x$ direction, and $a_{ij}=a_y=1$ for neighboring sites in the $y$ direction. Following many prior works, we numerically solve the associated eigenvalue equation to find the energy spectrum. We then make the simple step of displaying $d\mu /dn$, the inverse density of states \textit{as a function of density}. As we show below, if we slightly modify Hofstadter’s tight-binding Hamiltonian, $d\mu /dn$ emulates the striking phenomenology of our device’s magnetoresistance. Specifically, we augment the Hamiltonian by allowing the hopping amplitudes to differ in the $x$ and $y$ directions ($a_x \neq a_y$) and by adding a second fermion species with a tunable energy splitting $V$, yielding \begin{multline} H = \sum_{\alpha\in\{A,B\}}\sum_{\langle i,j\rangle}a_{ij}c_{i\alpha}^\dagger c_{j\alpha}\\ + V\sum_i\left(c_{iA}^\dagger c_{iA} - c_{iB}^\dagger c_{iB}\right)+ \mathrm{h.c.} \end{multline} In the following text and figures, we set $a_x=1$ and consider only constant $V$. If $V$ is instead set proportional to $B$, to reflect Zeeman splitting of either spins or valleys, the phenomenology is not substantially changed. Fig. 4 shows spectra and the corresponding inverse density of states from the model of Eq. 2 for several values of $a_y$ and $V$. The spectrum for $a_x=a_y=1$ is identical to the classic isotropic butterfly, and the corresponding inverse density of states demonstrates clear Diophantine behavior (Fig. 4e). However, as there are two fermion species, the Landau levels are doubly degenerate and the gaps follow even-integer slopes only. Anisotropy has previously been shown to smear out the energy levels and partially close the gaps in the spectrum \cite{barelliMagneticFieldInducedDirectionalLocalization1999,hasegawaStabilizationFluxStates1990,powellDensityWaveStates2019,sunPossibilityQuenchingIntegerquantumHall1991}, which we reproduce by tuning $a_y$ away from $a_x$ (Fig. 4b, f). Upon introducing a small nonzero $V$, a second butterfly pattern appears (Fig. 4c). At low fields and low densities the two butterflies are almost parallel and seldom overlap, and every integer filling of Landau levels gives a ground state with a gap for excitations. However, at higher fields and energies, the anisotropy-broadened butterflies overlap, and odd-integer Landau level fillings have no gap to excitations. The even-integer Landau levels appear to split and bend in the same way as the measured Landau levels in our device, and the behavior at crossings of opposite-polarity Landau levels is also the same as in our device, as shown in the bottom two panels in Fig. 3. The shape of the split LLs is in rough agreement with our experiment for the range of parameters $1.5 < a_y < 3$ and $0.1 < V < 0.3$. The supplemental material shows the behavior of the model outside of this parameter range, and more fully explains the cause of the offsets in split LLs as they intersect. It is surprising to us that such complex behavior can be reproduced with a simple single-particle model. Nonetheless, several features call for further examination: First, what causes the striking magnetoresistance?
We see very large magnetoresistance in the density-field region of our experimental fan diagram corresponding to where the two broadened butterflies overlap in our model, as indicated in Fig. 4. Though the phenomenological association is clear, it is not obvious to us why overlapping Landau levels should produce such prominent magnetoresistance. One might instead imagine that the magnetoresistance results from coexistence of charge carriers of both signs, since compensated semimetals show some of the strongest known near-quadratic magnetoresistance \cite{aliLargeNonsaturatingMagnetoresistance2014,fatemiMagnetoresistanceQuantumOscillations2017,liangUltrahighMobilityGiant2015}. This phenomenologically tempting explanation is not easily reconciled with the persistence of magnetoresistance over a broad gate-voltage range (for a fuller discussion, see the supplemental material). Second, what role is played by alignment of the twisted bilayer to hBN? This alignment may modify the single-particle band structure by breaking sublattice symmetry. Nearer the magic angle, this can result in a quantum anomalous Hall effect, perhaps particularly when the graphene-graphene and graphene-hBN moiré patterns are commensurate \cite{shiMoirCommensurabilityQuantum2021}. We do not see any features in transport clearly associated with the hBN alignment, so we do not know what role such alignment is playing, if any. In fact, the $\sim 1.5\pm 0.5$° alignment of facets that we observe visually may be between zigzag in one material and armchair in the other, in which case the effect of the hBN on the graphene electronic structure may be much weaker. Third, does the apparent anisotropic effective Hamiltonian emerge from electron interactions, from uniaxial strain alone, or from some combination such as electronic ordering with order parameter set by strain? Some previous theoretical \cite{chichinadzeNematicSuperconductivityTwisted2020,fernandesNematicityTwistRotational2020,kangNonAbelianDiracNode2020,liuNematicTopologicalSemimetal2021,samajdarElectricfieldtunableElectronicNematic2021,parkerStraininducedQuantumPhase2020} and experimental \cite{choiElectronicCorrelationsTwisted2019,jiangChargeOrderBroken2019,kerelskyMaximizedElectronInteractions2019,rubio-verduUniversalMoirNematic2020} results suggest nematic order at a variety of filling factors within the lowest-energy moiré miniband manifold, both in TBG relatively close to the magic angle of 1.1° and in twisted double bilayer graphene. Emergence of nematic order likely heralds an anisotropic effective Hamiltonian, plausibly explaining the correspondence between our device’s behavior and that of our simple model in which anisotropy was built in. We hope our results prompt examination of whether and how such ordering can emerge even far from the magic angle, as in our sample. Landau fan diagrams have been a staple of electrical transport measurements for decades because they give clear insight into the spectrum of electronic states and their filling. In this work, we have identified an entirely new confluence of phenomena in the fan diagram of a TBG device and have found, to our surprise, that this same combination emerges naturally from a single-particle Hamiltonian with anisotropic tunneling. \setlength\bibitemsep{0pt} \printbibliography \noindent\textbf{Acknowledgments}: This work greatly benefited from the advice of M. Zaletel and A. MacDonald, along with ideas from the many scientists with whom we shared these measurements over the last year, including P.
Jarillo-Herrero, S. Kivelson, A. Young, B. Feldman, A. Pasupathy, T. Senthil, and A. Mackenzie. \textbf{Funding}: Device fabrication, measurements, and analysis were supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under contract DE-AC02-76SF00515. Measurement infrastructure was funded in part by the Gordon and Betty Moore Foundation’s EPiQS Initiative through grant GBMF3429 and grant GBMF9460, and D.E.P. was supported in part by grant GBMF8683. D.G.-G. gratefully acknowledges support from the Ross M. Brown Family Foundation. Part of this work was performed at the Stanford Nano Shared Facilities (SNSF), supported by the National Science Foundation under award ECCS-1542152. S.C., M.Y., and C.R.D. were supported as part of Programmable Quantum Materials, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under award DE-SC0019443. K.W. and T.T. acknowledge support from the Elemental Strategy Initiative conducted by the MEXT, Japan, Grant Number JPMXP0112101001 and JSPS KAKENHI Grant Number JP20H00354. \textbf{Author contributions}: J.F. and C.H. fabricated the device. J.F., A.L.S., and E.J.F. performed transport measurements and analysis. D.E.P. and A.V. contributed the two-species toy model. M.Y., S.C., and C.R.D. fabricated and measured samples D34 and D25. K.W. and T.T. generously supplied the hBN crystals. M.A.K. and D.G.-G. supervised the experiments and analysis. The manuscript was prepared by J.F. with input from all authors. \textbf{Competing interests}: M.A.K. is chair of the Department of Energy Basic Energy Sciences Advisory Committee. Basic Energy Sciences provided funding for this work. \textbf{Data and materials availability}: The data from this study along with all code used to perform analysis and simulation are available at \texttt{https://github.com/spxtr/noblehierarch}. \onecolumn \begin{center} \large\textbf{Supplementary Material} \end{center} \section{Fabrication} We assembled our device using a “tear-and-stack” method. We first prepared a Poly(Bisphenol A carbonate) film stretched over a gel (Gel-Pak DGL-17-X8) and affixed it to a glass slide with double-sided tape. To start stacking, we picked up the top layer of hBN at 80℃. We then used the edge of the hBN flake to pick up and tear the graphene at room temperature. The lower temperature compared to the other steps helps to prevent a common cause of stacking failure for us: graphene outside the region directly contacted by the hBN being picked up or dragged. In this step, we attempted to optically align a long, straight edge of the hBN to a similar edge of the graphene. We then rotated the remaining portion of the graphene flake by 1.2°, picked it up at 80℃, picked up the bottom hBN at 80℃, and then finally picked up a flake of few-layer graphite at 80℃ to form the back gate. We transferred the final stack at 150℃ onto 300-nm-thick SiO$_2$ on degenerately doped Si with pre-patterned alignment marks. The resulting heterostructure is shown in Fig. S1a. We then used several iterations of standard e-beam lithography to define the Hall bar. We deposited a Ti/Au top gate, etched the Hall bar region using CHF$_3$/O$_2$ (50/5 sccm), and then deposited Cr/Au edge contacts. The device geometry is labelled in Fig. S1b. \section{Measurements} All measurements in the main text were taken in a dilution refrigerator with a base temperature of 26 mK at the mixing chamber. 
The measurement lines include low-pass RF and discrete RC filters at the mixing chamber stage. We used a Stanford Research SR830 lock-in amplifier with a 1 GΩ bias resistor to source an alternating current of 1 nA at roughly 1 Hz. We measured differential voltage pairs with NF Corporation LI-75A voltage preamplifiers and SR830 lock-in amplifiers. We applied gate voltages using Yokogawa 7651 DC voltage sources. We held the Si back gate at a constant 30 V for all measurements to promote transparent contacts. \section{Twist angle determination} We initially tested the device in a variable temperature insert (VTI) system at ~1.7 K using a homemade set of lock-in amplifiers that allowed us to measure every longitudinal and Hall voltage pair simultaneously. By sweeping density and field, we can see the overlap of the Landau levels originating at charge neutrality and full filling, respectively. The spacing of these overlap points is constant in $1/B$ and is directly related to the area of the moiré unit cell, which allows for accurate calculation of the twist angle \cite{caoCorrelatedInsulatorBehaviour2018}; a short sketch of this conversion is given at the end of this section. In Fig. S1c we show the twist angle variation across the device. The topmost contact pairs have a dramatically different twist angle from the rest. The remaining contacts vary between 1.28° and 1.45°, with the majority near 1.36°. The contacts near 1.36° display the strongest magnetoresistance. All contact pairs with $\mathrm{MR} > 10$ show the unusual LL behavior discussed in the main text to some degree.
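Concretely, the following is a minimal sketch of this conversion, assuming that successive crossing features correspond to consecutive integer values of $\Phi_0/\Phi$; it uses the standard small-angle relations $\lambda = a/(2\sin(\theta/2))$ for the moiré wavelength and $(\sqrt{3}/2)\lambda^2$ for the moiré cell area. The function name and inputs are illustrative.
\begin{verbatim}
# Sketch: twist angle from the 1/B periodicity of Landau-level
# crossings.  `delta_inv_B` (in 1/tesla) is the measured spacing of
# the crossing features in 1/B.

import math

PHI0 = 4.135667e-15        # flux quantum h/e, in T*m^2
A_GRAPHENE = 0.246e-9      # graphene lattice constant, in m

def twist_angle_deg(delta_inv_B):
    area = PHI0 * delta_inv_B                 # moire cell area, m^2
    wavelength = math.sqrt(2 * area / math.sqrt(3))
    return math.degrees(2 * math.asin(A_GRAPHENE / (2 * wavelength)))
\end{verbatim}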
\section{Effect of displacement field} The Si back gate is screened by the graphite back gate, which extends only over the main conduction channel; near the contacts, the Si back gate is unscreened. This allows us to set a high carrier density near the contacts to encourage contact transparency. Meanwhile, the top and back gates allow us to independently tune density and displacement field. Unfortunately, when the back gate is near 0 V, we find that our contact resistances increase dramatically in a magnetic field, leading to a loss of signal at lower temperatures. This happens regardless of the Si back gate voltage. Thus, for the measurements in the main text, we fix the back gate at 1.5 V and then sweep the top gate. This means that the measurements are not performed at a constant displacement field, as we would prefer. Instead, they follow the black dashed line in Fig. S2c. As we will next discuss, varying the displacement field does not substantially change the phenomenology presented in the main text. We calculate the displacement field as in reference \cite{sharpeEmergentFerromagnetismThreequarters2019}. Fig. S2 shows the displacement field dependence of the contact pair from the main text at a few fixed magnetic field values. There is no apparent effect at 0 T. At 3 T, negative displacement field enhances the peak value of the magnetoresistance and widens the density extent of the magnetoresistance region. The resistivity at the charge neutrality point increases with increasing magnitude of displacement field, regardless of the polarity of this field. In Fig. S2c, we can clearly observe the split LLs within the magnetoresistance regions at 8 T as pairs of vertical lines, corresponding to constant density, independent of displacement field. \section{Behavior of other contact pairs} All measurements in the main text are of contact pair 16 - 17. We present Landau fan diagrams and gate maps of two other contact pairs in Fig. S3. In both cases, the basic phenomenology presented in the main text is reproduced. Contact pair 7 - 8 is very similar to 16 - 17. Contact pair 4 - 5 displays split LLs, though less clearly defined. In both of these cases, there are oscillations as we tune the displacement field at certain densities. We do not have a satisfactory explanation for this observation; however, we note that they appear to be more closely related to the back gate voltage than to the displacement field (this is reflected in their downward slope in Fig. S3d: lines of constant gate voltage are sloped because a transformation has been applied to make the axes of that figure $n$ and $D$). Just as the back gate gates the moiré channel, the moiré channel gates the back gate, so band filling in the 4-5-layer-thick back gate is determined primarily by the applied back gate voltage. Empirically, at zero back gate voltage, contacts to the moiré channel become highly resistive, perhaps reflecting a low density of carriers in the back gate. At high magnetic field, as in Fig. S3c, carrier density in the back gate might be substantially and nonmonotonically modulated with back gate voltage. One way this could affect electronic properties of the moiré channel is through changes in screening, given that the lower hBN layer is only about 13 nm thick, comparable with the moiré wavelength. Regardless of the reason for the back-gate-specific effect, the choice of fixing back gate voltage and sweeping top gate for the figure in the main text may actually be superior to holding the displacement field fixed, since to do that would require us to vary the back gate at the same time. \section{Additional phenomenology near charge neutrality} Fig. S4 shows the longitudinal magnetotransport at low field and low density for contact pair 7 - 8. The other two contact pairs behave in a similar manner. Some of the Landau level gaps disappear and then reappear as the field is increased. This same behavior can be seen in the model, in Fig. 4g and h from the main text, which is reproduced in Fig. S4b: there we can see the Landau levels from the two butterflies intersect at low field, leading to a gradual disappearance and then reappearance of the gap. Unusual horizontal lines (constant field) appear in between many of the Landau levels. These lines appear to take steps upon crossing Landau levels. They are seen to some extent in all three contact pairs, and are also seen near full filling and emptying ($n/n_s = \pm 4$). Within a simple Hofstadter model with only one fermion species but additionally with a next-nearest-neighbor hopping term, these horizontal lines can be qualitatively reproduced (Fig. S4c and d): B-field-periodic modulation in the width of the Landau levels causes horizontal lines of reduced resistivity corresponding to fields where the Landau level is narrower and thus has increased DOS. The parameter tuning that gives such modulation is independent of that needed for the phenomenology highlighted in the main text (one can get either, both, or neither). In a more physically realistic model of the moiré this phenomenon may be more firmly linked to the rest. \section{Superconductivity at 1.33°} Contact pairs 13 - 14 ($\theta$ = 1.33°) and, to a lesser extent, 3 - 4 ($\theta$ = 1.35°) display evidence of superconductivity near half filling of holes. We show measurements from 13 - 14 in Fig. S5.
To our knowledge, superconductivity has not been previously reported for a sample so far above the magic angle, and in our measurements the superconductivity is significantly weaker than that in samples near the magic angle, as demonstrated by its low critical temperature of ~150 mK, low critical field of ~3 mT, and low critical current of ~12 nA. \section{Low-field Hall effect} Fig. S6 shows the Hall density for contact pair 13 - 3 ($\theta = 1.33°$). We do not observe a resetting of the Hall density at integer filling factors, as is seen in samples near the magic angle. Instead, roughly in the center of each magnetoresistance region, the Hall density diverges and then changes sign. All contact pairs in the device display similar behavior, as does device D34, discussed below. In these magnetoresistance regions, the Hall slope becomes nonlinear at higher fields. The Hall density shown here is based only on the low-field slope. \section{Temperature dependence of the magnetoresistance} Fig. S7 shows the behavior of the magnetoresistance as a function of temperature, qualitatively similar to that seen in WTe$_2$ \cite{aliLargeNonsaturatingMagnetoresistance2014,fatemiMagnetoresistanceQuantumOscillations2017}, Cd$_3$As$_2$ \cite{liangUltrahighMobilityGiant2015}, and other compensated semimetals. In these materials, it is understood that the Hall voltage from one carrier type cancels that from the other type, leading to circular charge carrier trajectories and thus reduced carrier diffusion and positive magnetoresistance. If the magnetoresistance in our device were the result of a compensated semimetal, we would not expect the magnetoresistance to be so consistent over such a large range of gate-tuned total density. Also, we would expect to see sets of Landau levels originating at the edges of each isolated pocket in the Fermi surface, whereas we only see them originating at charge neutrality and moiré band edges. The split Landau levels have the same periodicity in $1/B$ as each other (Fig. S8), so they correspond to the same density offset from their respective band edge. This has motivated us to seek an alternative explanation. \section{Hofstadter calculation details} We start with the Hamiltonian from Eq. 2 of the main text: \begin{equation} H = \sum_{\alpha\in\{A,B\}}\sum_{\langle i,j \rangle}a_{ij}c_{i\alpha}^\dagger c_{j\alpha} + V\sum_i\left(c_{iA}^\dagger c_{iA} - c_{iB}^\dagger c_{iB}\right)+ \mathrm{h.c.} \end{equation} where $i$ and $j$ are lattice sites of a square lattice with unit lattice constant, and $A$ and $B$ are the two fermion species. In the Landau gauge, the vector potential $\mathbf{A}=(0,Bx,0)$, and the hopping terms in $y$ pick up a Peierls phase of $eBx/\hbar=2\pi \phi x$ for $\phi=eB/(2\pi\hbar)$. Considering only rational values $\phi=p/q$ for coprime $p$ and $q$, the Peierls phase repeats after $q$ hops in the $x$ direction, and so we define the magnetic Brillouin zone (MBZ, not to be confused with mini or moiré BZ) as $0<k_x<2\pi/q$ and $0<k_y<2\pi$.
The Peierls substitution leads to $c_n^\dagger c_m \rightarrow e^{2\pi i\phi m}c_m^\dagger c_m$, where site $n$ is directly above site $m$ in the $y$ direction, and removes all hopping terms in the $y$ direction from the Hamiltonian, so eigenstates should be plane waves in $y$, which leads to the Hamiltonian \begin{multline*} H = \sum_{\alpha\in\{A,B\}}\sum_{m=1}^q\left[ a_y\cos (2\pi \phi m-k_y) c_{m\alpha}^\dagger c_{m\alpha} + (a_xe^{-ik_x}c^\dagger_{m+1,\alpha}c_{m,\alpha} + \mathrm{h.c.})\right]\\ + V\sum_{m=1}^q\left(c_{mA}^\dagger c_{mA} - c_{mB}^\dagger c_{mB}\right)+ \mathrm{h.c.} \end{multline*} As this is a finite 1D Hamiltonian, it is now straightforward to numerically compute its eigenvalues. The most convenient way to compute the spectrum over a large range of magnetic field values is to set $q$ to some large prime number (typically thousands, depending on the available computation power) and then solve the above equation for $p$ from 1 to $q - 1$. In principle, we should vary $k_x$ and $k_y$ throughout the MBZ to extract the full energy spectrum. For such large values of $q$, however, the width of each subband becomes so small that the spectra are effectively constant throughout the MBZ, so we only solve for $k_x=k_y=0$. As the gaps in the spectrum appear as extremely fine, bright lines in inverse density of states, they are susceptible to aliasing artifacts when the images are downsampled. To suppress these artifacts, we apply a fine Gaussian filter ($\sigma \leq 1/q$) to these plots. This does not affect the phenomenology. We show the behavior of the split LLs for a range of hopping parameters in Fig. S9. We have also made calculations on a hexagonal lattice with anisotropy and two fermion species, and have verified that the behavior is qualitatively similar to that on the square lattice. Without fine-tuning we can replicate the disappearance of odd LLs, the split even LLs, and the split-intersection behavior. We also checked that the same behaviors replicate for $V \propto B$ (Fig. S10). Though these are not microscopically faithful models for TBG, many features of the Hofstadter problem are set by the topology of the bands alone and thus should be only weakly model-dependent.
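For concreteness, the following is a minimal numerical sketch of this procedure. It reflects one plausible reading of the conventions above (with the factors of two arising from the Hermitian-conjugate terms absorbed into $a_y$ and $V$), not a definitive normalization.
\begin{verbatim}
# Sketch of the spectrum computation described above.  For each flux
# phi = p/q, build the q x q Harper matrix at k_x = k_y = 0; the two
# fermion species contribute identical spectra shifted by +V and -V.

import numpy as np

def spectrum(p, q, ax=1.0, ay=2.0, V=0.2):
    m = np.arange(q)
    H = np.diag(2 * ay * np.cos(2 * np.pi * (p / q) * m))  # k_y = 0
    H += np.diag(np.full(q - 1, ax), 1) + np.diag(np.full(q - 1, ax), -1)
    H[0, q - 1] += ax          # periodic wrap in x at k_x = 0
    H[q - 1, 0] += ax
    evals = np.linalg.eigvalsh(H)
    return np.sort(np.concatenate([evals + V, evals - V]))

def inverse_dos(p, q, **kwargs):
    # Gaps between successive sorted eigenvalues, plotted against
    # filling, give the inverse-density-of-states maps of Fig. 4.
    return np.diff(spectrum(p, q, **kwargs))
\end{verbatim}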
Unlike the device from the main text, both devices have dual graphite gates, and neither device has purposefully aligned hBN. They were measured in a four-probe configuration at 300 mK with a 10 nA AC current bias at 17.7 Hz. For more details, see ref. 31 and its supplementary text. Both devices show the same generic Landau level progression as the device from the main text, with dominant Landau levels $\nu = \pm 4, \pm 8, \pm 12, \dots$. Both devices have regions of positive magnetoresistance, reaching ratios of $\sim 30$ in D34 and $\sim 10$ in D25 at 3 T. The magnetoresistance is significantly weaker than that of the device in the main text, and it is also not cleanly quadratic at low field. While D25 does have an intricate and beautiful fan diagram, it does not display split Landau levels. D34 shows Landau level splitting comparable to that in the device from the main text for filling fractions $\nu = \pm 12, \pm 16, \pm 20, \dots$. Although much of the fine detail of the split Landau levels appears washed out, the intersecting behavior described in the main text is also visible for some pairs of overlapping levels. We summarize the magnetotransport phenomenology for the three devices and their separate contact pairs in Table S1. \renewcommand{\tablename}{\textbf{Table}} \renewcommand\thetable{\textbf{S\arabic{table}}} \begin{table*}[t] \centering \begin{tabular}{c c r r r r c} Device & Contact pair & T (mK) & $\theta$ (°) & $B_{sat}$ (T) & MR$_{max}$ & Split LLs? \\ \hline Main text & 16 - 17 & 26 & 1.38 & 8 & 340 & Very clear\\ Main text & 7 - 8 & 26 & 1.37 & 8 & 280 & Very clear \\ Main text & 4 - 5 & 26 & 1.35 & 5 & 210 & Clear \\ D34 & & 300 & 1.59 & 3 & 30 & Clear \\ D25 & Holes & 300 & 1.52 & 3 & 10 & Absent \\ D25 & Electrons & 300 & 1.52 & 3 & 10 & Absent \\ \end{tabular} \caption{\textbf{Summary of the separate contact pairs and devices presented in the text.} The columns are the device and contact pair, the temperature at which the measurements were taken, the twist angle, the saturation field, the largest magnetoresistance ratio, and whether or not we see clear split Landau levels. Though measurements were made at different temperatures for different devices, all of those temperatures are well below 1 K, and the MR is not strongly temperature-dependent below 3 K (cf. Fig. S7). All rows except ``D25 Electrons'' refer to the magnetoresistance region on the hole side. For all rows, the resistivity at zero field in the magnetoresistance regime is several tens of ohms. $B_{sat}$ is a very rough estimate of the lowest field at which the MR is within a few \% of its maximum, MR$_{max}$. These values are hard to quantify to better than $\sim 10\%$ accuracy because quantum oscillations tend to become comparable in amplitude to the magnetoresistance at roughly the field at which it appears to saturate.} \end{table*} \setcounter{figure}{0} \renewcommand\thefigure{\textbf{S\arabic{figure}}} \newpage \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{figS1.pdf} \captionof{figure}{\textbf{Twist angle variation throughout the device}. (\textbf{A}) Device layout prior to lithography with the separate layers outlined. From top to bottom we have the top hBN (yellow, 20 nm thick), top graphene (white), bottom graphene (black), bottom hBN (cyan, 13 nm thick), and graphite back gate (blue). The plus-shaped alignment marks in the corners are 100 \textmu m apart. (\textbf{B}) Finished device layout after lithography.
The labels are for the current source contact (S), current drain contact (D), back gate (BG), top gate (TG), and the Hall voltage probes on the left (1-9) and right (11-19). (\textbf{C}) Twist angle variation along the device as measured by the longitudinal magnetotransport at high density and field. The first set of contact pairs on either side is at 1.9°. (\textbf{D}) Magnetoresistance ratio at 2 T at fixed density $2.3\times 10^{12}$ cm$^{-2}$ at 1.7 K. The magnetoresistance ratios for the contact pairs at 1.9° (not shown) are 0.8 and 0.3 on the left and right, respectively.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figS2.pdf} \captionof{figure}{\textbf{Effect of electric displacement field}. (\textbf{A}) Gate maps at zero field, (\textbf{B}) at 3 T, and (\textbf{C}) at 8 T. The black dashed line is the cut used for the figures in the main text, while the white ellipse indicates the loss of signal when the back gate is fixed at 0 V. The arrow indicates one pair of split LLs, which appear as pairs of vertical lines within the magnetoresistance regions. That they are vertical indicates that they do not vary with displacement field.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figS3.pdf} \captionof{figure}{\textbf{Similar behavior in other contact pairs}. (\textbf{A}) Fan diagram for contact pair 7 - 8 and (\textbf{B}) 4 - 5. (\textbf{C}) Gate map at 8 T for contact pair 7 - 8 and (\textbf{D}) 4 - 5.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.7\columnwidth]{figS4.pdf} \captionof{figure}{\textbf{Additional phenomenology captured by the model}. (\textbf{A}) Magnetotransport for contact pair 7 - 8 at low field and low density. The dashed circle indicates the feature reproduced in simulation in panel B, and the arrow indicates one of the features reproduced in C and D. (\textbf{B}) Inverse density of states near a band edge at low field for $a_x=1$, $a_y=2$, $V=0.2$, the same parameters as in Fig. 4h. This is computed at $q = 1999$. (\textbf{C}) Energy spectrum for $a_x=2$, $a_y=1$, $a_{2x}=0.6$, where $a_{2x}$ represents hopping by two sites in $x$. (\textbf{D}) Inverse density of states for the spectrum in C. In this panel, the color scale is logarithmic.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{figS5.pdf} \captionof{figure}{\textbf{Superconductivity at 1.33°}. (\textbf{A}) Resistivity of contact pair 13 - 14 as a function of density at the indicated temperatures. (\textbf{B}) Temperature dependence of the superconductivity at fixed density $n=-3.2\times 10^{12}$ cm$^{-2}$. (\textbf{C}) Differential resistance as a function of DC bias current and field, demonstrating weak Fraunhofer-like behavior at $n=-3.2\times 10^{12}$ cm$^{-2}$. We have removed a small flux jump around $B = -7$ mT in preparing this figure.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.6\columnwidth]{figS6.pdf} \captionof{figure}{\textbf{Low-field Hall effect}. The Hall density for contact pair 13 - 3, as measured between $-0.5$ and $0.5$ T. The regions of large magnetoresistance are highlighted in red.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.7\columnwidth]{figS7.pdf} \captionof{figure}{\textbf{Temperature dependence of the large magnetoresistance}. (\textbf{A}) Temperature dependence of the longitudinal resistivity (contact pair 16 - 17, symmetrized) and (\textbf{B}) Hall resistance (contact pair 6 - 16, antisymmetrized) at $n/n_s=2.5$.
(\textbf{C}) Temperature dependence of the longitudinal resistivity at fixed fields at $n/n_s=-2.8$.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figS8.pdf} \captionof{figure}{\textbf{Split Landau levels as a function of $\mathbf{1/B}$}. Longitudinal resistivity at fixed density within the magnetoresistance region as a function of $1/B$. The vertical lines indicate the expected Landau level filling factors originating from charge neutrality. Note that the split LLs correspond to roughly $\nu = 11.4, 12.6, 15.4, 16.6, \dots$ ($\nu = 8$ has extra features originating from $n/n_s = -4$). That the split LLs have the same period in $1/B$ indicates that they correspond to the same density offset from charge neutrality and are not a product of multiple Fermi surfaces or band reorganization.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figS9.pdf} \captionof{figure}{\textbf{Model behavior as a function of $a_y$ and $V$}. Each plot shows the same region of ($n$, $B$) as Fig. 3c of the main text, computed at $q = 1999$ and $a_x=1$. The experimentally observed qualitative phenomenology is reproduced by the model not just at fine-tuned values of the parameters $a_y$ and $V$, but over the ranges $1.5 < a_y < 3$ and $0.1 < V < 0.3$. The ranges are comparable for different ($n$, $B$). For parameter values that show clear split LLs, increasing $V$ increases the distance between the split LLs (which goes to zero when $V = 0$), and increasing $a_y$ decreases their width. Increasing $a_y$ beyond the stated range suppresses the discontinuities at LL crossings, as a direct consequence of the decreasing width of the LLs: compare ($a_y$, $V$) = (2, 0.3) to (3, 0.3).} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figS10.pdf} \captionof{figure}{\textbf{Comparison of Zeeman-like splitting to fixed splitting.} Energy spectra ($q=1499$) and inverse density of states for $a_x=1$, $a_y=2$, and (\textbf{A-B}) $V=0.2$, (\textbf{C-D}) $V=\Phi/\Phi_0$. The two models produce qualitatively very similar phenomenology. There is one notable but subtle difference: at low fields near charge neutrality and full filling, the constant-$V$ Landau levels from the two butterflies cross each other, whereas in the Zeeman-like model the two butterflies are not split at low field and thus cannot cross there. In the constant-$V$ model, these crossings lead to the disappearance and reappearance of gaps along the density/field trajectory of a given Landau level, as in Fig. S4b. This phenomenon appears in the experimental data of Fig. S13b at ($n=0.9\times 10^{12}$ cm$^{-2}$, $B=3$ T) and, less prominently, in measurements on at least three contact pairs of the device from the main text that show LL splitting. Though this may favor the constant-$V$ model as a description of the experiment, more measurements would be needed to definitively support that assignment.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figS11.pdf} \captionof{figure}{\textbf{Split LL detail in the model.} (\textbf{A-D}) Energy spectra for the indicated parameters at $q = 1999$, zoomed in to the intersection of the LLs $s, t = 4, -6$ (gap indicated with the dashed line) and $-2, 4$. (\textbf{E-H}) Associated inverse DOS for the respective spectra. Panel \textbf{H} shows the same parameters as Fig.
3c.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figS12.pdf} \captionof{figure}{\textbf{Replication of split LLs in a second device.} (\textbf{A}) Longitudinal resistance vs. density and perpendicular magnetic field for device D34 ($\theta = 1.59°$). The window with increased contrast (color bar does not apply) highlights the intersection of $\nu = 12$ from charge neutrality and $\nu = -12$ from full filling. There is a faint vertical line corresponding to the average of the slopes of the two Landau levels. (\textbf{B}) Schematic fan diagram corresponding to (A), with the same legend as Fig. 2 in the main text. (\textbf{C}) Line cuts from panel A at the indicated fields, showing regions of magnetoresistance with location and shape similar to those in Fig. 1b.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figS13.pdf} \captionof{figure}{\textbf{Lack of split LLs in a third device similarly far from the magic angle.} Longitudinal resistance vs. density and perpendicular magnetic field for device D25 ($\theta = 1.52°$) for holes (\textbf{A}) and electrons (\textbf{B}). The measurements for electrons and holes are taken with different contact pairs and at different Si gate voltages. (\textbf{C-D}) Schematic fan diagrams corresponding to (A) and (B), respectively, with the same legend as Fig. 2 in the main text.} \end{figure} \end{document}
{ "timestamp": "2021-06-15T02:07:17", "yymm": "2105", "arxiv_id": "2105.01870", "language": "en", "url": "https://arxiv.org/abs/2105.01870" }
\section*{Introduction} This short note describes an interesting new phenomenon concerning the length spectrum compactification of $\mathrm{Sp}(4,\mathbb{R})$-maximal representations of the fundamental group of a closed surface $S$. These were first introduced by Burger-Iozzi-Wienhard (\cite{BIW_maximal}) as generalizations of discrete and faithful representations into $\mathbb{P}\mathrm{SL}(2,\mathbb{R})$ and have since been studied by many authors for their geometric and dynamical properties (\cite{BGPG_Hermitian}, \cite{BP_maximal}, \cite{maximal_Fenchel_Nielsen}). In particular, Gothen (\cite{gothen2001components}) showed that there are precisely $3\cdot 2^{2g}+2g-4$ connected components of maximal representations, of which only $2^{2g}+2g-3$ are smooth. Of these, $2^{2g}$ are isomorphic copies of the $\mathrm{Sp}(4,\mathbb{R})$-Hitchin component, which were analyzed in a previous work (\cite{OT_Sp4}). The remaining $2g-3$ are referred to as Gothen components in the literature and are the main object of study of this paper. \\ Collier (\cite{collier2016maximal}), extending previous work of Labourie (\cite{Labourie_cyclic}), showed that for every maximal representation in $\mathrm{Sp}(4,\mathbb{R})$ there is a unique equivariant conformal harmonic map from the universal cover of $S$ into the symmetric space $\mathrm{Sp}(4,\mathbb{R})/\mathrm{U}(2)$. This allows for a parameterization of the connected components of maximal representations as bundles over the Teichm\"uller space of $S$ (\cite{AC_Sp4}). Exploiting the exceptional isomorphism $\mathbb{P}\mathrm{Sp}(4,\mathbb{R})\cong \mathrm{SO}_{0}(2,3)$, Collier-Tholozan-Toulisse (\cite{BTT}) interpreted the unique equivariant minimal immersion into the symmetric space as the Gauss map of the unique equivariant maximal surface (i.e., with vanishing mean curvature) in the pseudo-hyperbolic space $\mathbb{H}^{2,2}$. Moreover, in this language the different connected components of maximal representations correspond to the different isomorphism classes that the orthogonal bundle of the maximal surface can have. \\ In a previous paper (\cite{OT_Sp4}), we used this pseudo-Riemannian point of view to define the length spectrum of an $\mathrm{Sp}(4,\mathbb{R})$-Hitchin representation and describe the boundary of this component as projectivized mixed structures, that is, hybrid geometric objects on the surface that are measured laminations on some collection of incompressible subsurfaces and flat metrics of finite area induced by meromorphic quartic differentials on the complement. In this note, we show that all Gothen components share the same boundary; precisely: \vspace{0.1cm} \begin{bigthm}\label{thmA} For all $g-1< k < 3g-3$, the boundary of the Gothen component $\mathpzc{G}(k)$ can be identified with the space of projectivized mixed structures $\mathbb{P}\mathrm{Mix}_{4}(S)$. \end{bigthm} \vspace{0.1cm} This behavior is in stark contrast to a result of Wolff (\cite{wolff2011connected}), where the projective measured laminations form a closed, nowhere dense set in the boundary of the non-Teichm{\"u}ller components of the $\mathbb{P}\mathrm{SL}(2, \mathbb{R})$-character variety. Moreover, Theorem \ref{thmA} is specific to $\mathrm{Sp}(4, \mathbb{R})$, as these exceptional Gothen components no longer appear in the $\mathrm{Sp}(2n, \mathbb{R})$-character variety once $n$ is at least 3 (\cite{garcia2013higgs}).
Garc{\'i}a-Prada-Gothen-Mundet showed that there are only $3\cdot 2^{2g}$ maximal components in the $\mathrm{Sp}(2n, \mathbb{R})$-character variety once $n$ exceeds 2.\\ The main idea behind the proof of Theorem \ref{thmA} lies in a comparison between the induced metric on the maximal surface associated to a maximal representation and the flat metric $|q|^{\frac{1}{2}}$ with cone singularities, where $q$ is the holomorphic quartic differential in the Hitchin base. We show that for any diverging sequence in $\mathpzc{G}(k)$, the length spectra of these two metrics (after a suitable renormalization) converge to those of mixed structures which enjoy the same decomposition into subsurfaces and coincide on their non-laminar parts. \\ As an application of our estimates, we also describe the length spectrum of the induced metrics on the unique minimal surfaces in $\mathrm{Sp}(4,\mathbb{R})/\mathrm{U}(2)$. \begin{bigthm}\label{thmB} Let $\rho_{n}$ be a sequence of representations in a Gothen component that leaves every compact set. Let $g_{n}$ be the corresponding sequence of induced metrics on the unique equivariant minimal surfaces in $\mathrm{Sp}(4,\mathbb{R})/\mathrm{U}(2)$. Then the marked length spectrum of $g_{n}$ converges projectively to that of a mixed structure $\eta \in\mathbb{P}\mathrm{Mix}_{4}(S)$. Moreover, all such mixed structures can be attained in the limit. \end{bigthm} Similar results have been proven for the Blaschke metrics on affine spheres (\cite{OT}) and for equivariant minimal Lagrangian surfaces in $\mathbb{H}^{2}\times \mathbb{H}^2$ (\cite{Charles_dPSL}); this work thus concludes the study of the length spectrum compactification of higher Teichm\"uller components for the classical Lie groups of rank $2$. \subsection*{Acknowledgement} This paper was written while the first author was visiting Rice University during the academic year 2020-21. He thanks the Department of Mathematics for their kindness and hospitality. The second author acknowledges support from the National Science Foundation through grant DMS-2005501. \section{Background material} \subsection{Maximal representations} Let $G$ be a Lie group whose associated symmetric space is Hermitian. Then any representation $\rho: \pi_{1}(S) \to G$ determines a smooth $\pi_{1}(S)$-equivariant map $\widetilde{f}_{\rho}: \widetilde{S} \to G/K$, given by taking any section of the flat bundle $E_{\rho}=\widetilde{S} \times_{\rho} G/K \rightarrow S$. The $G$-invariant two-form $\omega$ on $G/K$ pulls back via $\widetilde{f}_{\rho}$ to a well-defined two-form on $S$. The \textit{Toledo invariant} of $\rho$ is defined as $T_{\rho} := \int_{S} \widetilde{f}_{\rho}^{*} \omega$. As the fiber of the bundle $E_{\rho}$ is contractible, any other choice of section would yield a map differing by a $\rho$-equivariant homotopy, so the integral is well-defined. In particular, one has the following Milnor-Wood-type inequality: $|T_{\rho}| \leq (2g-2) \, \text{rank}(G)$ (see \cite{burger2005maximal}). As the Toledo invariant is constant on connected components of the representation variety, the representations attaining the maximal Toledo number are called \textit{maximal representations}. These have been extensively studied: see for instance \cite{BIW_maximal}, \cite{burger2005maximal}. For example, in the setting of $G=\text{PSL}(2,\mathbb{R})$, Goldman (\cite{goldmanthesis}) showed that each component is characterized by its Toledo number, with the two maximal components each being a copy of Teichm{\"u}ller space.
Hence maximal components can be viewed as higher rank analogues of the classical Teichm{\"u}ller space. When $G= \mathrm{Sp}(4,\mathbb{R})$, Gothen (\cite{gothen2001components}) showed there are precisely $3 \cdot 2^{2g} + 2g-4$ maximal components, of which only $2^{2g} + 2g-3$ are smooth (\cite{bradlow2012deformations}). Of these, $2^{2g}$ are isomorphic copies of the $\mathrm{Sp}(4, \mathbb{R})$-Hitchin component, so the remaining $2g-3$ are often called Gothen components in the literature. Like their Hitchin counterparts, each representation $\rho$ in the Gothen components has a unique Riemann surface structure associated to it (see \cite{collier2016maximal}), so that the $\rho$-equivariant harmonic map from the universal cover of that Riemann surface into the symmetric space is conformal. Using the preferred Riemann surface, under the non-abelian Hodge correspondence, Gothen representations are associated to Higgs bundles of the form $$ \mathcal{E} = N \oplus N K^{-1} \oplus N^{-1}K \oplus N^{-1}, \qquad \phi= \begin{pmatrix} 0 & 0 & 0 & \nu\\ 1 & 0 & 0 & 0 \\ 0 & \mu & 0 & 0\\ 0 & 0 & 1 & 0 \\ \end{pmatrix}, $$ where $N$ is a line bundle for which $g-1< \deg N \leq 3g-3$, $0 \neq \mu \in H^{0}(X, N^{-2}K^{3})$, and $\nu \in H^{0}(X, N^{2}K)$. In particular, $\mu \nu \in H^{0}(X, K^{4})$. When $N = K^{3/2}$ and $\mu =1$, one recovers the cyclic Higgs bundles corresponding to Hitchin $\mathrm{Sp}(4, \mathbb{R})$-representations (see \cite{Labourie_cyclic}). Using the preferred Riemann surface, the Hitchin equations for the Higgs bundles corresponding to Gothen representations are given by \begin{equation}\label{eq:Hitchin} \begin{cases} \Delta \psi_{1} =e^{\psi_{1} - \psi_{2}}-|\nu|^{2}e^{-2\psi_{1}} \\ \Delta \psi_{2} =|\mu|^{2}e^{2\psi_{2}}- e^{\psi_{1} - \psi_{2}} \end{cases} \ , \end{equation} where $e^{\psi_{1}}$ and $e^{\psi_{2}}$ are the local expressions of the Hermitian harmonic metrics on the line bundles $N^{-1}$ and $N^{-1}K$, respectively. We remark that the family of Higgs bundles shown above does not always give rise to distinct representations. In fact, when $\deg N < 3g-3$ there is a natural $\mathbb{C}^{*}$ action that sends the pair $(\mu, \nu)$ to $(\lambda \mu, \lambda^{-1}\nu)$. Then, by a recent result of Alessandrini and Collier (\cite{AC_Sp4}), the Gothen component $\mathpzc{G}(k)$ is homeomorphic to a bundle over the Teichm\"uller space of $S$ whose fiber is \[ \{ (\mu, \nu) \in H^{0}(X,N^{-2}K^{3}) \times H^{0}(X,N^{2}K)\} / \mathbb{C}^{*} \] where $N$ is a line bundle of degree $k$. Recall that the Hitchin fibration associates to any Higgs bundle the coefficients of the characteristic polynomial of the Higgs field, which are holomorphic differentials on $X$. In this case, the only non-zero differential is the holomorphic quartic differential $q=\mu \nu \in H^{0}(X, K^{4})$. Higgs bundles with the same quartic differential are said to be in the same Hitchin fiber. Among these, the Higgs bundle \[ \mathcal{E} = K^{\frac{3}{2}} \oplus K^{\frac{1}{2}} \oplus K^{-\frac{1}{2}} \oplus K^{-\frac{3}{2}}, \qquad \phi= \begin{pmatrix} 0 & 0 & 0 & q\\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ \end{pmatrix} \] is the unique one whose associated representation belongs to the Hitchin component.
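To give a feel for the system \eqref{eq:Hitchin}, the following sketch (ours, purely illustrative) relaxes its local form by gradient flow on a flat square torus with constant $|\mu|$ and $|\nu|$; the grid size, time step, and coefficient values are arbitrary, and this toy setting ignores the zeros of $\mu$ and $\nu$ as well as the curved background of a genus-$g$ surface.

\begin{verbatim}
import numpy as np

N, dt, mu2, nu2 = 64, 0.05, 1.0, 0.5  # grid, step, |mu|^2, |nu|^2
psi1 = np.zeros((N, N))
psi2 = np.zeros((N, N))

def lap(f):  # 5-point periodic Laplacian, unit spacing
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

for _ in range(20000):  # flow psi_dot = Laplacian(psi) - RHS
    r1 = lap(psi1) - (np.exp(psi1 - psi2) - nu2 * np.exp(-2 * psi1))
    r2 = lap(psi2) - (mu2 * np.exp(2 * psi2) - np.exp(psi1 - psi2))
    psi1 += dt * r1
    psi2 += dt * r2

h = np.exp(psi1 - psi2)  # conformal factor of the metric h below
\end{verbatim}

At convergence the residuals $r_1$, $r_2$ vanish, so $(\psi_{1},\psi_{2})$ solves \eqref{eq:Hitchin} pointwise; with constant data the stationary solution is the constant one.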
\section{Boundary of the Gothen components}\label{sec:background} \subsection{Length spectrum of a representation in the Gothen components} Given a Higgs bundle parametrized by $(\mu, \nu)$ in a Gothen component $\mathpzc{G}(k)$ with $g-1<k<3g-3$, we consider the Riemannian metric $h$ on $S$ written locally as $e^{\psi_{1}-\psi_{2}}|dz|^{2}$, where the pair $(\psi_{1}, \psi_{2})$ is the solution to the Hitchin equations \eqref{eq:Hitchin}. It is not hard to show (see for instance \cite{TW}) that, up to a multiplicative constant, $h$ is the induced metric on the associated maximal surface in $\mathbb{H}^{2,2}$ found in \cite{BTT}. We denote by $\mathcal{M}(k)$ the space of such metrics up to isotopy. In order for this assignment to be well-defined, we need to make sure that $h$ does not depend on the particular pair $(\mu, \nu)$ chosen, as pairs in the same $\mathbb{C}^{*}$-orbit give rise to the same representation. \begin{lemma}\label{lm:well_def} Assume that $(\mu, \nu)$ and $(\widetilde{\mu}, \widetilde{\nu})$ lie in the same $\mathbb{C}^{*}$-orbit. Then $h=\widetilde{h}$. \end{lemma} \begin{proof} It is sufficient to show that if $\widetilde{\mu}=\lambda\mu$ and $\widetilde{\nu}=\lambda^{-1}\nu$ for some $\lambda \in \mathbb{C}^{*}$, then $\widetilde{\psi}_{1}=\psi_{1}-\log(|\lambda|)$ and $\widetilde{\psi}_{2}=\psi_{2}-\log(|\lambda|)$ solve the system \begin{equation*} \begin{cases} \Delta \widetilde{\psi}_{1} =e^{\widetilde{\psi}_{1} - \widetilde{\psi}_{2}}-|\widetilde{\nu}|^{2} e^{-2\widetilde{\psi}_{1}} \\ \Delta \widetilde{\psi}_{2} =|\widetilde{\mu}|^{2}e^{2\widetilde{\psi}_{2}}- e^{\widetilde{\psi}_{1} - \widetilde{\psi}_{2}} \end{cases} \ . \end{equation*} This follows from a direct computation: $\Delta\widetilde{\psi}_{i}=\Delta\psi_{i}$, while $e^{\widetilde{\psi}_{1}-\widetilde{\psi}_{2}}=e^{\psi_{1}-\psi_{2}}$, $|\widetilde{\nu}|^{2}e^{-2\widetilde{\psi}_{1}}=|\lambda|^{-2}|\nu|^{2}\,|\lambda|^{2}e^{-2\psi_{1}}=|\nu|^{2}e^{-2\psi_{1}}$, and $|\widetilde{\mu}|^{2}e^{2\widetilde{\psi}_{2}}=|\lambda|^{2}|\mu|^{2}\,|\lambda|^{-2}e^{2\psi_{2}}=|\mu|^{2}e^{2\psi_{2}}$, so both equations are preserved; in particular, $\widetilde{h}=e^{\widetilde{\psi}_{1}-\widetilde{\psi}_{2}}|dz|^{2}=h$. \end{proof} The following result shows that $h$ is negatively curved, which is a fundamental step in our construction. The proof is an easy adaptation of the arguments in \cite{OT}. \begin{prop}\label{prop:neg_curv} Let $\psi_{1}$ and $\psi_{2}$ be the solutions to Equations (\ref{eq:Hitchin}) with given data $(\mu, \nu)$. Then the metric $e^{\psi_{1}-\psi_{2}}|dz|^{2}$ is strictly negatively curved. \end{prop} \begin{proof} The curvature of $h$ is \[ \kappa(h)=-2\Delta_{h}\log(h)=2\left(\frac{e^{-2\psi_{1}}|\nu|^{2}}{e^{\psi_{1}-\psi_{2}}}+\frac{e^{2\psi_{2}}|\mu|^{2}}{e^{\psi_{1}-\psi_{2}}}-2\right), \] hence it is strictly negative if and only if $f_{1}+f_{2}<2$, where \[ f_{1}=\frac{e^{-2\psi_{1}}|\nu|^{2}}{e^{\psi_{1}-\psi_{2}}} \ \ \ \ \text{and} \ \ \ \ f_{2}=\frac{e^{2\psi_{2}}|\mu|^{2}}{e^{\psi_{1}-\psi_{2}}} \ . \] Now, outside the zeros of $\mu$ and $\nu$ we have \begin{align*} \Delta_{h}\log(f_{1})&=-3\Delta_{h}\psi_{1}+\Delta_{h}\psi_{2}=3\frac{e^{-2\psi_{1}}|\nu|^{2}}{e^{\psi_{1}-\psi_{2}}}+\frac{e^{2\psi_{2}}|\mu|^{2}}{e^{\psi_{1}-\psi_{2}}}-4=-4+3f_{1}+f_{2} \\ \Delta_{h}\log(f_{2})&=3\Delta_{h}\psi_{2}-\Delta_{h}\psi_{1}=3\frac{e^{2\psi_{2}}|\mu|^{2}}{e^{\psi_{1}-\psi_{2}}}+\frac{e^{-2\psi_{1}}|\nu|^{2}}{e^{\psi_{1}-\psi_{2}}}-4=-4+3f_{2}+f_{1} \ . \end{align*} Then \begin{align*} \Delta_{h}\log(f_{1}+f_{2})&\geq \frac{f_{1}\Delta_{h}\log(f_{1})+f_{2}\Delta_{h}\log(f_{2})}{f_{1}+f_{2}}\\ &=\frac{-4f_{1}-4f_{2}+3f_{1}^{2}+2f_{1}f_{2}+3f_{2}^{2}}{f_{1}+f_{2}}\geq -4+2(f_{1}+f_{2}) \ , \end{align*} where the last inequality is equivalent to $f_{1}^{2}+f_{2}^{2}\geq 2f_{1}f_{2}$. We deduce that, if $f_{1}+f_{2}$ takes its maximum outside the zeros of $\mu$ and $\nu$, then $f_{1}+f_{2}<2$ and $\kappa(h)<0$.
On the other hand, $f_{1}+f_{2}$ cannot attain its maximum at a common zero of $\mu$ and $\nu$: there $f_{1}+f_{2}$ vanishes, whereas $f_{1}+f_{2}\geq 0$ is not identically zero. Then, if $f_{1}+f_{2}$ takes its maximum at a zero $p$ of $\nu$ but not of $\mu$, we have $f_{1}(p)=0$ and $p$ is a maximum of $f_{2}$. But then, \[ 0\geq \Delta_{h}\log(f_{2})(p)=-4+3f_{2}(p)+f_{1}(p)=-4+3f_{2}(p) \] which implies that $f_{2}(p)\leq \frac{4}{3}$ and $f_{1}+f_{2}\leq f_{1}(p)+f_{2}(p)\leq \frac{4}{3}$ everywhere on $S$. Hence $\kappa(h)<0$ in this case as well. Similarly, if $f_{1}+f_{2}$ takes its maximum at a zero $p$ of $\mu$ but not of $\nu$, then $f_{2}(p)=0$ and $p$ is a maximum of $f_{1}$. But then, \[ 0\geq \Delta_{h}\log(f_{1})(p)=-4+3f_{1}(p)+f_{2}(p)=-4+3f_{1}(p) \] which implies that $f_{1}(p)\leq \frac{4}{3}$ and $f_{1}+f_{2}\leq f_{1}(p)+f_{2}(p)\leq \frac{4}{3}$, so the curvature is negative everywhere on $S$. \end{proof} Negative curvature guarantees that for every $\gamma \in \pi_{1}(S)$ there is a unique geodesic representative for $h$ in its free homotopy class. We denote by $\ell_{h}(\gamma)$ its length and define the length spectrum of the Gothen representation $\rho \in \mathpzc{G}(k)$ corresponding to the Higgs bundle with parameters $(\mu, \nu)$ as the collection $\{\ell_{h}(\gamma)\}_{\gamma \in \pi_{1}(S)}$. \\ By a result of Otal (\cite{Otal}), the length spectrum of $h$ can be realized by a geodesic current $L_{h}$; we can thus embed each Gothen component $\mathpzc{G}(k)$ into the space of geodesic currents $\mathcal{C}(S)$ (see \cite{bonahonbouts}, \cite{Bonahon_currents}). In Section \ref{subsec:limits} we will prove that this inclusion is proper and describe its projective closure. \subsection{Comparison results} We collect here some results on how the metric $h=e^{\psi_{1}-\psi_{2}}|dz|^{2}$ compares with the quartic differential metric $|\mu\nu|^{\frac{1}{2}}$ and with the hyperbolic metric in the same conformal class. Most of the results here already appeared in \cite{QL_cyclic} in some form. \begin{lemma}\label{lm:lower_bound_flat} Let $h=e^{\psi_{1}-\psi_{2}}|dz|^{2}$ be the metric associated to the pair $(\mu, \nu)$. Let $q=\mu\nu$ be the holomorphic quartic differential in the Hitchin base. Then $|q|^{\frac{1}{2}}\leq h$. \end{lemma} \begin{proof} We consider the function $u=e^{4\psi_{2}-4\psi_{1}}|q|^{2}$, which is well-defined everywhere on $S$. Since $u\geq 0$ vanishes exactly at the zeros of $q$, it attains its maximum outside the zeros of $q$. Now, outside the zeros of $q$, the function $u$ satisfies \begin{align*} \Delta \log(u) &= 4\Delta \psi_{2}-4\Delta \psi_{1} \\ &= 4|\mu|^{2}e^{2\psi_{2}}-8e^{\psi_{1}-\psi_{2}}+4|\nu|^{2}e^{-2\psi_{1}} \\ &\geq 8|q|e^{\psi_{2}-\psi_{1}}-8e^{\psi_{1}-\psi_{2}} \\ &= 8e^{\psi_{1}-\psi_{2}}(e^{2\psi_{2}-2\psi_{1}}|q|-1)\\ &= 8e^{\psi_{1}-\psi_{2}}(\sqrt{u}-1) , \end{align*} where the third line uses the arithmetic-geometric mean inequality. Hence, at a maximum of $u$, we have \[ 0\geq \Delta\log(u) \geq 8e^{\psi_{1}-\psi_{2}}(\sqrt{u}-1) \ . \] We deduce that $u\leq 1$, which implies that \[ |q|^{\frac{1}{2}}\leq e^{\psi_{1}-\psi_{2}} |dz|^{2}=h \ . \] \end{proof} \begin{lemma}\label{lm:compare_Hitchin} Let $h=e^{\psi_{1}-\psi_{2}}|dz|^{2}$ and $\widetilde{h}=e^{\widetilde{\psi}_{1}-\widetilde{\psi}_{2}}|dz|^{2}$ be the metrics associated to the pairs $(\mu, \nu)$ and $(1,q)$ with $q=\mu\nu$, respectively. Then $h\leq \widetilde{h}$.
\end{lemma} \begin{proof} The function $u=\psi_{1}-\psi_{2}-\widetilde{\psi}_{1}+\widetilde{\psi}_{2}$ satisfies \begin{align*} \Delta u &= \Delta \psi_{1}-\Delta \psi_{2} -\Delta\widetilde{\psi}_{1}+ \Delta \widetilde{\psi}_{2} \\ &= 2e^{\psi_{1}-\psi_{2}}-|\nu|^{2}e^{-2\psi_{1}}-|\mu|^{2}e^{2\psi_{2}}-2e^{\widetilde{\psi}_{1}-\widetilde{\psi}_{2}}+|q|^{2}e^{-2\widetilde{\psi}_{1}}+e^{2\widetilde{\psi}_{2}} \\ &=2e^{\widetilde{\psi}_{1}-\widetilde{\psi}_{2}}(e^{u}-1)-(|\nu|e^{-\psi_{1}}-|\mu|e^{\psi_{2}})^{2}+(|q|e^{-\widetilde{\psi}_{1}}-e^{\widetilde{\psi}_{2}})^{2}+2|q|e^{-\widetilde{\psi}_{1}+\widetilde{\psi}_{2}}(1-e^{-u}) \\ &\geq 2e^{\widetilde{\psi}_{1}-\widetilde{\psi}_{2}}(e^{u}-1) + 2|q|e^{-\widetilde{\psi}_{1}+\widetilde{\psi}_{2}}(1-e^{-u}) , \end{align*} because, by the proof of \cite[Theorem 6.2, page 1257]{QL_cyclic}, the term $(|q|e^{-\widetilde{\psi}_{1}}-e^{\widetilde{\psi}_{2}})^{2}-(|\nu|e^{-\psi_{1}}-|\mu|e^{\psi_{2}})^{2}$ is positive. It follows that the constant function $0$ is a supersolution, so by the maximum principle $u\leq 0$ and $h=e^{\psi_{1}-\psi_{2}}|dz|^{2}=e^{u}e^{\widetilde{\psi}_{1}-\widetilde{\psi}_{2}}|dz|^{2}\leq \widetilde{h}$. \end{proof} \begin{rmk} The metric $\tilde{h}$ is, up to a multiple, the induced metric on the maximal surface in $\mathbb{H}^{2,2}$ that is equivariant under a Hitchin representation. This metric was studied extensively in \cite{OT_Sp4}. \end{rmk} \begin{cor}\label{cor:area_bound} The area of $S$ endowed with the metric $h$ satisfies \[ \mathrm{Area}(S, |q|^{\frac{1}{2}}) \leq \mathrm{Area}(S,h) \leq \frac{3}{2}\mathrm{Area}(S, |q|^{\frac{1}{2}})+\frac{3\pi}{2}|\chi(S)| \ . \] \end{cor} \begin{proof} The lower bound follows immediately from Lemma \ref{lm:lower_bound_flat}. For the upper bound we use Lemma \ref{lm:compare_Hitchin} and \cite[Proposition 3.9]{OT_Sp4}. \end{proof} \begin{lemma}\label{lm:lower_bound_hyp} Let $\sigma=\sigma(z)|dz|^{2}$ be the hyperbolic metric in the conformal class. Then $\frac{1}{4}\sigma \leq h$. \end{lemma} \begin{proof} Consider the function $f=e^{\psi_{1}-\psi_{2}}\sigma^{-1}$, which is well-defined everywhere on $S$. Then \begin{align*} \Delta \log(f) &=\Delta \psi_{1}-\Delta \psi_{2}-\Delta \log(\sigma)\\ &= -e^{2\psi_{2}}|\mu|^{2}+2f\sigma-|\nu|^{2}e^{-2\psi_{1}}-\frac{1}{2}\sigma \\ &\leq \left(2f-\frac{1}{2}\right)\sigma \ . \end{align*} By the maximum principle, applied at a minimum of $f$, we obtain $f\geq \frac{1}{4}$, hence $h=f\sigma\geq \frac{1}{4} \sigma$. \end{proof} \subsection{Projective limits in the space of currents}\label{subsec:limits} By Proposition \ref{prop:neg_curv}, the metric $h$ is negatively curved, thus we can embed $\mathcal{M}(k)$ into the space of geodesic currents for all $g-1<k<3g-3$. We will denote by $L_{h}$ the associated geodesic current. We recall the two main features of this current: \begin{enumerate}[i)] \item for every curve $\gamma \in \pi_{1}(S)$, we have $\ell_{h}(\gamma)=i(L_{h},\delta_{\gamma})$, where $\ell_{h}(\gamma)$ denotes the length of the unique geodesic representative of $\gamma$ for $h$; \item $i(L_{h},L_{h})=\frac{\pi}{2}\mathrm{Area}(S,h)$ \ . \end{enumerate} \begin{thm} \label{thm:limit_induced} Let $\rho_{n}\in \mathpzc{G}(k)$ be a sequence of Gothen representations leaving every compact set. Let $(\mu_{n}, \nu_{n})$ be the sequence parameterizing the corresponding Higgs bundles and let $h_{n}$ be the corresponding sequence of Riemannian metrics on $S$. Then there is a sequence of positive real numbers $t_{n}$ and a mixed structure $\eta$ so that $t_{n}L_{h_{n}}\to \eta$.
\end{thm} \begin{proof} Set $q_{n}=\mu_{n}\nu_{n}$. We distinguish two cases, depending on whether or not the total areas $\|q_{n}\|=\mathrm{Area}(S,|q_{n}|^{\frac{1}{2}})$ are uniformly bounded. \\ \underline{\textit{First case:} $\sup\|q_{n}\|<\infty$.} By Corollary \ref{cor:area_bound}, the self-intersection $i(L_{h_{n}},L_{h_{n}})$ is uniformly bounded. We notice that, because $\rho_{n}$ leaves every compact set, the sequence of hyperbolic metrics $\sigma_{n}$ in the conformal class of $h_{n}$ must necessarily diverge. Otherwise, up to subsequences, we could assume that $\sigma_{n}\to \sigma_{\infty} \in \mathcal{T}(S)$ and we could write $q_{n}=\tau_{n}\tilde{q}_{n}$, where \[ \tau_{n}=\|q_{n}\|_{\infty}:=\max_{S}\frac{|q_{n}|^{2}}{\sigma_{n}^{4}} \] and $\tilde{q}_{n}$ converges uniformly to a non-vanishing quartic differential $\tilde{q}_{\infty} \in \mathcal{Q}(S,\sigma_{\infty})$, as unit spheres of holomorphic differentials over the thick part of Teichm\"uller space are compact. Since \[ \|q_{n}\|=\int_{S}|\tau_{n}|^{\frac{1}{2}}|\tilde{q}_{n}|^{\frac{1}{2}}=|\tau_{n}|^{\frac{1}{2}}\int_{S} \frac{|\tilde{q}_{n}|^{\frac{1}{2}}}{\sigma_{n}} dA_{\sigma_{n}} \] and \[ \frac{|\tilde{q}_{n}|^{\frac{1}{2}}}{\sigma_{n}} dA_{\sigma_{n}} \to \frac{|\tilde{q}_{\infty}|^{\frac{1}{2}}}{\sigma_{\infty}} dA_{\sigma_{\infty}} \neq 0 \ , \] the bound on $\|q_{n}\|$ implies that $\tau_{n}$ is uniformly bounded, so $q_{n}$ converges up to subsequences. Because $q_{n}=\mu_{n}\nu_{n}$ and, up to the $\mathbb{C}^{*}$-action, we can assume that $|\mu_{n}|$ and $|\nu_{n}|$ are uniformly bounded, the pair $(\mu_{n}, \nu_{n})$ would also converge up to a subsequence, contradicting the fact that the representations $\rho_{n}$ leave every compact set. Therefore, the sequence $\sigma_{n}$ of hyperbolic metrics in the conformal class of $h_{n}$ diverges, as claimed. Then, by Lemma \ref{lm:lower_bound_hyp}, the length spectrum of $L_{h_{n}}$ is unbounded. Since $\mathbb{P}\mathcal{C}(S)$ is compact, there exists a sequence $t_{n}\to 0$ such that $t_{n}L_{h_{n}}\to L_{\infty}$. We easily deduce that $i(L_{\infty},L_{\infty})=0$, hence $L_{\infty}$ is a measured lamination, which we can interpret as a mixed structure with no flat parts. \\ \underline{\textit{Second case:} $\sup\|q_{n}\|=\infty$.} By Corollary \ref{cor:area_bound}, the self-intersection of $L_{h_{n}}$ diverges like $\|q_{n}\|$. Since every geodesic current has finite self-intersection, we need to rescale $L_{h_{n}}$ at least by $\frac{1}{\sqrt{\|q_{n}\|}}$. Let us denote \[ \hat{L}_{h_{n}}=\frac{1}{\sqrt{\|q_{n}\|}}L_{h_{n}} \ . \] If the length spectrum of $\hat{L}_{h_{n}}$ is still unbounded, then there is a sequence $t_{n}\to 0$ such that $t_{n}\hat{L}_{h_{n}} \to \hat{L}_{\infty}$, which has vanishing self-intersection, thus $L_{h_{n}}$ converges projectively to a measured lamination. If the length spectrum of $\hat{L}_{h_{n}}$ is uniformly bounded, then by Lemma \ref{lm:lower_bound_flat}, the length spectrum of the unit-area flat metrics $|q_{n}|^{\frac{1}{2}}/\|q_{n}\|$ is uniformly bounded as well. Thus, by \cite[Theorem 2.5]{OT_Sp4}, the geodesic currents $L_{q_{n}}$ converge in $\mathbb{P}\mathcal{C}(S)$ to a mixed structure $m$ that is not purely laminar. This furnishes an orthogonal decomposition (for the intersection form $i$) of the surface $S$ into a collection of incompressible subsurfaces $\{S_{j}'\}_{j=1}^{l}$, obtained by cutting $S$ along simple closed curves $\gamma_{i}$, such that $m$ is induced by a flat metric on each $S_{j}'$ and is a measured lamination on the complement.
Moreover, we can assume that each simple closed curve $\gamma_{i}$ bounds at least one flat part, induced by a meromorphic quartic differential $\tilde{q}_{j}$. On each $S_{j}'$, the inequalities $|q_{n}|^{\frac{1}{2}}\leq h_{n} \leq \widetilde{h}_{n}$ given by Lemma \ref{lm:compare_Hitchin} and Lemma \ref{lm:lower_bound_flat}, together with \cite[Corollary 3.14]{OT_Sp4}, imply that the sequence $h_{n}$, renormalized by $\|q_{n}\|$, converges to $|\tilde{q}_{j}|^{\frac{1}{2}}$ uniformly on compact sets outside the zeros and poles of $\tilde{q}_{j}$. We deduce that on each $S_{j}'$ we have \[ \hat{L}_{\infty}=\lim_{n \to +\infty}\hat{L}_{h_{n}}=\lim_{n\to +\infty}\frac{1}{\sqrt{\|q_{n}\|}}L_{q_{n}}=L_{\tilde{q}_{j}} \ , \] because uniform convergence of metrics implies convergence of the length spectra (\cite[Proposition 5.3]{Charles_dPSL}). In particular, we notice that \[ \lim_{n \to +\infty}i(\hat{L}_{h_{n}}, \delta_{\gamma_{i}})=0 , \] so that the same collection of curves $\gamma_{i}$ can be used for the orthogonal decomposition of $\hat{L}_{\infty}$. Therefore, we can write \[ \hat{L}_{\infty}=\sum_{j=1}^{l}L_{\tilde{q}_{j}}+\lambda \ , \] where $\lambda$ is a geodesic current supported in the complement of $\bigcup_{j}S_{j}'$. We claim that $\lambda$ is a measured lamination: in fact, using again the inequalities $|q_{n}|^{\frac{1}{2}} \leq h_{n}\leq \tilde{h}_{n}$ and \cite[Lemma 3.15]{OT_Sp4}, we have \begin{align*} \frac{\pi}{2}&=\frac{\pi}{2}\lim_{n\to +\infty}\frac{\mathrm{Area}(S,h_{n})}{\|q_{n}\|} =\lim_{n \to +\infty}i(\hat{L}_{h_{n}},\hat{L}_{h_{n}})\\ &=i(\hat{L}_{\infty}, \hat{L}_{\infty}) =\sum_{j=1}^{l} i(L_{\tilde{q}_{j}}, L_{\tilde{q}_{j}})+i(\lambda, \lambda)\\ &=\lim_{n\to +\infty} \frac{1}{\|q_{n}\|}i(L_{q_{n}},L_{q_{n}})+i(\lambda, \lambda)=\frac{\pi}{2}+i(\lambda, \lambda) \ , \end{align*} so $\lambda$ has vanishing self-intersection. \end{proof} \begin{proof}[Proof of Theorem A] By Theorem \ref{thm:limit_induced}, we know that $\partial \overline{\mathcal{M}(k)}\subseteq \mathbb{P}\mathrm{Mix}_{4}(S)$. Consider now the family of metrics $h_{t}$ associated to a ray $tq$ in the Hitchin base for a fixed unit-area quartic differential $q$. By the proof of Theorem \ref{thm:limit_induced}, we know that $L_{h_{t}}$ converges projectively to $L_{q}$ as $t\to +\infty$. Therefore, $\partial \overline{\mathcal{M}(k)}\supseteq \overline{\mathrm{Flat}_{4}(S)}=\mathbb{P}\mathrm{Mix}_{4}(S)$ for all $g-1<k<3g-3$. Since the map $\mathpzc{G}(k) \rightarrow \mathcal{M}(k)$ is proper by Theorem \ref{thm:limit_induced}, we can identify the boundary of the Gothen component $\mathpzc{G}(k)$ with $\mathbb{P}\mathrm{Mix}_{4}(S)$. \end{proof} \section{Induced metrics on the minimal surfaces} Given a maximal representation $\rho:\pi_{1}(S) \rightarrow \mathrm{Sp}(4,\mathbb{R})$, there is a unique $\rho$-equivariant conformal harmonic map $f_{\rho}:\tilde{S} \rightarrow \mathrm{Sp}(4,\mathbb{R})/\mathrm{U}(2)$ (\cite{Corlette}, \cite{collier2016maximal}). The image of $f_{\rho}$ is thus a minimal surface in the symmetric space, and one could have defined the length spectrum of the maximal representation by considering the induced metric on this surface. In this section we show how this alternative definition would still lead to identifying the boundary of the Gothen and Hitchin components with the space of projectivized mixed structures.
\\ Dai and Li (\cite{QL_cyclic}) give an explicit expression for the induced metric $g$ on the minimal surface in terms of the Higgs bundle data: in local coordinates, \[ g=16(|\nu|^{2}e^{-2\psi_{1}}+2e^{\psi_{1}-\psi_{2}}+|\mu|^{2}e^{2\psi_{2}})|dz|^{2} , \] where $\psi_{1}$ and $\psi_{2}$ are the solutions to the Hitchin equations (\ref{eq:Hitchin}). We denote by $\tilde{g}$ the induced metric on the minimal surface associated to the Higgs bundle in the Hitchin component in the same fiber, i.e. \[ \tilde{g}=16(|q|^{2}e^{-2\tilde{\psi}_{1}}+2e^{\tilde{\psi}_{1}-\tilde{\psi}_{2}}+e^{2\tilde{\psi}_{2}})|dz|^{2} , \] where $q=\mu\nu$. Let us start by analyzing the behavior of the metric $\tilde{g}$. By \cite[Theorem 5.6]{QL_cyclic}, the metric $\tilde{g}$ is strictly negatively curved, and thus there is a unique geodesic current $L_{\tilde{g}}$ that records its marked length spectrum. \begin{thm}\label{thm:B_Hitchin} Let $\rho_{n}$ be a sequence of representations leaving every compact set in the Hitchin component and let $L_{\tilde{g}_{n}}$ be the corresponding sequence of geodesic currents. Then there is a sequence $t_{n} \to 0$ and $\eta \in \mathbb{P}\mathrm{Mix}_{4}(S)$ such that $t_{n}L_{\tilde{g}_{n}}$ converges to $\eta$. \end{thm} \begin{proof} We first observe that, by the proof of Proposition \ref{prop:neg_curv}, \[ |q|^{2}e^{-2\tilde{\psi}_{1}}+e^{2\tilde{\psi}_{2}} \leq 2e^{\tilde{\psi}_{1}-\tilde{\psi}_{2}} \ , \] hence $\tilde{g}$ is uniformly bi-Lipschitz to $\tilde{h}$, as \begin{equation}\label{eq:bi_Lip} 32\tilde{h} \leq \tilde{g} \leq 64\tilde{h}. \end{equation} In particular, the length spectrum of $\tilde{g}_{n}$ diverges at the same rate as the length spectrum of $\tilde{h}_{n}$. Let $t_{n}$ be the scaling factors such that the geodesic currents $L_{\tilde{h}_{n}}$ converge to a projectivized mixed structure $\eta'$. Then, because the length spectrum of $t_{n} L_{\tilde{g}_{n}}$ is uniformly bounded, up to subsequences, $t_{n}L_{\tilde{g}_{n}}$ converges to a geodesic current $L_{\infty}$. Now, if $\eta'$ is purely laminar, then \[ 0=\iota(\eta', \eta')=\lim_{n \to +\infty}t_{n}^{2}\iota(L_{\tilde{h}_{n}}, L_{\tilde{h}_{n}})=\lim_{n \to +\infty} \frac{t_{n}^{2}\pi}{2}\mathrm{Area}(S, \tilde{h}_{n}) \] and by Equation (\ref{eq:bi_Lip}) we deduce that \[ 0=\lim_{n \to +\infty}\frac{t_{n}^{2}\pi}{2}\mathrm{Area}(S, \tilde{g}_{n})=\iota(L_{\infty}, L_{\infty}) , \] so $L_{\infty}$ is a measured lamination as well. Assume now that $\eta'$ is not purely laminar and let $S_{j}$ be a subsurface on which $\eta'$ is given by a meromorphic quartic differential $q_{j}$ of finite area. Note that this case can occur (see \cite[Theorem 3.16]{OT_Sp4}) only when the areas of the quartic differential metrics $|q_{n}|^{\frac{1}{2}}$ are unbounded, and such a $q_{j}$ then arises as the limit of the rescaled flat metrics $t_{n}^{2}|q_{n}|^{\frac{1}{2}}$. Set $\tilde{u}_{1,n}=\tilde{\psi}_{1,n}-\log(|q_n|^{\frac{3}{4}})$ and $\tilde{u}_{2,n}=\tilde{\psi}_{2,n}-\log(|q_n|^{\frac{1}{4}})$. We can then rewrite $\tilde{g}_n$ as \[ \tilde{g}_{n}=16|q_{n}|^{\frac{1}{2}}(e^{-2\tilde{u}_{1,n}}+2e^{\tilde{u}_{1,n}-\tilde{u}_{2,n}}+e^{2\tilde{u}_{2,n}}) \ . \] By \cite[Corollary 3.13]{OT_Sp4}, the sequences $\tilde{u}_{1,n}$ and $\tilde{u}_{2,n}$ converge to $0$ uniformly on compact sets outside the zeros and poles of $q_{j}$, whereas the renormalized flat metrics $t_{n}^{2}|q_{n}|^{\frac{1}{2}}$ converge to $|q_{j}|^{\frac{1}{2}}$ by assumption.
Hence, we can conclude that on $S_{j}$ we have \[ L_{{\infty}_{|_{S_{j}}}}=8L_{q_{j}}=8 {\eta'}_{|_{S_{j}}} \ . \] Therefore, we can write $L_{\infty}=\lambda+\sum_{j}8L_{q_{j}}$, where $\lambda$ is a geodesic current supported on the complement of the flat subsurfaces $\bigcup_{j} S_{j}$ in the decomposition of $\eta'$. We claim that $\lambda$ is a measured lamination. Indeed, recalling that $\iota(\eta', \eta')=\frac{\pi}{2}$ (see \cite[Theorem 3.16]{OT_Sp4}), we have \begin{align*} \frac{64\pi}{2} &=\iota(8\eta', 8\eta')=64\lim_{n \to +\infty}t_{n}^{2}\iota(L_{\tilde{h}_{n}}, L_{\tilde{h}_{n}})=64\lim_{n\to +\infty}\frac{t_{n}^{2}\pi}{2}\mathrm{Area}(S, \tilde{h}_{n}) \\ & \geq \lim_{n \to +\infty} \frac{t_{n}^{2}\pi}{2}\mathrm{Area}(S, \tilde{g}_{n})=\lim_{n \to +\infty}t_{n}^{2}\iota(L_{\tilde{g}_{n}}, L_{\tilde{g}_{n}})= \iota(L_{\infty}, L_{\infty})\\ & =\frac{64\pi}{2}+\iota(\lambda, \lambda) \ . \end{align*} We conclude that $\lambda$ has vanishing self-intersection and is thus a measured lamination. If we further renormalize $L_{\infty}$ by dividing by $8$, we obtain a mixed structure $\eta$ with self-intersection $\frac{\pi}{2}$; in other words, the areas of the flat subsurfaces sum up to $1$. \end{proof} We now deduce Theorem \ref{thmB} for the induced metrics arising from representations in the Gothen components. \begin{proof}[Proof of Theorem \ref{thmB}] Since Equation (\ref{eq:bi_Lip}) holds for the Gothen components as well, being based on Proposition \ref{prop:neg_curv}, the length spectrum of $g_{n}$ grows at the same rate as the length spectrum of the metrics $h_{n}$; thus, if we choose scaling factors $t_{n}$ such that $t_{n}L_{h_{n}}$ converges to a mixed structure $\eta$, then $t_{n}L_{g_{n}}$ must converge as well. Let $L_{\infty}$ be this limit. The same argument as in the proof of Theorem \ref{thm:B_Hitchin} shows that if $\eta$ is purely laminar, then $L_{\infty}$ is a measured lamination. Let now $S_{j}$ be a subsurface on which $\eta$ is the Liouville current of a flat metric induced by a meromorphic quartic differential $q_{j}$. By the proof of \cite[Theorem 6.2, page 1257]{QL_cyclic}, the following inequalities hold: \[ 16(|q_{n}|^{2}e^{-2\tilde{\psi}_{1}}+ 2|q_{n}|^{\frac{1}{2}}+|q_{n}|^{2}e^{-2\tilde{\psi}_{1}}) \leq g_{n} \leq \tilde{g}_{n} \ . \] Introducing again the function $\tilde{u}_{1,n}=\tilde{\psi}_{1,n}-\log(|q_n|^{\frac{3}{4}})$, we can rewrite the left-hand side as \[ 32|q_{n}|^{\frac{1}{2}}(e^{-2\tilde{u}_{1,n}}+1) \ . \] Because $\tilde{u}_{1,n}$ converges uniformly to $0$ outside the zeros and poles of $q_{j}$, and the metrics $\tilde{g}_{n}$ and $|q_{n}|^{\frac{1}{2}}$ rescaled by $t_{n}^{2}$ converge to $64|q_{j}|^{\frac{1}{2}}$ and $|q_{j}|^{\frac{1}{2}}$ respectively, we conclude that $t_{n}^{2}g_{n}$ converges to $64|q_{j}|^{\frac{1}{2}}$, and thus $L_{\infty}$ restricted to the subsurface $S_{j}$ coincides with $8L_{q_{j}}$, as lengths scale by $\sqrt{64}=8$. If we then write $L_{\infty}=\sum_{j} 8L_{q_{j}} + \lambda$, where $\lambda$ is a geodesic current supported on the complement of the subsurfaces $S_{j}$, then the same area argument as in the proof of Theorem \ref{thm:B_Hitchin} shows that $\lambda$ has self-intersection $0$ and is thus a measured lamination. Hence $\frac{1}{8}L_{\infty}$ is a mixed structure whose flat pieces have unit total area, which proves the claim. \end{proof} \bibliographystyle{alpha}
{ "timestamp": "2021-05-06T02:05:24", "yymm": "2105", "arxiv_id": "2105.01779", "language": "en", "url": "https://arxiv.org/abs/2105.01779" }
{ "timestamp": "2021-08-03T02:28:03", "yymm": "2105", "arxiv_id": "2105.01857", "language": "en", "url": "https://arxiv.org/abs/2105.01857" }
\section{Introduction} \label{sec:introduction} Many autonomous systems implement model-based control schemes, in which the system uses an internal model of itself, its environment, and the consequences of its own actions. In robotic settings, these models typically map the robot's configuration, its velocities, and the forces applied to it to the accelerations of the robot's degrees-of-freedom. There is a growing need to efficiently learn these models from potentially noisy time-series describing the evolution of the system. Structured Mechanical Models (SMMs)~\cite{Gupta2020, gupta2019general} provide a black-box but data-efficient parameterization of mechanical systems that has been shown to be well-suited to the task of learning models of dynamics from data. Instead of directly parameterizing a function that predicts accelerations given the robot's configuration, velocity, and inputs, SMMs parameterize the Lagrangian of a mechanical system. The accelerations of the system are then computed from the Lagrangian using the Euler-Lagrange equation~\cite{Gupta2020,gupta2019general, lutter2018deep, cranmer2020lagrangian}. Prior work has fit SMMs by minimizing the error between the predicted and observed accelerations~\cite{lutter2018deep, cranmer2020lagrangian}, or between the predicted and observed next states of the system~\cite{Gupta2020}. In cases in which the dynamics are continuous, time-series of observations can typically be filtered or smoothed in order to recover estimates of the system's configurations, velocities, and accelerations, from which we can train a model. However, when the dynamics are inherently discontinuous, such as in domains involving contact, a continuous formulation of the Euler-Lagrange equations will yield predictions of infinite acceleration, and is thus inappropriate. In this work, we propose an alternative methodology for fitting SMMs to a time-series of system configuration measurements. We observe that for a trajectory to have been generated by a given Lagrangian, the discrete Euler-Lagrange (DEL) equations must be zero along the trajectory \cite{marsden2001discrete}. We therefore propose to minimize the DEL \textit{residual}, attempting to find a Lagrangian such that the DEL equations are zero along the trajectories being fit. This approach is not sufficient in itself, however, due to \textit{gauge invariance}. Specifically, if the DEL equations are satisfied along a trajectory given a Lagrangian $\Lag$, then they are also satisfied given the Lagrangian $\alpha \Lag + \beta$, for any $(\alpha, \beta) \in \bbR^{2}$. In particular, a Lagrangian that is everywhere equal to a constant (the case $\alpha = 0$) trivially satisfies the DEL equations. To avoid this degenerate solution, we propose a regularization term that ensures the learned Lagrangian is non-constant along the trajectory. To validate our approach, we fit SMMs to data recovered from noisy observations of damped and undamped double pendulums, and show that training with our method instead of acceleration or next-state regression reliably yields models with lower error.
The contributions of this work are as follows: \begin{itemize} \item We propose an optimization objective for fitting SMMs based on the discrete Euler-Lagrange residual, \item We propose a regularization term that guarantees our method does not find degenerate solutions, and \item We demonstrate that SMMs fit with our method are of better quality than those fit with conventional approaches. \end{itemize} We conclude this work with a discussion of potential application domains in which minimizing the DEL residual is more appropriate than acceleration or next-state regression, such as domains involving contact. The codebase containing the experiments conducted and an example implementation of our methodology can be found at \url{https://github.com/sisl/delsmm}. \section{Background} \label{sec:background} \subsection{Lagrangian Dynamics} Consider a mechanical system with configuration $q\in \bbR^n$ and associated velocity $\qdot \in \bbR^n$. The evolution of the system can be understood by specifying the \textit{Lagrangian} of the system, i.e. \begin{equation} \Lag(q,\qdot) = \frac12 \qdot^\top \M(q) \qdot - V(q) \end{equation} where $\frac12 \qdot^\top \M(q) \qdot$ and $V(q)$ are the kinetic and potential energy of the system, respectively, and $\M(q) \succ 0$ is the \textit{mass-matrix}. The system may also be \textit{forced} via an external forcing function $F(q,\qdot, u)$, where $u$ is a control input. Note that these forces are \textit{non-conservative} in that they change the total energy of the system. For simplicity, we consider uncontrolled systems in this work, and thus drop the dependence on $u$. \subsubsection{The Euler-Lagrange Equation} The Lagrangian of a system can be used to specify the dynamics of the system via the \textit{Euler-Lagrange} equation. The continuous-time version of this equation is: \begin{equation} \frac{d}{dt}\left(\frac{\partial \Lag}{\partial \qdot}\right) = \frac{\partial \Lag}{\partial q} + F(q,\qdot) \end{equation} This equation can be used to find the accelerations $\qddot$ of the system, as follows: \begin{equation} \label{eqn:acc} \qddot = \left(\frac{\partial^2 \Lag}{\partial \qdot^2}\right)^{-1}\left[ F(q,\qdot) + \frac{\partial \Lag}{\partial q} - \left(\frac{\partial^2 \Lag}{\partial \qdot\partial q}\right)\qdot \right] \end{equation} The trajectory of the system given $q_0, \qdot_0$ can then be found by a numerical integration scheme such as Runge-Kutta~\cite{runge1895}. Suppose we are instead given $q_1, q_2$, the configurations of the system at the first and second time-steps of interest. We can find the configuration $q_3$ that succeeds them by considering a \textit{discrete} formulation of the Lagrangian and the Euler-Lagrange equation~\cite{marsden2001discrete}. We define the discrete Lagrangian $\Lag_d$ and discrete generalized force $F_d$ given simulation step-size $h$ to be: \begin{equation} \begin{aligned} \Lag_d(q_1, q_2, h) &= h\Lag\left(\frac{q_1 + q_2}{2}, \frac{q_2-q_1}{h}\right)\\ F_d(q_1,q_2,h) &= h F\left(\frac{q_1 + q_2}{2}, \frac{q_2-q_1}{h}\right) \end{aligned} \end{equation} and the discrete Euler-Lagrange (DEL) equation as: \begin{equation} \label{eqn:deleqn} \begin{aligned} \text{DEL}(q_{1:3}) &= D_2 \Lag_d(q_1, q_2) + D_1 \Lag_d(q_2,q_3) \\ &\quad + \frac{1}{2}\left( F_d(q_1,q_2) + F_d(q_2, q_3) \right)\\ &= 0 \end{aligned} \end{equation} Here we use the \emph{slot derivative} $D_i$ to indicate partial differentiation with respect to a function's $i$-th argument.
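To make the DEL equation concrete, the following minimal PyTorch sketch evaluates the unforced DEL residual for a single configuration triple using automatic differentiation; the function and variable names are ours and are simplifications of, not drop-in pieces of, the released codebase.

\begin{verbatim}
import torch

def discrete_lagrangian(L, q1, q2, h):
    # Midpoint rule: Ld(q1, q2) = h * L((q1+q2)/2, (q2-q1)/h)
    return h * L((q1 + q2) / 2, (q2 - q1) / h)

def del_residual(L, q1, q2, q3, h):
    """Unforced DEL residual ||D2 Ld(q1,q2) + D1 Ld(q2,q3)||^2."""
    q2 = q2.detach().requires_grad_(True)
    # D2 Ld(q1, q2): gradient of Ld(q1, q2) in its second slot (= q2).
    d2, = torch.autograd.grad(discrete_lagrangian(L, q1, q2, h), q2,
                              create_graph=True)
    # D1 Ld(q2, q3): gradient of Ld(q2, q3) in its first slot (= q2).
    d1, = torch.autograd.grad(discrete_lagrangian(L, q2, q3, h), q2,
                              create_graph=True)
    return ((d2 + d1) ** 2).sum()

# Example with a pendulum-like Lagrangian, L = 0.5*qd^2 - (1 - cos(q)).
L = lambda q, qd: 0.5 * (qd ** 2).sum() - (1 - torch.cos(q)).sum()
q = [torch.tensor([0.0]), torch.tensor([0.05]), torch.tensor([0.11])]
print(del_residual(L, *q, h=0.05))
\end{verbatim}

When the triple lies on a trajectory generated by $\Lag$, this residual vanishes up to the accuracy of the discretization.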
\subsubsection{Gauge Invariance} It should be noted that, given a system's dynamics, the corresponding Lagrangian is not unique. Specifically, if $\tilde{\Lag} = \alpha \Lag + \beta$ for some $\alpha \neq 0$ and $\beta \in \bbR$, then it is trivial to see from \Cref{eqn:acc} and \Cref{eqn:deleqn} that the dynamics that correspond to $\Lag$ and $\tilde{\Lag}$ are identical. That is, if there is a tuple $(q, \qdot, \qddot)$ and Lagrangian $\Lag$ for which \Cref{eqn:acc} holds, or a tuple $q_{1:3}$ and discrete Lagrangian $\Lag_d$ for which \Cref{eqn:deleqn} holds, then it will also hold for $\tilde{\Lag} = \alpha \Lag + \beta$. \subsection{Structured Mechanical Models} Many works in recent years have proposed black-box parameterizations for the dynamics of a mechanical system~\cite{gupta2019general, Gupta2020, lutter2018deep, cranmer2020lagrangian, greydanus2019hamiltonian}. \citet{gupta2019general} and \citet{lutter2018deep} proposed to parameterize the components of the Lagrangian of a dynamical system using neural networks, and then derive the accelerations of the system via \Cref{eqn:acc}. Specifically, they proposed to parameterize the Cholesky factor of $\M(q)$ using a neural network mapping $q\in\bbR^n\rightarrow \bbR^{\frac{n(n+1)}{2}}$, and the potential energy $V(q)$ of the system using another neural network mapping $q\in\bbR^n\rightarrow \bbR$. \citet{gupta2019general} also propose to parameterize the generalized forces $F(q,\qdot,u)$ using a neural network, and show that the expressive power of this model is equivalent to that of a neural network directly mapping $(q,\qdot,u)$ to $\qddot$, though with considerably better generalization properties. \citet{cranmer2020lagrangian} extend these works by parameterizing the Lagrangian as any neural network mapping $(q,\qdot)\rightarrow \bbR$, allowing for the use of novel architectures such as graph neural networks. Each of these works refers to these parameterizations by a different name, such as Deep Lagrangian Networks~\cite{lutter2018deep} and Lagrangian Neural Networks~\cite{cranmer2020lagrangian}. We follow the naming convention of \citet{Gupta2020}, calling a model parameterizing $\M(q)$, $V(q)$, and $F(q,\qdot)$ with neural networks a \textit{structured mechanical model} (SMM). \subsubsection{Parameter Optimization in Prior Work} We are given a time-series of sampled system configurations $y_{1:T}$, possibly corrupted by observation noise. It is common to use a \textit{smoothing} routine to try to recover time-series of the system's configurations $\tq$, velocities $\tqd$, and accelerations $\tqdd$ from $y_{1:T}$. One popular approach, used in this paper, is to model the system as a double integrator and recover the desired series using Kalman smoothing \cite{aravkin2017generalized}. The parameters of an SMM are then optimized to minimize the empirical risk between $\tqdd$ and the accelerations predicted by the SMM given $\tq$ and $\tqd$~\cite{lutter2018deep, cranmer2020lagrangian}. \citet{Gupta2020} propose instead to minimize the empirical risk between the predicted next states and the smoothed next states $\tq', \tqd'$ given $\tq, \tqd$. We refer to these two approaches as \textit{acceleration} and \textit{next-state regression}, respectively. In the next section, we propose an alternative methodology for optimizing the parameters of an SMM.
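As an illustration of this parameterization, the sketch below constructs a positive-definite mass matrix from a neural network that outputs the entries of its Cholesky factor. It is a minimal PyTorch rendering of the idea; the layer sizes, activation, and the softplus-plus-$\epsilon$ treatment of the diagonal are our illustrative choices rather than the exact architectures used in the cited works.

\begin{verbatim}
import torch
import torch.nn as nn

class MassMatrixNet(nn.Module):
    """M(q) = L(q) L(q)^T, with L lower triangular and positive diagonal."""
    def __init__(self, n, hidden=64, eps=1e-4):
        super().__init__()
        self.n, self.eps = n, eps
        self.net = nn.Sequential(nn.Linear(n, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n * (n + 1) // 2))

    def forward(self, q):                      # q: (batch, n)
        out = self.net(q)
        k = self.n * (self.n - 1) // 2         # strictly-lower entries
        i, j = torch.tril_indices(self.n, self.n, offset=-1)
        L = out.new_zeros(q.shape[0], self.n, self.n)
        L[:, i, j] = out[:, :k]
        # Softplus keeps the diagonal (hence M) strictly positive definite.
        diag = torch.nn.functional.softplus(out[:, k:]) + self.eps
        L = L + torch.diag_embed(diag)
        return L @ L.transpose(-1, -2)

M = MassMatrixNet(n=2)(torch.zeros(1, 2))      # a 2-DOF example
\end{verbatim}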
In this section, we describe a methodology for fitting an SMM to data using the discrete Euler-Lagrange equation. Again, we assume we have access to a time-series $\tq_{1:T}$ found by smoothing an observation time-series, but do not assume access to the time-derivatives of this series. We begin by observing that, given an arbitrary tuple of configurations $q_{1:3}$, we do not expect $\text{DEL}(q_{1:3}) = 0$ as in \Cref{eqn:deleqn}. We define the \textit{DEL residual} $\rho(q_{1:3})$ to be: \begin{equation} \rho(q_{1:3}) = \lVert \text{DEL}(q_{1:3}) \rVert_2^2 \end{equation} The essence of our methodology is to find the parameters $\theta$ of an SMM by minimizing the DEL-residual. Unfortunately, due to gauge-invariance, a trivial solution exists that has a zero DEL residual while incorrectly estimating the dynamics. Specifically, this solution is a constant Lagrangian, i.e. $\Lag(q,\qdot) = \beta, F(q,\qdot) = 0~\forall (q,\qdot)$, for any constant $\beta \in \bbR$, which gives a zero DEL residual for all trajectories. We can avoid this trivial solution by adding the constraint that the mass-matrix is strictly \emph{positive definite}. We incorporate this constraint by adding a barrier function commonly used in semidefinite programming that allows us to lower-bound the minimum eigenvalue of the mass-matrix. Let $\rho(q_{1:3} \mid \theta)$ be the DEL-residual given that we parameterize the SMM components $\M_\theta$, $V_\theta$, and $F_\theta$. The loss $L(\theta)$ is defined as follows: \begin{equation} \label{eqn:delloss} \begin{aligned} L(\theta) =& \overset{L_1(\theta)}{\overbrace{\bbE_{\tilde{q}_{1:3}\sim \mathcal{D}}\left[ \rho(\tilde{q}_{1:3} \mid \theta)\right]}} \\ &- \mu \underset{L_2(\theta)}{\underbrace{\bbE_{\tilde{q}\sim \mathcal{D}}\left[\log\det(\M_\theta(\tilde{q}) - \alpha I)\right]}} \end{aligned} \end{equation} where $\alpha$ is an arbitrarily chosen eigenvalue lower-bound and $\mathcal{D}$ is the training dataset. The addition of the regularizer ensures that $\min \text{eig}(\M_\theta(\tilde{q})) > \alpha~\forall~\tilde{q}\in \mathcal{D}$. A sensible choice for $\alpha$ is a value slightly smaller than the smallest eigenvalue of the mass-matrix over the dataset given the initial parameter guess $\theta_0$. We show in \Cref{thm:unbiased} that minimizing this loss as opposed to the DEL-residual does not yield a biased solution. \begin{theorem} \label{thm:unbiased} Let the model class $\mathcal{M} = \{\M_\theta, V_\theta, F_\theta\}$ be closed under multiplication by a positive scalar. Then, there exists a minimizer $\theta$ of $L(\theta)$ that is also a non-trivial minimizer of $L_1(\theta)$. \end{theorem} \begin{proof} If $\mathcal{M} = \{\M_\theta, V_\theta, F_\theta\}$ is closed under multiplication by a positive scalar, then for each $\gamma > 0$ there exists $\tilde{\theta}_\gamma(\theta, \gamma)$ such that: \begin{equation} \begin{aligned} \forall~(q,\qdot)~&(\gamma \M_\theta(q) = \M_{\tilde{\theta}_\gamma}(q)) \\ \wedge&(\gamma V_\theta(q) = V_{\tilde{\theta}_\gamma}(q)) \\ \wedge& (\gamma F_\theta(q,\qdot) = F_{\tilde{\theta}_\gamma}(q,\qdot)).
\end{aligned} \end{equation} Furthermore, let: \begin{equation} \begin{aligned} \theta^* = ~&\underset{\theta}{\arg\min}\quad &L_1(\theta) \\ & \textrm{s.t.} \quad & \M_\theta(q) \succ 0~\forall~q\in \mathcal{D} \end{aligned} \end{equation} be a non-trivial minimizer of $L_1(\theta)$, and let $\tilde{\theta}^*_\gamma = \tilde{\theta}_\gamma(\theta^*, \gamma)$. Because differentiation is linear, we know that: \begin{equation} \begin{aligned} \frac{d}{d\theta} L(\tilde{\theta}^*_\gamma) &= \frac{d}{d\theta} L_1(\tilde{\theta}^*_\gamma) - \mu \frac{d}{d\theta} L_2(\tilde{\theta}^*_\gamma). \\ \end{aligned} \end{equation} By gauge-invariance, \begin{equation} \begin{aligned} \frac{d}{d\theta} L_1(\tilde{\theta}^*_\gamma) &= 0 \\ \end{aligned} \end{equation} and as $\gamma \rightarrow \infty$: \begin{equation} \begin{aligned} \frac{d}{d\theta} L_2(\tilde{\theta}^*_\gamma) &= \frac{d}{d\theta} \bbE_{\tilde{q}\sim \mathcal{D}}\left[\log\det(\M_{\tilde{\theta}^*_\gamma}(\tilde{q}) - \alpha I)\right] \\ &\approx \frac{d}{d\theta} \bbE_{\tilde{q}\sim \mathcal{D}}\left[\log\det(\M_{\tilde{\theta}^*_\gamma}(\tilde{q}))\right] \\ &= \frac{1}{\gamma}\frac{d}{d\theta} \bbE_{\tilde{q}\sim \mathcal{D}}\left[\log\det(\M_{\theta^*}(\tilde{q}))\right]\\ &\rightarrow 0. \end{aligned} \end{equation} Therefore, as $\gamma \rightarrow \infty$, \begin{equation} \frac{d}{d\theta} L(\tilde{\theta}^*_\gamma) \rightarrow 0 \end{equation} implying that there exists a minimizer of $L(\theta)$, i.e. $\tilde{\theta}^*_{\gamma\rightarrow\infty}$, that is also a minimizer of $L_1(\theta)$. \end{proof} In the next section, we compare the accuracy of SMMs learned using acceleration and next-state regression to those learned by minimizing \Cref{eqn:delloss}. \section{Experiments} \begin{figure}[t] \centering \includegraphics[width=0.6\columnwidth]{Figures/smoothing.pdf} \caption{Observed, true, and smoothed trajectories used for training.} \label{fig:smoothed} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.45\columnwidth]{Figures/undamped_results.pdf} \includegraphics[width=0.465\columnwidth]{Figures/damped_results.pdf} \caption{Comparison of training methodologies on data from a damped and undamped double pendulum.} \label{fig:undamped} \end{figure} \label{sec:experiments} This section demonstrates that, when data is collected in the presence of noise, the SMMs learned by minimizing \Cref{eqn:delloss} are more accurate than those learned by acceleration or next-state regression. We do so by studying a damped and an undamped double pendulum, for which we learn $F_\theta$ and set it to zero, respectively. \subsection{Double Pendulum Domain} We simulate a double pendulum by specifying its Lagrangian and the forces that act on it. The mass matrix and potential energy forming the Lagrangian are specified in \Cref{app:dpendyn}. In the undamped setting, the forces $F(q,\qdot)$ that act on the system are zero, and in the damped case, the forces are $F(q, \qdot) = -\eta \circ \qdot$, for some positive $\eta \in \bbR^2$, where $\circ$ denotes the Hadamard product. The double pendulum is simulated for 200 time-steps using a step-size of 0.05 and a variational integration scheme. We initialize the system at rest with joint-angles uniformly distributed on $[-\pi/2, \pi/2)$. We simulate 16 trajectories, using 8 trajectories as the training set, 4 as a validation set, and 4 as a test set. The trajectories comprising these sets are randomized across seeds.
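For concreteness, a minimal sketch of this data-generation step follows (PyTorch assumed; for brevity it integrates the accelerations of \Cref{eqn:acc} with a fixed-step RK4 scheme rather than the variational integrator used in our experiments, with the dynamics of \Cref{app:dpendyn}):
\begin{verbatim}
import torch
from torch.autograd.functional import jacobian

m1 = m2 = l1 = l2 = 1.0
g, h = 10.0, 0.05
eta = torch.tensor([0.5, 0.5])              # damping coefficients

def mass_matrix(q):
    I1 = torch.as_tensor(m1 * l1 ** 2 / 3)
    I2 = torch.as_tensor(m2 * l2 ** 2 / 3)
    c2 = torch.cos(q[1])
    I11 = I1 + I2 + m2 * l1 ** 2 + m2 * l1 * l2 * c2
    I12 = I2 + 0.5 * m2 * l1 * l2 * c2
    return torch.stack([torch.stack([I11, I12]),
                        torch.stack([I12, I2 + 0.0 * c2])])

def potential(q):
    return (-0.5 * m1 * g * l1 * torch.cos(q[0])
            - m2 * g * (l1 * torch.cos(q[0])
                        + 0.5 * l2 * torch.cos(q[0] + q[1])))

def lag(q, qd):
    return 0.5 * qd @ mass_matrix(q) @ qd - potential(q)

def accel(q, qd):
    # Accelerations from Eq. (3); partials taken by autodiff. For this
    # Lagrangian, d^2 L / d qdot^2 is exactly the mass matrix.
    dL_dq = jacobian(lambda q_: lag(q_, qd), q)
    cross = jacobian(
        lambda q_: jacobian(lambda qd_: lag(q_, qd_), qd, create_graph=True),
        q)                                  # (i, j) = d^2 L / (dqd_i dq_j)
    rhs = -eta * qd + dL_dq - cross @ qd    # damped case: F = -eta o qdot
    return torch.linalg.solve(mass_matrix(q), rhs)

def dyn(x):
    q, qd = x[:2], x[2:]
    return torch.cat([qd, accel(q, qd)])

def rk4_step(x):
    k1 = dyn(x); k2 = dyn(x + 0.5 * h * k1)
    k3 = dyn(x + 0.5 * h * k2); k4 = dyn(x + h * k3)
    return x + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

q0 = (torch.rand(2) - 0.5) * torch.pi       # at rest, angles in [-pi/2, pi/2)
x = torch.cat([q0, torch.zeros(2)])
traj = [x]
for _ in range(200):
    x = rk4_step(x)
    traj.append(x)
qs = torch.stack(traj)[:, :2]               # configurations of one trajectory
\end{verbatim}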
We observe the joint angles of the system at every time-step with additive Gaussian observation noise with zero mean and a standard deviation of 0.1 $\si\radian$, yielding observations $y_{1:T}$. We also conduct the experiment using noise standard deviations of 0.05 and 0.4 $\si\radian$ and present the results in \Cref{app:noiseexp}. \subsection{Experimental Protocol} We estimate the smoothed joint angles $\tq_{1:T}$, velocities $\tqd_{1:T}$, and accelerations $\tqdd_{1:T}$ from $y_{1:T}$. Specifically, we assume the data is generated from a linear dynamical system with dynamics: \begin{equation} \begin{aligned} x_{t+1} = \begin{bmatrix} q_{t+1}\\\qdot_{t+1}\\\qddot_{t+1} \end{bmatrix} &= \exp\left(\begin{bmatrix}0 & 1 & 0\\0 & 0 & 1\\0&0&0\end{bmatrix}\Delta t\right)x_t + w_t\\ y_t &= \begin{bmatrix}1&0&0\end{bmatrix}~x_t + v_t \end{aligned} \end{equation} where $w_t \sim \mathcal{N}( 0, Q)$ and $v_t \sim \mathcal{N}(0, R)$. We use expectation-maximization to find the likelihood-maximizing observation covariance matrix and the distributions over initial conditions. We use a fixed process-noise covariance matrix of $Q=\text{diag}([\num{e-3},\num{e-3},1.0])$. A sample trajectory recovered from this smoothing procedure is depicted in \Cref{fig:smoothed}, which shows that high-frequency content is not recovered well in the smoothed accelerations. We fit to the smoothed data by minimizing the appropriate loss using stochastic gradient descent. Specifically, we use a batch size of $256$ tuples, and use Adam~\cite{kingma2014adam} to optimize with varied initial learning rates $\xi_0$. All learning rates follow a decay schedule of $\xi_k = 500 \xi_0 /(500 + k) $, where $k$ is the training epoch. We minimize each loss for 500 epochs. To avoid overfitting, we select the model with the lowest predicted-acceleration error on the (smoothed) validation set. We then evaluate the quality of the fit by comparing the predicted and true accelerations on the test data; the model with the lowest test error is deemed the best-performing model. When fitting to data from an undamped double pendulum, we use a conservative SMM in which $F_\theta = 0$. When fitting to data from a damped double pendulum, we represent $F_\theta$ with a neural network taking inputs $q$ and $\qdot$. To compare the three methodologies (minimizing the regularized DEL-residual, acceleration regression, and next-state regression), we randomize the trajectories in the train, validation, and test datasets, as well as the initialization of the SMM, over 10 random seeds. We report the mean error in predicted accelerations on the test set, as well as the standard error of this estimate. For each methodology, we compare results for learning rates $\xi_0 \in \{ \num{e-2}, \num{e-3}, \num{e-5} \}$. \subsection{Results} In \Cref{fig:undamped}, we present the mean and standard errors of test performance for various learning rates, and for all methodologies. We see that in both cases, for an appropriate choice of learning rate, minimizing the DEL-residual via \Cref{eqn:delloss} yields higher-quality models than either baseline methodology. We present additional experiments for different noise standard deviations in \Cref{app:noiseexp}, which support these conclusions.
The experiments conducted suggest that fitting SMMs to time-series by minimizing the DEL-residual via \Cref{eqn:delloss} yields models that are better than those learned by fitting SMMs via acceleration or next-state regression. \section{Conclusion} \label{sec:conclusion} In this work, we proposed a methodology for fitting SMMs to a time-series of observed states by minimizing the discrete Euler-Lagrange residual. To prevent a trivial solution from being found, we introduced a regularization term that guarantees that the mass matrix has a lower-bounded minimum eigenvalue for all states in the dataset. We proved that using the regularized loss does not bias the learning objective. Furthermore, we showed in experiments on noisy data from damped and undamped double pendulums that our methodology learns better-quality solutions than acceleration or next-state regression. An application of particular interest is the fitting of SMMs to contact-rich systems. In such systems, the use of the continuous formulation of the Euler-Lagrange equations is inappropriate, owing to the presence of infinitely large accelerations on contact events. The discrete formulation of these equations straightforwardly incorporates contact by introducing inequality constraints while minimizing the DEL-residual. Extending our methodology to enable the fitting of SMMs to contact-rich systems is a promising avenue for future work. \section{Experiments} \label{app:exp} In this section, we detail the dynamics of the double pendulum used in our experiments and display the results of experiments performed with additional noise standard deviations. \subsection{Double Pendulum Dynamics} \label{app:dpendyn} We use double pendulum dynamics in our experiments. The mass matrix $\M_{sys}(q)$ and potential energy $V_{sys}(q)$ of the double pendulum are as follows: \begin{equation} \M_{sys}(q) = \begin{bmatrix} I_{11} & I_{12} \\ I_{12} & I_2 \end{bmatrix} \end{equation} where \begin{align} I_1 &= \frac{1}{3} m_1 l_1^2 \\ I_2 &= \frac{1}{3} m_2 l_2^2 \\ I_{11} &= I_1 + I_2 + m_2 {l_1}^2 + m_2 l_1 l_2 \cos{q_2} \\ I_{12} &= I_2 + \frac{1}{2} m_2 l_1 l_2 \cos{q_2} \end{align} \begin{equation} V_{sys}(q) = - \frac{1}{2} m_1 g l_1 \cos{q_1} - m_2g\left(l_1 \cos{q_1} + \frac{l_2}{2}\cos(q_1+q_2)\right) \end{equation} For the experiments, we used the parameters specified in \Cref{tab:sysparams}. \begin{table}[h!] \centering \caption{System dynamics parameters.} \begin{tabular}{ccs|ccs} \toprule {Parameter} & {Value} & {Unit}&{Parameter} & {Value} & {Unit} \\ \midrule $m_1$ & 1.0 & \si{\kilogram} & $\eta_1$ & 0.5 & \si{\newton\meter\second\per\radian} \\ $m_2$ & 1.0 & \si{\kilogram} & $\eta_2$ & 0.5 & \si{\newton\meter\second\per\radian} \\ $l_1$ & 1.0 & \si{\meter} & & & \\ $l_2$ & 1.0 & \si{\meter} & & & \\ $g$ & 10.0 & \si{\meter\per\second^2} & & & \\ \bottomrule \end{tabular} \label{tab:sysparams} \end{table} \subsection{Experiments on More Noise Standard Deviations} \label{app:noiseexp} In \Cref{fig:undamped005} and \Cref{fig:undamped04}, we compare methodologies on data from undamped and damped pendulums simulated with noise standard deviations of 0.05 and 0.4 $\si\radian$, respectively. \begin{figure}[h!]
\centering \includegraphics[width=0.45\columnwidth]{Figures/undamped_0.05_results.pdf} \includegraphics[width=0.465\columnwidth]{Figures/damped_0.05_results.pdf} \caption{Comparison of training methodologies on data from a double pendulum simulated with a noise standard deviation of 0.05 $\si\radian$.} \label{fig:undamped005} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\columnwidth]{Figures/undamped_0.4_results.pdf} \includegraphics[width=0.45\columnwidth]{Figures/damped_0.4_results.pdf} \caption{Comparison of training methodologies on data from a double pendulum simulated with a noise standard deviation of 0.4 $\si\radian$.} \label{fig:undamped04} \end{figure} As we can see, training with our methodology continues to outperform acceleration and next-state regression on the undamped pendulum, though it achieves comparable performance on the damped pendulum. We suspect that the reason for this difference is that, as the damped trajectory decays toward rest, noise dominates the vanishing signal, thereby equalizing the performance of the methodologies.
{ "timestamp": "2021-05-06T02:07:23", "yymm": "2105", "arxiv_id": "2105.01811", "language": "en", "url": "https://arxiv.org/abs/2105.01811" }
\section*{Appendix} \begin{figure}[h] \includegraphics{anechoic.pdf} \caption{The drone testing and evaluation setup. (a) and (b) show the drones that we used (DJI Flamewheel F450 and Holybro S500) in the outdoor UAS testing facility. (c) shows the indoor anechoic chamber used for GPS spoofing and jamming experiments.} \label{fig:anechoic} \end{figure} \section{GPS and Spoofing Attacks} \label{sec:background} \subsection{GPS Overview} The Global Positioning System (GPS) is the most widely used Global Navigation Satellite System (GNSS) and uses the L1 frequency band. GPS consists of 31 operational satellites at an altitude of approximately 20,220 km\footnote{As of January 2021. \cite{gps2020gov}}. Each satellite continuously transmits navigation messages containing timing information, the satellites' ephemeris data, and other necessary information that enables a receiver on the ground to localize itself. The navigation messages are spread using a coarse-acquisition (C/A) code unique to each satellite and transmitted on a $1575.42\unit{MHz}$ carrier. The C/A code is public and contains $1023$ bits (also referred to as \emph{chips}), repeated every $1\unit{ms}$. Military GPS signals use a longer, secret spreading code. This paper focuses on civilian GPS signals, as they are widely used even in security-critical applications~\cite{silverstein2016electric, goldstein2013gps}. The navigation data is organized into frames of $1500\unit{bits}$, each consisting of five $300$-bit subframes, transmitted at $50\unit{bps}$~\cite{borre2007software}. These subframes contain satellite clock information and satellite orbital information. The ephemeris data is updated every 2 hrs and is valid for 4 hrs~\cite{dunn2012global}. \\ \noindent A typical GPS receiver consists of four main components: i) the RF front end, ii) the acquisition module, iii) the tracking module, and iv) the Position Velocity Time (PVT) module. \noindent\textbf{RF front-end} receives raw RF signals and converts them to an intermediate frequency for efficient processing. Each satellite is assigned a ``channel'', which is similar to a hardware pipeline for processing a single satellite. \noindent\textbf{Acquisition module} performs a two-dimensional search for visible satellite signals in the received signal by correlating the received signal with a locally generated replica of each satellite's C/A code. The search is two-dimensional, over the time domain and the frequency domain, to account for the code phase delay and the Doppler shift that arise from the satellite's and the receiver's motion. If the code and Doppler searches result in a correlation peak above a certain threshold, the receiver then switches to tracking and demodulating the navigation message data. \noindent \textbf{Tracking module} is responsible for tracking the code phase and the Doppler shift provided by the acquisition module. It also demodulates the navigation messages and passes them on to the PVT module. \noindent \textbf{Position Velocity Time Estimation (PVT)} module decodes the raw navigation bits and calculates the pseudorange between each satellite and the receiver. A receiver requires information from at least four satellites to accurately calculate position, velocity, and time. The PVT module is the last block of the GPS receiver; it implements algorithms to compute navigation solutions and delivers information in appropriate formats (e.g.,\xspace RINEX, UBX, NMEA~\cite{navmsgformat}) for further processing.
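\noindent As a minimal numerical illustration of the acquisition module's two-dimensional search (a sketch under stated assumptions, not GNSS-SDR's implementation: a random $\pm1$ sequence stands in for a satellite's 1023-chip C/A code, and the sample rate, Doppler, and delay values are illustrative), all code phases for a given Doppler bin can be evaluated at once with an FFT-based circular correlation:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
fs, spc = 4.092e6, 4                     # sample rate; samples per chip
chips = rng.choice([-1.0, 1.0], 1023)    # stand-in for a 1023-chip C/A code
code = np.repeat(chips, spc)             # one 1 ms code period (4092 samples)
n = code.size
t = np.arange(n) / fs

# Received baseband signal: delayed code, residual Doppler, and noise.
true_delay, true_dopp = 1234, 1500.0     # samples, Hz
rx = np.roll(code, true_delay) * np.exp(2j * np.pi * true_dopp * t)
rx = rx + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Two-dimensional search: for each Doppler bin, wipe off the carrier and
# test every code phase in parallel via FFT-based circular correlation.
code_fft = np.conj(np.fft.fft(code))
best = (0.0, 0.0, 0)
for dopp in np.arange(-5000.0, 5000.1, 250.0):
    wiped = rx * np.exp(-2j * np.pi * dopp * t)    # carrier wipe-off
    caf = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft)) ** 2
    if caf.max() > best[0]:
        best = (caf.max(), dopp, int(caf.argmax()))
print("Doppler %.0f Hz, code phase %d samples" % (best[1], best[2]))
# -> Doppler 1500 Hz, code phase 1234 samples
\end{verbatim}
A real receiver then refines these coarse code-phase and Doppler estimates in its tracking loops.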
\subsection{Attacker goals and assumptions} \label{subsec:spoofing_attacks} In a GPS spoofing attack, an adversary transmits specially-crafted radio signals identical to authentic GPS satellite signals. The spoofing signals are generated for an attacker-defined trajectory or static position and are typically transmitted using a software-defined radio hardware platform. All the information necessary for generating GPS signals, such as modulation schemes, message formats, and spreading codes, is public knowledge. The goal of an attacker can be to i) force the user to calculate a wrong geographic location, ii) forge timing information, or iii) execute a denial-of-service attack by causing interference. During a spoofing attack, the GPS receiver locks onto the stronger signal, i.e.,\xspace the attacker's signals, ignoring the weaker legitimate satellite signals. This results in the receiver computing a false position, velocity, and time based on the spoofing signals. Note that the received GPS signal power on the ground is typically around $-127.5\unit{dBm}$; it is therefore trivial for an attacker to overshadow the legitimate signal with the spoofing signal. In this work, we focus on an attacker that forces the user to calculate a wrong geographic location. We do not consider an attacker whose goal is to cause a denial-of-service attack by transmitting jamming signals. An attacker can manipulate the calculated PVT solution in two ways: i) manipulate the ToA of messages or ii) manipulate navigation message contents (e.g.,\xspace{ satellite location, transmission time}). We base the attacker model on work done in~\cite{tippenhauer2011requirements} and the drone hijacking strategies proposed in~\cite{noh2019tractor}. We assume the following about the attacker. The attacker can have omnidirectional or directional antennas and can spoof any number of satellites. We do not restrict the position of the attacker. The attacker is aware of SemperFi and can craft spoofing signals accordingly. We assume that the attacker has not compromised the onboard sensors and that these sensors provide valid, unadulterated data. The attacker can execute a spoofing attack using any of the methods mentioned earlier, including the sophisticated seamless-takeover attack described in~\cite{tippenhauer2011requirements}. In this attack, the receiver does not undergo an abrupt loss of signal reception or lock: the attacker keeps the navigation message content identical to the legitimate GPS signals and gradually introduces offsets in the code phase delays, affecting the pseudorange calculations. The most popular way of executing such spoofing attacks is to use GPS signal generators (hardware~\cite{labsat} or software~\cite{osqzss2015gpssim}) to generate the spoofing signals. Our proposed GPS receiver, \NoCaseChange{SemperFi\xspace}, can counteract all the types of attackers mentioned above. We focus specifically on stealthy seamless-takeover attacks and, in general, attacks that are not only hard to detect but also pose challenges to realizing a fully autonomous GPS receiver capable of uninterrupted true-location estimation even in an adversarial setting. \section{Conclusion} In this paper, we presented \NoCaseChange{SemperFi\xspace}, a single-antenna GPS receiver that eliminates spoofing signals and is capable of providing uninterrupted legitimate location estimates even in the presence of a strong adversary.
We designed and implemented SemperFi in GNSS-SDR, capable of real-time operation, and evaluated it using various GPS signal traces, real drones, and popular embedded platforms. We showed that \NoCaseChange{SemperFi\xspace} is capable of identifying adversarial peaks by executing flight patterns less than 50 m long and can recover the true location in under 10 s. Finally, we release the implementation of our receiver design to the community for usage and further research. \section{Discussion} \paragraph{Flexible design:} SemperFi is designed to be flexible and versatile. In addition to integrating SemperFi into the acquisition module as shown in~\Cref{sec:implementation}, we can use \NoCaseChange{SemperFi\xspace} as a plug-in module that filters out adversarial signals and passes legitimate signals on to a conventional receiver. This mode of operation requires minimal modifications to existing receivers; the SIC technique is what allows \NoCaseChange{SemperFi\xspace} to be used as a plug-in module. Furthermore, SemperFi's capabilities can also be extended to other satellite navigation systems like GALILEO, as they follow a similar operating principle of code-division multiple access using spreading codes and computation of pseudoranges. Lacking robust countermeasures, these systems face similar security issues. \paragraph{Limitations:} The current design restricts \NoCaseChange{SemperFi\xspace} to aerial vehicles. It is challenging to design these maneuvers for terrestrial vehicles due to the vehicle's mobility constraints. As shown in~\cite{narain2019security}, an attacker can exploit the short-term stability of IMU sensors due to predictable maneuvers and limited mobility. As seen in recent GPS-related incidents~\cite{tesla2019hack,shanghai2019gpshack}, road and oceanic navigation pose a challenge to efficient peak identification given the limited ability to perform maneuvers. Even if the drone is operable, frequently changing weather conditions can affect the drone's maneuverability, especially high-velocity crosswinds. However, the algorithm can be modified to work with crosswinds by determining the force and velocity of the wind as proposed in~\cite{neumann2015real}, or by equipping the drone with solid-state anemometers. An attacker aware of SemperFi can indeed start transmitting different spoofing signals periodically. However, this will result in an attack similar to jamming, which can be trivially detected. For an attacker to stealthily circumvent SemperFi, the spoofing signals should reflect the UAV's random maneuver so that the GPS-derived motion matches the IMU-derived one. An attacker capable of generating spoofing signals in real time based on the random maneuver will succeed; the feasibility of executing such an attack remains to be studied. The attacker would have to predict the maneuver accurately to evade peak identification. To do this, the attacker would have to track the drone in real time with delays of only a few milliseconds, since within this window the attacker has to predict the next coordinate to spoof, construct the GPS signal, and transmit it. The refresh rate of typical GPS receivers gives the attacker about 200\unit{ms} to perform all these tasks. An attacker could attempt this by deploying passive quadcopter-detection RADARs as described in~\cite{guvencc2017detection, DroneDefense, fang2018experimental}. The above references all detect and track drones using advanced techniques like ultra-wideband scanning.
However, none generate GPS spoofing signals to emulate the drone's behavior. Given the time constraints, it is hard for an attacker to generate spoofing signals while tracking the drone's movements. Concerning the generalization of the proposed solution, SemperFi exploits the short-term stability of the IMU and executes quick maneuvers. Designing these maneuvers for terrestrial vehicles is a challenge due to the constraints on the vehicle's mobility. The problem is similar when a UAV is flying between obstacles. Note that the attacker faces similar constraints in successfully forcing the drone onto a different path. A potential solution is to let the UAV decide the maneuver in real time based on its degrees of freedom; however, this might increase the probability of the attacker guessing the maneuver correctly. Another limitation of \NoCaseChange{SemperFi\xspace} is that tracking legitimate signals fails if the attacker has a power advantage of more than 15\unit{dB} and is transmitting different navigation messages. However, this 15 dB limit is imposed by our signal processing; peripherals like multiple directional antennas and receivers can extend it. Moreover, an attacker transmitting with more than 15\unit{dB} of power advantage can easily be detected and localized by the receiver. An attacker can cause a denial-of-service attack by transmitting multiple signals that overload the system. Even though \NoCaseChange{SemperFi\xspace} can handle multiple peaks through an iterative cancellation process, it is prone to resource exhaustion, as each cancellation iteration increases processing overhead. The spoofing detection technique that we have adopted also has some limitations. For example, it can reliably detect spoofing only if auxiliary peaks are visible. However, as mentioned earlier, SemperFi can work with multiple spoofing detection techniques that do not rely on auxiliary peaks. \section{Security and Performance Evaluation} We evaluate \NoCaseChange{SemperFi\xspace}{} and showcase its performance in recovering legitimate GPS signals under various attack settings and signal traces. Specifically, we use three different datasets that contain both spoofing and legitimate signals: i) synthetic GPS signals generated using COTS GPS simulators, ii) a public repository of GPS spoofing signals (TEXBAT)~\cite{humphreys2012texas}, and iii) recorded real-world GPS signals. \bigskip \subsection{Evaluation Traces} \label{sec:sig_traces} \textbf{GPS Simulator:} We performed most of our evaluation on synthetic signal traces generated locally using GPS-SDR-SIM~\cite{osqzss2015gpssim}, an open-source tool for generating GPS signals. This provides granular control over signal properties such as power levels, temporal delays, and Doppler shifts, enabling the user to generate a variety of spoofing scenarios. We evaluated \NoCaseChange{SemperFi\xspace} against both static scenarios (stationary locations) and dynamic scenarios (motion trajectories). These signals were transmitted using two USRP B210s, one each for the legitimate and attacker signals. We recorded the signals using a USRP N210 at a rate of $10\unit{MSa/s}$. We wired all RF front-ends to prevent signal leakage, as transmitting GPS signals over the air is illegal and hazardous. For static and dynamic scenarios, we picked locations in downtown San Francisco. We generated the attacker's signal such that the obtained location is at a specific offset from the legitimate location.
We picked locations with the offset increasing in steps of 500 m up to a maximum spoofed offset of 3500 m. The offset locations were specifically selected to simulate various scenarios. \noindent \textbf{Texas Spoofing Test Battery (TEXBAT):} TEXBAT is a set of civilian GPS spoofing scenarios that is a standard benchmark for evaluating spoofing countermeasures. The repository consists of spoofing signal traces that include both position- and time-push scenarios. TEXBAT also provides scenarios where the attacker's signals and the legitimate signals are synchronized, similar to the strong seamless-takeover attack. We evaluate the effectiveness of \NoCaseChange{SemperFi\xspace} against both static and dynamic position push. These signal traces were recorded at $25\unit{MSa/s}$. The traces are seven minutes long, and the attacker starts spoofing roughly $90-100\unit{s}$ into the signal trace. \begin{figure}[t] \centering \includegraphics[width=0.8 \columnwidth]{live_setup.pdf} \caption{Signal recording setup A) GPS signal RX (USRP N210), B) ANT-555 active GPS antenna with a 5V bias-tee, C) GPS signal TX and D) GPS simulator control unit.} \label{fig:live_setup} \end{figure} \noindent \textbf{Live GPS Recordings:} We also evaluated \NoCaseChange{SemperFi\xspace} against a combination of live legitimate GPS signals and attacker signals. This covers the real-world spoofing scenario where the attacker transmits spoofing signals while the receiver is locked on to legitimate signals. We recorded a set of real-world GPS signal traces through extensive war-driving in our locality\footnote{location anonymised.}. We recorded the legitimate GPS signals using the setup shown in~\Cref{fig:live_setup}: we captured the GPS signals using an ANT-555 antenna supplied with 5 V DC and combined the received signal with the attacker's signals using a combiner. We used GPS-SDR-SIM to generate the attacker's signals, with the spoofed location set 4.1\unit{km} away from the original location. Hard-wiring the attacker gave them a clear channel to the victim receiver, allowing us to evaluate \NoCaseChange{SemperFi\xspace}'s performance in eliminating the spoofing signal under the attacker's best-case conditions. \subsection{Evaluation Metrics and Results} In this section, we evaluate our implementation of \NoCaseChange{SemperFi\xspace}{} and its components. We evaluate the API's performance by studying the maneuvers' feasibility and the ability to correctly distinguish legitimate from adversarial signals. The evaluation was performed using real drones and also using Gazebo~\cite{gazebo}, a robotics simulator. The metrics for evaluating the recovery process are amplitude estimation accuracy, the accuracy of the recovered location, and the time required to perform recovery. To further evaluate the results, we study how attacker synchronization and the attacker's power advantage over the legitimate signals affect the recovered location's accuracy. We also evaluate the effects of jamming attacks on the drones. \paragraph{Adversarial Peak Identification:} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{maneuvers_v1.pdf} \caption{Pseudorandom trajectories generated by the API. The maneuvers are triggered at location (0,0). The region marked in red is considered the ``No-maneuver'' zone, as the attacker has a higher probability of estimating the drone's movement there.} \label{fig:api_all_maneuvers} \end{figure} The API was evaluated in a simulated environment using Gazebo~\cite{gazebo} and on real drones.
For evaluation, we follow a scenario where the attacker is successful in executing a spoofing attack. The attacker executes either a seamless takeover or a hard spoofing attack. In either case, the API will trigger the maneuver when the peak separation is more than 500\unit{ns}, as explained in~\cite{ranganathan2016spree}; this threshold is configurable. We evaluate the peak identification strategy by studying the feasibility of the maneuvers and the ability to accurately distinguish between the trajectory of the drone as tracked by the IMU sensors and by the GPS receiver. IMU sensors tend to accumulate errors over time. For a commercial navigation-grade MEMS sensor, these errors, or `random walks', can be up to 1.59\unit{km/hr}~\cite{narain2019security}. Thus, for a 60\unit{s} maneuver, position estimates can drift up to 26\unit{m}. Based on these characteristics, the API uses a threshold of 5\unit{m} to decide whether the receiver is tracking legitimate or adversarial signals. Researchers in~\cite{woodman2007introduction} show that a sensor fusion algorithm that combines IMU measurements with a magnetometer can significantly increase the sensors' accuracy. \begin{figure}[t] \centering \includegraphics[width= 0.9 \columnwidth]{real_maneuver.pdf} \caption{(a) shows the comparison of position estimates of the EKF and GPS in an adversarial setting where the attacker is unable to estimate the drone's maneuver. (b) shows a comparison of position estimates of the EKF and GPS in a non-adversarial setting. The trajectories in (b) show trivial deviation, whereas the trajectories in (a) show significant deviation. (c) shows the flight log, the UAV's actual trajectory, and the maneuver generated by the API.} \label{fig:real_maneuver} \end{figure} We tested maneuvers with flight times of 10\unit{s} to 60\unit{s}. After a thorough evaluation, we concluded that increasing the duration will: i) cause more inaccuracies because of IMU drift and ii) allow the attacker an increased window in which to predict the maneuver and spoof. To accurately identify malicious signals, it is essential to follow a trajectory that is unknown to the attacker. Performing the identification along the current trajectory would indeed reduce the required time; however, a major limitation is that the current trajectory is either defined by the attacker or known to the attacker. A random trajectory increases complexity and time but, since the attacker is unaware of it, has a better chance of succeeding. \Cref{fig:api_all_maneuvers} shows a few pseudorandom trajectories generated by the API. Based on factors like the accuracy of civilian GPS, IMU errors and drifts, and the EKF leash described in~\cite{noh2019tractor}, we determined a ``No-maneuver'' zone that is 20 m wide. It is necessary for the drone to execute a maneuver that deviates from the current trajectory and is not straight. Trajectory no. 2 in~\Cref{fig:api_all_maneuvers} deviates from the current trajectory but follows a straight line; hence, it is possible for an attacker to predict it and generate spoofing signals accordingly. Thus, it is necessary to perform an evasive maneuver. Flight time for these maneuvers was between 10--20\unit{s}. When deciding on the maneuver, it is vital to consider the drift that the IMUs will experience, the speed at which the UAV can execute maneuvers (UAVs reduce speed significantly when turning), and the accuracy of GPS measurements. The surrounding terrain and weather also play an important role in determining the maneuver.
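\noindent The decision rule itself is simple. The following sketch illustrates the idea (it is not the exact correlation algorithm of~\Cref{sec:api}; the trajectories and the use of a mean deviation are illustrative), using the 5\unit{m} drift-based threshold discussed above:
\begin{verbatim}
import numpy as np

DRIFT_THRESHOLD_M = 5.0  # from the IMU random-walk analysis above

def gps_is_adversarial(ekf_xy, gps_xy):
    """ekf_xy, gps_xy: (T, 2) position series logged during the maneuver."""
    # Align both series to the maneuver start so that only the shape of
    # the flown trajectory is compared, not the absolute position fix.
    dev = np.linalg.norm((ekf_xy - ekf_xy[0]) - (gps_xy - gps_xy[0]), axis=1)
    return dev.mean() > DRIFT_THRESHOLD_M

# Example: the drone actually turns, while a spoofer keeps reporting
# motion along the pre-maneuver heading.
t = np.linspace(0.0, 15.0, 150)[:, None]
actual  = np.hstack([5 * np.sin(0.3 * t), 5 * (1 - np.cos(0.3 * t))])
spoofed = np.hstack([1.5 * t, np.zeros_like(t)])
print(gps_is_adversarial(actual, spoofed))       # -> True
print(gps_is_adversarial(actual, actual + 0.5))  # constant bias -> False
\end{verbatim}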
For our evaluation, we assume that if the drone is operable, it can perform the required maneuvers. ArduCopter can provide accurate positioning with IMU measurements for 10\unit{s}~\cite{arducoptergps}, after which the IMU measurements start drifting due to the lack of rectification. For successful identification, the UAV should complete the maneuver in under 20\unit{s} and travel at least 30\unit{m} from the position where the identification maneuver was triggered. \Cref{fig:real_maneuver}(a) and (b) show the deviations in trajectories as estimated by the EKF and GPS sensors in adversarial and non-adversarial settings, respectively. The correlation algorithm explained in~\Cref{sec:api} can detect such deviations. \Cref{fig:real_maneuver}(c) shows the actual flight path flown by the UAV and the projected path as generated by the maneuver generation algorithm.\footnote{The actual maneuver takes 15\unit{s}. The plot includes the time the drone takes to arm/disarm.} We used a Holybro S500 drone to generate the data in~\Cref{fig:real_maneuver}. Prior works like~\cite{savior2020usenix} show the use of sensor fusion to detect spoofing attacks. However, these solutions may not provide reliable attack detection, as an attacker can stealthily introduce spoofing signals. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{amp_eval.pdf} \caption{Amplitude estimation accuracy in various scenarios. Attacker power advantage does not apply to cases 1 and 3. In case 4, each satellite is spoofed. Power advantage refers to the advantage that the attacker's signal has over the legitimate signal.} \label{fig:amp_eval} \end{figure} \paragraph{Amplitude Estimation:} Amplitude estimation plays a vital role in successful signal recovery. In SemperFi, we leverage the maximum CAF value, or the correlation coefficient value, to estimate the original signal's amplitude. In this strategy, the estimate's accuracy is susceptible to various factors like interference caused by signals from other satellites, the presence of adversarial signals, and artifacts introduced by the wireless channel. To evaluate the accuracy, we conducted an experiment executing amplitude estimation in four cases; refer to~\Cref{fig:amp_eval} for the results. The accuracy of the amplitude estimate increases as the attacker's power advantage increases. As a result of inaccuracies in amplitude estimation caused by Doppler shifts, clock skews, and phase shifts, SemperFi may perform multiple iterations to attenuate the adversarial signal and recover successfully. \paragraph{Recovered Location Accuracy:} We evaluate \NoCaseChange{SemperFi\xspace}'s effectiveness in eliminating the spoofing signal by determining the accuracy of the location computed after the signal passes through the various blocks of \NoCaseChange{SemperFi\xspace}. We use the Universal Transverse Mercator (UTM)~\cite{usgsutm} system to present our location accuracy results. We evaluated the performance of \NoCaseChange{SemperFi\xspace} against both the static and dynamic spoofing scenarios present in the datasets described in~\Cref{sec:sig_traces}. \begin{figure}[t] \centering \includegraphics[width=0.8 \columnwidth]{recovery_utm_plot.pdf} \caption{Changes in recovered path and legitimate path coordinates as represented in the UTM coordinate system for recovery of the dynamic spoofing scenario (GPS Simulator).} \label{fig:recovery_utm_plot} \end{figure} First, we evaluate the performance of \NoCaseChange{SemperFi\xspace} against the dataset generated using GPS signal generators.
The UTM plots depict the variations in locations and a timeline of events. As seen in~\Cref{fig:recovery_utm_plot}, the spoofer starts at point (a). For roughly 15 s, the attacker is in sync with the legitimate signals; during this time, the acquisition plot shows no auxiliary peaks. After 15 s, at point (b), the attacker starts introducing offsets in the calculated location. As a result, the receiver starts deviating from the expected trajectory. As soon as the peak separation is enough to rule out multi-path transmissions, \NoCaseChange{SemperFi\xspace}{} is activated, and at point (c), the receiver starts following the recovered trajectory in spite of the adversarial presence. The average deviation of the recovered path from the legitimate path is 10.1\unit{m} (Easting) and 2.6\unit{m} (Northing). \Cref{fig:static_recovery} shows the results of the recovery operation on static scenarios across all three datasets: the GPS simulator traces, where the spoofed offset is 6.2\unit{km} and the recovered offset is 2\unit{m}; the live recording, with a recovered offset of 6\unit{m}; and TEXBAT's power-matched position-push scenario, where the attacker spoofs only in the $Z$ plane. Figure~\ref{fig:recovery_ds6_plot} shows the performance of \NoCaseChange{SemperFi\xspace} against TEXBAT's dynamic position push. Note that TEXBAT's position push consists only of altitude, which is known to be error-prone for GPS~\cite{rothacher2002estimation}. \NoCaseChange{SemperFi\xspace} was able to recover the spoofed $Z$-plane offset with an accuracy of 108\unit{m}. \begin{figure}[t] \centering \includegraphics[width= 0.8 \columnwidth]{static_recovery.pdf} \caption{The recovered offset and spoofed offset for three scenarios.} \label{fig:static_recovery} \end{figure} \begin{figure}[t] \centering \includegraphics[width= \columnwidth]{recovery_ds6_plot.pdf} \caption{Variations in recovered path and legitimate path coordinates represented in the UTM coordinate system.} \label{fig:recovery_ds6_plot} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.8 \columnwidth]{ase_pse_3dB.pdf} \caption{The effect of peak separation on the accuracy of the recovered location. The closer the peaks, the harder it gets to accurately track them.} \label{fig:ase_pse} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=0.8 \textwidth]{10dB_2_step_rec_acq_plot.pdf} \caption{Two-step signal attenuation of a strong adversarial signal. (a) shows the original acquisition plot, (b) shows the acquisition plot where the legitimate peak is slightly visible, and (c) the final acquisition plot with a fully suppressed adversarial peak.} \label{fig:high_power_acqplot} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{3db_15db_iq.pdf} \caption{Discrete-time scatter plot of the recovered nav message where the attacker has (a) 3 dB, (b) 5 dB, (c) 10 dB, and (d) 15 dB power advantage. A powerful attacker adds noise and hence distorts legitimate nav messages.} \label{fig:3db_15db_iq} \end{figure} \begin{figure}[t] \centering \includegraphics[width= 0.9 \columnwidth]{pse_15dB.pdf} \caption{Spoofed offset vs offset in recovered location for an attacker with a 15 dB power advantage. \NoCaseChange{SemperFi\xspace} uses the Pseudorange Rectifier{} for recovery. For locations refer to~\Cref{sec:sig_traces}.} \label{fig:pse_15db} \end{figure} \paragraph{Attacker synchronization:} One major factor that affects the recovered locations is the attacker's synchronization with the legitimate signals.
In other words, the effectiveness of eliminating spoofing signals depends on the temporal shifts in the ToA of the legitimate and spoofing satellite navigation messages. The closer the synchronization, the harder it is to recover entirely without additional processing. We evaluated the effects of attacker synchronization by generating spoofing scenarios where the attacker spoofs locations at offsets in increments of 500 m from the original position. This results in a corresponding temporal shift between the attacker's spoofing signal and the legitimate signal. The minimum peak separation was 800 ns at 500 m, and the maximum peak separation was 5500 ns at 3500 m. Note that this peak separation depends on the satellite constellation at any point in time. \Cref{fig:ase_pse} shows the results of this experiment; the evenly spaced distance offsets (in meters) are represented by the corresponding mean peak separation across the tracked satellites (in nanoseconds, e.g.,\xspace 800 ns in place of 500 m). Peak separation directly affects how the attacker's signals interact with the legitimate signals. Close peaks pose a challenge to the tracking loops, and as a result, the tracking loops undergo signal cross-over, i.e.,\xspace the peaks are so close that the tracking loop starts tracking the wrong signal. This is evident from the higher variation in recovered locations for scenarios with closer peaks. \paragraph{Effect of Attacker's Power Advantage:} Finally, we evaluate the performance of \NoCaseChange{SemperFi\xspace} against attackers with power advantages of up to 15 dB. Note that in seamless takeover attacks, the maximum power difference required to execute the attack successfully is not more than $2-3\unit{dB}$~\cite{tippenhauer2011requirements,humphreys2012texas}; the TEXBAT repository's seamless takeover attack trace has a power difference of not more than 10\unit{dB}. We created spoofing scenarios where the attacker has a power advantage of 3 to 15 dB. \NoCaseChange{SemperFi\xspace} can attenuate stronger peaks and make the suppressed, weaker legitimate peaks visible in the acquisition plot. \Cref{fig:high_power_acqplot} shows a multi-stage attenuation process for an adversary with a 15 dB power advantage. However, as seen in the discrete-time scatter plot in~\Cref{fig:3db_15db_iq}(d), in the case of an attacker with a 15 dB power advantage, the adversarial signal introduces significant noise, which distorts the navigation bits. In such a scenario, the receiver can switch to the pseudorange rectifier{} and recover the correct location by rectifying the pseudoranges, albeit with reduced accuracy. \Cref{fig:pse_15db} shows the results of signal recovery in the presence of an attacker with a 15 dB power advantage. \begin{table}[t] \centering \begin{tabular}{|c|c|} \hline \textbf{Model} & \textbf{Processing time} \\ \hline RPi 3B+ & 11.57 s/itr \\ \hline RPi 4 & 6.98 s/itr \\ \hline Jetson Nano & 8.35 s/itr \\ \hline Jetson Xavier & 5.19 s/itr \\ \hline Intel Core i7 & 2.79 s/itr \\ \hline Intel Xeon & 2.12 s/itr \\ \hline \end{tabular} \caption{A comparison of the time required by each system to perform one iteration of signal cancellation.} \label{tbl:performance} \end{table} \paragraph{Real-time performance:} Implementing SemperFi in GNSS-SDR allows us to deploy it on various mobile platforms and embedded systems. We evaluate its performance by deploying and executing SemperFi on the following systems.
i) Raspberry Pi 3B+, ii) Raspberry Pi 4, iii) NVIDIA Jetson Nano, iv) NVIDIA Jetson Xavier, v) Intel Core i7\footnote{https://www.dji.com/manifold-2}, and vi) Intel Xeon E5-2630\footnote{not currently used in any UAV platform and ported only for comparison}. These are some of the standard systems used as flight controllers onboard UAVs. We use the signal traces described in~\Cref{sec:sig_traces} for evaluating the performance. The sampling rate of $10\unit{MSa/s}$ plays a significant role in determining the performance of SemperFi, as it is directly related to the processing overhead. Our primary evaluation metric is the time required per iteration of cancellation. It is important to note that GNSS-SDR is itself a resource-demanding application. Refer to~\Cref{tbl:performance} for a comparison of each system's performance. Each platform may take up to 5 iterations for complete recovery, depending on the accuracy of the estimates. Thus, complete signal recovery may add delay to the calculation of the PVT solution; in the case of the Jetson Xavier, for example, by 25.96\unit{s}. It is important to note that these values are from a non-optimized version of SemperFi. It is possible to improve the performance by optimizing SemperFi for a specific system, leveraging its unique characteristics and features. For example, SemperFi can be re-programmed to use the CUDA cores available on the NVIDIA Jetson Nano and Xavier. In general, it is best to deploy SemperFi on an FPGA, as this will significantly improve the performance. \paragraph{UAVs' Resilience to Jamming Attacks:} Unlike in spoofing attacks, in jamming attacks the attacker's primary goal is to cause a denial of service on GPS. Hard spoofing attacks, as described in~\cite{noh2019tractor}, and jamming attacks will both cause the receiver to lose its lock. However, in the case of a hard spoofing attack, the receiver will be able to re-acquire the lock, while in the case of a jamming attack, it will not. For a successful jamming attack, an attacker has to transmit noise or a simple amplitude-modulated continuous wave at the GPS carrier frequency. An attacker can also transmit modulated dummy GPS navigation messages (random 1s and 0s) using GPS modulation techniques and execute a successful denial-of-service attack. It is important to note that GPS receivers provide a processing gain of 43.1\unit{dB}; thus, the jamming signal should be strong enough to overpower the legitimate signal at the receiver. Since we performed the GPS jamming experiments in an anechoic chamber, creating a GPS-denied region was accomplished by simply turning off our GPS transmitter. All modern drones are equipped with fail-safe mechanisms, which the drone activates in case GPS is unavailable. These fail-safes can be: i) land at the current location, ii) return to home using IMU sensors, or iii) hover at the current location until a GPS fix is re-established. In our case, the drone was programmed to land at the current location. Prior work~\cite{olsen2003jamming} has showcased various GPS jamming techniques and provides a comprehensive analysis of GPS receivers' susceptibility to jamming. \section{Implementation} \label{sec:implementation} \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{semperfi_gnss-sdr_impl.pdf} \caption{LSR is implemented as part of GNSS-SDR, and the API is implemented as part of the UAV flight controller.}
\label{fig:gnss-sdr-impl} \end{figure} The two sub-systems that make up SemperFi are implemented independently of one another: the API is implemented at the flight-controller level, while the LSR, along with the spoofing detector, is implemented in GNSS-SDR as part of the acquisition block. The two components interact with each other over a TCP socket. We implemented the LSR module of \NoCaseChange{SemperFi\xspace}{} in GNSS-SDR~\cite{fernandez2011gnss}, an open-source software-defined GNSS receiver written in C++, and the API module using consumer drones. Refer to~\Cref{fig:gnss-sdr-impl} for a schematic of the implementation. GNSS-SDR follows the GNU Radio architecture and supports the processing of pre-recorded signals from a file source as well as from software-defined radio front-ends like a USRP~\cite{ettus}. GNSS-SDR follows a hardware receiver's design as described in~\Cref{sec:background}, except that all the components are implemented in software. Signals from individual satellites are processed by individual \emph{channels}; each channel is like a hardware pipeline of various GPS signal processing blocks, including acquisition, tracking, and PVT calculation. At run time, GNSS-SDR builds the receiver from these blocks based on specifications in a user-defined configuration file, which allows loosely coupled operation. In our implementation and evaluation, we use software-defined radio hardware platforms manufactured by Ettus Research~\cite{ettus}, specifically a USRP B210 and an N210 with an SBX-40 daughterboard, for recording and providing raw data. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{s500.pdf} \caption{Hardware setup showcasing the Holybro S500 drone, the radio controller, and the ground control station.} \label{fig:s500} \end{figure} \paragraph{Adversarial Peak Identifier\xspace (API)} In \NoCaseChange{SemperFi\xspace}, the API is implemented as an independent module that interacts with the LSR. It was implemented on an unmanned aerial vehicle in a simulated environment as well as on a DJI Flamewheel F450 and a Holybro S500. These drones were specifically chosen as they support Pixhawk 4~\cite{pixhawk}, an advanced autopilot system, and the ArduCopter~\cite{ardupilot} firmware. Refer to~\Cref{fig:s500} for the hardware setup. A spoofing attack can cause errors in the EKF estimates and raise EKF variance errors, as the GPS and EKF estimates do not match. In such a case, ArduCopter activates the EKF failsafe. Moreover, ArduCopter may raise GPS glitch errors and activate the GPS failsafe. By default, ArduCopter switches to \emph{LAND} mode and lands at the current location. To prevent this, we temporarily disabled the EKF and GPS failsafes by manipulating the \emph{FS\_OPTIONS} parameter. The pseudorandom maneuver is implemented as a sequence of left/right turns determined at run time. The algorithm generates a pseudorandom set of velocity vectors and instructs the autopilot to fly each heading for a specified time. To achieve this, we used the \emph{SET\_POSITION\_TARGET\_LOCAL\_NED} MAVLink message type to instruct the drone to move with a specified velocity for a specific time. In our implementation, we used \emph{DroneKit}~\cite{dronekit} installed on a Raspberry Pi 3B+ to generate the maneuver and instruct the flight controller to execute it. A specific sequence of these messages then carries out the entire maneuver. Once the UAV completes the maneuver, the API performs the correlation operation described in~\Cref{sec:api} and notifies the LSR over a TCP socket.
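\noindent A sketch of this maneuver-generation logic follows (DroneKit and pymavlink are assumed; the connection string, speed, turn angles, and segment durations are illustrative, not our exact parameters):
\begin{verbatim}
import math, random, time
from dronekit import connect, VehicleMode
from pymavlink import mavutil

vehicle = connect("/dev/ttyAMA0", wait_ready=True, baud=921600)
vehicle.mode = VehicleMode("GUIDED")

def send_velocity(vx, vy, duration_s):
    # Velocity-only setpoint in the local NED frame; the type mask tells
    # the autopilot to ignore the position and acceleration fields. The
    # setpoint is re-sent every second, as ArduCopter expects.
    msg = vehicle.message_factory.set_position_target_local_ned_encode(
        0, 0, 0, mavutil.mavlink.MAV_FRAME_LOCAL_NED,
        0b0000111111000111,          # enable only vx, vy, vz
        0, 0, 0, vx, vy, 0, 0, 0, 0, 0, 0)
    for _ in range(int(duration_s)):
        vehicle.send_mavlink(msg)
        time.sleep(1)

# Pseudorandom sequence of left/right segments, decided at run time so
# that the attacker cannot anticipate the flown trajectory.
heading, speed = 0.0, 3.0            # rad, m/s
for _ in range(random.randint(3, 5)):
    heading += random.choice([-1, 1]) * random.uniform(0.8, 1.6)
    send_velocity(speed * math.cos(heading), speed * math.sin(heading),
                  random.uniform(3.0, 5.0))
\end{verbatim}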
\paragraph{Legitimate Signal Retriever{} (LSR)} In SemperFi, we implement the LSR as a part of GNSS-SDR's acquisition module. As mentioned earlier, we use the auxiliary-peak-based spoofing detection technique proposed in~\cite{ranganathan2016spree}. We modified the acquisition block such that spoofing detection is triggered every time the acquisition block is activated. This allows \NoCaseChange{SemperFi\xspace}{} to recover from hard spoofing attacks that result in a loss of lock. Positive detection of an adversarial signal triggers further processing that includes peak identification, recovery signal generation, and signal recovery. GNSS-SDR allows external communication over TCP sockets, as outlined in~\cite{fernandez2012open}. This enables GNSS-SDR to interact with the UAV's flight controller, which is responsible for performing the peak identification maneuvers. Once the API validates spoofing and provides the peak information, the LSR proceeds to the cancellation and recovery state. At this stage, the LSR has the peak information and a rough estimate of the Doppler and the code phase delay of the satellite signal. The accuracy of the parameter estimates is directly related to the degree to which the adversarial peaks can be attenuated; \NoCaseChange{SemperFi\xspace} therefore performs re-acquisition using a more refined grid search to obtain more precise estimates. After performing this narrow search, the LSR generates a replica of the satellite signal using the tracking parameters estimated in the two-step acquisition process. The LSR also estimates the satellite signal's amplitude using the method described in~\Cref{sec:lsr}. We use Vector-Optimized Library of Kernels~\cite{west2016vector} functions to perform the vector operations; these functions provide a significant performance boost and reduce computation time. Once the signal is regenerated, it undergoes phase correction and cycles through phase shifts to determine the maximum attenuation. Due to inaccuracies in the amplitude, Doppler, and code phase delay estimates, a single attempt at recovery will not wholly attenuate the adversarial peak. SemperFi iterates the entire acquisition and recovery process until the legitimate signal is stronger than the adversarial signal. \noindent \emph{Pseudorange Rectifier{}:} This module is implemented as an optional component in the tracking module and is disabled by default. The receiver enables the pseudorange rectifier{} if the navigation message decoder fails to detect a preamble even after tracking the correct peak. Even if the navigation message decoder can find the preamble and decode the navigation bits, there is a possibility that the adversarial peak interferes with correct PVT estimation; in these cases, the receiver will also activate the pseudorange rectifier{}. The pseudorange rectifier{} can additionally be activated manually by setting a flag in the receiver configuration file. When the pseudorange rectifier{} is activated, the tracking module tracks the adversarial peak instead of the legitimate peak. It, however, still obtains the tracking parameters of the legitimate peak and uses the legitimate and adversarial code phase information to calculate $\Delta\tau^{i}$. The code phase information and the subframe start pointer, determined by the preamble position in a buffer of samples, are used to determine the ToA of the satellite signals. A sample counter accurately maintains this information, and $\Delta\tau^{i}$ is used to offset the sample counters appropriately.
The receiver still decodes the adversarial navigation messages; however, it uses the ToA of the legitimate signals for the pseudorange calculation, yielding the correct PVT solution. \section{Introduction} A wide variety of applications such as positioning, navigation, asset and personnel tracking, communication systems, power grids, emergency rescue and support, and access control use the Global Positioning System (GPS) ubiquitously to estimate location and time. Given the popularity of unmanned vehicular systems such as self-driving cars, the use of GPS in safety- and security-critical applications is increasing. Due to the lack of authentication in civilian navigation messages, GPS is vulnerable to signal spoofing attacks. In a GPS signal spoofing attack, the attacker transmits specially crafted signals that imitate satellite signals, with power high enough to overshadow the legitimate signals~\cite{amin2016vulnerabilities}. Several researchers have shown that it is possible to modify the course of ships~\cite{texas2013yachtspoofing}, unmanned aerial vehicles~\cite{shepard2012drone}, and self-driving cars~\cite{tesla2019hack} by simply spoofing GPS signals. There is also an increase in GPS spoofing incidents~\cite{c4ads2019spoofingreport} reported from around the world. For example, there are reports of thousands of ships in Shanghai falling victim to GPS spoofing~\cite{shanghai2019gpshack}, and reports~\cite{c4ads2019spoofingreport} of state actors using GPS spoofing and jamming in several countries to disrupt everyday affairs. With the widespread availability of software-defined radios and public GPS signal generator repositories~\cite{osqzss2015gpssim}, it is now possible to spoof GPS signals with less than \$100 of hardware equipment. Furthermore, it is possible to use GPS spoofing to trip power generators in smart grids, triggering false activation of automatic control systems and potentially leading to a wide-area power blackout~\cite{risbud2018vulnerability}. Proposed countermeasures are either cryptographic solutions or leverage physical-layer signal properties. Countermeasures that use some form of cryptographic authentication~\cite{kuhn2004asymmetric, wesson2012practical, lo2010authenticating, cheng2009authenticity} prevent attackers from generating arbitrary false GPS signals. However, they do not protect against attackers capable of recording and replaying legitimate GPS signals: the receiver's location and time are estimated using the GPS signal's time of arrival, not just the navigation message content. Other countermeasures that do not require cryptographic authentication rely on detecting anomalies in the received GPS signal's physical characteristics, such as the received signal strength~\cite{warner2003gps}, noise levels, direction or angle of arrival~\cite{meurer2016direction}, and other data that are readily available as receiver observables on many COTS GPS receivers. Some countermeasures~\cite{ranganathan2016spree} exploit the difficulty of completely canceling out legitimate GPS signals to detect stealthy, seamless-takeover attackers. A few countermeasures propose the use of additional sensors~\cite{jafarnia2012detection} and receivers~\cite{tippenhauer2011requirements,montgomery2011receiver} to detect spoofing attacks. The majority of the above schemes only detect a GPS spoofing attack, i.e., they raise an alarm in case of a spoofing attack and often require manual intervention, unlike SemperFi.
Moreover, existing spoofing mitigation techniques are ineffective against strong adversaries capable of completely overshadowing legitimate signals and against stealthy attackers, e.g., a seamless takeover~\cite{tippenhauer2011requirements} of the victim's GPS location without any signal disruption, even in the presence of redundant fail-safe sensors~\cite{narain2019security}. In summary, today's GPS receivers, specifically those implemented on UAVs, are incapable of uninterrupted operation during a spoofing attack. In this work, we present \NoCaseChange{SemperFi\xspace}, a single-antenna GPS receiver for UAVs that autonomously recovers and continues to output the legitimate location during a spoofing attack. \NoCaseChange{SemperFi\xspace} comprises three main building blocks: i) a spoofing detector, ii) the Adversarial Peak Identifier (API), and iii) the Legitimate Signal Retriever{} (LSR). The spoofing detector provides reliable detection of an ongoing attack; the API distinguishes the attacker's signal from the legitimate GPS signals; and, if necessary, the LSR synthesizes an appropriate recovery signal and eliminates the spoofing signal using a successive interference cancellation (SIC) technique. Once spoofing is detected, the adversarial peak identifier\xspace{} module instructs the drone to perform a randomly generated maneuver that lasts for about 10--20 s. \NoCaseChange{SemperFi\xspace} exploits the short-term stability of IMU sensors and correlates the IMU's trajectory estimates with those of the GPS; adversarial and legitimate signals are identified based on the correlation results. The peak identification information is then passed on to the legitimate signal retriever{}. Traditional wireless communication systems have successfully applied SIC to recover message contents. It is important to note that, in the case of GPS, in addition to the data contained within the navigation messages, it is essential to preserve the ToA of the satellite signal itself. To address this unique challenge in eliminating GPS spoofing signals, we develop algorithms to estimate physical characteristics such as the amplitude, phase, and ToA of both the legitimate and spoofing signals. We implement \NoCaseChange{SemperFi\xspace} using GNSS-SDR~\cite{fernandez2011gnss} and evaluate its performance against both synthetically generated and real-world GPS signals using consumer drones such as the DJI Flamewheel F450~\cite{djiflamewheel} and the Holybro S500~\cite{holybros500}. We also evaluate the performance of \NoCaseChange{SemperFi\xspace} on various embedded systems commonly used as UAV flight controllers. Furthermore, we evaluate the effectiveness of \NoCaseChange{SemperFi\xspace} against TEXBAT~\cite{humphreys2012texas}, a public dataset of GPS spoofing traces. Our evaluation shows that, in the majority of attack scenarios, \NoCaseChange{SemperFi\xspace} recovers the legitimate location with an error of less than 20 m; moreover, on popular platforms such as the Jetson Nano and Xavier, recovery takes less than 10 s. It is also possible to deploy \NoCaseChange{SemperFi\xspace} as a pluggable module that outputs spoofer-free GPS signals identical to legitimate satellite signals, allowing an unmodified COTS GPS receiver to process them and generate location and time estimates without any disruption.
\section{Design of \NoCaseChange{SemperFi\xspace}} \NoCaseChange{SemperFi\xspace} is a single-antenna GPS receiver capable of providing uninterrupted location estimates even when subjected to a stealthy GPS spoofing attack. In this section, we present the design of \NoCaseChange{SemperFi\xspace}{} and the challenges it must address. \subsection{Challenges} For the GPS receiver to operate autonomously in the presence of adversarial signals, it must continuously perform the following actions. First, it must reliably detect an ongoing spoofing attack. Then, it must be capable of distinguishing between the spoofing signal and the legitimate signal. Finally, after identifying the spoofing signal, the receiver has to eliminate or reduce the spoofing signal's effect on the final estimated location. Unlike typical wireless communication systems, where it is sufficient to recover the signals' data, GPS receivers require both the data and the ToA of the signal. Moreover, unlike typical systems, GPS receivers are not tolerant of lost samples: continuous tracking of the satellite signals is necessary to estimate the code and carrier phase delays that directly affect the PVT estimation. Finally, in the case of a spoofing attack that injects a fake dynamic motion pattern (e.g., diverting the course of a ship or forcing a drone to deviate from its flight path), the attacker dynamically manipulates the ToA of the spoofing signals as well as the data contained within the navigation messages. Therefore, traditional interference cancellation and mitigation techniques need to be modified or extended to handle this kind of attack. \subsection{High-level Overview} \NoCaseChange{SemperFi\xspace} provides fully autonomous spoofing resistance through the combined effort of three modules: i) spoofing detection, ii) the Adversarial Peak Identifier (API), and iii) the Legitimate Signal Retriever{} (LSR). A block diagram of \NoCaseChange{SemperFi\xspace}'s components is shown in~\Cref{fig:gps_receiver_ase}. Several spoofing detection techniques can be integrated into \NoCaseChange{SemperFi\xspace} as long as they provide reliable spoofing signal detection for all possible adversaries. In this work, the spoofing detection methodology is based on the design of prior work~\cite{ranganathan2016spree} that demonstrated the ability to detect even a strong, seamless takeover attack. The receiver continuously analyses the incoming signal and raises an alarm once spoofing is detected; the peak identification and signal recovery modules are then activated. Upon spoofing detection, the API has the UAV perform a pseudorandom maneuver and correlates the position estimates obtained from inertial measurement unit (IMU) data and GPS to accurately identify whether the currently tracked signals are adversarial. LSR then generates a replica of the adversarial signal and performs SIC to recover the legitimate signal. \\ \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{semperfi.pdf} \caption{High-level overview depicting essential components of \NoCaseChange{SemperFi\xspace}.} \label{fig:gps_receiver_ase} \end{figure} \noindent SIC is a technique for canceling out the interference caused by stronger signals.
The GPS signal from a single satellite can be modelled as \begin{align} S_{R} = a[k]\tilde{s}_{T}[k-\tau(k)]e^{j2\pi f_{D}[k]T_{s}k + \phi[k]} \end{align} where $\tilde{s}_{T}[k]$ is the baseband signal ($k$ indexes the samples of the C/A code) and $a[k], \tau(k), f_{D}[k], \phi[k]$ are the amplitude, time-varying code delay, Doppler shift, and carrier phase shift, respectively. In the presence of an adversary, the received signal is \begin{align} S_{R} = S_{L} + S_{AT} \end{align} where $S_{L}$ is the legitimate signal and $S_{AT}$ is the attacker's signal. In a GPS spoofing attack, the attacker overpowers the legitimate signal; thus $a_{AT} > a_{L}$, and as a result the GPS receiver tracks $S_{AT}$. The LSR module uses the tracking parameters provided by the spoofing detector, the code phase delay $\tau_{AT}$ and the Doppler shift $f_{AT}$, to track the adversarial signal for a specific duration and extract the baseband data $s_{AT}$. The amplitude $a_{AT}$ and carrier phase shift $\phi_{AT}$ of the adversarial signal are then estimated and used in combination with the baseband data to generate the recovery signal $S^{'}_{AT}$, a close replica of the estimated adversarial signal. Using the above information, $S_{L}$ can be obtained as follows: \begin{align} S_{L} = S_{R} - S^{'}_{AT} \end{align} The replica is fed back to perform SIC, and the residual signal undergoes re-acquisition. If necessary, \NoCaseChange{SemperFi\xspace} repeats this process until the spoofing detector no longer triggers an alarm. At this stage, the spoofing signal is eliminated or significantly attenuated, and the receiver starts tracking the legitimate signals. A toy numerical illustration of the SIC step is sketched at the end of this overview. There are scenarios where, despite a successful recovery, either the spoofing signal's strength or its synchronization with respect to the legitimate signals makes the navigation message content and arrival time hard to decode, introducing ambiguities in the PVT estimates. For such scenarios, we developed a pseudorange rectifier that can recover from the attack, albeit with decreased accuracy. Finally, we designed \NoCaseChange{SemperFi\xspace} as a plugin module that can be configured to act as a spoofing signal filter, where the filtered signal is fed directly to any commercial GPS receiver for PVT estimation. This avoids significant hardware design changes to existing deployments. \label{sec:high_overview}
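To make the SIC step concrete, the following self-contained Python sketch applies it to a toy GPS-like signal. The random code, amplitudes, delays, and correlation-based amplitude estimate are illustrative stand-ins for the actual acquisition and tracking outputs, not our receiver code.

\begin{verbatim}
import numpy as np

# Toy SIC demo: subtract a replica of the stronger (adversarial)
# component to expose the weaker legitimate one. Illustrative values.
rng = np.random.default_rng(1)
K = 4092
code = rng.choice([-1.0, 1.0], size=K)   # stand-in for a C/A code
n = np.arange(K)

def gps_like(a, tau, fd, phi):
    """a * code delayed by tau samples with Doppler fd (cycles/sample)."""
    return a * np.roll(code, tau) * np.exp(1j * (2*np.pi*fd*n + phi))

s_l  = gps_like(1.0, 40, 2e-4, 0.3)                 # legitimate signal
s_at = gps_like(2.5, 55, 5e-4, 1.1)                 # stronger attacker
noise = 0.1 * (rng.standard_normal(K) + 1j*rng.standard_normal(K))
s_r = s_l + s_at + noise                            # received samples

# Assume acquisition found the attacker's tau/fd; estimate its complex
# amplitude a*exp(j*phi) by correlating with a unit-amplitude replica.
rep = gps_like(1.0, 55, 5e-4, 0.0)
a_hat = np.vdot(rep, s_r) / K

s_rec = s_r - a_hat * rep                           # SIC: S_L = S_R - S'_AT
peak = abs(np.vdot(gps_like(1.0, 40, 2e-4, 0.0), s_rec)) / K
print(f"legitimate peak after SIC: {peak:.2f}")     # close to 1.0
\end{verbatim}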
\subsection{Adversarial Peak Identifier (API)} The presence of a valid satellite signal is indicated by a peak that forms as a result of the correlation operation performed by the acquisition module. Malicious signals result in additional correlation peaks that may be misidentified as legitimate GPS signals; the adversarial peak identifier (API) is responsible for identifying such malicious peaks. Like every wireless receiver, the GPS receiver locks on to the strongest signal and tracks it. Thus, even in scenarios where the receiver receives both adversarial and legitimate signals, it calculates the PVT solution of the stronger GPS signals. The spoofing detection strategy that we deploy declares spoofing based on multiple peaks in the acquisition plot, and \NoCaseChange{SemperFi\xspace} then attempts to attenuate the adversarial signals to recover from the spoofing attack. This is not, however, a simple matter of attenuating the signal producing the strongest peak. In specific attack scenarios, an attacker aware of this strategy can transmit signals with a power lower than the received legitimate signal. Even though the attacker's signal is weaker, it will still be visible in the acquisition plot. As a result, the spoofing detector will declare positive spoofing, and if the stronger peak were assumed to be the adversarial peak, \NoCaseChange{SemperFi\xspace}{} would eliminate the legitimate peak, since the legitimate signal is stronger than the adversarial signal. Therefore, for \NoCaseChange{SemperFi\xspace} to successfully attenuate adversarial signals and recover the location, it is essential to identify the adversarial peak in the acquisition plot in a way that accounts for these scenarios. In our design, the API identifies adversarial peaks using the following procedure. Once spoofing is detected, the API signals the UAV to stop and hold its current position. As the UAV stops, the spoofed signals' location should also reflect this stop in an ideal attack scenario; this is possible because the attacker knows how the UAV is supposed to move based on the spoofed trajectories. The UAV then performs a specific maneuver consisting of a pseudorandom sequence of turns. The attacker is not aware of the exact maneuver and cannot generate GPS signals that reflect it. Before performing the maneuver, the UAV de-couples GPS from the extended Kalman filter (EKF) and uses IMU-based dead reckoning to track its movements; it logs the GPS coordinates but does not use them to rectify the EKF estimates. The API compares the tracks estimated by the IMU sensors and by GPS, averaging the Euclidean distance between each deviation sample obtained from the IMU sensors and the GPS receiver. Since the attacker cannot generate the signals corresponding to the maneuver, the comparison of the track estimates shows significant deviations, confirming that the GPS receiver is locked on to adversarial peaks. The API relays this information to LSR, which then attenuates the peak. On the contrary, if the comparison does not show deviations, it is concluded that the receiver is tracking the legitimate peak even if there is an ongoing spoofing attack. A minimal sketch of this decision rule follows. \label{sec:api}
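The decision rule reduces to a mean-deviation test between the two tracks. The Python sketch below captures the comparison; the 5 m threshold and the $(N,2)$ ENU array layout are illustrative assumptions, not the values used in our implementation.

\begin{verbatim}
import numpy as np

# Sketch of the API decision: compare the IMU dead-reckoned track with
# the GPS-reported track during the pseudorandom maneuver. The 5 m
# threshold and (N, 2) ENU layout are illustrative assumptions.
def tracking_adversarial_peak(imu_track, gps_track, threshold_m=5.0):
    """imu_track, gps_track: (N, 2) arrays of local ENU positions [m]
    sampled at the same instants during the maneuver."""
    deviation = np.linalg.norm(imu_track - gps_track, axis=1)
    return float(deviation.mean()) > threshold_m

# A spoofer cannot reproduce the maneuver, so the spoofed GPS track
# stays flat while the IMU reports the true zig-zag: the mean
# deviation becomes large and the currently tracked peak is flagged.
\end{verbatim}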
\subsection{Legitimate Signal Retriever (LSR)} LSR is responsible for generating the corresponding replica signal, i.e.,\xspace the recovery signal, for every spoofed satellite. LSR requires: i) the amplitude, ii) the code phase delay, iii) the Doppler shift, iv) the carrier phase, and v) the navigation bits of the attacker's signal to generate the recovery signal. LSR obtains the code phase delay and the Doppler shift from the acquisition module; the replica signal is aligned with the received spoofing signal in the time domain using the code phase delay and in the frequency domain using the Doppler shift. LSR also contains a minimal tracking module that extracts the navigation bits and the carrier phase information of the adversarial spoofing signal. Each of the required components except the signal amplitude is readily available through the basic acquisition and tracking components of any standard receiver architecture. We devised an amplitude estimation technique that relies on the correlation coefficient of the attacker's peak.\bigskip \noindent \emph{Amplitude Estimation:} The amplitude of the acquired signal can be estimated from the magnitude of the corresponding peak in the two-dimensional function of code phase delay and Doppler shift called the cross-ambiguity function (CAF). Recall that the input to the acquisition block is a set of $K$ observations of a modulated GNSS signal. The sampled baseband signal can be modeled as \begin{align} x_{IN}[k] = a[k]\tilde{s}_{T}[k-\tau(k)]e^{j2\pi f_{D}[k]T_{s}k + \phi[k]} \end{align} where $a[k]$ is the signal amplitude and $\tilde{s}_{T}[k]$ is a filtered and sampled version of the complex baseband GNSS signal. Computation of the correlations that comprise the sampled CAF is typically done in the Fourier domain after carrier wipe-off, \begin{align} x[k] = x_{IN}[k]\cdot e^{-j2\pi \check{f}_{D} k T_{s}} \end{align} At the peak of the CAF, the parameters $\check{f}_{D},\check\tau,\check\phi$ correspond to the maximum likelihood estimates of the ``true'' parameter values, and the discrete Fourier domain representation of the signal after wipe-off simplifies to \begin{align} X[k] = \textsc{FFT}_{K}{\{x[k]\}} = a\,S[k]W_{K}^{\tau} \end{align} Correlation with the local code replica is performed by multiplication with its Fourier-domain representation $D[k]$, \begin{align} Y[k] = X[k]\cdot D[k] = a\,S[k]D[k]W_{K}^{\tau} \end{align} The final step in computing the CAF is taking the inverse FFT, \begin{align} R_{xd}(f_{D},\tau) = \textsc{IFFT}_{K} \{Y[k]\} = a\sum_{n=0}^{K-1} s[n]d[k-n] \end{align} The ``peak metric'' for a given local replica is found by maximizing the squared magnitude of the correlation grid. At the peak, where the signal component $s[k]$ and the local replica are identical, this ideally reduces to \begin{align} S_{\textsc{max}} = |R_{xd}(f_{D},\tau)|^{2}\big|_{f_D\approx\hat{f}_D,\tau\approx\hat{\tau}} = |a|^2 K^2 \end{align} where $S_{\textsc{max}}$ is the maximum peak and $R_{xd}(f_{D},\tau)$ is the search grid. Rearranging, we find an expression for the amplitude of the input signal in terms of the peak metric, \begin{align} |a| = \frac{\sqrt{S_{\textsc{max}}}}{K} \end{align} \noindent Equipped with all the above information, the recovery signal is generated; LSR performs this iterative cancellation process for all the satellites. A compact numerical check of the amplitude estimate follows.\bigskip
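The following Python sketch verifies the amplitude expression $|a|=\sqrt{S_{\textsc{max}}}/K$ on a toy signal; only a single Doppler row of the CAF is computed, and the code and parameter values are illustrative.

\begin{verbatim}
import numpy as np

# Toy check of |a| = sqrt(S_max)/K: build one Doppler row of the CAF
# via FFT-based correlation and read the amplitude off the peak.
rng = np.random.default_rng(0)
K = 4092
code = rng.choice([-1.0, 1.0], size=K)   # stand-in local code replica
n = np.arange(K)

a_true, tau, fd, phi = 2.5, 137, 3e-4, 0.7          # illustrative values
x_in = a_true * np.roll(code, tau) * np.exp(1j*(2*np.pi*fd*n + phi))

x = x_in * np.exp(-1j * 2*np.pi * fd * n)           # carrier wipe-off
Y = np.fft.fft(x) * np.conj(np.fft.fft(code))       # multiply by D[k]
caf_row = np.fft.ifft(Y)                            # IFFT -> correlations
s_max = np.max(np.abs(caf_row)**2)                  # peak metric
print(np.sqrt(s_max) / K)                           # ~2.5, i.e. |a|
\end{verbatim}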
\noindent \emph{Pseudorange Rectifier{}:} Specific attack scenarios result in tracking failure, i.e.,\xspace the receiver is unable to extract the navigation message. Such a scenario is possible when an adversary introduces extreme interference that buries the legitimate signals under the noise floor, corrupting or distorting the navigation bits of the legitimate signal. Other kinds of interference may bring the correlation peaks close enough together that the adversary flips bits in the legitimate navigation message. Even if \NoCaseChange{SemperFi\xspace} can recover the legitimate peak, it will not be able to successfully track and decode the navigation bits, leading to an incorrect location calculation. The pseudorange rectifier enables \NoCaseChange{SemperFi\xspace} to correct these ambiguities and aids in the recovery of the location. An important assumption is that the attacker manipulates the location by changing the arrival time of the signals while keeping the navigation messages unchanged, i.e.,\xspace the legitimate and adversarial messages are identical. Commercial GPS receivers use a common reception time technique~\cite{rao2012can} to calculate the pseudorange to each satellite, an essential component of the PVT calculation. In this technique, a common reception time, usually 65--85 ms~\cite{rao2012can}, is set across all channels as the propagation time of the closest satellite's signal, and the receiver calculates the propagation time of signals from the other satellites relative to this reference. Modern GPS receivers maintain a sample counter for accurate time measurement. According to this technique, the pseudorange is calculated as follows: \begin{equation} P^{i} = c (t_{ref} + t_{rx} + \tau^{i}) \end{equation} \noindent where $P^{i}$ is the pseudorange measurement for the $i^{th}$ satellite, $c$ is the speed of light, $t_{ref}$ is the initial reference time (usually 65--85 ms~\cite{rao2012can}), $t_{rx}$ is the receiver time maintained by a sample counter, and $\tau^{i}$ is the code phase delay of the $i^{th}$ satellite.\bigskip \noindent \NoCaseChange{SemperFi\xspace} attenuates the adversarial peak and obtains the tracking parameters of the legitimate peak. However, it does not track the legitimate peak; instead, it keeps tracking the adversarial peak and obtains the adversarial navigation messages. A stealthy attacker will keep the navigation messages the same and change only the signals' ToA. \NoCaseChange{SemperFi\xspace} offsets the sample counters by $\tau_{at}^{i} - \tau_{l}^{i}$, where $\tau_{at}^{i}$ is the code phase delay of the $i^{th}$ satellite of the attacker and $\tau_{l}^{i}$ is the code phase delay of the $i^{th}$ legitimate satellite obtained during the peak recovery. \begin{equation} P^{i}_{l} = c (t_{ref} + t_{rx} + \tau^{i}_{at} - \Delta\tau^{i}) \end{equation} \begin{equation} \Delta\tau^{i} = \tau^{i}_{at} - \tau^{i}_{l} \end{equation} \noindent Substituting (13) into (12) recovers the form of (11) with the legitimate code phase delay $\tau^{i}_{l}$. In this way, \NoCaseChange{SemperFi\xspace} obtains legitimate pseudoranges ($P^{i}_{l}$) by rectifying the ToA of the adversarial signals. \label{sec:lsr} \section{Related Work} In recent years, significant work has been done on developing robust GPS spoofing countermeasures. The works that come closest to ours are the spoof-proof GPS receiver~\cite{Eichelberger2020spoofproof} and the in-line GPS spoofing mitigation technique~\cite{ledvina2001line}. In~\cite{Eichelberger2020spoofproof}, the receiver uses maximum likelihood estimates after dampening the attacker signal to estimate the correct location. The in-line GPS spoofing mitigation technique~\cite{ledvina2001line} implements an extended RAIM method to filter outliers and correlation peak distortion techniques to detect spoofing signals. Both of these works are incapable of distinguishing adversarial peaks and fail against strong adversaries such as a seamless takeover attacker. Signal cancellation has been explored in the context of GPS signals in~\cite{moser2019digital}; in that work, the goal is to attack the receiver by attenuating a specific satellite. Successive interference cancellation has been studied specifically to eliminate the near-far problem associated with pseudolites~\cite{madhani2003application}. The authors treat overpowering pseudolites as interference because, despite being legitimate, their signals are so powerful that the signals from GPS satellites are buried under the noise floor; the objective there is to remove this single source of interference, whereas in our work we focus on removing the adversarial signal. Other existing mitigation techniques can be categorized as i) hardware-level mitigation techniques, ii) signal-processing-level mitigation techniques, and iii) cryptographic solutions. McMillin \textit{et~al.\xspace}~\cite{mcmilin2015gps} present a single-antenna design that can provide GPS jamming mitigation by null steering toward an optimal azimuthal direction. Such a solution requires additional hardware and might not be useful in a multi-spoofer setup, as described in~\cite{tippenhauer2011requirements}.
Borio \textit{et~al.\xspace}~\cite{borio2017fresh} provide an interference cancellation technique for recovering from GPS jammers; this work statistically models GPS jamming signals, which aids in jamming signal removal. McDowell \textit{et~al.\xspace}~\cite{mcdowell2007gps} provide a digital spatial nulling technique for spoofer mitigation. Several cryptographic solutions have been proposed for securing navigation messages. In~\cite{kuhn2004asymmetric, cheng2009authenticity}, the authors propose an asymmetric and hidden-marker approach for securing civilian GPS signals from signal-synthesis attacks. In~\cite{wesson2012practical}, the authors propose an authentication scheme incorporating digital signatures. Although these cryptographic solutions prevent signal spoofing attacks, they require key distribution and management. It is important to note that GPS is a public service used by millions of devices worldwide; deploying these solutions requires serious modifications to the existing GPS infrastructure, which is impractical. Furthermore, cryptographic countermeasures do not protect against record-and-replay attacks~\cite{papadimitratos2008gnss}. Several spoofing detection schemes require extra peripherals such as multiple antennas~\cite{montgomery2011receiver,bhamidipati2019gps,meurer2016direction}, which detect discrepancies in the angle of arrival of GPS signals. GPS signals and location estimates can also be correlated with data from extra IMU sensors~\cite{jafarnia2012detection, wendel2006integrated, titterton2004strapdown, farrell1999global} to detect GPS spoofing attacks using vector-based tracking. Extensive work focuses on the use of an EKF to aid in recovering from GPS glitches~\cite{tanil2016kalman, hajiyev2013robust}; ArduPilot ships one such implementation. Our experiments found that a spoofer can avoid detection by controlling the error introduced in the positions. In~\cite{zhang2018strategies, nashimoto2018sensor}, the authors show how an attacker can craft signals to defeat Kalman-filter-based detection algorithms and inject false sensor data. Similarly, GPS/IMU sensor-fusion-based navigation~\cite{narain2019security} has recently been shown to be vulnerable to attacks against on-road navigation systems. Several works~\cite{tippenhauer2011requirements, jansen2016multi} propose using multiple receivers to detect spoofing signals by comparing the reported positions of several GPS receivers with their deployed constellation. Researchers have also proposed spoofing detection schemes that correlate civilian GPS signals with military signals~\cite{psiaki2011civilian} and cross-validate PVT solutions across multiple navigation systems~\cite{nighswander2012crossvalid}, e.g.,\xspace GPS, GLONASS, and Galileo. In~\cite{jansen2018crowd}, the authors leverage a crowdsourced network to detect GPS spoofing attacks, and in~\cite{borhani2020deep}, the authors propose a spoofing detection approach based on machine learning. Works like SPREE~\cite{ranganathan2016spree} and vestigial signal detection~\cite{wesson2011evaluation} provide a spoofing detection approach based on identifying auxiliary peaks. All of the above countermeasures only perform spoofing detection and are incapable of autonomous recovery during the spoofing attack.
{ "timestamp": "2021-05-06T02:09:24", "yymm": "2105", "arxiv_id": "2105.01860", "language": "en", "url": "https://arxiv.org/abs/2105.01860" }
\section{Introduction} Synchronization lies at the core of timekeeping and underpins a vast class of natural phenomena, from life cycles to precision measurements~\cite{Pikovsky2002}. In a nutshell, synchronization occurs when an oscillatory system has its bare frequency entrained by a weak external signal, which may have a slightly different tempo. Since its observation by Huygens in the 17$^\text{th}$ century, the synchronization of widely distinct systems has been shown to share remarkably universal features~\cite{Pikovsky2002, Jenkins2013}, fostering its exploration across many disciplines~\cite{Aspelmeyer2014, Strogatz1994NonlinearEngineering, Jackson1991Perspectives1}. With the recent convergence among optical, mechanical, and electrical waves using scalable microfabrication technologies, synchronization has emerged as a powerful tool targeted not only at technological applications, such as phase-locked loops (PLLs) in radio-based communications~\cite{Razavi2004a, Plessas2011, Rategh2003SuperharmonicDividers}, but also at developing the fundamentals of chaotic systems~\cite{Barbosa2019}, injection locking~\cite{Shi2019,Markovic2019a,ArreguiColombanoMairePitantiCapujGriolMartinezSotomayorTorresNavarroUrrios+2021+1319+1327}, electro- and optomechanical devices~\cite{Mahboob2012,Huang2018a,Bekker2017InjectionDevice,Huang2018, Sheng2020, Xu2019b, Colombano2019, Bagheri2013PhotonicOscillators}, nonlinear dynamics~\cite{Zhou2019, Huan2019, Ganesan2019, Parlitz1997, Zou2019, Leijssen2017}, network coupling~\cite{Cabot2017, Matheny2019, Sanavio2020, Raeisi2020}, and quantum synchronization~\cite{Walter2014QuantumOscillator, Lorch2016, Lorch2017, Qiao2020, Roulet2018, Eshaqi-Sani2020}. Most synchronization realizations occur when the oscillation frequencies involved are barely dissimilar. This is usually the case because most oscillators rely on an underlying frequency-selective resonant response, e.g., a mechanical, electrical, or optical resonance, which drastically suppresses off-resonant excitations. Despite the weak response to such non-resonant signals, oscillators with a strong nonlinearity may also synchronize when the ratio between the external driving frequency ($\Omega_d$) and the oscillation frequency ($\Omega_0$) is close to a rational number, $\Omega_d/\Omega_0 = \rho = p/q$ with $p,q$ coprime integers, called the winding number~\cite{Kennedy1989TheFractal}. Indeed, higher-order $p:q$ synchronization features have been experimentally observed in a variety of nonlinear systems, from Van der Pol's neon-bulb oscillator~\cite{VANDERPOL1927FrequencyDemultiplication} to modern spin-torque oscillators~\cite{Urazhdin2010, Tortarolo2018, Keatley2016a}, micro-electro-mechanical systems (MEMS)~\cite{Seitner2017, Pu2018, Taheri-Tehrani2019, Houri2019, Du2019, PhysRevLett.52.2277}, delay-coupled lasers~\cite{MartinezAvila2009TimeFeedback,Barbosa2019}, the nuclear magnetic resonance laser~\cite{Simonet1994LockingFeedback}, and on-chip optical parametric oscillators~\cite{Jang2019ObservationCombs}. These higher-order synchronization demonstrations are of major importance in radio-frequency (RF) division applications, which often demand low power consumption and wide-band operation~\cite{Rocheleau2014AReduction,Amann2009MechanismDividers,Kennedy2011ExperimentalInjection}.
Within optomechanical devices, seminal work has revealed that high-order synchronization is possible, but its full strength is yet to be developed, potentially impacting the bridge between optical and RF signals~\cite{Hill2012} or playing an enabling role in quantum~\cite{Chan2011LaserState,Lorch2017,Kato2019} and classical~\cite{Bagheri2013PhotonicOscillators,PhysRevLett.107.043603} devices. For instance, the first optomechanical injection-locking demonstration by Hossein-Zadeh et al.~\cite{Hossein-Zadeh2008a} showed evidence of synchronization at $\Omega_d=2\Omega_0$, while~\cite{Wang2016DevilsCavity,PhysRevE.91.032910} demonstrated synchronization at subharmonics and at the second harmonic in an on-fiber optomechanical cavity oscillator based on thermal effects. Theoretical work has suggested weak signatures of higher-order synchronization in optomechanical cavities~\cite{Amitai2017}. Here, we experimentally demonstrate the entrainment of a silicon-nitride optomechanical oscillator (OMO) by an external signal up to two octaves away from its oscillation frequency. Furthermore, the OMO operates in the intriguing regime where higher-order synchronization ($p>q$) is actually stronger than the trivial $1:1$ case, as determined by the degree of nonlinearity set by the laser frequency and intensity. Finally, we explore this regime to experimentally demonstrate a purely optomechanical radio-frequency divider with a phase noise performance better than that of the 1:1 locking regime. Our results open a route for exploring and engineering nonlinear synchronization in optomechanical oscillators~\cite{Qiao2018}, phase-sensitive amplification~\cite{Rugar1991MechanicalSqueezing,Zega2015PredictingOscillators}, nonlinear sensing~\cite{Brawley2016NonlinearMotion}, and the collective dynamics of emerging oscillator arrays~\cite{Pelka2020,Zhang2015SynchronizationLight,Raeisi2020}. \section{Results and Discussion} The general structure of optomechanical oscillator dynamics can be represented by the feedback diagram shown in \Cref{fig:1}(a). The optical force driving the mechanical mode depends nonlinearly on the displacement, $x(t)$. Thus, the Lorentzian shape of the optical resonance provides a unique route to tailor the degree of nonlinearity of the optical force, defining how different harmonics of the mechanical oscillation are excited during the optical-to-mechanical transduction. To establish synchronization, we apply a weak intensity modulation to the optical driving power, $P_{\text{in}}(t)=P_0\left[1+\varepsilon\sin\left(\Omega_d t\right)\right]$, where $P_0$ is the continuous-wave average power and $\varepsilon$ $(\ll 1)$ is the modulation depth. In the unresolved-sideband regime, where $\Omega_0$ is smaller than the optical linewidth $\kappa$, the essence of the feedback loop of \Cref{fig:1}(a) is captured by introducing a delayed mechanical response $x(t) \rightarrow \widetilde{x}(t-\tau)$, where $\widetilde{x}$ is a normalized dimensionless displacement (details in the Supplementary Note 5). The optical force can then be efficiently written as a power series in $\widetilde{x}(t-\tau)$, \vspace{-0.2cm} \begin{equation}\label{eq:force_x} F_\text{opt}(t) = f_\text{opt}\left[1+\varepsilon \sin\left(\Omega_d t\right)\right]\sum_{n=0}^{\infty} F_n \widetilde{x}^n\left(t-\tau\right), \end{equation} \noindent whose strength depends not only on the overall optical force strength, $f_{\text{opt}}$, but also on the dimensionless coefficients $F_n$, which dictate the intensity of the nonlinearities and their detuning dependence, as shown in \Cref{fig:1}(b). Important optomechanical properties, such as optical cooling/amplification or the spring effect~\cite{Chan2011LaserState,Marquardt2006}, are described by considering terms up to first order ($F_1$) in \Cref{eq:force_x}. The modulation-depth-dependent terms $(\propto \varepsilon)$ enable the injection locking and synchronization of the OMO to an external drive. While $F_{0}$ and $F_{1}$ hardly provide new insights into synchronization properties, the quadratic and cubic terms ($F_2$ and $F_3$) highlight a key aspect explored in this work: nonlinear synchronization properties can be adjusted with an easily accessible parameter, the optical detuning, which significantly changes their relative strengths, as shown in \Cref{fig:1}(b). A qualitative numerical illustration of this detuning dependence is sketched below. The impact of these nonlinearities on the synchronization dynamics can be cast into the well-known Adler model, which describes the slowly varying phase dynamics of an oscillator perturbed by a weak external drive~\cite{Adler1973,Amitai2017}. Indeed, we show in ``Methods'' that the Taylor-series description of \Cref{eq:force_x} leads to an effective Adler model when the optical modulation frequency is tuned towards a chosen harmonic of the mechanical frequency. Synchronization in this model arises when the perturbation strength overcomes the frequency mismatch between the drive and the oscillator's harmonics. As the external drive frequency $\Omega_d$ is swept around the oscillator harmonics, the synchronization condition may still be satisfied, defining a region in the $\varepsilon-\Omega_d$ space known as an Arnold tongue (AT)~\cite{Pikovsky2002}, illustrated in \Cref{fig:1}(c). Such a response to higher harmonics can be readily explored for radio-frequency division, as we experimentally demonstrate for division ratios $2:1$, $3:1$, and $4:1$, the same orders as the measured Arnold tongue maps.
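The detuning dependence of the $F_n$ coefficients can be illustrated with a few lines of Python: the sketch below Taylor-expands a normalized Lorentzian force in powers of the displacement and reports where each coefficient peaks. Units are normalized ($\kappa=1$) and the values are illustrative; the sketch reproduces only the qualitative behavior of \Cref{fig:1}(b), not the device-calibrated curves.

\begin{verbatim}
import numpy as np

# Qualitative illustration of the F_n coefficients: the optical force
# is Lorentzian in the dimensionless displacement u, so its Taylor
# coefficients depend strongly on the detuning delta (kappa = 1).
def force(u, delta):
    return 1.0 / ((delta - u)**2 + 0.25)

u = np.linspace(-0.02, 0.02, 41)        # small window around u = 0
deltas = np.linspace(-1.0, 1.0, 201)
for order in range(4):
    # polyfit returns the highest order first; reverse to index c_n
    coeff = [np.polyfit(u, force(u, d), 3)[::-1][order] for d in deltas]
    d_peak = deltas[np.argmax(np.abs(coeff))]
    print(f"|F_{order}| peaks near delta = {d_peak:+.2f} kappa")
\end{verbatim}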
\begin{figure*} \centering \includegraphics[width=\linewidth]{figs/fig2_paper_reviewed.pdf} \caption{\textbf{Experimental demonstration of multi-octave synchronization.} \textbf{a)} Illustration of the silicon nitride dual-disk optomechanical cavity used in the experiment. The inset shows the simulated flapping mechanical mode displacement profile $|\mathbf{u}|$; \textbf{b)} Schematic of the experimental setup; TL: tunable laser source; $\lambda$-Ref: acetylene gas cell and Mach-Zehnder interferometer used as frequency references; EOM: electro-optic modulator; RFG: radio-frequency generator; ESA: electrical spectrum analyzer; PM: power meter; OSC: oscilloscope; \textbf{c)} Magnitude of the fast Fourier transform of the OMO output signal (inset); \textbf{d)}-\textbf{g)} Time traces of the OMO output entrained from $p=1$ (\textbf{d}) to $p=4$ (\textbf{g}).
An RF injection power of -10 dBm ($\varepsilon \approx 4$\%) was used; \textbf{h)}-\textbf{k)} RF spectrograms measured as the RF drive frequency sweeps from lower to higher frequencies around each OMO harmonic, from $p=1$ (\textbf{h}) to $p=4$ (\textbf{k}), for an injection RF power of -10 dBm. The vertical RF frequency axis is always centered at the mechanical oscillation frequency $\Omega_0/2\pi = 32$ MHz and increases from top to bottom, as the minus and plus symbols in \textbf{h)} indicate; along the horizontal axis, the RF drive frequency increases from left to right; \textbf{l-o)} Measured Arnold tongues corresponding to each harmonic, obtained by stacking horizontal linecuts along the dashed black lines shown in \textbf{h-k)}. The purple curves are the simulated ATs, and the color scale of each plot matches the grayscale range shown on the right.} \label{fig:2} \end{figure*} To experimentally assess high-order synchronization and measure the ATs, it is important to harness the nonlinear response of an OMO. We achieve this control by employing a silicon-nitride dual-disk optomechanical cavity~\cite{Zhang2012SynchronizationLight,Shah2015}, shown schematically in \Cref{fig:2}(a). This cavity supports a mechanical mode with a relatively low frequency ($\Omega_m/2\pi=$ \SI{31.86}{\mega\hertz}) and a high quality factor ($Q_m=1250$)~\cite{Zhang2014EliminatingInterference}, which is coupled to a transverse-electric optical mode ($Q_{\text{opt}}=1.6\times10^5$ at a wavelength $\lambda\approx$ \SI{1556}{\nano\meter}) with an optomechanical coupling rate $g_0/2\pi =$ \SI{16.2}{\kilo\hertz}. The experimental setup, shown in \Cref{fig:2}(b), essentially consists of an intensity-modulated external-cavity tunable laser coupled to the optomechanical cavity using a tapered fiber~\cite{Zhang2012SynchronizationLight}. The output light is analyzed with an oscilloscope and an electrical spectrum analyzer (ESA) that reveal the dynamics of the oscillator while the optical transmission is monitored. To transition this optomechanical cavity into an OMO, we raise the pump power to $P_{0} =$ \SI{480}{\micro\watt} and fine-tune its wavelength such that the detuning between the laser frequency and the cavity resonance corresponds to $\Delta_x = 0.35\kappa$ $(\Delta_x/2\pi\approx$ \SI{408}{MHz}), which is inferred by monitoring the optical transmission. A typical free-running OMO output signal and the corresponding Fourier transform are shown in \Cref{fig:2}(c), revealing a mildly nonlinear characteristic with a few noticeable harmonics. Interestingly, at this detuning, both the $F_0$ and $F_1$ terms in \Cref{eq:force_x} are of similar strength (see \Cref{fig:1}(b)), suggesting that the nonlinear response to an injection signal should be readily observed. To observe injection locking, the laser intensity modulation is activated, and the modulation frequency is swept around the OMO fundamental frequency or its harmonics ($p=1-4$ and $q=1$). The time traces in \Cref{fig:2}(d-g) are captured with the injection signal frequency precisely matched to each harmonic using an RF power of -10 dBm. As the RF driving frequency is detuned from each harmonic, the OMO response is monitored through the RF spectrum centered around the fundamental frequency $\Omega_0/2\pi$, as shown in the density plots of \Cref{fig:2}(h-k).
At the left-hand side of these plots, the RF tone is far from the OMO harmonics and does not synchronize; thus, both the oscillator and drive frequencies appear as distinct peaks, accompanied by nonlinear mixing products typical of driven oscillators~\cite{Seitner2017}. When the RF tone approaches a harmonic, a clear transition occurs and a single RF peak emerges, a major signature of synchronization. The first striking feature is the observation of strong synchronization for all the driving harmonics, a phenomenon that has not been reported in optomechanical systems. Second, and most important, the widths of the synchronization regions for $p=2$ and $p=4$ are larger than for the fundamental harmonic ($p=1$). It is also remarkable that the $p=3$ synchronization window is relatively small, defying the expected hierarchy among harmonics. To map the synchronization windows into Arnold tongues and understand the role played by the optical modulation depth, we performed the measurements shown in \Cref{fig:2}(h-k) for a range of RF powers and built the ATs shown in \Cref{fig:2}(l-o). The colored regions indicate a synchronized state and were obtained by stacking RF spectral slices along the OMO frequency, given by the horizontal dashed lines in \Cref{fig:2}(h-k). It is worth pointing out that the highest RF power (-6 dBm) corresponds to a modulation depth $\varepsilon \approx 6\%$, ensuring a weak perturbation regime. Although the existence of higher-order tongues could be anticipated by a qualitative analysis of the nonlinear terms in \Cref{eq:force_x}, further theoretical analysis is necessary to precisely picture their nature. \begin{figure} \centering \includegraphics[width=\linewidth]{figs/fig3_paper_reviewed.pdf} \caption{\textbf{Numerical analysis and experimental observation of fractional synchronization.} \textbf{a)} Arnold tongue boundaries simulated using the complete coupled optomechanical equations. The horizontal scale is the same as used in the experimental data of \Cref{fig:2}\textbf{(l-o)}, revealing good agreement; \textbf{b)} Same simulation as in \textbf{a)} but considering only one parametric term at a time, i.e., the green ($1:1$) boundary was simulated considering $\varepsilon F_1 = \varepsilon F_2 = \varepsilon F_3 = 0$ but $\varepsilon F_0 \neq 0$ (details in the Supplementary Note 5). The orange ($2:1$) boundary has only the term $\varepsilon F_1 \neq 0$, the blue ($3:1$) has $\varepsilon F_2 \neq 0$, and the red ($4:1$) has $\varepsilon F_3 \neq 0$; \textbf{c)}-\textbf{f)} Impact of the optical detuning $\Delta_x$ on the ATs, showing their tunability and the possibility of a vanishing $p=3$ tongue at $\Delta_x \approx 0.43\kappa$ for the parameters used. These maps were simulated using $\varepsilon = 5\%$, and the black dashed line is the mechanical oscillation frequency $f_0$, which increases with $\Delta_x$ because of the optical spring effect; \textbf{g)} Measured fractional synchronization thresholds, indicated as blue dots, for observing a finite-width AT. The red lines indicate the locking orders that did not synchronize, for which only frequency pulling was observed. The Arnold tongues shown are illustrations (see Supplementary Note 2 for actual data).} \label{fig:3} \end{figure} To study the observed AT behavior, we perform numerical simulations of the exact coupled equations describing both the mechanical and optical dynamics; the resulting simulated Arnold tongue boundaries are shown in \Cref{fig:3}(a).
Despite the specific parameters that influence the precise behavior of the optomechanical limit cycles~\cite{Amitai2017}, such as the optical detuning, the optomechanical coupling, and the optical/mechanical linewidths, good agreement is observed between the measured and simulated tongues. Such agreement suggests that the observed features are indeed dominated by the optomechanical interaction itself, in contrast to silicon optomechanical devices, where thermal and charge-carrier effects strongly influence the self-sustaining oscillator dynamics~\cite{Luan2014,Colombano2019}. Although the numerical model is useful for confirming the optomechanical nature of the observed effects, it hardly provides any analytical insight into the origins of the observed synchronization features. We obtain further insight by approximating the optical force as a delayed power series, as suggested in \Cref{eq:force_x}. This analysis allows us to explore the synchronization role of each nonlinear component $F_n$ in \Cref{eq:force_x} and elucidates the underlying structure of high-harmonic synchronization. The nonlinear components that \textit{are not} proportional to the driving signal define a ``forced Van der Pol-Duffing oscillator'' responsible for the limit cycle observed in \Cref{fig:2}(c). The synchronization dynamics is related to the terms proportional to the RF driving signal ($\propto\varepsilon$). However, in addition to the usual non-parametric excitation ($\propto \varepsilon F_0$), the injection signal also contributes time-dependent coefficients to the mechanical oscillator's dynamical equation. Physically, these time-varying coefficients indicate that the external signal modulates the oscillator's frequency and damping properties, leading to linear ($\propto\varepsilon F_1$) and nonlinear ($\propto\varepsilon F_{2,3}$) parametric resonance effects, a situation resembling the dynamics of a nonlinear Mathieu equation~\cite{Kovacic2018MathieusFeatures,Shah2015}. By neglecting all but one time-dependent term in the numerical simulations, we could identify how each harmonic ($p=1-4$) is related to the force expansion coefficients shown in \Cref{fig:1}(b). The resulting map is shown in \Cref{fig:3}(b), where each boundary was simulated considering only one parametric term, with all others set to zero. The resemblance to the full model simulation in \Cref{fig:3}(a) is remarkable. This analysis reveals that each $\varepsilon F_{p-1}$ term in the force expansion is the leading contribution to the $p:1$ AT, for all measured harmonics. For instance, as the $p=3$ entrainment occurs due to the $\varepsilon F_2$ parametric term, the thinner tongue observed in \Cref{fig:2}(n) is explained by the negligible value of $F_2$ at this detuning. Interestingly, although quadratic force terms like $F_2 x^2$ are often ignored in nonlinear mechanical oscillators (as they arise from an asymmetric elastic potential energy), here they emerge naturally from the Lorentzian shape of the optical mode and can be tuned with the optical detuning. Another interesting feature, present in both the analytical and numerical models, is a cusp in the 1:1 tongue at -16~dBm RF power. Although we verified using the analytical model that this feature arises from an amplitude bifurcation (see Supplementary Note 8), the cusp was not observed in the experimental trace. The insights brought by our semi-analytical model suggest that tunable Arnold tongues should be feasible; a toy reproduction of the single-term numerical experiment is sketched below.
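As a qualitative illustration of the single-term picture, the Python sketch below drives a generic Van der Pol limit cycle with a single parametric term $\varepsilon x^{p-1}\sin(\Omega_d t)$ and tests for $p:1$ entrainment from the zero-crossing rate. All parameters are illustrative, and the toy model is a stand-in for, not a reproduction of, the full optomechanical equations.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Toy single-term experiment: Van der Pol oscillator plus one
# parametric injection term eps * x**(p-1) * sin(wd*t).
mu, eps, p, w0 = 0.2, 0.08, 2, 1.0      # illustrative parameters

def rhs(t, y, wd):
    x, v = y
    drive = eps * x**(p - 1) * np.sin(wd * t)
    return [v, mu*(1 - x**2)*v - w0**2 * x + drive]

def entrained(wd, t_end=3000.0):
    sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0], args=(wd,),
                    max_step=0.05)
    keep = sol.t > t_end/2                      # discard the transient
    t, x = sol.t[keep], sol.y[0][keep]
    up = np.where(np.diff(np.sign(x)) > 0)[0]   # upward zero crossings
    w_osc = 2*np.pi * (len(up) - 1) / (t[up[-1]] - t[up[0]])
    return abs(w_osc - wd/p) < 1e-3             # locked at wd / p?

print(entrained(2.002 * w0))   # inside the 2:1 tongue -> typically True
\end{verbatim}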
In \Cref{fig:3}(c-f) we show a full numerical simulation of the ATs as a function of the optical detuning, confirming this possibility. In particular, a complete suppression of the $p=3$ tongue is attainable (\Cref{fig:3}(e)). Such a rich response to higher-harmonic excitation led us to verify whether our OMO could also respond to fractional frequency excitation, i.e., where $p/q$ is not an integer. These experimental results are summarized in \Cref{fig:3}(g), and the full map for various subharmonics of the mechanical frequency can be found in the Supplementary Note 2, revealing terms of the famous Farey sequence known in number theory~\cite{Kennedy1989TheFractal}. Note, however, that the injection signal powers required to observe fractional tongues were substantially larger, with some fractions (e.g., 4/5) requiring full modulation ($\varepsilon\approx100\%$), which is beyond the reach of our semi-analytical approximations. \begin{figure*} \centering \includegraphics[width=\linewidth]{figs/fig4_paper_reviewed.pdf} \caption{\textbf{Phase-noise reduction and optomechanical frequency division.} \textbf{a)} Measured one-sided phase noise spectral density for the free-running (black), injection-locked (colored), and RF injection (gray) signals. The RF power used for the injections was -7 dBm ($\varepsilon\approx 5.5\%$); \textbf{b)} Phase noise spectral density evolution as a function of the RF power for the 4:1 injection; \textbf{c)} Comparison between the experimental sidebands and the semi-analytical model prediction (see Supplementary Note 7). The colormap is the experimental power spectral density (PSD) around the OMO fundamental frequency. The RF drive frequency was set to the OMO frequency for all the RF powers shown. The black dashed lines are the experimental fit, and the red/blue curves are the semi-analytical prediction, showing excellent agreement; \textbf{d)} Experimental PSD, in gray, for an RF power of -13 dBm ($\varepsilon \approx 2.75\%$), corresponding to the horizontal dashed gray linecut in \textbf{c)}, showing the agreement between the semi-analytical model and the experimental data for these sidebands, in both frequency and linewidth; \textbf{e)} Schematic of the optomechanical frequency divider; LPF: low-pass filter; \textbf{f)} Experimental optomechanical frequency division. The orange, blue, and red curves are the injection-locked signal output from the OMO for the cases 2:1, 3:1, and 4:1, respectively. The overlapping black curves are the divided signals obtained using a low-pass filter with a 48 MHz cutoff frequency.} \label{fig:4} \end{figure*} \textbf{Optomechanical Frequency Division}. An important aspect often praised when investigating synchronization and injection-locking phenomena is the reduction of phase noise (PN) in free-running oscillators. While the phase noise of optomechanical oscillators has been explored previously~\cite{Hossein-Zadeh2008a,Fong2014a,Luan2014,Zhang2015SynchronizationLight,Bekker2017InjectionDevice}, its characteristics under high-harmonic injection are not known. In \Cref{fig:4}(a) we show the measured PN at the oscillator's fundamental frequency for the free-running OMO and for injection locking at the harmonics $p={1-4}$ (see ``Methods'' for details). The PN curves were taken using a constant RF power of -7 dBm ($\varepsilon\approx 5.5\%$) for all harmonics.
The general behavior of the free-running OMO PN has been discussed previously~\cite{Fong2014a} and is influenced by various noise sources, such as flicker noise, thermomechanical noise, and amplitude-to-phase conversion~\cite{Mathai2019}. When injection-locked at $p=1$ (green curve), the PN performance improves significantly, and the PN of the higher harmonics is surprisingly low, even though the same modulation depth was employed. Indeed, the $p=2$ injection offers an improvement over the trivial $p=1$ case, $p=3$ is slightly degraded, and the $p=4$ PN suffers a significant penalty of 10 dBc/Hz at small offset frequencies; it nevertheless preserves the low-frequency PN plateau characteristic of injection-locked oscillators. To investigate the RF power dependence of each harmonic, PN curves were measured over a range of RF powers, shown in \Cref{fig:4}(b) for the 4:1 case. The transition to a low-frequency PN plateau (around -7 dBm) observed in \Cref{fig:4}(b) also occurs for the other harmonics, albeit at lower injection powers, showing that very low PN levels can be achieved at the expense of higher RF power levels (see Supplementary Note 3 for the other harmonics). In particular, while the 1:1 PN of \Cref{fig:4}(a) reaches -80 dBc/Hz at -7 dBm, the 4:1 PN of \Cref{fig:4}(b) requires -1 dBm to reach -80 dBc/Hz, still corresponding to a moderate modulation depth of 11\%. A qualitative understanding of the observed PN behavior can be cast upon previous investigations in the context of superharmonic injection locking~\cite{Zhang1992AOscillators,Verma2003ADividers,Kalia2011,Plessas2011}. When the injection-signal PN is negligible, the phase noise of a superharmonically injected oscillator is written as \begin{equation} \mathcal{L}_{\text{out}}(\Omega) = \frac{ \mathcal{L}_\text{free}(\Omega)}{1+(\Delta\Omega_n/\Omega)^2\cos^2\theta}, \label{eq:PN} \end{equation} \noindent where $\mathcal{L}_\text{free}(\Omega)$ is the free-running OMO PN spectrum, i.e., the black curve of \Cref{fig:4}(a); $\Delta\Omega_n$ is the locking range (AT width) for each harmonic; and $\theta$ is the phase offset between the injection signal and the OMO. Apart from the phase offset $\theta$, the AT width determines the locking range and is often associated with good phase noise performance. Indeed, the wider lock range $\Delta\Omega_2$ observed for the 2:1 injection is associated with a better PN. A numerical reading of \Cref{eq:PN} is sketched below.
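The suppression predicted by \Cref{eq:PN} is easy to evaluate directly; the Python sketch below does so with purely illustrative numbers (the offset frequency, free-running level, and lock range are hypothetical, not measured device values).

\begin{verbatim}
import numpy as np

# Evaluate the PN suppression of the expression above: inside the
# lock range, the free-running noise is divided by
# 1 + (dW_n/W)^2 * cos(theta)^2. Numbers below are illustrative.
def pn_locked_dbc(f_offset_hz, pn_free_dbc, lock_range_hz, theta=0.0):
    l_free = 10.0**(pn_free_dbc / 10.0)
    l_out = l_free / (1.0 + (lock_range_hz / f_offset_hz)**2
                      * np.cos(theta)**2)
    return 10.0 * np.log10(l_out)

# A -60 dBc/Hz free-running level at 1 kHz offset with a 50 kHz lock
# range is suppressed by ~34 dB:
print(pn_locked_dbc(1e3, -60.0, 5e4))   # about -94 dBc/Hz
\end{verbatim}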
For the 3:1 and 4:1 PN cases, however, the trend is not as clear. While the phase noise is reduced as the lock range increases (due to higher injection power), the 4:1 PN shown in \Cref{fig:4}(a) is not lower than that of the 3:1 injection, despite the wider 4:1 tongue. Although not all the factors contributing to this discrepancy are clear, we verified in numerical simulations that the phase offset $\theta$ varies among harmonics and could partially account for the observed mismatch. One unique factor contributing to these phase offsets in nonlinear oscillators is the strong frequency pulling~\cite{Slavin2009a,Pandey2008FrequencySystem} that distinctively shifts the bare OMO frequency for each harmonic. Indeed, we notice in the injection maps of \Cref{fig:2}(h-k) that the locking frequency loci are not symmetric relative to the OMO frequency: for example, \Cref{fig:2}(o) is shifted towards lower frequencies, while \Cref{fig:2}(m) shifts toward higher frequencies. Such shifts are also anticipated by our semi-analytical model and can be traced back to the effective perturbation strength and frequency mismatch in Adler's model (see ``Methods''). These nonlinearities also highlight the weakness of neglecting the amplitude-phase coupling in the PN modelling of OMOs. Another feature that supports the presence of amplitude-phase coupling effects in the PN spectrum, and which is not readily captured by the simple model leading to \Cref{eq:PN}, is the presence of sidebands in \Cref{fig:4}(a) between \SI{20}{\kilo\hertz} and \SI{60}{\kilo\hertz}. In contrast to the fixed-frequency satellite peaks at \SI{150}{\kilo\hertz}, which are caused by parametric mixing with a spurious mechanical mode, these peaks are intrinsic to the nonlinear locking dynamics of OMOs. These sidebands were discussed by Bagheri et al.~\cite{Bagheri2013PhotonicOscillators} and attributed to the coupling between phase and amplitude dynamics that is intrinsic to OMOs. Based upon our amplitude-phase model leading to the effective Adler equations (\Cref{eq:adler}), we derive a quantitative model, in similarity to spin-torque oscillators~\cite{Tortarolo2018}, which predicts both the frequency splitting and the linewidth of these sidebands. Despite the various approximations involved, the fitted model agrees remarkably well with the experimental data, as shown by the blue/red curves in \Cref{fig:4}(c) and \Cref{fig:4}(d). In the context of higher-order synchronization, the demonstrated phase noise performance could be explored towards injection-locked superharmonic frequency dividers~\cite{Rategh2003SuperharmonicDividers,Plessas2011}, which generate radio-frequency signals at a fraction of a higher-frequency reference. Despite the low-power-consumption advantage of injection-locked dividers over other technologies, such as regenerative and parametric dividers~\cite{Rategh2003SuperharmonicDividers}, they often suffer from a narrow lock range. While OMOs offer intrinsically narrower lock ranges than electronic injection dividers~\cite{Rategh2003SuperharmonicDividers}, the wide Arnold tongues reported in \Cref{fig:3} suggest that robust OMO frequency division is feasible. Exploring this strong response to higher harmonics, the experimental scheme of \Cref{fig:4}(e) was implemented to demonstrate optomechanical frequency division. A low-pass RF filter (\SI{48}{\mega\hertz} cutoff, MiniCircuits SLP-50+) rejects the higher harmonics generated by the injection-locked OMO and delivers an output signal at a fraction of the injected reference, $f_0/N$. The measured frequency-divided signals for 2:1, 3:1, and 4:1 locking at an RF power of 0 dBm are shown in \Cref{fig:4}(f). The worst PN performance, obtained in the divide-by-4 case, is better than -70 dBc/Hz and can be significantly improved at higher RF powers, as shown in the red-tone traces of \Cref{fig:4}(b). Further improvement in phase noise could be achieved by using devices with higher mechanical quality factors and stronger optical driving power; for instance, double-disk optomechanical devices with mechanical quality factors exceeding $10^4$ and driven at larger amplitudes (using higher optical power) could exhibit a further PN reduction of 30 dB (see Supplementary Note 3). These results show that OMO-based frequency dividers can be readily derived from the observed higher-order synchronization.
Although there is room for improvement in optomechanical frequency dividers, their ability to generate frequency references in the optical domain could be explored in experiments requiring optical synchronization, such as radio antenna telescopes~\cite{Maleki:2011aa}, optical frequency combs~\cite{Jang2019ObservationCombs}, or coherently linking arrays of optomechanical oscillators with distinct frequencies~\cite{Zhang2015SynchronizationLight}. Given the current state of the art in hybrid integration~\cite{Stern:2018aa} and electro-optical conversion in photonic circuits~\cite{Luan2014}, the demonstrated divider could still ensure the low power consumption expected for injection-locking frequency division. We have experimentally demonstrated an optomechanical oscillator entrained by high-order harmonics and its application as a purely optomechanical frequency divider. The wider locking range observed for the higher harmonics, and its theoretical mapping to each nonlinear term in the oscillator dynamics, open new routes to control nonlinear synchronization phenomena in optomechanical oscillators, including the tailoring of the nonlinear response through the laser-cavity detuning and optomechanical frequency synthesizers. Furthermore, the importance of nonlinear parametric effects could also significantly impact phase-sensitive amplification~\cite{PhysRevA.102.023507} and nonlinear sensing~\cite{Brawley2016NonlinearMotion} with optomechanical devices. The demonstrated entrainment should also enable novel configurations for coupling and controlling optomechanical arrays based on dissimilar resonators, and the demonstrated locking at fractional harmonics could be a starting point for further nonlinear-dynamics investigations within an optomechanical platform. \section{Methods} \noindent \textbf{Optical energy}. The optical energy's dependence on the laser-cavity detuning and the mechanical displacement is given by \begin{equation} |a|^2= \frac{\kappa_e}{(\Delta-G x)^2+\kappa^2/4}P_{\text{in}}, \label{eq:optical_energy} \end{equation} \noindent in which the two key parameters that enable tuning of the OMO nonlinear response appear: the input laser power, $P_{\text{in}}$, and the bare optical detuning, $\Delta=\omega_l-\omega_0$, between the pump laser ($\omega_l$) and optical mode ($\omega_0$) frequencies; $x$ is the mechanical mode amplitude, $G=\partial\omega/\partial x$ is the optomechanical pulling parameter, $\kappa$ is the optical mode linewidth, and $\kappa_e$ is the external coupling to the bus waveguide~\cite{Aspelmeyer2014}. \noindent \textbf{Effective Adler model}. By employing the Krylov-Bogoliubov-Mitropolsky (KBM) time-averaging method~\cite{bogoliubov1961asymptotic} on the mechanical oscillator equation, an effective Adler equation may be derived (details in the Supplementary Note 6), \begin{equation}\label{eq:adler} \dot{\Phi} = \nu(\rho) + \varepsilon \frac{\Delta\Omega(\rho)}{2}\sin{\left(\rho\Phi\right)}. \end{equation} \noindent where $\Phi$ is the mechanical oscillator phase correction and $\dot{\Phi}$ denotes its time derivative; $\nu(\rho)$ is the mean correction of $\Omega_0$ and $\Delta\Omega(\rho)$ is the size of the synchronization window at a particular harmonic $\rho = p/q$. Although many approximations must be carried out, this analysis relates the Taylor series coefficients in \Cref{eq:force_x} to the coefficients $\nu(\rho)$ and $\Delta\Omega(\rho)$ in the effective Adler model of \Cref{eq:adler}, providing a quantitative description of the width hierarchy among the measured ATs. A minimal numerical integration of \Cref{eq:adler} is sketched below.
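The locking condition $|\nu(\rho)| \le \varepsilon\,\Delta\Omega(\rho)/2$ implied by \Cref{eq:adler} can be checked directly by integrating the phase equation; the Python sketch below uses illustrative coefficient values rather than values derived from the device.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Minimal integration of the effective Adler equation: the phase
# correction Phi stops slipping (dPhi/dt -> 0 on average) whenever
# |nu| < eps*dW/2. Coefficient values below are illustrative.
nu, eps, dW, rho = 2*np.pi*80.0, 0.05, 2*np.pi*4000.0, 2.0

def adler(t, phi):
    return [nu + eps * (dW / 2.0) * np.sin(rho * phi[0])]

sol = solve_ivp(adler, (0.0, 1.0), [0.0], max_step=1e-4)
slip = (sol.y[0][-1] - sol.y[0][0]) / (sol.t[-1] - sol.t[0])
print("locked" if abs(slip) < 0.01 * abs(nu) else "unlocked")
# Here eps*dW/2 = 2*pi*100 rad/s exceeds nu = 2*pi*80 rad/s -> locked.
\end{verbatim}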
Although several approximations are required along the way, this analysis relates the Taylor series coefficients in \Cref{eq:force_x} to the coefficients $\nu(\rho)$ and $\Delta\Omega(\rho)$ in the effective Adler model \Cref{eq:adler}, providing a quantitative description of the width hierarchy among the measured ATs.
\noindent \textbf{Experimental setup}. A full schematic of the experimental setup is shown in Supplementary Note 1, along with the optical and mechanical characterization data of the bare resonator. The optical transmission and the RF spectral measurements of the bare resonator properties were taken at low pump powers ($<$ \SI{50}{\micro\watt}). The laser wavelength and detuning are accurately monitored using a Mach-Zehnder interferometer (MZI) and an HCN gas cell. The cavity is kept inside a vacuum chamber at a pressure of approximately 0.1 mbar and at room temperature. Finally, the transduced signal goes to two detectors: a power meter (PM) that tracks the optical mode, and a fast photodetector (New Focus 1617AC balanced photodetector) with 800-MHz bandwidth whose electrical output feeds both the electrical spectrum analyzer (ESA, Keysight N9030A) and the oscilloscope (OSC, DSO9254A). The phase-noise measurements were performed in the spectral domain using the ESA N9030A phase-noise measurement application (N9068A). A feedback loop between the PM and the TL locks the signal, preventing the optical resonance from drifting due to unwanted external perturbations.
\noindent \textbf{Phase noise}. To derive the approximate expression for the phase noise (\Cref{eq:PN}), we start from the general PN expression~\cite{Plessas2011,Zhang1992AOscillators},
\begin{equation} \mathcal{L}_{\text{out}}(\Omega)=\frac{(\Delta\Omega_n/n)^2\mathcal{L}_\text{inj}(\Omega)\cos^2\theta+\Omega^2 \mathcal{L}_\text{free}(\Omega)}{\Delta\Omega_n^2\cos^2\theta+\Omega^2}. \label{eq:PN_full} \end{equation}
Since the injection-locking signal is derived from a stable RF frequency source (Agilent PSG E8251), its PN spectrum $\mathcal{L}_\text{inj}(\Omega)$ is orders of magnitude smaller than $\mathcal{L}_\text{free}(\Omega)$; taking $\mathcal{L}_\text{inj}(\Omega)/\mathcal{L}_\text{free}(\Omega) \rightarrow 0$ results in \Cref{eq:PN}. The modulation depth as a function of the RF power is given by $\varepsilon=\pi\sqrt{P_{\text{RF}}R}/V_{\pi}$, where $R=$ \SI{50}{\ohm} and $V_{\pi}=$ \SI{5.5}{\volt} is the optical modulator parameter. The phase angle is given by $\theta=\arcsin\left[(\Omega_0-\Omega_d/n)/\Delta\Omega_n\right]$. A more detailed analysis is given in Supplementary Note 3, where we show the measured phase noise as a function of the RF power for all the harmonics.
\noindent \textbf{Simulations}. The acquired data were compared with numerical simulations written in the Julia language using the DifferentialEquations.jl, DSP.jl, and Sundials.jl packages. Because the system is stiff, i.e., it contains relevant natural time scales differing by many orders of magnitude, the stiff solvers available in Julia offer better performance. We simulate the system for a range of modulation depths $\varepsilon$ while the RF signal sweeps across a chosen set of $p:q$ regions, revealing the nature of the synchronization. From the obtained time trace, we then locally Fourier transform the data to construct the spectrogram. A detailed discussion of the numerical simulation is available in Supplementary Note 4.
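As a concrete illustration of this pipeline, the sketch below integrates \Cref{eq:optomechanics_1} with the intracavity field split into real and imaginary parts and builds a spectrogram with DSP.jl. All parameter values are arbitrary placeholders chosen to keep the sketch self-contained; the actual runs use the calibrated values of \ref{table:1} and the time-dependent schedules $\Delta(t)$, $\varepsilon(t)$ and $\Theta_d(t)$ described in Supplementary Note 4.
\begin{verbatim}
# Sketch of the stiff optomechanical simulation (cf. Eq. optomechanics_1);
# placeholder parameters in units of Omega_m, not the calibrated values.
using DifferentialEquations, DSP

kap, kap_e, Det, G = 40.0, 30.0, 14.0, 0.02
Gam, Om, s0, md, Od = 1e-3, 1.0, 10.0, 0.02, 1.0

function omo!(du, u, p, t)
    ar, ai, x, v = u                       # Re(a), Im(a), x, dx/dt
    s = s0 * sqrt(1 + md * sin(Od * t))    # amplitude-modulated pump
    du[1] = -kap/2 * ar - (Det - G*x) * ai + sqrt(kap_e) * s
    du[2] = -kap/2 * ai + (Det - G*x) * ar
    du[3] = v
    du[4] = -Gam * v - Om^2 * x - G * (ar^2 + ai^2)  # optical force ~ -|a|^2
end

prob = ODEProblem(omo!, zeros(4), (0.0, 5.0e3))
sol  = solve(prob, Rodas5(); abstol = 1e-9, reltol = 1e-7, saveat = 0.05)

# Local Fourier transforms of the displacement build the spectrogram.
spec = spectrogram([u[3] for u in sol.u], 4096, 2048; fs = 1/0.05)
\end{verbatim}
Rodas5() is one of the stiff solvers mentioned above; CVODE_BDF() from Sundials.jl is a drop-in alternative.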
The mechanical mode effective mass and the zero-point fluctuation were obtained from COMSOL Multiphysics finite-element simulations, $m_{\text{eff}} = 101.82$ pg and $x_{\text{zpf}} =$ \SI{1.536}{\femto\meter}, leading to an optomechanical pulling parameter $G/2\pi=(g_0/2\pi)/x_\text{zpf}=$ \SI{10.546}{\giga\hertz/\nano\meter}.
\noindent \textbf{Data availability}. Further data supporting the findings of this study are openly available at Zenodo at \href{http://doi.org/10.5281/zenodo.4737381}{DOI:10.5281/zenodo.4737381} upon publication.\\
\section*{Acknowledgements} This work was supported by São Paulo Research Foundation (FAPESP) through grants 2019/14377-5, 2018/15577-5, 2018/15580-6, 2018/25339-4, 2017/24845-0, 2020/06348-2, 2019/09738-9, and by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) (Financial Code 001). This work was performed in part at the Cornell NanoScale Science and Technology Facility, which is supported by the NSF, its users, and Cornell University.
\section*{Author contributions} C.C.R. and G.S.W. designed the experiment; C.C.R. performed measurements and data analysis with help from C.M.K. and A.G.P.; C.C.R., C.M.K. and A.G.P. contributed to the theoretical framework. G.S.W. and M.L. designed and fabricated the device; T.P.M.A. and G.S.W. supervised the project. All authors contributed to the discussions and the preparation of the manuscript.
\section*{Competing interests} The authors declare no competing interests.
\section{} \textbf{Cavity Characterization.} The complete experimental setup and the cavity geometry are shown in \ref{fig:1}.
\begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figs_supp/fig1_supp.pdf} \caption{\textbf{a)} Experimental setup used in the article. A tunable laser (TL) goes into a beam splitter (BS); one of the arms goes to an HCN-cell wavelength reference, and the other goes to an electro-optical modulator (EOM) controlled by a radio-frequency generator (RFG, Agilent PSG E8251). After interacting with the sample inside a vacuum chamber at $\approx$ 0.1 mbar, the modulated field goes to another beam splitter, from which we finally obtain our results. One output reaches a fast photodetector (FPD), which records both the temporal trace on an oscilloscope (OSC, DSO9254A) and the spectral content on an electrical spectrum analyzer (ESA, Keysight N9030); the other reaches a slow photodetector (PD), which gives the Lorentzian-shaped optical transmission. The final part of the setup is a feedback loop (LaseLock) into the tunable laser that stabilizes the laser wavelength by self-referencing, avoiding unwanted drifts during data acquisition; \textbf{b)} Illustration of the nitride double-disk cavity geometry used in the experiment.} \label{fig:1} \end{figure} \vspace{0.0cm}
The optical and mechanical modes used in this experiment are shown in \ref{fig:2}, with their best fits in red.
These curves are modeled by the well-known \Cref{eq:fit_opt} and \Cref{eq:fit_mech},
\begin{equation}\label{eq:fit_opt} T(\Delta) = \left|\frac{s_{\text{out}}}{s_{\text{in}}}\right|^2 = \frac{\left(1 - 2\eta\right)^2 + \frac{4\Delta^2}{\kappa^2}}{1 + \frac{4\Delta^2}{\kappa^2}} \quad \quad \quad \quad \text{(Optical Transmission Spectrum)} \end{equation}
\begin{equation}\label{eq:fit_mech} \mathcal{S}_{PP}[\Omega] = \mathcal{S}^{\text{min}}_{PP} + \frac{\left(\mathcal{S}^{\text{max}}_{PP} - \mathcal{S}^{\text{min}}_{PP}\right)\left(\Gamma_m\Omega_m\right)^2}{\left(\Omega^2 - \Omega_m^2\right)^2 + \left(\Gamma_m \Omega\right)^2} \quad \quad \quad \quad \text{(Power Spectral Density)} \end{equation}
\begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figs_supp/fig2_supp.pdf} \caption{\textbf{a)} Experimental optical transmission spectrum of the cavity; \textbf{b)} Experimental power spectral density (PSD). The best fits of both curves are shown in red.} \label{fig:2} \end{figure} \vspace{0.2cm}
The measured vacuum optomechanical coupling rate was $g_0/2\pi = 16.2$ kHz, following the method of Gorodetsky et al. \cite{Gorodetksy:10}. The function $s_{\text{in}}^2$ can be interpreted as the power reaching the cavity, i.e., $s_{\text{in}}^2 = P_{\text{in}}$, and likewise for the output field, $s_{\text{out}}^2 = P_{\text{out}}$. The power spectral density $\mathcal{S}_{PP}$ (or simply PSD) in \Cref{eq:fit_mech} is in dBm units. The parameter $\eta = \kappa_e/\kappa$ quantifies the coupling between the optical fiber taper and the cavity. From now on we omit the subscript, $s_{\text{in}}(t) \rightarrow s(t)$; since $s_{\text{out}}$ does not appear in any subsequent calculation, there is no ambiguity in writing $s(t)$ for the input field.
\section{} \textbf{Fractional Synchronization.} As mentioned in the article, we also observed several fractional-order synchronizations, i.e., $\rho=p/q$ not an integer. In the main article, however, we showed only the threshold marking the tip of the Arnold tongues; here, we present in \ref{fig:3} the full experimental maps obtained.
\begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figs_supp/fig3_supp.pdf} \caption{Experimental fractional-order Arnold tongue maps. The $p:q$ order is shown in white inside each map, arranged from lower to higher frequencies from left to right.} \label{fig:3} \end{figure}
These maps, however, require a very strong modulation depth $\varepsilon$, which places them outside the weak-perturbation regime of our semi-analytical model, since the dynamics of the system are drastically changed. The importance of these data is to demonstrate the existence of this kind of injection locking in optomechanics and to motivate future studies, for instance on reaching such regimes with weak perturbations under different experimental parameters, or on other cavity designs that enhance these effects. \newpage
\section{} \textbf{Phase Noise Analysis and Routes to Frequency Division Optimization.} In the main text we showed only the phase-noise measurements for the $4:1$ injection; here we show the PN spectral densities for the higher harmonics, $2:1$ to $4:1$, as a function of the RF power, as shown in \ref{fig:13}. For the smallest modulation depth ($-19$ dBm $\approx 1.5\%$) the $2:1$ PN is flat around $-70$ dBc/Hz, in contrast with the $3:1$ and $4:1$ cases.
For small modulations, both the $3:1$ and $4:1$ PN spectra appear to be transitioning from the OMO free-running spectrum to the injection-locked regime characterized by the flat plateau. This is expected: the farther the injection is from $\Omega_0$, the weaker the interaction, so higher modulation depths are needed to reach the same low PN levels.
\begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figs_supp/fig4_supp.pdf} \caption{Phase-noise spectral densities for the cases $2:1$, in \textbf{a)}, $3:1$, in \textbf{b)}, and $4:1$, in \textbf{c)}, for various modulation depths. The black curve is the OMO free-running PN, shown for reference.} \label{fig:13} \end{figure}
A compact way to visualize the evolution of the phase noise as a function of the RF power is to average the phase noise at low offset frequencies, as done in \ref{fig:16}.
\begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figs_supp/fig5_supp.pdf} \caption{Average phase noise around 100 Hz and 1 kHz for high-harmonic injections as a function of the RF power.} \label{fig:16} \end{figure}
We now address possible paths to optimize the injection-locked phase noise. The model used for superharmonic injection phase noise is
\begin{equation} \mathcal{L}_{\text{out}}(\Omega)=\frac{(\Delta\Omega_n/n)^2\mathcal{L}_\text{inj}(\Omega)\cos^2\theta+\Omega^2 \mathcal{L}_\text{free}(\Omega)}{\Delta\Omega_n^2\cos^2\theta+\Omega^2}, \label{eq:PN_full} \end{equation}
\noindent which is a weighted average of $\mathcal{L}_\text{inj}(\Omega)$ and $\mathcal{L}_\text{free}(\Omega)$. If one of these is much smaller than the other, i.e., $\mathcal{L}_\text{inj}(\Omega) \ll \mathcal{L}_\text{free}(\Omega)$, we \textit{cannot} expect $\mathcal{L}_{\text{out}}(\Omega) \approx \mathcal{L}_\text{inj}(\Omega)$, which would be desirable for a frequency divider. We must therefore find ways to improve $\mathcal{L}_\text{free}(\Omega)$ so that it approaches $\mathcal{L}_\text{inj}(\Omega)$. According to \cite{PhysRevA.90.023825} and \cite{Mathai:19}, an optomechanical cavity in the unresolved-sideband regime is dominated by thermomechanical noise, and its phase-noise spectral density is given by \vspace{0.0cm}
\begin{equation}\label{leeson} \mathcal{L}_\text{free}(\Omega) = \left(\frac{2\Gamma_m}{n_x}\right)\left(\bar{n}_\text{th} + \frac{1}{2}\right)\left(\frac{1}{\Omega^2} + \frac{\nu_{\text{om}}'^{2}}{\Omega^2}\frac{1}{\gamma_{\text{om}}'^{2} + \Omega^2} + \frac{\eta_{I}^2}{\gamma_{\text{om}}'^{2} + \Omega^2}\right), \end{equation} \vspace{0.15cm}
\noindent from which three ``general rules'' for phase-noise reduction emerge: reducing $\Gamma_m$ (the mechanical oscillator's linewidth); reducing $\bar{n}_\text{th}$ (the thermal phonon number); and increasing $n_x$ (the coherent phonon number). Quality factors up to $Q_m \approx 10000$, 10 times larger than in our experiment, have been reported in silicon nitride double disks~\cite{Zhang2014}; such large mechanical quality factors were obtained simply by adjusting the thickness of the nitride film. Also, increasing the optical pump power $P_0$ should readily increase the coherent phonon occupation $n_x$. To estimate a feasible amplitude enhancement, we simulate the oscillation amplitude $x$ (in units of its zero-point fluctuation, $x_{\text{zpf}}$) as a function of the optical detuning $\Delta_x$, where the subscript indicates that the static optomechanical shift $x_0$ is already accounted for, i.e., $\Delta_x = \omega_l - \omega_0 - Gx_0$.
Our simulations were performed for a set of optical powers, as shown in \ref{fig:15}, to find the precise scaling of $n_x$ with $P_0$. The numerical simulation details and the parameters used are covered in the next section.
\begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figs_supp/fig6_supp.pdf} \caption{\textbf{a)} Mechanical oscillator amplitude, in units of $x_{\text{zpf}}$, as a function of the optical detuning $\Delta_x$, in units of $\kappa$; \textbf{b)} Maximum oscillator amplitude for a given optical power $P_0$, in mW. The floating numbers indicate the multiple of the experimental pump power needed to reach that level. The $\times$1 case is the numerical simulation using the data obtained in the laboratory, i.e., $P_0 = 480~\mu$W.} \label{fig:15} \end{figure}
The terms $\nu_{\text{om}}'$, $\gamma_{\text{om}}'$ and $\eta_I$ are, respectively, the frequency shift due to amplitude changes at the limit cycle, the damping rate of the amplitude fluctuations, and the transfer from displacement-amplitude noise to photon-number phase noise \cite{PhysRevA.90.023825}. As \Cref{leeson} shows, these terms also contribute to the phase-noise level; however, since our optomechanical cavity operates in the unresolved-sideband regime, we neglect their contributions because the $1/\Omega^2$ term dominates. Therefore, by combining higher quality factors with larger oscillation amplitudes, we could achieve a net 30 dB improvement, paving the way towards future improvements in optomechanical frequency dividers. \newpage
\section{} \textbf{Numerical Simulation.} The numerical simulations of this section are not straightforward to perform, and it is worth discussing carefully how they were carried out. One issue faced when solving the coupled nonlinear ODEs \vspace{-0.3cm}
\begin{equation}\label{eq:optomechanics_1} \dot{a} = i\Delta(t) a - \frac{\kappa}{2}a - iGxa + \sqrt{\kappa_e}s_{0}\sqrt{1 + \varepsilon(t)\sin{\Theta_d(t)}} \quad \quad \text{and} \quad \quad \ddot{x} + \Gamma_{m}\dot{x} + \Omega_{m}^{2}x = -\frac{\hslash G}{m_{\text{eff}}}|a|^{2} \end{equation}
\noindent is the stiff nature of the system, characterized by the need for very small discretization steps despite the relative smoothness of the solutions. To tackle this system, we used the well-known Julia packages DifferentialEquations.jl, FFTW.jl, Sundials.jl and DSP.jl, which implement robust methods for such systems. The simulation was done as follows: we first set an optical detuning function $\Delta(t)$ to sweep linearly from $\Delta_i$ to $\Delta_f$, where the subscripts $i$ and $f$ denote initial and final, respectively. We chose $\Delta_i > 0$ because we want to access the blue-detuned side of the optical mode, where the self-sustained dynamics are naturally accessible. After reaching $\Delta_f$, we wait a few cycles of the mechanical oscillator to make sure the system is in a stationary regime and then turn on the modulation depth $\varepsilon(t)$, which we model as a Heaviside step function. With the modulation depth on, we once again wait a few microseconds for the energy inside the cavity to stabilize, and then finally turn on the RF frequency sweep.
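For concreteness, the schedule just described can be encoded as simple time-dependent functions handed to the integrator; the sketch below uses the detuning endpoints of \ref{table:1} but otherwise illustrative breakpoints (time in units of $1/\Omega_m$, detuning in units of $\kappa$), and the parabolic RF phase is detailed next.
\begin{verbatim}
# Sketch of the control schedule (illustrative breakpoints).
Gm = 1e-3                      # normalized mechanical linewidth (placeholder)
Di, Df = 8.0, 0.35             # detuning endpoints, cf. Table 1
t_ramp, t_md, t_rf = 1e3, 2e3, 3e3

Delta(t) = t < t_ramp ? Di + (Df - Di) * t / t_ramp : Df  # sweep, then hold
md(t)    = t < t_md ? 0.0 : 0.02                          # Heaviside step
# Linear RF sweep, dTheta/dt = 1.0 + 0.1*Gm^2*(t - t_rf) for t > t_rf,
# so the phase Theta(t) is a parabola after t_rf:
Theta(t) = t < t_rf ? t : t + 0.05 * Gm^2 * (t - t_rf)^2
\end{verbatim}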
In the laboratory our RF frequency sweep was linear between $\Omega_{d}^{i}$ and $\Omega_{d}^{f}$ with constant velocity $d\Omega_{d}/dt = \dot{\Omega}_d$, so we model $\Theta_d(t)$ as a parabola, i.e., $d\Theta_d(t)/dt = \Omega_{d}(t) = \Omega_{d}^{i} + \dot{\Omega}_d t$. The value chosen for $\dot{\Omega}_d$ needs to be small to guarantee adiabaticity, which is clearly the case in the laboratory. A good threshold for adiabaticity is to sweep the RF tone over the mechanical resonance (of linewidth $\Gamma_m$) within the mechanical lifetime, $\tau_{m} \approx 2\pi/\Gamma_m$, i.e., $\dot{\Omega}_d \approx \Gamma_m/\tau_{m} \approx \Gamma_m^2/2\pi$. For our purposes, an RF frequency sweep velocity of $\dot{\Omega}_d \approx 0.1\Gamma_m^2$ was enough to ensure adiabaticity. A summary is shown in \ref{fig:4}, highlighting the main aspects of the dynamics. \vspace{0.0cm}
\begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figs_supp/fig7_supp.pdf} \caption{\textbf{a)} Complete time-domain simulation, showing the important stages of the synchronization. The purple region is the transient in which the mechanical oscillator gains amplitude. The two vertical dashed black lines show exactly where we turned on the modulation $\varepsilon$ and the RF sweep $\dot{\Omega}_d$. The pink region is where injection locking occurs; \textbf{b)} Transient region of \ref{fig:4}(a) showing the evolution from non-oscillating cavity to self-sustained oscillation; \textbf{c)} Phase space of \ref{fig:4}(b); \textbf{d)} Temporal trace of the black part of \ref{fig:4}(b).} \label{fig:4} \end{figure} \vspace{-0.0cm}
These are the raw data obtained from the simulation. To extract the Arnold tongues from them, we can take the length of the synchronized region of \ref{fig:4}(a) -- the pink region of the plot -- for each modulation depth $\varepsilon$. We must clarify, however, how we find this pink region: the specific point at which synchronization occurs is somewhat blurred in the time domain, which is why we construct a spectrogram, i.e., the Fourier transform of the signal as a function of time, as shown in \ref{fig:5}(a). Moreover, since we know the value of the driving RF frequency $\Omega_d$ at each time $t$, we can plot the spectrogram directly as a function of the RF frequency, which is what was done in \ref{fig:5}(a). One way to obtain the synchronized region is to take the horizontal slice of this spectrogram just above the mechanical oscillation frequency $\Omega_0/2\pi$, indicated by the horizontal dashed red line and plotted in \ref{fig:5}(b). A second way, more common in the literature, is to plot the difference between the driving frequency and the oscillator's frequency ($\Omega_d - \Omega_0$) as a function of the drive frequency itself (or, in our case, the driving frequency minus the constant natural mechanical frequency $\Omega_m$), as shown in \ref{fig:5}(c).
\begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{figs_supp/fig8_supp.pdf} \caption{\textbf{a)} Spectrogram of the transmission signal (\ref{fig:4}(a)) after the RF sweep is turned on. The dashed black line is the value of the drive frequency.
The vertical bold black line is the value of the mechanical mode frequency; \textbf{b)} Horizontal dashed red slice of the spectrogram shown in \textbf{a)}; \textbf{c)} Typical synchronization plot showing the mismatch between the driving frequency and the bare oscillation frequency, making clear where they become equal, which defines the synchronized state. We used $\varepsilon = 2\%$ for these simulations.} \label{fig:5} \end{figure}
The Arnold tongues constructed as explained above are shown in \ref{fig:6}, already presented in the article as Fig. 3(a). As we can see, the simulation shows a larger synchronized region for the case $p:q = 2:1$ than for $1:1$, as well as a fairly wide $4:1$ AT but a narrow $3:1$ one, the same trend seen in the experimental data. \ref{table:1} shows the parameters used in the simulations, and \ref{fig:7} shows the conversion from RF power, in dBm, to modulation depth $\varepsilon$, in $\%$, which is based on experimental data. The actual formula for the RF power $P_{RF}$ is shown inside the plot. \vspace{-0.0cm}
\begin{figure}[ht] \centering \includegraphics[width=0.95\linewidth]{figs_supp/fig9_supp.pdf} \caption{Simulated Arnold tongues using injection frequency $\Omega_d = p\Omega_{0}/q$ for the cases $p = \{1,2,3,4\}$ and $q=1$, in order, from \textbf{a)} to \textbf{d)}. To simulate these maps we used \Cref{eq:optomechanics_1}, which we will always call the ``full model''. The dashed black line in \textbf{a)} is the region analyzed in \ref{fig:5}(c).} \label{fig:6} \end{figure} \vspace{0.1cm} \newpage
\begin{table}[ht] \begin{minipage}[ht]{0.4\linewidth} \centering \begin{tabular}{|c|c|} \hline \textbf{Parameters} & \textbf{Values} \\ \hline $P_{0}$ & 425 $\mu$W \\ \hline $\lambda$ & 1560 nm \\ \hline $\Delta_i$ & 8$\kappa$ \\ \hline $\Delta_f$ & 0.35$\kappa$ \\ \hline $d\Delta/dt$ & 10$^2 \Gamma_m^2$ \\ \hline $d\Omega_d/dt$ & 0.075$\Gamma_m^2$ \\ \hline $\eta$ & 0.75 \\ \hline $\kappa/2\pi$ & 1.16 GHz \\ \hline $Q_{\text{opt}}$ & 165000 \\ \hline $\Omega_m/2\pi$ & 31.86 MHz \\ \hline $\Gamma_m/2\pi$ & 25.37 kHz \\ \hline $Q_m$ & 1255 \\ \hline $g_0/2\pi$ & 16.2 kHz \\ \hline \end{tabular} \caption{Parameter values used in all simulations, unless explicitly stated otherwise.} \label{table:1} \end{minipage}\hfill \begin{minipage}[ht]{0.56\linewidth} \centering \includegraphics[width=0.8\linewidth]{figs_supp/fig10_supp.pdf} \captionof{figure}{Conversion from modulation depth, in percent, to RF power, in dBm. The inset shows the actual formula, in S.I. units, of this graph.
$V_\pi =$ 5.5 V and $R =$ 50 $\Omega$.} \label{fig:7} \end{minipage} \end{table}
\section{} \textbf{Semi-Analytical Model.} The Hamiltonian of our system, disregarding dissipation, can be modeled as \vspace{-0.35cm}
\begin{equation}\label{eq:hamiltonian} H = \hslash\left(\omega_{0} + Gx\right)a^{\dagger}a + \frac{p^2}{2m_{\text{eff}}} + \frac{m_{\text{eff}}\Omega_m x^2}{2} + i\hslash\sqrt{\kappa_e}s_0\left(e^{-i\omega_l t} a^{\dagger} - e^{i\omega_l t}a\right) , \end{equation}
\noindent in which $\hslash$ is the reduced Planck constant, $\omega_{0}$ is the unperturbed angular frequency of the optical mode, $a^{\dagger}$ and $a$ are the creation and annihilation operators for photons with energy $\hslash\omega_{0}$, respectively, $G$ is the first-order coefficient of the Taylor expansion of $\omega(x) = \omega_0 + Gx$ evaluated at the mechanical equilibrium position $\left(\text{i.e., } G = (d\omega/dx)|_{x = 0}\right)$, $p$ and $x$ are the momentum and position operators of the mechanical oscillator, respectively, $\Omega_{m}$ is the unperturbed angular frequency of the mechanical mode, $m_{\text{eff}}$ is the mechanical oscillator effective mass, $i$ is the imaginary unit, $\kappa_e$ is the external optical coupling rate, $s_{0}^2$ is the input power and $\omega_l$ is the optical pump angular frequency. Since we are not interested in quantum phenomena, we can study the dynamics through the mean values of these operators, and we can also introduce the optical and mechanical losses $\kappa$ and $\Gamma_m$ directly in the equations of motion \cite{Aspelmeyer2014} as \vspace{-0.4cm}
\begin{equation}\label{eq:optomechanics_2} \dot{a} = i\Delta a - \frac{\kappa}{2}a - iGxa + \sqrt{\kappa_e}s_{0} \quad \quad \text{and} \quad \quad \ddot{x} + \Gamma_{m}\dot{x} + \Omega_{m}^{2}x = -\frac{\hslash G}{m_{\text{eff}}}|a|^{2}, \end{equation}
\noindent where we have already moved $a$ to the frame rotating at the laser frequency, $a \rightarrow ae^{-i\omega_l t}$, to make the equation autonomous. We define the bare optical detuning $\Delta = \omega_l - \omega_{0}$ as the difference between the optical pump frequency and the unperturbed optical mode frequency. To introduce the amplitude modulation used in the experiment we simply multiply $s_{0}$ by a factor $\sqrt{1 + \varepsilon\sin{\Theta_d(t)}}$ in \Cref{eq:optomechanics_2}, i.e., $s(t) = s_0\sqrt{1 + \varepsilon\sin{\Theta_d(t)}}$, in which $\varepsilon$ is the modulation depth, related to the RF power as shown in \ref{fig:7}. The term $\Theta_d(t)$ is the phase of this modulation, which most of the time will simply be $\Omega_d t$. Simulating \Cref{eq:optomechanics_2} as written requires knowing $G$ and $m_{\text{eff}}$ but, because they are normalization dependent, we avoid this by using $g_0 = Gx_{\text{zpf}}$, the optomechanical single-photon coupling strength, and $x_{\text{zpf}} = \sqrt{\hslash/2m_{\text{eff}}\Omega_m}$, the zero-point fluctuation amplitude of $x$. New normalizations will be used to study the self-sustained oscillations, given by \vspace{-0.2cm}
\begin{equation}\label{eq:normalization_1} x(t) = x_0 + \delta x(t) = \left(\frac{\kappa}{2g_0}\right)\sqrt{\frac{\hbar}{2m_{\text{eff}}\Omega_m}}\left(\widetilde{x}_{0}+\widetilde{x}(t)\right) \quad \quad \text{and} \quad \quad t = \frac{\widetilde{t}}{\Omega_m}, \end{equation} \vspace{-0.1cm}
\noindent where all tilde variables are dimensionless.
The terms $x_0$ and $\delta x(t)$ are the DC and AC components of $x(t)$, respectively, with $\widetilde{x}_0$ and $\widetilde{x}(t)$ their dimensionless versions. We can then rewrite \Cref{eq:optomechanics_2} as \vspace{-0.2cm}
\begin{equation}\label{eq:optomechanics_3} \frac{da}{d\widetilde{t}} = -\frac{\kappa}{2\Omega_m}a + i\left(\frac{\Delta_x}{\Omega_m}-\frac{\kappa}{2\Omega_m}\widetilde{x}\right)a + \sqrt{\frac{\kappa_e}{\Omega_m}}\frac{s}{\sqrt{\Omega_m}} \quad \quad \text{and} \quad \quad \frac{d^2\widetilde{x}}{d\widetilde{t}^2} + \frac{1}{Q_m}\frac{d\widetilde{x}}{d\widetilde{t}} + \widetilde{x} = -\widetilde{x}_0 - \frac{\mathcal{C}_0}{Q_m}|a|^{2}, \end{equation}
\noindent where the new variables $\Delta_x$, $Q_m$ and $\mathcal{C}_0$ are the optical detuning corrected by the static optomechanical shift, $\Delta_x = \omega_l - \omega_0 - Gx_0$, the mechanical oscillator quality factor, $Q_m = \Omega_m/\Gamma_m$, and the single-photon cooperativity, $\mathcal{C}_0 = 4g_0^2/\Gamma_m\kappa$, respectively. From here on, all derivatives are taken with respect to $\widetilde{t}$ unless explicitly stated otherwise. The motivations for constructing a semi-analytical model in this article are (i) to show that each term $F_n$ of the power expansion of the optical force is mainly responsible for the $p:q$ Arnold tongue width $\Delta\Omega(p,q)$, (ii) to obtain a semi-analytical formula for these $\Delta\Omega(p,q)$, (iii) to show that the optical detuning greatly changes the synchronization region, (iv) to show that the symmetry-breaking term $F_2$, which is neglected in many articles, is actually crucial for the dynamics, and (v) to explain and predict the sidebands around the synchronization region. We start by uncoupling \Cref{eq:optomechanics_3} using adiabatic considerations: our optomechanical cavity has a mechanical linewidth $\Gamma_m$ much smaller than the optical linewidth $\kappa$, as well as a mechanical frequency $\Omega_m$ much smaller than $\kappa$, the so-called unresolved-sideband regime.
We can then assume that $a(\widetilde{t})$ is always in equilibrium with $\widetilde{x}(\widetilde{t} - \widetilde{\tau})$, where $\widetilde{\tau}$ is a dimensionless time delay that we will deduce later in \Cref{eq:tau}, so we can write $a(\widetilde{t})$ as \vspace{-0.4cm}
\begin{equation}\label{eq:a_delayed} a(\widetilde{t}) \approx \sqrt{\frac{\kappa_e}{\kappa}}\frac{s(\widetilde{t})}{\sqrt{\kappa}}\frac{2}{1 -i\left[\frac{2\Delta_x}{\kappa} - \widetilde{x}(\widetilde{t}-\widetilde{\tau})\right]}, \end{equation}
\noindent and then the whole system can be analyzed through a single equation, \vspace{-0.3cm}
\begin{equation} \ddot{\widetilde{x}}(\widetilde{t}) +\frac{\dot{\widetilde{x}}(\widetilde{t})}{Q_m} + \widetilde{x}(\widetilde{t}) = -\widetilde{x}_0 - \frac{\mathcal{C}_0}{Q_m}\left(\frac{2\kappa_e}{\kappa}\right)\left(\frac{2s_0^2}{\kappa}\right)\left(\frac{1 + \varepsilon\sin{\Theta_d(\widetilde{t})}}{1 + \left[\frac{2\Delta_x}{\kappa} - \widetilde{x}(\widetilde{t}-\widetilde{\tau})\right]^2}\right), \end{equation} \vspace{-0.3cm}
\noindent where we define $f(\widetilde{t})$ and $f_0$ as \vspace{-0.4cm}
\begin{equation} f(\widetilde{t}) = f_0\left[1 + \varepsilon\sin{\Theta_d(\widetilde{t})}\right] \quad \text{and} \quad f_0 = \frac{\mathcal{C}_0}{Q_m}\left(\frac{2\kappa_e}{\kappa}\right)\left(\frac{2s_0^2}{\kappa}\right), \end{equation}
\noindent which allows us to rewrite the mechanical equation compactly as \vspace{-0.4cm}
\begin{equation}\label{eq:x_delayed} \ddot{\widetilde{x}}(\widetilde{t}) +\frac{\dot{\widetilde{x}}(\widetilde{t})}{Q_m} + \widetilde{x}(\widetilde{t}) = -\widetilde{x}_0 - \frac{f(\widetilde{t})}{1 + \left[\frac{2\Delta_x}{\kappa} - \widetilde{x}(\widetilde{t}-\widetilde{\tau})\right]^2}. \end{equation}
However, \Cref{eq:x_delayed} is still very complicated because it is a non-autonomous delay differential equation, so we expand its RHS in a power series of $\widetilde{x}(\widetilde{t}-\widetilde{\tau})$ as \vspace{-0.3cm}
\begin{equation}\label{eq:RHS_expansion} \frac{1}{1 + \left[\frac{2\Delta_x}{\kappa} - \widetilde{x}(\widetilde{t}-\widetilde{\tau})\right]^2} = F_{0} + F_{1}\widetilde{x}(\widetilde{t}-\widetilde{\tau}) + F_{2}\widetilde{x}^{2}(\widetilde{t}-\widetilde{\tau}) + F_{3}\widetilde{x}^{3}(\widetilde{t}-\widetilde{\tau}) + ... \end{equation} \vspace{0.0cm}
\noindent in which the explicit forms of the first coefficients (already shown in Fig. 1(b) of the article) are \vspace{-0.4cm}
\begin{equation}\label{eq:coefficients} F_{0} = \frac{1}{1 + \frac{4\Delta_x^2}{\kappa^2}} \quad , \quad F_1 = \frac{2\left(\frac{2\Delta_x}{\kappa}\right)}{\left(1 + \frac{4\Delta_x^2}{\kappa^2}\right)^2} \quad , \quad F_2 = \frac{\left(\frac{12\Delta_x^2}{\kappa^2} - 1\right)}{\left(1 + \frac{4\Delta_x^2}{\kappa^2}\right)^3} \quad \text{and} \quad F_3 = \frac{4\left(\frac{2\Delta_x}{\kappa}\right)\left(\frac{4\Delta_x^2}{\kappa^2} - 1\right)}{\left(1 + \frac{4\Delta_x^2}{\kappa^2}\right)^4}, \end{equation} \vspace{0.0cm}
\noindent which shows that a large normalized detuning ($\Delta_x/\kappa\gg 1$) leads to negligible values, as each $F_{n+1}$ term decreases faster than $F_{n}$ as a function of $\Delta_x/\kappa$, i.e., \vspace{-0.4cm}
\begin{equation} \frac{F_{n}}{F_{n+1}} \sim \left(\frac{\Delta_x}{\kappa}\right). \end{equation}
The values of the $F_n$ in our experiment were found to be $F_0=0.6711$, $F_1=0.6306$, $F_2=0.1421$ and $F_3=-0.2897$.
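As a consistency check on these numbers, evaluating \Cref{eq:coefficients} at $2\Delta_x/\kappa = 0.7$, i.e., $\Delta_x = 0.35\kappa$ (the final detuning of \ref{table:1}, which we assume here coincides with the corrected detuning), reproduces the four quoted values:
\begin{verbatim}
# Taylor coefficients of Eq. (coefficients) at d = 2*Delta_x/kappa = 0.7.
d  = 0.7
D  = 1 + d^2
F0 = 1 / D                        # = 0.6711
F1 = 2 * d / D^2                  # = 0.6306
F2 = (3 * d^2 - 1) / D^3          # = 0.1421
F3 = 4 * d * (d^2 - 1) / D^4      # = -0.2897
\end{verbatim}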
Substituting \Cref{eq:RHS_expansion} into \Cref{eq:x_delayed} reveals the nonlinear nature of the optical feedback on the mechanical oscillator, \vspace{-0.4cm}
\begin{equation}\label{eq:x_delayed_approx} \ddot{\widetilde{x}}(\widetilde{t}) +\frac{\dot{\widetilde{x}}(\widetilde{t})}{Q_m} + \widetilde{x}(\widetilde{t}) = -\widetilde{x}_0 - f(\widetilde{t})\left[F_{0} + F_{1}\widetilde{x}(\widetilde{t}-\widetilde{\tau}) + F_{2}\widetilde{x}^2(\widetilde{t}-\widetilde{\tau}) + F_{3}\widetilde{x}^3(\widetilde{t}-\widetilde{\tau})\right]. \end{equation}
To remove the delay dependence we expand $\widetilde{x}(\widetilde{t}-\widetilde{\tau})$ in powers of $\widetilde{\tau}$ as \vspace{-0.4cm}
\begin{equation}\label{eq:delay_expansion} \widetilde{x}^{n}(\widetilde{t}-\widetilde{\tau}) = \widetilde{x}^{n}(\widetilde{t}) - n\widetilde{\tau} \widetilde{x}^{n-1}(\widetilde{t})\dot{\widetilde{x}}(\widetilde{t}) + O(\widetilde{\tau}^2), \end{equation}
\noindent where we neglect $O(\widetilde{\tau}^2)$ since $\widetilde{\tau}^2$ is of order $O(\Omega_m^2/\kappa^2)$, as we verify below. We can then group all these terms in an arrangement that highlights how far this system is from the ideal harmonic oscillator, as shown in \Cref{eq:x_delayed_final}, \vspace{-0.3cm}
\begin{equation}\label{eq:x_delayed_final} \ddot{\widetilde{x}} +\left[\frac{1}{Q_m} - \widetilde{\tau}f(\widetilde{t})\left(F_{1} + 2F_{2}\widetilde{x} + 3F_{3}\widetilde{x}^2\right)\right]\dot{\widetilde{x}} + \left[1 + f(\widetilde{t})\left(F_{1} + F_{2}\widetilde{x} +F_{3}\widetilde{x}^2\right)\right]\widetilde{x} = -f_1(\widetilde{t})F_{0} - \left(f_0F_0+\widetilde{x}_0\right), \end{equation}
\noindent where $f_1(\widetilde{t})$ is the AC component of $f(\widetilde{t})$, i.e., $f(\widetilde{t}) = f_0 + f_1(\widetilde{t})$ with $f_1(\widetilde{t}) = \varepsilon f_0\sin{\Theta_d(\widetilde{t})}$. The last term of \Cref{eq:x_delayed_final}, in parentheses, must vanish because it is the only DC component of the whole equation; as one can verify, solving $f_0F_0+\widetilde{x}_0=0$ returns the same static correction to the mechanical displacement $x(t)$ as linearized optomechanics. \Cref{eq:x_delayed_final} finally has a familiar form, very similar to Equation 8 of reference \cite{PhysRevE.91.032910}, whose $\zeta$ and $f_\text{e}$ are related to our $F_1$ and $F_0$, although we do not include temperature dynamics here. The generalization of \Cref{eq:x_delayed_final} up to $O(\widetilde{\tau}^2)$ is given by \vspace{-0.3cm}
\begin{equation} \ddot{\widetilde{x}} +\left[\frac{1}{Q_m} - \widetilde{\tau} f(\widetilde{t})\sum_{n=1}^{n_{\text{max}}}nF_n\widetilde{x}^{n-1}\right]\dot{\widetilde{x}} + \left[1 + f(\widetilde{t})\sum_{n=1}^{n_{\text{max}}}F_n\widetilde{x}^{n-1}\right]\widetilde{x} = -f_1(\widetilde{t})F_{0}, \end{equation} \vspace{-0.1cm}
\noindent but we will not analyze this general system; we stick to the case $n_{\text{max}}=3$. As a final approximation, we neglect terms of order $O(\widetilde{\tau}\varepsilon)$, which is justified as long as $\varepsilon$ is kept small, knowing a priori that $\widetilde{\tau}$ is already small.
We then arrive at \Cref{eq:x_final}, \vspace{-0.4cm}
\begin{equation}\label{eq:x_final} \ddot{\widetilde{x}} +\left[\frac{1}{Q_m} - \widetilde{\tau} f_0\left(F_{1} + 2F_{2}\widetilde{x} + 3F_{3}\widetilde{x}^2\right)\right]\dot{\widetilde{x}} + \left[1 + \left(f_0 + f_1\right)\left(F_{1} + F_{2}\widetilde{x} +F_{3}\widetilde{x}^2\right)\right]\widetilde{x} = -f_1F_{0}, \end{equation}
\noindent an ODE in which every term proportional to $\widetilde{x}$ carries a parametric excitation $f_1$, while the terms proportional to $\dot{\widetilde{x}}$ do not. The only missing ingredient before we can study \Cref{eq:x_final} is the expression $\widetilde{\tau} = \widetilde{\tau}(\Delta_x)$. To find it, note (i) that the term $f_0 F_1$ acts as a constant frequency shift, so we can associate it with the optical spring effect, and (ii) that the term $-\widetilde{\tau} f_0 F_1$ acts as a constant change in the mechanical linewidth, so we can associate it with optical cooling/heating. Making this connection with the linearized optomechanical equations \cite{Aspelmeyer2014}, we can identify an analytic expression for $\widetilde{\tau}$, because from our model we have $\delta\Gamma_{m}^{\text{linear}} = -\widetilde{\tau} f_0 F_{1}$ and, from the linearized optomechanical equations, \vspace{-0.3cm}
\begin{equation}\label{eq:gamma_lin} \delta\Gamma_{m}^{\text{linear}} = \frac{\mathcal{C}_0}{Q_m}\left(\frac{2\kappa_e}{\kappa}\right)\left(\frac{2s_0^2}{\kappa}\right)\left(\frac{1}{1 + \left(\frac{2\Delta_x}{\kappa}\right)^2}\right)\left(\frac{1}{1 + \left(\frac{2\Delta_x}{\kappa} + \frac{2\Omega_m}{\kappa}\right)^2} - \frac{1}{1+\left(\frac{2\Delta_x}{\kappa} - \frac{2\Omega_m}{\kappa}\right)^2}\right), \end{equation} \vspace{-0.3cm}
\begin{figure}[ht] \begin{minipage}[b]{0.56\linewidth} \justify \normalsize{remembering that $\delta\Gamma_m^{\text{linear}}$ is in units of $\Omega_m$. Solving for $\widetilde{\tau}$,} \vspace{-0.0cm}
\begin{equation}\label{eq:tau} \widetilde{\tau} = \frac{1}{2\left(\frac{2\Delta_x}{\kappa}\right)}\left[\frac{1 + \left(\frac{2\Delta_x}{\kappa}\right)^2}{1+\left(\frac{2\Delta_x}{\kappa} - \frac{2\Omega_m}{\kappa}\right)^2} - \frac{1 + \left(\frac{2\Delta_x}{\kappa}\right)^2}{1+\left(\frac{2\Delta_x}{\kappa} + \frac{2\Omega_m}{\kappa}\right)^2}\right]. \end{equation} \vspace{0.3cm}
We emphasize that our $\widetilde{\tau}$ should, rigorously, be written as $\widetilde{\tau}_{\text{linear}}$, because it is only the first-order correction of $\widetilde{\tau}$. Nevertheless, the function $\widetilde{\tau} = \widetilde{\tau}(\Delta_x)$ has every property we expect: (i) it is positive for every value of $\Delta_x$, (ii) it is of order $\Omega_m/\kappa$, and (iii) it is consistent with the absence of mechanical response far from resonance, i.e., $\widetilde{\tau}(|\Delta_x| \gg \kappa) = 0$, as shown in \ref{fig:8} using our experimental parameters and also for three other cases. \end{minipage}\hfill \begin{minipage}[b]{0.4\linewidth} \centering \includegraphics[width=\linewidth]{figs_supp/fig11_supp.pdf} \caption{Dimensionless linear mechanical relaxation time $\widetilde{\tau}$ (in units of $\Omega_m/\kappa$) as a function of the optical detuning $\Delta_x$ (in units of $\kappa$).} \label{fig:8} \end{minipage} \end{figure}
The Arnold tongue simulations using the semi-analytical model of \Cref{eq:x_final} are shown in \ref{fig:9}, and the agreement with \ref{fig:6} is striking.
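Since \Cref{eq:tau} is a one-line function of the normalized detuning, it is straightforward to evaluate; the sketch below uses the ratio $2\Omega_m/\kappa$ computed from \ref{table:1}, purely for illustration.
\begin{verbatim}
# Dimensionless delay of Eq. (tau); d = 2*Delta_x/kappa, m = 2*Omega_m/kappa.
m = 2 * 31.86e6 / 1.16e9     # Omega_m/2pi = 31.86 MHz, kappa/2pi = 1.16 GHz
tau(d) = (1 + d^2) / (2 * d) *
         (1 / (1 + (d - m)^2) - 1 / (1 + (d + m)^2))
# tau(0.7) ~ 0.074: positive, of order Omega_m/kappa ~ 0.027,
# and tau(d) -> 0 for |d| >> 1, as stated above.
\end{verbatim}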
Before closing this section we want to show how strong the connection between the $F_n$'s and the AT widths $\Delta\Omega(n+1,1)$ really is. To that end, we simulated \Cref{eq:x_final} again, but now keeping only one $F_n$ at a time and recording particular spectrograms rather than the AT maps, as shown in \ref{fig:10}. \newpage
\begin{figure}[ht] \centering \includegraphics[width=0.95\linewidth]{figs_supp/fig12_supp.pdf} \caption{Simulated Arnold tongues using injection frequency $\Omega_d = p\Omega_{0}/q$ for the cases $p = \{1,2,3,4\}$ and $q=1$, in order, from \textbf{a)} to \textbf{d)}. To simulate these maps we used \Cref{eq:x_final}.} \label{fig:9} \end{figure}
\begin{figure}[h!] \centering \includegraphics[width=\linewidth]{figs_supp/fig13_supp.pdf} \caption{These simulations were done keeping only one $\varepsilon F_n$ term in \Cref{eq:x_final}; e.g., the first row of this matrix of panels corresponds to $\varepsilon F_1 = \varepsilon F_2 = \varepsilon F_3 = 0$ while $\varepsilon F_0 \neq 0$, with injection locking performed for all $p=1-4$. The same logic holds for the other rows, as indicated on the right side of the figure. In these simulations we used $\varepsilon = 5\%$.} \label{fig:10} \end{figure}
As we can see, simulating \Cref{eq:x_final} confirms that each AT depends mainly on its parametric term $\varepsilon F_n$, since each of these terms alone almost reproduces the whole dynamics of the system in its specific region. This hierarchical dependence is explicitly calculated in \Cref{semianalyticAT}. \newpage
\section{} \textbf{The averaging method of Krylov-Bogoliubov-Mitropolsky (KBM).} While the numerical simulation predicts many features and gives us many insights into the observed data, it does not provide a direct prediction of the synchronization behavior. To pursue further analytical insight, we resort to the KBM method to derive amplitude and phase equations describing the optomechanical oscillator enslaved by the modulated RF drive. We start by introducing a new dimensionless time $T$ and displacement $y$ given by \vspace{-0.3cm}
\begin{equation}\label{eq:normalization_yT} \frac{\widetilde{x}}{y} = L_{\widetilde{x}} = -\frac{F_2}{3F_3} + \sqrt{\left(\frac{F_2}{3F_3}\right)^2 + \left(\frac{1-f_0F_1Q_m\widetilde{\tau}}{3f_0F_3Q_m\widetilde{\tau}}\right)} \quad \quad \text{and} \quad \quad \frac{\widetilde{t}}{T} = L_{\widetilde{t}} = \frac{1}{\sqrt{1 + f_0 F_1}}. \end{equation}
At first glance this may seem an awkward choice of normalization but, as we discuss below, it has a clear physical interpretation. \Cref{eq:normalization_yT} is the positive root of the coefficient of $\dot{\widetilde{x}}(\widetilde{t})$ in \Cref{eq:x_final}, which brings the amplitude of $y$ near the value of the limit cycle of a van der Pol oscillator, $\approx O(1)$; i.e., we are renormalizing $\widetilde{x}$ by the positive solution of \vspace{-0.5cm}
\begin{equation}\label{eq:x_dot_coef} \frac{1}{Q_m} - \widetilde{\tau} f_0\left(F_{1} + 2F_{2}\widetilde{x} + 3F_{3}\widetilde{x}^2\right) = 0. \end{equation}
The choice of the new time scale makes the oscillation frequency, with the optical spring effect already accounted for, of order $O(1)$.
After these normalizations, \Cref{eq:x_final} becomes \vspace{-0.3cm}
\begin{multline}\label{eq:master_y} \frac{d^2y}{dT^2}-\mu(1-y)(1+\sigma y)\frac{dy}{dT} + \left[1+\varepsilon\alpha\sin{\left(\omega T\right)}\right]y +\\ + \beta\left[1+\varepsilon\sin{\left(\omega T\right)}\right]y^2 + \gamma\left[1+\varepsilon\sin{\left(\omega T\right)}\right]y^3 = F\varepsilon\sin{\left(\omega T\right)}, \end{multline}
\noindent with new dimensionless parameters defined as \vspace{-0.4cm}
\begin{equation} \omega = \frac{\Omega_{d}}{\Omega_m}L_{\widetilde{t}}, \quad \quad \quad \mu = \left(\widetilde{\tau}f_0F_1Q_m - 1\right)L_{\widetilde{t}}, \quad \quad \quad \sigma = 1+\frac{2\widetilde{\tau}f_0F_2Q_m}{\widetilde{\tau}f_0F_1Q_m - 1}L_{\widetilde{x}}, \end{equation} \vspace{-0.4cm}
\begin{equation*} \alpha = f_0F_1 L_{\widetilde{t}}^2, \quad \quad \quad \beta = f_0F_2 L_{\widetilde{x}}L_{\widetilde{t}}^2, \quad \quad \quad \gamma = f_0F_3L^2_{\widetilde{x}}L_{\widetilde{t}}^2, \quad \quad \text{and} \quad \quad F = -f_0F_0L_{\widetilde{t}}^2/L_{\widetilde{x}}. \end{equation*} \vspace{-0.3cm}
It is evident that every parametric term $\alpha$, $\beta$, $\gamma$ and $F$ is proportional to $f_0$, regardless of the convoluted factors $L_{\widetilde{t}}$ and $L_{\widetilde{x}}$, meaning that a higher optical pump intensity enhances these terms. Moreover, each of these terms is proportional to a single $F_n$, making clear where each nonlinearity resides. This model reduces to the one used by Shah et al. in \cite{PhysRevLett.114.113602} if $\mu = \gamma = F = 0$ and the autonomous quadratic term $\beta y^2$ is neglected (the term arising from an odd-power potential, which breaks the parity symmetry of the problem), giving \vspace{-0.3cm}
\begin{equation} \frac{d^2y}{dT^2} + \left[1 + \alpha\varepsilon\sin{\left(\omega T\right)}\right]y + \beta\varepsilon\sin{\left(\omega T\right)}y^2 = 0, \end{equation}
\noindent but to cast it in exactly the form used there we change the independent variable, $\omega T \rightarrow U + \frac{\pi}{2}$, obtaining \vspace{-0.4cm}
\begin{equation} \frac{d^2y}{dU^2} + \left[\frac{1}{\omega^2} + \frac{\alpha}{\omega^2}\varepsilon\cos{\left(U\right)}\right]y + \frac{\beta}{\omega^2}\varepsilon\cos{\left(U\right)}y^2 = 0, \end{equation}
\noindent from which we identify the parameters as \vspace{-0.4cm}
\begin{equation} \delta^{\text{Shah}} = \frac{1}{\omega^2} \quad , \quad D_1^{\text{Shah}} = \frac{\alpha}{\omega^2} \quad , \quad D_2^{\text{Shah}} = \frac{\beta}{\omega^2} \quad , \quad \gamma^{\text{Shah}} = \varepsilon. \end{equation}
However, we will not use the multiple-scales method to study synchronization or to find bifurcations; our analysis is based on the KBM averaging method. The parameter values obtained from the simulations are $\mu = 9.813 \times 10^{-4}$, $\sigma = 1.665$, $\alpha = 2.383 \times 10^{-2}$, $\beta = 4.396 \times 10^{-3}$, $\gamma = -7.340 \times 10^{-3}$ and $F = -3.098 \times 10^{-2}$. We begin our KBM analysis from \Cref{eq:master_y}, which is a nonlinear oscillator of the form \vspace{-0.3cm}
\begin{equation}\label{eq:y_oscillator_form} \frac{d^2y}{dT^2} + y = K\left(T, y,\frac{dy}{dT}\right), \end{equation}
\noindent where $K$ is small compared to $y$. If $K(T,y,\frac{dy}{dT})=0$, we would have the ideal harmonic oscillator with solution $y = A\sin{\left(T+\Phi\right)}$ for any choice of constants $A$ and $\Phi$.
If we now try to solve \Cref{eq:y_oscillator_form} with a slowly varying amplitude and phase $(A(T),\Phi(T))$ as ansatz, i.e., \vspace{-0.3cm}
\begin{equation} y = A(T)\sin{\left[T+\Phi(T)\right]} \quad \quad \text{and} \quad \quad \frac{dy}{dT} = A(T)\cos{\left[T+\Phi(T)\right]}, \end{equation}
\noindent it can be shown \cite{jackson1989perspectives} that this system has the general solution given by \Cref{eq:pre_KBM}, \vspace{-0.2cm}
\begin{equation}\label{eq:pre_KBM} \begin{cases} \frac{dA}{dT} = \cos{\left(\phi\right)}K\left( T, A\sin{\phi}, A\cos{\phi}\right) \\ \\ \phi(T) = T+\Phi(T) \\ \\ \frac{d\Phi}{dT} = - \frac{\sin{\left(\phi\right)}}{A}K\left(T, A\sin{\phi}, A\cos{\phi}\right) \end{cases} \end{equation} \vspace{0.0cm}
The KBM method enters at this point: we average these equations over one period, replacing the integral in $T$ by an integral in $\phi$ under the assumption that $d\phi \approx dT$, which is correct to zeroth order in $\Phi(T)$, so \vspace{-0.2cm}
\begin{equation}\label{eq:KBM_A} \left<\frac{dA}{dT}\right>_{T} \approx \left<\frac{dA}{dT}\right>_{\phi} = \frac{1}{2\pi}\int_{0}^{2\pi}\cos{\left(\phi\right)}K\left(\phi-\Phi, A\sin{\phi}, A\cos{\phi}\right)d\phi \end{equation}
\begin{equation}\label{eq:KBM_Phi} \left<\frac{d\Phi}{dT}\right>_{T} \approx \left<\frac{d\Phi}{dT}\right>_{\phi} = -\frac{1}{2\pi A}\int_{0}^{2\pi}\sin{\left(\phi\right)}K\left(\phi-\Phi, A\sin{\phi}, A\cos{\phi}\right)d\phi \end{equation}
\noindent If our system were autonomous, the integrals in \Cref{eq:KBM_A} and \Cref{eq:KBM_Phi} would be relatively easy to evaluate; however, the external drive makes the system non-autonomous. To carry out the integrals, we hold $\Phi$ constant during the integration, arguing that $\Phi$ is a slowly varying function of $T$. The general form of $K\left(T, y, \frac{dy}{dT}\right)$ for our system can be split into two contributions, one autonomous and one non-autonomous, i.e., \vspace{-0.4cm}
\begin{equation} K\left(T, y, \frac{dy}{dT}\right) = K_{\text{auto}}\left(y, \frac{dy}{dT}\right) + K_{\text{non-auto}}\left(T, y, \frac{dy}{dT}\right) \end{equation}
\noindent in which each part is written as \vspace{-0.4cm}
\begin{equation} K_{\text{auto}}\left(y, \frac{dy}{dT}\right) = \mu\left(1-y\right)(1+\sigma y)\frac{dy}{dT} - \beta y^2 - \gamma y^3 \end{equation} \vspace{-0.3cm}
\begin{equation} \text{and} \quad \quad K_{\text{non-auto}}\left(T, y, \frac{dy}{dT}\right) =\varepsilon\sin(\omega T)\left(F - \alpha y - \beta y^2 - \gamma y^3\right).
\end{equation}
Substituting $K\left(T, y, \frac{dy}{dT}\right)$ into \Cref{eq:KBM_A} and \Cref{eq:KBM_Phi} and performing the integration over one period of $\phi$, we obtain the amplitude and phase differential equations \vspace{0.0cm}
\begin{multline}\label{eq:KBM_A_final} \left<\frac{dA}{dT}\right>_{\phi} = \frac{\mu A}{2}\left(1-\frac{\sigma A^2}{4}\right) +\\ + \frac{\varepsilon\sin (\pi\omega)}{\pi}\left(\frac{\omega \left[F \left(\omega ^2-9\right) + 2\beta A^2\right]\sin{\left[(\pi -\Phi ) \omega\right]}}{\left(\omega ^2-9\right) \left(\omega ^2-1\right)} - \frac{A \left[\alpha \left(\omega ^2-16\right)-6 \gamma A^2 \right] \cos{\left[(\pi -\Phi ) \omega \right]}}{\left(\omega ^2-16\right) \left(\omega ^2-4\right)}\right), \end{multline}
\begin{multline}\label{eq:KBM_Phi_final} \left<\frac{d\Phi}{dT}\right>_{\phi} = \frac{3\gamma A^2}{8} +\\ -\frac{\varepsilon\sin(\pi\omega)}{\pi}\left(\frac{\left[F \left(\omega ^2-9\right) + 6\beta A^2\right]\cos{\left[(\pi -\Phi ) \omega\right]}}{\left(\omega ^2-9\right) \left(\omega ^2-1\right)A} - \frac{2 \left[\alpha \left(\omega ^2-16\right)-12 A^2 \gamma \right]\sin{\left[(\pi -\Phi ) \omega \right]}}{\omega \left(\omega ^2-16\right) \left(\omega ^2-4\right)}\right). \end{multline} \vspace{0.0cm}
This averaging technique is the essence of the KBM method for obtaining amplitude and phase equations of nonlinear oscillators. Before we proceed, we can study \Cref{eq:KBM_A_final} and \Cref{eq:KBM_Phi_final} for the case $\varepsilon = 0$, which yields exact solutions for both $A(T)$ and $\Phi(T)$, \vspace{-0.3cm}
\begin{equation}\label{subsec:KBM_infty} \begin{cases} A(T) = \frac{\pm 2}{\sqrt{\sigma + \left(\frac{4}{A_0^2} - \sigma\right) e^{-\mu T}}} \quad \Rightarrow \quad \lim_{T \rightarrow \infty}A(T) = A_{\infty} = \pm\frac{2}{\sqrt{\sigma}} \\ \\ \Phi(T) = \Phi_0 + \frac{3\gamma}{2\mu\sigma}\ln{\left[1 + \frac{\sigma A_0^2}{4}\left(e^{\mu T} - 1\right)\right]} \quad \Rightarrow \quad \lim_{T \rightarrow \infty}\Phi(T) = \Phi_{\infty} = \frac{3\gamma}{2\sigma}T \end{cases} \end{equation} \vspace{0.0cm}
\noindent for constants $A_0$ and $\Phi_0$. We conclude that even at zero modulation depth there exists a frequency-shift contribution coming from the Duffing term $\gamma$. The steady oscillation frequency $\Omega_0$ is given by \vspace{-0.3cm}
\begin{equation} \Omega_0(\varepsilon=0) = \lim_{t \rightarrow \infty}\frac{d\phi}{dt} = \lim_{t \rightarrow \infty}\frac{d\phi}{dT}\frac{dT}{dt} = \frac{\Omega_m}{L_{\widetilde{t}}}\frac{d}{dT}\left(T + \Phi_{\infty}\right) = \Omega_m\sqrt{1 + f_0 F_1}\left(1 + \frac{3\gamma}{2\sigma}\right) \end{equation} \vspace{-0.1cm}
\noindent which can be used to estimate the Duffing term from the measured oscillation frequency. Recalling the definitions of $\omega$ and $\rho$, we obtain a simple relation between them, \vspace{-0.3cm}
\begin{equation} \omega = \frac{\Omega_d}{\Omega_m\sqrt{1 + f_0 F_1}} \quad \quad \& \quad \quad \rho = \frac{p}{q} = \frac{\Omega_d}{\Omega_0} \quad \quad \Rightarrow \quad \quad \frac{\omega}{\rho} = 1 + \frac{3\gamma}{2\sigma} + O(\varepsilon), \end{equation}
\noindent and it is now clear that $\rho$ being an integer does not imply that $\omega$ is an integer.
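With the parameter values quoted earlier, these $\varepsilon = 0$ limits can be evaluated directly; the short sketch below computes the steady amplitude $A_\infty$ and the relative Duffing correction $3\gamma/2\sigma$, anticipating the estimate used in the next paragraph.
\begin{verbatim}
# Free-running KBM limits (Eq. subsec:KBM_infty) with the quoted parameters.
sigma, gamma = 1.665, -7.340e-3
Ainf  = 2 / sqrt(sigma)           # ~ 1.55: steady limit-cycle amplitude
dDuff = 3 * gamma / (2 * sigma)   # ~ -6.61e-3: relative shift of Omega_0
\end{verbatim}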
In our case (and in most cases) the Duffing correction to the frequency is very small; indeed, \vspace{-0.3cm}
\begin{equation} \frac{3\gamma}{2\sigma} \approx -6.6125 \times 10^{-3} = O(10^{-3}) \ll O(1), \end{equation}
\noindent so we consider $\omega \approx \rho$ from now on, an excellent approximation for obtaining simple but accurate analytical results. The phase equation can be expanded in the vicinity of an integer $\rho = \{1,2,3,4\}$ to give insight into higher-harmonic synchronization. We then have the following cases: \vspace{-0.3cm}
\begin{equation}\label{subsec:simulated_tongues_KBM} \begin{cases} \lim_{\rho \rightarrow 1}\left<\frac{d\Phi}{dT}\right>_{\phi} \approx \frac{3\gamma A^2}{8} - \varepsilon\left(\frac{F}{2A} - \frac{3\beta A}{8}\right)\cos{\left(\Phi\right)}, \\ \\ \lim_{\rho \rightarrow 2}\left<\frac{d\Phi}{dT}\right>_{\phi} \approx \frac{3\gamma A^2}{8} +\varepsilon\left(\frac{\alpha}{4}+\frac{\gamma A^2}{4}\right)\sin{\left(2\Phi\right)}, \\ \\ \lim_{\rho \rightarrow 3}\left<\frac{d\Phi}{dT}\right>_{\phi} \approx \frac{3\gamma A^2}{8} -\varepsilon\frac{\beta A}{8}\cos{\left(3\Phi\right)}, \\ \\ \lim_{\rho \rightarrow 4}\left<\frac{d\Phi}{dT}\right>_{\phi} \approx \frac{3\gamma A^2}{8} -\varepsilon\frac{\gamma A^2}{16}\sin{\left(4\Phi\right)}. \end{cases} \end{equation}
When the system is locked to the driving signal, we know that the amplitude $A(T)$ of the oscillator is almost constant (this is the Kuramoto approximation) and that the phase $\phi(T) = T + \Phi(T)$ is a linear function of time $T$, because otherwise the oscillation frequency $\Omega_0$ would not be static, i.e., it would fluctuate around some mean frequency. In other words, we impose that the derivative of $\phi(T)$ be constant during locking, so we can write $\Phi(T) = \delta_{\rho} T$ for some frequency mismatch $\delta_{\rho}$ of our bare oscillator, i.e., \vspace{-0.5cm}
\begin{equation} \phi(T) = (1 + \delta_\rho)T \quad \Rightarrow \quad \frac{d\phi}{dT} = 1 + \delta_\rho, \end{equation}
\noindent and we can then solve for $\varepsilon = \varepsilon(\delta_\rho)$ for various $\delta_\rho$. The cases $\rho = 1$ and $\rho = 2$ are \vspace{-0.0cm}
\begin{equation}\label{eq:AT_boundary} \begin{cases} \delta_{1} = \frac{3\gamma A^2}{8} - \varepsilon\left(\frac{F}{2A} - \frac{3\beta A}{8}\right)\cos{\left(\delta_{1}T\right)}, \\ \\ \delta_{2} = \frac{3\gamma A^2}{8} +\varepsilon\left(\frac{\alpha}{4}+\frac{\gamma A^2}{4}\right)\sin{\left(2\delta_{2}T\right)}, \end{cases} \quad \Rightarrow \quad \begin{cases} \varepsilon(\delta_1) > \left|\left(\delta_{1} - \frac{3\gamma A^2}{8}\right)/\left(\frac{F}{2A} - \frac{3\beta A}{8}\right)\right|, \\ \\ \varepsilon(\delta_2) > \left|\left(\delta_{2} - \frac{3\gamma A^2}{8}\right)/\left(\frac{\alpha}{4}+\frac{\gamma A^2}{4}\right)\right|, \end{cases} \end{equation} \newpage
\noindent which defines a region in the $\varepsilon$--$\delta_\rho$ space that is, as one would guess, the Arnold tongue. The AT maps obtained with this approach are shown in \ref{fig:11} for three different oscillation amplitudes, $A = \{2,3,4\}$; they display the same features as the experimental ones, even after the many approximations.
\begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figs_supp/fig14_supp.pdf} \caption{Simulated Arnold tongues using \Cref{subsec:simulated_tongues_KBM} for three different oscillation amplitudes. \textbf{a)} $A=2$; \textbf{b)} $A=3$; \textbf{c)} $A=4$.
The Duffing softening effect is enhanced as the amplitude $A$ increases, as can be verified by the whole region moving away from $\delta_\rho = 0$ to the left, i.e., towards negative values of $\delta_\rho$.} \label{fig:11} \end{figure}
The reduction of the oscillation frequency $\Omega_{0}$ as the amplitude $A$ increases is called Duffing softening, which is expected because $\gamma < 0$ (a consequence of the chosen optical detuning $0 < \Delta_x < \kappa/2$). Had we chosen a different detuning, for example $\Delta_x > \kappa/2$ or $\Delta_x < 0$, we would have $\gamma > 0$ and would see the Duffing hardening effect, i.e., a shift of $\Omega_{0}$ to higher frequencies. Moreover, for high enough amplitudes the $2:1$ AT can become larger than the $1:1$ AT, as is the case for $A=3$ and $A=4$, showing that our model retains the high-harmonic synchronization features of the experiment. To close this section, we conclude that the terms $F, \alpha, \beta$ and $\gamma$ are directly proportional to the tongue widths $\Delta\Omega(p,q)$ with $p = \{1,2,3,4\}$ and $q=1$, respectively, as can be seen in the denominators of \Cref{eq:AT_boundary}, i.e.,
\begin{equation}\label{semianalyticAT} \begin{cases} \Delta\Omega(1,1) = 2\times\left(\frac{F}{2A}-\frac{3\beta A}{8}\right) \approx \frac{F}{A} \propto F \propto F_0 \\ \\ \Delta\Omega(2,1) = 2\times\left(\frac{\alpha}{4}+\frac{\gamma A^2}{4}\right) \approx \frac{\alpha}{2} \propto \alpha \propto F_1 \\ \\ \Delta\Omega(3,1) = 2\times\frac{\beta A}{8} \propto \beta \propto F_2 \\ \\ \Delta\Omega(4,1) = 2\times\frac{\gamma A^2}{16} \propto \gamma \propto F_3 \end{cases} \end{equation} \vspace{0.2cm}
\noindent These are the semi-analytical expressions for the tongue width of each harmonic; they could be engineered to achieve wider Arnold tongues for different harmonics by choosing different $F$, $\alpha$, $\beta$ and $\gamma$ through the geometry and materials of the optomechanical cavity. \vspace{-0.3cm}
\section{} \textbf{Sidebands around the carrier in the synchronized region.} To explain the sidebands around the synchronization region, we first linearize \Cref{eq:KBM_A_final} and \Cref{eq:KBM_Phi_final} by expanding $A(T)$ and $\Phi(T)$ as \vspace{-0.5cm}
\begin{equation}\label{Sidebands:expansion} A(T) = \overline{A} + \delta A(T) \quad \quad \text{and} \quad \quad \Phi(T) = \overline{\Phi} + \delta \Phi(T), \end{equation} \vspace{-0.5cm}
\noindent and then diagonalize the linear part of the system, \vspace{-0.4cm}
\begin{equation}\label{Sidebands:linear} \frac{d}{dT} \begin{pmatrix} \delta A \\ \delta \Phi \end{pmatrix} = \begin{pmatrix} H_{AA} & H_{A\Phi}\\ H_{\Phi A} & H_{\Phi \Phi} \end{pmatrix} \begin{pmatrix} \delta A \\ \delta \Phi \end{pmatrix}, \end{equation}
\noindent in which the explicit forms of $H_{AA}$, $H_{A\Phi}$, $H_{\Phi A}$ and $H_{\Phi \Phi}$ are too cumbersome to display and not important for the present analysis. The eigenvalues of this system give the first-order corrections to the frequency and damping of our oscillator.
The evolution of this system is \vspace{-0.2cm} \begin{equation} \begin{pmatrix} \delta A(T)\\ \delta \Phi(T) \end{pmatrix} = \begin{pmatrix} \delta A_+\\ \delta \Phi_+ \end{pmatrix} e^{\left(\lambda_{\text{Re}}^{+} + i\lambda_{\text{Im}}^{+}\right)T} + \begin{pmatrix} \delta A_-\\ \delta \Phi_- \end{pmatrix} e^{\left(\lambda_{\text{Re}}^{-} + i\lambda_{\text{Im}}^{-}\right)T}, \end{equation} \noindent where $\lambda^{\pm} = \lambda_{\text{Re}}^{\pm} + i\lambda_{\text{Im}}^{\pm}$ are the eigenvalues. Using these in $y(T)$ we can now search for sidebands, i.e., \vspace{-0.4cm} \begin{multline} y(T) = A(T)\sin{\left[T + \Phi(T)\right]} \approx \frac{1}{2i}\left(\overline{A} + \delta A\right)\left[\left(1+i\delta \Phi\right)e^{iT}e^{i\overline{\Phi}} - \left(1-i\delta \Phi\right)e^{-iT}e^{-i\overline{\Phi}}\right] \approx \\ \approx \overline{A}\sin{\left(T+\overline{\Phi}\right)} + \delta{A}(T)\sin{\left(T+\overline{\Phi}\right)} + \overline{A}\delta\Phi(T)\cos{\left(T+\overline{\Phi}\right)}. \end{multline} \vspace{-0.3cm} It is now evident that our oscillator has more than a single frequency, because of the product of $\delta A(T)$ with the sine function and also of $\delta\Phi(T)$ with the cosine. The frequencies $\Omega_{\text{SB}}^{\pm}$ and the linewidths $\Gamma_{\text{SB}}^{\pm}$ of these new sidebands are given by \vspace{-0.4cm} \begin{equation} \Omega_{\text{SB}}^{\pm} = \frac{\lambda_{\text{Im}}^{\pm}}{L_{\widetilde{t}}} \quad \quad \text{and} \quad \quad \Gamma_{\text{SB}}^{\pm} = \frac{\lambda_{\text{Re}}^{\pm}}{L_{\widetilde{t}}}, \end{equation} \vspace{-0.1cm} \noindent and the corresponding curves from our semi-analytical model for the case $\rho = 1$ are shown in \Cref{fig:12}, in excellent agreement with the experimental data in both frequency and linewidth, showing that these sidebands indeed arise from the coupling between the phase and the amplitude of the mechanical oscillator. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figs_supp/fig15_supp.pdf} \caption{\textbf{a)} Detuning from $\Omega_0/2\pi$ showing both sidebands emerging as we increase the modulation depth; \textbf{b)} Linewidth of these sidebands.} \label{fig:12} \end{figure} \vspace{-0.3cm} \section{} \textbf{1:1 Arnold Tongue Cusp Deviation.} The purple cusp in the simulation of Fig.~2(d3) of the main text (shown again in \Cref{fig:14}(a)), which is not present in the experimental data, was found to originate from a bifurcation in the amplitude dynamics. To show this we must investigate \Cref{eq:KBM_A_final} in more detail: \vspace{-0.2cm} \begin{multline} \left<\frac{dA}{dT}\right>_{\phi} = \frac{\mu A}{2}\left(1-\frac{\sigma A^2}{4}\right) +\\ + \frac{\varepsilon\sin (\pi\omega)}{\pi}\left(\frac{\omega \left[F \left(\omega ^2-9\right) + 2\beta A^2\right]\sin{\left[(\pi -\Phi ) \omega\right]}}{\left(\omega ^2-9\right) \left(\omega ^2-1\right)} - \frac{A \left[\alpha \left(\omega ^2-16\right)-6 \gamma A^2 \right] \cos{\left[(\pi -\Phi ) \omega \right]}}{\left(\omega ^2-16\right) \left(\omega ^2-4\right)}\right). \end{multline} We are searching for bifurcations around the $1:1$ Arnold tongue, so it is natural to expand this equation around $\omega = 1$, exactly as we did for the phase $\Phi$ in \Cref{subsec:simulated_tongues_KBM}, i.e., \begin{equation} \left<\frac{dA}{dT}\right>_{\phi} = \frac{\mu A}{2}\left(1-\frac{\sigma A^2}{4}\right) - \varepsilon\left(\frac{F}{2}-\frac{\beta A^2}{8}\right)\sin{\left[\Phi(T)\right]}.
\end{equation} The fixed points of this equation are such that $\dot{A} = \dot{\Phi} = 0$. As the bifurcation occurs at the boundary of the Arnold tongue, we will assume a fixed phase $\Phi$ of $\pi/2$ or $3\pi/2$ (the two values that extremize $\sin{\Phi}$ and thus give the $\pm$ sign below), such that the equation we must solve for $A$ is \begin{equation}\label{eq:A_roots} 0 = \frac{\mu A}{2}\left(1-\frac{\sigma A^2}{4}\right) \pm \varepsilon\left(\frac{F}{2}-\frac{\beta A^2}{8}\right). \end{equation} Every degree-$n$ polynomial has $n$ complex roots, so we can plot the absolute value of each of these roots as a function of $\varepsilon$ (using the same $\mu$, $\sigma$, $F$, and $\beta$ parameters as in the simulation); the result is shown in \Cref{fig:14}(b). \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{figs_supp/fig16_supp.pdf} \caption{\textbf{a)} Experimental $1:1$ Arnold tongue with its simulated boundary in purple. The cusp occurs around $-16$ dBm, which corresponds to $\varepsilon = 1.9\%$; \textbf{b)} Amplitude evolution of each root $A_n$ of \Cref{eq:A_roots} as the modulation depth $\varepsilon$ increases.} \label{fig:14} \end{figure} As we can see, two of the roots ($A_1$ and $A_2$) of \Cref{eq:A_roots} become degenerate (the orange and blue curves of \Cref{fig:14}(b)) for a modulation depth around $\varepsilon = 1.83\%$. The cusp in \Cref{fig:14}(a) occurs around $-16$ dBm, which corresponds to a modulation depth of $1.9\%$. We must not forget that \Cref{eq:A_roots} is one of the final steps of our semi-analytical analysis, and many simplifications were involved; nevertheless, the purple curve in \Cref{fig:14} was obtained using the full optomechanical model, and a difference of only $0.07\%$ in $\varepsilon$ is observed. Similar cusps were observed in other injection-locking experiments \cite{PhysRevE.50.3383}. A more detailed analysis, together with additional experimental data, is needed to understand why our experiment does not exhibit this feature. \bibliographystyle{plain} \section*{Supplementary references}
{ "timestamp": "2021-09-27T02:18:24", "yymm": "2105", "arxiv_id": "2105.01791", "language": "en", "url": "https://arxiv.org/abs/2105.01791" }
\section{Introduction} \IEEEPARstart{D}{riven} by the recent development and prevalence of computing power, algorithms, Internet of Things (IoT) systems, and big data, a booming era of AI has emerged, covering a wide spectrum of applications including natural language processing \cite{NaturalLanguage}, speech recognition \cite{speechRecognition}, computer vision \cite{ResNet}, and robotics \cite{robotics}. Owing to these breakthroughs, AI has achieved unprecedented improvements in multiple sectors of academia, industry, and daily services, improving human productivity and lifestyles. As an example, multiple intelligent IoT applications have been designed, such as shopping recommenders, smart assistants, self-driving cars, disease mapping services, smart home appliances, manufacturing robots, and surveillance systems. In this context, studies estimate that AI will have a major impact on the global Gross Domestic Product (GDP) by 2030, accounting for \$13 trillion in additional gains compared to 2018 \cite{AIEconomic}. The high performance of AI systems applied to multiple fields comes at the expense of a huge memory requirement and an intensive computational load to perform both the training and inference phases. More specifically, training an intelligent model is computationally expensive because of the large number of parameters, reaching millions for deep networks, that need to be repeatedly fine-tuned over hundreds of iterations. Similarly, the inference phase is computationally intensive due to the high dimension of raw data (e.g., high-resolution images) and the millions of tasks (e.g., multiplications and max-pooling) in deep networks \cite{InceptionV3,VGG}. To this end, resource consumption has been adopted as an important parameter to assess the performance of AI models. \begin{figure*}[!h] \centering \frame{\includegraphics[scale=0.57]{Figures/introduction1.pdf}} \caption{Illustration of pervasive AI.} \label{intro} \end{figure*} The popularity of AI is also related to the abundance of storage and computing devices, ranging from server clusters in the cloud to personal phones and computers, and further to wearables and IoT units. In fact, the unprecedented amount of data generated by the massive number of ubiquitous devices opens up an attractive opportunity to provide intelligent IoT services that can transform all aspects of our modern life and fuel the continuous advancement of AI. Statistics forecast that, by 2025, the number of devices connected to the internet will reach more than 500 billion \cite{cisco}, owing to the maturity of their sensing capabilities and their affordable prices. Furthermore, reports reveal that these devices will generate enormous amounts of data, reaching more than 79 ZB by 2025, and will increase economic gains by up to \$11 trillion by the same year. Particularly, 40\% of this economic impact is related to the healthcare market, 33\% corresponds to industrial applications, and 7\% to the energy sector, whereas the rest is related to other domains such as agriculture, security, and retail \cite{Survey30}. With the rapid evolution of AI and the enormous bulks of data generated by pervasive devices, conventional wisdom resorts to centralized cloud servers for analytics.
However, this approach is no longer sustainable, as it introduces several challenges: (1) The appearance of a new breed of services and the advent of delay-sensitive technologies, spanning from self-driving cars to Virtual and Augmented Reality (VR/AR), make cloud approaches inadequate for AI tasks due to long transmission delays. More precisely, the aforementioned applications are real-time and cannot tolerate any additional latency or connectivity loss. For example, autonomous cars sending camera frames to remote servers need to receive prompt inferences to detect potential obstacles and apply the brakes \cite{autonomousCars,autonomousCars2}. Besides, delay variance is also very important in interactive IoT applications, such as visuo-haptic perception and VR, to avoid motion sickness \cite{VR2}. Other examples are voice assistant applications (e.g., Siri and Alexa), which should parse the user’s request and answer the query instantly, and Unmanned Aerial Vehicles (UAVs), which should sense and react rapidly in hazardous environments, even if the network is unavailable \cite{UAV3}. Sending data to cloud servers may not satisfy the latency requirements of these real-time applications: experiments in \cite{Amazon} demonstrated that executing a computer vision task on a camera frame offloaded to an Amazon server takes more than 200 ms. (2) In addition to latency, privacy presents a major concern for cloud-based AI approaches. In fact, end-users are typically reluctant to upload their private data (e.g., photos or audio recordings) to cloud servers, as these data can be highly exposed to cyber risks, malicious attacks, or disclosures. Among the most notorious breaches reported in the 21st century, we can cite the Marriott attack, revealed in 2018 and affecting 500 million customers, and the Equifax breach, recorded in 2017 and affecting 147 million users \cite{breach}. (3) A tremendous number of AI tasks, involving unstructured and bandwidth-intensive data, needs to be transferred across the Wide Area Network (WAN), which puts huge pressure on a network infrastructure of varying quality. (4) In the same context, offloading data to remote servers also encounters scalability issues, as access to the cloud can become a bottleneck when the number of data sources increases, particularly if some devices contribute irrelevant and noisy inputs. (5) Nowadays, Explainable AI (XAI) \cite{XAI1} has become extremely popular, aiming to enhance the transparency of learning and detect prediction errors. However, consigning AI tasks to the cloud makes the whole process a black box vis-a-vis the end-user and prevents model decomposability and debugging. Pushing AI to the network edge has been introduced as a viable solution to address the latency, privacy, and scalability challenges described earlier. As such, a large share of computational tasks can be handled by edge devices without exchanging the related data with remote servers, which alleviates the traffic load and guarantees agile IoT services owing to the physical proximity of computing devices to the data sources \cite{EdgeComputing2019}. When AI tasks can only be executed in cloud datacenters, edge devices can still be used to pre-process the data and filter out noisy inputs in order to reduce the transmission load \cite{elsevierbook}. Furthermore, the edge network can play the role of a firewall that enhances user privacy by discarding sensitive information prior to data transfer.
A variety of edge devices can be candidates for executing different AI tasks with different computation requirements, ranging from edge servers provisioned with GPUs, to smartphones with powerful processors, and even small IoT wearables with Raspberry Pi-class computing. These edge devices have been continuously improving to accommodate deep AI models. In this context, researchers have proposed various strategies from diverse angles, covering new hardware designs and novel AI models. More specifically, when designing a learning technique for resource-constrained units, a reduced number of parameters can be used to decrease the memory demand and the execution time, e.g., SqueezeNet \cite{squeezenet} and MobileNets \cite{mobileNet}. Additionally, to speed up the learning process, vendors have produced custom integrated circuits designed for deep learning tasks, such as Google's Tensor Processing Unit (TPU) \cite{TPU}. Manufacturers have also provided libraries and software tools that leverage the CPU/GPU in order to parallelize AI tasks. As an example, the iPhone currently ships with the A12 Bionic chip dedicated to AI-based applications \cite{A12}. In spite of this technological advancement, a large range of pervasive devices used in countless fields of our daily life still suffers from limited power and memory, such as smart home IoT appliances, sensors, and gaming gear. Furthermore, privacy remains a challenge, even if local computing naturally improves the security of data. Given the limited resources of edge devices, computing the full AI model on one device may be infeasible, particularly when the task requires a high computational load, e.g., Deep Neural Networks (DNNs). A promising solution is to adopt pervasive computing, where data storage and processing capacities existing everywhere, including distributed cloud datacenters, edge servers, and IoT devices, cooperate to accomplish AI tasks that require large memory and intensive computation. This marriage of pervasive computing and AI has given rise to a new research area, namely \textit{“Pervasive AI"}, which has garnered considerable attention from both academia and industry. Formally, pervasive AI focuses on how to intelligently distribute the inference or the training of the AI model across devices, in order to minimize latency and improve privacy and scalability. Specifically, tech giants have started to implement pilot projects to assess the efficiency of pervasive computing in supporting AI applications. For example, Google designed a framework to distribute the training of language DNNs on clients’ devices to predict the next word in virtual keyboards \cite{virtualKeyboards}. Notably, research and practice on this emerging intersection are still in their infancy. Pervasive AI was first introduced to solve the described challenges of centralized approaches (e.g., on-cloud or on-device computation). (1) To preserve privacy and reduce the huge overhead of data collection and the complexity of training on an enormous dataset, \textit{Federated Learning (FL)} is proposed, where raw data remain at their source entities and the model is trained collaboratively. Particularly, each entity computes a local model using its collected data, then sends the result to a fusion server that aggregates the global model. Such an approach covers the distribution of data and the assembly of the trained AI models.
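To make this workflow concrete, the following minimal sketch illustrates federated averaging rounds on a toy linear model; the synthetic client data, local epoch count, and learning rate are illustrative assumptions rather than the API of any particular FL framework.
\begin{verbatim}
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    # One client: refine the global model on its private local data.
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # squared-loss gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    # Fusion server: aggregate local models, weighted by data size.
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    return np.average(updates, axis=0, weights=sizes)

# Toy setup: three clients, each privately holding data for y = 2x.
rng = np.random.default_rng(0)
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 1))
    clients.append((X, 2.0 * X[:, 0]))

w = np.zeros(1)
for _ in range(20):
    w = federated_round(w, clients)
print(w)   # approaches [2.0] while raw data never leave the clients
\end{verbatim}
Note that only model parameters travel between the clients and the fusion server; the raw data never leave their source entities, which is precisely the privacy advantage discussed above.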
(2) To cope with the limited resources of edge devices and simultaneously avoid the latency overheads caused by cloud transmissions, the inference task is distributed among ubiquitous devices located in the proximity of the data source. The basic idea is to divide the trained model into segments, each of which is assigned to a participant. Each participant forwards its output to the next one until the final prediction is generated (a minimal sketch of this pipeline is given below). In other words, \textit{Pervasive Inference} covers the distribution of the established model resulting from the training phase. (3) Some AI techniques are inherently distributed, such as Multi-Agent Reinforcement Learning (MARL) and Multi-Agent Bandits (MAB), classified as \textit{Online Learning}, where agents cooperate to build and improve a policy in real time, enabling them to take on-the-fly decisions/actions based on the environment status. In this case, the distribution covers the online creation and update of the Reinforcement Learning (RL) policy. The pervasive AI concept is illustrated in Fig. \ref{intro}. Pervasive AI exploits on-device computation capacities to collaboratively achieve learning tasks, which requires careful scheduling to wisely use the available resources without resorting to remote computing. Yet, some intensive AI tasks can only be performed by involving cloud servers, which results in higher communication costs. Therefore, leveraging the small and ubiquitous resources and managing the enormous communication overheads present a major bottleneck for pervasive AI. \subsection{Our scope} In this survey, we focus on the confluence of the two emerging paradigms, pervasive computing and artificial intelligence, which is named \textit{Pervasive AI}. Pervasive AI is a promising research field, in which the system design is highly correlated with the resource constraints of the ubiquitous participants (e.g., memory, computation, bandwidth, and energy) and the communication overheads between them. More specifically, the size of some deep AI models, their computational requirements, and their energy consumption may exceed the memory capacity or battery level of some devices, preventing them from participating in the collaborative system. Furthermore, the process of decentralized training or inference may involve a large number of participants that potentially communicate over wireless links, which creates new challenges related to channel capacities and conditions, delay performance, and privacy. Therefore, pervasive AI should rely on various ingredients, including optimal AI partitioning, the careful design of architectures and algorithms managing the distributed learning, and the smart selection and scheduling of pervasive participants, supported by efficient communication protocols, while accounting for channel dynamics and communication overheads. In addition, all on-device constraints should be taken into consideration, such as memory, computation, and energy, not to mention the privacy requirements of the system. Finally, the load of real-time inferences (e.g., an area that needs 24/7 surveillance), the pace of data collection (e.g., weather monitoring), and the dynamics of the studied environment should also be considered, as they highly impact the number of selected participants and the parallelization strategies. In this paper, we survey the aforementioned challenges in deploying pervasive AI models and algorithms.
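To fix ideas, the following minimal sketch illustrates the pervasive inference pipeline introduced in point (2) above: a toy trained model is partitioned layer-wise across three hypothetical devices, and only intermediate activations cross the network. The device names, layer shapes, and cut points are illustrative assumptions.
\begin{verbatim}
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class Participant:
    # One pervasive device holding a contiguous segment of the model.
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights   # layer matrices of this segment

    def forward(self, activation):
        for W in self.weights:
            activation = relu(activation @ W)
        return activation        # intermediate tensor sent onward

# Toy trained model: four dense layers, split across three devices.
rng = np.random.default_rng(1)
layers = [rng.normal(size=(8, 16)), rng.normal(size=(16, 16)),
          rng.normal(size=(16, 8)), rng.normal(size=(8, 4))]
pipeline = [Participant("sensor", layers[:1]),
            Participant("gateway", layers[1:3]),
            Participant("edge-server", layers[3:])]

x = rng.normal(size=(1, 8))      # raw input captured at the source
for device in pipeline:
    x = device.forward(x)        # only activations cross the network
print(x.shape)                   # final prediction, shape (1, 4)
\end{verbatim}
In practice, choosing the cut points requires optimizing over the participants' memory, computation, and link bandwidth, which is precisely the scheduling problem surveyed in the remainder of this paper.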
In particular, we provide a deep study of resource-efficient distributed learning for the training phase, the inference task, and online learning involving real-time training and decision processes. We start by identifying the motives behind establishing a pervasive AI system for IoT applications and its corresponding communication and resource challenges. \begin{table*}[!h] \footnotesize \centering \tabcolsep=0.09cm \caption{Comparison with existing surveys.} \label{tab:Related_works} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{Refs} &\textbf{Summary} & \multicolumn{2}{c|}{\textbf{AI/pervasivity}} & \multicolumn{3}{c|}{\textbf{Scope}} & \multicolumn{3}{c|}{\textbf{AI technique}} & \multicolumn{2}{c|}{\textbf{Topic}} \\ \hline & & \begin{tabular}[c]{@{}c@{}}AI on pervasive\\ networks\end{tabular} & \begin{tabular}[c]{@{}c@{}}AI for pervasive\\ networks\end{tabular} & cloud & \begin{tabular}[c]{@{}c@{}}edge\\ servers\end{tabular} & IoT & DI & FL & MARL & \begin{tabular}[c]{@{}c@{}}Deployment:\\ hardware,\\ software\\ techniques, \\protocols.\end{tabular} & \begin{tabular}[c]{@{}c@{}}Management: \\ communication, \\ resource allocation, \\ and algorithms\end{tabular} \\ \hline \ {}\begin{tabular}[c]{@{}c@{}}\cite{Survey6,Survey13} \\ (2020-2021)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Deep Learning \\ applications for the \\ Mobile Edge \\ computing networks\end{tabular} & \xmark& \begin{tabular}[c]{@{}c@{}}\cmark\\ 5G,\\ wireless \\ networks\end{tabular}& \cmark& \cmark & \cmark& \cmark& \cmark& \xmark& \cmark& \xmark\\ \hline \ {}\begin{tabular}[c]{@{}c@{}}\cite{Survey8}\\ (2019)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Efficient usage of IoT\\ hardware and software\\ for AI applications\end{tabular} & \cmark& \xmark& \xmark& \xmark& \cmark& \xmark& \xmark& \xmark& \cmark& \xmark\\ \hline \ {}\begin{tabular}[c]{@{}c@{}}\cite{Survey1,Survey2,Survey3,Survey4,Survey28}\\ (2019-2020)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Enabling AI on \\ edge networks\end{tabular} & \cmark & \cmark& \xmark& \cmark& \cmark& \cmark& \cmark& \xmark& \cmark& \xmark\\ \hline \ {}\begin{tabular}[c]{@{}c@{}}\cite{Survey7} \\ (2018)\end{tabular}& \begin{tabular}[c]{@{}c@{}}Enabling AI on \\ edge networks\end{tabular} & \cmark& \xmark& \xmark& \cmark& \cmark& \cmark& \xmark& \xmark& \cmark& \xmark\\ \hline \ {}\begin{tabular}[c]{@{}c@{}}\cite{Survey22,Survey23,Survey27}\\ (2018-2020)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Decision making in \\multi-agent \\ systems and related \\applications\end{tabular} & \xmark& \xmark& \xmark& \xmark& \xmark& \xmark& \xmark& \cmark& \xmark& \xmark\\ \hline \ {}\begin{tabular}[c]{@{}c@{}}\cite{Survey11}\\ (2020)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Deep RL \\ for IoT systems\end{tabular} & \cmark& \cmark& \xmark& \xmark& \cmark& \xmark& \xmark& \xmark& \cmark& \cmark\\ \hline \ {}\begin{tabular}[c]{@{}c@{}}\cite{Survey24} \\ (2020)\end{tabular}& \begin{tabular}[c]{@{}c@{}}Deep RL for \\wireless networks\end{tabular} & \cmark& \begin{tabular}[c]{@{}c@{}}\cmark\\ wireless \\ networks\end{tabular} & \xmark& \cmark& \cmark& \xmark& \xmark& \cmark& \cmark& \xmark\\ \hline \ {}\begin{tabular}[c]{@{}c@{}}\cite{Survey5} \\ (2019)\end{tabular}& \begin{tabular}[c]{@{}c@{}}Communication for ML \\ and \\ML for communication\end{tabular} & \cmark& \begin{tabular}[c]{@{}c@{}}\cmark\\ wireless \\ networks\end{tabular}& \xmark& \cmark& \cmark& \xmark& \cmark& \xmark& \xmark& \cmark\\ \hline \ {}\begin{tabular}[c]{@{}c@{}}\cite{Survey12} \\ (2020)\end{tabular}&
\begin{tabular}[c]{@{}c@{}}Communication efficient \\ edge AI\end{tabular} & \cmark& \cmark& \xmark& \cmark& \cmark& \cmark& \cmark& \xmark& \xmark& \cmark\\ \hline \ {}\begin{tabular}[c]{@{}c@{}}\cite{Survey29} \\ (2019)\end{tabular}& \begin{tabular}[c]{@{}c@{}}AI on mobile \\ and wireless networks\end{tabular} & \cmark& \begin{tabular}[c]{@{}c@{}}\cmark\\ 5G,\\ wireless \\ networks\end{tabular}& \xmark& \cmark& \cmark& \xmark& \xmark& \xmark& \cmark& \cmark\\ \hline \ {}\begin{tabular}[c]{@{}c@{}}\cite{Survey15} \\ (2020)\end{tabular}& \begin{tabular}[c]{@{}c@{}}Enabling protocols, \\ technologies\\ for federated learning\end{tabular} & \cmark& \cmark& \cmark& \cmark& \cmark& \xmark& \cmark& \xmark& \cmark& \xmark\\ \hline \ {}\begin{tabular}[c]{@{}c@{}}\cite{Survey14,Survey17,Survey9} \\ (2020)\end{tabular}& \begin{tabular}[c]{@{}c@{}}Architecture, design and\\ applications of centralized,\\ distributed and federated \\learning\end{tabular} & \cmark& \cmark& \cmark& \cmark& \cmark& \xmark& \cmark& \xmark& \cmark& \cmark\\ \hline \ {}\begin{tabular}[c]{@{}c@{}}\cite{Survey16} \\ (2020)\end{tabular}& \begin{tabular}[c]{@{}c@{}}Enabling protocols, \\technologies\\ for federated learning\end{tabular} & \begin{tabular}[c]{@{}c@{}}\cmark\\ Vehicular\\ IoT\end{tabular} & \xmark& \xmark& \xmark& \cmark& \xmark& \cmark& \xmark& \cmark& \xmark\\ \hline \rowcolor[HTML]{ECF4FF} Our paper & Pervasive AI & \cmark& \xmark& \cmark& \cmark& \cmark& \cmark& \cmark& \cmark& \xmark& \cmark\\ \hline \end{tabular} \end{table*} \subsection{Related surveys} The intersection of pervasive computing and AI is still at an early stage, which has prompted researchers to review existing works and provide useful and innovative insights, as illustrated in Table \ref{tab:Related_works}. First, many efforts discussed the applications of artificial intelligence that support edge networks composed of ubiquitous devices, in order to meet the networking requirements. Multiple edge contexts are explored, such as healthcare, smart cities, and energy grids. As an example, two recent surveys \cite{Survey6,Survey13} provided an in-depth discussion of the usage of AI in wireless and 5G networks to empower caching and offloading, resource scheduling and sharing, virtual edge orchestration, and network privacy. Additionally, they discussed the standardization efforts that increased the potential of AI to solve communication and wireless issues. These surveys touched upon pervasive AI, particularly federated learning and distributed inference. However, the distribution was discussed only briefly, as one of the techniques that further enable AI at the edge. In our survey, the applications of AI for pervasive networks are not the main topic. Instead, the deployment of AI on pervasive devices is the scope of this paper. The authors in \cite{Survey8} presented an overview of the hardware, software, and run-time optimizations enabling the deployment of AI on pervasive devices, including CPU accelerators, sample reduction, and computation reduction through AI compression. Enabling complex AI techniques on resource-constrained devices is one of the aims of this survey. However, we cover only the distribution approach to fitting complex models on small devices. Still, our study is restricted neither to small devices nor to distribution motivated by resource scarcity.
Indeed, we define pervasive AI as \textit{“the distribution of AI tasks among any type of devices existing anywhere in order to handle different types of systems (e.g., IoT systems, privacy-aware systems, and inherently decentralized systems)”}. \begin{figure*}[h] \centering \hspace{-9 mm} \includegraphics[scale=0.52]{Figures/taxonomy2.pdf} \caption{Pervasive AI survey roadmap.} \label{journalSkeleton} \end{figure*} The surveys in \cite{Survey1,Survey2,Survey3,Survey4,Survey28,Survey7} conducted a comprehensive review of the systems, architectures, frameworks, software, technologies, and algorithms that enable AI on edge networks, and discussed the advantages of edge computing over cloud approaches in supporting AI deployment. However, even though they dedicated a short part to distributed AI, these efforts discussed neither the resource and communication challenges of pervasive systems nor the splitting techniques of AI. Moreover, they did not consider cloud computing as an indispensable part of the distributed system. Therefore, unlike the previous surveys \cite{Survey1,Survey2,Survey3,Survey4,Survey28,Survey7}, we present an in-depth review that covers the resource, communication, and computation challenges of distributing AI among ubiquitous devices. More specifically, applying the same classical communication and computation techniques adopted in centralized approaches to pervasive AI is not trivial. As an alternative, both the pervasive computing system and the distributed AI techniques are tailored to take into consideration the heterogeneous resources of participants, the AI model, and the requirements of the system, in order to reduce the communication and computation overhead during the training and inference phases. These customized strategies for pervasive AI are the main focus of our survey. The previous papers discussed distribution as one of the approaches enabling AI deployment at the edge; particularly, they briefly examined distributed inference and federated learning. However, multi-agent online learning, including multi-agent reinforcement and bandit learning, has not been reviewed by any of them. Multiple papers surveyed single-agent and multi-agent reinforcement learning, such as \cite{Survey22,Survey23,Survey27,Survey11,Survey24}. In these tutorials, the authors conducted comprehensive studies of applications of distributed RL to networking problems and presented an overview of the evolution of cooperative and competitive MARL, in terms of reward optimization, policy convergence, agent connectivity, and performance improvement. To the best of our knowledge, we are the first to cover the computation and communication issues faced by cooperative agents while reaching a consensus on the distributed RL policy. Finally, the authors in \cite{Survey5,Survey12,Survey29} provided a deep review of the communication challenges of AI-based applications on edge networks. Specifically, the survey in \cite{Survey29} provided insights about allocating mobile and wireless network resources for AI learning tasks. However, the distribution of AI techniques was not targeted in that paper. The surveys in \cite{Survey5,Survey12} are the closest to our topic, as they explored communication-efficient AI distribution. However, they mainly focused on the training phase, i.e., federated learning, whereas pervasive inference and online learning were ignored, as the literature on these topics was still scarce.
The inference distribution is briefly discussed in \cite{Survey5} from a communication angle, without, however, addressing other constraints such as memory and computation, or presenting the partitioning strategies (i.e., the splitting of the trained model), which highly impact the distribution process, the parallelization technique, and the orchestration of participants. Our paper presents a holistic survey that covers all AI tasks requiring cooperation between pervasive devices, guided by the system design, the AI model, and the application requirements. \subsection{Contributions and structure of the paper} The contributions of this paper are summarized as follows: \begin{itemize} \item We present an overview of pervasive systems and introduce their architecture and potential participants. \item We provide a brief background of artificial intelligence, particularly deep learning and online learning. We also describe the frameworks that support AI tasks and the metrics that assess their performance. Furthermore, we present multiple IoT applications in which pervasive AI can be useful. \item For each phase of AI (i.e., training, inference, and online learning), we profile the communication and computation models and review the state-of-the-art. A comparison between different existing works, lessons learned, and recent use cases are also provided. \item We conclude with an elaborate discussion of our future vision and identify some open challenges that may spark promising new research ideas. \end{itemize} The rest of this paper is organized as follows: Sections \ref{pervasive_systems} and \ref{AI} present the fundamentals of pervasive systems and artificial intelligence, respectively. In Section \ref{PI}, we present a deep study of \emph{pervasive inference}. Particularly, we review the state-of-the-art approaches adopting different splitting strategies and managing the existing pervasive resources to distribute the inference. Next, we compare the performance of these works and discuss the lessons learned and potential use cases. Section \ref{FL} presents the related studies that investigated the potential of \emph{federated learning} schemes in different domains. Moreover, it highlights the use of FL within UAV swarms for cooperative target recognition as a case study. Section \ref{OL} investigates diverse \emph{online learning} schemes, namely multi-agent bandits, reinforcement learning, and active learning. Indeed, we review state-of-the-art algorithms that tackle the pervasive-systems perspective on online learning. Specifically, we focus on the algorithms that study the trade-off between the consumed communication resources and the performance, offering a novel viewpoint on the strengths and weaknesses of the discussed approaches. We discuss the future vision and open challenges in Section \ref{future}. Finally, we conclude in Section \ref{conclusion}. The road map of the paper is illustrated in Fig. \ref{journalSkeleton}. Additionally, the list of acronyms is presented in Table \ref{acro}. \begin{table}[h!] \centering \footnotesize \caption{List of Acronyms} \label{acro} \begin{tabular}{ll} AC & Actor-Critic \\ AE & Auto-Encoder \\ AI & Artificial Intelligence \\ AL & Active Learning \\ AR & Augmented Reality \\ BM & Boltzmann Machines \\ BS & Base Station \\ CNN & Convolutional Neural Network \\ Conv & Convolutional \\ \end{tabular} \end{table} \begin{table}[h!]
\centering \footnotesize \begin{tabular}{ll} CTDE & \begin{tabular}[c]{@{}l@{}}Centralized Training and\\ Decentralized Execution\end{tabular} \\ DAG & Directed Acyclic Graph \\ DB & Distributed Bandits \\ DDPG & Deep Deterministic Policy Gradient \\ DL & Deep Learning \\ DNN & Deep Neural Network \\ DPPO & \begin{tabular}[c]{@{}l@{}}Distributed Proximal Policy\\ Optimization\end{tabular} \\ DQL & Deep Q-Learning \\ DQN & Deep Q-Network \\ DRL & Deep Reinforcement Learning \\ ECG & Electrocardiogram \\ EEG & Electroencephalogram \\ FANET & Flying Ad-hoc Network \\ FB & Federated Bandit \\ Fc & Fully connected \\ FL & Federated Learning \\ FNN & Feed forward Neural Network \\ GAN & Generative Adversarial Networks \\ GDP & Gross Domestic Product \\ IID & \begin{tabular}[c]{@{}l@{}}Independent and Identically\\ Distributed\end{tabular} \\ IoT & Internet of Things \\ IoV & Internet of Vehicles \\ LSTM & Long Short Term Memory \\ MAB & Multi-Agent Bandit \\ MADDPG & \begin{tabular}[c]{@{}l@{}} Multi-Agent Deep Deterministic \\ Policy Gradient\end{tabular} \\ MARL & \begin{tabular}[c]{@{}l@{}}Multi-Agent Reinforcement\\ Learning\end{tabular} \\ MDP & Markov Decision Process \\ MEC & Mobile Edge Computing \\ MLP & Multi-Layer Perceptron \\ NN & Neural Network \\ PAC & Probably Approximately Correct \\ PAIaas & Pervasive AI as a service \\ POMDP & \begin{tabular}[c]{@{}l@{}}Partially Observable Markov\\ Decision Process\end{tabular} \\ POMG & \begin{tabular}[c]{@{}l@{}}Partially Observable Markov\\ Game\end{tabular} \\ ppb & part per billion \\ ppm & part per million \\ PPO & Proximal Policy Optimization \\ QoE & Quality of Experience \\ QoS & Quality of Service \\ RL & Reinforcement Learning \\ rMSE & \begin{tabular}[c]{@{}l@{}}regularized Maximum Likelihood\\ Estimation\end{tabular} \\ RNN & Recurrent Neural Network \\ SGD & Stochastic Gradient Descent \\ SINR & \begin{tabular}[c]{@{}l@{}}Signal to Interference plus \\ Noise Ratio\end{tabular} \\ TPU & Tensor Processing Unit \\ UAV & Unmanned Aerial Vehicle \\ UCB & Upper Confidence Bound \\ UE & User Equipment\\ VR & Virtual Reality \\ WAN & Wide Area Network \\ XAI & Explainable AI \\ \begin{tabular}[c]{@{}l@{}}6G, 5G,\\ 4G\end{tabular} & Sixth, Fifth, Fourth Generations \end{tabular} \end{table} \section{Fundamentals of pervasive systems}\label{pervasive_systems} \subsection{Definition} Pervasive computing \cite{pervasive,pervasive2}, also named ubiquitous computing, is the growing trend of embedding computational capabilities into all devices in order to enable them to communicate efficiently and accomplish any computing task, while minimizing their resource consumption, e.g., battery, memory, and CPU time. Pervasive computing can occur in any device, in any format, in any place, and at any time. More specifically, it can span from resource-constrained devices to highly performant servers and can involve cloud datacenters, mobile edge computing servers, mobile devices, wearable computers, embedded systems, laptops, tablets, a pair of smart glasses, and even a refrigerator or a TV. These ubiquitous devices are constantly connected and available for any task. Ubiquitous computing is supported by different technologies, including operating and middleware systems, sensor networks, distributed systems, mobile protocols and networks, human-computer interaction, smart home technologies, and artificial intelligence.
To illustrate pervasive computing, where participants hand tasks from one to another, we can cite the example of an Apple Watch that notifies the user of an incoming phone call and allows them to start the conversation from the mobile device and complete it from the smartwatch. Another example is Amazon's Audible application, where a user can read a book on a tablet in the park and continue listening to it using Amazon Echo or Alexa at home. To summarize, we are no longer talking about devices acting on passive data. Instead, pervasive systems are able to collect, process, and communicate data of any type or size, understand their surroundings, adapt to the input context, and enhance human experiences and lifestyles. \subsection{Ubiquitous participants} Pervasive systems are characterized by highly heterogeneous devices (see Fig. \ref{participants}), where the critical challenge is to design a scalable infrastructure able to dynamically discover the different components, manage their interconnection and interaction, interpret their context, and adapt rapidly to the deployment of new software and user interfaces. A pervasive system can be composed of: \begin{figure}[!h] \centering \includegraphics[scale=0.5]{Figures/Pervasive_architecture.pdf} \caption{Ubiquitous participants.} \label{participants} \end{figure} \subsubsection{Data center and cloud computing} Cloud computing \cite{vital,QoE,RLopra,ptnet,GlobecomFatima} is defined as the delivery of on-demand services, from storage, management, and computation to artificial intelligence and natural language processing, on a pay-as-you-go basis. Hence, instead of owning computing servers, companies, operators, and end-users can exploit the high-performance facilities offered by the cloud service provider. In this way, they benefit from better computational capacities, while reducing the cost of owning and maintaining a computation infrastructure and paying only for the services they request. Cloud computing underpins a broad number of services, including data storage, cloud back-up of photos, video streaming services, and online gaming. \subsubsection{Mobile Edge Computing (MEC)} Edge computing was introduced as a solution to bring cloud facilities to the vicinity of users in order to minimize the perceived service latency, relieve the data transmission, and ease cloud congestion. In other words, edge computing has become an essential complement to the cloud, and even a substitute in some scenarios. Services and computing capabilities deployed at the edge of cellular networks are called Mobile Edge Computing (MEC) facilities \cite{CE-D2D,MEC,CE-D2D2}. Deploying MEC servers within edge Base Stations (BSs) allows providing location and context awareness, deploying new services quickly and flexibly, and enhancing the Quality of Service (QoS). \subsubsection{Cloudlets} Cloudlets \cite{cloudlet} are the network components that connect cloud computing to mobile computing. This component constitutes the middle layer of the three-tier hierarchical architecture composed of mobile devices, micro-clouds, and cloud data centers. The role of cloudlets is to define the algorithms and implement the functionalities that support low-latency edge-cloud task offloading. \subsubsection{Fog computing} Fog \cite{fog} and cloud computing share the same set of services provided to end-users, such as storage, networking, computing, and artificial intelligence.
However, the cloud architecture is composed of fully distributed large-scale data centers, whereas fog services focus on IoT devices in a specific geographical area and target applications requiring real-time responses, such as live streaming, interactive applications, and online collective gaming. \subsubsection{Edge devices} In most studies, the interpretation of edge devices (i.e., edge nodes and IoT devices) is still ambiguous \cite{BILAL201894}, meaning that the difference between end/IoT devices and edge nodes is still unclear. Yet, a common consensus defines end/IoT devices as ubiquitous gadgets, such as smartphones and other smart gadgets, and edge nodes as devices at higher levels, including fog nodes, MEC servers, and cloudlets. Edge nodes are expected to possess higher storage and computation capacities and to offer high-quality networking and processing services in the proximity of IoT devices, with a lower response time than remote cloud servers. Driven by the expansion and pervasiveness of computing devices, we believe that the heterogeneity of ubiquitous systems will increase in the future. These devices have to interact seamlessly and coherently, despite their differences in software and hardware capacities. \subsection{Architecture and intersection with AI} Fig. \ref{Architecture} illustrates the hierarchical architecture of a pervasive system \cite{pervasive2}, which is composed of three layers: \begin{itemize} \item Data source layer: the data is collected from different monitored sources generating information about the physical world or human activities, multimedia data such as images and audio, and social media information. \item Data management layer: this layer involves the storage and integration of the heterogeneous data incoming from pervasive sources, the cleaning and pre-processing that tailor the data to the context of the system, and the data analytics that convert the raw information into useful and personalized insights using multiple approaches, such as business intelligence and artificial intelligence. \item Application layer: the insights generated by the previous layer are used to offer multiple intelligent applications, such as health advisors and smart home applications. \end{itemize} \begin{figure}[!h] \centering \includegraphics[scale=0.645]{Figures/UbiLayers.pdf} \caption{Pervasive architecture.} \label{Architecture} \end{figure} In this paper, we focus only on the data management layer, specifically the data analytics using artificial intelligence. The data source layer is thoroughly discussed in \cite{Survey31}, whereas the application layer can be found in \cite{Survey30}. \section{Fundamentals of Artificial Intelligence}\label{AI} Since the approaches and techniques reviewed in this survey rely on artificial intelligence and deep neural networks, we first provide a brief background on deep learning. A deeper and more detailed review of AI can be found in the reference book \cite{DeepLearning}. \subsection{Background} Even though AI has recently gained enormous attention, it is not a new term: it was initially coined in 1956. In fact, AI is a computation paradigm that aims to teach machines how to act, react, learn, reason, plan, solve problems, and behave like humans. Particularly, by absorbing knowledge from real-world data, the AI agent makes decisions without being explicitly programmed.
Multiple techniques and procedures fall under this broad umbrella, such as rule-based systems, expert systems, blackboard architectures, control systems, and the well-known machine learning algorithms. Machine learning generally includes three categories: supervised, unsupervised, and online learning. An important branch of machine learning is deep learning, which can be supervised or unsupervised and is based on simulating the biological nervous system and performing the learning through successive layer transformations. As most pervasive applications are driven by deep learning techniques and, more recently, online learning, the crossover between the above-mentioned domains (shown in Fig. \ref{DL}) defines the scope of this paper. \begin{figure}[!h] \centering \includegraphics[scale=0.6]{Figures/DL_2.pdf} \caption{Relation between AI, machine learning, deep learning, and online learning. This survey mainly focuses on pervasive deep and online learning.} \label{DL} \end{figure} \begin{figure*}[!h] \centering \includegraphics[scale=0.7]{Figures/DNN_types.pdf} \caption{NN structures: (a) Multilayer Perceptron (MLP), (b) Convolutional Neural Network (CNN), (c) Residual Neural Network, (d) Randomly Wired Neural Network.} \label{DNN_types} \end{figure*} \subsubsection{Deep learning and Deep Neural Networks} In the following, we briefly present an overview of the most common deep learning networks. Neural networks consist of a first input layer, multiple hidden layers, and a last output layer, as shown in Fig. \ref{DNN_types}. When a neural network contains a high number of sequential layers, it is called a Deep Neural Network (DNN). The DNN layers are composed of smaller units, namely neurons. Each neuron applies a weighted summation and a bias to all received inputs; the obtained sum is then fed to an activation function to generate the output. Fig. \ref{FCNN} illustrates the structure of the neuron. During the training process, the weights and bias vector of each layer are optimized to enhance the accuracy of the model. Most commonly, the output of one layer is the input of the next layer, and the output of the final layer is either a classification or a feature. The correctness of the prediction is assessed by the loss function, which calculates the error between the true and predicted values. To adjust the weights of the different neurons, an optimization algorithm calculating the gradient of the loss function is used. The most widely used optimizers are Stochastic Gradient Descent (SGD) \cite{SGD} and its variants, including ADAM \cite{ADAM}. The error is propagated back across the network down to the input layer. This process, known as backpropagation, is repeated for multiple rounds, adjusting the weights of each neuron at each round. The DNN is considered trained and ready for inference when the error falls below the desired threshold. \begin{figure}[!h] \centering \includegraphics[scale=0.72]{Figures/FCNN.pdf} \caption{MLP composed of multiple neurons: Each neuron has several inputs and trainable weights and bias.} \label{FCNN} \end{figure} DNNs have various structures. Hence, we introduce the fundamentals of the best-known types as follows: \begin{table*}[] \footnotesize \tabcolsep=0.08cm \caption{Parameter comparison of state-of-the-art DNNs trained on ImageNet \cite{imagenet}, in terms of FLOPs\protect\footnotemark.\\ MACC: Multiply-ACCumulate operations.
} \label{Macc} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline \textbf{Model} & \textbf{Comp} & \textbf{Add} & \textbf{Div} & \textbf{MACC} & \textbf{Activations} & \textbf{params} & \begin{tabular}[c]{@{}l@{}}\textbf{size} \\\textbf{(Mb)}\end{tabular} & \textbf{pros} & \textbf{cons} \\ \hline VGG 16 \cite{VGG}& 196.85 M & 10 K & 10 K & 154.7 G & 288.03 M & 138.36 M & 512.2 & \begin{tabular}[c]{@{}l@{}}- Spatial exploitation.\\ - Simple and homogeneous\\ topology.\end{tabular} & \begin{tabular}[c]{@{}l@{}}- Computationally expensive\\ fully connected layers.\end{tabular} \\ \hline AlexNet \cite{AlexNet}& 17.69 M & 4.78 M & 9.55 M & 7.27 G & 20.81 M & 60.97 M & 217 & \begin{tabular}[c]{@{}l@{}}- Spatial exploitation.\\ - Low, medium, and high\\ feature extraction.\\ - Introduces regularization \\ in CNN.\end{tabular} & \begin{tabular}[c]{@{}l@{}}- Inactive neurons in the first\\ layers.\\ - Large filter size that causes\\ artifacts aliasing in the output\\ feature maps.\end{tabular} \\ \hline GoogleNet \cite{GoogleNet}& 161.07 M & 8.83 M & 16.64 M & 16.04 G & 102.19 M & 7 M & 40 & \begin{tabular}[c]{@{}l@{}}- Spatial exploitation.\\ - Multi-scale layers.\\ - Reduces number of params by\\ using bottleneck and average\\ pooling layers.\end{tabular} & \begin{tabular}[c]{@{}l@{}}- Potential loss of important\\ information because of \\ representational bottleneck.\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}ResNet 50 \\ \cite{ResNet} \\\\ \end{tabular} & 10.89 M & 16.21 M & 10.59 M & 3.87 G & 46.72 M & 25.56 M & 97.7 & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}- Depth and Multi-path.\\ - Introduce residual learning.\\ - Solve the vanishing gradient\\ problem.\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}- Complex architecture.\\ - Multiple layers have no \\ contribution for the inference.\\ - Potential re-learning of \\ redundant feature maps.\end{tabular}} \\ \cline{1-8} \begin{tabular}[c]{@{}l@{}}ResNet 152 \\ \cite{ResNet} \\ \\\end{tabular} & 22.33 M & 35.27 M & 22.03 M & 11.3 G & 100.11 M & 60.19 M & 230 & & \\ \hline Inception v3 \cite{InceptionV3}& 16.53 M & 25.94 M & 8.97 M & 5.72 G & 41.33 M & 23.83 M & 91 & \begin{tabular}[c]{@{}l@{}}- Depth and width.\\- Reduce computational load by\\ using asymmetric filters.\end{tabular} & \begin{tabular}[c]{@{}l@{}}- Complex architecture.\\ - Problem of homogeneity.\end{tabular} \\ \hline Inception v4 \cite{InceptionV4}& 21.87 M & 53.42 M & 15.09 M & 12.27 G & 72.56 M & 42.71 M & 163 & \begin{tabular}[c]{@{}l@{}}- Depth and width.\\ - Deep hierarchy of features.\end{tabular} & - Learning is slow. \\ \hline SqueezeNet \cite{squeezenet} & 9.67 M & 226 K & 1.51 M & 861.34 M & 12.58 M & 1.25 M & 4.7 & - \begin{tabular}[c]{@{}l@{}} Squeezes non-important \\ features. \end{tabular}& - Lower accuracy. \\ \hline \end{tabular} \end{table*} \paragraph{Multilayer Perceptron (MLP)} If the output of one layer is fed forward to the subsequent layer, the Neural Network (NN) is termed a Feed-Forward NN (FNN). The baseline FNN is called the MLP or vanilla network. As shown in Fig. \ref{DNN_types} (a), each layer is Fully connected (Fc) to the next one, and the output is sent to the next layer's perceptrons without any additional computation or recursion other than the activation function. Even though the structure of the MLP is simple, it is able to distinguish non-linearly separable data, as long as the NN model is sufficiently large.
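As a concrete illustration of the neuron operations described above (weighted sum plus bias, followed by an activation), a minimal MLP forward pass can be sketched as follows; the layer sizes and the choice of the $\tanh$ activation are illustrative assumptions.
\begin{verbatim}
import numpy as np

def mlp_forward(x, params):
    # Feed-forward pass: each hidden layer computes act(W x + b).
    for W, b in params[:-1]:
        x = np.tanh(W @ x + b)    # weighted sum + bias + activation
    W, b = params[-1]
    return W @ x + b              # output layer (e.g., class scores)

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]              # input, two hidden layers, output
params = [(rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
print(mlp_forward(rng.normal(size=4), params))  # three output scores
\end{verbatim}
Training would then adjust every $W$ and $b$ via backpropagation, as described earlier.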
\paragraph{Convolutional Neural Networks (CNN)}\label{CNN} Processing vision-based tasks (e.g., image data) using an MLP potentially requires a deep model with a huge number of perceptrons, as a perceptron is assigned to each data pixel, which makes the network hard to train and scale. One of the successors of the MLP is the CNN, which was introduced to solve this problem by defining additional pre-processing layers (i.e., convolutional (conv) and pooling layers), as shown in Fig. \ref{DNN_types} (b). In the convolutional layer, the 2D input data (e.g., a speech signal or an image) is processed by extracting high-level features and compressing the information. Furthermore, the convolutional layer includes a set of learnable parameters, namely filters, which have the same number of channels as the input feature maps but smaller spatial dimensions. Each filter channel slides along the length and width of the corresponding input feature map and computes the inner product with the data. The summation of all the outputs produces one feature map. Finally, the number of output feature maps equals the number of filters, as illustrated in Fig. \ref{Conv}. The second basic component of the CNN is the pooling task, whose objective is to reduce the spatial size of the input feature maps and minimize the computation time. For example, max-pooling, which partitions the input data into a grid and picks the maximum value of each grid cell, is widely used in state-of-the-art CNNs. The main difference between Fc and conv layers is that each neuron in an Fc network is connected to the entire input, whereas a CNN neuron is connected to only a subset of the input. \begin{figure}[!h] \centering \includegraphics[scale=0.67]{Figures/Conv_2.pdf} \caption{Convolutional task.} \label{Conv} \end{figure} A milestone for CNNs applied to computer vision problems is the design of AlexNet, which revolutionized the ImageNet visual recognition challenge in 2012 \cite{AlexNet}. AlexNet is composed of 5 conv layers and 3 Fc layers; it contains 61 million weights to classify 227x227 images and requires 217 Mb of storage. Another representative example of a state-of-the-art Deep Neural Network that has demonstrated unprecedented performance in visual recognition tasks is VGG. VGG-16 \cite{VGG} presents a deeper network of 13 conv layers and 3 Fc layers and includes more than 138 million parameters for 224x224 images. VGG can only be executed on powerful devices, as deploying it on end-devices incurs intolerable classification latency, exceeding 16 seconds \cite{MoDNN}. To reduce the computation of the inference, Google introduced a model called GoogleNet or the inception model \cite{GoogleNet}. While achieving a better accuracy, GoogleNet is composed of only 7 million weights and requires a tenth of the accumulate operations of VGG for 224x224 images. \paragraph{Deep Residual Networks}\label{DRN} \footnotetext{http://dgschwend.github.io/netscope/quickstart.html\\https://machinethink.net/blog/how-fast-is-my-model/}Following the victory of AlexNet and VGG, deep residual networks have achieved a new breakthrough in computer vision challenges in recent years. Particularly, residual networks paved the way for the deep learning community to train networks of up to hundreds and even thousands of layers, while achieving high performance.
In fact, designing a deep network does not work by simply stacking sequential layers, as training becomes difficult due to the vanishing gradient problem, which makes backpropagation inefficient. As a result, when the network goes deeper, its performance starts to saturate and then degrades quickly. To tackle the vanishing gradient, an auxiliary loss can be added in an intermediate layer as extra supervision \cite{GoogleNet}; however, the performance improvement is not significant. ResNet \cite{ResNet} is the state-of-the-art variant of the residual network. This model uses so-called shortcut/skip connections that skip multiple nodes and feed the intermediate output to a destination layer (see Fig. \ref{DNN_types} (c)), which serves as a memory for the model. A similar idea is applied in Long Short Term Memory (LSTM) networks \cite{LSTM}, where a forget gate is added to control the information that will be fed to the next time step. LSTM belongs to the Recurrent Neural Network (RNN) family. \paragraph{Randomly Wired Networks} The aforementioned networks focus on connecting operations such as convolutional tasks through carefully designed sequential paths. Unlike previous DNNs, randomly wired networks \cite{Randomly_wired} arbitrarily connect the same operations throughout the sequential micro-architectures, as shown in Fig. \ref{DNN_types} (d). Still, some decisions are required to design a random DNN, such as the number of stages that down-sample feature maps using max-pooling and the number of nodes to deploy in each stage. The advantage of randomly wired networks over the other models is that training is faster, the number of weights is reduced, and the memory footprint is optimized.\\ Table \ref{Macc} summarizes the parameters of some state-of-the-art DNNs trained on the ImageNet dataset \cite{imagenet}. Other state-of-the-art structures have achieved unprecedented performance in multiple deep learning applications \cite{Survey32,survey_DNN}, including Recurrent Neural Networks (RNNs) \cite{RNN}, Auto-Encoders (AEs) \cite{AEs}, and Generative Adversarial Networks (GANs) \cite{GAN}; however, a detailed overview of all models falls outside the scope of this paper. \subsubsection{Online Learning} Online learning, also known as sequential decision making, refers to techniques that update the model/policy at each step, upon receiving each new instance of data. This sequential learning over real-time incoming data is opposed to offline or batch learning, where multiple instances of the data are collected first and the model is then trained once by consuming the whole batch, although it can be updated later if the data distribution changes. Batch learning is the most common way to train deep neural networks, as it avoids the problem of \textit{catastrophic forgetting} occurring in online techniques, where previous learning may be forgotten upon learning new information. On the other hand, the advantage of online learning is that it is adaptable, as it makes no assumption about the data distribution. In this way, if the trend of the data drifts or morphs, the policy or the model can adapt to the changes on-the-fly. In offline learning, by contrast, the model has to be retrained on the data every time it needs updating. Online learning is also data efficient: once an input is digested, it is no longer needed and can be removed, which is not the case for offline learning, which stores the whole dataset for training.
Mini-batch learning, often imposed by resource constraints, is the halfway point between offline and online learning. \paragraph{Bandit learning} The bandit problem represents the simplest online learning formulation, where an agent interacts with an environment by performing actions at discrete time steps. Each of these actions results in a feedback signal referred to as the reward, which describes the goodness of that action. Consider a website that wants to maximize the engagement and relevance of articles presented to users. When a new user arrives, the website needs to decide on an article header to show and observe whether or not the user interacts with this article. In this example, the selected action is the article to display, and the reward is binary: $1$ if clicked, $0$ otherwise. Many recommendation-based systems can be modeled similarly, such as movie-recommender systems, where the actions are which movie to recommend, and web search, where the actions are which results to show. In these scenarios, the reward is assessed according to the users' satisfaction. This is fundamentally different from supervised learning, where the true labels of the training data are known and the aim is to learn a model capable of classifying inference data samples or forecasting targeted features. It also differs from the optimal control problem in the field of systems control \cite{bertsekas2019reinforcement}: a perfect model describing the environment is available in the latter, whereas the bandit model is only estimated from trials. Agents in bandit learning aim to quickly discover the best action (also referred to as an arm) across a group of actions by only observing the rewards obtained by executing each one. There are multiple extensions of this basic definition, e.g., linear bandits, adversarial bandits, and combinatorial bandits, where different assumptions are made regarding how the actions generate the reward.
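As a single-agent illustration of this interaction loop, the sketch below runs the article-recommendation example under the UCB rule discussed next; the click probabilities are hypothetical:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
p_click = [0.05, 0.12, 0.08]       # hypothetical click rate of each article
n_arms, horizon = len(p_click), 5000
counts = np.zeros(n_arms)          # times each article was shown
means = np.zeros(n_arms)           # empirical click-through rates

for t in range(1, horizon + 1):
    if t <= n_arms:                # show each article once to initialize
        a = t - 1
    else:                          # UCB: pick the highest plausible reward
        ucb = means + np.sqrt(2 * np.log(t) / counts)
        a = int(np.argmax(ucb))
    r = float(rng.random() < p_click[a])      # binary reward: clicked or not
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]    # incremental mean update

print(counts)                      # the best article (index 1) dominates
\end{verbatim}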
There are optimal algorithms for the stochastic bandit problem in the single-agent case, such as successive elimination and the Upper Confidence Bound (UCB) algorithm. In successive elimination, each action is tried a fixed number of times before seemingly sub-optimal actions are eliminated. Analysis shows that sub-optimal actions cannot be selected more than a logarithmic number of times in the horizon. In UCB, the action with the highest plausible expected reward is selected (utilizing confidence bounds on the reward distribution, also known as concentration bounds). The same logarithmic upper bound on sub-optimal action selection is shown in \cite{slivkins_introduction_2019}. Note that a critical assumption in bandits is that actions do not have any effect on the agent other than producing a sample of a reward signal. In cases where actions may transform the environment from one well-described state to another, the formulation is known as reinforcement learning. \paragraph{Reinforcement Learning (RL)} The RL concept is based on learning how to map situations and environment states to actions in order to maximize a long-term reward signal. The RL agent is not told which action to choose; instead, it discovers the actions that achieve the highest reward by trying different combinations and receiving immediate gains and penalties, which can be modeled as a Markov Decision Process (MDP). Different from bandit learning, a chosen action in RL impacts not only the immediate reward but also all subsequent situations and their related rewards. These two features, trial-and-error search and delayed reward assignment, are the key characteristics of RL, enabling it to learn by interacting with its environment and then adapting to it. This principle is illustrated in Fig. \ref{DRL}. Deep Reinforcement Learning (DRL) \cite{DRL, 9207771} combines reinforcement learning and deep learning. DRL is well-suited, and even indispensable, when the environment is highly dynamic and high-dimensional and the number of states is large or continuous. In such a scenario, traditional RL cannot perform efficiently. Hence, the powerful representation ability of DNNs is used to handle the continuous or huge state-action space. DRL has become a powerful solution in numerous fields, including robotics, 5G networks, and security, even though applications are still in their infancy. \begin{figure}[!h] \centering \includegraphics[scale=0.6]{Figures/DRL.pdf} \caption{Deep Reinforcement Learning (DRL) design.} \label{DRL} \end{figure} Variants of DRL include deep policy gradient RL \cite{policy_gradient}, Deep Q-Networks (DQN) \cite{DQN}, Distributed Proximal Policy Optimization (DPPO) \cite{PPO}, and Asynchronous Advantage Actor-Critic \cite{AAAC}. In this survey, we only discuss two representative techniques, namely DQN and policy-gradient DRL: \begin{itemize} \item Deep Q-Networks (DQN): DQN is a representative of value-based DRL that leverages the powerful ability of DNNs to map high-dimensional state sets to action values. Variants of DQN include Double Deep Q-Learning (Double-DQL) \cite{Double-DQL}, which handles the problem of the overestimation of Q-values, and Dueling Deep Q-Learning (Dueling-DQL) \cite{Dueling-DQL}, which learns which states are valuable without having to learn the effect of each action at each state.
\item Policy-gradient-based DRL: Another commonly used DRL strategy is the policy gradient, which includes Deep Deterministic Policy Gradient (DDPG) \cite{DDPG} and Proximal Policy Optimization (PPO) \cite{PPO}. Policy-gradient methods are trained by continuously calculating the gradient of the expected policy reward and updating the policy parameters. Besides, a well-known approach in policy-gradient DRL is the Actor-Critic (AC) framework, which is composed of a policy function and an action-value function. The policy function plays the role of the actor that takes decisions and interacts with the environment, whereas the action-value function is called the critic and is responsible for evaluating the performance of the actor. \end{itemize} \subsection{Performance metrics} The assessment of DNN performance depends on the proximity-aware IoT application where deep learning is used. For example, for object detection, face authentication, or self-driving cars, accuracy is of utmost importance. Yet, some performance metrics are general and not specific to any application, including latency, memory footprint, and energy consumption. An overview of the different performance metrics is presented as follows: \subsubsection{Latency} The latency is defined as the time required to perform the whole inference/training process, which includes the data pre-processing, data transmission, the classification process or the model training, and the post-processing. Real-time applications led by artificial intelligence (e.g., drones, autonomous vehicles, AR/VR gaming, and intelligent wearable devices) usually have stringent latency constraints, of around 100 ms \cite{Survey8}. Hence, processing near the data source is advantageous for a fast inference response. Furthermore, specialized accelerators should be wisely designed to efficiently perform deep learning for edge applications and meet the latency requirements. The latency metric is affected by different factors, such as the size of the DNN model, the computational capacity of the host device, and the transmission efficiency. \subsubsection{Accuracy vs efficiency} The accuracy refers to the percentage of data samples that receive the right prediction out of the total number of input samples. This metric mainly reflects the performance of the trained model. In addition to the capability of the deep network, accuracy is also impacted by the speed of feeding the data to the model. Particularly, the fast arrival of images is a serious issue encountered by video analytics applications on resource-constrained devices, because some data samples may be skipped, which causes an accuracy drop. Therefore, for ultrahigh accuracy, the computing device has to meet the extremely intensive memory and computation requirements of the DNN model. Recently, some approaches have resorted to compressing the deep network in order to deploy it on small IoT devices, with the objective of keeping the accuracy as high as possible. Pervasive AI, which is the scope of this survey, is an efficient solution to perform edge inference without sacrificing accuracy. \subsubsection{Energy efficiency} Unlike the cloud and edge servers, IoT devices are battery-limited (e.g., commercial drones). Moreover, the communication and computation overhead caused by deep model training/inference incurs huge energy consumption. Hence, energy efficiency is of great importance in the context of edge AI, and it primarily depends on the size of the DNN and the capabilities of the computing device.
\subsubsection{Computation and memory footprint} To perform DNN training/inference, significant cycles are executed to transfer data to/from the computational array in memory, which makes it a highly intensive and challenging task. For example, VGG16 and AlexNet require 512 Mb and 217 Mb of memory, respectively, to store more than 136 M and 60 M weights, and perform 154.7 G and 7.27 G multiplications, respectively, to classify a single ImageNet input, as illustrated in Table \ref{Macc}. Such an amount of memory and computational tasks is infeasible to execute on power- and resource-constrained devices with a real-time response. Therefore, optimizing the size of the DNN is necessary (see SqueezeNet in Table \ref{Macc}). Additionally, the way the tremendous number of DNN parameters is loaded has a significant impact on the computation requirements of the learning tasks, which encourages network re-design (e.g., pruning \cite{pruning} and quantization \cite{quantization}). While the model-squeezing approaches are based on removing features, the high accuracy of a DNN requires millions of parameters. This paves the way for introducing efficient data distribution and parallelization that do not affect the performance of the system. \subsubsection{Communication Overhead} The communication overhead impacts the performance of the system when the DNN computation is offloaded to the cloud or to other edge participants. Hence, it is indispensable to minimize this overhead, particularly in costly network infrastructures. The data overhead mainly depends on how the model is designed, i.e., the types and configuration of the layers that determine the output size, in addition to the communication technology. Another important performance metric related to offloading DNN tasks is the consistency of the parameters and computations distributed across all machines. Furthermore, fault-tolerance should be guaranteed to deal with communication failures efficiently. \begin{figure*}[!h] \centering \includegraphics[scale=0.64]{Figures/DL_application_services.pdf} \caption{AI pervasive application and the related foundational DL services reviewed in this survey.} \label{services} \end{figure*} \subsubsection{Privacy} IoT devices produce and offload a massive amount of data every second, which can result in serious privacy vulnerabilities and security attacks such as black-box attacks \cite{black-box}, white-box attacks \cite{white-box}, data poisoning \cite{data_poisoning}, membership attacks \cite{membership}, and targeted mis-classification \cite{mis-classification}. Guaranteeing the robustness and privacy of the DNN system has become a primary concern for the deep learning community. Traditional defenses resort to data encryption, pre-processing, and watermarking. Yet, all these solutions can be neutralized using model stealing attacks. Hence, more sophisticated defenses need to be designed to secure DNN training and execution through data distribution. To design an efficient deep learning network, or to select the adequate one for the targeted application, a large number of hyperparameters needs to be considered. Therefore, understanding the trade-off between these parameters (e.g., latency, accuracy, energy, privacy, and memory) is essential before designing the model. Recently, automated machine learning frameworks responsible for DNN selection and parameter tuning have been introduced, such as Talos \cite{talos}.
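To ground these orders of magnitude, the sketch below estimates the weight memory and the multiplication count of a single convolutional layer from its shape alone, using the standard convolution arithmetic (formalized later in eq. (\ref{eq:5})); the example layer is illustrative and not taken from any specific network:

\begin{verbatim}
def conv_layer_cost(H2, W2, D1, Hf, Wf, k, bytes_per_weight=4):
    """Weight memory (bytes) and multiplications of one conv layer.

    D1: input channels, (Hf, Wf): filter size, k: number of filters,
    (H2, W2): spatial size of the output feature maps.
    """
    weights = k * D1 * Hf * Wf              # stored parameters
    mults = D1 * Hf * Wf * k * H2 * W2      # multiplications per input
    return weights * bytes_per_weight, mults

# Illustrative layer: 64 input channels, 3x3 filters, 128 filters,
# 224x224 output feature maps
mem, mults = conv_layer_cost(224, 224, 64, 3, 3, 128)
print(f"{mem / 1e6:.2f} MB of weights, {mults / 1e9:.2f} G multiplications")
\end{verbatim}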
\subsection{Pervasive frameworks for AI} Several hardware platforms and open-source software libraries are publicly available to enable DNN training and inference on pervasive devices, particularly resource-limited ones. As a first example, Google TensorFlow \cite{Tensorflow} is an open-source deep learning framework released in 2015 to execute DNN tasks on heterogeneous distributed systems based on their estimated computational and communication times. In 2017, an optimized version of TensorFlow, namely TensorFlowLite \cite{Tensorflowlite}, was specially designed for resource-constrained devices, such as the Raspberry Pi. This version was further enhanced in 2019 by enabling mobile GPU support. However, TensorFlowLite was not designed to carry out the training phase, but only to compress pre-trained DNN models and perform inference tasks with minimum latency. Another lightweight deep learning framework, developed by Facebook, is Caffe2 \cite{caffe2}. Caffe2 provides a straightforward way to experiment with deep learning algorithms, with easy updates, high flexibility, and the ability to run heterogeneous models on low-power devices. In 2019, this framework was merged with the well-known deep learning platform PyTorch \cite{Pytorch} for research and production purposes. Core ML \cite{CoreML} and DeepLearningKit \cite{DeepLearningKit} are two machine learning frameworks supporting pre-trained models on iPhone/iPad devices. More specifically, Core ML was designed to leverage the CPU/GPU of the end-device for deep learning applications such as natural language and image processing, while minimizing the energy consumption and memory footprint. On the other hand, DeepLearningKit supports more complex networks such as CNNs and is designed to utilize the GPU more efficiently for iOS-based applications. Equipping end-devices with GPUs is important for efficient inference and model training. In this context, IoT-specific development kits are provided to experiment with AI in edge computing. One of the recent powerful kits is the NVIDIA Jetson Nano developer kit \cite{NVIDIA}, a small device that enables running multiple DNNs in parallel for intensive applications, including object recognition. The Intel Edison kit \cite{Intel} is another popular AI platform designed for IoT experiments. \subsection{Pervasive AI for IoT Applications} Deep learning methods have brought substantial breakthroughs in a broad range of IoT applications, spanning from signal and natural language processing to image and motion recognition. Recently, because of the revolution of pervasive computing, the DL paradigm has been re-designed to target a wide variety of proximity-aware IoT applications, use cases, and verticals. In this section, we review the accomplishments of deep learning in different domains where pervasive computing is needed, including intelligent vehicles and robots, smart homes and cities, health and well-being, energy and smart grid, virtual and augmented reality, and 5G/6G intelligent networks. Besides, we identify several types of foundational DL services on which pervasive applications are built, such as vision and image classification, and motion and speech recognition. The common requirement that groups different DL services is the need for a prompt response and fast analytics on data that should not be piled up for later processing.
Fig. \ref{services} illustrates the different foundational DL services and the related pervasive applications/domains reviewed in this paper. Note that each application may require more DL services beyond the ones summarized in this survey. \subsubsection{Intelligent vehicles, robots, and drones} Recently, DNNs have been widely used to guide a variety of mobile platforms, such as drones, robots, and vehicles, in order to achieve critical tasks such as autonomous navigation and human safety monitoring. In this context, intelligent transportation systems have become an important source of ubiquitous data, e.g., the Internet of Vehicles (IoV). For example, the authors in \cite{traffic_flow, traffic_congestion,GPS3,GPS4} used GPS data from taxis and bikes as input to their CNN models to forecast the traffic flow and predict potential congestion. Learning from GPS data can be categorized as outdoor localization, also called location-aware DL services. Motivated by the revolutionary development of image processing DL services, applications such as driving assistance, autonomous driving, and mobility mapping have become more reliable and commonly used in intelligent mobile systems. As an example, in \cite{self_driving}, the image captured by the vehicle's front-facing camera is used to decide the steering angle and keep the car in the middle of the lane. The authors in \cite{sign_recognition} designed a traffic sign recognition system that outperforms human detection by 0.62\% and boosts self-driving efficiency. The ever-improving online learning techniques are broadly exploited for UAV/robot guidance, including the works in \cite{drone1,drone2}, where drones learn how to navigate and avoid obstacles while searching for target objects. Several companies are using DL for their self-driving systems, such as Amazon's package-delivering Prime Air UAVs \cite{prime_air}, Uber's self-navigating cars \cite{Uber}, and the smart delivery robots widely used in many hospitals during the Covid-19 pandemic to avoid contact with patients \cite{smart_robots}. Finally, the distinctive performance of drones/robots encouraged the emergence of more critical and sophisticated missions, many of which were not even envisaged a couple of decades ago, including military border surveillance, oil/gas offshore inspection, and forest fire detection \cite{fire_detection} based on image processing DL services. \subsubsection{Smart homes and cities} The concept of a smart home covers a large range of applications that contribute to enhancing the productivity, convenience, and quality of life of the house occupants. Nowadays, many smart appliances are able to connect to the internet and offer intelligent services, such as smart air conditioners, smart televisions, and lighting control systems. Most of these appliances require the deployment of wireless controllers and sensors in walls, floors, and corners to collect data for motion recognition DL services. Speech/voice DL recognition services are also involved for better home control. Well-known examples are the Amazon Alexa \cite{alexa}, Apple Siri \cite{siri}, and Microsoft Cortana \cite{cortana} applications, which respond to the vocal requests of users. Combined with image recognition DL services, Cortana can also be used to gather information from smart refrigerators and identify food items \cite{cortana_refrigerator}.
Compared to smart homes, smart city services are more relevant to the deep learning community, as the data collected from different ubiquitous participants is huge and highly heterogeneous, which allows high-quality analysis. Examples involve waste management and garbage classification \cite{waste_classification}, air quality and pollution level estimation \cite{air_pollution}, energy consumption and smart grid \cite{smart_grid}, pedestrian traffic and crowd movement prediction \cite{pedestrian_detection}, parking control \cite{car_parking}, human activity monitoring using wearable devices \cite{human_activity}, and even analysis of the time passengers spend looking at ads. These applications are based on image recognition, localization, and signal processing DL services. \subsubsection{Health and well-being} Deployed on wearable and personal devices, DL services have been used as health care solutions for individual users and communities. For instance, some mobile applications can monitor the dietary regime by recognizing food images, portion sizes, and other relevant information, using image processing DL services \cite{food_recognition, EdgeHealth}. Furthermore, handwriting images can help identify Parkinson's disease \cite{parkinston_detection}, the processing of electrocardiogram (ECG) and electroencephalogram (EEG) signals contributes to the early detection of seizures and QRS complexes \cite{EEG,ECG}, and voice/motion monitoring can give some insights for diagnosing psychological disorders \cite{stress_detection}. \subsubsection{Virtual Reality (VR) and Augmented Reality (AR)} VR is designed to create an artificial environment, where users are placed into a 3D experience that simulates their different senses, such as vision, touch, and hearing. AR can be defined as a form of VR that inserts artificial objects into the real environment. In AR, sensors are used to control the orientation and position of the camera. Popular examples of applications using AR/VR include the tactile internet, holographic telepresence \cite{holographic}, and multi-player VR games. The latency of virtual reality systems is measured in terms of the “motion-to-photons” metric, defined as the delay from the moment the headset moves to the moment the display is updated according to the movement. This motion-to-photons latency should be in the range of tens to hundreds of milliseconds \cite{VR}. Offloading the VR/AR computation to remote cloud servers may incur higher latencies, exceeding the required constraints. Hence, on-device computation is indispensable to achieve real-time performance. \subsubsection{5G/6G intelligent networks} The potential applications of DNNs aiming to enhance networking performance are countless, particularly after the emergence of the sixth generation (6G). Different from previous generations, the 6G paradigm is based on supporting a wider variety of AI services, spanning from high-performance servers to resource-limited devices, making “connected things” evolve into “connected intelligence”. Applications of DL in the new generation of networks involve adaptive resource allocation to serve users in real-time \cite{allocation}, device-to-device (D2D) task offloading using online learning and localization services \cite{offloadingDL}, proactive caching to minimize remote communication and reduce latency \cite{caching}, network energy efficiency \cite{energy}, and privacy and data security \cite{privacy}.
\subsection{Lessons learned} In this section, we reviewed state-of-the-art deep learning and online learning techniques, examined their performance metrics, and presented some of their applications that may require pervasive deployment. In this context, multiple conclusions can be stated: \begin{figure*}[!h] \centering \includegraphics[scale=0.55]{Figures/Pervasive_inference.pdf} \caption{Pervasive inference system in multiple scenarios. } \label{pervasive_AI} \end{figure*} \begin{itemize} \item Proximity-aware AI IoT applications have different requirements, and each one has its distinctive key performance metrics. For example, VR/AR is highly sensitive to delays and cannot tolerate any motion sickness. Meanwhile, applications relying on UAVs and moving robots have stringent requirements in terms of energy to accomplish their missions. For surveillance applications, accuracy is paramount, whereas health services require strict privacy constraints. However, such requirements come with other costs. More specifically, lower delays and energy consumption can be achieved using small DL networks that generate fast inference and can be deployed locally. On the other hand, high accuracy cannot be attained using these models. Instead, deep networks can be adopted, while incurring higher memory and computation requirements and, consequently, higher communication overheads for remote execution. Privacy imposes local training and inference, which requires robust devices equipped with GPUs. Therefore, understanding the requirements of the targeted application and the trade-off between the different hyper-parameters is crucial for selecting the adequate AI model and the processing device. \item The common characteristic of most AI applications, particularly IoT applications that require real-time data collection, is the need for prompt responses and fast analytics that should not be piled up for later processing. Hence, centralized solutions such as cloud-based data analytics are no longer feasible, due to the communication overheads. Pervasive computation has emerged as a solution that enables the deployment of AI in the proximity of the data source for latency-sensitive applications, and in collaboration with high-performance servers for better computational resources. \item Understanding the application requirements and the pervasive environment, and wisely selecting the data shape and the adopted AI technique, are critical for determining the distribution mode. More specifically, privacy constraints and the size of the data open the door to federated learning, where each entity trains on its data locally. The low latency requirements and the limited resources imposed by some pervasive systems push for the partitioning of inference, where the AI model is split into smaller segments. Finally, the dynamics of the system, the unavailability of labeled data, and the inherently decentralized architectures call for online learning with distributed agents. \end{itemize}
After understanding the motivations for \textit{pervasive AI} and the requirements of the IoT applications and their related AI models, we present the different distribution modes and their communication and computation models in the subsequent sections. We start with distributed inference and federated learning. Next, we discuss online learning, including multi-agent bandits, multi-agent RL, and active learning. \section{Pervasive Inference}\label{PI} \begin{figure}[!h] \hspace{-5mm} \centering \includegraphics[scale=0.55]{Figures/IVA.pdf} \caption{Outline of pervasive inference section.} \label{IVA} \end{figure} In this section, we discuss pervasive inference, where the trained model is partitioned and its different segments are distributed among ubiquitous devices. Fig. \ref{pervasive_AI} illustrates different scenarios where the distribution can solve the challenges presented by centralized approaches. In the following subsections, the communication and computation components of pervasive inference are introduced. Then, the resource management approaches for the distribution are reviewed, and two use cases are described. Fig. \ref{IVA} presents the different branches of this section. \subsection{Profiling computation and communication models}\label{profiling} The computation and communication models provide the mechanisms to formulate the different operations and functions as an optimization problem, in order to facilitate the theoretical analysis of DNN distribution. More specifically, we discuss the computational requirements of different DNN tasks, the wireless communication latency between the different pervasive participants, and their energy consumption. \subsubsection{Computation models} Various parameters play a critical role in modeling the computational tasks of the different segments of a DNN network, including latency, generality, scalability, and context awareness. In this subsection, we describe the computation models of the two popular splitting strategies adopted in the literature, which are the per-layer and per-segment splitting. These models are presented after introducing some definitions. \paragraph{Overview and definitions}\mbox{}\\ \textbf{\indent Binary offloading}: Relatively simple or highly complex tasks that cannot be divided into sub-tasks and, because of resource constraints, have to be computed as a whole, either locally at the source device or at a remote server, are called binary offloading tasks. These tasks can be denoted by the three-tuple notation $T(K,\tau, c)$. This commonly used notation captures the size of the data to be classified, denoted by $K$, and the constraint $\tau$ (e.g., the completion deadline, the maximum energy, or the required accuracy). The computational load to execute the input data of the DNN task is modeled by the variable $c$, defined as the number of required multiplications \cite{b3}. Using these parameters not only depicts the key properties of the AI application, such as the memory and computation requirements, but also allows a better evaluation of the energy consumption, accuracy, and classification latency. Although binary offloading has been widely studied in the literature, we note that it is outside the scope of this survey, which covers the pervasiveness and distribution of AI tasks.
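Nevertheless, the binary decision itself is easy to illustrate. The sketch below compares the local execution of a task $T(K,\tau,c)$ against full offloading, assuming latency is the only constraint; all device and channel numbers are hypothetical:

\begin{verbatim}
def binary_offload(K_bits, c_mults, e_local, e_server, rate_bps,
                   result_bits=1e3):
    """Return ('local'|'offload', latency) for a task T(K, tau, c).

    K_bits: input size, c_mults: multiplications to execute the task,
    e_*: multiplication speeds (mult/s), rate_bps: uplink data rate.
    """
    t_local = c_mults / e_local
    t_offload = (K_bits / rate_bps            # send the raw input
                 + c_mults / e_server         # compute remotely
                 + result_bits / rate_bps)    # receive the result
    return ("local", t_local) if t_local <= t_offload else \
           ("offload", t_offload)

# Hypothetical: 1 MB image, 1 G multiplications, weak device, fast server
print(binary_offload(K_bits=8e6, c_mults=1e9, e_local=1e8,
                     e_server=1e11, rate_bps=1e7))
\end{verbatim}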
\newline \textbf{ \indent Partial offloading}: In practice, DNN classification is composed of multiple subtasks (e.g., layer execution, multiplication tasks, and feature map creation), which makes it possible to implement fine-grained (partial) computations. More specifically, the AI task can be split into two or more segments, where the first one can be computed at the source device and the others are offloaded to pervasive participants (either remote servers or neighboring devices). \begin{figure}[h!] \centering \includegraphics[scale=0.6]{Figures/parallelization_2.pdf} \caption{Inference parallelization: data and model parallelization} \label{parallelization} \end{figure} \textbf{Data parallelization:} The most manageable form of partial offloading is data parallelization, where duplicated offloaded segments are independent and can be arbitrarily divided into different groups and executed by different participants of the pervasive system, e.g., segments from different classification requests (as shown in Fig. \ref{parallelization} (a)). We highlight that the input data of parallel segments are independent and can be different or alike. \textbf{Model parallelization:} A more sophisticated partial offloading pattern is model parallelization, where the execution of a single task is split across multiple pervasive devices. Accordingly, the input data is also split and fed to the different parallel segments; their outputs are then merged again. In this offloading pattern, the dependency between the different tasks cannot be ignored, as it affects the execution of the inference. Particularly, the computation order of the different tasks (e.g., layers) cannot be determined arbitrarily, because the outputs of some segments serve as the inputs of others (as shown in Fig. \ref{parallelization} (b)). In this context, the inter-dependency between the different computational parts of the DNN model needs to be defined. It is worth mentioning that slightly different definitions of data and model parallelism are presented in the literature; in this paper, we opted for the definitions presented in \cite{robots}. \textbf{Typical dependencies:} Different DNN networks can be abstracted as task-call graphs. These graphs are generally represented as Directed Acyclic Graphs (DAGs), i.e., finite directed graphs with no cycles. Each DNN graph is defined as $G(V,E)$, where the set of vertices $V$ represents the different segments of the network, while the set of edges $E$ denotes their relations and dependencies. Typically, three types of dependencies contribute to determining the partitioning strategies, namely the sequential dependency, which covers conventional CNN networks with sequential layers and without any residual block (e.g., VGG \cite{VGG}); the parallel dependency, which covers the relation between different tasks in the same layer (e.g., feature map transformations); and the general dependency, which covers general DNN models (e.g., randomly wired CNNs \cite{Randomly_wired}). The different dependencies are depicted in Fig. \ref{partitioning}. The required computation workload and memory are specified for each vertex in $V$, and the amount of input and output data can be defined on the edges. \begin{figure}[!h] \centering \includegraphics[scale=0.8]{Figures/Partitionning_2.pdf} \caption{Typical topologies of DNNs and partitioning strategies.} \label{partitioning} \end{figure} Based on the presented dependencies, two partitioning strategies can be introduced, namely per-layer and per-segment partitioning (see Fig. \ref{partitioning}).
Per-layer partitioning consists of dividing the model into layers and allocating each set of layers to a pervasive participant (e.g., an IoT device or a remote server). On the other hand, per-segment partitioning denotes segmenting the DNN model into smaller tasks, such as feature map transformations, multiplication tasks, and even per-neuron segments. \textbf{Computation latency:} The primary and most common engine used by pervasive devices to perform local computation is the CPU. The performance of the CPU is assessed by the cycle frequency/clock speed $f$ \cite{Survey33} or the multiplication speed $e$ \cite{CNNDist}. In the literature, authors adopt the multiplication speed to characterize the performance of the devices executing the deep inference. In practice, $e$ is bounded by a maximum value $e_{max}$ reflecting the limited computation capacity of the device. Based on the model introduced for binary offloading, the computation latency of the inference task $T(K,\tau,c)$ is calculated as follows \cite{CNNDist}: \begin{equation} \begin{aligned} t^c=\frac{c}{e}. \end{aligned} \label{eq:1} \end{equation} Importantly, a higher computational capacity $e_{max}$ is desirable to minimize the computation latency, at the cost of energy consumption. As end-devices are energy constrained, the energy consumption of the local computation is considered a key measure for evaluating the inference efficiency. More specifically, a high amount of energy consumed by AI applications is not desirable for end-devices due to the incurred cost. Similarly, significant energy consumption at edge nodes (e.g., access points or MEC servers) increases the cost incurred by the service providers. \begin{table*}[] \footnotesize \tabcolsep=0.09cm \caption{Characteristics of different splitting strategies:\\ {\footnotesize A: After, B: Before, $N_{Fc}$: number of fully connected layers, $n$: number of input neurons (Fc), $m$: number of output neurons (Fc), $H_1, W_1, D_1$: dimensions of the input data (Conv), $H_2, W_2, D_2$: dimensions of the output data (Conv), $H_f, W_f, D_1$: dimensions of the filter (Conv), $k/D_2$: number of filters, $d_x,d_y$: dimensions of the spatial splitting (Conv), $N$: Number of participants, $k'_i$: Number of segments per participant.}} \label{tab:splitting} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}\textbf{Partitioning}\\ \textbf{strategy}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{$N^{o}$ of smallest} \\\textbf{segments}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Activation}\\ \textbf{task}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Inputs}\\ \textbf{per segment}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Filter weights}\\ \textbf{per device}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Outputs}\\ \textbf{per segment}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Computation}\\ \textbf{per segment}\end{tabular} &\begin{tabular}[c]{@{}c@{}}\textbf{Transmitted data}\\ \textbf{per layer}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Merging}\\ \textbf{strategy}\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Per-layer:\\ Fully-connected (Fc)\end{tabular} & $N_{Fc}$ & A & $n$ & \xmark & $m$& $n \times m$ &$n+m$ & Seq\\ \hline \begin{tabular}[c]{@{}c@{}}Per-segment: Output\\ splitting for Fc layers\end{tabular} & $\sum\limits^{N_{Fc}}_{i=1} m_i$ & B/A & $n$ & \xmark& 1 & $n$ & $n \times N +m$& Concat \\ \hline \begin{tabular}[c]{@{}c@{}}Per-segment: Input \\ splitting for Fc layers\end{tabular} & $\sum\limits_{i=1}^{N_{Fc}} n_i$& A & 1 & \xmark & $m$ & $m$ & $N \times m +n$& Sum \\ \hline \begin{tabular}[c]{@{}c@{}}Per-layer:\\ Convolution (Conv)\end{tabular} & $N_{Conv}$ & A & $H_1 \times W_1 \times D_1$ & \begin{tabular}[c]{@{}c@{}}$k \times D_1 \times$ \\ $(H_f \times W_f)$\end{tabular} & $H_2 \times W_2 \times k$ & \begin{tabular}[c]{@{}c@{}} $cp= D_1 \times $\\ $(W_f \times H_f) \times$ \\ $ k \times (W_2 \times H_2)$ \end{tabular} & \begin{tabular}[c]{@{}c@{}} $H_1 \times W_1 \times D_1 +$\\ $H_2 \times W_2 \times k$\end{tabular} & Seq \\ \hline \begin{tabular}[c]{@{}c@{}}Per-segment: channel\\ splitting for Conv\end{tabular} & \begin{tabular}[c]{@{}c@{}} $ \sum\limits_{i=1}^{N_{Conv}}k_i$ \end{tabular} & B/A & $H_1 \times W_1 \times D_1$ & \begin{tabular}[c]{@{}c@{}}$k'_i \times D_1 \times$ \\ $(H_f \times W_f)$\end{tabular} & $H_2 \times W_2$ & $\frac{cp}{k}$ &\begin{tabular}[c]{@{}c@{}} $(N \times H_1 \times W_1 \times D_1)$ \\$+ (k \times H_2 \times W_2)$\end{tabular} &Concat \\ \hline \begin{tabular}[c]{@{}c@{}}Per-segment: spatial\\ splitting for Conv\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\sum\limits_{i=1}^{ N_{Conv}}\frac{H_1^i\times W_1^i}{d^i_x \times d^i_y}$ \end{tabular}& B/A & \begin{tabular}[c]{@{}c@{}}$\frac{H_1 \times W_1 \times D_1}{d_x \times d_y} +$ \\ $padding$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$k \times D_1 \times$ \\ $(H_f \times W_f)$\end{tabular} & $\frac{H_2 \times W_2 \times k}{d_x \times d_y}$ & $\frac{cp}{d_x\times d_y}$&\begin{tabular}[c]{@{}c@{}} $H_1 \times W_1 \times D_1 +$\\ $H_2 \times W_2 \times k+$ \\ $N\times padding$\end{tabular}& Concat \\ \hline \begin{tabular}[c]{@{}c@{}}Per-segment: filter \\ splitting for Conv\end{tabular} & \begin{tabular}[c]{@{}c@{}} $\sum\limits_{i=1}^{N_{Conv}}D^i_1\times k_i$\end{tabular} & A & $H_1 \times W_1$ & \begin{tabular}[c]{@{}c@{}}$k'_i \times$ \\ $(H_f \times W_f)$\end{tabular} & $H_2 \times W_2$ &$ \frac{cp}{D_1 \times k}$ & \begin{tabular}[c]{@{}c@{}} $(D_1 \times H_1 \times W_1)+$ \\$ (N \times H_2 \times W_2 \times k)$\end{tabular}& \begin{tabular}[c]{@{}c@{}}Sum+ \\concat \end{tabular}\\ \hline \end{tabular} \end{table*} \textbf{Computation energy:} If the inference is executed at the data-generating source, the consumed energy is mainly associated with the task computation. In contrast, if the task is delegated to remote servers or to neighboring devices, the power consumption consists of the energy required to transfer the data between participants, the energy consumed by the computation of the different segments, and the energy required to await and receive the classification results. Suppose that the inference task/sub-task $T_i$ takes a time $t^c_i$ to be computed locally at a device participating in the pervasive inference, and let $P_i$ denote the power consumed by that device while processing the task. The energy consumed to accomplish an inference task $T_i$ locally at the computing device is then equal to \cite{energy-Aware-dist}: \begin{equation}\label{eq:2} \begin{aligned} e^{local}_i= t^c_i \times P_i. \end{aligned} \end{equation} Next, we profile the DNN partitioning strategies presented in the literature, first in terms of computation and memory requirements, and then in terms of the data communicated to offload the outputs of segments. The key idea of partitioning a DNN network is to evenly or unequally distribute the computational load and the data weights across the pervasive devices intended to participate in the inference process, while minimizing the classification latency.
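As a small illustration of this load bookkeeping, the following sketch computes per-layer multiplication counts for a toy network using the cost formulas derived next (eqs. (\ref{eq:4}) and (\ref{eq:5})); the layer shapes are illustrative:

\begin{verbatim}
def fc_cost(n, m):
    return n * m                              # eq. (4): n inputs, m outputs

def conv_cost(D1, Hf, Wf, D2, H2, W2):
    return D1 * (Wf * Hf) * D2 * (W2 * H2)    # eq. (5)

# Toy network: two conv layers followed by one fully-connected layer
layers = [
    ("conv1", conv_cost(3, 3, 3, 16, 112, 112)),
    ("conv2", conv_cost(16, 3, 3, 32, 56, 56)),
    ("fc",    fc_cost(32 * 56 * 56, 10)),
]
total = sum(c for _, c in layers)
for name, c in layers:
    print(f"{name}: {c/1e6:6.1f} M mults ({100*c/total:4.1f}% of total)")
\end{verbatim}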
A partitioning can be achieved by simply segmenting the model per layer or per set of layers (see Fig. \ref{parallelization} (a)), or by splitting the layers' tasks (see Fig. \ref{parallelization} (b)). Then, each part is mapped to a participant. \paragraph{Per-layer splitting} As previously mentioned, the computational load of each layer is measured as the number of multiplications required to accomplish the layer's goal \cite{b1}. \newline \textbf{\indent Fully-connected layers:} The computation requirement of a fully-connected layer can be calculated as follows: \begin{equation}\label{eq:4} \begin{aligned} c^{Fc}=n \times m, \end{aligned} \end{equation} where $n$ represents the number of input neurons and $m$ the number of output neurons. \newline \textbf{\indent Convolutional layers:} The computation load of a convolutional layer can be formulated as follows \cite{b1}: \begin{equation}\label{eq:5} \begin{aligned} c^{conv}=D_1 \times (W_f \times H_f) \times D_2 \times (W_2 \times H_2). \end{aligned} \end{equation} Recall that $D_1$ is the number of input channels of the convolutional layer, which is equal to the number of feature maps generated by the previous layer, $(W_f \times H_f)$ denotes the spatial size of the layer's filters, $D_2$ represents the number of filters, and $(W_2 \times H_2)$ represents the spatial size of the output feature maps (see Fig. \ref{Conv}). The computational load introduced by pooling and ReLU layers is commonly neglected, as these layers do not require any multiplication tasks \cite{b1}. We highlight that the per-layer splitting is motivated by the sequential dependency between layers. This dependency permits neither model parallelism nor latency minimization; instead, it allows resource-constrained devices to participate in the AI inference. \paragraph{Per-segment splitting} \mbox{}\\ \textbf{\indent Fully-connected layers:} We start by profiling the fully-connected layer partitioning. The computations of the different neurons $y_i$ of a fully-connected layer are independent. Hence, their executions can be distributed, and model parallelism can be applied to minimize the inference latency. Two methods are introduced in the literature (e.g., \cite{IoTInferencing,FullyDistribution}), namely the output and input partitioning, as shown in Fig. \ref{FC}. \begin{figure}[H] \centering \includegraphics[scale=0.28]{Figures/FC_a.pdf} \caption{Partitioning of fully connected layers.} \label{FC} \end{figure} \begin{itemize} \item \textit{Output splitting}: the computation of each neuron $y_i$ is performed on a single participant that receives all the input data $\{x_1,x_2,...,x_n\}$, as highlighted in Fig. \ref{FC} (a). Later, when the computation of all neurons is done, the results are merged by concatenating the outputs of all devices in the correct order. The activation function can be applied on each device or after the merging process. \item \textit{Input splitting}: each participant computes a part of all the output neurons $y_i$. Fig. \ref{FC} (b) illustrates an example where each device executes $\frac{1}{n}$ of the required multiplications. With this partitioning method, only a part of the input, $x_i$, is fed to each participant. Subsequently, when all participants accomplish their tasks, summations are performed to build the output neurons. However, in contrast to the output-splitting method, the activation function can only be applied after the merging process.
\end{itemize} \begin{figure*}[!h] \centering \includegraphics[scale=0.4]{Figures/Splitting2.pdf} \caption{Partitioning of convolutional layer: (a) is an output splitting, and (b) and (c) are input splittings.} \label{splitting} \end{figure*} \textbf{\indent Convolutional layers:} Next, we illustrate the different partitioning strategies of the convolutional layer. As described in Section \ref{CNN}, each filter is responsible for creating one of the feature maps of the output data (Fig. \ref{Conv}). Recall that the dimensions of the input data are $H_1 \times W_1 \times D_1$, the dimensions of the $k$ filters are $H_f \times W_f \times D_f$, and the dimensions of the output feature maps are $H_2 \times W_2 \times D_2$. We note that, by definition, $D_1$ is equal to $D_f$ and $k$ is equal to $D_2$. Furthermore, each filter contains $D_1 \times (H_f \times W_f)$ weights and performs $D_1 \times (H_f \times W_f)$ multiplications per output element. Similarly to the fully-connected layers, two partitioning strategies characterize the convolutional layer, namely input and output splitting. In this context, the output splitting corresponds to channel partitioning, whereas the input splitting covers the spatial and filter partitioning strategies (see Fig. \ref{splitting}). These splitting strategies are introduced and adopted by multiple recent works, including \cite{IoTInferencing,DeepThings,MoDNN}, whose resource management techniques we thoroughly review in the following section. \begin{itemize} \item \textit{Channel splitting}: each participant computes one or multiple non-overlapping output feature maps, which serve as input channels for the next layer. This implies that each device $i$ possesses only $1 \leq k'_i \leq k$ filters responsible for generating $k'_i$ feature maps, where $\sum_i k_i'=k$. In addition to the $k'_i$ filters, the entire input data is fed to each device to compute the different outputs. In this way, the filters' weights are distributed across participants, ($k'_i \times D_1 \times H_f \times W_f$) each, and the total number of multiplications is equal to $D_1 \times (H_f \times W_f) \times k'_i \times (W_2 \times H_2)$ per device. The channel partitioning strategy allows model parallelization and, consequently, inference acceleration. At the end, when all devices have finished their tasks, the different feature maps are concatenated depth-wise, with a complexity equal to $O(k)$. We emphasize that the activation function can be applied before merging at each device, or once at the concatenation device. Fig. \ref{splitting} (a) shows an example of channel partitioning. \item \textit{Spatial splitting}: this fine-grained splitting divides the input spatially, along the x or y axis, in order to jointly assemble the output data, as shown in Fig. \ref{splitting} (b). Let $d_x$ and $d_y$ define the split dimensions on the x-axis and y-axis, respectively. The input data is thus partitioned into segments of size ($d_x \times d_y$), and each group of segments can be transmitted to a device. Furthermore, each part allocated to a participant needs to be extended with overlapping elements from the neighboring parts, so that the convolution can be performed on the borders. Compared to channel splitting, in which all the input data is copied to all participants along with parts of the filters, the spatial splitting distributes only parts of the data, with all the filters, to each device.
This means that, in addition to the segment of input data, an amount of ($k \times D_1 \times W_f \times H_f$) weights should be transmitted to and stored at each device. Note that storing the filters is considered a one-time memory cost, as they will be used for all subsequent inferences. Also, the total number of multiplications is reduced per device, and each one executes only $\frac{1}{(d_x \times d_y)}$ of the computational load per segment. When all computations are done, the output data is concatenated spatially with a complexity of $O(\frac{H_2\times W_2}{d_x \times d_y})$, and the activation function can be applied before or after the merging process. Note that, for simplicity, we presented for the spatial splitting the case where filters do not apply any size reduction. \item \textit{Filter splitting}: in this splitting strategy, both the filters and the input data are split channel-wise, with $k'_i$ channels per participant $i$. Figure \ref{splitting} (c) illustrates the convolution of the input data by one filter in order to produce one feature map. In this example, the input channels and one of the filters are divided among 4 devices, which implies that each device stores only its assigned channels of the input data and of the filter, so the memory footprint is also divided. The computational load is reduced as well, such that each participant executes $k'_i \times (H_f \times W_f) \times (W_2 \times H_2)$ multiplications. In the end, all final outputs are summed to create one feature map, and the activation function can only be applied after the merging process. A concatenation task is performed when all feature maps are created. Note that the complexity of this partitioning is equal to the number of devices contributing to the distribution. \end{itemize} Table \ref{tab:splitting} summarizes the computation and memory characteristics of the different splitting strategies. In this table, we present the number of smallest segments per model; the input, output, and computation requirements of each small segment; the filter weights assigned to each device owning $k'_i$ segments; and the transmitted data per layer when there are $N$ participants. \subsubsection{Communication models} Latency is of paramount importance in AI applications. Hence, minimizing the communication delay and the data transmission by designing an efficient DNN splitting is the main focus of pervasive inference. \paragraph{Overview}\mbox{}\\ \textbf{ \indent Communication latency}: In the literature, the communication channels between different pervasive devices are abstracted as bit-pipes with either constant rates or random rates following a defined distribution. However, this simplified bit-pipe model is insufficient to capture the fundamental properties of wireless propagation. More specifically, wireless channels are characterized by different key aspects, including: (1) the multi-path fading caused by reflections from objects in the environment (e.g., walls, trees, and buildings); (2) the interference with other signals occupying the same spectrum due to the broadcast nature of wireless transmissions, which reduces their Signal-to-Interference-plus-Noise Ratios (SINRs) and increases the probability of errors; and (3) the bandwidth shortage, motivating the research community to exploit new spectrum resources, design new spectrum sharing and aggregation schemes, and propose new solutions (e.g., in-device caching and data compression).
Based on these characteristics, the communication/upload latency between two devices, whether resource-constrained devices or high-performance servers, can be expressed as follows: \begin{equation}\label{eq:6} \begin{aligned} t^u=\frac{K}{\rho_{i,j}}, \end{aligned} \end{equation} where $K$ is the size of the transmitted data and $\rho_{i,j}$ is the achievable data rate between two participants $i$ and $j$, defined as follows: \begin{equation}\label{eq:7} \begin{aligned} \rho_{i,j}=B_i \times \log_2(1+\Gamma_{i,j}), \end{aligned} \end{equation} where $B_i$ denotes the bandwidth of the device $i$. Furthermore, the average SINR of the link between $i$ and $j$, namely $\Gamma_{i,j}$, is given by: \begin{equation}\label{eq:8} \begin{aligned} \Gamma_{i,j}=\frac{P_{i,j}h_{i,j}}{\sum_{q, q\neq j} I_{q,j} + \sigma^2}, \end{aligned} \end{equation} where $P_{i,j}$ and $h_{i,j}$ are the transmit power and the channel gain between $i$ and $j$, $\sigma^2$ is the Gaussian noise power, and $\sum_{q, q\neq j} I_{q,j}$ is the total interference power at the receiver $j$ resulting from neighboring devices transmitting over the same channel. The total transmission latency $t^T$ of the entire inference depends on the type of dependency between the different layers of the model. This latency is defined in eq. (\ref{eq:9}) if the dependency is sequential (e.g., layers), and in eq. (\ref{eq:10}) if the dependency is parallel (e.g., feature maps). In case the dependency is general (e.g., randomly wired networks), we formulate the total latency as the sum of the sequential communications plus the maximum of the parallel transmissions. \begin{equation}\label{eq:9} \begin{aligned} t^T=\sum_{s=1}^{S} t^u_s. \end{aligned} \end{equation} \begin{equation}\label{eq:10} \begin{aligned} t^T= \max_{s \in \{1,\dots,S\}} t^u_s. \end{aligned} \end{equation} \newline \textbf{ \indent Communication energy}: The energy consumed to offload the inference sub-tasks to other participants consists of the energy spent on outward data transmissions and on receiving the classification results generated by the last segment of the task $T$. This energy is formulated as follows \cite{Survey33}\cite{energy-Aware-dist}: \begin{equation}\label{eq:3} \begin{aligned} e^{ofd}_i= t^u_i \cdot P_i+\sum_s\sum_k\sum_j \frac{K_s}{\rho_{k,j}} \cdot P_s \cdot X_{k,s}X_{j,s+1}, \end{aligned} \end{equation} where $t^u_i$ is the upload delay to send the original data/task $i$ to the first participant, $K_s$ is the output of segment $s$ (e.g., layers or feature maps), $\rho_{k,j}$ denotes the data rate of the communication, and $X_{k,s}$ is a binary variable indicating whether participant $k$ executes segment $s$. Using only its onboard battery and resources, the source-generating device may not be able to accomplish the inference task within the required delays and the energy constraint. In such a case, partitioning the task among neighboring devices or offloading the whole inference to remote servers are desirable solutions. \paragraph{Per-layer splitting} Per-layer partitioning is characterized by a simple dependency between the different segments and a higher data transmission per device. Indeed, the computation of one Fc layer per participant costs the system a total communication overhead equal to ($n+m$). Meanwhile, the allocation of a convolutional layer requires a transmission load equal to ($H_1 \times W_1 \times D_1) + (H_2 \times W_2 \times k$).
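The following small sketch chains eqs. (\ref{eq:6})--(\ref{eq:8}) to estimate the upload latency of an intermediate data segment; all channel values are illustrative:

\begin{verbatim}
import math

def upload_latency(K_bits, B_hz, P_h_signal, interference, noise_var):
    """Latency of eq. (6) using the rate of eq. (7) and the SINR of eq. (8)."""
    sinr = P_h_signal / (interference + noise_var)       # eq. (8)
    rate = B_hz * math.log2(1 + sinr)                    # eq. (7), bits/s
    return K_bits / rate                                 # eq. (6), seconds

# Hypothetical link: 1 MHz bandwidth, 20 dB SINR, 2 Mb of intermediate data
t_u = upload_latency(K_bits=2e6, B_hz=1e6,
                     P_h_signal=1.0, interference=0.008, noise_var=0.002)
print(f"upload latency: {t_u:.2f} s")
\end{verbatim}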
\paragraph{Per-segment splitting} The per-segment partitioning requires a higher total transmission load, with less computation and memory footprint per device. In other words, this type of partitioning trades communication for memory. More details are given in Table \ref{tab:splitting}, where the output and input splitting of the fully-connected layers incur total communication overheads of $n \times (N-1)$ and $m \times (N-1)$, respectively, compared to the per-layer distribution. Hence, depending on the input and output sizes, namely $n$ and $m$, the optimal partitioning strategy can be selected. Regarding the convolutional layers, the channel splitting has an overhead of $(N-1) \times H_1 \times W_1 \times D_1$, since a copy of the entire input data needs to be broadcast to all participants; the spatial splitting pays a padding overhead equal to $N \times padding$; and the filter splitting has an overhead of $(N-1) \times H_2 \times W_2 \times k$, incurred in the merging process. \begin{figure*}[!h] \centering \includegraphics[scale=0.4]{Figures/distribution2.pdf} \caption{Resource management for distributed inference.} \label{distribution} \end{figure*} \subsubsection{Lessons learned} The main lessons acquired from the review of the splitting strategies are: \begin{itemize} \item The performance of model parallelism is always better than that of data parallelism in terms of latency minimization, as it allows computing multiple sub-tasks simultaneously. Meanwhile, data parallelism pays the high costs of merging and of transmitting the same inputs, either for fault-tolerance purposes or to handle multiple concurrent requests. \item The choice of the parallelism mode highly depends on the partitioning strategy and the dependency between the different segments. For example, in the per-layer splitting with a sequential dependency, model parallelism cannot be applied to compute different fragments. On the other hand, the general and parallel dependencies pave the way for distributing concurrent segments. \item Data parallelism is highly important for AI applications with a high load of inference requests, such as 24/7 monitoring systems and VR/AR applications. In such scenarios, classifications and feature learning are required at short intervals of time, sometimes measured in terms of the “motion-to-photons” latency. Generally, source devices do not have sufficient resources to compute this huge load of inferences. In this case, distributing the requests among neighboring devices and parallelizing their computations contributes to minimizing the queuing time. \item Understanding the characteristics of the pervasive system is compulsory for selecting the partitioning strategy. More specifically, the per-layer distribution is more adequate for systems with a lower number of participants and higher pervasive capacities. For example, VGG19 has 19 layers and accordingly needs a maximum of 19 participants. More importantly, these devices are required to be able to accommodate the computation demand of convolutional layers. Meanwhile, opting for fine-grained partitioning results in small fragments that fit in resource-limited devices, such as sensors. However, a high number of sensors (e.g., $\sum_{i}^{N_{conv}}D_1^i \times k_i$ segments using filter splitting) should be involved to accomplish the inference.
\item Choosing the most suitable per-segment partitioning highly depends on the properties of the DNN network, including the channel sizes, the number of filters, the size of the feature maps, and the number of neurons. Particularly, for FC splitting, $m$ and $n$ are the decisive variables for choosing input or output partitioning. For convolutional layers, the sizes of the channels and filters and the capacities of the participants are the decisive parameters for selecting the strategy. In terms of memory requirements, the channel splitting requires copying the whole input channels to all devices along with a part of the filters. Meanwhile, the spatial splitting copies all the filters and a part of the data, whereas the filter splitting needs only a part of the channels and filters. In terms of transmission load, the spatial splitting has less output data per segment compared to the channel and filter strategies. Finally, the channel splitting has a higher computational load; still, it incurs less dependency between segments. \end{itemize} \subsection{Resource management for distributed inference} The joint management of computational and transmission resources plays a key role in achieving low inference latency and efficient energy consumption. In this section, we conduct a comprehensive review of the existing literature on resource management for deep inference distribution and segment allocation on pervasive systems. We start by discussing the remote collaboration, which consists of the cooperation between the data source and remote servers to achieve the DNN inference. In this part, we determine the key design methodologies and considerations (e.g., partitioning strategies and number of split points) that shorten the classification delays. Subsequently, a more complex collaboration, namely localized collaboration, is examined, where multiple neighboring devices are coordinated to use both computational and wireless resources and accomplish the inference tasks with optimized energy, delays, and data sharing. \subsubsection{Remote collaboration} The remote collaboration encompasses two approaches, the binary and the partial offloading defined in the previous section. The binary offloading consists of delegating the DNN task from a single data-generating device to a single powerful remote entity (e.g., an edge or cloud server), with the objective of optimizing the classification latency, accuracy, energy, and cost (see Fig. \ref{distribution} (a)). The decision is whether or not to offload the entire DNN, depending on the hardware capability of the device, the size of the data, the network quality, and the DNN model, among other factors. Reference papers covering binary offloading of deep learning include DeepDecision \cite{DeepDecision,offloading} and MCDNN \cite{MCDNN}. The authors of these papers based their studies on empirical measurements of the trade-offs among the aforementioned parameters. Binary offloading has been thoroughly investigated in the literature for different contexts. However, DNN offloading has a particular characteristic that distinguishes it from other networking tasks, namely the freedom to choose the type, the parameters, and the depth of the neural network according to the available resources. As the scope of this survey is pervasive AI, we focus on partial offloading, which covers the per-layer distribution with one or multiple split points, along with the per-segment distribution.
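Before turning to partial offloading, the binary decision described above can be illustrated by the following sketch, which compares local inference against full offloading; the decision rule and all latency inputs are simplifying assumptions, not the actual policies of DeepDecision or MCDNN.
\begin{verbatim}
def binary_offload(local_latency_s, input_bits, result_bits,
                   rate_bps, server_latency_s):
    # Offload the whole DNN only if sending the raw input, running
    # the model remotely, and returning the result beats local inference.
    remote_total = (input_bits / rate_bps) + server_latency_s \
                   + (result_bits / rate_bps)
    return "offload" if remote_total < local_latency_s else "local"

decision = binary_offload(local_latency_s=0.8, input_bits=4e6,
                          result_bits=8e3, rate_bps=5e6,
                          server_latency_s=0.05)
\end{verbatim}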
\paragraph{Per-layer distribution - one split point} The partial offloading leverages the unique structure of the deep model, particularly its layers, to allow collaborative inference between the source device and the remote servers. More specifically, in such an offloading approach, some layers are executed in the data-generating device whereas the rest are computed by the cloud or the edge servers, as shown in Fig. \ref{distribution} (b). In this way, latency is potentially reduced owing to the high computing cycles of the powerful remote entities, provided that the latency of communicating the intermediate data resulting from the DNN partitioning is small enough to leave an overall classification-time benefit. The key idea behind the per-layer partitioning is that, after the shallow layers, the size of the intermediate data is relatively small compared to the original raw data thanks to the successive filters. This can speed up the transmission over the network, which motivates partitioning after the initial layers. Fig. \ref{AlexNet} shows the size of the data transmitted between different layers of AlexNet trained on $224\times224$ RGB images with the parameters defined in \cite{alextnet_code}. It is clear that the intermediate data size decreases as the network goes deeper. \begin{figure}[!h] \centering \includegraphics[scale=0.5]{Figures/AlexNet.pdf} \caption{The size of data transmitted between different layers of the AlexNet network.} \label{AlexNet} \end{figure} Neurosurgeon \cite{Neurosurgeon} is one of the first works that investigated layer-wise partitioning, where the split point is decided intelligently depending on the network conditions. Particularly, the authors deeply examined the status quo of cloud and in-device inference and confirmed that the wireless network is the bottleneck of the cloud approach and that the mobile device can outperform the cloud servers only when it holds a GPU unit. As a next step, the authors investigated the layer-level performance, in terms of computing and output data size, of multiple state-of-the-art DNNs over multiple types of devices and wireless networks and concluded that layers have significantly different characteristics. Based on the computation and data transmission latency of the DNN layers, the optimal partition points that minimize the energy consumption and end-to-end latency are identified. Finally, after collecting these data, Neurosurgeon is trained to predict the power consumption and latency based on the layer type and network configuration and dynamically partitions the model between the data source and the cloud server. However, while DNN splitting significantly minimizes the inference latency by combining the computational resources of the mobile device and the remote server, this strategy is constrained by the characteristics of the intermediate layers, which can still generate high-sized data, as is the case for VGG16 illustrated in Fig. \ref{VGG16}. \begin{figure}[!h] \centering \includegraphics[scale=0.5]{Figures/VGG16.pdf} \caption{The transmitted data size between different layers of the VGG16 network.} \label{VGG16} \end{figure} \newline The work in \cite{li2018edge} followed the same steps as the previous work, including testing the performance of different layers and training a regression model that predicts the optimal split point.
Moreover, to further reduce the latency and tackle the problem of large intermediate data, the authors proposed to combine the early-exit strategy, namely BranchyNet \cite{BranchyNet}, with their splitting approach. The objective is to execute only a few layers and exit the model without resorting to the cloud, if the accuracy is satisfactory. In this way, the model inference is accelerated, at the cost of some classification accuracy. We note that BranchyNet is a model trained to tailor the right size of the network with minimum latency and higher accuracy. Accordingly, both models cooperate to select the optimal exit and split points. The authors extended this work by replacing both trained models with a reinforcement learning strategy \cite{Boomerang}, namely Boomerang. This RL approach offers a more flexible and adaptive solution for real-time networks and presents a less complex and closer-to-optimal selection of split and exit points. The early-exit strategy is also proposed along with layer-wise partitioning by the ADDA approach \cite{ADDA}, where the authors implemented the first layers on the source device and encouraged placing the exit point before the split point, so as to use only local computing and eliminate the transmission time. Similarly, the authors in \cite{energy-Aware-dist2} formulated the problem of merging the exit point selection and the splitting strategy, while aiming to minimize the transmission energy instead of focusing on latency. In addition to using the early-exit to accelerate the inference, other efforts adopted compression combined with partitioning to reduce the data shared between collaborating entities. Authors in \cite{featureEncoding} introduced a distribution approach with feature space encoding, where the edge device computes up to an intermediate layer, compresses the output features (lossless or lossy), and delegates the rest of the inference on the compressed data to a host device in order to enhance the bandwidth utilization. To maintain high accuracy, the authors proposed to re-train the DNN with the encoded features on the host side. The works in \cite{JALAD, AutoTuning} also suggested compressing the intermediate data through quantization, aiming at reducing the transmission latency between edge and cloud entities. The authors examined the trade-off between the output data quantization and the model accuracy for different partitioning scenarios. Then, they accordingly designed a model to predict the edge and cloud latencies and the communication overhead. Finally, they formulated an optimization problem to find the optimal split layer constrained by the accuracy requirements. To make the solution adaptive at runtime, an RL-based channel-wise feature compression, namely JALAD, is introduced by the authors in \cite{JALAD}. Pruning is another compression technique, proposed in \cite{2stepsPruning} to be combined with the partitioning strategy. The authors introduced a 2-step pruning framework, where the first step mainly focuses on reducing the computation workload and the second one handles the removal of unimportant features transmitted between collaborating entities, which results in lower computation and offloading latency. This can be done by pruning the input channels, as their height, length, and number directly impact the size of the output data and the computing requirements, as we illustrated in Table \ref{tab:splitting}.
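The split-point search at the heart of these per-layer schemes can be summarized by the following minimal sketch, in the spirit of Neurosurgeon; the per-layer latency profiles are assumed to be given (e.g., measured offline), and the exhaustive search over the candidate cuts is a simplification of the learned predictors used in the cited works.
\begin{verbatim}
def best_split(device_lat, server_lat, out_bits, input_bits, rate_bps):
    # Layers [0..s) run on the device, layers [s..L) on the server.
    # s = 0 offloads everything (the raw input is transmitted);
    # s = L keeps the whole model on the device (nothing is sent).
    L = len(device_lat)
    best_s, best_t = 0, float("inf")
    for s in range(L + 1):
        tx_bits = input_bits if s == 0 else out_bits[s - 1]
        tx = tx_bits / rate_bps if s < L else 0.0
        total = sum(device_lat[:s]) + tx + sum(server_lat[s:])
        if total < best_t:
            best_s, best_t = s, total
    return best_s, best_t

# Hypothetical 5-layer profile: slow device, fast server, and an
# intermediate data size that shrinks with depth (cf. AlexNet above).
s, t = best_split(device_lat=[0.05, 0.04, 0.03, 0.02, 0.01],
                  server_lat=[0.005] * 5,
                  out_bits=[6e6, 2e6, 8e5, 3e5, 8e3],
                  input_bits=1.2e7, rate_bps=5e6)
\end{verbatim}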
\paragraph{Per-layer distribution - back and forth, and hierarchical distribution} Solely offloading the deep learning computation to the cloud can violate the latency constraints of AI applications requiring real-time and prompt intervention. Meanwhile, using only the edge nodes or IoT devices can deprive the system of powerful computing resources and potentially increase the processing time. Hence, a judicious selection of multiple cuts and a distribution across different resources, i.e., IoT device -- edge server -- cloud, contributes to establishing a trade-off between minimizing the transmission time and exploiting the powerful servers. Additionally, the layers of a DNN model are not always stacked in a sequential dependency. More specifically, layers can be arranged in a general dependency, as shown in Fig. \ref{partitioning} (c), where some of them can be executed in parallel or do not depend on the output of the previous ones. In this case, adopting an optimized \textit{back and forth} distribution strategy, where the end-device and the remote servers parallelize the computation of the layers and merge the outputs, can be beneficial for the inference latency. Authors in \cite{DNNSurgery} designed a Dynamic Adaptive DNN Surgery (DADS) scheme that optimally distributes complex structured deep models, represented by DAG graphs, under variable network conditions. In case the load of requests is light, the min-cut problem \cite{min-cut} is applied to minimize the overall delay to process one frame of the DNN structure. When the load condition is heavy, scheduling the computation of multiple requests (data parallelization) is envisaged using the 3-approximation-ratio algorithm \cite{3-appro} that maximizes the parallelization of frames from different requests. Complex DNN structures were also the focus of \cite{joinDNN}, where the authors used the shortest path problem to formulate the allocation of the different frames of the DNN \textit{back and forth} between the cloud and the end-device. The path, in this case, is defined by the latency or energy of the end-to-end inference. On the other hand, the \textit{hierarchical architecture} for sequential structures is very popular as a one-way distribution solution to establish a trade-off between transmission latency and computation delay (see Fig. \ref{distribution} (c)). The papers in \cite{HierarDis, HierarDistGlobecom, AR} proposed to divide the trained DNN over a hierarchical distribution comprising ``IoT-edge-cloud'' resources. Furthermore, they leveraged the state-of-the-art work BranchyNet \cite{BranchyNet} to exit the inference early if the accuracy is satisfactory. In this way, fast, private, and localized inference of only the shallow layers becomes possible at the end and edge devices, and an offloading to the cloud is only performed when additional processing is required. Hierarchical distribution can also be combined with compression strategies to reduce the size of the data to be transmitted and accordingly minimize the communication delay and the time of the entire inference, such as using the encoding technique as done in \cite{IoTDNN}. Authors in \cite{HDDNN,FailureDis} also opted for hierarchical offloading, while focusing primarily on the fault-tolerance of the shared data.
Particularly, authors in \cite{HDDNN} considered two fault-tolerance methods, namely reassigning and monitoring, where the first consists of assigning all layer tasks at least once and then reassigning the unfinished tasks to all participants regardless of their current state. This method generates a considerable communication and latency overhead related to allocating redundant tasks, particularly to devices with limited capacities. Hence, a second strategy is designed to monitor the availability of devices before the re-assignment. Meanwhile, the work in \cite{FailureDis} proposed to add skip blocks \cite{ResNet} to the DNN model and include at least one block in each partition, to enhance the robustness of the system in case the previous layer connection fails. \begin{table*}[] \centering \footnotesize \tabcolsep=0.09cm \begin{threeparttable} \caption{Performance of distribution strategies compared to: \protect \begin{tikzpicture} \protect\filldraw[color=black!60, fill=white!5, thick](-1,0) circle (0.15); \protect\end{tikzpicture} cloud
only; \protect\begin{tikzpicture} \protect\filldraw[color=black!60, fill={rgb,255:red,218; green,232; blue,252}, thick](-1,0) circle (0.15); \protect\end{tikzpicture} on-device only;\\ \protect\begin{tikzpicture} \protect\filldraw[color=black!60, fill={rgb,255:red,213;green,232;blue,212}, thick](-1,0) circle (0.15); \end{tikzpicture} edge-server only. } \label{tab:performance} \begin{tabular}{|l|l|l|l|l|l|l|} \hline \begin{tabular}[c]{@{}l@{}}\textbf{Refs}\end{tabular} & \textbf{Latency} & \textbf{Bandwidth} & \textbf{Energy} & \textbf{Computation/memory} & \textbf{Throughput} & \begin{tabular}[c]{@{}l@{}}\textbf{Inference}\\ \textbf{rate}\end{tabular} \\ \hline Neurosurgeon \cite{Neurosurgeon} & 3.1 $\times$ $\rightarrow$ 40.7 $\times$ & \xmark & 59.5\% $\rightarrow$ 94.7\% & \xmark & 1.5 $\times$ $\rightarrow$ 6.7 $\times$ & \xmark \\ \hline \begin{tabular}[c]{@{}l@{}}Edgent \cite{li2018edge}\\ Boomerang \cite{Boomerang}\end{tabular} & \cellcolor[HTML]{DAE8FC}2.3 $\times$ & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark \\ \hline & 1.2 $\times$ $\rightarrow$ 2 $\times$ & \xmark & \xmark & \xmark & \xmark & \xmark \\ \cline{2-7} \multirow{-2}{*}{ADDA \cite{ADDA}} & \cellcolor[HTML]{DAE8FC}1.7 $\times$ $\rightarrow$ 3 $\times$ & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark \\ \hline & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}15.3 $\times$ & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}16.5 $\times$ & \cellcolor[HTML]{DAE8FC}\xmark \\ \cline{2-7} \multirow{-2}{*}{\cite{featureEncoding}} & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}2.3 $\times$ & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}2.5 $\times$ & \cellcolor[HTML]{D5E8D4}\xmark \\ \hline JALAD \cite{JALAD} & 1.1 $\times$ $\rightarrow$ 11.7 $\times$ & \xmark & \xmark & \xmark & \xmark & \xmark \\ \hline JointDNN \cite{joinDNN} & 3 $\times$ & \xmark & 7 $\times$ & \xmark & \xmark & \xmark \\ \hline & 8.08 $\times$ & \xmark & \xmark & 14.01 $\times$ & \xmark & \xmark \\ \cline{2-7} \multirow{-2}{*}{DADS \cite{DNNSurgery}} & \cellcolor[HTML]{D5E8D4}6.45 $\times$ & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}8.31 $\times$ & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}\xmark \\ \hline Auto tuning \cite{AutoTuning} & 1.13 $\times$ $\rightarrow$ 1.7 $\times$ & \xmark & \xmark & 85\% $\rightarrow$ 99\% & \xmark & \xmark \\ \hline DDNN \cite{HierarDis} & \xmark & 20 $\times$ & \xmark & \xmark & \xmark & \xmark \\ \hline & 2 $\times$ & \xmark & \xmark & \xmark & \xmark & \xmark \\ \cline{2-7} \multirow{-2}{*}{COLT-OPE \cite{HierarDistGlobecom}} & \cellcolor[HTML]{DAE8FC}4 $\times$ & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark \\ \hline & 48.11\% & \xmark & \xmark & \xmark & \xmark & \xmark \\ \cline{2-7} \multirow{-2}{*}{\cite{AR}} & \cellcolor[HTML]{DAE8FC}39.75\% & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}70\% & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark \\ \hline DINA \cite{acceleration} & \cellcolor[HTML]{D5E8D4}2.6 $\times$ $\rightarrow$ 4.2
$\times$ & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}\xmark \\ \hline MoDNN \cite{MoDNN} & \cellcolor[HTML]{DAE8FC}2.17 $\times$ $\rightarrow$ 4.28 $\times$ & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark \\ \hline AAIoT \cite{AAIoT} & \cellcolor[HTML]{D5E8D4}1 $\times$ $\rightarrow$ 10 $\times$ & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}\xmark & \cellcolor[HTML]{D5E8D4}\xmark \\ \hline DeepWear \cite{DeepWear} & \cellcolor[HTML]{DAE8FC}5.08 $\times$ $\rightarrow$ 23 $\times$ & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}53.5\% $\rightarrow$ 85.5\% & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark \\ \hline \cite{IoTInferencing} & \cellcolor[HTML]{DAE8FC}2 $\times$ $\rightarrow$ 6 $\times$ & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark \\ \hline \cite{MDPIDist} & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}1.7 $\times$ $\rightarrow$ 4.69 $\times$ \\ \hline DeepThings \cite{DeepThings} & \cellcolor[HTML]{DAE8FC}0.6 $\times$ $\rightarrow$ 3 $\times$ & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}68\% & \cellcolor[HTML]{DAE8FC}\xmark & \cellcolor[HTML]{DAE8FC}\xmark \\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item - The results in the table present the enhancement of the proposed strategies compared to the baseline approaches. \item - $\times$ stands for the number of times the metric is improved, i.e., how many times the latency, bandwidth usage, energy, computation, and memory are reduced, and how many times the throughput and inference rate are increased compared to the baselines. \end{tablenotes} \end{threeparttable} \end{table*} \paragraph{Per-segment distribution} The per-segment partitioning is generally more popular when distributing the inference among IoT devices with limited capacities, as some devices, such as sensors, cannot execute an entire layer of a deep network. Furthermore, per-segment partitioning creates a strong dependency between devices, and consequently, multiple communications with remote servers are required. Still, a few works adopted this strategy for inference collaboration between end devices and edge/fog servers, including \cite{acceleration}. Authors in \cite{acceleration} proposed a spatial splitting (see Fig. \ref{splitting} (b)) that minimizes the communication overhead per device. Then, a distribution solution is designed based on matching theory \cite{matching_theory} and the swap matching problem \cite{swap_matching} to jointly accomplish the DNN inference. Matching theory is a mathematical framework from economics that models interactions between two sets of selfish agents, where each agent competes to match with agents of the other set. The objective was to reduce the total computation time while increasing the utilization of the resources of the two sets of IoT devices and fog nodes.
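To make the per-segment spatial splitting concrete, the following sketch partitions an input feature map row-wise among several participants, with a halo of overlapping rows so that each participant can convolve its tile without requesting border data; the tile layout and halo width are illustrative assumptions rather than the exact scheme of \cite{acceleration}.
\begin{verbatim}
import numpy as np

def spatial_split(feature_map, n_parts, halo):
    # Row-wise split of an (H, W, C) input; each tile carries `halo`
    # extra rows on its inner borders so a convolution can run locally.
    H = feature_map.shape[0]
    bounds = np.linspace(0, H, n_parts + 1, dtype=int)
    return [feature_map[max(0, lo - halo):min(H, hi + halo)]
            for lo, hi in zip(bounds[:-1], bounds[1:])]

x = np.random.rand(224, 224, 3)              # hypothetical input
tiles = spatial_split(x, n_parts=4, halo=1)  # halo=1 suits 3x3 kernels
\end{verbatim}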
\subsubsection{Localized collaboration} Another line of work considers the distribution of the DNN computation across multiple edge participants, as shown in Fig. \ref{distribution} (d). These participants are neighboring nodes that co-exist in the same vicinity, e.g., IoT devices or fog nodes. The model distribution over neighboring devices can be classified into two types: the per-layer distribution, where each participant performs the computation of one layer or more, and the per-segment allocation, where smaller segments of the model are allocated to resource-limited devices. \paragraph{Per-layer distribution} The layer-wise partitioning can itself be classified into two categories: the single split point strategy, where only two participants are involved, and the multiple split points strategy, where two or more devices collaborate. For example, the DeepWear \cite{DeepWear} approach splits the DNN into two sub-models that are separately computed on a wearable (e.g., a smartwatch) and a handheld device. First, the authors conducted in-depth measurements on different devices and for multiple models to demystify the performance of wearable-side DL and study the potential gain of partial offloading. The derived conclusions are incorporated into a lightweight online scheduling algorithm, based on a prediction model, that judiciously determines how and when to offload in order to minimize the latency and energy consumption of the inference. On the other hand, authors in \cite{CNNDist} proposed a methodology for the optimal placement of CNN layers among multiple IoT devices, constrained by their computation and memory capacities. This methodology minimizes the latency of decision-making, measured as the total of the processing times and the transmissions between participants. Furthermore, the proposed technique can be applied both to CNNs in which the number of layers is fixed and to CNNs with an early-exit. Similarly, authors in \cite{AAIoT} proposed a CNN multi-splitting approach, namely AAIoT, to accelerate the inference process. Unlike the above-mentioned efforts, AAIoT deploys the layers of the neural network on a multi-layer IoT architecture. More specifically, the lowest-layer device is the data source, and the higher-layer devices have increasingly powerful capacities. Offloading the computation to higher participants implies sacrificing transmission latency to reduce the computation time, whereas delegating the computation to lower participants does not bring any benefit to the system. An optimal solution and an online algorithm based on dynamic programming are designed to derive the best offloading strategy across the architecture. Beyond capacity-constrained IoT devices, the distribution of the inference process over cloudlets in a 5G-enabled MEC system is the focus of the work in \cite{energy-Aware-dist}, where the authors proposed to minimize the energy consumption, while meeting the stringent delay requirements of AI applications, using an RL technique. \paragraph{Per-segment distribution} The per-segment distribution is defined as allocating fine-grained partitions of the DNN to lightweight devices, such as Android devices or Raspberry Pis. The partitioning strategy is based on the system configuration and the pervasive network characteristics, including the memory, computation, and communication capabilities of the IoT devices as well as their number. The segmentation of the DNN models varies from neuron partitioning to channel, spatial, and filter splitting, as discussed in section \ref{profiling}.
For example, the work in \cite{MoDNN} opted for the spatial splitting (see Fig. \ref{splitting} (b)), where the input and output feature maps are partitioned into a grid and distributed among lightweight devices. The authors proposed to allocate to each participant the cells along the longer edge of the input matrix (rows or columns), in order to reduce the padding overhead produced by the spatial splitting. The different segments are distributed to the IoT devices according to load-balancing principles using the MapReduce model. The same rows/columns partitioning is proposed in \cite{CCNN}, namely the data-lookahead strategy. More specifically, each block contains data from other blocks within the same layer, such that its connected blocks in subsequent layers can be executed independently without requesting intermediate/padding data from other participants. The spatial splitting is also adopted in \cite{DeepThings}, where the authors proposed a Fused Tile Partitioning (FTP) method. This method fuses the layers and divides them into a grid; then, cells connected across layers are assigned to one participant, which largely reduces the communication overhead and the memory footprint. The previous works introduced homogeneous partitioning, where all segments are similar. Unlike these strategies, authors in \cite{MDPIDist, MDPIDist2} proposed a heterogeneous partitioning of the input data to suit IoT systems containing devices with different capabilities, ranging from small participants that can fit only a few cells to high-capacity participants suitable for layer computation. For the same purpose, authors in \cite{EDDL} jointly conducted per-layer and per-segment partitioning, where the neurons and links of the network are modeled as a DAG. In this work, grouped convolution techniques \cite{grouped_conv} are used to boost the model parallelization of the different nodes of the graph. The papers in \cite{FullyDistribution, MusicChair,MusicChair2, IoTInferencing} studied different partitioning strategies for the convolutional layers (channel, spatial, and filter splitting) and the fully-connected layers (output and input splitting). They emphasized that the optimal splitting depends greatly on the parameters of the CNN and that the inference speedup depends on the number of tasks to be parallelized, which is related to the adopted splitting method. Hence, a single partitioning approach cannot bring benefits to all types of CNNs. Based on these conclusions, a dynamic heuristic is designed to select the most adequate splitting and model parallelism for different inference scenarios. Table \ref{tab:performance} shows the performance of these techniques in terms of latency, bandwidth, energy, computation, memory, and throughput, whereas Table \ref{tab:my-table} presents a comparison between the different distributed inference techniques introduced in this section. \subsubsection{Lessons learned} The lessons acquired from the literature review covering the DNN distribution can be summarized as follows: \begin{itemize} \item In per-layer strategies, selecting the split points depends on multiple parameters: the capacity of the end device, which constrains the length of the first segment; the characteristics of the network (e.g., Wi-Fi, 4G, or LTE), which impact the transmission time; and the DNN topology, which determines the intermediate data size.
\item Deep neural networks whose pooling layers provide little size reduction, or whose fully-connected layers have similar sizes, undergo only small variations in per-layer latency and data size. In this case, remote collaboration is not beneficial in terms of data transmission. Hence, compression (e.g., quantization, pruning, and encoding) can be a good solution to benefit from the remote capacity with minimal communication overhead. \item Recently, many efforts have focused on localized inference through per-segment distribution, which makes it possible to involve resource-limited devices and avoid transmissions to remote servers. These works targeted model parallelization and aimed to maximize the concurrent computation of different segments within the same request. However, fewer works covered data parallelization and real-time adaptability to the dynamics of requests. In particular, the inference load highly impacts how the segments should be distributed to fit the capacities of participants. \item Adopting a mixed partitioning strategy is advantageous for heterogeneous systems composed of high- and low-capacity devices and multiple DNNs, as it makes it possible to fully utilize the pervasive capacities while minimizing the dependency and data transmission between devices. \end{itemize} \begin{table*}[] \centering \footnotesize \tabcolsep=0.09cm \caption{Comparison between distributed inference techniques.} \label{tab:my-table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{Refs} & \textbf{Year} & \textbf{End-Device} & \begin{tabular}[c]{@{}c@{}}$N^{o}$ \textbf{. of}\\ \textbf{end}\\ \textbf{devices}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Localized} \\ \textbf{inference}\end{tabular} & \textbf{Context} & \begin{tabular}[c]{@{}c@{}}\textbf{Real-time}\\\textbf{processing}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Partitioning}\\ \textbf{mechanism}\end{tabular} & \begin{tabular}[c]{@{}c@{}}$N^{o}$\textbf{.
of}\\ \textbf{partitions}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Model or data}\\ \textbf{parallelism}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Other}\\ \textbf{techniques}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Runtime} \\ \textbf{adaptability}\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Neurosurgeon\\ \cite{Neurosurgeon}\end{tabular} & 2017 & Tegra TK1 & 1 & \xmark & \xmark & \xmark & Per-layer & 1 & \xmark & \xmark & \xmark \\ \hline DDNN \cite{HierarDis} & 2017 & \xmark & Many & \xmark & \xmark & \cmark & Per-layer & Many & Data & Early exit & \xmark \\ \hline MoDNN \cite{MoDNN} & 2017 & LG Nexus 5 & 4 & \cmark & \xmark & \xmark & Per-segment & Many & Model & \xmark & \xmark \\ \hline Edgent \cite{li2018edge} & 2018 & Raspberry Pi 3 & 1 & \xmark & \xmark & \cmark & Per-layer & 1 & \xmark & Early exit & \xmark \\ \hline \cite{featureEncoding} & 2018 & \xmark & 1 & \xmark & \xmark & \xmark & Per-layer & 1 & \xmark & Compression & \xmark \\ \hline \begin{tabular}[c]{@{}c@{}}DeepThings \\ \cite{DeepThings}\end{tabular} & 2018 & Raspberry Pi 3 & Many & \cmark & \xmark & \cmark & Per-segment & Many & Model & \xmark & \xmark \\ \hline \begin{tabular}[c]{@{}c@{}}Collaborative\\ robots \cite{robots}\end{tabular} & 2018 & Raspberry Pi & 12 & \cmark & \begin{tabular}[c]{@{}c@{}}Robots and\\ image \\ recognition\end{tabular} & \cmark & Per-segment & Many & Both & \xmark & \cmark \\ \hline \begin{tabular}[c]{@{}c@{}}Musical Chair\\ \cite{MusicChair,MusicChair2}\end{tabular} & 2018 & Raspberry Pi & Many & \cmark & \begin{tabular}[c]{@{}c@{}}object/action \\ recognition\end{tabular} & \cmark & Per-segment & Many & Both & \xmark & \cmark \\ \hline HDDNN \cite{HDDNN} & 2018 & \xmark & Many & \xmark & \xmark & \cmark & Per-layer & Many & Data & Encryption & \xmark \\ \hline \begin{tabular}[c]{@{}c@{}}Auto tuning\\ \cite{AutoTuning}\end{tabular} & 2018 & Jetson TX2 & Many & \xmark & \xmark & \xmark & Per-layer & Many & \xmark & Quantization & \xmark \\ \hline JALAD \cite{JALAD} & 2018 & \begin{tabular}[c]{@{}c@{}}GPU \\ Quadro K620 \end{tabular}& 1 & \xmark & \xmark & \xmark & Per-layer & 1 & \xmark & Quantization & \cmark \\ \hline \begin{tabular}[c]{@{}c@{}} KLP \\ \cite{MDPIDist, MDPIDist2}\end{tabular} & \begin{tabular}[c]{@{}c@{}}2018\\ 2019\end{tabular} & STM32F469 & Many & \cmark & \xmark & \xmark & Per-segment & Many & Model & \xmark & \xmark \\ \hline ADDA \cite{ADDA} & 2019 & Raspberry Pi 3 & 1 & \xmark & \xmark & \xmark & Per-layer & 1 & \xmark & Early exit & \xmark \\ \hline \begin{tabular}[c]{@{}c@{}}Boomerang\\ \cite{Boomerang}\end{tabular} & 2019 & Raspberry Pi 3 & 1 & \xmark & \xmark & \cmark & Per-layer & 1 & \xmark & Early exit & \cmark \\ \hline \cite{Byzantine} & 2019 & Krait CPU & 12 & \cmark & \begin{tabular}[c]{@{}c@{}}sensors\\ fault tolerance\end{tabular} & \cmark & \xmark & Many & Model & \xmark & \cmark \\ \hline \cite{CNNDist} & 2019 & \begin{tabular}[c]{@{}c@{}}Raspberry Pi\\ STM32H7\end{tabular} & Many & \cmark & \xmark & \xmark & Per-layer & Many & Data & \xmark & \xmark \\ \hline DADS \cite{DNNSurgery} & 2019 & \begin{tabular}[c]{@{}c@{}}Raspberry Pi 3\\ Model B\end{tabular} & 1 & \xmark & \xmark & \cmark & Per-layer & Many & \xmark & \xmark & \cmark \\ \hline \begin{tabular}[c]{@{}c@{}}COLT-OPE \\ \cite{HierarDistGlobecom} \end{tabular} & 2019 & \xmark & 1 & \xmark &\xmark & \xmark & Per-layer & Many & \xmark & Early exit & \cmark \\ \hline EDDL \cite{EDDL} & 2019 & Fog nodes & Many & \cmark & \xmark & \xmark &
\begin{tabular}[c]{@{}c@{}}Per-layer\\ Per-segment\end{tabular} & Many & Model & \begin{tabular}[c]{@{}c@{}}Sparsification\\ Early exit\end{tabular} & \xmark \\ \hline \cite{energy-Aware-dist2} & 2019 & \begin{tabular}[c]{@{}c@{}}GPU \\ GTX 1080 \end{tabular}& 1 & \xmark & \xmark & \cmark & Per-layer & 1 & \xmark & \xmark & \cmark \\ \hline \cite{FullyDistribution} & 2019 & \xmark & 7 & \cmark & \xmark & \xmark & \begin{tabular}[c]{@{}c@{}}Per-layer\\ Per-segment\end{tabular} & Many & Model & \xmark & \xmark \\ \hline \begin{tabular}[c]{@{}c@{}}deepFogGuard\\ \cite{FailureDis} \end{tabular} & 2019 & \xmark & Many & \xmark & \xmark & \xmark & Per-layer & Many & \xmark & \xmark & \xmark \\ \hline \begin{tabular}[c]{@{}c@{}}2steps-pruning\\ \cite{2stepsPruning} \end{tabular} & 2019 & \xmark & 2 & \xmark & \xmark & \xmark & Per-layer & 1 & \xmark & Pruning & \xmark \\ \hline \begin{tabular}[c]{@{}c@{}}JointDNN \\ \cite{joinDNN} \end{tabular}& 2019 & Jetson TX2 & 1 & \xmark & \xmark & \xmark & Per-layer & 1 & \xmark & \xmark & \cmark \\ \hline AAIoT \cite{AAIoT} & 2019 & \begin{tabular}[c]{@{}c@{}}Raspberry Pi,\\ Mobile PC, \\ Desktop PC, \\ Server\end{tabular} & Many & \cmark & \xmark & \xmark & Per-layer & Many & \xmark & \xmark & \xmark \\ \hline MWWP \cite{AccelerateDI} & 2020 & \xmark & Many & \xmark & healthcare & \cmark & Per-layer & Many & Data & \xmark & \cmark \\ \hline \cite{Commeff} & 2020 & Raspberry Pi & Many & \cmark & \begin{tabular}[c]{@{}c@{}}multi-view \\ object \\ detection\end{tabular} & \xmark & Per-segment & Many & Model & Compression & \xmark \\ \hline \begin{tabular}[c]{@{}c@{}}CONVENE \\ \cite{CCNN}\end{tabular} & 2020 & \xmark & 1 & \xmark & \begin{tabular}[c]{@{}c@{}}Parallel data \\ sharing on\\ antennas\end{tabular} & \xmark & Per-segment & Many & Model & \xmark & \cmark \\ \hline DINA \cite{acceleration} & 2020 & \xmark & Many & \xmark & \xmark & \xmark & Per-segment & Many & Both & \xmark & \cmark \\ \hline \cite{vehicles} & 2020 & \xmark & Many & \cmark & \begin{tabular}[c]{@{}c@{}}Intelligent \\ Connected\\ Vehicles\end{tabular} & \cmark & \xmark & Many & \xmark & \xmark & \xmark \\ \hline \cite{IoTDNN} & 2020 & \xmark & 1 & \xmark & \xmark & \xmark & Per-layer & 1 & \xmark & Compression & \xmark \\ \hline \cite{AR} & 2020 & Huawei & 1 & \xmark & \begin{tabular}[c]{@{}c@{}}augmented \\ reality\\ in 5G\end{tabular} & \cmark & Per-layer & 2 & Data & Early exit & \cmark \\ \hline \cite{IoTInferencing} & 2020 & Raspberry Pi 3 & Many & \cmark & \begin{tabular}[c]{@{}c@{}}Visual-based \\ applications\end{tabular} & \xmark & Per-segment & Many & Both & \xmark & \xmark \\ \hline \begin{tabular}[c]{@{}c@{}}DeepWear \\ \cite{DeepWear} \end{tabular}& 2020 & Android Wear & 2 & \cmark & \begin{tabular}[c]{@{}c@{}} Wearable\\ devices\end{tabular} & \cmark & Per-layer & 1 & \xmark & Compression & \cmark \\ \hline \cite{energy-Aware-dist} & 2021 & Cloudlet & Many & \cmark & 5G & \cmark & Per-layer & Many & Data & \xmark & \cmark \\ \hline \begin{tabular}[c]{@{}c@{}}DistPrivacy \\ \cite{DistPrivacy}\end{tabular} & 2021 & \begin{tabular}[c]{@{}c@{}}Raspberry Pi\\ STM32H7\\ LG Nexus 5\end{tabular} & Many & \cmark & Data privacy & \cmark & Per-segment & Many & Both & \xmark & \cmark \\ \hline \end{tabular} \end{table*} \subsection{Use cases} The DNN distribution is applied in multiple AI-governed use cases, including healthcare \cite{AccelerateDI}, object detection \cite{Commeff}, and intelligent connected vehicles \cite{vehicles}.
In this section, we review the literature on two applications that impose extra constraints on the system, namely data privacy and distribution on moving robots. The latter raises challenges in managing the battery life, the connection with other participants, and the dynamic model selection needed to maintain accuracy on data captured in harsh environments. \subsubsection{Data privacy} The data captured by end-devices and sent to remote servers (e.g., from cameras or sensors to cloud servers) may contain sensitive information such as camera images, GPS coordinates of critical targets, or vital signs of patients. Exposing these data has become a major security concern for the deep learning community. This issue is even more concerning when the data is collected from a small geographical area (e.g., edge computing) involving a set of limited and cooperating users. In fact, if an attacker obtains some data (even public or only slightly sensitive), a DL classifier can be trained to automatically infer the private data of a known community. These attacks, which pose severe privacy threats, are called inference attacks: they analyze trivial or publicly available data to illegitimately acquire knowledge about more sensitive information without accessing it, by only capturing statistical correlations. A popular example of an inference attack is the Cambridge Analytica scandal in 2016, where public data of Facebook users were exploited to predict their private attributes (e.g., political views and location). Some well-known inference attacks are summarized in Table \ref{inference_attacks}. \begin{table}[h] \centering \caption{Examples of inference attacks.} \label{inference_attacks} \begin{tabular}{|c|c|c|} \hline \textbf{Inference attacks} & \textbf{Exposed data} & \textbf{Sensitive data} \\ \hline \begin{tabular}[c]{@{}c@{}}Side-channel attacks\\ \cite{side_channel}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Processing time,\\ power consumption.\end{tabular} & \begin{tabular}[c]{@{}c@{}}Cryptographic\\ keys\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Location inference\\ attacks \cite{location_attack}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Smartphones' sensor\\ data.\end{tabular} & Location \\ \hline \begin{tabular}[c]{@{}c@{}}Feature inference\\ attacks \cite{feature_attack}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Prediction results,\\ partial features of the\\ DNN model.\end{tabular} & DNN structure \\ \hline \begin{tabular}[c]{@{}c@{}}Membership inference\\ attacks \cite{membership_attack}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Confidence level \\ of classes,\\ gradients.\end{tabular} & \begin{tabular}[c]{@{}c@{}}Membership of \\ a sample in a\\ dataset.\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Attribute inference \\ attacks \cite{attribute_attack}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Social data, likes, \\ friends.\end{tabular} & \begin{tabular}[c]{@{}c@{}}Gender, age,\\ preferences.\end{tabular} \\ \hline \end{tabular} \end{table} Edge computing naturally enhances the privacy of sensitive information by minimizing the data transfer to the cloud through the public internet. However, additional privacy techniques should be adopted to further protect the data from eavesdroppers. In this context, in addition to its ability to enable the pervasive deployment of neural networks, DNN splitting has also been used for privacy purposes: by partitioning the model, partially processed data is sent to the untrusted party instead of the raw data.
In fact, in contrast to the training data, which belongs to a specific dataset and generally follows a statistical distribution, the inference samples are random and harder to revert. Furthermore, the model parameters are independent of the input data, which makes the inference process reveal less information about the sample \cite{white-box}. Even when privacy is preserved, the inevitable challenge of DNN partitioning remains selecting the split point that satisfies the latency requirements of the system. \begin{table*}[!h] \centering \caption{Comparison between privacy-aware distribution strategies.\\ (H: High, M: Medium, L: Low).} \label{tab:privacy} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}\textbf{Privacy-aware}\\ \textbf{strategy}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Privacy}\\ \textbf{level}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Accuracy}\\ \textbf{preserving}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{DNN}\\ \textbf{re-training}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Compatibility}\\ \textbf{with IoT and DNNs}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Partitioning}\\ \textbf{strategy}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Communication}\\ \textbf{overhead}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Computation}\\ \textbf{overhead on}\\ \textbf{source-device}\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Deep split \cite{white-box}\end{tabular} & H & \cmark & \xmark & \cmark & per-layer & L & H \\ \hline \begin{tabular}[c]{@{}c@{}}Feature extraction \\ \cite{DistEncoder,hybrid_privacy}\end{tabular} & L & \xmark & \xmark & \cmark & per-layer & L & M \\ \hline \begin{tabular}[c]{@{}c@{}}Noise addition\\ \cite{DistNoise,shredder_privacy,diffPrivacy}\end{tabular} & M & \xmark & \cmark & \xmark & per-layer & M & H \\ \hline \begin{tabular}[c]{@{}c@{}}Cryptography \cite{dist_privacy2}\end{tabular} & H & \xmark & \cmark & \xmark & per-layer & M & H \\ \hline \begin{tabular}[c]{@{}c@{}}Privacy-aware\\ partitioning \cite{DistPrivacy}\end{tabular} & M & \cmark & \xmark & \cmark & \begin{tabular}[c]{@{}c@{}}Filter\\ splitting \end{tabular} & H & L \\ \hline \end{tabular} \end{table*} Authors in \cite{DistEncoder} proposed to use an encoder to extract, from the original image or from one of the layers' outputs, the features sufficient and necessary to conduct the classification, and to transmit these data to the centralized server for inference. This approach prevents the exposure of irrelevant information to the untrusted party, which may otherwise use it for unwanted inferences. Although offloading only the extracted features contributes to minimizing the transmission overhead, this work focused solely on the privacy perspective. The work in \cite{hybrid_privacy} also proposed feature extraction for data privacy, while achieving a trade-off between the on-device computation, the size of the transmitted data, and the security constraints. In fact, selecting the split layer from which the data will be extracted intrinsically presents a security compromise. Particularly, as we go deeper in the DNN network, the features become more task-specific, and the irrelevant data that can contain sensitive information is mitigated \cite{transformDNN}. Hence, if the split is performed at a deep layer, the privacy is more robust and the transmission overhead is lower. However, a higher processing load is imposed on the source device.
The latter work \cite{hybrid_privacy}, along with the work in \cite{white-box}, advised performing a deep partition when the source device has enough computational capacity. If the source device is resource-constrained, the model should be partitioned at the shallow layers, although most of the output features are then not related to the main task. Authors in \cite{hybrid_privacy} proposed a solution based on Siamese fine-tuning \cite{Siamase} and dimensionality reduction to manipulate the intermediate data and send only the essential measures without any irrelevant information. In addition to enhancing privacy, this mechanism contributes to reducing the communication overhead between the end-device and the remote server. In this context, the arms race between attacks and defenses for DNN models has come to the forefront: the amount of extracted features can be sufficient for adversarial approaches to recover the original image, whereas sharing fewer features may result in low classification accuracy. The works in \cite{white-box,adversial_attacks1,adversial_attacks2} proposed adversarial attacks to predict the inference input data (or the trained model), using only the features available from the outputs shared between participants. Authors in \cite{white-box} focused particularly on the privacy threats presented by the DNN distribution and accordingly designed a white-box attack, assuming that the structure of the trained model is known and that the intermediate data can be inverted through regularized Maximum Likelihood Estimation (rMLE). Additionally, a black-box attack is also proposed, where the malicious participant only has knowledge about its own segment and attempts to design an inverse DNN that maps the received features to the targeted input and recovers the original data. The authors demonstrated that recovering the original data is possible when the neural network is distributed into layers. Numerous countermeasures have been considered to strengthen the robustness of deep networks, including adding noise to the intermediate data to obfuscate it, differential privacy, and cryptographic techniques that train the model on encrypted samples. Adding noise to the intermediate data is adopted in \cite{DistNoise}. In this paper, the authors proposed to perform a simple data transformation on the source device to extract relevant features and add noise. Next, these features extracted from the shallow layers are sent to the cloud to complete the inference. To maintain a high classification accuracy, the neural network is re-trained with a dataset containing noisy samples. However, adding noise to the intermediate data costs the system additional energy consumption and computational overhead. Therefore, the splitting should be done at a layer where the output size is minimal, though the latter work did not describe its partition strategy. The Shredder approach \cite{shredder_privacy} resolved this dilemma by considering the computation overhead during the noise injection process. The idea is to conduct an offline machine learning training to find the noise distribution that strikes a balance between privacy (i.e., information loss) and accuracy drop. In this way, the DNN model does not require retraining with the noisy data, and the network can be cut at any point to directly apply the noise distribution, as sketched below. The partitioning decision is based on the communication and computation costs.
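The following minimal sketch illustrates such noise injection at the split point; the Laplace distribution and its fixed scale are placeholder assumptions, whereas Shredder learns the noise distribution offline.
\begin{verbatim}
import numpy as np

def noisy_intermediate(features, scale):
    # Obfuscate the activations sent to the untrusted party with
    # zero-mean additive noise; `scale` trades privacy (information
    # loss) against the accuracy drop.
    return features + np.random.laplace(0.0, scale, size=features.shape)

# On the source device: run the shallow layers locally, perturb the
# output, then offload. `shallow_layers` and `send_to_server` are
# hypothetical placeholders for the local sub-model and the link.
#   z = shallow_layers(x)
#   send_to_server(noisy_intermediate(z, scale=0.1))
\end{verbatim}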
A higher privacy level and a lower communication overhead are guaranteed when the split is performed at deep layers; however, the allocation at the end-device becomes less scalable. Adding noise or extracting task-specific data can be included under the umbrella of differential privacy, which, at a high level, ensures that the untrusted party does not receive exploitable information about the private input data, while still producing satisfactory classifications. The performance of differential privacy is assessed by a privacy budget parameter $\epsilon$ that denotes the level of distinguishability. Authors in \cite{diffPrivacy} conducted a theoretical analysis to minimize $\epsilon$, while considering the accuracy and the communication overhead of offloading the intermediate features among fog participants. Cryptography is another technique that can be used to protect the distributed inference. The main idea is to encrypt the input data and process it using a model trained on an encrypted dataset, so that the intermediate data cannot be exploited by a malicious participant. Little research, including \cite{dist_privacy2}, has investigated encrypted DNN distribution, as this approach suffers from a prohibitive computation and communication overhead that exacerbates the complexity of the inference process, particularly when executed on resource-constrained devices. All the previous techniques applied additional tasks to secure the shared data, e.g., feature extraction, noise addition, and encryption, which overloads the pervasive devices with computational overhead. Differently from previous works, DistPrivacy \cite{DistPrivacy} used the partitioning scheme itself to guarantee the privacy of the data. In fact, all the existing privacy-aware approaches adopted the per-layer distribution of the DNN model; this partitioning strategy produces intermediate shared data that can be easily inverted using adversarial attacks. The main idea in \cite{DistPrivacy} is to divide the data resulting from each layer into small segments and distribute them to multiple IoT participants, which contributes to hiding the properties of the original image and preventing untrusted devices from recovering the data. Particularly, the authors adopted the filter splitting strategy, in such a way that each device computes a part of the feature maps. However, as stated in section \ref{profiling}, this partitioning strategy results in large data transmissions between participants. Therefore, the authors formulated an optimization problem that establishes a trade-off between privacy and communication overhead. Fig. \ref{privacy_aware} illustrates the different privacy-aware strategies for distributed inference existing in the literature. \newline \begin{figure}[!h] \centering \includegraphics[scale=0.42]{Figures/privacy2.pdf} \caption{Privacy-aware distribution strategies.} \label{privacy_aware} \end{figure} Table \ref{tab:privacy} shows the performance of the different privacy-aware distribution strategies. We can see that choosing the adequate strategy depends on the requirements of the pervasive system, as multiple trade-offs need to be established, such as between the security level and the accuracy, or between the computation and communication loads. \subsubsection{Distribution on moving robots} Robotic systems have been progressively converging toward computationally expensive AI networks for tasks such as path planning and object detection.
However, resource-limited robots, such as low-power UAVs, have insufficient on-board battery or computational resources to scalably execute highly accurate neural networks. In surveillance applications, the aim is to monitor specific objects or identify threats within the target region. Moving devices are the most suitable technology to provide information about the target object from different angles, which makes the identification more accurate. These data-generating devices are only responsible for collecting the data, while servers with higher capacities generate the identification results. Traditional wisdom resorts to cloud or edge servers to compute heavy tasks. However, due to the harsh environments where robots move (e.g., military border zones, forests, and offshore oil reserves), the communication with remote servers is strongly affected by the weather. Also, the processing might be difficult or even impossible because of the interference resulting from the UAV altitude or from the surrounding environment (e.g., the effect of high-rise buildings on the path loss). Furthermore, as surveillance devices send high-resolution images to cloud/edge servers at short intervals of time, and knowing that incidents rarely occur, the large data volume transmitted by the source units becomes problematic, particularly for systems characterized by unstable bandwidth availability. Because of this tremendous amount of data obtained during the robots' missions, AI should be integrated within the design of the devices. However, moving robots come with distinct and often understated features: communicating with remote servers while moving incurs unstable latency, higher energy consumption, and potential loss of data. \begin{figure}[!h] \centering \frame{\includegraphics[scale=0.6]{Figures/UAVs.pdf}} \caption{A fire detection scenario with distributed DNN.} \label{UAVs_mobility} \end{figure} The works in \cite{uavs,uavs2} examined the case of per-layer distribution with one split point between one UAV and one MEC server (see Fig. \ref{UAVs_mobility}). More specifically, the authors proposed a framework for an AI-based visual target tracking system, where the low-level layers of the DNN classifier are deployed on the UAV device and the high-level layers are assigned to the remote servers. The classification can be performed using only the low-level layers if the image quality is good. Otherwise, the output of these layers should be further processed in the MEC server for higher accuracy. In this context, the authors formulated a weighted-sum cost minimization problem for binary and partial offloading, while taking into consideration the error rate/accuracy, the data quality, the communication bandwidth, and the computing capacities of the MEC and the UAV. The offloading probability is derived for the binary offloading, and the offloading ratio (i.e., the segment of the DNN to execute in the MEC) is obtained for the partial offloading scheme. In this model, the mobility of the UAV (i.e., the distance between the UAV and the server) is captured through the transmission data rate between the device and the MEC, as presented in Eq. (\ref{eq:7}) in the previous section \ref{profiling}. Additionally, the distance between the UAV and the target impacts the quality of the image and consequently the offloading decisions.
In the proposed framework, multiple trade-offs are experienced: \begin{itemize} \item The accuracy is achieved at the expense of delay and transmitted data: if most of the images have bad quality, the system cannot accomplish a low average latency, as on-board inference is not sufficient. For this reason, the inference should be extended judiciously to the segment allocated in the MEC, particularly if the environment is challenging, e.g., bad weather or a highly dynamic target. \item A trade-off also exists between the accuracy and latency, and the position of the UAVs: when the device is close to the target, high-resolution images can be taken, which allows obtaining good accuracy on-board and avoiding data offloading. Being close to the targets is not always possible, particularly in harsh environments or when the surveillance should be hidden. \item The battery life is increased at the expense of the inference latency: the battery can be saved if the processing frequency is decreased, which enlarges the computation time of the classification. \item The split point selection: if the intermediate data is smaller than the raw data, offloading is encouraged to enhance the accuracy. \end{itemize} An online solution of this offloading trade-off using reinforcement learning is presented in \cite{cloudRobotics}. The previous works adopted per-layer splitting with one split point and the remote collaboration approach. This strategy is more adequate for flying devices that can enhance their link quality by approaching the MEC stations. However, for ground robots, offloading segments of the inference to remote servers costs the system a large transmission overhead and high energy consumption. The authors in \cite{robots_power} studied the distribution of the DNN network among ground robots and profiled the energy consumed by such tasks when the robots are moving or idle. Several conclusions are stated: \begin{itemize} \item When the robot is idle, the DNN computation and offloading increase the power consumption of the device by 50\%. \item If the device is moving, the DNN execution causes high spikes in power consumption, which may prevent the device from attaining a high performance, as this variation incurs frequent changes of the power-saving settings in the CPU. \item Distributing the inference contributes to reducing the energy consumed per device, even though the total power consumption is higher. This is due to the reduced computation and memory operations per device and the idle time experienced after offloading the tasks. \end{itemize} Based on this energy study of moving robots, the authors proposed to distribute the DNN model into smaller segments among multiple low-power robots to achieve an equilibrium of performance in terms of energy and number of executed tasks \cite{robots}. Still, the distribution of the model into small segments (e.g., filter splitting) requires the intervention of a large number of highly interdependent robots, which is not realistic.
\section{Federated Learning for Distributed Training}\label{FL}
Despite the great potential of deep learning in different applications, it still faces major challenges that need to be addressed. These challenges are mainly due to the massive amount of data needed for training deep learning models, which imposes severe communication overheads on both the network and the end-users.
Moreover, the conventional way of transferring the acquired data to a central server for training comes with many privacy concerns that several applications may not tolerate. In this context, the need for intelligent and on-device DL training has emerged. More specifically, instead of moving the data from the users to a centralized data center, the server broadcasts a pre-trained model to all the pervasive data-sources. Then, each participant deploys and personalizes this generic model by training it on its own data locally. In this way, privacy is guaranteed, as the data is processed within the host. On-device training has been widely used in many applications \cite{Survey14}, such as the medical field, assistance services, and smart education. However, this no-round-trip training technique precludes the end-devices from benefiting from each other's experiences, which limits the performance of the local models. To this end, Federated Learning (FL) has been advanced, where end-users can fine-tune their learning models while preserving privacy and local data processing. Then, these local models (i.e., model updates) are aggregated and synchronized (averaged) at a centralized server, before being sent back to the end-users. This process repeats several times (i.e., communication rounds) until reaching convergence. Accordingly, each participant builds a model from its local data and benefits from others' experiences, without violating privacy constraints. FL was proposed by Google researchers in 2016 \cite{synchFL}, and since then, it has witnessed unprecedented growth in both industry and academia. For the sake of completeness, we present in this section an overview of this emerging pervasive learning technique, i.e., Federated Learning. In particular, we introduce the computation and communication models of the FL techniques. Then, we present a brief summary of the related works in the literature, while highlighting a use case that considers the application of FL within UAV swarms. It is worth mentioning that FL can be used for both online and offline learning (i.e., the training can be performed on static datasets at once, or continuously on new data received by different participants). In this paper, we focus on studying the communication models of pervasive FL, which are valid for both offline and online learning; for this reason, we chose not to include FL under the umbrella of the online learning section. Fig. \ref{FL_outline} summarizes the main subsections presented in what follows.
\begin{figure}[!h] \centering \includegraphics[scale=0.6]{Figures/FL_taxonomy2.pdf} \caption{Outline of federated learning.} \label{FL_outline} \end{figure}
\subsection{Profiling computation and communication models \label{sec:Fundamentals}}
Generally, the FL system is composed of two main entities, which are the data-sources (i.e., owners of data or pervasive participants) and the centralized server (i.e., the model owner). Let $N$ denote the number of data-sources. Each one of these devices has its own dataset $D_i$. This private data is used to train the local model $m_i$, and then the local parameters are sent to the centralized server. Next, the local models are collected and aggregated into a global model $m_G=\bigcup_{i=1}^{N} m_i$. FL is different from training in the remote server, where the distributed data are collected and aggregated first, i.e., $D_G=\bigcup_{i=1}^{N} D_i$, and then one model $m$ is trained centrally.
We assume that data-sources are honest and submit their real data or their true local models to the centralized server. Otherwise, control and incentive techniques are used to guarantee the reliability of FL, including \cite{FL_incentive}. Typically, the life cycle of FL is composed of multiple communication rounds that are completed when the centralized model reaches a satisfactory accuracy. Each round includes the following steps: \begin{itemize} \item \textit{Initialization of FL:} The centralized server fixes the training task, the data shape, the initial model parameters, and the learning process (e.g., learning rate). This initial model $m_G^0$ is broadcasted to the selected participants. \item \textit{Training and updating the local models:} Based on the current global model $m_G^t$, each data-source $i$ utilizes its own data $D_i$ to update the local model $m_i^t$. We note that $t$ denotes the current round index. Hence, at each step $t$, the goal of each participant is to find the optimal parameters minimizing the loss function $L(m_i^t)$ defined as: \begin{equation} m_i^{t*}= \arg\min_{m_i^{t}} L(m_i^t) \end{equation} Subsequently, the updated parameters of the local models are offloaded to the server by all selected participants. \item \textit{Global model aggregation:} The received parameters are aggregated into one global model $m_G^{t+1}$, which is in turn sent back to the data owners. This process is repeated continuously, until reaching convergence. The server goal is to minimize the global loss function presented as follows: \begin{equation} L(m^t_G)= \frac{1}{N} \sum\limits_{i=1}^{N}L(m^t_i) \end{equation} The aggregation of the global model is the most important phase of FL. A classical and straightforward aggregation technique, namely FedAvg, was proposed in the Google reference paper \cite{synchFL} and is presented in Algorithm \ref{alg:FL}. \end{itemize}
\begin{algorithm} \caption{FederatedAveraging (FedAvg) \cite{synchFL}} \label{alg:FL} \begin{algorithmic}[1] \State \textbf{Input:} $N$: participants, $D_i$: dataset of the device $i$, $B$: local mini-batches, $E$: number of local epochs, $T$: number of rounds, $\rho$: learning rate, $c$: fraction of participants. \State \textbf{Output:} Global model $m_G$. \State \textbf{Initialization of FL:} initialize $m_G^0$ \State \textbf{Global model aggregation:} \For{t=1...T} \State $NP\leftarrow \max(c \cdot N,1)$ \State $P \leftarrow$ select random $NP$ participants \For {$i \in P$ \textbf{in parallel}} \State $m_i^{t+1}\leftarrow$ \textbf{Local model update} $(i,m_G^t)$ \EndFor \State $m_G^{t+1}=\sum\limits_{i=1}^{N}\frac{|D_i|}{\sum\limits_{j=1}^{N}|D_j|}m_i^{t+1}$ \text{\footnotesize\% Averaging aggregation} \EndFor \State \textbf{Local model update} (i,m): \State $d \leftarrow$ split $D_i$ into batches of size $B$ \For{j=1..E} \For{samples $b \in d$} \State $m \leftarrow m-\rho \Delta L(m,b)$ \text{\footnotesize\% $\Delta L$ is the gradient of $L$} \EndFor \EndFor \end{algorithmic} \end{algorithm}
Algorithm \ref{alg:FL} summarizes the aforementioned steps, where the centralized server tries to minimize the global loss function by averaging the aggregated updates following the equation in line 11. The FL system is iterated continuously until the convergence of the global loss function or reaching a desirable accuracy.
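To illustrate these steps end-to-end, the following minimal numpy sketch simulates FedAvg on synthetic linear-regression data; all hyper-parameters and dataset sizes are illustrative assumptions, not values from \cite{synchFL}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Synthetic private datasets: every participant observes the same
# linear model y = X @ w_true with its own samples and noise.
w_true = np.array([2.0, -1.0])
datasets = []
for n in (40, 60, 100):                 # heterogeneous dataset sizes
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    datasets.append((X, y))

def local_update(w, X, y, epochs=5, lr=0.05, batch=10):
    # "Local model update" step: mini-batch gradient descent on the
    # private data only (gradient of the mean squared error).
    for _ in range(epochs):
        for s in range(0, len(y), batch):
            Xb, yb = X[s:s + batch], y[s:s + batch]
            w = w - lr * 2 * Xb.T @ (Xb @ w - yb) / len(yb)
    return w

w_global = np.zeros(2)                  # initial broadcast model
sizes = np.array([len(y) for _, y in datasets])
for t in range(20):                     # communication rounds
    locals_ = [local_update(w_global.copy(), X, y)
               for X, y in datasets]
    # Aggregation weighted by |D_i| / sum_j |D_j|, as in line 11.
    w_global = np.average(locals_, axis=0, weights=sizes)

print(w_global)                         # approaches w_true
\end{verbatim}
The weighting by $|D_i|/\sum_{j}|D_j|$ in the averaging step mirrors line 11 of Algorithm \ref{alg:FL}, so participants holding more data contribute proportionally more to the global model.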
A major challenge in FL is the large communication and energy overhead related to exchanging the model updates between the different end-users and the centralized server \cite{SigProc_M, ChinaComm}. Such overheads depend on multiple parameters, including the size of the model updates, the number of participating users, the number of epochs per user, and the number of communication rounds required to reach convergence. Particularly, the energy consumed by an FL participant $i$ characterized by a frequency $f$, a local dataset $D_i$, and a number of local epochs $E$, is given by \cite{FL_energy,FL_energy2}: \begin{equation} \label{FL_energy} e^c_i= E \times (\phi\gamma |D_i| f^2), \end{equation} where $\phi$ is the number of CPU cycles required to compute one input instance, and $\gamma$ is a constant related to the CPU. The latency required to compute the local model can be expressed as: \begin{equation} \label{FL_latency} t^c_i= E \times (\frac{\phi|D_i|}{f}). \end{equation} From equations (\ref{FL_energy}) and (\ref{FL_latency}), we can see that a trade-off exists between the local training latency and the consumed energy. More specifically, for a fixed accuracy determined by the number of local epochs and a fixed frequency, the latency grows with the size of the private data. If the data size and the accuracy are fixed, increasing the CPU frequency can help to minimize the local model computation time. However, minimizing the latency comes at the expense of an energy consumption that increases with the square of the operating frequency.
\begin{figure*}[!h] \centering \includegraphics[scale=0.47]{Figures/FL_designs1.pdf} \caption{The FL architectures considered in the literature: (a) one-layer FL, (b) edge-assisted FL.} \label{fig: FL_arch} \end{figure*}
The transmission time to share the model updates between the centralized server and the different FL participants mainly depends on the channel quality, the number of devices, and the number of global rounds, illustrated as follows: \begin{equation} \label{FL_transmission} t^T= T \times \sum\limits_{i=1}^{N}\frac{K}{\rho_i}, \end{equation} where $K$ is the size of the model parameters shared with the server and $\rho_i$ is the data rate of participant $i$. On the other hand, the total energy consumed during the federated learning process using the local transmit powers $P_i$ is equal to: \begin{equation} e^T= T \times \sum\limits_{i=1}^{N}\frac{KP_i}{\rho_i}. \end{equation} From the above equations, we can see that the local iterations $E$ and the global communication rounds $T$ are very important to optimize the energy, computation, and communication costs. Particularly, for a relative local accuracy $\theta_l$, $E$ can be expressed as follows \cite{FL_local}: \begin{equation} \label{FL_E} E= \alpha \times \log(\frac{1}{\theta_l}), \end{equation} where $\alpha$ is a parameter that depends on the dataset size and the local sub-problems. The upper bound on the number of global rounds to reach the targeted accuracy $\theta_G$ can be presented as \cite{FL_local}: \begin{equation} \label{FL_T} T= \frac{\zeta \log(\frac{1}{\theta_G})}{1-\theta_l}. \end{equation} We note that $\zeta \log(\frac{1}{\theta_G})$ is used instead of $O(\log(\frac{1}{\theta_G}))$, where $\zeta$ is a positive constant. From equations (\ref{FL_E}) and (\ref{FL_T}), we can see that the computation cost, which depends on the local iterations $E$, and the communication cost, which depends on the global rounds $T$, are conflicting: minimizing $E$ implies maximizing $T$ and consequently increasing the convergence latency.
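A small numeric sketch of equations (\ref{FL_E}) and (\ref{FL_T}) makes this tension visible; the constants $\alpha$, $\zeta$, and $\theta_G$ below are arbitrary illustrative values:
\begin{verbatim}
import numpy as np

# Trade-off between local epochs E (Eq. FL_E) and global rounds T
# (Eq. FL_T); alpha and zeta are unknown constants of the analysis,
# fixed to arbitrary values for illustration.
alpha, zeta, theta_G = 10.0, 5.0, 0.05

for theta_l in (0.9, 0.5, 0.1):
    E = alpha * np.log(1 / theta_l)                  # local work/round
    T = zeta * np.log(1 / theta_G) / (1 - theta_l)   # global rounds
    print(f"theta_l={theta_l:.1f}  E={E:5.1f}  "
          f"T={T:6.1f}  E*T={E * T:7.1f}")
\end{verbatim}
A loose local accuracy (large $\theta_l$) keeps $E$ small but inflates $T$, while a tight one does the opposite, so the total computation cost, proportional to $E \times T$, exhibits a sweet spot that resource-management schemes aim to hit.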
To summarize, the FL pervasiveness aspects tackled by different studies to reduce communication and energy overheads include: \begin{enumerate} \item reducing the communication frequency, i.e., the number of communication rounds; \item reducing the number of local iterations; \item selecting a minimum number of participating users in the training process; \item optimizing the operating frequencies of local devices; \item minimizing the entropy of the model updates by using lossy compression schemes; \item using efficient encoding schemes when communicating model updates. \end{enumerate} In what follows, we categorize the FL schemes presented in the literature based on the system architecture, namely one-layer FL and edge-assisted FL. The former refers to a user-cloud architecture, where different users share their learning models with a cloud or centralized server for aggregation, while the latter refers to a user-edge-cloud architecture, where edge nodes are leveraged to reduce communication overheads and accelerate FL convergence (see Figure \ref{fig: FL_arch}).
\subsection{Resource management for Federated learning}
\subsubsection{One-layer Federated Learning \label{sec:Single}}
The concept of FL was first proposed in \cite{synchFL} by Google, with its efficiency demonstrated through experiments on different datasets. The presented model in \cite{synchFL} considered a one-layer FL, where the users exchange their updated models with a centralized server that aggregates them and forms an updated global model at a fixed frequency. Afterward, several extensions have been proposed to the original FL. The investigated problems/approaches in FL, considering the one-layer architecture, can be categorized into: \begin{itemize} \item analyzing the convergence of distributed gradient descent and federated learning algorithms from a theoretical perspective, and optimizing the learning process given a computation and communication resource budget \cite{zhao_federated_2018, Convergence_FedAvg, Wang2019, flOptimizationModel}; \item considering partial user participation for the FL aggregation process in a resource-constrained environment while balancing between the model accuracy and the communication cost \cite{Nishio2019ClientSF, wang_optimizing_2020, 9145182, userSelection_Straggler2020}; \item developing communication-efficient techniques to reduce the amount of exchanged data in FL communications by adopting various sparsification and compression techniques \cite{Sparse_Communication, Felix2020, FL_IOT}. \end{itemize} The effect of non-Independent and Identically Distributed (non-IID) data on the performance of FL has been studied in \cite{zhao_federated_2018}. It has been shown, theoretically and empirically, that highly skewed non-IID data (i.e., the local data at different users are not identically distributed) can reduce the accuracy of the global learning model by up to $55\%$. As a solution to enhance the training on non-IID data, the authors proposed to globally share a small subset of data between all users. Combining this data with the local data of each user makes it less biased or skewed. However, exchanging data between different users is not always feasible due to the privacy constraints and the communication overhead of sharing such data. In \cite{Convergence_FedAvg}, the authors analyzed the convergence rate of FedAvg on non-IID data for strongly convex and smooth problems.
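For intuition, the pathological label skew studied in these works can be emulated with the shard-based partition commonly used in the FL literature; this is a generic sketch, not the exact protocol of \cite{zhao_federated_2018}:
\begin{verbatim}
import numpy as np

def label_skewed_partition(labels, n_users, shards_per_user=2, seed=0):
    """Sort examples by label, cut them into shards, and hand each
    user a few shards, so every user sees only a few classes."""
    rng = np.random.default_rng(seed)
    order = np.argsort(labels)                # group indices by label
    n_shards = n_users * shards_per_user
    shards = np.array_split(order, n_shards)
    perm = rng.permutation(n_shards)
    return [np.concatenate([shards[s] for s in perm[u::n_users]])
            for u in range(n_users)]

labels = np.repeat(np.arange(10), 100)        # 10 balanced classes
for u, idx in enumerate(label_skewed_partition(labels, n_users=5)):
    print("user", u, "sees classes", np.unique(labels[idx]))
\end{verbatim}
Partitions of this kind are what drive the weight divergence analyzed in \cite{zhao_federated_2018}, since each local gradient is computed on a heavily biased class distribution.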
In \cite{Wang2019}, the authors studied the adaptation of the global aggregation frequency for FL, while considering a fixed resource budget. Indeed, they analyzed the convergence bound of gradient-descent based FL on non-IID data from a theoretical perspective. Then, they used this convergence bound to build a control algorithm that adapts the frequency of global aggregation in real-time to minimize the learning loss under a fixed resource budget. A new FL algorithm, named FEDL, is presented in \cite{flOptimizationModel}. This algorithm uses a local surrogate function that enables each user to solve its local problem approximately, up to a certain accuracy level. The authors presented the linear convergence rate of FEDL as a function of the local accuracy and the hyper-learning rate. Then, a resource allocation problem over wireless networks was formulated, using FEDL, to capture the trade-off between the training time of FEDL and the users' energy consumption. It is shown in \cite{Convergence_FedAvg} that the participation of all users in the FL process forces the central server to wait for \textit{stragglers}, i.e., users who have low-quality wireless links that can significantly slow down the FL process, which makes FL unrealistic. Thus, to mitigate the impact of \textit{stragglers}, the authors in \cite{Nishio2019ClientSF} proposed a method to select a subset of users for the FL synchronization (or aggregation) process in a resource-constrained environment. They demonstrated the advantages of such a technique in improving the FL learning speed. This work has been extended in \cite{wang_optimizing_2020}, where a control scheme is proposed, based on reinforcement learning, to accelerate the FL process by actively selecting, in each communication round, the best subset of users that can counterbalance the bias introduced by non-IID data. In \cite{9145182}, a joint optimization framework for sample selection and user selection was studied to keep a balance between the model accuracy and the cost. However, the distribution distance between different users was optimized in this framework by adjusting the local batch size, which might lead to the under-utilization of data at strongly skewed users. In \cite{userSelection_Straggler2020}, the problem of user selection to minimize the FL training time was investigated in cell-free massive multiple-input multiple-output (CFmMIMO) networks. Alternatively, sparsification schemes have been studied to reduce the entropy of the exchanged data (i.e., model updates) in FL communications. The authors in \cite{Sparse_Communication} presented an approach that accelerates the distributed stochastic gradient descent by exchanging sparse updates instead of dense updates. Indeed, they fixed the sparsity rate by only communicating the fraction of entries with the biggest magnitude for each gradient. In \cite{Felix2020}, the authors proposed a sparse ternary compression scheme designed to meet the requirements of the FL environment. The proposed scheme compresses both the upstream and downstream communications of FL, leveraging sparsification, ternarization, error accumulation, and optimal Golomb encoding. This study demonstrated the effect of communication compression and data distributions on the obtained performance. However, it considered neither the wireless resource allocation nor the edge-assisted FL architecture.
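The top-$k$ sparsification idea underlying \cite{Sparse_Communication} and the error accumulation used in \cite{Felix2020} can be sketched in a few lines; this is an illustrative sketch, not the authors' implementations:
\begin{verbatim}
import numpy as np

def top_k_sparsify(grad, k_ratio=0.01):
    """Keep only the largest-magnitude entries of a gradient; the
    dropped entries are kept locally as a residual (error
    accumulation) and re-injected in later rounds."""
    k = max(1, int(k_ratio * grad.size))
    idx = np.argpartition(np.abs(grad).ravel(), -k)[-k:]
    sparse = np.zeros(grad.size)
    sparse[idx] = grad.ravel()[idx]
    sparse = sparse.reshape(grad.shape)
    return sparse, grad - sparse

grad = np.random.randn(1000)       # stands in for a model update
residual = np.zeros_like(grad)
for round_ in range(3):
    # In practice a fresh gradient is computed every round; reusing
    # `grad` keeps the sketch short.
    update, residual = top_k_sparsify(grad + residual, k_ratio=0.01)
    # Only `update` (10 values plus their indices) is transmitted.
    print(round_, np.count_nonzero(update))
\end{verbatim}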
In \cite{FL_IOT}, the FedAvg scheme was adjusted to use a distributed form of Adam optimization, along with sparsification and quantization, in order to obtain a communication-efficient FedAvg.
\subsubsection{Edge-assisted Federated Learning \label{sec:Hierarchical}}
Few studies have been proposed so far to address the problem of non-IID data in the edge-assisted FL architecture. For instance, the authors in \cite{Client-Edge-Cloud} extended the work in \cite{Wang2019} to analytically prove the convergence of the edge-assisted federated averaging algorithm. This work was further extended in \cite{wu_accelerating_2020} considering probabilistic user selection to avoid the impact of \textit{stragglers}. In \cite{duan_self-balancing_2021}, a self-balancing FL framework, along with two strategies to prevent the training bias caused by imbalanced data distributions, was proposed. The first strategy performs data augmentation before training the model, in order to alleviate the global imbalance. The second strategy exploits mediators (which can be considered as edge nodes) to reschedule the training of the users based on the distribution distances between the mediators. In \cite{Naram2020}, the effect of skewed data in edge-assisted FL was studied and compared to that of centralized FL. Indeed, this work identified the major parameters that affect the learning performance of edge-assisted FL. However, it considered neither the resource allocation nor the wireless communication constraints, such as bandwidth, energy consumption, and latency. Table \ref{tab:Fl} presents the taxonomy of the federated learning techniques described in this section.
\begin{table*}[] \centering \footnotesize \tabcolsep=0.09cm \caption{Taxonomy of federated learning techniques.} \label{tab:Fl} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \textbf{Refs} & \textbf{Year} & \textbf{FL devices} & \textbf{Architecture} & \textbf{Trained model} & \textbf{Aggregation algorithm} & \textbf{Dataset} & \textbf{Targeted metrics} \\ \hline \cite{synchFL} & 2017 & \begin{tabular}[c]{@{}l@{}}Mobile\\ devices\end{tabular} & One-layer & \begin{tabular}[c]{@{}l@{}}- 2NN\\ - CNN \\ - LSTM\end{tabular} & FedAvg & \begin{tabular}[c]{@{}l@{}}- CIFAR-10 \cite{cifar}\\ - MNIST \cite{mnist} \end{tabular} & - Accuracy vs rounds \\ \hline \cite{zhao_federated_2018} & 2018 & \begin{tabular}[c]{@{}l@{}}Mobile and\\ IoT devices\end{tabular} & One-layer & - CNN & Enhanced FedAvg & \begin{tabular}[c]{@{}l@{}}- CIFAR-10\\ - MNIST\\ - KWS \cite{KWS} \end{tabular} & \begin{tabular}[c]{@{}l@{}}- Accuracy vs rounds\\ - Shared data\\ - Weight divergence\end{tabular} \\ \hline \cite{Convergence_FedAvg} & 2019 & End-users & One-layer & - Logistic regression & FedAvg & - MNIST & \begin{tabular}[c]{@{}l@{}}- Global loss vs rounds\\ - Rounds vs local epochs\end{tabular} \\ \hline \cite{Wang2019} & 2019 & Edge nodes & One-layer & \begin{tabular}[c]{@{}l@{}}- Squared-SVM\\ - Linear regression\\ - K-means\\ - CNN\end{tabular} & FedAvg & \begin{tabular}[c]{@{}l@{}}- MNIST \\ - Energy \cite{energy_data} \\ - User Knowledge \\ Modeling \cite{UKM} \\ - CIFAR-10\end{tabular} & \begin{tabular}[c]{@{}l@{}}- Loss vs nodes\\ - Accuracy vs nodes\end{tabular} \\ \hline \cite{flOptimizationModel} & 2019 & End-users & One-layer & \xmark & \begin{tabular}[c]{@{}l@{}}Non-weighted\\ averaging\end{tabular} & \xmark & \begin{tabular}[c]{@{}l@{}}- Communication vs \\ computation time\\ - Learning time vs energy\end{tabular} \\ \hline \cite{Nishio2019ClientSF} & 2019 & End-users &
One-layer & - CNN & Averaging & \begin{tabular}[c]{@{}l@{}}- CIFAR-10\\ - Fashion-MNIST \cite{fashionmnist} \end{tabular} & \begin{tabular}[c]{@{}l@{}}- Accuracy vs time\\ - Number of participants\end{tabular} \\ \hline \cite{wang_optimizing_2020} & 2020 & \begin{tabular}[c]{@{}l@{}}Mobile\\ devices\end{tabular} & One-layer & - CNN & \begin{tabular}[c]{@{}l@{}}FedAvg with user selection\\ (Favor)\end{tabular} & \begin{tabular}[c]{@{}l@{}}- MNIST\\ - Fashion-MNIST\\ - CIFAR-10\end{tabular} & - Accuracy vs rounds \\ \hline \cite{9145182} & 2020 & End-users & One-layer & \begin{tabular}[c]{@{}l@{}}- MLP\\ - CNN\end{tabular} & \begin{tabular}[c]{@{}l@{}}FedAvg\end{tabular} & - MNIST & - Accuracy vs rounds \\ \hline \cite{userSelection_Straggler2020} & 2020 & \begin{tabular}[c]{@{}l@{}}Mobile\\ devices\end{tabular} & One-layer & \xmark & \xmark & \xmark & \begin{tabular}[c]{@{}l@{}}- Transmission time\\ - Loss\end{tabular} \\ \hline \cite{Felix2020} & 2020 & \begin{tabular}[c]{@{}l@{}}Mobile \\ devices\end{tabular} & One-layer & \begin{tabular}[c]{@{}l@{}}- VGG11\\ - CNN\\ - LSTM \\ - Logistic regression\end{tabular} & \begin{tabular}[c]{@{}l@{}}Weighted averaging with \\ Top-k sparsified communication\end{tabular} & \begin{tabular}[c]{@{}l@{}}- CIFAR\\ - KWS\\ - MNIST\\ - Fashion-MNIST\end{tabular} & \begin{tabular}[c]{@{}l@{}}- Communication delay\\ - Accuracy\end{tabular} \\ \hline \cite{FL_IOT} & 2020 & \begin{tabular}[c]{@{}l@{}}IoT\\ devices\end{tabular} & One-layer & \begin{tabular}[c]{@{}l@{}}- 2NN\\ - CNN\end{tabular} & \begin{tabular}[c]{@{}l@{}}communication-efficient\\ FedAvg (CE-FedAvg)\end{tabular} & \begin{tabular}[c]{@{}l@{}}- CIFAR-10\\ - MNIST\end{tabular} & \begin{tabular}[c]{@{}l@{}}- Uploaded data\\ - communication rounds\\ - convergence time\end{tabular} \\ \hline \cite{9148776} & 2020 & UAVs & One-layer & \xmark & FedAvg & \xmark & - Rounds vs bandwidth \\ \hline \cite{9143577} & 2020 & UAVs & One-layer & - FCN & FedAvg & CRAWDAD \cite{CRAWD} & \begin{tabular}[c]{@{}l@{}}- Accuracy vs rounds\\ - local learning time\end{tabular} \\ \hline \cite{9159929} & 2020 & UAVs & One-layer & - CNN & FedAvg & - MNIST & - Utility of participants \\ \hline \cite{9184079} & 2020 & UAVs & One-layer & \begin{tabular}[c]{@{}l@{}}- LSTM\\ - GRU\\ - AQNet \cite{aqnet} \end{tabular} & FedAvg & \begin{tabular}[c]{@{}l@{}}- Ground and aerial \\Sensing Data collected \\ by authors\end{tabular} & - Energy consumption \\ \hline \cite{Client-Edge-Cloud} & 2020 & End-users & Edge-assisted & - CNN & Hierarchical FedAvg & \begin{tabular}[c]{@{}l@{}}- CIFAR-10\\ - MNIST\end{tabular} & \begin{tabular}[c]{@{}l@{}}- Accuracy vs epochs\\ - Training time\\ - Energy consumption\end{tabular} \\ \hline \cite{wu_accelerating_2020} & 2020 & End-users & Edge-assisted & \begin{tabular}[c]{@{}l@{}}- FCN \\ - LeNet-5\end{tabular} & \begin{tabular}[c]{@{}l@{}}Weighted averaging with\\ Effective Data Coverage (EDC)\end{tabular} & \begin{tabular}[c]{@{}l@{}}- MNIST\\ - Aerofoil \cite{Airfoil}\end{tabular} & \begin{tabular}[c]{@{}l@{}}- Accuracy vs rounds\\ - Training time\\ - Energy consumption\end{tabular} \\ \hline \cite{duan_self-balancing_2021} & 2021 & \begin{tabular}[c]{@{}l@{}}Mobile\\ devices\end{tabular} & Edge-assisted & - CNN & FedAvg & \begin{tabular}[c]{@{}l@{}}- EMNIST \cite{EMNIST}\\ - CINIC-10 \cite{CINIC}\\ - CIFAR-10\end{tabular} & \begin{tabular}[c]{@{}l@{}}- Accuracy vs rounds\\ - Accuracy vs epochs\\ - Storage requirement\end{tabular} \\ \hline \cite{Naram2020} & 2021 & End-users & Edge-assisted &
\begin{tabular}[c]{@{}l@{}}- FCN\\ - CNN\end{tabular} & FedAvg & \begin{tabular}[c]{@{}l@{}}- MNIST\\ - Fashion-MNIST\\ - CIFAR-10\end{tabular} & \begin{tabular}[c]{@{}l@{}}- Accuracy vs rounds\\ - Accuracy vs edge \\ distance distribution\\ - Speed\end{tabular} \\ \hline \end{tabular} \end{table*}
\subsection{Use case: Learning in the sky}\label{sec:Cases}
\begin{figure}[h!] \centering \scalebox{1.6}{\frame{\includegraphics[width=0.54\columnwidth]{Figures/FL_2.pdf}}} \caption{An example of FL applications in a UAV-assisted environment.} \label{fig:FL_CS} \end{figure}
Nowadays, deep learning has been widely used in Flying Ad-hoc Networks (FANETs). Different tasks can be executed using DL techniques in UAV swarms, such as coordinated trajectory planning \cite{9148776} and jamming attack defense \cite{9143577}. However, forwarding the large amount of generated data from the UAV swarm to a centralized entity, e.g., a ground base station, incurs massive network communication overheads, which makes implementing centralized DL challenging. As a promising solution, FL was introduced within UAV swarms in several studies \cite{9148776, 9143577, 9159929, 9184079} to avoid transferring raw data, while forwarding only the locally trained model updates to the centralized entity. Model aggregation and the generation of a global FL model are the responsibility of the centralized entity, which also sends this global model back to all participants over the intra-swarm network (see Fig. \ref{fig:FL_CS}). In \cite{9148776}, the authors present an FL framework for a swarm of wirelessly connected UAVs flying at the same altitude. The considered swarm includes a leader UAV and a set of follower UAVs. It is assumed that each follower collects data while flying and implements FL for executing inference tasks such as trajectory planning and cooperative target recognition. Hence, each follower exploits its gathered data to train its own learning model, then forwards its model updates to the leading UAV. All received models are then aggregated at the leading UAV to generate a global FL model, which will be used by the follower UAVs in the next iteration. Interestingly, \cite{9148776} investigates the impact of wireless factors (such as fading, transmission delay, and UAV antenna angle deviations) on the performance of FL within the UAV swarms. The authors present the convergence analysis of FL while highlighting the communication rounds needed to obtain FL convergence. Using this analysis, a joint power allocation and scheduling optimization problem is then formulated and solved for the UAV swarm network in order to minimize the FL convergence time. The proposed problem considers the resource limitations of UAVs in terms of: (1) the strict energy limitations due to the energy consumed by learning, communications, and flying during FL convergence; and (2) the delay constraints imposed by the control system that guarantees the stability of the swarm.
\subsection{Lessons learned}
Despite the rapid development of diverse DL techniques in different areas, a major challenge remains: How can we efficiently leverage the massive amount of data generated by pervasive IoT devices for training DL models if these data cannot be shared/transferred to a centralized server?
\begin{itemize} \item FL has emerged as a promising privacy-preserving collaborative learning scheme to tackle this issue by enabling multiple collaborators to jointly train their deep learning models, using their locally acquired data, without revealing that data to a centralized server \cite{Felix2020}. However, a major dilemma in FL is the large communication overhead associated with transferring the model updates. Typically, by following the main steps of the FL protocol, every node or collaborator has to send a full model update in every communication round. Such an update has the same size as the trained model, which can be in the range of gigabytes for densely connected DL models \cite{9123563}. Given that a large number of communication rounds may be needed to reach FL convergence on big datasets, the overall communication cost of FL can become unproductive or even infeasible. Thus, minimizing the communication overheads associated with the FL process is still an open research area. \item We also remark that, despite the considerable number of studies providing significant insights into different FL scenarios and user selection schemes from a theoretical perspective, optimizing the performance and wireless resource usage of edge-assisted FL is still missing. Most of the existing schemes for FL suffer from slow convergence. Also, considering FL schemes in highly dynamic networks, such as vehicular networks, or resource-constrained environments, such as healthcare systems, is still challenging. \end{itemize}
\begin{figure}[!h] \centering \includegraphics[scale=0.635]{Figures/OL_1.pdf} \caption{Outline of online learning.} \label{OL_outline} \end{figure}
\section{Online learning}\label{OL}
In this section, we discuss three prominent online learning formulations: Multi-agent Reinforcement Learning (MARL), the Multi-agent Bandit (MAB) problem, and Active Learning (AL). The ``Multi-agent'' prefix indicates the existence of multiple collaborative agents\footnote{Note that while competitive settings can also be modeled, the focus of this section is on systems that aim to jointly optimize an objective function with minimum resource utilization (pervasive AI systems). Thus, competitive and zero-sum games will not be deeply surveyed.} that aim to optimize a specific criterion by learning from past experience (i.e., past interactions). We note that MARL and bandits were originally proposed to model a single agent interacting with the environment and aiming to maximize its reward. However, in pervasive systems, where there are numerous but resource-limited agents (i.e., devices), collaboration becomes essential to leverage the potential of the collective experience of these devices. Motivated by the prevalence of collaboration in pervasive systems, we review in this section distributed MAB and MARL algorithms from a resource utilization perspective. We also highlight that AL can be performed both as offline and online learning, although the latter is considered the cutting edge of the technique. Thus, in this paper, we focus on the online execution of AL, where the training dataset is updated progressively in real time. As has been the case throughout the paper, we are interested in the performance/resource-management trade-offs. Specifically, we propose a taxonomy based on the obtained performance under specific resource budgets (e.g., communication rounds). Fig. \ref{OL_outline} shows the taxonomy of the online learning techniques discussed in this section.
\subsection{Multi-agent multi-arm bandit learning}
In this section, we first provide technical definitions of the single-agent bandit problem and then explain its multi-agent version.
\subsubsection{Overview}
\begin{figure}[!h] \centering \includegraphics[scale=0.31]{Figures/multi-bandits_3.pdf} \caption{The basic bandit problem: a set of actions corresponding to different reward distributions.} \label{multi-bandits} \end{figure}
The bandit problem, introduced earlier in section \ref{AI}, is given in Algorithm \ref{alg:bp} and visually illustrated in Fig. \ref{multi-bandits}. Fundamentally, there exists a set of actions $\mathcal{K}$ (10 actions in the figure), where each action $a$ results in a reward sampled from a distribution $\mathcal{D}_a$ (Gaussians in the example illustrated in Fig. \ref{multi-bandits}).
\begin{algorithm} \caption{Basic bandit problem} \label{alg:bp} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \Require The set of $K$ actions (arms) $\mathcal{K}$. \State \textbf{for} each round $t\in[T]$: \State \hskip1em Algorithm picks an action $a_t$ \State \hskip1em Environment returns a reward $r_t \sim \mathcal{D}_{a_t}$ \end{algorithmic} \end{algorithm}
The problem instance is fully specified by the time horizon $T$ and the mean reward vector (the vector of the expected reward of each action/arm) $\boldsymbol{\mu}=(u_a)_{a\in \mathcal{K}}$, where $u_a = \mathbb{E}[\mathcal{D}_a]$. The optimal policy is simply choosing the action whose expected value is the highest, i.e., $a_* = \mathop{\mathrm{arg\,max}}_a u_a$. However, as this action is not known a priori ($\mathcal{D}_a$ is not known), it has to be estimated online from samples. Thus, it is inevitable that some sub-optimal actions will be picked while building certainty about the optimal one. A reasonable performance measure is the \emph{regret}, which is defined as the difference between the optimal policy's cumulative rewards and the cumulative rewards achieved by a solution algorithm \begin{equation} R_T = \underbrace{u_* \times T}_{\text{\shortstack{Optimal policy's\\cumulative rewards}}} - \underbrace{\sum_{t=1}^T u_{a_t}}_{\text{\shortstack{An algorithm's cumulative\\ rewards}}} \end{equation}
\begin{figure*}[!h] \centering \includegraphics[scale=0.39]{Figures/FLBandit3.pdf} \caption{Multi-agent bandits formulations: (a) Distributed bandits: each agent collaborates with others to identify the best action in the same environment. (b) Federated bandits: each agent collaborates with others to identify the best global action using biased local samples. In this example, the local environments were generated (e.g., sampled) from a global one.} \label{fed-bandits} \end{figure*}
In other words, the regret $R_T$ is the sum of \emph{per-step regrets}. The per-step regret at time step $t$ is simply the difference between the best action's expected reward $u_*$ and the expected reward $u_{a_t}$ of the action chosen by the algorithm at step $t$. Thus, it represents how much reward is missed because the best action is not known and has to be estimated from samples. Solution algorithms typically prove a sub-linear regret growth (i.e., the per-step regret goes to zero as time progresses; in this way, learning is achieved). The best achievable regret bound for the described bandit problem was proven to be $O(\log T)$ \cite{lattimore2020bandit}.
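As a concrete illustration of how such logarithmic regret is attainable, the following minimal sketch implements the optimism-based UCB1 strategy surveyed next; the Bernoulli arms and the exploration constant follow the standard textbook form and are illustrative:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
means = np.array([0.3, 0.5, 0.7])     # unknown mean rewards u_a
K, T = len(means), 10_000
counts, sums = np.zeros(K), np.zeros(K)

for t in range(1, T + 1):
    if t <= K:
        a = t - 1                     # play each arm once first
    else:
        # Optimism in the face of uncertainty: empirical mean plus
        # a confidence radius that shrinks as an arm is sampled.
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        a = int(np.argmax(ucb))
    r = rng.binomial(1, means[a])     # reward sampled from D_a
    counts[a] += 1
    sums[a] += r

regret = means.max() * T - (means * counts).sum()
print(regret)                         # grows only logarithmically in T
\end{verbatim}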
Several solution algorithms with optimal performance guarantees have been proposed in the literature \cite{lattimore2020bandit}, which fall generally into two categories: explore-then-commit and optimism-based algorithms. The explore-then-commit class, such as the successive elimination algorithm, acts in phases and eliminates arms using increasingly sensitive hypothesis tests. On the other hand, optimism-based algorithms, such as the Upper Confidence Bound (UCB) algorithm, build confidence intervals for the reward of each action and select the action with the highest upper bound. The asymptotic performance of both classes is similar. Note that performance guarantees are also classified into instance-dependent bounds, which depend on problem information such as the gap between the best and second-best arms, and instance-independent (i.e., worst-case) regret bounds. These algorithms are recently being extended to model pervasive systems through two main MAB formulations: \emph{distributed} and \emph{federated} bandits, as shown in Fig. \ref{fed-bandits}. In distributed bandits, agents aim to solve the same bandit instance (i.e., quickly discover the best action), represented by the action set and the corresponding generating distributions. Meanwhile, in the federated bandit settings, agents handle different bandit instances and utilize each other's experiences to solve them. While the terms used to describe the exact problem are sometimes ambiguous in the literature (i.e., distributed, federated, and decentralized are sometimes used interchangeably), in this work, we adopt the recent convention of reserving the term federated for the case where each agent faces a different (but related to the others) problem instance, while keeping the term distributed for the case where the instance is the same but the decision making is distributed across agents.
\subsubsection{Distributed Bandits Formulations}
In many bandit problem instances, it is appealing to employ more agents to learn collaboratively and concurrently to speed up the learning process. In the distributed bandit problem, there exists a set of agents $[M]$ collaborating to solve the \emph{same} bandit instance (the $K$ arms are the same). These agents communicate according to some networking topology. In many contexts, the sequential decision-making problem at hand is distributed by nature. For example, we can consider a recommender system deployed over multiple servers in different locations. While every server aims to always recommend the best item, it is intuitive to reuse each other's experiences and cut the time needed to learn individually. Furthermore, since their communication may violate the latency constraints, it is desirable that this collaboration and reuse of experience require a minimum communication overhead. While classical single-agent bandit algorithms date back to around 2002, their multi-agent counterparts are much more recent, with new state-of-the-art algorithms still being proposed. The work in \cite{kanade2012distributed} initiated the interest in the communication-regret trade-off. The authors established a non-trivial bound on the regret, given an explicitly stated bound on the number of exchanged messages. However, they focused on the full-information setting, assuming that the agents observe the rewards of all actions at each round, and not only that of the picked action, as is the case in bandit settings. Nonetheless, this work initiated the interest in studying the same trade-off under the bandit settings.
The authors of \cite{hillel2013distributed} considered the partial feedback (i.e., bandit settings) and presented an optimal trade-off between performance and communication. This work did not consider regret as the performance criterion, but rather assumed the less common ``best arm identification'' setup, where the goal is to purely explore in order to eventually identify the best arm with high probability after some number of rounds. The authors in \cite{pmlr-v28-szorenyi13} studied the regret of distributed bandits with a gossiping-based P2P communication specialized to their setup, where, at every step, each agent communicates only with two randomly selected agents. \cite{kolla2018collaborative} studied the regret under the assumption that the reward obtained by each agent is observable by all its neighbors. \cite{landgren2016distributed} proposed a collaborative UCB algorithm on a graph-network of agents and studied the effect of the communication graph topology on the regret bound. \cite{martinez2019decentralized} improved this line of work, as their approach requires less global information about the communication graph, by removing the graph-dependent factor multiplying the time horizon in the regret bound. Other works go beyond merely studying the effect of the network topology on the regret bound and explicitly account for the communication resources used. The authors in \cite{wang2020optimal} deduced an upper bound on the number of needed communicated bits, proving the ability to achieve the regret bound of \cite{martinez2019decentralized} with a finite number of communicated bits. However, the interesting question, particularly from the perspective of pervasive system design, is whether the use of communication resources can also be bounded, i.e., can the order of the optimal regret bound be guaranteed with a maximum number of communicated bits/messages? The work in \cite{Sankararaman2019Social} established the first logarithmic upper bound on the number of communication rounds needed for an optimal regret bound. The authors considered a complete-graph network topology, wherein the agents are initialized with disjoint sets of arms. As time progresses, a gossiping protocol is used to spread the best-performing arm among agents. The authors showed that, with high probability, all agents will be aware of the best arm while communicating progressively less often (over halving periods). The authors generalized this work with a sequel formulation \cite{chawla2020gossiping}, which relaxes the assumption of a complete graph and introduces the option for agents to pull information. However, this approach still uses the same gossiping style of communication. According to \cite{agarwal2021multi}, this dependence on pair-wise gossiping communication results in a sub-optimal instance-independent regret bound. The authors in \cite{wang2019distributed} focused on the regret-communication trade-off in the distributed bandit problem. The networking model utilizes a central node that all agents communicate with. Initially, agents work independently to eliminate bad arms. Then, they start communicating with the central node at the end of each epoch, where the epochs' duration grows exponentially, leading to a logarithmic bound on the number of needed messages.
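The mechanism behind this logarithmic communication bound is easy to see in isolation; the following tiny sketch is illustrative and not the protocol of \cite{wang2019distributed}:
\begin{verbatim}
# Epoch-based communication: rounds are grouped into epochs of
# doubling length, and agents synchronize with the central node only
# at epoch boundaries, so T steps cost only O(log T) messages.
T = 1_000_000
epoch, step, syncs = 1, 0, 0
while step < T:
    step += epoch          # play the current policy for a whole epoch
    syncs += 1             # one synchronization at the epoch's end
    epoch *= 2             # the next epoch is twice as long
print(step >= T, syncs)    # about log2(T) = 20 synchronizations
\end{verbatim}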
\begin{table*}[!h] \centering \footnotesize \tabcolsep=0.09cm \caption{Multi-agent stochastic bandit learning literature.} \label{table:MAB_previous_work} \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Refs} & \shortstack{\textbf{Problem} \\ \textbf{Formulation}} & \shortstack{\textbf{Communication} \\ \textbf{Model}} & \shortstack{\textbf{Communication}\\\textbf{Guarantee}} & \textbf{Regret Guarantee} & \textbf{Method}\\ \hline P2P-$\epsilon$-Greedy \cite{pmlr-v28-szorenyi13} & DB & Two neighbors on a graph & $O(T)$& $O(T)$ & Gossiping arm estimates. \\ \hline coop-UCB2 \cite{landgren2016distributed} & DB & Neighbors on a graph & $O(T)$& $O(\log T)$ & \shortstack{A running consensus on \\the estimates of arms' total rewards.} \\ \hline UCB-Network \cite{kolla2018collaborative} & DB & \shortstack{Multiple \\ (Graph and centralized)} & $O(T)$& $O(\log T)$ & \shortstack{Identifying and utilizing \\dominating sets in the network.} \\ \hline DDUCB \cite{martinez2019decentralized} & DB & Neighbors on a graph & $O(T)$& \shortstack{$O(\log T)$ \\ (with improved constants)}& \shortstack{A running consensus on \\the estimates of arms' total rewards.} \\ \hline \cite{Sankararaman2019Social} & DB & \shortstack{Single neighbors \\ on a complete graph} & $O(1)$& $O(\log T)$ & \shortstack{Gossiping according to \\local Poisson clocks.} \\ \hline GosInE \cite{chawla2020gossiping} & DB & \shortstack{Neighbors on \\ a complete graph}& $\Omega(T)$& $O(\log T)+C_G$ & \shortstack{Gossiping and information \\ pulling.}\\ \hline DPE2 \cite{wang2020optimal} & DB & Neighbors on a graph & $O(1)$& $O(\log T)$ & \shortstack{Leader-election to handle \\exploration (exploration is centralized).} \\ \hline DEMAB \cite{wang2019distributed} & DB & Centralized coordinator & $O(\log T)$& $O(\log T)$ &\shortstack{ Utilizing public randomness \\to divide arms among clients. }\\ \hline LCC-UCB \cite{agarwal2021multi} & DB & \shortstack{Multiple \\(Graph and centralized)} & $O(\log T)$& $O(\log T)$ &\shortstack{Communicating arm IDs at the end \\of epochs of doubling length.}\\ \hline \cite{shahrampour2017multi} & FB & Neighbors on a graph & $O(\log T)$ & $O(\log T+C_G)$ & \shortstack{Selecting the best arm \\according to voting.}\\ \hline GossipUCB \cite{zhu2021federated} & FB & Neighbors on a graph &$O(T)$ & \shortstack{$O(\max\{\log T,\log_{C_G}N\})$} & \shortstack{Maintaining a local belief \\that is updated through gossiping.} \\ \hline \cite{shi2021federated} & FB & Centralized coordinator & $O(\log T)$ & $O(\log T)$ & \shortstack{Aggregating estimates through \\the controller until a fixed point of time.} \\ \hline \cite{shi_federated_2021} & FB & Centralized coordinator & $O(\log T)$ & $O(\log T)$ & \shortstack{Mixed target learning objective\\ based on local and global objectives.} \\ \hline \cite{wang2020stochastic} & FB & Neighbors on a graph& $O(T)$& $O(\log T)$& \shortstack{Agents use estimates of their\\ neighbors weighted by a similarity metric.}\\ \hline \end{tabular} \end{table*}
The work in \cite{agarwal2021multi} presents a state-of-the-art distributed bandit learning algorithm. The authors proposed algorithms for both fully connected and partially connected graphs (i.e., assuming that every agent can broadcast to everyone, or that agents can communicate only with a subset of the others). Similar to elimination-based algorithms, the proposed algorithm proceeds with epochs of doubling lengths, only communicating at the end of an epoch, thus guaranteeing a logarithmic need for communication resources.
The communicated messages contain only the ID of the action played most often. Furthermore, the regret is proved to be optimal even for instance-independent problems, for reasonable values of the time horizon (i.e., $\log(T)>2^{14}$). During each epoch, agents maintain a set of arms that were recommended by other agents at the end of previous epochs and implement a UCB algorithm among these.
\subsubsection{Federated Bandits Formulation}
The federated bandit formulation, shown in Fig. \ref{fed-bandits} (b), is a recently emerging framework akin to the federated learning framework discussed earlier. In this formulation, there exists a set of agents, each one facing a \emph{different} instance of the bandit problem (but the instances are related to each other). This is different from the distributed bandit formulation discussed in the previous sub-section, where a set of agents collaborate to solve the \emph{same} instance of the multi-arm bandit. Recall that a bandit instance is determined by the mean vector $\boldsymbol{\mu}$. Thus, in the federated bandit settings, the local bandit instance is a \emph{noisy} and potentially \emph{biased} observation of the mean vector. In addition, the collaboration is necessary, as even perfect local learning algorithms might not perform adequately due to their biased observations. The setting of federated bandits was first proposed by \cite{shahrampour2017multi} (although not under the same term). The authors proposed an algorithm where agents agree on the best global arm and all play it at the beginning of each round. In this way, communication is needed at the beginning of every round. Recently, \cite{zhu2021federated} studied this federated setting, where the global arm mean vector is the average of the local ones. Although the authors did not propose a bound on the number of messages that need to be exchanged, the communication model considered a partially connected graph, where each agent communicates only with its neighbors, with a focus on constrained communication resources. The algorithm contains two main steps: First, each agent shares a set of local information with its neighbors (the number of times each arm was sampled and its sample mean). Second, a gossip update is performed, where each agent incorporates the information received from its neighbors in updating its estimate of each arm's mean. \cite{shi_federated_2021} presented a more general formulation, where the global mean vector is not necessarily the average of the local ones. Instead, the local means are themselves \emph{samples} from a distribution whose mean is unknown. The local observations for each agent are, in turn, samples from the local distributions. The communication model is similar to supervised federated learning, where agents communicate periodically with an orchestrator that updates the estimates of the arms' payoffs and instructs the agents on which arms to keep and which to eliminate. Although the communication is periodic, the total number of communication rounds is bounded (logarithmic with respect to the horizon $T$). This is because the number of agents incorporated in the learning process decays exponentially with time. Such an approach works since the average of the clients' local means concentrates exponentially fast around the global mean (a known result from probability concentration analyses). A setting that is slightly different from the federated bandits was studied in \cite{wang2020stochastic}.
The difference is that, although agents have similar yet not identical local models, the reward for each agent is actually sampled from its own local distribution. Thus, each agent is trying to identify the best arm of its local instance, using information from the other agents about similar arms. This is in contrast with the other works presented here, where the agents' rewards are sampled from a \emph{global} distribution that they are collaboratively trying to estimate from biased local observations. Table \ref{table:MAB_previous_work} summarizes the works on MAB problems. It lists the problem formulation: distributed bandits (DB) or federated bandits (FB), the communication model (i.e., the network topology), the communication guarantee (i.e., the number of messages needed to achieve the performance), the regret guarantee (i.e., the growth of the regret with respect to the time horizon), and the method (i.e., the main principle behind the algorithm). $C_G$ denotes a constant related to the communication graph or gossiping matrix.
\subsubsection{Lessons Learned}
\begin{itemize} \item Communication-cognizant multi-agent bandit formulations: Online-learning systems need to account for the communication resources. Thus, recent works do not only analyze regret but also explicitly reason about the necessary communication resources. This is manifested through two main observations. First, the derived regret guarantees are always affected by the networking topology (e.g., parameters representing the connectivity of a communication graph, the number of involved agents, or the number of exchanged messages). Second, every regret guarantee is accompanied by an upper bound on the communication resource usage (e.g., the maximum number of exchanged messages or exchanged bits). \item Towards the federation of the bandit framework: When the bandit instances faced by each agent are local biased instances, the federated bandits framework arises. In such a situation, agents need to learn with the help of a logically centralized controller, similar to supervised federated learning, in order to estimate the true global instance and the true best action \cite{shi_federated_2021}. However, if agents are not interested in solving a hidden global instance but rather only their own, they may reuse their peers' experience and an instance-similarity metric to help them solve their own instances \cite{wang2020stochastic}. \end{itemize}
\subsection{Multi-agent reinforcement learning}
This section presents an overview of Multi-agent Reinforcement Learning (MARL) from a pervasive system perspective. We specifically focus on the \emph{communication-performance} trade-off and classify previous works according to their approach to handling this trade-off. We note that our perspective is different from previous surveys (e.g., \cite{hernandez2019survey}, \cite{9043893}), which studied the technical merits and demerits of the learning algorithms. Instead, we are interested in the \emph{systems} aspects of the considered works. That is, what communication topology and protocol are used between agents, and how do these choices affect the performance (the rewards obtained by all agents)?
\subsubsection{Overview}
\begin{figure}[!h] \centering \includegraphics[scale=0.6]{Figures/multi-agents1.pdf} \caption{MARL framework: multiple autonomous agents act on an environment and observe (parts of) its state, and a, potentially different, reward signal.} \label{fig:MARL} \end{figure}
Unlike MAB formulations, in reinforcement learning, we have a \emph{state space}, which is the set of all possible states (i.e., configurations) the environment might be in, along with a \emph{transition operator}, which describes the distribution over the next states given the current state and the performed actions. Therefore, agents need not only to detect the best actions that maximize the immediate reward (the bandit objective), but also to account for the possible next states, as they might be arbitrarily bad/good regardless of the current one. Hence, in MARL, the collaborative agents aim to maximize the current and \emph{future} expected sum of rewards. MARL problems, visualized in Fig. \ref{fig:MARL}, are often modeled as a Partially Observable Markov Game (POMG) \cite{POMG}, which is a multi-agent extension of the Partially Observable Markov Decision Process (POMDP). POMGs are represented by the tuple $(\mathcal{N},\mathcal{S},\mathcal{A},\mathcal{O},\mathcal{T},\mathcal{R}, \gamma)$, where: \begin{itemize} \item $\mathcal{N}$ is the set of all agents. \item $s_t \in \mathcal{S}$ is a possible configuration of all the agents at time $t$. \item $a_t \in \mathcal{A}$ is a possible action vector of the agents, where $\mathcal{A} = \mathcal{A}_1 \times \mathcal{A}_2 \times ....\times \mathcal{A}_N$. \item $o_t \in \mathcal{O}$ is a possible observation of the agents, where $\mathcal{O} = \mathcal{O}_1 \times \mathcal{O}_2 \times .... \times \mathcal{O}_N$. \item $\mathcal{T}: \mathcal{S} \times \mathcal{A} \mapsto \mathcal{S}$ is the state transition probability. \item $\mathcal{R}$ is the set of rewards of all the agents, $r_i: \mathcal{O} \times \mathcal{A} \mapsto \mathbb{R}$. \item $\gamma$ is the reward discount factor. \end{itemize} Each agent aims to find a policy $\pi_n$ that maximizes its own reward. In cooperative scenarios, the policy aims at maximizing the total reward. If the rewards are not the same for all agents, the framework is referred to as a mixed setting; when the rewards are the same for all agents (i.e., $r_n=r \quad \forall n \in \mathcal{N}$), the POMG is collaborative and corresponds to a Decentralized-POMDP. In the following, we discuss algorithms that might work in one or both of these settings. The main focus will be on the communication aspects (i.e., topology and cost) of MARL algorithms. There exist results on the hardness of solving the POMG under several settings; we can cite the case of a tabular representation of the spaces and the cases where function approximation is used (linear or nonlinear). The main solution approaches are policy gradient and value-based methods \cite{sutton2018reinforcement}. Policy gradient methods parametrize the agents' policies within a class and utilize gradient-based optimization of an objective function (i.e., the reward obtained by the policy). Value-based methods aim to generalize the famous Q-learning algorithm to the multi-agent settings, either by making each agent learn its own Q-function and treat the others as a part of a non-stationary environment, or by learning a global Q-function.
The optimization in policy gradient methods is done on the objective function $J(\boldsymbol{\theta}) \doteq v_{\pi_{\boldsymbol{\theta}}}(s_0)$, which is the value of starting from the initial state $s_0$ and following the parametrized policy $\pi_{\boldsymbol{\theta}}$ thereafter. The gradient of this function can be written as: \begin{equation} \nabla J(\boldsymbol{\theta}) \propto \mathbb{E}\left[(G_t - b(S_t)) \frac{\nabla \pi (A_t \mid S_t, \boldsymbol{\theta}_t)}{\pi (A_t \mid S_t, \boldsymbol{\theta}_t)}\right], \end{equation} where $G_t$ is the return from time $t$ and, as shown in the policy gradient algorithm \cite{sutton2018reinforcement}, $b$ is any function of the state, referred to as the baseline. If the baseline is the zero function, the resulting update is the REINFORCE algorithm. Another popular option is the value function of the state. If this state-value function is updated through bootstrapping, the resulting method is called actor-critic. Thus, actor-critic methods are policy gradient methods that use the state-value function as a baseline ($b(s)=V(s)$) and update this function through bootstrapping. Readers may refer to \cite{8103164} for more details and a comparison between these approaches. As will be clarified next, each work tunes different parts of these main solution approaches according to the application. \subsubsection{Centralized Training and Decentralized Execution (CTDE)} The Centralized Training and Decentralized Execution (CTDE) approach was originally proposed in \cite{oliehoek2008optimal}. It leverages the assumption that, in most application scenarios, the initial training is done on centralized simulators, where agents can communicate with each other at no communication cost. This phase is denoted as centralized training. Then, at deployment, agents are assumed either not to communicate at all or to conduct only limited communication, relying on their ``experience'' from the training phase to execute their collaborative policies. \paragraph{Communication only at training} The advantage of such an approach is that it requires no communication between agents upon deployment and thus incurs no communication cost. However, this comes at the cost of losing adaptability, which is the major motivation behind online learning. Such a loss might occur in case of a major shift in the environment model between training and deployment, where the learned coordinated policies are no longer performant and new coordination is needed. The main workaround is to monitor the agents' performance and re-initiate the centralized training phase to learn new coordinated policies whenever needed. This approach has been popularized by recent methods such as VDN \cite{sunehag2018value}, QMIX \cite{rashid2018qmix}, and QTRAN \cite{son2019qtran}. These works adopt the \emph{value function factorization} technique, where factorizations of the global value function in terms of individual value functions (i.e., depending only on local observations) are learned during centralized training. Then, the global function (i.e., neural network) can be discarded at execution time, and each agent utilizes only its local function. When each agent acts greedily according to its local network, global optimality can still be guaranteed since, during the training phase, these local networks were trained according to gradient signals with respect to the global reward; a minimal sketch of this factorization idea is given below.
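The following sketch illustrates the additive factorization at the heart of VDN-style methods, where the joint value is the sum of per-agent utilities trained against the shared global reward. It is our simplified illustration (network sizes, dimensions, and the dummy batch are assumptions), not the full algorithm of \cite{sunehag2018value}.
\begin{verbatim}
import torch
import torch.nn as nn

class LocalQNet(nn.Module):
    """Per-agent utility: local observation -> Q-value per action."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, obs):
        return self.net(obs)

n_agents, obs_dim, n_actions, gamma = 3, 8, 4, 0.99
agents = [LocalQNet(obs_dim, n_actions) for _ in range(n_agents)]
opt = torch.optim.Adam([p for a in agents for p in a.parameters()],
                       lr=1e-3)

def q_tot(observations, actions):
    # VDN factorization: the joint value is the SUM of local utilities,
    # so greedy local action selection is also greedy for the joint value.
    qs = [a(o).gather(1, act.unsqueeze(1)).squeeze(1)
          for a, o, act in zip(agents, observations, actions)]
    return torch.stack(qs).sum(dim=0)

# One TD update on a dummy batch (shared global reward).
batch = 32
obs = [torch.randn(batch, obs_dim) for _ in range(n_agents)]
next_obs = [torch.randn(batch, obs_dim) for _ in range(n_agents)]
acts = [torch.randint(n_actions, (batch,)) for _ in range(n_agents)]
reward = torch.randn(batch)

with torch.no_grad():                          # bootstrapped joint target
    next_q = torch.stack([a(o).max(dim=1).values
                          for a, o in zip(agents, next_obs)]).sum(dim=0)
loss = ((reward + gamma * next_q - q_tot(obs, acts)) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()   # gradients reach all local nets
\end{verbatim}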
Another approach to solving the POMG is actor-critic. The CTDE version of actor-critic approaches is represented by learning a centralized critic, which evaluates the global action, and a decentralized policy network (actor), which outputs an action based only on the local observation. During training, the actor and critic networks are jointly learned, and hence the global critic ``guides'' the training of the actors. Then, at execution, the global critic may be discarded, and only the actors are used; a minimal sketch of this centralized-critic idea follows the table. The work in \cite{lowe2017multi} presents a multi-agent deep deterministic policy gradient (MADDPG) method that follows the described approach, where each agent learns a centralized critic and a decentralized actor. Similarly, \cite{foerster2018counterfactual} follows the same approach, but all agents share the same critic. Multiple other variations build on the same DDPG algorithm, aiming either to enhance performance by incorporating an attention mechanism \cite{iqbal2019actor}, or to limit the use of communication resources (e.g., a limited budget on the number of messages, or messages restricted to (a part of) an agent's state) \cite{wang2020r}. \begin{table*} \centering \footnotesize \tabcolsep=0.09cm \caption{Communication-Cognizant Multi-Agent Reinforcement Learning literature.} \label{table:rl_previous_work} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Refs} & \textbf{Framework} & \shortstack{\textbf{Learning} \\ \textbf{algorithm}} & \shortstack{\textbf{Communication}\\\textbf{scheme}} & \shortstack{\textbf{Communication}\\\textbf{decision}} \\ \hline \shortstack{VDN \cite{sunehag2018value}, QMIX \cite{rashid2018qmix},\\ QTRAN \cite{son2019qtran}} & CTDE & Value-based &NA& \shortstack{Always during training,\\ None at execution.} \\ \hline \shortstack{MADDPG \cite{lowe2017multi}, \\ COMA\cite{foerster2018counterfactual}} & CTDE & Actor-critic-based &NA& \shortstack{Always during training, \\ None at execution.} \\ \hline \cite{wang_learning_2020} IMAC & CTDE with learned comm. & Policy gradient & \shortstack{Learned source \\ and destination} &\shortstack{At every step (limited size)} \\ \hline \cite{jiang2018learning} ATOC & CTDE with learned comm. & Actor-critic-based & \shortstack{Gated communication \\with neighbors} &\shortstack{When network topology \\changes.} \\ \hline \cite{singh2018learning} IC3Net & CTDE with learned comm. & Policy gradient & \shortstack{Gated communication \\with neighbors} &\shortstack{ Communicate when necessary, \\possibly many messages per round. } \\ \hline \cite{mao_learning_2020} ACML &CTDE with learned comm. & Actor-critic-based & \shortstack{Gated communication \\with neighbors} & \shortstack{Communicate when necessary, \\respecting a limited bandwidth.} \\ \hline \cite{omidshafiei2017deep} & Fully decentralized & Value-based & Indirect & No message passing. \\ \hline \cite{zhang_fully_2018}& Fully decentralized & Actor-critic-based & With neighbors & At every step. \\ \hline \cite{9029257}& Fully decentralized & Actor-critic-based & With neighbors & At every step (limited size). \\ \hline \cite{chen2018communication}& Fully decentralized & Policy gradient & \shortstack{Broadcast to all through \\central controller} & At every step. \\ \hline \end{tabular} \end{table*}
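The following sketch illustrates the centralized-critic, decentralized-actor training step in the spirit of MADDPG \cite{lowe2017multi}. It is our simplified illustration, not the published algorithm: the network sizes, dimensions, and random batch are assumptions, and the critic's own TD update is omitted for brevity.
\begin{verbatim}
import torch
import torch.nn as nn

obs_dim, act_dim, n_agents = 8, 2, 3

# Decentralized actors: local observation -> continuous action.
actors = [nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                        nn.Linear(64, act_dim), nn.Tanh())
          for _ in range(n_agents)]

# Centralized critic: sees ALL observations and actions during training.
critic = nn.Sequential(nn.Linear(n_agents * (obs_dim + act_dim), 64),
                       nn.ReLU(), nn.Linear(64, 1))

actor_opts = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in actors]

batch = 32
all_obs = [torch.randn(batch, obs_dim) for _ in range(n_agents)]

# Policy update for agent i: ascend the centralized Q w.r.t. its action.
# (The critic's own TD regression step is omitted here.)
for i in range(n_agents):
    actions = [a(o) if j == i else a(o).detach()  # grads only for agent i
               for j, (a, o) in enumerate(zip(actors, all_obs))]
    q_in = torch.cat(all_obs + actions, dim=1)
    actor_loss = -critic(q_in).mean()             # maximize centralized Q
    actor_opts[i].zero_grad(); actor_loss.backward(); actor_opts[i].step()

# At execution the critic is discarded: agents act on local inputs only.
local_action = actors[0](torch.randn(1, obs_dim))
\end{verbatim}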
\paragraph{Learned communication} An important line of work within the MARL community is the study of learned communication between agents. In these settings, agents are allowed to send arbitrary bits through a communication channel to their peers in order to convey useful information for collaboration. These agents need to learn \emph{what} to send and \emph{how} to interpret the received messages so that they inform each other's action selection. Thus, the agents are effectively learning communication protocols, which is a difficult task \cite{foerster2016learning}. While the learned communication can be trained centrally and executed in a decentralized fashion, agents can still communicate at the execution phase through a limited-bandwidth channel. Hence, we distinguish this setting from the works discussed in the previous subsection. Yet, similar approaches can be followed: for example, discarding the critic at execution (a setup sometimes also referred to as CTDE) while maintaining the learned communication \cite{mao_learning_2020}, or parameter sharing and gradient pushing \cite{foerster2016learning}, where the messages are discretized at execution. The authors in \cite{kim2019learning} aimed to learn to schedule communication between agents in a wireless environment, focusing on the collision-avoidance mechanism of the wireless medium. In \cite{wang_learning_2020}, an information-theoretic approach was used to compress the content of the messages; in addition, the source and destination of each message are learned through a scheduler. On the other hand, a popular line of work concentrated on designing so-called \emph{gating mechanism} techniques in order to improve the efficiency of the learned communication protocols. In this line of work, agents train a gating network, which generates a binary action specifying whether the agent should communicate with others at a given time step, limiting the number of communicated bits/messages needed to realize a certain desirable behavior. The work in \cite{jiang2018learning} investigates the adaptability of these communication protocols and demonstrates the importance of communicating only with selected groups. Specifically, agents cannot distinguish the messages that are particularly important to them (i.e., that have implications on their actions) from the messages of all other agents. Thus, the authors introduce an attention scheme within each agent, where an attention unit receives the encoded local observation and action intention of the agent and decides whether communication with others in its observable field is needed. The communication group changes dynamically and persists only when necessary. The authors in \cite{singh2018learning} looked at \textit{communication at scale} and proposed an Individualized Controlled Continuous Communication Model (IC3Net), where agents are trained according to their own rewards (hence the approach can also work for competitive scenarios). They demonstrated that the designed gating mechanism allows agents to block their communication, which is useful in competitive scenarios and reduces communication in cooperative scenarios by opting out of sending unnecessary messages. However, the effect of the gate on communication efficiency was not thoroughly studied; the focus was instead on the emerging behavior. The work in \cite{mao_learning_2020} presents the state of the art on efficient learned communication. The authors introduced the Actor-Critic Message Learner (ACML), wherein a gate adaptively prunes less beneficial messages. To quantify the benefit of an action, Gated-ACML adopts a global Q-value difference as well as a specially designed threshold; it then applies the gating value to prune the messages that do not hold value. Surprisingly, the authors showed that not only does the communication efficiency increase significantly, but in specific scenarios the performance even improves as a result of well-tuned communication. The reason is that, since the communication protocol is learned, it is likely to carry redundant information that agents do not decode successfully. The proposed gating mechanism can also be integrated with several other learned-communication methods; a simplified sketch of such a threshold-based gate is given below.
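The sketch below illustrates a threshold-based message gate in the spirit of Gated-ACML \cite{mao_learning_2020}, simplified to a local Q-value difference: a message is sent only when it changes the estimated value by more than a threshold. All names and dimensions, and the local (rather than global) Q-difference criterion, are assumptions made for the illustration.
\begin{verbatim}
import torch
import torch.nn as nn

obs_dim, msg_dim, n_actions = 8, 4, 5

encoder = nn.Linear(obs_dim, msg_dim)             # learned message encoder
q_net = nn.Linear(obs_dim + msg_dim, n_actions)   # message-conditioned Q

def gated_message(obs, threshold=0.05):
    """Send a message only if it changes the estimated value enough.

    Simplified gate: compare the best Q-value with and without the
    message and prune messages whose benefit is below a threshold.
    """
    msg = encoder(obs)
    with_msg = q_net(torch.cat([obs, msg], dim=-1)).max()
    without_msg = q_net(torch.cat([obs, torch.zeros_like(msg)],
                                  dim=-1)).max()
    benefit = (with_msg - without_msg).abs()
    if benefit.item() < threshold:
        return None                                # gate closed: save bandwidth
    return msg                                     # gate open: worth sending

obs = torch.randn(obs_dim)
print(gated_message(obs))
\end{verbatim}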
\subsubsection{Fully Decentralized Agents} In fully decentralized reinforcement learning, there is no distinction between training and testing environments. Thus, the communication model stays the same throughout the agents' interaction with the environment. Under these settings, we recognize two extreme cases. In the first, agents do not communicate with each other and learn to coordinate solely through the obtained rewards. In this case of no communication, the major challenge faced by the agents is the non-stationarity of the environment: from the perspective of an agent, the environment is non-stationary when the distribution of next states varies for the same current state-action pair. Fully decentralized DRL was recently popularized by \cite{omidshafiei2017deep}, where the authors proposed a three-dimensional replay buffer whose axes are the episode index, the timestep index, and the agent index. It was illustrated that conditioning on data from that buffer helps agents cope with the perceived non-stationarity of the environment. At the other extreme, agents can be modeled as able to communicate at every step. Specifically, the problem of graph-networked agents is investigated in \cite{zhang_fully_2018}. In this paper, agents are connected via a time-varying and possibly sparse communication graph. The policy of each agent takes actions based on the local observation and the neighbors' messages to maximize the globally averaged return. The authors proposed fully decentralized actor-critic algorithms and provided convergence guarantees when the value functions are approximated by linear functions. However, a possible disadvantage of this algorithm is that the full parameter vector of the value function must be transmitted at each step. This has been addressed in \cite{9029257}, where graph-networked agents are also assumed, but each agent broadcasts only a single (scaled) entry of its parameter estimate. This significantly reduces the communication cost (given that communication occurs at every iteration). The paper also drops the assumption of a bidirectional communication matrix and handles unidirectional ones, a more general formulation that appeals to more applications. This decentralized actor-critic algorithm likewise solves the distributed reinforcement learning problem for strongly connected graphs with linear value function approximation. The work in \cite{chen2018communication} considered the communication efficiency of fully decentralized agents, but with the assumption of a centralized controller. The paper utilizes policy gradient solution methods, where the controller aggregates the gradients of the agents to update the policy parameters, a process akin to client selection in federated learning. The authors propose a scheme that quantifies the importance of the local gradient (i.e., the local optimization progress) and involves only the agents whose progress exceeds a certain threshold, as sketched below.
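The following sketch illustrates such progress-based communication gating, in the spirit of \cite{chen2018communication}: an agent uploads its gradient to the controller only if it has changed sufficiently since its last upload, and the controller reuses stale gradients otherwise. The threshold rule and all names are our assumptions, not the paper's exact criterion.
\begin{verbatim}
import numpy as np

def round_of_training(agents_grads, last_sent, threshold=1e-3):
    """One communication round with a 'lazy' upload rule: an agent
    communicates only when its gradient changed enough."""
    uploads = {}
    for i, grad in enumerate(agents_grads):
        # Progress measure: change of the gradient since last upload.
        if np.linalg.norm(grad - last_sent[i]) >= threshold:
            uploads[i] = grad
            last_sent[i] = grad.copy()
    # Controller aggregates fresh uploads, reusing stale ones otherwise.
    aggregate = np.mean([uploads.get(i, last_sent[i])
                         for i in range(len(agents_grads))], axis=0)
    return aggregate, uploads

rng = np.random.default_rng(1)
dim, n_agents = 5, 4
last_sent = [np.zeros(dim) for _ in range(n_agents)]
# Agent 0 made no progress this round, so it will stay silent.
grads = [rng.normal(size=dim) * (0.0 if i == 0 else 1.0)
         for i in range(n_agents)]
agg, sent = round_of_training(grads, last_sent)
print(f"{len(sent)}/{n_agents} agents communicated this round")
\end{verbatim}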
Following this approach, the authors showed that the performance (i.e., cumulative reward) is similar to the case where all agents participate, with considerable savings in communication rounds. Table \ref{table:rl_previous_work} summarizes the works discussed above according to their communication model and their approach to handling the communication-performance tradeoff. We first identify the framework (CTDE, CTDE with learned communication, or fully decentralized) as well as the learning algorithm (value-based, policy gradient, or actor-critic). Then, we list two important columns. The first is the communication scheme, which states \emph{how} agents communicate with each other. In CTDE, the training is done in simulation; thus, agents are logically centralized and do not communicate. If no messages are passed between agents and their collaboration is learned solely through rewards, the communication scheme is \textit{indirect}. Otherwise, it is either \textit{(gated) with neighbors} directly or through a \textit{central controller}. Lastly, the communication decision column states \emph{when} the communication is made, which can be at every step (with or without an optimized message size) or according to other conditions, as detailed in the discussion. \subsubsection{Lessons Learned} \begin{itemize} \item \textit{CTDE as a practical middle ground}: CTDE algorithms leverage the fact that training is often done in simulators, where there is no communication cost and agents may freely share experience tuples, network parameters, and observations, in order to train policies that can later be executed based only on local observations. This approach seems to fit most pervasive-system applications, where agents do not need to start training while already decentralized. In this framework, actor-critic-based algorithms are the most popular: a centralized critic network that uses the observations of all agents guides the training of a decentralized policy network that uses only the local observations. The critic network can be discarded at execution time, enabling decentralized execution. The framework is emerging as a practical alternative to the fully decentralized extremes, which either communicate at every step or do not communicate at all and try to indirectly and independently learn collaborative policies \cite{lowe2017multi,wang2020r}. \item \textit{Scheduling for efficient learned communication}: In learned communication, agents learn to encode and decode useful messages. In this area, gating mechanisms are the main tool towards efficient communication \cite{jiang2018learning,singh2018learning,mao_learning_2020}. In gate design, agents learn when to send or refrain from sending a message by quantifying the benefit (i.e., reward) of the actions following this communication. More general \emph{scheduler} modules additionally learn to minimize the content of the messages (i.e., to compress the communication messages) \cite{wang_learning_2020}. Overall, scheduling mechanisms are increasingly used in MARL settings with learned communication, in order to cope with the limited-bandwidth problems often encountered in practical scenarios.
\end{itemize} { \subsection{Active Learning (AL)} \label{AL} Among the online learning schemes tackled so far, AL has emerged as a promising and effective concept. Herein, we first present a brief overview of the concept of AL and then discuss some of its recent applications in the literature. \subsubsection{Overview} \label{AL_Overview} The main idea behind AL is that an active learner is allowed to actively select, over time, the most informative data to be added to its training dataset in order to advance its learning goals \cite{AL2017}, \cite{AL2014}. Hence, in the AL framework, the training dataset is not static: the dataset and the learning model are progressively updated in order to continuously improve the learning quality \cite{ASPAL2018}. Specifically, the main steps of AL are: (1) acquiring new data from the contiguous nodes; (2) picking out the most informative data to append to the training dataset; and (3) retraining the learning model using the newly acquired data. A minimal sketch of this selection loop is given below. The communication overheads associated with different AL schemes depend on: \begin{enumerate} \item The type and amount of data exchanged between the contiguous nodes. We remark here that contiguous nodes can exchange labels, features, or samples. Hence, based on the type and amount of exchanged data, there will always be a tradeoff between enhancing the performance and decreasing the communication overheads. \item The number of selected nodes that will be considered in the AL process. \end{enumerate} It is also worth mentioning that FL allows multiple nodes to cooperatively train a global model without sharing their local data, which differs from AL in many ways. In particular, FL seeks synchronization between the different cooperating nodes, in addition to the presence of a centralized node (or server) to generate the global model. Thus, AL and FL address orthogonal problems: the former leverages the newly acquired data from the contiguous nodes to retrain its model, while the latter trains its model in a distributed manner by sharing the model's updates with the contiguous nodes \cite{infocom2020}.
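As an illustration of the AL loop above, the following sketch implements generic pool-based AL with least-confidence uncertainty sampling, using a stand-in scikit-learn classifier on synthetic data; the dataset, model, and batch size are assumptions made for the example.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
# Small initial training set containing both classes.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression()
for _ in range(5):                         # AL loop: select, label, retrain
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    # Least-confidence criterion: query the least certain samples.
    uncertainty = 1.0 - proba.max(axis=1)
    query = [pool[i] for i in np.argsort(-uncertainty)[:10]]
    labeled += query                       # oracle/labelers answer here
    pool = [i for i in pool if i not in query]
print(f"final training-set size: {len(labeled)}")
\end{verbatim}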
\subsubsection{Applications of AL} \label{AL_Applications} Traditionally, AL algorithms depend on the presence of an accurate oracle (labeler) that generates the ground-truth labels for unlabeled data. However, this assumption becomes hard to maintain in several real-time applications, such as crowdsourcing applications and automated vehicles. Specifically, in crowdsourcing, many sources are typically weak labelers, which may generate noisy data, i.e., data that may be affected by errors due to low resolution and age-of-information problems. Most of the existing studies on AL investigate the effect of noisy data (or imperfect labels) on binary classification problems \cite{niosyBinaryClass2016,niosyBinaryClass2015}, while few works consider the general problem of multi-class or multi-labeled data \cite{DeepLearningFromCrowds,Imbalanced_Label2015,Subset_selection2019}. One of the main problems in crowdsourcing is how to collect a large amount of high-quality labeled data, given that the labeling can be done by volunteers or non-expert labelers. Hence, the process of acquiring large amounts of labeled data turns out to be challenging, computationally demanding, resource-hungry, and often redundant. Moreover, crowdsourced data with cheap labels comes with its own problems: although the labels are cheap, handling noisy labels remains expensive. Thus, when data and labelers are not selected carefully, the acquired data may be very noisy \cite{Crowds2018, WMV2017}, due to many reasons such as varying degrees of competence, labelers' biases, and disingenuous behavior, which significantly affects the performance of supervised learning. Such challenges have encouraged researchers to design innovative schemes that can enhance the quality of the data acquired from different labelers. For instance, \cite{DeepLearningFromCrowds} tackles the large-dataset demands of deep learning techniques by presenting an AL-based solution that leverages multiple freely accessible crowdsourced geographic datasets to increase the dataset size. However, in order to effectively deal with the noisy labels extracted from these data and avoid performance degradation, the authors propose a customized loss function that integrates multiple datasets by assigning different weights to the acquired data based on the estimated noise. The work in \cite{Subset_selection2019} enhances the performance of supervised learning with noisy labels in crowdsourcing systems by introducing a simple quality metric and selecting the $\epsilon$-optimal labeled data samples. The authors investigate the data subset selection problem based on the Probably Approximately Correct (PAC) learning model. Then, they consider the majority-voting label integration method and propose two data selection algorithms that optimally select a subset of $k$ samples with high labeling quality. In \cite{Imbalanced_Label2015}, the authors investigate the problem of imbalanced noisy data, where the acquired labeled data are not uniformly distributed across the different classes. The authors aim to label training data given noisy labels received from diverse sources. They then use their learning model to predict the labels of new unlabeled data and update the model until some conditions are met (e.g., the performance of the learned model meets a predefined requirement, or it cannot be improved any more). Specifically, for labeled data, they implement a label integration and data selection scheme that considers the data uncertainty and the class imbalance level, while classifying the unlabeled data using the trained model before adding them to the training dataset. Hence, the proposed framework comprises two core procedures: label integration and sample selection. In the label integration procedure, a Positive LAbel Threshold (PLAT) algorithm is used to infer the correct label from the received noisy labels of each sample in the training set. After that, three sample selection schemes are proposed to enhance the learning performance, based respectively on the uncertainty derived from the received noisy labels, the uncertainty derived from the learned model, and a combination of both. A different application of AL is investigated in \cite{ASPAL2018}, where AL is exploited for incremental face identification. Conventional incremental face recognition approaches, such as incremental subspace approaches, have limited performance in complex and large-scale environments. Typically, the performance may drop drastically when the training data of face images is either noisy or insufficient. Moreover, most existing incremental methods suffer from noisy data or outliers when updating the learning model. Hence, the authors in \cite{ASPAL2018} present an active self-paced learning framework, which combines active learning and Self-Paced Learning (SPL).
The latter refers to a recently developed learning approach that mimics the learning process of humans by gradually adding data to the training set, from easy to more complex, where easy data are those with high classification confidence. In particular, this study aims to solve the incremental face identification problem by building a classifier that progressively selects and labels the most informative samples in an active, self-paced way, and then adds them to the training set. AL has also been considered in various applications of intelligent transportation systems. For instance, the authors in \cite{VehicleRecognition2019} investigate the vehicle-type recognition problem, in which labeling a sufficient amount of data in surveillance images is very time consuming. To tackle this problem, the work leverages fully labeled web data to decrease the required labeling time of surveillance images using deep transfer learning. Then, the unlabeled images with high uncertainty are selected to be queried and later added to the training set. Specifically, a cross-domain similarity metric is linearly combined with the entropy in the objective function of the query criterion to actively select the best samples. Ultimately, we highlight that most of the studies presented so far tie their AL frameworks to specific classifiers (or learning models), which cannot easily be transferred to other learning models \cite{multiclass2018}. Accordingly, obtaining an optimal label integration and data selection strategy that can be used with generic multi-class classification techniques is still worth further investigation.} \subsection{Use Cases} \label{UCAL} \subsubsection{MAB for recommender systems} Online learning systems are fundamental tools in recommender systems, which are, in turn, a cornerstone in the development of current intelligent user applications, from social media feeds to content caching across networks. Due to the recent growth in data generation, local geo-distributed servers are often used to support applications that utilize recommender systems. Furthermore, privacy concerns sometimes limit the ability of these local servers to share data with other servers. In \cite{shi_federated_2021}, the authors motivate the use of federated bandits by explaining how they can model such a critical use case. In the discussed example, a set of servers runs a recommender system for their prospective clients. The goal of each server is to recommend the most popular content across all servers. However, due to latency constraints, communication at every decision-making step is infeasible. Besides, sharing individual samples of rewards violates privacy, as all servers would learn about a particular user's choices and preferences. For these reasons, the authors proposed and utilized a federated bandit algorithm (Fed-UCB) that communicates only $O(\log T)$ times over a horizon $T$ to minimize the recommendation latency. At each communication round, only the sample \emph{means} are exchanged, preserving a certain level of privacy (additional improvements are also discussed). Finally, the performance of the system is shown to be near-optimal, thus achieving the goal of recommending the best item across all servers while meeting the privacy and communication constraints. \subsubsection{MARL for UAV-assisted networks} \begin{figure}[h!]
\centering \scalebox{1.9}{\includegraphics[width=0.48\columnwidth]{Figures/UC_uav1.pdf}} \caption{UAV-assisted networks: UAV agents are trained to deduce collaborative policies for providing compute/communication resources to on-ground equipment.} \label{fig:uav_MARL} \end{figure} UAVs have opened new possibilities for extending the communication coverage, and even the compute capabilities, of devices in areas where full networking infrastructure is not present. This is done through wireless communication between the UAVs and on-ground equipment, enabling that equipment to extend its connectivity and potentially offload tasks to a broader network relayed by the UAVs. The work in \cite{9209079} utilizes UAVs to provide intermittent compute/communication resource support for user equipments (UEs) on the ground. The benefits of such UAV-assisted scenarios are numerous, including the creation of dynamic networking graphs without the need for full networking infrastructure, which can be of extreme value in catastrophe response, for example. Nonetheless, the UAVs need to optimize their trajectories so that they cover the widest areas with minimum movement (i.e., energy consumption) and maximum utility (i.e., providing the resources to the UEs that need them the most). However, such optimization is shown to be intractable; thus, the authors opted for learning-assisted (i.e., data-driven) methods. Since centralized training was possible in their scenario, they used a CTDE algorithm, specifically Multi-Agent DDPG (MADDPG). In MADDPG, the agents aim to jointly optimize the UE load of each UAV while minimizing the overall energy consumption experienced by the UEs, which depends on the UAVs' trajectories and offloading decisions. Following the MADDPG algorithm, the UAVs' observations were \textit{communicated} among them during training to deduce the collaborative policy; at execution, no message sharing was needed. This resulted in satisfactory performance thanks to the accurate simulator. However, as discussed earlier, environments that are expected to change might necessitate other algorithms that maintain periodic, learned, or persistent communication after deployment. We note that the application of reinforcement learning in resource-constrained environments (e.g., IoT devices), which requires the design of communication-aware techniques, is still scarce. Most testing of these methods is done in artificial environments like OpenAI's particle environments \cite{openaimultiagent-particle-envs_2021} or video games like StarCraft II \cite{vinyals2019grandmaster}, a typical practice in the RL community, since success in these environments is often indicative of broader applicability. \subsubsection{AL for Connected Vehicles} \begin{figure}[h!] \centering \scalebox{1.9}{\includegraphics[width=0.53\columnwidth]{Figures/AL.pdf}} \caption{AL for a time-varying vehicular network.} \label{fig:CS} \end{figure} Traditional machine learning models require massive, accurately labeled datasets for training in order to ensure high classification accuracy for new data as it arrives \cite{dlcomp2018}. This assumption cannot be guaranteed in many real-time applications, such as connected and autonomous vehicles. Indeed, vehicles are typically weak labelers (i.e., data sources that generate labels with low classification confidence). Hence, they may acquire or generate noisy data, e.g., data generated by cameras in the presence of fog or rain.
Also, in a highly dynamic environment like a vehicular network, not only can the data generated by the vehicles' classifiers have low classification accuracy, but the data received from neighboring vehicles may also be prone to noise and communication errors. Hence, the authors in \cite{TVT2021} tackled these challenges by proposing a cooperative AL framework for connected vehicles. The goal of this work is two-fold: (1) selecting the optimal set of labelers, i.e., those considered the most reliable; and (2) selecting a maximally diverse subset of high-quality data, locally generated and/or received from neighboring vehicles, to be used for updating the learning model at each vehicle. In \cite{TVT2021}, the time-varying vehicular network shown in Fig. \ref{fig:CS} is considered. It is assumed that each vehicle can communicate and exchange information only with the neighboring vehicles located within its communication range. For instance, the set $\mathcal{N}_{v_0}(t) = \left\{v_{j}, v_{j+1}, v_{j+2}\right\}$ means that only three vehicles are within the communication range of vehicle $v_0$ at time $t$. Furthermore, this framework considers two types of data: a multiple-labeled online dataset and an offline/historical labeled dataset. The online dataset consists of sequences of samples that arrive from neighboring vehicles or are generated at vehicle $v_0$ within a time window $T$ (i.e., the period during which vehicle $v_0$ is exposed to a certain road view). Within this window, vehicle $v_0$ receives a sequence of training samples that contain input features and are associated with multiple noisy labels generated by the vehicles sending data to $v_0$. The framework presented in \cite{TVT2021} includes five main stages, as described below: \begin{enumerate} \item \textbf{Offline learning:} Initially, each vehicle generates an initial learning model with a certain accuracy level using its own offline/historical training data. \item \textbf{Online labeling:} The vehicle starts to collect new information through its local sensors or from neighboring vehicles. This information can consist of labels, features, or samples, depending on the adopted operational mode. \item \textbf{Label integration:} After acquiring the new information, each vehicle obtains an aggregated label for the received data using the different proposed label integration strategies. \item \textbf{Labeler selection:} After monitoring the behavior of the neighboring vehicles, each vehicle selects a subset of high-quality labelers based on their reputation values, which are estimated from past interactions using a subjective logic model. \item \textbf{Data selection and model update:} Finally, each vehicle selects the maximally diverse collection of high-quality samples to update its learning model. \end{enumerate} The proposed AL framework in \cite{TVT2021} demonstrates its efficiency for connected automated vehicles as follows: (1) it increases the amount of data acquired at the different vehicles during the training phase; (2) it accounts for the labelers' accuracy, data freshness, and data diversity while selecting the optimal subset of labelers and data to be included in the training set; and (3) on different real-world datasets, it provides a $5$--$10\%$ increase in classification accuracy compared to state-of-the-art approaches that use random data selection, and a $6\%$ improvement compared to random labeler selection. A minimal sketch of the reputation-weighted label integration step is given below.
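The sketch below illustrates reputation-weighted label integration: a weighted majority vote over the noisy labels received from neighboring vehicles, where each labeler's vote is weighted by its reputation. This is our simplified illustration of the integration step, not the exact scheme of \cite{TVT2021}; all names and values are assumptions.
\begin{verbatim}
import numpy as np

def integrate_labels(noisy_labels, reputations, n_classes):
    """Reputation-weighted majority vote over neighbors' labels.

    noisy_labels: shape (n_labelers,), one class label per labeler.
    reputations:  shape (n_labelers,), trust in [0, 1] per labeler,
                  e.g., estimated from past interactions.
    """
    votes = np.zeros(n_classes)
    for label, rep in zip(noisy_labels, reputations):
        votes[label] += rep              # reliable labelers weigh more
    return int(np.argmax(votes)), votes / votes.sum()

labels = np.array([2, 2, 0, 2])          # labels from 4 neighboring vehicles
reps = np.array([0.9, 0.8, 0.3, 0.7])    # reputations of those vehicles
label, confidence = integrate_labels(labels, reps, n_classes=3)
print(label, confidence)                 # -> 2, plus a vote distribution
\end{verbatim}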
} \begin{figure*}[!h] \centering \includegraphics[scale=0.47]{Figures/FutureDirections1.pdf} \caption{Future directions and open challenges.} \label{future_illustrated} \end{figure*} \section{Future directions and open challenges}\label{future} In this section, we present a list of open challenges and issues facing pervasive AI systems, and we propose some promising ideas to mitigate them. Specifically, we introduce opportunities to integrate pervasive AI into emerging systems, and we suggest some future directions for efficient distributed inference and enhanced federated learning algorithms. Finally, we present some innovative ideas related to new concepts of online learning. Fig. \ref{future_illustrated} presents an overview of the proposed directions. \subsection{Deployment of Pervasive AI in emerging systems} \subsubsection{Pervasive AI-as-a-service} While the main goal of 5G is to provide high-speed mobile services, 6G pledges to establish next-generation softwarization and improve network configurability in order to support pervasive AI services deployed on ubiquitous devices. However, research on 6G is still in its infancy, and only the first steps have been taken to conceptualize its design, study its implementation, and plan for use cases. Toward this end, the academic and industrial communities should move from theoretical studies of AI distribution to real-world deployment and standardization, aiming to instate the concept of Pervasive AI-as-a-Service (PAIaaS). PAIaaS allows service operators and AI developers to be more domain-specific and focus on enhancing users' quality of experience, instead of worrying about task distribution. Moreover, it permits systemizing mass production and unifying the interfaces to access the joint software that gathers all participants and applications. \subsubsection{Incentive and trust mechanisms for distributed AI using blockchain} The distribution of heavy and deep neural networks over ubiquitous, resource-limited devices contributes to minimizing the latency of the AI task and guarantees the security of the data. However, even though pervasive systems are composed of computing units existing everywhere, anytime, and not necessarily belonging to any operator, the distribution is based on the assumption that pervasive devices consent to participate in the collaborative system. In this context, several considerations should be examined first: (1) the design of an incentive mechanism to motivate different nodes to take over AI tasks and sacrifice their memory, energy, communication, and computation resources in exchange for some rewards (e.g., monetary remuneration or free access to services); (2) in addition to the security of the private data to be processed by the pervasive devices, the security of the participants' information should be guaranteed (e.g., locations, identifiers, and capacities). Recently, blockchain \cite{blockchain} has gained great popularity as a decentralized ledger managing transaction records across distributed devices while ensuring trusted communication. Moreover, the aforementioned incentive mechanism can also be handled by blockchain systems. More specifically, all source devices and pervasive nodes first register with the blockchain system to benefit from distributed AI or to participate in the computation. Then, data-generating devices request help to accomplish a task and, at the same time, submit a transaction application to the blockchain with a reward.
Next, when the joining devices complete the offloaded tasks, they return the results to the source device and validate the completion of the transaction. Finally, the registered participants are rewarded according to their contribution to the blockchain transaction. Edge-based blockchain has a promising potential to prevent the security threats of transferring data between heterogeneous, decentralized, and untrusted devices. However, this approach is still in its infancy; in particular, deploying it on resource-constrained devices is challenging due to the huge energy and computation load of blockchain mining \cite{mining}. \subsubsection{Explainable AI (XAI)} AI-based applications are increasingly involved in fields where decisions are critical to lives and personal wellness, such as smart health applications and autonomous drones used in warfare. However, most users have no visibility into how the AI makes its decisions. This lack of explainability prevents us from fully trusting the predictions generated by AI systems. Finding reasons and logical explanations for decisions made by AI is called Explainable AI (XAI) \cite{XAI1,XAI2,XAI3}. XAI is an emerging field that is expected to answer questions such as: Why are some predictions chosen and others not? When does an AI model succeed in taking the right decision, and when does it fail? Various techniques are used to explain AI. (1) One of these techniques is decomposability, which stands for the ability to describe each part of the model, extract features of the output, and analyze them using clustering methods. The pervasive AI system is a well-suited environment to empower XAI by improving the ability to interpret, understand, and explain the behavior of the model. More specifically, by distributing the inference, the model becomes algorithmically transparent, and each segment can be interpreted and clustered by its importance for the prediction. (2) Moreover, among the most important directions supporting XAI is model debugging. Debugging a DNN allows detecting errors and understanding their sources and their influence in misleading the prediction. A distributed model produces fine-grained outputs that help follow the inference process and localize errors before they reach the prediction layer. (3) A third direction to explain AI is the extraction of data samples that are highly correlated with the results generated by the model. Indeed, similarly to how humans try to understand some processes, examples are analyzed to grasp the inner correlations derived by the AI model. Federated learning is based on clustering data entries and training local models; this technique permits narrowing the example search and enables detecting the inputs that most influence the model's behavior. Research on XAI is still in its infancy, and pervasive DNN computing looks like a promising environment to track the AI process and interpret the results. \subsection{Efficient algorithms for pervasive inference} \subsubsection{Online resource orchestration} Pervasive systems are characterized by an extremely dynamic environment, where the available computing resources are volatile and the load of requests may follow various statistical distributions. More specifically, ubiquitous nodes can join and leave the pervasive system randomly, which makes it hard to estimate the available resources. Moreover, the load of requests depends on the IoT application.
For example, surveillance systems require frequent image processing (e.g., face recognition and crowd monitoring) at short intervals of time. On the other hand, other applications, such as weather, air quality, and pollution-level estimation, have to gather a specific amount of data before requesting the inference, which makes the load of requests lower. Additionally, the quality of the collected data may affect the depth of the adopted DL model and consequently the computation requirements of the tasks. As an example, capturing high-quality images allows obtaining good predictions using smaller networks; in this scenario, early-exit techniques or squeezed models can be adopted. Having such data quality depends on the source device (e.g., UAV, professional camera, or mobile phone) and the dynamics of the targets (e.g., fixed or moving). Because of these network dynamics, pervasive systems deploying distributed inference need a well-designed online resource orchestration and participant selection strategy to support the large number of AI services with minimum latency, while taking heterogeneous and limited resources and high-dimensional parameters into consideration. In section \ref{PI}, we introduced existing theoretical approaches to split different DNN networks and distribute the resulting segments onto pervasive devices to optimize pervasive computing. Nonetheless, most of them focused on how to partition the model in order to maximize data parallelism and minimize the dependency between participants. Yet, no relevant work has deeply studied the performance of DNN distribution and reported the bottlenecks and gains of such an approach in long-term online resource orchestration, with different loads of requests and dynamic behavior of participants and sources. In other words, model parallelism is not well investigated in the literature for the case where sources can generate multiple requests at the same time and offload them to neighboring devices. In this scenario, the critical decision is whether to process the same task from different requests, minimizing the memory needed to store the filters' weights, or to compute sequential tasks from the same request, reducing the transmission among participants. Furthermore, age-aware inference is an important factor that can be foreseen in online model parallelization. In fact, some requests are highly delay-sensitive and need to be processed in a timely manner, such as those of self-driving cars, whereas others are less critical, including machine translation and recommendation systems. Thus, prioritizing urgent tasks and assigning better resources and intensive data parallelization to them is of high importance. We believe that pervasive AI computing should focus more on such online configuration to implement the above vision. \subsubsection{Privacy-aware distributed inference} Guaranteeing the privacy of the data shared between collaborating devices is one of the main concerns of pervasive systems, since untrusted participants may join the inference and observe critical information. Because of the heterogeneity of ubiquitous devices, the trained models are subject to malicious attacks, such as black-box and white-box attacks, through which the original inputs may be jeopardized. In this case, privacy-aware mechanisms should be enhanced to ensure the security of the distributed inference process. Many efforts have been conducted in this context, such as noise addition and cryptography.
Even though these techniques succeed in hiding features of the data from untrusted devices, most of them suffer from computation overhead and incompatibility with some end-devices or DNNs. More specifically, models must be re-trained on the noisy or encrypted data to preserve the accuracy of the prediction, and each input has to be obfuscated, which adds a computation overhead. Moreover, encryption may not be applicable to all DNN operations, nor feasible on some end-devices due to cryptographic key management requirements. A notable recent work \cite{DistPrivacy} proposed to use the distribution itself for data privacy, without applying any additional task requiring computation overhead. In fact, per-segment splitting leads by design to assigning only some features of the input data to each participant. The authors of this work applied filter partitioning and conducted empirical experiments to test the efficiency of black-box attacks for different segment sizes (i.e., numbers of feature maps per device): the lower the number of feature maps per device, the higher the privacy. However, filter partitioning incurs a high communication load and dependency between devices. This study is still immature; other partitioning strategies (e.g., channel and spatial) can be examined to identify the optimal partitioning and distribution that guarantee satisfactory privacy and minimum resource utilization per participant. \subsubsection{Trajectory optimization of moving robots for latency-aware distributed inference} The use of robots (e.g., UAVs) has proved efficient at improving services in critical and hard-to-reach regions. Recently, moving robots have been used for real-time image analysis, such as highway inspection, search and rescue operations, and border surveillance missions. These devices face numerous challenges, including energy consumption and unstable communication with remote servers. Recent works \cite{robots_power,robots} proposed to avoid remote AI inference and leverage the computation capacity of ground robots to accomplish the predictive tasks. However, existing works did not cover the distribution of the inference among flying drones, characterized by their faster navigation, higher power consumption, and ability to reach areas with high interference (e.g., high-rise buildings) compared to ground devices. Moreover, recent efforts did not cover the path planning that lets different moving robots complete their assigned missions while performing latency-aware predictions. More specifically, the time period from capturing the data to the moment when the tasks from all points are collected should be minimized by optimizing the devices' trajectories and planning close paths for participants handling subsequent segments. Furthermore, due to resource constraints, the trajectories of devices with available resources should cross the paths of the nodes offloading the tasks. \subsubsection{Remote inference of non-sequential DNN models} A major part of the pervasive inference literature analyzes remote collaboration, where the source device computes the shallow layers of the model while the cloud handles the deep layers. In this context, the split point is chosen based on the size of the shared data, the resource capability of the end-device, and the network capacity. This DNN partitioning approach may work well for standard sequential models, whose filters progressively reduce the size of the intermediate data.
However, state-of-the-art networks do not only include sequential layers with shrinking outputs. Indeed, Generative Adversarial Networks (GANs) \cite{GAN} have proved efficient for image generation, image quality enhancement, text-to-image synthesis, etc. Auto-encoders have also shown good performance for image generation, compression, and denoising. These types of networks have large-sized inputs and outputs. Hence, despite the reduced intermediate data, the cloud servers have to return large-sized results to the source device, which implies a high transmission overhead. Another family of efficient neural networks is the RNNs (see section \ref{DRN}), used mostly for speech recognition and natural language processing. These networks include loops in their structures and multiple outputs per layer, which imposes multiple communications with remote servers in case of partitioning. Other complex DNN structures, such as randomly wired networks and Boltzmann Machines (BMs) with their non-sequential dependencies, defy the remote-collaboration wisdom altogether. Keeping up with ever-advancing deep learning designs is a major challenge for per-layer splitting, particularly for remote collaboration. Based on these insights, the scheduling of DNN partitioning should follow various patterns depending on the model structure. \subsubsection{Fault-tolerance of distributed inference} When a deep neural network is split into small segments and distributed among multiple physical devices, the risk of node failures increases, which leads to performance drops and even inference abortion. Typical networking wisdom resorts to re-transmission mechanisms along with scheduling redundant paths; these failure management techniques inevitably consume additional bandwidth. DNNs are characterized by unique structures that may enclose skip connections, convolutional connections, and recurrent links. These features of state-of-the-art networks implicitly increase the robustness and resiliency of the joint inference. More specifically, skip blocks allow receiving information from an intermediate layer in addition to the data fed from the previous one. These connections, serving as a memory for some DL models (e.g., ResNet), can play the role of fault-tolerant paths: if one of the devices fails or leaves the joint system, information from a prior participant can still be propagated forward to the current device via the skip blocks, which adds some failure resiliency. Skip connections have proved remarkably able to enhance the accuracy of deep models, in addition to their potential to strengthen the fault-tolerance of pervasive computing. However, they incur transmission overheads, particularly in failure-free systems. Thus, a trade-off between accuracy, resilience, and resource utilization should be envisaged. Another vision to be investigated is to train the system without skip connections and use them only in case of failures. This idea is inspired by the Dropout \cite{dropout} technique used to reduce the overfitting problem, which randomly drops some neurons during training and activates all of them during inference. Studying the impact of cutting off some transmissions during the inference for different splitting strategies, while re-thinking the dropout training, is an interesting way to strengthen the fault-tolerance of pervasive computing. Very recent works \cite{FailureDis,resilinet} have started to discuss such insights; however, they are still immature. A minimal sketch of the skip-connection fallback idea follows.
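As an illustration of this idea, the following sketch shows a residual block whose two halves are assumed to be hosted on different devices: when the device holding the main path fails, the skip connection alone carries the signal forward, so the inference degrades gracefully instead of aborting. The layer sizes and names are assumptions made for the example.
\begin{verbatim}
import torch
import torch.nn as nn

block_a = nn.Linear(16, 16)   # hosted on device A
block_b = nn.Linear(16, 16)   # hosted on device B (may fail or leave)

def resilient_forward(x, device_b_alive: bool):
    """Forward pass of a residual block split over two devices.

    If device B fails, the skip path alone propagates the signal,
    trading some accuracy for continued inference.
    """
    h = torch.relu(block_a(x))
    if device_b_alive:
        return h + block_b(h)  # normal residual: skip + main path
    return h                   # fallback: skip path only

x = torch.randn(1, 16)
out_ok = resilient_forward(x, device_b_alive=True)
out_degraded = resilient_forward(x, device_b_alive=False)
\end{verbatim}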
\subsubsection{Data-locality-aware algorithms} Most of the efforts studying pervasive inference focus on splitting and parallelizing the DNN tasks related to a predictive request. Next, based on the requirements of these tasks and the available resources in the joint system (e.g., computation and energy), the tasks are distributed and assigned to the participants. However, in terms of memory, only the size of the input data is considered, whereas the memory needed to store the DNN structure is never taken into account. In section \ref{AI}, we showed in Table \ref{Macc} that state-of-the-art DNN models are very large and require high memory availability. For example, the VGG-16 model has 138 million parameters, requiring more than 500~MB to store its filters ($138 \times 10^6$ parameters $\times$ 4 bytes per 32-bit weight $\approx 552$~MB). What worsens the situation is that some partitioning schemes impose copying the filters to all participants (e.g., spatial splitting). Moreover, if the intelligent application is driven by multiple DNN models and different segments are assigned to each device, a huge memory burden is experienced. Therefore, data-locality-aware algorithms should be designed. More specifically, the distribution system has to account for the past tasks assigned to each participant and try to maximize the re-usability of previously stored weights, with consideration for the capacity of the devices. Minimizing the number of weights assigned to each participant not only contributes to reducing memory usage but also protects the structure against white-box attacks \cite{emna_white_box}. \subsubsection{Pervasive inference for nanotechnology applications} Nanotechnology is the field of innovation and research that focuses on creating particles (e.g., devices and materials) at the scale of atoms. These particles can be used in multiple domains, such as nanomedicine, which studies new ways of detecting and curing diseases. One interesting example of nanomedicine is the detection of diabetes by analyzing human breath. In fact, our body chemistry and our breath change when we are sick, although our noses are not sensitive enough to detect it. More specifically, specific biomarkers are released in case of sickness, giving a huge opportunity to detect the disease just by sniffing the breath. However, a big challenge arises, as these biomarkers exist at very low concentrations, on the order of parts-per-million (ppm). In the context of diabetes, acetone is produced such that patients exhibit an acetone concentration of about 2 ppm, whereas healthy people have only about 1 ppm; that is, the biomarker concentration differs between healthy people and patients by merely 1 ppm. In order to detect such ppm, or even parts-per-billion (ppb), concentrations in human breath rather than in blood, the design of super-sensitive sensors is mandatory. Before nanotechnology, it was not possible to precisely detect such low concentrations. Nowadays, intelligent and invisible nano-sensors can be trained to sniff human breath. However, the full potential of nanomedicine (e.g., drug delivery systems and precision cancer medicine) is yet to be realized. To guarantee that nanoparticles achieve the targeted objectives, large amounts of data and computational analysis are expected. While traditional techniques rely on in-depth biological and chemical knowledge, AI only requires training data. Thus, it is highly interesting to integrate AI to evaluate and formulate nanoscale particles \cite{nano1,nano2}.
However, these particles suffer from a small energy capacity that limits their communication with remote devices (e.g., handheld mobiles and computers). Hence, distributing the inference within the nano-sensors can provide localized processing and minimize the data transmission. In this context, new partitioning strategies should be envisaged, as the existing ones do not fit the extremely limited computational resources of the particles. In particular, even neuron, spatial, or filter splitting, which involve numerous multiplications, are considered complex tasks. Thus, per-multiplication partitioning and the related dependencies between millions of nano-participants have to be investigated to ensure the practicality of this futuristic convergence between pervasive AI and nanotechnology. \subsection{Enhanced federated learning algorithms} \subsubsection{Active Federated Learning} Given the main limitations of FL in terms of communication overheads and slow convergence, combining the AL concept with emerging FL schemes would be of great interest. Since most of the existing FL schemes suffer from slow convergence, a novel active FL solution would be needed, one that exploits the distributed nature of FL while coping with highly dynamic environments and ensuring adequately fast convergence. Indeed, the heterogeneity of the local training data at the distributed participating nodes, together with considering all nodes in the FL process, can significantly slow down the convergence; full node participation forces the centralized server to wait for stragglers. Thus, we envision that: (1) exchanging some side information between the participating nodes (e.g., the unique data samples or the class distribution) can significantly help in tackling the data heterogeneity problem; and (2) considering partial node participation through efficient user selection schemes can play an important role in decreasing the communication overheads and accelerating the convergence. \subsubsection{Blending inter- and intra-data parallelism for federated learning} Deep neural networks impose intensive memory and computational loads. This challenge is compounded when the model is larger and deeper, as it becomes infeasible to acquire training results from a single resource-limited device. Triggered by this challenge, federated learning was proposed to train deep models over tens and even hundreds of CPUs and GPUs by taking advantage of \textit{inter-data parallelism}. At present, federated learning techniques split the data to be trained among pervasive nodes while copying the whole DL model to all of them. Still, small devices cannot participate in such a process due to their limited capacities. Hence, blending \textit{inter-data parallelism}, where the training data is distributed, with \textit{intra-data parallelism}, where the intermediate data and segments of the model are partitioned, can be a feasible solution to enable training on non-GPU devices. Certainly, the practicality, gains, and bottlenecks of such an approach remain to be examined, as the backpropagation characterizing the training phase imposes a huge dependency and communication load between devices. \subsection{Communication-efficient online learning} \subsubsection{Demonstrated applications} Since most of the MAB algorithms discussed in this paper are recent, it remains interesting to see their implications for practical applications.
Examples include quantifying the effect of bounded communication resources or energy in wearable devices, as well as of congestion between edge nodes. Similarly, quantifying the improvement that better regret bounds bring to actual Quality of Experience (QoE) metrics can be promising. \subsubsection{More general forms of MABs} The state-of-the-art algorithms in the distributed and federated setups address the finite-action, stochastic case of the multi-agent setting. However, there exist many more general forms of the bandit problem that are yet to be studied under multi-agent settings. These include, but are not limited to, adversarial bandits, linear bandits, pure exploration, and non-stationary bandits. Investigating potential regret improvements and communication resource utilization in the non-stochastic and infinite-action multi-agent MAB settings remains to be tackled. \subsubsection{Heterogeneity of bandit agents} In MAB settings, agents might differ not only in the instance each is trying to solve, but also in their computational capabilities. Different computational capabilities mean that agents interact with their environments at different rates, collecting different amounts of samples and hence producing estimates of different quality. While the effect of this computational heterogeneity is heavily studied in supervised federated learning, it has not yet been investigated in either distributed or federated bandits. \subsubsection{MARL performance/communication tradeoff} Methods that train in a logically centralized server and then execute in a decentralized manner (CTDE) are able to communicate less (or even not at all) at execution time, while still learning a good joint policy thanks to the central training phase, as illustrated earlier. However, their adaptability is not guaranteed when dealing with a non-stationary environment, and they might require re-training to adapt to the new environment. On the other hand, fully decentralized agents can continue learning throughout their deployment, but need to communicate more often to reason about their joint action; otherwise, learning can be challenging and might diverge \cite{du2020survey}. A natural goal is to design methods that are adaptable yet communicate conservatively, which is the main motivation behind scheduling in learned communication. Thus, more work is needed to address the question of adaptable and communication-cognizant MARL. \subsubsection{MARL under networking constraints} Several communication characteristics have not been investigated under the MDP and POMG settings. For example, while delay, noise, failure, and time-varying topologies are vital factors in today's practical networks, they were not considered in most MARL papers. These factors were, however, considered in other optimization frameworks, such as multi-agent (distributed) convex optimization \cite{MAL-051}. Some of the works surveyed here have started to study bandwidth and multiple-access aspects \cite{wang_learning_2020,mao_learning_2020}. Yet, it is important to study the performance of emerging MARL policies under realistic networking constraints. \section{Conclusion}\label{conclusion} Recently, AI and pervasive computing have drawn the attention of academia and industrial verticals, as their confluence has proved highly effective in enhancing human productivity and lifestyle.
Particularly, the computing capacities offered by the massive number of ubiquitous devices open up an attractive opportunity to fuel the continuously advancing pervasive IoT services that are transforming all aspects of our modern life. In this survey, we presented a comprehensive review of the resource allocation and communication challenges of pervasive AI systems, which support a plethora of latency-sensitive applications. More specifically, we first presented the fundamentals of AI networks, applications, and performance metrics, and the taxonomy of pervasive computing and its intersection with AI. Then, we summarized the resource management algorithms for distributed inference, training, and online learning. In this context, partitioning strategies, architectures, and communication issues and solutions were extensively reviewed. Additionally, relevant use cases were described and futuristic applications were discussed. Multiple challenges remain to be addressed to further improve the performance, resource management, and privacy of avant-garde applications. Therefore, we presented our vision of the technical challenges and directions that may emerge in the future, along with some opportunities for innovation. We hope that this survey will elicit fruitful discussions and inspire promising new ideas. \section*{Acknowledgment} This work was made possible by NPRP grant \# NPRP12S-0305-190231 from the Qatar National Research Fund (a member of Qatar Foundation). The findings achieved herein are solely the responsibility of the authors. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
{ "timestamp": "2021-05-06T02:06:35", "yymm": "2105", "arxiv_id": "2105.01798", "language": "en", "url": "https://arxiv.org/abs/2105.01798" }
\usepackage{hyperref}
\hypersetup{ colorlinks=true, linkcolor=blue, filecolor=magenta, urlcolor=cyan, citecolor=blue }
\newcommand*{\KeepStyleUnderBrace}[1]{%
\mathop{%
\mathchoice
{\underbrace{\displaystyle#1}}%
{\underbrace{\textstyle#1}}%
{\underbrace{\scriptstyle#1}}%
{\underbrace{\scriptscriptstyle#1}}%
}\limits
}
\usepackage{mathtools}
\mathtoolsset{showonlyrefs}
\usepackage{amsmath,amssymb,amsthm,bm}
\usepackage{dsfont,listings}
\renewcommand\footnotemark{}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lem}{Lemma}
\theoremstyle{definition}
\newtheorem{prop}{Proposition}
\newtheorem{corollary}{Corollary}
\newtheorem{assumption}{Assumption}
\newtheorem{defn}{Definition}
\newtheorem{example}{Example}
\newtheorem{rmk}{Remark}
\usepackage{xcolor}
\allowdisplaybreaks
\input macros.tex
\def\@normalsize{\@setsize\normalsize{11pt}\xpt\@xpt}
\setcounter{secnumdepth}{3}
\def\spacingset#1{\renewcommand{\baselinestretch}{#1}\small\normalsize}
\spacingset{1}
\def\fixme#1#2{\textbf{\color{red}[FIXME (#1): #2]}}
\def\mycomment#1{\textbf{\color{blue}#1}}
\def\ccomment#1{\textbf{\color{ForestGreen}#1}}
\usepackage[parfill]{parskip}
\newcommand{\Hnorm}[1]{\left\lVert#1\right\rVert_{\mathcal{H}_\alpha}}
\newcommand{\nullnorm}[1]{\left\lVert#1\right\rVert}
\def\LogisticM{\text{\bf \small LogisticM}}
\def\HardImpute{\text{\bf \small HardImpute}}
\def\SoftImpute{\text{\bf \small SoftImpute}}
\def\ALT{\text{\bf \small ALT}}
\usepackage{setspace}
\onehalfspacing
\usepackage{xr}
\usepackage{authblk}
\title{Nonparametric Trace Regression in High Dimensions via \\Sign Series Representation}
\author[1]{Chanwoo Lee}
\author[2]{Lexin Li}
\author[3]{Hao Helen Zhang}
\author[1]{Miaoyan Wang$^*$\footnote{$^*$corresponding author: miaoyan.wang@wisc.edu.}}
\affil[1]{Department of Statistics, University of Wisconsin--Madison}
\affil[2]{Division of Biostatistics, University of California--Berkeley}
\affil[3]{Department of Mathematics, University of Arizona}
\date{}
\begin{document}
\maketitle
\begin{abstract}
Learning of matrix-valued data has recently surged in a range of scientific and business applications. Trace regression is a widely used method to model the effects of matrix predictors and has shown great success in matrix learning.
However, nearly all existing trace regression solutions rely on two assumptions: (i) a known functional form of the conditional mean, and (ii) a global low-rank structure in the entire range of the regression function, both of which may be violated in practice. In this article, we relax these assumptions by developing a general framework for nonparametric trace regression models via structured sign series representations of high dimensional functions. The new model embraces both linear and nonlinear trace effects, and enjoys rank invariance to order-preserving transformations of the response. In the context of matrix completion, our framework leads to a substantially richer model based on what we coin as the ``sign rank'' of a matrix. We show that the sign series can be statistically characterized by weighted classification tasks. Based on this connection, we propose a learning reduction approach to learn the regression model via a series of classifiers, and develop a parallelizable computation algorithm to implement sign series aggregations. We establish the excess risk bounds, estimation error rates, and sample complexities. Our proposal provides a broad nonparametric paradigm for many important matrix learning problems, including matrix regression, matrix completion, multi-task learning, and compressed sensing. We demonstrate the advantages of our method through simulations and two applications, one on a brain connectivity study and the other on high-rank image completion. \end{abstract} \section{Introduction} \label{sec:intro} Matrix-valued data arise ubiquitously in modern data science applications, for instance, brain neuroimaging analysis, integrative genomics, and sensor network localization. Trace regression is one of the most commonly used approaches for modeling matrix data \citep{fan2019generalized,hamidi2019low}. The model characterizes the relationship between a scalar response $Y$ and a high dimensional matrix predictor $\bm X \in \mathcal{X} \subset \mathbb{R}^{d_1\times d_2}$ as \begin{equation}\label{eq:linear} Y=\langle \bm X, \bm B \rangle+ \varepsilon,\ \text{with } \bm B \in \mathbb{R}^{d_1\times d_2} \text{ and rank}(\bm B)\leq r, \end{equation} where $\varepsilon$ is a zero-mean sub-Gaussian noise, and $r\in\mathbb{N}_{+}$ is the matrix rank, typically assumed fixed and much smaller than $\min(d_1,d_2)$. The function $\bm X\mapsto \langle \bm X,\bm B\rangle=\text{tr}(\bm X\bm B^T)$ is called the trace effect, where $\text{tr}(\cdot)$ denotes the matrix trace. Over the last decade, the low-rank trace regression \eqref{eq:linear} has been studied intensively in numerous contexts, including matrix predictor regression, matrix completion, multi-task learning, and compressed sensing. \begin{itemize} \item{\bf Matrix predictor regression.} Linear trace regression~\eqref{eq:linear} was first proposed to model a matrix-valued predictor \citep{zhou2014regularized, wang2014network}, and was later generalized to model an exponential family response with a known link function \citep{wang2017generalized, fan2019generalized}. \smallskip \item {\bf Matrix completion.} In addition to the usual regression setting, another application of trace regression \eqref{eq:linear} is matrix completion, where the goal is to fill in the missing entries of a partially observed matrix \citep{Cai2016}.
Suppose the predictor space $\mathcal{X}$ consists of basis matrices $\bm{a}_i\bm{b}^T_j$ in $\mathbb{R}^{d_1\times d_2}$, with $\bm{a}_i\in\mathbb{R}^{d_1}$ (respectively, $\bm{b}_j\in\mathbb{R}^{d_2}$) being the basis vector with 1 at the $i$-th (respectively, $j$-th) position and 0 elsewhere. Let $\mathbb{P}_{\bm X}$ be a uniform distribution over $\mathcal{X}$. Then model \eqref{eq:linear} reduces to a matrix completion problem, $Y_{ij}=\langle \bm{a}_i \bm{b}^T_j,\bm B \rangle +\varepsilon_{ij}= B_{ij}+\varepsilon_{ij}$, where $Y_{ij}, B_{ij}\in\mathbb{R}$ denote the $(i,j)$-th entries of the data matrix $\bm Y$ and the signal matrix $\bm B$, respectively, for $(i,j)$ in the observed index set $\Omega\subset \{1,\ldots,d_1\}\times\{1,\ldots,d_2\}$. Moreover, the model becomes a matrix denoising problem~\citep{Ma2016} when the observation set is complete, i.e., $\Omega=\{1,\ldots,d_1\}\times\{1,\ldots,d_2\}$. \smallskip \item {\bf Multi-task learning.} Another application of trace regression is multi-task learning, where the goal is to predict one task's response by leveraging the structural similarities among multiple tasks. Here the predictor space $\mathcal{X}$ consists of only matrices that have a single non-zero row. The multi-task problem collects $n$ observations from $d_1$ different supervised learning tasks. Each task is modeled as a linear regression with an unknown $d_2$-dimensional parameter $\bm{b}_i$, $i=1,\ldots,d_1$, and the collection of the $\bm{b}_i$ forms the rows of $\bm B$. The model exploits similarities among the multiple tasks to predict the response of the $i$-th task \citep{caruana1997multitask,fan2019generalized}. \smallskip \item {\bf Compressed sensing.} Compressed sensing is also a special application of trace regression, where the goal is to recover the structured matrix $\bm B$ from multiple linear combinations of the entry observations. The space $\mathcal{X}$ is the family of measurement matrices given the sampling schemes. For example, Gaussian ensembles use random matrices $\bm X$ with i.i.d.\ entries from a standard normal distribution \citep{candes2011tight}, while factorized ensembles use rank-1 matrices $\bm X=\bm{u}\bm{v}^T$ for two random vectors $\bm{u}\in\mathbb{R}^{d_1}, \bm{v}\in\mathbb{R}^{d_2}$ \citep{recht2010guaranteed}. \end{itemize} \noindent In this article, we propose and study a nonparametric extension of the trace regression model \eqref{eq:linear} that encompasses all of the above matrix learning problems. In particular, we illustrate our method with two common problems, i.e., matrix predictor regression and matrix completion. \subsection{Inadequacy of low-rank trace regression} \label{sec:limit} The existing trace regression model \eqref{eq:linear} and its variants rely on two key assumptions: the relationship between $\mathbb{E}(Y|\bm X)$ and the trace effect is known a priori through some link function, and the matrix effect is encoded by a global low-rank matrix $\bm B$ over the entire function range. However, despite the popularity of trace regressions, these assumptions are stringent and may often be violated in practice. Next, we use two examples to illustrate the limitations of the classical low-rank trace regression. We present the pitfalls in the context of matrix completion; similar phenomena also occur in general matrix predictor regression. In the first example, we show the sensitivity of low-rank matrix models to order-preserving transformations.
Let $\bm B=\bm{U}\bm{V}^T \in \mathbb{R}^{d\times d}$ be a rank-5 matrix, where $\bm{U}, \bm{V} \in\mathbb{R}^{d\times 5}$ consist of i.i.d.\ standard normal entries and $d=50$. Now suppose a monotonic transformation $g(b)=(1+\exp(-cb))^{-1}$ is applied to $\bm B$ entry-wise, and we let $g(\bm B)$ be the signal matrix prior to measurements. A small $c$ implies approximate linearity, $g(b)\approx {1\over 2}+{c\over 4}b$, whereas a large $c$ implies a highly nonlinear, nearly binary map $b\mapsto \{0,1\}$. Fig~\ref{fig:limit}(a) shows that the numerical rank of $g(\bm B)$ increases rapidly with $c$, rendering the classical low-rank model ineffective. In genomic signal processing and other applications, the matrix of interest often undergoes an unknown transformation prior to measurements. This sensitivity makes low-rank models less desirable, as the global low-rank structure fails to be preserved under monotonic transformations. In the second example, we show the failure of the classical low-rank model in representing a structured but high-rank effect. We again consider matrix completion for simplicity, but this time with a full-rank signal matrix $\bm B\in\mathbb{R}^{d\times d}$, where the $(i,j)$-th entry is $\log(1+\max(i,j)/d)$ and $d=10$. Fig~\ref{fig:limit}(b) shows that $\bm B$ is clearly structured, yet of full rank, i.e., $\text{rank}(\bm B)=d$. The classical low-rank model is again ineffective in this case. \begin{figure} \begin{center} \includegraphics[width=.8\textwidth]{figure/low_rank.pdf} \caption{Two examples of high-rank matrix trace models. (a) The numerical rank of the matrix $g(\bm B)$ versus $c$ in the transformation, where the numerical rank is defined by $\textup{rank}(g(\bm B))=\min\{\textup{rank}(\bm C )\colon \FnormSize{}{\bm C-g(\bm B)} \leq 0.01 \FnormSize{}{g(\bm B)} \}$. The error bar represents standard errors from 10 realizations of $\bm B$. (b) Heatmap of a full-rank matrix $\bm B\in\mathbb{R}^{d\times d}$ with the $(i,j)$-th entry equal to $\log(1+\max(i,j)/d)$. In (a), $d=50$, and in (b), $d=10$.} \label{fig:limit} \end{center} \end{figure} These examples reveal the inadequacy of the conventional low-rank trace model~\eqref{eq:linear} in capturing important yet complex matrix effects. This has motivated us to develop a flexible class of nonparametric trace regression models for estimating nonlinear, local, and possibly high-rank effects of high dimensional matrices. We revisit these two examples in Section~\ref{sec:idea}, and show how those limitations can be overcome using a richer class of matrix models based on a new concept we coin the matrix ``sign rank''. \subsection{Our proposal and contributions} In this article, we first propose a new notion of low-rank sign representable functions, then develop a flexible class of nonparametric trace regression models based on this representation, together with the relevant theory and computational algorithms. Our proposal makes useful contributions on multiple fronts. First, the proposed work fills a crucial gap between global parametric models and local nonparametric models in the literature of matrix modeling. We develop a new nonparametric regression paradigm -- structured sign representations -- to address challenges previously difficult or infeasible in trace regressions, especially in the high dimensional regime where $d_1d_2\gg n$. The existing literature on matrix regressions almost exclusively focuses on low-rank trace effects on the global scale.
However, such a premise often fails, as the rank of the global effects may grow with the matrix dimension. By contrast, our proposed model enjoys rank invariance under monotonic transformations, and permits both low-rank and high-rank effects through aggregations of sign representation functions. We show that the low-rank sign functions not only preserve all the information of conventional low-rank models, but also provide powerful tools for extracting nonlinear, high-rank trace effects and estimating them accurately. Our framework is flexible and applicable to high-rank matrix learning problems, and it greatly expands the horizon of conventional low-rank matrix models. Second, we show that the sign function series can be statistically characterized by classification tasks with carefully specified weights. This characterization converts a hard regression problem, \emph{``what is the value of the nonparametric regression function?''}, into a series of easier classification problems, \emph{``does the regression function fall below a threshold?''} Correspondingly, we develop a learning reduction approach to estimate the regression function via a series of classifiers, by leveraging classification solutions from existing state-of-the-art computational algorithms. Theoretically, we establish the excess risk bounds, estimation error rates, and sample complexities. In particular, our error bound reveals the well-controlled complexity from sign estimation to regression, where \vspace{-0.01in} \begin{align*} \text{sign function error }& \lesssim \KeepStyleUnderBrace{t_n^{\alpha/( 2+\alpha)}}_{\text{classification error}},\\ \text{regression error } & \lesssim \KeepStyleUnderBrace{t_n^{\alpha/(2+\alpha)}\log H}_{\text{estimation error inherited from classification}}+\KeepStyleUnderBrace{\textstyle{1\over H}}_{\text{reduction bias}}+\KeepStyleUnderBrace{t_nH\log H}_{\text{reduction variance}}, \end{align*} in which $\alpha\geq 0$ quantifies the smoothness of the nonparametric regression function, $H\in\mathbb{N}_{+}$ is a resolution parameter that specifies the total number ($2H+1$) of sign functions to aggregate in our algorithm, $t_n=t_n(d,n)\to 0$ quantifies the convergence rate depending on the specific model, and $d=d_1=d_2$ for simplicity. In particular, we establish $t_n\asymp n^{-1}\log d$ under a two-way sparse nonparametric trace regression model (see Section~\ref{sec:sparse}), and $t_n \asymp n^{-1}d$ under a low sign rank nonparametric matrix completion model (see Section~\ref{sec:matrixcompletion}). These results imply a low sample complexity with respect to the matrix dimension. Note that the sign function estimation reaches a faster $\mathcal{O}(n^{-1})$ rate, compared to the $\mathcal{O}(n^{-1/2})$ regression rate, when $\alpha= \infty$; this confirms our premise that sign estimation is easier than regression. To our knowledge, these statistical guarantees are among the first for the learning reduction approach in the context of nonparametric matrix regression. Lastly, we develop an alternating direction method of multipliers (ADMM) algorithm for optimization with a family of large-margin loss functions. From the computational and learning perspectives, the proposed method can be characterized as the {\bf \small A}ggregation of {\bf \small S}tructured {\bf \small SI}gn {\bf \small S}eries for {\bf \small T}race regression ({\bf \footnotesize ASSIST}).
We show that the \text{\bf \footnotesize ASSIST}\ algorithm leverages recent advances in large-margin solvers as well as in non-convex optimization for low-rank, two-way sparse matrix learning. As demonstrated in our simulations and real data applications, the \text{\bf \footnotesize ASSIST}\ method contributes a new matrix modeling tool with easy interpretability and accurate prediction. \subsection{Related work} Nonparametric learning for matrix data is much more challenging than for standard multivariate data. Naively turning a matrix into a vector and then applying a classical vector-based nonparametric method can destroy the rich structural information encoded in the matrix data. Moreover, most nonparametric methods rely on some notion of smoothness in a local neighborhood of the predictors. In the context of matrix regressions, however, the predictor space is huge, rendering the ``local smoothness'' assumption less practical; this is partially why the topic is barely explored for data with a limited sample size. Our work is related to, but also clearly distinct from, several lines of existing research. The first line is the classical trace regression \citep{fan2019generalized,hamidi2019low}. The key difference is that the existing solutions all adopt a parametric model with a global low-rank structure. By contrast, our method is nonparametric and embraces nonlinear, local, and possibly high-rank effects for high dimensional matrices. The second line is the recent development of nonparametric methods for matrix-valued or tensor-valued data. In imaging analysis, convolutional neural networks (CNNs) have been widely adopted as a nonparametric tool for prediction given matrix-valued images \citep{goodfellow2016deep}. In contrast, our proposal studies not only prediction, but also estimation and interpretability, with theoretical guarantees. We also numerically compare our method with CNNs. \cite{hao2019sparse} proposed a sparse additive model with tensor predictors by extending the usual spline basis functions. \cite{zhou2020broadcasted} studied tensor predictors and proposed a broadcasting operation to introduce nonlinearity to individual tensor entries. Our nonparametric solution has broader implications than those approaches in estimating local low-rank effects. Our sign series representation of functions bridges the gap between regression and classification in high dimensions, and naturally lends the problem to a learning reduction type of solution. Moreover, although a matrix can be viewed as a two-dimensional tensor, the problem of nonparametric learning for matrix data itself is more parsimonious and deserves a full investigation. We leave the counterpart problem of nonparametric tensor regression as future research. The third line is function sign estimation, which is in turn related to classification, or more generally, level set estimation. The latter problem has a long history in statistics \citep{tsybakov1997nonparametric} and computational mathematics \citep{gibou2018review}. In particular, \cite{wang2008probability} proposed a conditional probability estimation method based on support vector machines (SVMs), but their results were restricted to a fixed number of features and to vector predictors only. \cite{singh2009adaptive} proposed a tree based method for multiple set extraction, but their goal was level set estimation rather than function estimation. None of these methods address the regression problem or high dimensional matrix predictors.
By contrast, we bridge the problems of level set estimation and nonparametric regression using low-rank sign series representations. Instead of constructing a point-wise function in the domain space, the sign representation partitions the domain space based on the function range. The benefit is analogous to that of Lebesgue versus Riemann integration in functional analysis, in the sense that the neighborhood is determined by the range space instead of the domain space. The former approach is especially appealing for matrix regressions, where the range space is indexed by a simple scalar response, whereas the domain space is huge and high dimensional. \subsection{Notation and organization} We adopt the following notation throughout this article. Let $\mathcal{X} \subset \mathbb{R}^{d_1\times d_2}$ denote the feature space equipped with some measure $\mathbb{P}_{\bm X}$. For a function $f\colon \mathcal{X} \to \mathbb{R}$, let $\textup{sgn} f$ denote its sign function, i.e., $\textup{sgn} f(\bm X)=1$ if $f(\bm X)>0$ and $\textup{sgn} f(\bm X)=-1$ otherwise. Let $\onenormSize{}{f}$ denote its $L_1$ norm, where we define $\onenormSize{}{f}=\mathbb{E}|f(\bm X)|$ with the expectation taken with respect to $\bm X \sim \mathbb{P}_{\bm X}$. For a set $A \subset \mathcal{X}$, let $\textup{sgn} (\bm X\in A)$ denote the sign function induced by $A$, i.e., the function taking value $1$ on the event $\{\bm X\in A\}$ and $-1$ otherwise. Let $[n] = \{1,\ldots,n\}$, and let $|\cdot|$ denote the cardinality. Let $\newnormSize{}{\cdot}_p$ denote the vector $p$-norm for $p\geq 0$. For a matrix $\bm B \in \mathbb{R}^{d_1\times d_2}$, let $\bm B_i$ denote its $i$-th row and $B_{ij}$ its $(i,j)$-th entry. Let $\newnormSize{}{\bm B}_{p,q}$ denote the matrix $(p,q)$-norm such that $\newnormSize{}{\bm B}_{p,q}=\newnormSize{}{\bm{b}}_q$, where $\bm{b}=(\newnormSize{}{\bm B_1}_p,\ldots,\newnormSize{}{\bm B_{d_1}}_p)^T\in\mathbb{R}^{d_1}$ consists of the $p$-norms of the rows of $\bm B$. In particular, let $\newnormSize{}{\bm B}_{1,0}=|\{i\in [d_1]\colon \bm B_i\neq 0\}|$ denote the number of non-zero rows in $\bm B$. Let $\FnormSize{}{\bm B}=\sqrt{\langle \bm B, \bm B \rangle}$ denote the matrix Frobenius norm, and $\mnormSize{}{\bm B}=\max_{(i,j)}|B_{ij}|$ the matrix maximum norm. Denote $a_n\asymp b_n$ if $c_1\leq \lim_{n\to \infty} a_n/b_n\leq c_2$ for some constants $c_1,c_2>0$, and denote $a_n\lesssim b_n$ if $\lim_{n\to\infty} a_n/b_n\leq c$ for some constant $c\geq 0$. Let $\mathcal{O}(\cdot)$ denote the big-O notation, $\tilde{\mathcal{O}}(\cdot)$ the variant that hides logarithmic factors, and $\mathds{1}(\cdot)$ the indicator function. Whenever applicable, the basic arithmetic operators are applied to a matrix in an element-wise manner. The rest of the article is organized as follows. Section \ref{sec:idea} presents the low-rank sign representable functions and our nonparametric trace regression model. Section \ref{sec:bridge} develops the learning reduction approach through weighted classifications, and establishes the corresponding statistical guarantees. Section \ref{sec:examples} specializes the general theory to two concrete learning problems, the low-rank sparse matrix predictor regression and the high-rank matrix completion. Section \ref{sec:estimation} studies the large-margin based estimation and develops an optimization algorithm. Section \ref{sec:simulation} presents the simulations, and Section \ref{sec:realdata} two real data applications. Section \ref{sec:discussion} concludes with a discussion.
All technical proofs and additional results are relegated to the Supplementary Appendix. \section{Nonparametric trace regression model} \label{sec:idea} In this section, we present our nonparametric trace regression model. Let $\bm X\in\mathcal{X}\subset \mathbb{R}^{d_1\times d_2}$ denote the matrix predictor, $Y\in\mathbb{R}$ the scalar response, and $\mathbb{P}_{\bm X,Y}$ their joint probability distribution. We consider the model, \begin{equation}\label{eq:model} Y=f(\bm X)+\varepsilon, \end{equation} where $f\colon\mathcal{X}\mapsto \mathbb{R}$ is the unknown regression function of interest, and $\varepsilon$ is a mean-zero noise. For a cleaner exposition, we assume the noise is bounded and the range of $Y$ is in $[-1,1]$; the extension to sub-Gaussian noise is provided in Section~\ref{sec:sub-Gaussian} of the Appendix. In addition, we allow heterogeneous noise, in that $\varepsilon$ may depend on $\bm X$. Model \eqref{eq:model} therefore incorporates both continuous and binary-valued responses. For instance, we allow the binary regression problem where $Y$ is a $\{0,1\}$-label from a Bernoulli distribution, in which case the noise variance depends on the mean, and $f$ represents the conditional probability, $f(\bm X)=\mathbb{P}(Y=1|\bm X)$. Our goal is to estimate the regression function $f(\bm X)=\mathbb{E}(Y|\bm X)$ based on $n$ i.i.d.\ training samples $(\bm X_i,Y_i)_{i=1,\ldots,n}$. We next introduce the notion of low-rank sign representable functions, which is essential for bridging the usual global low-rank trace models and nonparametric local low-rank trace models. \begin{defn}[Rank-$r$ sign representable function] \label{def:caliF} A function $f\colon \mathcal{X}\mapsto[-1,1]$ is called $(r,\pi)$-sign representable, for a given level $\pi\in[-1,1]$ and a rank $r \in \mathbb{N}_{+}$, if the function $(f-\pi)$ has the same sign as a rank-$r$ trace function; that is, \begin{equation} \label{eq:sign} \textup{sgn}(f(\bm X)-\pi) = \textup{sgn}(\langle \bm X, \bm B \rangle+b),\quad \text{for all }\bm X\in\mathcal{X}, \end{equation} where $\bm B=\bm B(\pi)$ is a rank-$r$ matrix, and $b=b(\pi)$ is an intercept. A function $f$ is called globally rank-$r$ sign representable if $f$ is $(r,\pi)$-sign representable for all $\pi\in[-1,1]$. Let $\mathcal{F}_{\textup{sgn}}(r)$ denote the rank-$r$ sign representable function family, and let $\Phi(r)=\{\phi\colon \bm X\mapsto \langle \bm X, \bm B \rangle+b\ \big|\ \text{rank}(\bm B)\leq r, (\bm B,b)\in\mathbb{R}^{d_1\times d_2}\times\mathbb{R}\}$ denote the rank-$r$ trace function family. \end{defn} Next, we show that \eqref{eq:model} and \eqref{eq:sign} together form a very general family of models that incorporates most existing matrix regression models, including low-rank trace regression, single index models, and high-rank matrix completion models. \begin{example}[Generalized trace regression] The linear and generalized trace regression \citep{zhou2014regularized, wang2017generalized, fan2019generalized} imposes that $f(\bm X)=g(\langle \bm X, \bm B \rangle)$ with a known link function $g$ and a rank-$r$ coefficient matrix $\bm B$. By definition, $\textup{sgn}(f(\bm X)-\pi)=\textup{sgn}(\langle \bm X, \bm B \rangle -g^{-1}(\pi))$ holds for every $\pi$ in the function range. Therefore, our model includes the generalized trace regression, i.e., $f \in \mathcal{F}_{\textup{sgn}}(r)$. In particular, the usual trace model corresponds to the identity link $g$.
More generally, any monotonic $g$ is allowed as the link function, e.g., the logistic function $g(z)=(1+\exp(-z))^{-1}$, the arctangent function $g(z)={1/\pi}\arctan(z)+{1/2}$, the rectified linear unit (ReLU) function $g(z)=\max(0,z)$, and any inverse cumulative distribution function. \end{example} \begin{example}[Single index regression model] The monotonic matrix predictor single index model \citep{balabdaoui2019least,ganti2017learning} assumes a similar form of the regression function, $f(\bm X)=g(\langle \bm X, \bm B\rangle)$ with a low-rank $\bm B$ and a monotonic $g$, but the form of $g$ is unknown. By definition, our model family $\mathcal{F}_{\textup{sgn}}(r)$ incorporates the single index model and does not require knowing $g$ a priori. \end{example} \begin{example}[Multivariate normal mixture] The prospective model from matrix linear discriminant analysis \citep{hu2020matrix} considers a binary response $Y\in\{0,1\}$, and assumes the matrix $\bm X|Y$ follows a Gaussian mixture distribution, $\bm X|\{Y=i\} = \bm B_0 + \bm B\times i + \bm{E}_i$, $i=0,1$, where $\bm B_0$ is an arbitrary baseline matrix, $\bm B$ is a rank-$r$ matrix, and $(\bm{E}_i)_{i=0,1}$ are two mutually independent noise matrices with i.i.d.\ standard normal entries. Our model incorporates this model, by noting that $f(\bm X)=\mathbb{E}(Y|\bm X)=\text{logistic}(\langle \bm B, \bm X \rangle+b)$ for some $b\in\mathbb{R}$, and thus $f\in \mathcal{F}_{\textup{sgn}}(r)$. \end{example} Definition \ref{def:caliF} leads to another notion, the matrix sign rank, which is important for applying our proposed model to matrix completion as a special nonparametric trace regression. Specifically, for a given matrix $\bm{\Theta}\in\mathbb{R}^{d_1\times d_2}$, define its sign rank as \begin{equation*} \textup{srank}(\bm{\Theta})=\min\big\{ \textup{rank}(\bm{\Theta}')\colon \textup{sgn}(\bm{\Theta}')=\textup{sgn}(\bm{\Theta}),\ \bm{\Theta}'\in\mathbb{R}^{d_1\times d_2} \big\}. \end{equation*} This concept is important in areas such as combinatorics \citep{cohn2013fast} and quantum mechanics \citep{de2003nondeterministic}, and, to our knowledge, we are the first to exploit this notion for nonparametric learning. To better understand its relation to the proposed nonparametric trace regression, we consider model~\eqref{eq:sign} with the predictor space $\mathcal{X} = \{\bm{a}_i\bm{b}_j^T\colon (i,j)\in[d_1]\times[d_2]\}$, where $\bm{a}_i\in\mathbb{R}^{d_1}, \bm{b}_j\in\mathbb{R}^{d_2}$ are the basis vectors. For matrix completion, a function $f$ over $\mathcal{X}$ is equivalently represented by a $d_1$-by-$d_2$ signal matrix $\bm{\Theta}=\entry{f(\bm{a}_i\bm{b}_j^T)}$. Our proposed function family $\mathcal{F}_{\textup{sgn}}(r)$ essentially defines a new family of structured matrices with a low sign rank, as shown in the next proposition. \begin{prop}[Sign-representable function over basis matrices]\label{prop:signbasis} Consider the predictor space $\mathcal{X}=\{\bm{a}_i\bm{b}_j^T \colon (i,j)\in[d_1]\times[d_2]\}$. We represent a bounded function $f\colon \mathcal{X}\to [-1,1]$ by its function values organized as a matrix $\bm{\Theta}=\entry{f(\bm{a}_i\bm{b}_j^T)} \in [-1,1]^{d_1\times d_2}$, for basis vectors $\bm{a}_i\in\mathbb{R}^{d_1}, \bm{b}_j\in\mathbb{R}^{d_2}$. If $f$ is rank-$r$ sign representable, then $\max_{\pi\in[-1,1]}\textup{srank}(\bm{\Theta}-\pi)\leq r+1$ (the constant 1 is due to the intercept in~\eqref{eq:sign}).
Conversely, if $\max_{\pi\in[-1,1]}\textup{srank}(\bm{\Theta}-\pi)\leq r$, then $\bm{\Theta}$ defines a rank-$r$ sign representable function $f$. \end{prop} We then define the rank-$r$ sign representable family for the signal matrix in matrix completion, \begin{align*} \mathcal{M}_{\textup{sgn}}(r)=\{\bm{\Theta}\colon \max_{\pi\in[-1,1]}\textup{srank}(\bm{\Theta}-\pi)\leq r, \ \mnormSize{}{\bm{\Theta}}\leq 1\}. \end{align*} The family $\mathcal{M}_{\textup{sgn}}(r)$ is a special case of the function family $\mathcal{F}_{\textup{sgn}}(r)$ in Definition \ref{def:caliF} with $b=0$ and the predictor space $\mathcal{X}=\{\bm{a}_i\bm{b}_j^T\colon (i,j)\in[d_1]\times[d_2]\}$. We next further compare the sign rank with the matrix rank in this setting. \begin{prop}[Sign rank vs.\ matrix rank]\label{prop:signrank} Consider the setting in Proposition \ref{prop:signbasis}. Then, \begin{enumerate}[label=(\alph*)] \item $\max_{\pi\in[-1,1]}\textup{srank}(\bm{\Theta}-\pi)\leq \textup{rank}(\bm{\Theta})+1$. \item If $\bm{\Theta} \in \mathcal{M}_{\textup{sgn}}(r)$, then $g(\bm{\Theta})/\mnormSize{}{g(\bm{\Theta})}\in\mathcal{M}_{\textup{sgn}}(r+1)$ for any strictly monotonic function $g\colon \mathbb{R}\to\mathbb{R}$. Here $g(\bm{\Theta})$ denotes the matrix obtained by applying $g(\cdot)$ to $\bm{\Theta}$ entry-wise. \item For every dimension $d$, there exists a $d$-by-$d$ matrix $\bm{\Theta}\in\mathcal{M}_{\textup{sgn}}(2)$ such that $\textup{rank}(\bm{\Theta})= d$. \end{enumerate} \end{prop} \noindent Proposition \ref{prop:signrank} highlights the advantages of using the sign rank in high dimensional matrix analysis. The first property implies that the classical low-rank matrix model is a special case of our low sign rank model. The second property shows that, compared to the matrix rank, the sign rank remains nearly invariant under monotonic transformations, since $\textup{srank}(g(\bm{\Theta})) \leq 1+\textup{srank}(\bm{\Theta})$ for all strictly monotonic functions $g$. The last property shows that the sign rank can be dramatically smaller than the conventional matrix rank. Therefore, our model $\mathcal{M}_{\textup{sgn}}(r)$ is strictly richer than the usual low-rank model. A key advantage of the sign rank concept is that the low sign rank assumption is weaker, and hence more realistic, than the classical low matrix rank assumption. We next revisit the high-rank matrix models in Fig~\ref{fig:limit} to show that the signal matrices there have high matrix rank but low sign rank. Meanwhile, we provide some additional examples of low sign rank matrices in Section~\ref{sec:signrank} of the Appendix, including matrices with repeating patterns \citep{chan2014consistent}, banded matrices, and the identity matrix. \begin{example}[Single index model based matrix completion] For the model in Fig~\ref{fig:limit}(a), $g(\bm B)$ is a low sign rank matrix because $\textup{srank}(g(\bm B)-\pi)\leq 1+\textup{rank}(\bm B)=6$ for all $\pi$ in the function range. However, $g(\bm B)$ itself is often high-rank, as shown in Fig~\ref{fig:limit}(a). \end{example} \begin{example}[High-rank matrix completion model]\label{ex:high-rank} For the model in Fig~\ref{fig:limit}(b), the matrix $\bm B=\entry{\log(1+\max(i,j)/d)}$ is full-rank. Remarkably, this high-rank matrix is rank-2 sign representable, i.e., $\bm B\in \mathcal{M}_{\textup{sgn}}(2)$. This is because $\textup{srank}(\bm B-\pi)=\textup{srank}(\bar \bm B)$, where $\bar \bm B=\entry{\textup{sgn}\big(\max(i,j)-d(e^\pi-1)\big)}$ is a block matrix with rank at most 2.
More generally, matrices of the type $\bm B=\entry{g(\max(i,j)/d)}$ belong to $\mathcal{M}_{\textup{sgn}}(2r)$, where $g(\cdot)$ is a polynomial of degree $r$. See Section~\ref{sec:signrank} of the Appendix. \end{example} Our proposed nonparametric matrix regression model $\mathcal{F}_{\textup{sgn}}(r)$ therefore implies a new matrix completion model based on $\mathcal{M}_{\textup{sgn}}(r)$. In the next sections, we first establish the general theory for $\mathcal{F}_{\textup{sgn}}(r)$, and then specialize the results to the high-rank completion problem in Section~\ref{sec:matrixcompletion}. \section{From classification to regression: a learning reduction approach} \label{sec:bridge} In this section, we present a learning reduction approach to estimate $f$ from the model specified in \eqref{eq:model} and \eqref{eq:sign}. The crux of our approach is to provably convert the regression estimation problem into a series of sign function estimation problems, which are in turn solved by weighted classifications. More specifically, we dichotomize the response $Y_i$ into a series of binary observations, $\textup{sgn}(Y_i-\pi)$, for $\pi\in\mathcal{H}=\{-1,\ldots,-{1/H}, 0, {1/H}, \ldots,1\}$, where $H\in\mathbb{N}_{+}$ is a resolution parameter that controls the total number of sign functions to estimate. Then, for each $\pi$, we estimate the sign function $\textup{sgn}(f-\pi)$ by performing a classification task, \begin{equation}\label{eq:proposal} \hat \phi_\pi =\argmin_{\phi\in\Phi(r)}{1\over 2n}\sum_{i=1}^n\text{weighted-classification}(\textup{sgn}(Y_i-\pi),\ \textup{sgn} \phi(\bm X_i)), \end{equation} where $\Phi(r)$ is the collection of rank-$r$ trace functions, and weighted-classification$(\cdot,\cdot)$ denotes a classification objective function with a response-specific weight for each sample point. The weight in the objective function is crucial in our method, and we detail its form in the next section. Our final regression function estimate takes the form, \begin{equation}\label{eq:stepfunction} \hat f= {1\over 2H+1}\sum_{\pi \in \mathcal{H}} \textup{sgn} \hat \phi_\pi. \end{equation} \begin{figure}[t] \includegraphics[width=\textwidth]{figure/demo_method.pdf} \caption{Nonparametric matrix regression via sign function series estimation. We use a series of weighted classifications to estimate the sign functions, then obtain the regression function estimate via sign aggregation. Here, $\bm X\in\mathcal{X}$ denotes the matrix-valued predictor, $f\colon \mathcal{X}\to \mathbb{R}$ denotes the regression function, and $\textup{sgn}(f-\pi)\in\{-1,1\}$ is the sign function, where $\pi\in\{-1,\ldots,-1/H,0,1/H,\ldots, 1\}$ is the series of levels to aggregate in our algorithm.} \label{fig:method} \end{figure} We comment that the $(2H+1)$ sign function estimation tasks are fully separable, leading naturally to parallel computation. Moreover, the sign functions bridge the problems of level set estimation and Bayes classification, as we detail in Section~\ref{sec:identifiability}. Fig~\ref{fig:method} illustrates our main idea graphically. We refer to our method as the {\bf \small A}ggregation of {\bf \small S}tructured {\bf \small SI}gn {\bf \small S}eries for {\bf \small T}race regression, and abbreviate it as \text{\bf \footnotesize ASSIST}. Next, we describe the specific form of the weighted classification, the uniqueness of the classification optimizer, and the accuracy guarantee of the estimator.
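Before doing so, we give a minimal computational sketch of the reduction in \eqref{eq:proposal} and \eqref{eq:stepfunction}. The snippet below only illustrates the dichotomize-classify-aggregate pipeline and is not the estimator studied in Section~\ref{sec:estimation}: the callable \texttt{fit\_weighted\_classifier} is a hypothetical placeholder for any weighted binary classifier, whereas our actual solver enforces the rank-$r$ trace function constraint via the large-margin ADMM algorithm of Section~\ref{sec:estimation}.

\begin{lstlisting}[language=Python]
import numpy as np

def assist_fit(X, Y, fit_weighted_classifier, H=10):
    """Sketch of the ASSIST learning reduction (illustration only).

    X : array of shape (n, d1, d2), matrix predictors.
    Y : array of shape (n,), responses scaled to [-1, 1].
    fit_weighted_classifier : placeholder callable taking
        (X, labels, weights) and returning phi_hat, a map from a
        (d1, d2) matrix to a real number; ideally a rank-r trace
        function, but any weighted binary classifier can plug in.
    Returns f_hat, an estimate of the regression function.
    """
    levels = np.arange(-H, H + 1) / H  # pi in {-1, ..., -1/H, 0, 1/H, ..., 1}
    classifiers = []
    for pi in levels:  # 2H+1 separable tasks; trivially parallelizable
        labels = np.where(Y - pi > 0, 1.0, -1.0)  # dichotomized responses
        weights = np.abs(Y - pi)                  # response-specific weights
        classifiers.append(fit_weighted_classifier(X, labels, weights))

    def f_hat(X_new):
        # aggregate the estimated sign functions into the step-function estimate
        signs = [1.0 if phi(X_new) > 0 else -1.0 for phi in classifiers]
        return float(np.mean(signs))

    return f_hat
\end{lstlisting}

Because the $2H+1$ classification tasks are independent, the loop parallelizes with one task per worker, matching the parallel computation noted above.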
\subsection{Statistical characterization of sign functions via weighted classification} For a given level $\pi\in[-1,1]$, define the $\pi$-shifted response $\bar Y_{\pi,i} =Y_i-\pi$ for $i\in[n]$. We propose the weighted classification objective function in~\eqref{eq:proposal} as \begin{equation}\label{eq:loss} L(\phi;(\bm X_i,\bar Y_{\pi,i})_{i\in[n]})={1\over 2n}\sum_{i=1}^n\KeepStyleUnderBrace{|\bar Y_{\pi,i}|}_{\text{response-specific weight}}\times\KeepStyleUnderBrace{|\textup{sgn} \bar Y_{\pi,i} - \textup{sgn} \phi(\bm X_i)|}_{\text{classification loss}}, \end{equation} where $\phi\in \Phi(r)$ is the trace function to be optimized, and $|\bar Y_{\pi, i}|$ serves as the weight. This response-specific weight incorporates the magnitude information of the response into the classification, in that response values far away from the target level are penalized more heavily in the objective \eqref{eq:loss}. In the special case of a binary response $Y_i\in\{-1,1\}$ and target level $\pi=0$, the objective \eqref{eq:loss} reduces to the usual classification loss. Next, define the weighted classification risk, \begin{equation}\label{eq:constrained} \textup{Risk}_\pi(\phi)=\mathbb{E}L(\phi; (\bm X_i,\bar Y_{\pi,i})_{i\in[n]}), \end{equation} where the expectation is taken with respect to the joint distribution of $(\bm X_i,Y_i)$ drawn i.i.d.\ from $\mathbb{P}_{\bm X,Y}$. The next theorem quantifies the global optimum of \eqref{eq:constrained}. \begin{thm}[Global optimum of weighted classification risk]\label{thm:oracle} For any given level $\pi\in[-1,1]$, under the model specified in \eqref{eq:model} and \eqref{eq:sign}, for all functions $\bar f$ such that $\textup{sgn} \bar f=\textup{sgn}(f-\pi)$, it holds that $\textup{Risk}_\pi(\bar f) = \inf\{\textup{Risk}_\pi(\phi)\colon \phi\in \Phi(r)\}$. \end{thm} \noindent Theorem~\ref{thm:oracle} suggests a practical procedure to estimate $\textup{sgn}(f-\pi)$ through weighted classifications. The result ensures that the sign function $\textup{sgn}(f-\pi)$ minimizes the weighted classification risk. The converse, however, may not hold, due to possibly multiple global optimizers of $\textup{Risk}_\pi(\cdot)$. A simple example is a constant regression function $f(\bm X)=\mathbb{E}(Y|\bm X) = c$, in which case every function $\phi\in \Phi(r)$ minimizes $\textup{Risk}_\pi(\cdot)$ at the level $\pi=c$. The next section resolves this issue by characterizing the uniqueness of the risk optimizer. \subsection{Identifiability}\label{sec:identifiability} To establish the statistical guarantee for the minimizer of $\textup{Risk}_\pi(\cdot)$, we first address its uniqueness, up to sign equivalence. It turns out that the local behavior of the regression function $f$ around $\pi$ plays a key role in establishing the identifiability of the sign function series from weighted classifications. We introduce some additional notation. We call $S_{\textup{bayes}}(\pi)=\{\bm X\in\mathcal{X}\colon f(\bm X)\geq \pi\}$ the Bayes set at level $\pi$, and $\partial S_{\textup{bayes}}(\pi)=\{\bm X\in \mathcal{X}\colon f(\bm X)=\pi\}$ the level set boundary. Note that there is a one-to-one correspondence between the sign function $\textup{sgn}(f-\pi)$ and the Bayes set $S_{\textup{bayes}}(\pi)$. We choose to present the results in terms of $S_{\textup{bayes}}(\pi)$ for easier comparison with the existing classification literature \citep{tsybakov2004optimal,singh2009adaptive}.
We call a level $\pi\in[-1,1]$ a mass point if the level set boundary $\partial S_{\textup{bayes}}(\pi)$ has non-zero measure under $\mathbb{P}_{\bm X}$. Let $\mathcal{N}=\{\pi\in[-1,1] \colon \mathbb{P}_{\bm X}\left[f(\bm X)=\pi\right]\neq 0\}$ denote the collection of all mass points of $f$. We assume there exists a constant $c>0$, independent of the feature space dimension, such that $|\mathcal{N}|\leq c<\infty$. We introduce a notion of smoothness for the cumulative distribution function (CDF) of $f(\bm X)$ under the measure $\mathbb{P}_{\bm X}$. \begin{defn} [$\alpha$-smoothness] \label{ass:decboundary} Suppose $\mathbb{P}_{\bm X}$ is a continuous distribution, and denote the CDF $G(\pi)=\mathbb{P}_{\bm X}[f(\bm X)\leq \pi]$. A function $f$ is called $(\alpha,\pi)$-locally smooth, for a given $\pi \notin \mathcal{N}$, if there exist constants $C=C(\pi)>0$ and $\alpha=\alpha(\pi)\geq 0$ such that \begin{equation}\label{eq:mass} \sup_{0\leq t<\rho(\pi, \mathcal{N})}{G(\pi+t)-G(\pi-t)\over t^{\alpha}}\leq C, \end{equation} where $\rho(\pi,\mathcal{N}) = \min_{\pi'\in \mathcal{N}} |\pi-\pi'|$ denotes the distance from $\pi$ to the nearest point in $\mathcal{N}$. We make the convention that $\rho(\pi,\mathcal{N})=2$ (which equals the range of $\pi\in[-1,1]$) when $\mathcal{N}$ is empty, and that $\alpha=\infty$ when the numerator in \eqref{eq:mass} is zero. The largest possible $\alpha=\alpha(\pi)$ in \eqref{eq:mass} is called the smoothness index at level $\pi$. The function $f$ is called $\alpha$-globally smooth if \eqref{eq:mass} holds with a global constant $C$ for all $\pi\in[-1,1]$ except for a finite number of levels. \end{defn} \noindent Fig~\ref{fig:CDF} shows three examples of the CDF with various levels of smoothness. A small value $\alpha<1$ indicates an unbounded density at level $\pi$; in the extreme case, $G(\pi)$ jumps at $\pi$. A large value $\alpha>1$ corresponds to a vanishing point mass around $\pi$; in the extreme case, $G(\pi)$ remains flat near $\pi$. The intermediate case $\alpha=1$ arises when $G(\pi)$ has a finite non-zero sub-derivative in the vicinity of $\pi$. The global smoothness index is the minimal $\alpha$ over all $\pi$'s, where we allow exceptions at a finite number of levels. \begin{figure}[t] \includegraphics[width=.95\textwidth]{figure/cdf_new.pdf} \caption{Three examples of the CDF $G(\pi)=\mathbb{P}_{\bm X}(f(\bm X)\leq \pi)$, with the local smoothness index $\alpha$ at $\pi$ depicted by dashed lines. (a) and (b): functions $G(\pi)$ with $\alpha=1$, because $G(\pi)$ has finite non-zero sub-derivatives over the range of $\pi$. (c): a function $G(\pi)$ with $\alpha=\infty$ at most levels $\pi$ (in blue), except for $|\mathcal{N}|=r$ jump points (in red), where $|\mathcal{N}|$ denotes the number of jump points.} \label{fig:CDF} \end{figure} Next, we show that $\alpha$-smoothness with $\alpha\neq 0$ implies the uniqueness of $S_{\textup{bayes}}(\pi)$ as the optimizer of $\textup{Risk}_\pi(\cdot)$. For two sets $S_1, S_2\subset \mathcal{X}$, define the probabilistic set difference, \begin{align*} d_{\Delta}(S_1,S_2) = \mathbb{P}_{\bm X}(S_1\Delta S_2)=\mathbb{P}_{\bm X}\{\bm X\colon \bm X\in (S_1\setminus S_2) \cup (S_2\setminus S_1)\}, \end{align*} and the risk difference, \begin{align*} d_\pi(S_1,S_2) = \textup{Risk}_\pi(\textup{sgn}(S_1))-\textup{Risk}_\pi(\textup{sgn}(S_2)). \end{align*} \begin{thm}[Identifiability]~\label{thm:identifiability} Suppose $f$ is $\alpha$-globally smooth over $\mathcal{X}$.
Then, \begin{align}\label{eq:identity} d_{\Delta}(S,S_{\textup{bayes}}(\pi)) \lesssim \left[d_\pi(S,S_{\textup{bayes}}(\pi))\right]^{\alpha\over 1+\alpha}+{1\over\rho(\pi, \mathcal{N})} d_\pi(S,S_{\textup{bayes}}(\pi)), \end{align} for all sets $S\subset\mathcal{X}$ and all levels $\pi\in[-1,1]$ except for a finite number of levels. \end{thm} \noindent We make two remarks. First, the bound~\eqref{eq:identity} controls the worst-case perturbation of the classifiers, under the measure $\mathbb{P}_{\bm X}$, with respect to the weighted classification risks. When $\alpha \neq 0$, the inequality \eqref{eq:identity} immediately implies the uniqueness, up to a measure-zero set under $\mathbb{P}_{\bm X}$, of $S_{\textup{bayes}}(\pi)$ in minimizing $\textup{Risk}_\pi(\cdot)$. Second, our identifiability result extends earlier results for a single level set estimation to multiple level set estimations. Existing work \citep{singh2009adaptive,xu2020class} considered only a finite number of $\pi$'s, and provided only the first term in the bound \eqref{eq:identity}. In contrast, our bound quantifies the full dependence on the level $\pi$, and establishes the recovery condition for $S_{\textup{bayes}}(\pi)$ uniformly over all possible $\pi$'s. It turns out that both terms in the bound \eqref{eq:identity} are crucial for our regression function estimation: the first term contributes to the classification error, and the second term contributes to the variance in the sign series aggregation. \subsection{Regression risk bound} In this section, we provide the statistical accuracy guarantees for the learning reduction based estimators~\eqref{eq:proposal} and~\eqref{eq:stepfunction}. Our theory consists of three main ingredients. First, we leverage the $\alpha$-smoothness to establish a sharp rate for the classification risk of $\hat \phi_\pi$, faster than the usual root-$n$ convergence. The improvement stems from the fact that, under the given assumptions, the variance of the excess classification loss is bounded in terms of its expectation. Because the variance decreases as we approach the optimal $\textup{sgn}(f-\pi)$, the risk of $\hat \phi_\pi$ converges to the optimal risk more quickly than simple uniform convergence results would suggest. The second step is to convert the risk error into the probabilistic set error via Theorem~\ref{thm:identifiability}. The last step is to aggregate the set errors into the final nonparametric function estimate. A careful error analysis reveals the joint contribution of the sign aggregation and the bias-variance trade-off. The next result establishes the estimation accuracy of the sign function estimator \eqref{eq:proposal}. \begin{thm}[Sign function estimation]\label{thm:main} Suppose the regression function $f\in\mathcal{F}_{\textup{sgn}}(r)$ is $\alpha$-globally smooth over $\mathcal{X}$, and let $d_{\max}=\max(d_1,d_2)$. Then, for all $\pi\in[-1,1]$ except for a finite number of levels, with high probability at least $1-\exp(-rd_{\max})$ over the training data $(\bm X_i,Y_i)_{i\in[n]}$, we have \begin{equation}\label{eq:riskbound} \onenormSize{}{\textup{sgn} \hat \phi_\pi- \textup{sgn}(f-\pi)} \lesssim \left({rd_{\max} \over n}\right)^{\alpha\over 2+\alpha}+{1\over \rho^2(\pi, \mathcal{N})}\left({rd_{\max}\over n}\right), \end{equation} where the $L_1$ norm is taken with respect to the measure $\bm X\sim\mathbb{P}_{\bm X}$. \end{thm} \noindent Theorem~\ref{thm:main} quantifies the statistical convergence of the sign function estimation.
For a fixed $\pi$, the second term in \eqref{eq:riskbound} is absorbed into the first, leading to the rate $\mathcal{O}(n^{-\alpha/(2+\alpha)})$. The sign estimation thus reaches the fast rate $1/n$ when $\alpha=\infty$, and slows down as $\alpha$ decreases (for instance, the rate is $1/\sqrt{n}$ when $\alpha=2$), with the bound becoming vacuous in the extreme case $\alpha=0$, when the point mass concentrates at the boundary. This is consistent with our intuition, because the best rate $\alpha = \infty$ corresponds to a clear separation with no point mass at the Bayes set boundary $\partial S_{\textup{bayes}}(\pi)$, whereas the worst rate $\alpha = 0$ corresponds to a heavy mass around $\partial S_{\textup{bayes}}(\pi)$. Furthermore, the sign function estimation achieves consistency in the high dimensional regime as long as $n \gg d_{\max}\to \infty$ and $\alpha\neq 0$. Combining the sign representability of the regression function and the uniform sign estimation accuracy, we obtain our main theoretical result on the nonparametric trace regression. \begin{thm}[Regression function estimation]\label{thm:regression} Suppose the same conditions as in Theorem~\ref{thm:main} hold. With high probability at least $1-\exp(-rd_{\max})$ over the training data $(\bm X_i,Y_i)_{i\in[n]}$, we have \begin{align}\label{eq:bound} \onenormSize{}{\hat f-f} \lesssim \KeepStyleUnderBrace{\left({rd_{\max}\log H \over n}\right)^{\alpha \over 2+\alpha}}_{\text{estimation error from sign functions}}+\KeepStyleUnderBrace{1\over H}_{\text{reduction bias}}+\KeepStyleUnderBrace{\left({rd_{\max}\over n}\right)H \log H}_{\text{reduction variance}}, \end{align} for any resolution parameter $H \in \mathbb{N}_{+}$. In particular, setting $H\asymp \left( {n\over rd_{\max}} \right)^{1/2}$ gives \begin{equation}\label{eq:final} \onenormSize{}{\hat f-f} \lesssim \left({rd_{\max} \log n\over n}\right)^{\min\left({\alpha\over 2+\alpha}, {1\over 2}\right)}, \end{equation} where the $L_1$ norm is taken with respect to the measure $\bm X\sim\mathbb{P}_{\bm X}$. \end{thm} \noindent Theorem~\ref{thm:regression} establishes the convergence rate of the proposed learning reduction estimator for the nonparametric trace regression. We make three remarks. First, the bound~\eqref{eq:bound} reveals three sources of error: the estimation error from the sign functions, the bias due to the sign series representation, and the variance thereof. Recall that $H$ determines the number of sign functions in the sign series representation; it controls the bias-variance tradeoff here. Second, the regression is robust to a few off-target classifications, as long as the majority are accurate. This can be seen in Fig~\ref{fig:CDF}(c), where the classification is nonidentifiable at the mass points (red lines). Nevertheless, the regression estimation is still possible, because the nearby classifications provide the sign signal (blue lines). This fact shows the benefit of sign aggregation, and also explains the trade-off in choosing $H$. Intuitively, a larger value of $H$ increases the approximation accuracy, but meanwhile renders the classification harder near the mass points. Third, the final regression error is generally no better than the sign error, as seen by comparing the bounds in \eqref{eq:final} and \eqref{eq:riskbound}. This confirms our premise that classification is easier than regression. On the other hand, our sign representation approach allows us to disentangle the complexity and carry the theoretical guarantees from classification over to regression.
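To see where the choice $H\asymp (n/(rd_{\max}))^{1/2}$ in Theorem~\ref{thm:regression} comes from, the following back-of-the-envelope calculation (a heuristic that suppresses constants and the mild $\log H$ factors in \eqref{eq:bound}) balances the last two terms. Writing $t_n = rd_{\max}/n$, the arithmetic-geometric mean inequality gives
\begin{align*}
{1\over H}+t_n H \geq 2\sqrt{t_n}, \qquad \text{with equality when } H = t_n^{-1/2}=\left({n\over rd_{\max}}\right)^{1/2},
\end{align*}
so the reduction bias and variance jointly contribute an error floor of order $\sqrt{rd_{\max}/n}$. This explains the cap of $1/2$ in the exponent of \eqref{eq:final}: even for an infinitely smooth $f$ (i.e., $\alpha=\infty$), the aggregated regression error cannot beat this floor, although each individual sign function is estimated at the faster rate \eqref{eq:riskbound}.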
\section{Two applications of nonparametric matrix learning} \label{sec:examples} In this section, we apply the general theory in Theorem~\ref{thm:regression} to two specific nonparametric matrix learning problems: the low-rank sparse matrix predictor regression, and the high-rank matrix completion. \subsection{Low-rank sparse matrix predictor regression} \label{sec:sparse} The first problem we consider is matrix predictor regression. In addition to the low sign rank structure, we also introduce a two-way sparsity structure. That is, we impose that some rows and columns of $\bm B$ are zero, where $\bm B$ is as defined in \eqref{eq:sign}. We comment that sparsity is a commonly used structure in matrix data modeling \citep{zhou2014regularized}, and it is scientifically plausible in numerous applications \citep{Zhang2015}. Specifically, we extend the notation $\Phi(r)$ and $\mathcal{F}_{\textup{sgn}}(r)$ introduced in Definition~\ref{def:caliF} to incorporate the sparsity. Let $\Phi(r,s_1,s_2)$ denote the collection of trace functions, \begin{align*} \Phi(r,s_1,s_2)=\{\phi\colon \bm X\mapsto \langle \bm X, \bm B \rangle +b \ \big| \text{rank}(\bm B)\leq r, \text{supp}(\bm B)\leq (s_1,s_2), (\bm B,b)\in\mathbb{R}^{d_1\times d_2}\times \mathbb{R}\}, \end{align*} where $\text{supp}(\bm B)$ denotes the support of $\bm B$, with the sparsity parameters $s_1=\newnormSize{}{\bm B}_{1,0}=|\{i\in[d_1]\colon \bm B_i\neq \mathbf{0}\}|$ and $s_2=\newnormSize{}{\bm B^T}_{1,0}=|\{j\in[d_2]\colon \bm B^T_j\neq \mathbf{0}\}|$ denoting the number of non-zero rows and non-zero columns of $\bm B$, respectively. Similarly, let $\mathcal{F}_{\textup{sgn}}(r,s_1,s_2)$ denote a family of rank-$r$, support-$(s_1,s_2)$ sign representable functions based on \eqref{eq:sign}. We have the following result. \begin{thm}[Nonparametric low-rank two-way sparse regression]\label{thm:sparse} Consider the same setup as in Theorem~\ref{thm:regression}, except that we replace $\mathcal{F}_{\textup{sgn}}(r)$ and $\Phi(r)$ with $\mathcal{F}_{\textup{sgn}}(r,s_1,s_2)$ and $\Phi(r,s_1,s_2)$, respectively. Set $H\asymp {\left(n\over r(s_1+s_2)\log d_{\max}\right)}^{1/2}$ in~\eqref{eq:stepfunction}. With probability at least $1-d_{\max}^{-r(s_1+s_2)}$ over the training data $(\bm X_i, Y_i)_{i\in[n]}$, the estimation error of \eqref{eq:stepfunction} is bounded by \begin{equation}\label{eq:final2} \onenormSize{}{\hat f- f} \lesssim \left({r(s_1+s_2) \log d_{\max}\log n \over n}\right)^{\min\left({\alpha\over 2+\alpha}, {1\over 2}\right)}. \end{equation} \end{thm} \noindent We make two remarks. First, the bound \eqref{eq:final2} suggests that the estimator remains consistent in the high dimensional regime as $d_{\max}$ and $n\to \infty$, as long as $d_{\max}$ grows sub-exponentially in the sample size $n$. Comparing \eqref{eq:final2} with \eqref{eq:final}, this sample complexity shows the pronounced advantage of the low-rank two-way sparse structural model. Second, the two-way sparsity structure facilitates interpretability, which we further demonstrate through numerical examples in Section~\ref{sec:comparison}. \subsection{High-rank matrix completion}\label{sec:matrixcompletion} The second problem we consider is matrix completion.
Let $\bm Y\in\mathbb{R}^{d_1\times d_2}$ be a data matrix generated from the model, \begin{equation}\label{eq:modelcompletion} \bm Y=\bm{\Theta}+\bm{E}, \end{equation} where $\bm{\Theta}\in\mathcal{M}_{\textup{sgn}}(r)$ denotes an unknown signal matrix, and $\bm{E}$ is an error matrix consisting of zero-mean, independent but not necessarily identically distributed entries. For simplicity, we assume $d_1=d_2=d$. Model \eqref{eq:modelcompletion} can be viewed as a special case of the model in Section~\ref{sec:idea}, where the predictor space consists of the basis matrices in $\mathbb{R}^{d\times d}$, and the data matrix $\bm Y=\entry{Y_{ij}}$ collects the scalar responses $Y_{ij} \in \mathbb{R}$. In this case, the problem of regression estimation becomes the estimation of $\bm{\Theta}$. We observe an incomplete data matrix $\bm Y_\Omega$ from \eqref{eq:modelcompletion}, where $\Omega \subset [d]^2$ represents the index set of the observed entries. We allow both uniform and non-uniform sampling schemes for $\Omega$. Let $\Pi=\{p_\omega\}$ be an arbitrary predefined probability distribution over the full index set with $\sum_{\omega\in[d]^2}p_\omega=1$. Assume the indices $\omega$ in $\Omega$ are i.i.d.\ draws with replacement from the full index set following the distribution $\Pi$. Denote the sampling rule by $\omega \sim \Pi$, and write $\bm Y(\omega)$ for the matrix entry indexed by $\omega$. Now applying our learning reduction approach to the matrix completion problem~\eqref{eq:modelcompletion} yields the signal matrix estimate \begin{equation}\label{eq:est} \hat \bm{\Theta} = {1\over 2H+1}\sum_{\pi \in \mathcal{H}}\textup{sgn}(\hat \bm Z_\pi), \end{equation} where, for every $\pi\in\{-1,\ldots,-1/H,0,1/H,\ldots,1\}$, the matrix $\hat \bm Z_\pi$ is the solution to the weighted classification \begin{equation*} \hat \bm Z_\pi = \argmin_{\bm Z\colon \textup{rank}(\bm Z)\leq r}\sum_{\omega\in\Omega} \KeepStyleUnderBrace{|\bm Y(\omega)-\pi|}_{\text{weight}}\KeepStyleUnderBrace{|\textup{sgn}(\bm Y(\omega)-\pi)-\textup{sgn}(\bm Z(\omega))|}_{\text{classification loss}}. \end{equation*} \noindent To assess the accuracy of the estimate $\hat \bm{\Theta}=\hat \bm{\Theta}_{d\times d}$ in the high dimensional regime $d\to \infty$, we need to put the model in the nonparametric context of Definition~\ref{ass:decboundary}. We next extend the notion of $\alpha$-smoothness to a discrete feature space as follows. Let $\Delta s = 1/d^2$ denote a small tolerance, where $d^2$ represents the number of elements in the feature space. We quantify the distribution of the entries in the matrix $\bm{\Theta}$ using a pseudo density, i.e., a histogram with bin width $2\Delta s$. Specifically, let $G(\pi)=\mathbb{P}_{\omega\sim \Pi}[\bm{\Theta}(\omega)\leq \pi]$ denote the CDF of $\bm{\Theta}(\omega)$ under $\omega\sim \Pi$. We partition $[-1,1]=\mathcal{N} \cup \mathcal{N}^c$, where $\mathcal{N}$ consists of levels whose pseudo density based on $2\Delta s$-bins is asymptotically unbounded; i.e., \[ \mathcal{N}=\left\{\pi\in[-1,1] \colon {G(\pi+{\Delta s})-G(\pi-{\Delta s})\over \Delta s} \geq c_1 \right\},\ \text{for some universal constant }c_1>0, \] and $\mathcal{N}^c$ otherwise. Let $|\mathcal{N}|_{\text{cover}}$ be the covering number of $\mathcal{N}$ with $2\Delta s$-bins; i.e., $|\mathcal{N}|_{\text{cover}} =\text{Leb}(\mathcal{N})/2\Delta s$, where $\text{Leb}(\cdot)$ denotes the Lebesgue measure.
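For intuition, the following Python sketch (ours, assuming the uniform sampling distribution $\Pi$ and a uniform grid of candidate levels) computes the pseudo density of the entries of $\bm{\Theta}$ and flags the candidate mass set $\mathcal{N}$.

\begin{verbatim}
import numpy as np

def mass_set(theta, c1=1.0, n_grid=201):
    """theta: d-by-d signal matrix with entries in [-1, 1]."""
    vals = np.sort(theta.ravel())
    ds = 1.0 / vals.size                     # tolerance Delta_s = 1/d^2
    grid = np.linspace(-1.0, 1.0, n_grid)    # candidate levels pi
    cdf = lambda t: np.searchsorted(vals, t, side="right") / vals.size
    pseudo_density = (cdf(grid + ds) - cdf(grid - ds)) / ds
    return grid[pseudo_density >= c1]        # levels in the candidate set N
\end{verbatim}

For a block-structured signal, as in the stochastic block model example below, the returned levels concentrate around the few distinct block means.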
The following definition is a discrete analogue of Definition~\ref{ass:decboundary}. \begin{defn}[$\alpha$-smoothness for discrete distribution] Let $\Pi$ be the sampling distribution over $[d^2]$. We say that the signal matrix $\bm{\Theta}(\omega)$ is $\alpha$-globally smooth under $\omega\sim \Pi$, if there exist constants $c_2,c_3>0$, such that $|\mathcal{N}|_{\text{cover}}\leq c_2$, and for all $\pi \in\mathcal{N}^c$, \begin{equation*} \label{eq:smooth} \sup_{\Delta s \leq t<\rho(\pi, \mathcal{N})}{G(\pi+{t})-G(\pi-{t})\over t^\alpha} \leq c_3, \;\; \text{ with } \rho(\pi,\mathcal{N})=\min_{\pi'\in \mathcal{N}}|\pi-\pi'|+\Delta s, \end{equation*} where $\rho(\pi,\mathcal{N})$ denotes the adjusted distance from $\pi$ to the nearest point in $\mathcal{N}$. \end{defn} We assess the estimation error of~\eqref{eq:est} using the mean absolute error (MAE), $\text{MAE}(\hat \bm{\Theta}, \bm{\Theta}) = \mathbb{E}|\hat \bm{\Theta}(\omega)-\bm{\Theta}(\omega)|$, where the expectation is with respect to a future entry $\bm{\Theta}(\omega)$ drawn from the distribution $G$. We have the following result. \begin{thm}[Nonparametric matrix completion]\label{thm:estimation} Consider the matrix model~\eqref{eq:modelcompletion} with an $\alpha$-smooth signal matrix $\bm{\Theta}\in\mathcal{M}_{\textup{sgn}}(r)$. Set $H \asymp \left( |\Omega|\over dr\right)^{1/2}$. With probability at least $1-\exp(-dr)$ over $\bm Y_\Omega$, the estimate \eqref{eq:est} satisfies \begin{equation}\label{eq:real} \textup{MAE}(\hat \bm{\Theta}, \bm{\Theta})\lesssim \left(dr \log|\Omega| \over |\Omega|\right)^{\min({\alpha \over 2+\alpha}, {1\over 2})}. \end{equation} \end{thm} \noindent We remark that our estimation accuracy \eqref{eq:real} applies to both low-rank and high-rank signal matrices. Moreover, the estimation rate depends on the sign complexity $\bm{\Theta}\in\mathcal{M}_{\textup{sgn}}(r)$, where $r$ can be much smaller than the usual matrix rank, as shown in Proposition \ref{prop:signrank}. In fact, our theorem can also be extended to allow $|\mathcal{N}|_{\text{cover}}$ growing with $d$, with a slight modification of the setup; see Appendix~\ref{sec:unbounded} for such an extension. We next illustrate Theorem \ref{thm:estimation} with two matrix completion examples and compare with the existing literature. \begin{example}[Stochastic block model based matrix completion] The stochastic block model \citep{chi2020provable} assumes a checkerboard structure under marginal row and column permutations. The signal matrix belongs to our sign representable family $\bm{\Theta} \in \mathcal{M}_{\textup{sgn}}(r)$, where $r$ is the number of blocks. Moreover, the block matrix is $\infty$-globally smooth, because $\mathcal{N}$ consists of finitely many $2\Delta s$-bins covering the block means. Our signal estimate achieves the rate $\tilde \mathcal{O}(d^{-1/2})$ when $\alpha=\infty$ with no missingness. This rate agrees with the minimax root-mean-square error (RMSE) rate for stochastic block models with a fixed number of blocks \citep{gao2016optimal}. \end{example} \begin{example}[Single index model based matrix completion] The single index model based completion \citep{ganti2015matrix} admits a signal matrix $\bm{\Theta}=g(\bm B)$, where $g$ is an unknown monotonic function, and $\bm B$ is an unknown low-rank matrix. Note that $\bm{\Theta}$ itself is often of a high matrix rank, as shown in Fig~\ref{fig:limit}(a). Suppose the CDF of $\bm{\Theta}(\omega)$ has a bounded pseudo density with $\alpha=1$. Applying Theorem~\ref{thm:estimation} yields the estimation error rate $\tilde \mathcal{O}(d^{-1/3})$, which is faster than the RMSE rate $\tilde \mathcal{O}(d^{-1/4})$ obtained earlier \citep{ganti2015matrix}. \end{example}
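To make the estimator~\eqref{eq:est} concrete, the following Python sketch replaces the combinatorial rank-constrained 0-1 minimization by subgradient descent on its hinge surrogate over a rank-$r$ factorization $\bm Z=\bm U\bm V^T$. This is a heuristic stand-in for the exact minimizer analyzed above, and the function names are ours.

\begin{verbatim}
import numpy as np

def sign_level(Y, mask, pi, r, lr=0.5, n_iter=300, seed=0):
    """Level-pi weighted classification over rank-r matrices Z = U V^T,
    with the 0-1 loss replaced by its hinge surrogate (an assumption)."""
    rng = np.random.default_rng(seed)
    d1, d2 = Y.shape
    U = 0.1 * rng.standard_normal((d1, r))
    V = 0.1 * rng.standard_normal((d2, r))
    w = np.abs(Y - pi) * mask          # weights |Y(omega) - pi|, observed only
    s = np.where(Y >= pi, 1.0, -1.0)   # labels sgn(Y(omega) - pi)
    for _ in range(n_iter):
        g = -w * s * ((s * (U @ V.T)) < 1.0)   # weighted hinge subgradient
        U, V = U - lr * (g @ V) / mask.sum(), V - lr * (g.T @ U) / mask.sum()
    return np.sign(U @ V.T)

def complete_theta(Y, mask, r, H=10):
    """Aggregate the 2H+1 level estimates into the signal estimate."""
    levels = np.arange(-H, H + 1) / H
    return sum(sign_level(Y, mask, pi, r) for pi in levels) / len(levels)
\end{verbatim}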
Finally, we obtain the sample complexity of the nonparametric matrix completion, summarized in the next corollary. \begin{corollary}[Sample complexity for nonparametric completion] \label{thm:sample-complexity} Suppose the same conditions as in Theorem~\ref{thm:estimation} hold. When $\alpha\neq 0$, with probability at least $1-\exp(-dr)$ over $\bm Y_\Omega$, \begin{equation*} \textup{MAE}(\hat \bm{\Theta}, \bm{\Theta})\to 0, \quad \text{as}\quad {|\Omega|\over {d} r \log|\Omega|}\to \infty. \end{equation*} \end{corollary} \noindent Corollary \ref{thm:sample-complexity} improves on the earlier work~\citep{yuan2016tensor, pmlr-v119-lee20i} by allowing both low-rank and high-rank signals. Moreover, the sample size requirement depends only on the sign complexity $\tilde \mathcal{O}(dr)$, but not on the nonparametric complexity $\alpha$. We also note that $\tilde \mathcal{O}(dr)$ roughly matches the degrees of freedom of the signals, suggesting the optimality of our sample requirements. \section{Large-margin implementation and ADMM algorithm} \label{sec:estimation} In Section \ref{sec:bridge}, we have established the methodology and theory for the nonparametric matrix trace regression under the 0-1 loss, since this is the canonical loss for classification. However, this loss may be difficult to optimize in practice. In this section, we extend the approach to a continuous large-margin loss, and present the corresponding optimization algorithm. We consider two loss functions: the hinge loss $F(z) = (1-z)_+$ for support vector machines, and the psi-loss $F(z)=2\min(1,(1-z)_+)$, with $z_{+}=\max(z,0)$~\citep{shen2003psi}. These two losses are among the most commonly used in classification, and both satisfy the linear excess risk bound; see Section~\ref{sec:large-margin}. We focus on the nonparametric low-rank sparse matrix regression problem. With straightforward modifications, the solution applies to matrix completion and other matrix learning problems as well. \subsection{Large-margin learning} Specifically, we generalize the 0-1 loss minimization \eqref{eq:loss} to the following continuous large-margin loss minimization problem, \begin{align}\label{eq:large-margin} \hat \phi_{\pi, F} = \argmin_{\phi \in\Phi(r,s_1,s_2)}\left\{ {1\over n}\sum_{i=1}^n |Y_i-\pi|F(\phi(\bm X_i)\textup{sgn}(Y_i-\pi))+ \lambda \FnormSize{}{\phi}^2\right\}, \end{align} where $F\colon \mathbb{R}\to \mathbb{R}_{\geq 0}$ is a continuous function of the margin $z=y\phi(\bm X)$, $\lambda>0$ is the penalty parameter, and $\FnormSize{}{\phi}$ is the penalty function. We set $\FnormSize{}{\phi}=\FnormSize{}{\bm B}$, with $\bm B$ being the coefficient matrix associated with $\phi\in\Phi(r,s_1,s_2)$. The use of a large-margin loss in \eqref{eq:large-margin} allows us to leverage efficient large-margin optimization algorithms, while maintaining desirable statistical properties under mild conditions. The benefit of ridge penalization has been studied previously \citep{shen2003psi}. We obtain the corresponding regression function estimate as \begin{equation}\label{eq:stepfunction-large-margin} \hat f_{F} = {1\over 2H+1}\sum_{\pi \in \mathcal{H}} \textup{sgn} \hat \phi_{\pi, F}. \end{equation}
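For concreteness, the two surrogate losses and the weighted empirical $F$-risk in~\eqref{eq:large-margin} take only a few lines in Python; this is a direct transcription of the displayed formulas (the penalty term is omitted), with function names of our choosing.

\begin{verbatim}
import numpy as np

def hinge(z):
    return np.maximum(1.0 - z, 0.0)            # F(z) = (1 - z)_+

def psi(z):
    # F(z) = 2 min(1, (1 - z)_+), the psi-loss
    return 2.0 * np.minimum(1.0, np.maximum(1.0 - z, 0.0))

def empirical_F_risk(F, phi_vals, y, pi):
    """(1/n) sum_i |Y_i - pi| F(phi(X_i) sgn(Y_i - pi)), without the penalty."""
    return np.mean(np.abs(y - pi) * F(phi_vals * np.sign(y - pi)))
\end{verbatim}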
\subsection{ADMM optimization} We next present an algorithm to solve \eqref{eq:large-margin} for a given $\pi\in\mathcal{H}$. We first note that the estimation problem \eqref{eq:large-margin} is equivalent to the optimization, \begin{equation}\label{eq:sampleoptim} \min_{\substack{(\bm B,b)\colon \textup{rank}(\bm B)\leq r, \text{supp}(\bm B)\leq (s_1,s_2)}}{1\over n}\sum_{i=1}^n|\bar Y_{\pi, i}| F\big( [\langle \bm X_i,\bm B \rangle+b] \textup{sgn} \bar Y_{\pi, i}\big) + \lambda\FnormSize{}{\bm B}^2, \end{equation} where we recall that $\bar Y_{\pi, i}=Y_i-\pi$ is the $\pi$-shifted response. The loss function $F$ can be convex, e.g., the hinge loss, or non-convex, e.g., the psi-loss. Meanwhile, the optimization \eqref{eq:sampleoptim} has a non-convex feasible region because of the low-rank and sparsity constraints. We propose an alternating direction method of multipliers (ADMM) algorithm to solve \eqref{eq:sampleoptim}. We introduce a dual variable and an additional feasibility constraint to perform coordinate descent on the augmented Lagrangian function. The augmented objective of \eqref{eq:sampleoptim} is \begin{equation*} \label{eq:ADMM} L(\bm B,b, \bm S,\bm{\Lambda},\rho) = {1\over n}\sum_{i=1}^n|\bar Y_{\pi, i}|F\big([\langle \bm X_i,\bm B \rangle+b]\textup{sgn} \bar Y_{\pi, i}\big) + \lambda\FnormSize{}{\bm B}^2+\rho\FnormSize{}{\bm B-\bm S}^2+\langle \bm{\Lambda}, \bm B-\bm S\rangle, \end{equation*} where $\bm B\in \mathbb{R}^{d_1\times d_2}$ is the unconstrained primal variable, $\bm S\in\mathbb{R}^{d_1\times d_2}$ is the constrained dual variable satisfying $\textup{rank}(\bm S)\leq r$ and $\text{supp}(\bm S)\leq (s_1,s_2)$, $\bm{\Lambda}\in\mathbb{R}^{d_1\times d_2}$ is the Lagrangian multiplier, and $\rho>0$ is the step size parameter. Note that in $L(\bm B,b, \bm S,\bm{\Lambda},\rho)$, the non-convexity is moved from the first two terms in $\bm B$ to the last two, simpler, terms in $\bm S$. This separability simplifies the optimization for a wide range of loss functions and constraints. We next minimize $L(\bm B,b, \bm S,\bm{\Lambda},\rho)$ via coordinate descent, by iteratively updating one variable at a time while holding the others fixed. Each update reduces to a simpler problem and can be efficiently solved by standard algorithms. Specifically, given variables $(\bm S,\bm{\Lambda},\rho)$ and $\bar \bm S = (2\rho\bm S-\bm{\Lambda}) / [2(\rho+\lambda)]$, the objective with respect to $(\bm B,b)$ is \begin{equation*} \label{eq:primal} L(\bm B,b|\bm S,\bm{\Lambda},\rho)={1\over n}\sum_{i=1}^n |\bar Y_{\pi, i}| F\big ([\langle \bm X_i,\bm B \rangle+b]\textup{sgn} \bar Y_{\pi, i}\big) +(\lambda+\rho)\FnormSize{}{\bm B-\bar \bm S}^2. \end{equation*} The optimization in~\eqref{eq:primal} is a standard vector-based classification problem with a ridge penalty and an offset $\bar \bm S$. There are a number of state-of-the-art algorithms for weighted SVM~\citep{wang2008probability} and psi-learning~\citep{shen2003psi}, which are readily available to solve this problem. Next, given $(\bm B,b,\bm{\Lambda},\rho)$, and $\bar \bm B=(2\rho\bm B+\bm{\Lambda}) / (2\rho)$, the objective with respect to $\bm S$ is \begin{equation}\label{eq:dual} L(\bm S|\bm B,b,\bm{\Lambda},\rho)=\FnormSize{}{\bm S-\bar \bm B}^2,\quad \text{subject to} ~~ \textup{rank}(\bm S)\leq r \text{ and }\text{supp}(\bm S)\leq (s_1,s_2). \end{equation} This is equivalent to the best sparse low-rank approximation, in the least-squares sense, to the matrix $\bar \bm B$. Compared to the original objective~\eqref{eq:sampleoptim}, the least-squares objective is easier to handle.
A number of learning algorithms have been designed to solve this problem, e.g., sparse PCA, sparse SVD, and projection pursuit \citep{Ma2013}. We adopt the recently developed double projection method, which has competitive performance in the high dimensional regime \citep{Ma2016}. Finally, the Lagrangian multiplier $\bm{\Lambda}$ is updated by $\bm{\Lambda}\leftarrow\bm{\Lambda}+2\rho(\bm B-\bm S)$. Following common practice in non-convex matrix optimization \citep{Ma2016}, we run the optimization from multiple initializations to locate a final estimate with the lowest objective value. We summarize the above optimization procedure in Algorithm \ref{alg:weighted}. \subsection{Hyperparameter tuning} We briefly describe the hyperparameters in Algorithm~\ref{alg:weighted} and discuss their choices in practice. There are two sets of hyperparameters, one set for model specification, and the other for algorithmic stability. The model hyperparameters are $(r,s_1,s_2)$, which determine the complexity of the sign functions. We choose $(r,s_1,s_2)$ via a grid search based on the cross-validation regression error. The resolution of the grid search depends on the problem size; for instance, in our brain connectivity data example with $d_1=d_2=68$ in Section~\ref{sec:brain}, we search for the optimal values of $r, s_1,s_2$ over $[d]$, with an increment of 5, under the natural constraint $r\leq s_1=s_2$. The algorithm hyperparameters are $(H, \lambda, \rho)$. For $H$ and $\lambda$, their optimal choices are given in Theorems \ref{thm:regression} and \ref{thm:extension}, respectively. In practice, we use the defaults $H=\min(20, \sqrt{n})$ and $\lambda=\min(0.1,n^{-1})$, which perform well in our numerical experiments. For the step size $\rho$, which controls the closeness between the dual and primal variables, we initialize it at $1$ and increase it geometrically by a factor of 1.1 during the iterations, until the relative change in the primal residual $\FnormSize{}{\bm B-\bar \bm S}$ falls below a threshold~\citep{parikh2014proximal}. In our numerical analyses, we observe that this scheme provides a stable optimization trajectory. \begin{algorithm}[t!] \caption{{\bf Nonparametric low-rank two-way sparse matrix regression via ADMM} } \label{alg:weighted} \begin{algorithmic}[1] \INPUT data $(\bm X_i,Y_{i})_{i\in[n]}$, rank $r$, support $(s_1,s_2)$, ridge parameter $\lambda$, resolution parameter $H$. \For {$\pi \in \mathcal{H}=\{ -1, \ldots, -{1\over H}, 0, {1\over H},\ldots, 1\}$} \State initialize the dual variable $\bm S$ randomly, the Lagrangian multiplier $\bm{\Lambda}=\mathbf{0}$, the step size $\rho=1$, and compute $\bar Y_{\pi, i}=Y_i-\pi$. \Repeat \State update $(\bm B,b) \leftarrow \argmin L(\bm B, b|\bm S,\bm{\Lambda},\rho)$. \State update $\bm S \leftarrow \argmin \FnormSize{}{\bm S-{1\over 2\rho}(2\rho\bm B+\bm{\Lambda})}^2 \ \text{subject to }\textup{rank}(\bm S)\leq r$ and $\text{supp}(\bm S)\leq (s_1,s_2)$. \State update $\bm{\Lambda} \leftarrow \bm{\Lambda}+2\rho(\bm B-\bm S)$. \State update $\rho\leftarrow1.1\rho$. \Until convergence \State return trace function estimate, $\hat \phi_\pi\colon \bm X\mapsto \langle \hat \bm B, \bm X \rangle+\hat b$. \EndFor \OUTPUT nonparametric regression function estimate, $\hat f= {1\over 2H+1}\sum_{\pi \in \mathcal{H}}\textup{sgn} \hat \phi_\pi$. \end{algorithmic} \end{algorithm}
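The following Python sketch mirrors the updates of Algorithm~\ref{alg:weighted} for a fixed level $\pi$. It is a schematic, not the implementation used in our experiments: the inner weighted classification solver is left abstract, and the sparse low-rank projection is a simple row/column screening followed by a truncated SVD, a heuristic stand-in for the double projection method of \citep{Ma2016}.

\begin{verbatim}
import numpy as np

def sparse_lowrank_project(B, r, s1, s2):
    """Heuristic projection: keep the s1 rows / s2 columns with the largest
    norms, then truncate the SVD of the retained submatrix to rank r."""
    S = np.zeros_like(B)
    rows = np.argsort(-np.linalg.norm(B, axis=1))[:s1]
    cols = np.argsort(-np.linalg.norm(B, axis=0))[:s2]
    U, sig, Vt = np.linalg.svd(B[np.ix_(rows, cols)], full_matrices=False)
    S[np.ix_(rows, cols)] = (U[:, :r] * sig[:r]) @ Vt[:r]
    return S

def admm_level(X, y, pi, r, s1, s2, weighted_classifier, lam=0.1, n_iter=50):
    """One run of Algorithm 1 at level pi.  `weighted_classifier(X, labels,
    weights, ridge, offset)` is an abstract inner solver returning (B, b)."""
    d1, d2 = X[0].shape
    S, Lam, rho = np.zeros((d1, d2)), np.zeros((d1, d2)), 1.0
    labels = np.where(y >= pi, 1.0, -1.0)
    weights = np.abs(y - pi)
    for _ in range(n_iter):
        S_bar = (2 * rho * S - Lam) / (2 * (rho + lam))    # offset for the B-step
        B, b = weighted_classifier(X, labels, weights, lam + rho, S_bar)
        S = sparse_lowrank_project((2 * rho * B + Lam) / (2 * rho), r, s1, s2)
        Lam = Lam + 2 * rho * (B - S)                      # multiplier update
        rho *= 1.1                                         # geometric step size increase
    return B, b
\end{verbatim}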
\subsection{Large-margin statistical guarantees}\label{sec:large-margin} We next establish the statistical accuracy of the large-margin estimators under some additional technical assumptions. Let $f_{\textup{bayes},\pi}=\textup{sgn}(f-\pi)$ denote the ground truth sign function at $\pi\in[-1,1]$, and let \begin{align} \label{eq:riskdef} \begin{split} \textup{Risk}_\pi(\phi) & = {1\over 2}\mathbb{E}|Y-\pi| \big|\textup{sgn}(Y-\pi)-\textup{sgn} \phi(\bm X)\big|, \\ \textup{Risk}_{\pi,F}(\phi) & = \mathbb{E}|Y-\pi|F\big(\phi(\bm X)\textup{sgn}(Y-\pi)\big), \end{split} \end{align} denote the 0-1 risk and the $F$-risk, respectively, where $F$ is the continuous surrogate loss, and the expectation is taken with respect to $(\bm X,Y)\sim \mathbb{P}_{\bm X,Y}$. For simplicity, we assume $d_1 = d_2 = d$ and $\FnormSize{}{\bm X}\leq 1$ with probability 1. We consider the high dimensional regime where both $n$ and $d$ grow, while $(r,s_1,s_2)$ remain fixed. We need the following assumptions. \begin{assumption}[Assumptions on surrogate loss]\label{ass:main} \hfill \begin{enumerate} \item[(a)] (Approximation error) For any given $\pi\in[-1,1]$, assume there exists a sequence of functions $\phi^{(n)}_\pi\in\Phi(r,s_1,s_2)$, such that $\textup{Risk}_{\pi,F}(\phi^{(n)}_\pi)-\textup{Risk}_{\pi,F}(f_{\textup{bayes},\pi})\leq a_n$, for some sequence $a_n\to 0$ as $n\to\infty$. Furthermore, assume $\FnormSize{}{\phi_{\pi}^{(n)}} \leq J$ for some constant $J>0$. \item[(b)] (Common loss) $F(z)=(1-z)_{+}$ is the hinge loss, or $F(z)=2\min(1,(1-z)_{+})$ is the psi-loss. \end{enumerate} \end{assumption} \noindent Assumption~\ref{ass:main}(a) quantifies the representation capability of $F$ and $\Phi(r, s_1, s_2)$. We note that, although the Bayes rule $f_{\textup{bayes},\pi}$ also depends on $n$ implicitly through $d=d(n)$, we drop the dependence on $n$ for notational simplicity. Assumption~\ref{ass:main}(b) implies the Fisher consistency bound for the weighted risk \citep{scott2011surrogate}, \begin{equation*} \label{eq:fisher} \textup{Risk}_\pi(\phi)-\textup{Risk}_\pi(f_{\textup{bayes},\pi})\leq C[\textup{Risk}_{\pi,F}(\phi)-\textup{Risk}_{\pi,F}(f_{\textup{bayes},\pi})], \text{ for all $\pi\in[-1,1]$ and all $\phi$}, \end{equation*} where $C=1$ for the 0-1 or the hinge loss, and $C=1/2$ for the psi-loss; see Lemma~\ref{lem:prepare} in the Appendix. Therefore, it suffices to bound the excess $F$-risk in order to bound the usual 0-1 risk. Under Assumption \ref{ass:main}, we establish the estimation accuracy guarantee for the large-margin estimators \eqref{eq:large-margin} and \eqref{eq:stepfunction-large-margin}. \begin{thm}[Large-margin estimation]\label{thm:extension} Consider the same setup as in Theorem~\ref{thm:sparse}, and denote $t_n = {r(s_1+s_2)\log d \over n}$. Suppose the surrogate loss $F$ satisfies Assumption~\ref{ass:main} with $a_n \lesssim t_n^{(\alpha+1)/(\alpha+2)}$. Set $H\asymp t_n^{-1/2}$ in~\eqref{eq:stepfunction-large-margin} and $\lambda\asymp t_n^{(\alpha+1)/(\alpha+2)}+t_n/\rho(\pi,\mathcal{N})$ in \eqref{eq:large-margin}. Then, with probability at least $1-\exp(-nt_n)$ over the training data $(\bm X_i,Y_i)_{i\in[n]}$, we have: \begin{enumerate}[label=(\alph*)] \item (Sign function estimation). For all $\pi\in[-1,1]$ except for a finite number of levels, \begin{equation*} \onenormSize{}{\textup{sgn}\hat \phi_{\pi,F}-\textup{sgn}(f-\pi)}\lesssim t_n ^{\alpha\over 2+\alpha}+{1\over \rho^2(\pi,\mathcal{N})}t_n. \end{equation*} \item (Regression function estimation). \begin{equation*} \onenormSize{}{\hat f_{F}- f} \lesssim \left(t_n\log n\right)^{\min\left( {1\over 2},{\alpha \over 2+\alpha} \right)}.
\end{equation*} \end{enumerate} \end{thm} \section{Simulations} \label{sec:simulation} In this section, we first evaluate the empirical performance of our method \text{\bf \footnotesize ASSIST}\ through four experiments, with varying sample size, response type, matrix dimension, and model complexity. We then compare \text{\bf \footnotesize ASSIST}\ with some alternative methods. \subsection{Impacts of sample size, matrix dimension, and model complexity} \label{sec:validation} We consider a random matrix predictor $\bm X\in\mathbb{R}^{d\times d}$ with i.i.d.\ entries sampled from Uniform[0,1], and simulate two types of responses, continuous and binary, through \begin{itemize} \item Continuous regression: $Y=f(\bm X)+\varepsilon$, where $\varepsilon \sim \text{Normal}(0,0.1^2)$; \item Binary regression: $Y\in\{-1,1\}$, with $\mathbb{P}(Y=1|\bm X)={1\over 2}(f(\bm X)+1)$. \end{itemize} We set the regression function $f(\bm X)=h(z)$, where $h\colon \mathbb{R}\to[-1,1]$ is a non-decreasing function, $z\in\mathbb{R}$ is a nonlinear predictor given by $z= (G^{-1}\circ \bar G)(\langle \bm X, \bm B \rangle)$, $\circ$ denotes function composition, $\bm B\in\mathbb{R}^{d\times d}$ is a fixed rank-$r$, support-$(s,s)$ matrix, $\bar G\colon \mathbb{R}\to[0,1]$ is the CDF of $\langle \bm X, \bm B \rangle$ induced by $\bm X\sim \mathbb{P}_{\bm X}$ so that $\bar G(\langle \bm X, \bm B \rangle)\sim$ Uniform[0,1], and $G\colon \mathbb{R}\to[0,1]$ is the CDF of some reference distribution. This construction yields a highly nonlinear function $f$. We set the matrix dimension $d=20,30,\ldots,60$, the training sample size $n=150, 200, \ldots, 400$, and various combinations of $(r,s)$. In this study, we set $\lambda=10^{-2}$ and $H=20$, use the true $(r,s)$ in Algorithm~\ref{alg:weighted}, and defer the study of parameter tuning to Section \ref{sec:comparison}. The first experiment assesses the impact of the sample size $n$ for the continuous regression. We set $h(z)=[\exp(z)-1] / [\exp(z)+1]$, $G$ as the CDF of a standard normal distribution, the matrix dimension $d=20$, and the model complexity $(r,s)=(2,2),(2,3),(5,5)$. Fig~\ref{fig:logistic}(a) summarizes the main model configurations, including the density of $z=z(\bm X)$, the function $h=h(z)$, and the resulting density of $f(\bm X)$. Fig~\ref{fig:logistic}(b) reports the prediction error, $\onenormSize{}{\hat f - f}$, as the sample size $n$ increases. We see that the error decays polynomially with $n$. We also see that a higher rank $r$ or a higher support $s$ leads to a larger error, as reflected by the upward shift of the curve as $(r,s)$ increases, since they imply a higher model complexity. The second experiment considers a binary response. Fig~\ref{fig:logistic}(c) reports the prediction error $\onenormSize{}{\hat f - f}$ as the sample size $n$ increases. We see that the error decays polynomially with $n$. We also note that, in both cases, the matrix predictor has dimension $20 \times 20=400$, whereas $n$ is on the order of hundreds. Nevertheless, our nonparametric method consistently learns the function $f$ well from limited data, without specifying the functional form a priori. The third experiment evaluates the impact of the matrix dimension $d$. We fix the sample size $n=200$ and increase $d$. Fig~\ref{fig:logistic}(d) reports the prediction error. We see that the error increases slowly with $d$, and the growth appears well controlled by the log rate.
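The continuous-response design above can be reproduced with the short Python sketch below (ours, for illustration); the population CDF $\bar G$ is replaced by the empirical ranks, and $G$ is taken to be the standard normal CDF as in the first experiment.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def simulate_continuous(n, d, B, h, sigma=0.1, seed=0):
    """Y = f(X) + eps with f(X) = h(G^{-1}(Gbar(<X, B>)))."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n, d, d))     # iid Uniform[0,1] entries
    inner = np.einsum("nij,ij->n", X, B)          # trace inner products <X_i, B>
    u = (np.argsort(np.argsort(inner)) + 1.0) / (n + 1.0)  # empirical Gbar
    f = h(norm.ppf(u))                            # z = G^{-1}(Gbar(.)), f = h(z)
    return X, f + sigma * rng.standard_normal(n), f

# link used in the first experiment: h(z) = (exp(z) - 1) / (exp(z) + 1)
h = lambda z: (np.exp(z) - 1.0) / (np.exp(z) + 1.0)
\end{verbatim}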
Note that, in this example, as $d$ increases, the number of effective entries remains unchanged, but the combinatorial complexity of the model space increases. The increasing error is an unavoidable price to pay for not knowing the positions of the $s$ active entries. This example shows the ability of our method to effectively handle a massive number of noisy features. \begin{figure}[b!] \centering \includegraphics[width=\textwidth]{figure/combined_logistic.pdf} \caption{Finite sample performance under a smooth function. (a) simulation setup; (b) prediction error with varying $n$ and $d=20$ for the continuous response; (c) for the binary response; (d) with varying $d$ and $n=200$. The dashed lines in panels (b)-(d) represent upper bounds $\mathcal{O}(n^{-1/3})$, $\mathcal{O}(n^{-1/3})$, and $\mathcal{O}(\log d)$, respectively. The results are based on 30 data replications.} \label{fig:logistic} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{figure/combined_steps.pdf} \caption{Finite sample performance under a non-smooth function. The setup is similar to that of Fig~\ref{fig:logistic}. The dashed lines in panels (b)-(d) represent upper bounds $\mathcal{O}(n^{-1/2})$, $\mathcal{O}(n^{-1/2})$, and $\mathcal{O}(\log d)$, respectively.} \label{fig:step} \end{figure} The fourth experiment investigates the impact of the smoothness of the regression function. In Section~\ref{sec:idea}, we show that the probabilistic behavior of $f(\bm X)$ plays a key role in our learning reduction approach. Here we assess the empirical performance by repeating all the above experiments using a model configuration with $z=z(\bm X)\sim \text{Uniform}[-1,1]$, $h(z)=-0.6+1.2\mathds{1}(z>0)$, and $(r,s)=(2,2),(2,5),(5,5)$. This case falls on the other end of the spectrum, in contrast to the infinitely smooth function in Fig~\ref{fig:logistic}(a). That is, $f(\bm X)$ now concentrates at two mass points $\pi=\pm 0.6$. This makes the $\pi$-sign function estimation challenging around $\pi=\pm 0.6$ because of the non-identifiability. Fig~\ref{fig:step} reports the new model configurations and the corresponding results. Interestingly, we find that our method still maintains a good performance. Such robustness may be explained by the fact that we aggregate in total $2H+1$ sign functions, each of which incurs at most $1/(2H+1)$ error to the regression function estimation. Therefore, our function estimate is robust against some off-target sign estimates, as long as the majority are accurate. This observation is in line with the consistency result established in Section \ref{sec:bridge}. \subsection{Comparison with alternative methods} \label{sec:comparison} Next, we compare our method with several popular alternative solutions. In this comparison, we adopt the simulation setup of \cite{relion2019network}, but add more challenging matrix effects. In particular, in this setup, the response is binary, and the predictor is a symmetric matrix that encodes a network. Our method targets a general matrix predictor, so it applies directly to a symmetric matrix, though it does not exploit the symmetry. Moreover, as we show in Section~\ref{sec:joint} of the Appendix, the data generating model falls into our general family of nonparametric trace regression when there is no noise, but no longer so when there is noise. Therefore, including the noise also allows us to investigate the performance of our method under model misspecification.
More specifically, we simulate from a latent variable model $(\bm X,Y)|\pi$, where we generate $\pi$ i.i.d.\ from Uniform[0,1], and conditional on $\pi$, we generate $Y \sim \text{Bernoulli}(\pi)$, and \begin{eqnarray}\label{eq:pattern} \bm X=\entry{\bm X_{ij}},\ \bm X_{ij}\stackrel{\text{indep.}}{\sim} \text{Normal}\left( g_{ij}(\pi)\mathds{1}(\text{edge $(i,j)$ is active}), \sigma^2 \right), \end{eqnarray} where the edge connectivity strength, denoted by $g_{ij}(\pi)$, varies depending on the location of $(i,j)\in[d]^2$ and the mean response $\pi$. Fig~\ref{fig:region} shows the activation patterns we consider, which specify the locations of the active edges. The active region is further divided into several subregions, each of which has its own signal function $g_{ij}(\cdot)\colon [0,1]\to \mathbb{R}$. The functional form of $g_{ij}(\cdot)$ is randomly drawn from a pre-specified library consisting of common polynomial, log, and trigonometric functions. We set $d=68$, the training sample size $n=160$, and the testing size $80$. In the noiseless case $\sigma=0$ in~\eqref{eq:pattern}, the cross and block patterns are low-rank with $r = 3$ and 5, respectively, whereas the star and circle patterns are nearly full-rank, with a numerical rank $r \approx 30$ on the supported submatrix. \begin{figure}[t!] \centering \includegraphics[width=3.8cm]{figure/cross.pdf} \includegraphics[width=3.8cm]{figure/block.pdf} \includegraphics[width=3.8cm]{figure/star.pdf} \includegraphics[width=4.7cm]{figure/circle.pdf} \caption{Four activation patterns in simulations. The active region is divided into four or five subregions, denoted by I, II, ..., V, each of which has its own edge connectivity signal $g_{ij}(\pi)$.} \label{fig:region} \end{figure} We compare the following four estimation methods. \begin{itemize} \item Unstructured logistic regression for vector predictors (\text{\bf \small LogisticV}, \citep{Zou2005}). This method vectorizes the matrix predictor into a high dimensional vector, then employs a logistic loss with an elastic net penalty. \item Generalized trace regression for matrix predictors (\LogisticM,~\citep{relion2019network}). This method fits a parametric trace regression model with a logistic link and a symmetric matrix predictor. It imposes a group lasso penalty to encourage two-way sparsity. \item Convolutional Neural Network (\text{\bf \small CNN}) with two hidden layers implemented in Keras \citep{chollet2018deep}. We apply 64 filters with $3\times 3$ convolutional kernels to the matrix-valued predictor, followed by a pooling layer with size $5\times 5$. The resulting features are fed into a fully connected neural network layer with ReLU activation (a sketch of this configuration is given below). \item {\bf A}ggregation of {\bf S}tructured {\bf SI}gn {\bf S}eries for {\bf T}race regression (\text{\bf \footnotesize ASSIST}), our method. \end{itemize} \noindent Among these methods, \text{\bf \small LogisticV}\ serves as a baseline to assess the gain of modeling a matrix predictor over a vector predictor, \LogisticM\ is a parametric model, whereas \text{\bf \small CNN}\ and \text{\bf \footnotesize ASSIST}\ are nonparametric solutions for matrix predictors. We feed each method with the binary response and the network adjacency matrix as the predictor, after randomly permuting the node indices. Because \LogisticM\ only supports a symmetric matrix predictor, we provide it with $(\bm X+\bm X^T)/2$ as the input.
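For reference, the \text{\bf \small CNN}\ baseline described above can be assembled in Keras as follows. This is our reconstruction from the stated configuration (64 filters of size $3\times 3$, $5\times 5$ pooling, and one fully connected ReLU layer); the dense width and the optimizer are not specified in the text and are chosen here for illustration.

\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn(d=68, dense_units=64):
    model = tf.keras.Sequential([
        layers.Conv2D(64, (3, 3), activation="relu",
                      input_shape=(d, d, 1)),   # matrix as a one-channel image
        layers.MaxPooling2D(pool_size=(5, 5)),
        layers.Flatten(),
        layers.Dense(dense_units, activation="relu"),  # width is an assumption
        layers.Dense(1, activation="sigmoid"),  # binary response
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
\end{verbatim}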
We use the default parameters of \LogisticM, and select the tuning parameters of \text{\bf \small LogisticV}, \text{\bf \small CNN}, and our method \text{\bf \footnotesize ASSIST}, including the rank and sparsity parameters $(r,s)$, by 5-fold cross-validation. \begin{figure}[b!] \centering \includegraphics[width=\textwidth]{figure/error_tot_comb.pdf} \caption{Performance comparison of various methods under four different activation patterns. Reported are the prediction error $\onenormSize{}{\hat f - f}$, denoted by ``regression", and the misclassification error at $\pi=1/2$, denoted by ``classification". The results are based on 30 data replications.} \label{fig:compare} \end{figure} Fig \ref{fig:compare} reports both the prediction error $\onenormSize{}{\hat f - f}$ and the misclassification error at $\pi=1/2$ of the four methods evaluated on the testing data. For prediction, we see that \text{\bf \footnotesize ASSIST}\ consistently outperforms the alternatives, and the improvement is substantial. For example, the relative reduction using \text{\bf \footnotesize ASSIST}\ over the next best approach, \LogisticM, is over 20\% for patterns (a) and (d), and over 15\% for patterns (b) and (c). These results clearly demonstrate the benefit of our nonparametric approach. Moreover, we find that neither \text{\bf \small LogisticV}\ nor \text{\bf \small CNN}\ achieves satisfactory prediction. A possible explanation is that \text{\bf \small LogisticV}\ takes the vectorized matrix as the input and therefore loses the two-way pairing information. Meanwhile, \text{\bf \small CNN}\ assumes spatial ordering of the row and column indices. Although local similarity is important for the usual imaging analysis, the row and column indices take no particular order for a network. Indeed, adjacency matrices that differ by a node permutation represent the same network, and thus the index-invariant methods, such as \LogisticM\ and \text{\bf \footnotesize ASSIST}, perform better. For classification, we also see that our method overall performs the best. The only exception is the circle pattern, where \text{\bf \small CNN}\ has a slightly lower classification error. This is perhaps due to the fact that the circle is nearly full rank and thus favors a more complicated model. Interestingly, we also find that the advantage of our method is more substantial in regression prediction than in classification, since classification is easier than regression. Moreover, with the model noise included, our method still performs well even though the true model does not exactly follow our model specification. \begin{figure}[t!] \centering \includegraphics[width=8.1cm]{figure/est_cross.pdf} \includegraphics[width=8.1cm]{figure/est_block.pdf} \includegraphics[width=8.1cm]{figure/est_star.pdf} \includegraphics[width=8.1cm]{figure/est_circle.pdf} \caption{Example output returned by {\bf \scriptsize ASSIST} based on the moving average of the feature weights, and the scatter plot of the edge connectivity strength, averaged by each subregion, versus the estimated mean response. The dashed curve shows the true function. } \label{fig:compare2} \end{figure} Finally, to illustrate its capability of producing highly interpretable estimates, Fig \ref{fig:compare2} reports the output of \text{\bf \footnotesize ASSIST}\ based on the moving average of the feature weights $(\hat \bm B_\pi)_{\pi\in\mathcal{H}}$. The identified activation region agrees well with the truth.
We also investigate the relationship between the edge connectivity for individual $i$ and the estimated mean response $\hat \pi_i$ for $i=1,\ldots,n$. The estimated trajectory closely resembles the ground truth function in each subregion, demonstrating that our method recovers the pattern in the matrix predictors $\bm X_i$ against $\hat \pi_i$ over a continuous spectrum. \section{Real data applications} \label{sec:realdata} We present two real data applications, in parallel to the two matrix learning tasks studied in Section \ref{sec:examples}. The first task is the binary-valued trait prediction based on brain connectivity matrix regression, and the second is the continuous-valued matrix completion for imaging analysis. \subsection{Brain connectivity analysis} \label{sec:brain} The first example is a brain connectivity data analysis, which aims to understand the relation between the brain connectivity network and cognitive performance. The data is obtained from the Human Connectome Project (HCP) \citep{van2013wu}, and consists of $n=212$ healthy subjects. For each subject, a binary connectivity network is extracted, with nodes corresponding to $d=68$ brain regions-of-interest following the Desikan atlas \citep{desikan2006automated}, and links corresponding to the structural connectivity evaluated by diffusion tensor imaging \citep{zhang2018mapping}. The outcome is the dichotomized version of a visuospatial processing test score, corresponding to a high or low performance score \citep{wang2019common}. We adjust for age and gender as additional covariates in our analysis. We note that, although our model focuses on a matrix predictor, it is straightforward to incorporate additional vector-valued covariates. We use a random 60-20-20 split of the data for training, validation, and testing. \begin{table}[t!] \caption{Brain connectivity analysis. (a) Comparison of prediction accuracy measured by AUC, with standard errors over 5-fold cross-validation in parentheses. For {\scriptsize \bf CNN}, node selection is not reported. (b) Top edges selected by the method {\scriptsize \bf ASSIST-p}. The letters ``r'' and ``l'' in node names indicate the right and left hemisphere, respectively. The $p$-value is calculated from the two-sample test of edge connection strength between the two groups. } \label{fig:real} \resizebox{\columnwidth}{!}{ \begin{tabular}{ll} a\hspace{6.5cm}b\\ \begin{tabular}{c|cc} \hline Method & AUC & \% of Active Nodes\\ \hline {\bf \footnotesize ASSIST-p} &{\bf 0.73 (0.03)} &88.2 \\ \text{\bf \footnotesize ASSIST}& {\bf 0.77 (0.04)} &97.3 \\ LogisticM&0.72 (0.02)& 100.0\\ LogisticV&0.68 (0.01)&89.7\\ CNN&0.67 (0.03)&-$^{}$\\ \hline \end{tabular} \begin{tabular}{c|ccc} \hline Rank &Node & Node& $p$-value \\ \hline 1&r-inferiortemporal&r-middletemporal&$0.01$\\ 2&r-parstriangularis&r-supramarginal&3e-5\\ 3&l-posteriorcingulate&r-precentral&0.01\\ 4& l-caudalmiddlefrontal& l-isthmuscingulate&2e-5\\ 5 &l-lateralorbitofrontal&r-parstriangularis&1e-4\\ \hline \end{tabular} \end{tabular} } \end{table} \begin{figure}[b!] \centering \includegraphics[width=.8\textwidth]{figure/brain.pdf} \caption{Brain connectivity analysis. (a) Top edges overlaid on a brain template. (b) Edge connectivity strength versus estimated mean response.
Colored curves represent the moving averages of connectivity strengths, gray bands represent one standard error, and jittered points represent the raw connectivity values (0 or 1).} \label{fig:real2} \end{figure} We compare our method with the same alternatives as in Section \ref{sec:comparison}. Table~\ref{fig:real}(a) shows that our method achieves the highest accuracy, measured by the area under the receiver operating characteristic curve (AUC). Moreover, as is common in the high dimensional setting, we see that the model with good cross-validation accuracy tends to include a large number of noise variables. A useful heuristic called the ``one-standard-error rule'', suggested by \cite{hastie2015statistical}, selects the most parsimonious model with cross-validation accuracy within one standard error of the best. We apply this rule and report the results as {\bf \footnotesize ASSIST-p}. It is remarkable to see that {\bf \footnotesize ASSIST-p} yields a 12\% reduction in active nodes while still achieving accuracy comparable to the best. Table~\ref{fig:real}(b) lists the top brain links identified by our method. The edges are ranked by their maximal values in the feature weights $(\hat \bm B_\pi)_{\pi \in \mathcal{H}}$ via moving averages. We find that the top edges involve connections between frontal and occipital regions in the right hemisphere. This is consistent with recent findings of dysfunction in right posterior regions for deficits in visuospatial processing \citep{wang2019common}. Fig~\ref{fig:real2}(a) shows the top selected edges overlaid on a brain template. Moreover, we find the relationship between the edge connection strength and the mean response to be nonlinear. Fig~\ref{fig:real2}(b) plots the edge connectivity strength versus the estimated mean response. We see that the connection between r-parstriangularis and r-supramarginal grows slowly when the mean response is small but fast when it is large. By contrast, the connection between l-posteriorcingulate and r-precentral grows fast initially, then reaches a plateau as the mean response increases. Such patterns suggest heterogeneous changes in brain connectivity with respect to the visuospatial processing capability. \subsection{Imaging matrix completion} \label{sec:completion} The second application is an imaging matrix completion, where the goal is to recover and restore a partially observed gray-scale hot air balloon image. This image is a standard benchmark in computer vision, and is organized as a 217-by-217 matrix, whose entries represent pixel values in $[0,1]$. We randomly mask a subset of entries and perform matrix completion based on the observed entries. We compare our method with three alternatives: a soft imputation method based on matrix nuclear norm regularization (\SoftImpute)~\citep{hastie2015matrix}, a hard imputation method with ridge regression (\HardImpute)~\citep{mazumder2010spectral}, and a hard imputation based on alternating SVD (\ALT)~\citep{rennie2005fast}. We evaluate the recovery accuracy by the MAE on the unobserved entries, and we tune all the parameters based on 5-fold cross-validation. \begin{figure}[t!] \includegraphics[width = \textwidth]{figure/completion.pdf} \caption{Matrix completion analysis. (a)-(b) correspond to the 40\% missing rate, and (c)-(d) the 80\% missing rate. Error bars represent the standard error over 5-fold cross-validation. Numbers in parentheses represent the selected tuning parameters for each method.
In (a) and (c), we omit the worst method {\bf \scriptsize ALT} for space considerations.} \label{fig:braincv} \end{figure} We investigate missing percentages of $40\%$ and $80\%$, and vary the rank $r=2,4,\ldots,20$. Fig \ref{fig:braincv} reports the performances of the four methods. We see clearly that our method achieves the best image recovery, with the smallest MAE. Moreover, the advantage of our method over the alternative solutions becomes clearer as the missing percentage increases. \section{Discussion} \label{sec:discussion} We have developed a nonparametric trace regression model for studying the relationship between a scalar response and a high dimensional matrix predictor. We propose a learning reduction approach, \text{\bf \footnotesize ASSIST}, using the structured sign function series, which bridges regression and classification. We establish theoretical bounds that concern the fundamental statistical errors, are independent of specific algorithms, and serve as a benchmark for how well any algorithmic procedure can perform. Our numerical results demonstrate the competitive performance of the proposed method. Our work opens up several possible future directions. One is nonparametric modeling of other nonconventional predictors, such as tensors, functions, and manifold data. Other directions include multi-task learning and compressed sensing. Moreover, our learning reduction approach can be coupled with more sophisticated classifiers, such as neural networks, decision trees, and boosting, for sign function estimation. Finally, the theoretical guarantees we obtain are for the global optimum. How to characterize the behavior of the actual minimizer, or, relatedly, the computational error for non-convex matrix based regression, remains challenging and open. All these questions warrant future research. \section*{Acknowledgements} The research was supported in part by NSF DMS-1915978, NSF DMS-2023239, the Wisconsin Alumni Research Foundation (to M.\ Wang), NIH R01 AG061303 (to L.\ Li), and NSF CCF-1740858 (to H.\ Zhang). \bibliographystyle{plainnat}
{ "timestamp": "2021-05-06T02:05:41", "yymm": "2105", "arxiv_id": "2105.01783", "language": "en", "url": "https://arxiv.org/abs/2105.01783" }
\section{Introduction} In this paper, we always work over an algebraically closed field $\mathds{k}$ of characteristic zero. A fusion category $\mathcal{C}$ is a semisimple $\mathds{k}$-linear finite abelian tensor category; we use $\mathcal{O}(\mathcal{C})$ to denote the set of isomorphism classes of simple objects of $\mathcal{C}$. A fusion category $\mathcal{C}$ is a braided fusion category if it is equipped with a braiding $c$. Let $\mathcal{D}\subseteq\mathcal{C}$ be a fusion subcategory of a braided fusion category $\mathcal{C}$. Recall that the centralizer $\mathcal{D}_\mathcal{C}'$ of $\mathcal{D}$ in $\mathcal{C}$ is the fusion subcategory generated by simple objects $Y$ of $\mathcal{C}$ such that $c_{Y,X}c_{X,Y}=\text{id}_{X\otimes Y}$, for all objects $X\in\mathcal{O}(\mathcal{D})$, see \cite{Mu}; in particular, we call $\mathcal{C}':=\mathcal{C}_\mathcal{C}'$ the M\"{u}ger center of $\mathcal{C}$. Let $\mathcal{E}$ be an arbitrary symmetric fusion category, i.e., $\mathcal{E}'=\mathcal{E}$. Recall that a braided fusion category over $\mathcal{E}$ is a braided fusion category $\mathcal{C}$ equipped with a braided tensor embedding $\mathcal{E} \to \mathcal{C}'$, see e.g. \cite[2.9]{DNO}. A braided fusion category over $\mathcal{E}$ is non-degenerate over $\mathcal{E}$ if the embedding $\mathcal{E} \to \mathcal{C}'$ is an equivalence. In the special cases when $\mathcal{E}=\text{Vec}$ or $\mathcal{E}=\text{sVec}$ is the category of finite dimensional vector or super vector spaces over $\mathds{k}$, this specializes to the notions of non-degenerate and slightly degenerate braided fusion categories. A non-degenerate braided fusion category $\mathcal{A}$ is called a minimal extension of a braided fusion category $\mathcal{C}$ if $\mathcal{C}\subseteq\mathcal{A}$ and $\mathcal{C}_\mathcal{A}'=\mathcal{C}'=:\mathcal{E}$. It was conjectured that these minimal extensions exist for an arbitrary braided fusion category $\mathcal{C}$ \cite[Conjecture 5.2]{Mu}. However, counterexamples posed by Drinfeld \cite{Drinfeld} show that this conjecture fails for some non-degenerate fusion categories over certain Tannakian fusion categories $\mathcal{E}$ (recall that $\mathcal{E}$ is Tannakian if it is braided equivalent to the representation category $\text{Rep}(G)$ of some finite group $G$, see e.g. \cite{DrGNO2}). The following special case of the minimal extension conjecture is still open and is of great interest: \begin{conj} \label{minextconj} Any slightly degenerate fusion category admits a minimal extension. \end{conj} Note that it is proved in \cite{LKW} and \cite{BGHNPRW} that the set of all minimal extensions (up to a suitable equivalence) of a given slightly degenerate category is a torsor over the group ${\mathbb Z}/16$. Conjecture \ref{minextconj} states that this torsor is non-empty. Our first main result is a proof of Conjecture \ref{minextconj} for a class of slightly degenerate fusion categories. Recall that a fusion category $\mathcal{C}$ is weakly group-theoretical if $\mathcal{C}$ is Morita equivalent to a nilpotent fusion category $\mathcal{D}$ \cite{ENO3} (see section \ref{section2} for the definition of nilpotency of fusion categories); equivalently, there exists a braided equivalence between the Drinfeld centers $\mathcal{Z}(\mathcal{C})\cong\mathcal{Z}(\mathcal{D})$ \cite[Theorem 1.3]{ENO3}. \begin{theo}[Theorem \ref{theo2sec3}]\label{maintheorem1}Every slightly degenerate weakly group-theoretical fusion category admits a minimal extension.
\end{theo} In fact, let $\mathcal{C}$ be a braided fusion category such that $\mathcal{C}'=\mathcal{E}$. In Theorem \ref{theo1sec3}, we show that $\mathcal{C}$ admits a minimal extension if and only if $\mathcal{C}$ is Witt equivalent to $\mathcal{E}\boxtimes\mathcal{D}$ for some non-degenerate fusion category $\mathcal{D}$; the precise definition of Witt equivalence can be found in section \ref{section2}. Our second main result gives a sufficient condition under which a non-degenerate fusion category of FP-dimension $nd$ contains a pointed non-degenerate fusion subcategory of FP-dimension $d$, where $n$ is a positive integer and $d$ a square-free integer such that $(n,d)=1$. Explicitly: \begin{theo}[Theorem \ref{theo2sec4}]\label{maintheorem2} Assume that $\mathcal{C}$ is a non-degenerate fusion category with $\text{FPdim}(\mathcal{C})=nd$. If $\mathcal{C}$ contains a Tannakian subcategory $\mathcal{E}=\text{Rep}(G)$ satisfying $\mathcal{C}_G^0\cong\mathcal{C}(\mathbb{Z}_d,q)\boxtimes\mathcal{A}$, and moreover $(\text{FPdim}(V)^2,d)=1$ for any object $V\in\mathcal{O}(\mathcal{C})$, then $\mathcal{C}\cong\mathcal{C}(\mathbb{Z}_d,q)\boxtimes\mathcal{C}(\mathbb{Z}_d,q)_\mathcal{C}'$, where $\mathcal{C}(\mathbb{Z}_d,q)$ is the non-degenerate fusion category determined by the metric group $(\mathbb{Z}_d,q)$. \end{theo} Here, $\mathcal{C}_G^0$ is the de-equivariantization of $\mathcal{E}_\mathcal{C}'$ by $\mathcal{E}$ \cite{DrGNO2}. According to \cite[Theorem 3.1]{Na}, integral non-degenerate weakly group-theoretical fusion categories $\mathcal{C}$ of FP-dimension $nd$ satisfy the conditions of Theorem \ref{maintheorem2}, see Theorem \ref{theo1sec4}. Moreover, combined with Theorem \ref{maintheorem1}, we can also extend the conclusion of Theorem \ref{maintheorem2} to slightly degenerate weakly group-theoretical fusion categories, see Corollary \ref{coro1sec4}. The paper is organized as follows. In section \ref{section2}, we recall some basic properties of fusion categories that we use throughout. In section \ref{section3}, we give a partial answer to the minimal extension conjecture for slightly degenerate fusion categories in Theorem \ref{theo2sec3}. In section \ref{section4}, we classify non-degenerate weakly group-theoretical fusion categories of particular FP-dimensions in Theorem \ref{theo2sec4}. In particular, given a prime $p$ such that $(p,d)=1$, if $\mathcal{C}'\subseteq \text{sVec}$, then we obtain the group-theoretical property of integral fusion categories $\mathcal{C}$ of FP-dimension $p^md$ in Corollary \ref{coro2sec4}. This paper was written during a visit of the second named author at the University of Oregon supported by the China Scholarship Council (Grant no.\ 201806140143); he thanks the Department of Mathematics for its warm hospitality. The work of V.~O. was partially supported by the HSE University Basic Research Program, Russian Academic Excellence Project '5-100' and by the NSF grant DMS-1702251. \section{Preliminaries}\label{section2} In this section, we recall some definitions and properties of fusion categories and braided fusion categories; we refer the reader to \cite{DrGNO2,EGNO,ENO1,ENO3}. Let $\mathds{k}^{*}:=\mathds{k}\backslash \{0\}$, $\mathbb{Z}_r:=\mathbb{Z}/r$, $r\in \mathbb{N}$. \subsection{Fusion categories} Let $\mathcal{C}$ be a fusion category; the cardinality of $\mathcal{O}(\mathcal{C})$ is called the rank of $\mathcal{C}$ and is denoted by $\text{rank}(\mathcal{C})$. Let $\text{Gr}(\mathcal{C})$ be the Grothendieck ring of $\mathcal{C}$.
Then there is a unique ring homomorphism $\text{FPdim}(-)$ from the Grothendieck ring $\text{Gr}(\mathcal{C})$ to $\mathds{k}$ such that $\text{FPdim}(X)\geq1$ is an algebraic integer for all objects $X\in\mathcal{O}(\mathcal{C})$ \cite[Theorem 8.6]{ENO1}, and $\text{FPdim}(X)$ is called the Frobenius-Perron dimension of the object $X$. The Frobenius-Perron dimension $\text{FPdim}(\mathcal{C})$ of a fusion category $\mathcal{C}$ is defined by \begin{align*} \text{FPdim}(\mathcal{C}):=\sum_{X\in\mathcal{O}(\mathcal{C})}\text{FPdim}(X)^2. \end{align*} A simple object $X$ of a fusion category $\mathcal{C}$ is invertible if $\text{FPdim}(X)=1$. A fusion category $\mathcal{C}$ is a pointed fusion category if all simple objects of $\mathcal{C}$ are invertible. In this case, $\mathcal{C}\cong\text{Vec}_G^\omega$, the category of finite-dimensional $G$-graded vector spaces over $\mathds{k}$, where $G=\mathcal{O}(\mathcal{C})$ is the finite group induced by the fusion rules of $\mathcal{C}$ and $\omega\in Z^3(G,\mathds{k}^*)$ is a $3$-cocycle. In the following, we use $G(\mathcal{C})$ to denote the group of isomorphism classes of invertible objects of a fusion category $\mathcal{C}$. A fusion category $\mathcal{C}$ is weakly integral if $\text{FPdim}(\mathcal{C})\in\mathbb{Z}$; $\mathcal{C}$ is integral if $\text{FPdim}(X)\in\mathbb{Z}$, $\forall X\in\mathcal{O}(\mathcal{C})$. Let $\mathcal{C}_\text{ad}$ be the adjoint fusion subcategory of $\mathcal{C}$, i.e., $\mathcal{C}_\text{ad}$ is generated by simple objects $Y$ such that $Y\subseteq X\otimes X^*$ for some simple object $X\in\mathcal{C}$. If $\mathcal{C}$ is weakly integral, then $\mathcal{C}_\text{ad}$ is integral \cite[Proposition 8.27]{ENO1}. Let $G$ be a finite group. For a $G$-graded fusion category $\mathcal{C}=\oplus_{g\in G}\mathcal{C}_g$, we use the following notation to denote the Frobenius-Perron dimension of the component $\mathcal{C}_g$ \begin{align*} \text{FPdim}(\mathcal{C}_g):=\sum_{X\in\mathcal{O}(\mathcal{C}_g)}\text{FPdim}(X)^2. \end{align*} If the $G$-grading of $\mathcal{C}$ is faithful, that is, for any $g\in G$, the component $\mathcal{C}_g\neq 0$, then \begin{align*} \text{FPdim}(\mathcal{C})=|G|\text{FPdim}(\mathcal{C}_g) \end{align*} by \cite[Proposition 8.20]{ENO1}. Any fusion category $\mathcal{C}$ has a faithful grading with $\mathcal{C}_\text{ad}$ being its trivial component; this grading is called the universal grading of $\mathcal{C}$. Moreover, for any faithful $G$-grading of $\mathcal{C}=\oplus_{g\in G}\mathcal{C}_g$, we have $\mathcal{C}_\text{ad}\subseteq\mathcal{C}_{e}$ \cite[Corollary 3.7]{GN}. A fusion category $\mathcal{C}$ is said to be nilpotent if there exists a natural number $n$ such that $\mathcal{C}^{(n)}=\text{Vec}$ \cite{GN}, where $\text{Vec}$ is the category of finite-dimensional vector spaces over $\mathds{k}$, and \begin{align*} \mathcal{C}^{(0)}:=\mathcal{C}, ~\mathcal{C}^{(1)}:=\mathcal{C}_\text{ad},~ \mathcal{C}^{(j)}:=(\mathcal{C}^{(j-1)})_\text{ad}, ~j\geq1. \end{align*} For example, pointed fusion categories are nilpotent, and fusion categories of prime power FP-dimension are also nilpotent \cite[Theorem 8.28]{ENO1}. A fusion category $\mathcal{C}$ is weakly group-theoretical if it is Morita equivalent to a nilpotent fusion category $\mathcal{D}$; $\mathcal{C}$ is group-theoretical if it is Morita equivalent to a pointed fusion category, see \cite{ENO3} for details.
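As a minimal worked example (ours, for illustration): if $\mathcal{C}\cong\text{Vec}_G^\omega$ is pointed, then every simple object is invertible, hence \begin{align*} \text{FPdim}(\text{Vec}_G^\omega)=\sum_{g\in G}1^2=|G|. \end{align*} In particular, pointed fusion categories are automatically integral, and hence weakly integral.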
\subsection{Braided fusion categories} A braided fusion category $\mathcal{C}$ is a fusion category with a braiding \begin{align*} c_{X,Y}:X\otimes Y \overset{\sim}{\to}Y\otimes X, ~\forall X,Y\in\mathcal{C}. \end{align*} Two objects $X,Y\in\mathcal{C}$ are said to centralize each other if \begin{align*} c_{Y,X}c_{X,Y}=\text{id}_{X\otimes Y}. \end{align*} Let $\mathcal{D}\subseteq\mathcal{C}$ be a fusion subcategory. Then the centralizer of $\mathcal{D}$ in $\mathcal{C}$ is the fusion subcategory generated by the objects of $\mathcal{C}$ that centralize every object of $\mathcal{D}$. We call $\mathcal{C}':=\mathcal{C}_\mathcal{C}'$ the M\"{u}ger center of $\mathcal{C}$ \cite{Mu}. Moreover, a braided fusion category $\mathcal{C}$ is a pre-modular fusion category if $\mathcal{C}$ is a spherical fusion category \cite{EGNO}. A braided fusion category $\mathcal{E}$ is symmetric if $\mathcal{E}=\mathcal{E}'$. For any symmetric fusion category $\mathcal{E}$, there exist a finite group $G$ and a central element $u\in G$ such that $\mathcal{E}\cong\text{Rep}(G,u)$ \cite{De}, where $\text{Rep}(G,u)$ is the category of finite-dimensional representations of $G$ in which $u$ acts as the parity automorphism on any $X\in\mathcal{O}(\mathcal{E})$. A symmetric fusion category $\mathcal{E}$ is a Tannakian fusion category if $\mathcal{E}\cong \text{Rep}(G)$, where the braiding of $\text{Rep}(G)$ is given by the standard flip of vector spaces. A Tannakian subcategory $\mathcal{E}\subseteq\mathcal{C}$ is maximal if $\mathcal{E}$ is not contained in any other Tannakian subcategory of $\mathcal{C}$. We say that $\mathcal{C}$ is a fusion category over $\mathcal{E}$ if there exists a braided tensor functor $F:\mathcal{E}\to\mathcal{Z}(\mathcal{C})$ such that $\mathcal{E}$ is mapped faithfully to $\mathcal{C}$ via the functor $\text{Forg}\circ F$, where $\text{Forg}:\mathcal{Z}(\mathcal{C})\to\mathcal{C}$ is the forgetful functor \cite[Definition 4.16]{DrGNO2}. A fusion category $\mathcal{C}$ is a braided fusion category over $\mathcal{E}$ if $\mathcal{C}$ is braided and $\mathcal{E}\subseteq\mathcal{C}'$. In particular, $\mathcal{C}$ is a non-degenerate fusion category over $\mathcal{E}$ if $\mathcal{C}'=\mathcal{E}$. Given a braided fusion category $\mathcal{C}$ and a non-trivial Tannakian subcategory $\mathcal{E}=\text{Rep}(G)\subseteq\mathcal{C}$, there exists a $G$-graded fusion category $\mathcal{C}_G$, called the de-equivariantization of $\mathcal{C}$ by $\mathcal{E}$, such that $\mathcal{C}\cong (\mathcal{C}_G)^G$ as braided fusion categories. In general, $\mathcal{C}_G$ is a braided $G$-crossed fusion category, but the trivial component $\mathcal{C}_G^0\cong \mathcal{E}'_G$ is a braided fusion category, where $\mathcal{E}'$ is the centralizer of $\mathcal{E}$ in $\mathcal{C}$. In particular, if $\mathcal{E}\subseteq \mathcal{C}'$, then $\mathcal{C}_G$ is a braided fusion category and $(\mathcal{C}_G)'\cong (\mathcal{C}')_G$; see \cite[section 4]{DrGNO2}. Let $\mathcal{C}$ be a braided fusion category. Then $\mathcal{C}$ is said to be non-degenerate if its M\"{u}ger center $\mathcal{C}'=\text{Vec}$. A pre-modular fusion category $\mathcal{C}$ is a modular fusion category if $\mathcal{C}$ is non-degenerate. It is known that pointed non-degenerate fusion categories are in bijective correspondence with metric groups $(G,q)$, where $G$ is a finite abelian group and $q: G\to \mathds{k}^*$ is a non-degenerate quadratic form; see \cite{DrGNO2, EGNO} for details.
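As a simple example of a metric group (standard, and stated here only for illustration): for an odd square-free integer $d$, take $G=\mathbb{Z}_d$ and $q(x):=\zeta_d^{x^2}$, where $\zeta_d\in\mathds{k}^*$ is a primitive $d$-th root of unity. The associated bimultiplicative form
\begin{align*}
b(x,y):=\frac{q(x+y)}{q(x)q(y)}=\zeta_d^{2xy}
\end{align*}
is non-degenerate since $(2,d)=1$, so $(\mathbb{Z}_d,q)$ is a metric group, and it determines a pointed non-degenerate fusion category in the sense of the correspondence just recalled.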
In what follows, we use $\mathcal{C}(G,q)$ to denote the pointed non-degenerate fusion category determined by the metric group $(G,q)$. A braided fusion category $\mathcal{C}$ is slightly degenerate if $\mathcal{C}'=\text{sVec}$, where $\text{sVec}$ is the symmetric category of finite-dimensional super vector spaces over $\mathds{k}$. We say a slightly degenerate fusion category $\mathcal{C}$ splits if there exists a braided equivalence $\mathcal{C}\cong \text{sVec}\boxtimes\mathcal{D}$, where $\mathcal{D}$ is a non-degenerate fusion category. For example, slightly degenerate pointed fusion categories split \cite[Proposition 2.6]{ENO3}. $\mathcal{C}$ is a super-modular fusion category if $\mathcal{C}'=\text{sVec}$ and $\mathcal{C}$ is spherical. In particular, for any slightly degenerate fusion category $\mathcal{C}$, if $\mathcal{C}$ contains a Tannakian subcategory $\mathcal{E}=\text{Rep}(G)$, then $\mathcal{C}_G^0$ is again slightly degenerate \cite[Proposition 4.30]{DrGNO2}. \subsection{Witt equivalence and Witt groups} Let $\mathcal{C}$ be a braided fusion category and $(A,m)\in\mathcal{C}$ a commutative algebra, where $m:A\otimes A\to A$ is the multiplication, satisfying $m=m\circ c_{A,A}$. $A$ is a connected \'{e}tale algebra if $\text{Hom}_\mathcal{C}(I,A)\cong\mathds{k}$ and the category $\mathcal{C}_A$ of right $A$-modules in $\mathcal{C}$ is a semisimple category \cite[Proposition 2.7, Definition 3.1]{DMNO}. In addition, we say that two \'{e}tale algebras $A,B\in\mathcal{C}$ centralize each other if $c_{B,A}\circ c_{A,B}=\text{id}_{A\otimes B}$. Let $\mathcal{E}$ be a symmetric fusion category. By \cite[Proposition 3.22]{DMNO}, the symmetric category $\mathcal{E}\boxtimes\mathcal{E}$ contains a connected \'{e}tale algebra $A$ such that $(\mathcal{E}\boxtimes\mathcal{E})_A\cong\mathcal{E}$ as braided fusion categories. In fact, the \'{e}tale algebra is \begin{align*} A=\mathcal{I}(I):=\oplus_{X\in\mathcal{O}(\mathcal{E})}X\boxtimes X^*, \end{align*} where $\mathcal{I}$ is the right adjoint functor of the surjective braided tensor functor \begin{align*} \otimes:\mathcal{E}\boxtimes\mathcal{E}\to\mathcal{E}, ~X\boxtimes Y\mapsto X\otimes Y, ~\forall X,Y\in\mathcal{E}. \end{align*}Then for any braided fusion categories $\mathcal{C},\mathcal{D}$ over $\mathcal{E}$, \begin{align*} \mathcal{C}\boxtimes_\mathcal{E}\mathcal{D}:=(\mathcal{C}\boxtimes\mathcal{D})_A \end{align*} is also a braided fusion category over $\mathcal{E}$, where $(\mathcal{C}\boxtimes\mathcal{D})_A$ is the category of right $A$-modules in $\mathcal{C}\boxtimes\mathcal{D}$. Recall that two non-degenerate fusion categories $\mathcal{C},\mathcal{D}$ over $\mathcal{E}$ are Witt equivalent if and only if there exists a fusion category $\mathcal{A}$ over $\mathcal{E}$ such that there is a braided equivalence \begin{align*} \mathcal{C}\boxtimes_\mathcal{E}\mathcal{D}^\text{rev}\cong\mathcal{Z}(\mathcal{A},\mathcal{E}), \end{align*} where $\mathcal{Z}(\mathcal{A},\mathcal{E})$ is the centralizer of $\mathcal{E}$ in the Drinfeld center $\mathcal{Z}(\mathcal{A})$, and $\mathcal{D}^\text{rev}:=\mathcal{D}$ as a fusion category, with the braiding $c^\text{rev}$ of $\mathcal{D}^\text{rev}$ given by \begin{align*} c^\text{rev}_{X,Y}:=c_{Y,X}^{-1}, ~\forall X,Y\in\mathcal{D}^\text{rev}.
\end{align*}The Witt equivalence class of $\mathcal{C}$ will be denoted by $[\mathcal{C}]$, and the Witt equivalence classes of non-degenerate fusion categories over $\mathcal{E}$ form the Witt group $\mathcal{W}(\mathcal{E})$; we denote $\mathcal{W}:=\mathcal{W}(\text{Vec})$ and $s\mathcal{W}:=\mathcal{W}(\text{sVec})$. See \cite{DMNO,DNO} for details. Given a braided fusion category $\mathcal{C}$ with $\mathcal{C}'\cong\mathcal{E}$, M. M\"{u}ger conjectured that there is a non-degenerate fusion category $\mathcal{A}$ such that $\mathcal{C}\subseteq\mathcal{A}$ as a braided fusion subcategory and $\mathcal{C}_\mathcal{A}'\cong\mathcal{E}$ \cite[Conjecture 5.2]{Mu}; such an $\mathcal{A}$ is called a minimal extension of $\mathcal{C}$. Obviously, if a slightly degenerate category $\mathcal{C}$ splits, then it has a minimal extension. However, by a counterexample of V. Drinfeld \cite{Drinfeld}, the conjecture fails for some braided fusion categories whose M\"{u}ger center is a non-trivial Tannakian fusion category $\mathcal{E}$. Let $\mathcal{C}$ be a super-modular fusion category, and assume that $\mathcal{C}$ admits a minimal modular extension $\mathcal{A}$, i.e., a minimal extension that is a modular fusion category. Then it was shown in \cite{BGHNPRW,LKW} that the minimal modular extensions of $\mathcal{C}$ form exactly $16$ modular equivalence classes, and these minimal modular extensions are pairwise Witt inequivalent \cite{DNO}. \section{On the minimal extension conjecture}\label{section3} In this section, we study the minimal extension problem for braided fusion categories, and we show that every slightly degenerate weakly group-theoretical fusion category admits a minimal extension. To begin with, we need the following lemma, which is a direct consequence of \cite[Corollary 3.26]{DMNO}. \begin{lemm}\label{lemm1sec3}Let $\mathcal{C}$ be a non-degenerate fusion category over $\mathcal{E}$, and let $\mathcal{D}$ be a braided fusion category. Then we have a braided equivalence $\mathcal{C}\boxtimes\mathcal{D}\cong\mathcal{C}\boxtimes_\mathcal{E} (\mathcal{E}\boxtimes\mathcal{D})$. \end{lemm} \begin{proof}Notice that we have an injective braided tensor functor \begin{align*}\iota:\mathcal{C}\boxtimes\mathcal{D}\to \mathcal{C}\boxtimes (\mathcal{E}\boxtimes\mathcal{D}),~X\boxtimes Y\mapsto X\boxtimes I\boxtimes Y, \end{align*} and a surjective braided tensor functor $F:\mathcal{C}\boxtimes (\mathcal{E}\boxtimes\mathcal{D})\to\mathcal{C}\boxtimes_\mathcal{E}(\mathcal{E}\boxtimes\mathcal{D})$. Then the composition $F\circ\iota:\mathcal{C}\boxtimes\mathcal{D}\to\mathcal{C}\boxtimes_\mathcal{E} (\mathcal{E}\boxtimes\mathcal{D})$ is a braided tensor functor. By definition \cite{DNO}, we have \begin{align*} \mathcal{C}\boxtimes_\mathcal{E}(\mathcal{E}\boxtimes\mathcal{D})=(\mathcal{C}\boxtimes\mathcal{E}\boxtimes\mathcal{D})_A, \end{align*} where $A:=\mathcal{I}(I)=\oplus_{X\in\mathcal{O}(\mathcal{E})}X\boxtimes X^*\boxtimes I$, and $\mathcal{I}$ is the right adjoint functor of the tensor functor $\otimes:\mathcal{E}\boxtimes\mathcal{E}\to\mathcal{E}$. Since $(\mathcal{C}\boxtimes\mathcal{D})\cap A\cong I$, the functor $F\circ\iota$ is injective \cite[Proposition 3.4]{LKW}, and it is then a braided equivalence by \cite[Proposition 6.3.3]{EGNO}, as we have the equalities \begin{align*} \text{FPdim}(\mathcal{C}\boxtimes\mathcal{D})=\text{FPdim}(\mathcal{C}\boxtimes_\mathcal{E} (\mathcal{E}\boxtimes\mathcal{D}))=\text{FPdim}(\mathcal{C})\text{FPdim}(\mathcal{D}). \end{align*} This finishes the proof.
\end{proof} For any connected \'{e}tale algebra $A\in\mathcal{C}$, a right $A$-module $(M,\mu)$ is a dyslectic (or local) $A$-module if $\mu\circ c_{A,M}\circ c_{M,A}=\mu$, where $\mu:M\otimes A\to M$ is the $A$-module structure of $M$. We use $\mathcal{C}_A^0$ to denote the category of dyslectic $A$-modules \cite{DMNO}. \begin{theo}\label{theo1sec3}Let $\mathcal{B}$ and $\mathcal{C}$ be two Witt equivalent non-degenerate fusion categories over a symmetric fusion category $\mathcal{E}$, and assume moreover that $\mathcal{B}\cong\mathcal{E}\boxtimes\mathcal{D}$, where $\mathcal{D}$ is non-degenerate. Then $\mathcal{C}$ admits a minimal extension $\mathcal{A}$. In addition, $[\mathcal{C}]=[\mathcal{E}\boxtimes\mathcal{A}]$. \end{theo} \begin{proof} By assumption, there exists a fusion category $\mathcal{A}_1$ over $\mathcal{E}$ such that we have a braided equivalence $\mathcal{C}\boxtimes_\mathcal{E}\mathcal{B}^\text{rev}\cong\mathcal{Z}(\mathcal{A}_1,\mathcal{E})$. On the one hand, by Lemma \ref{lemm1sec3}, we have the following braided equivalences \begin{align*} \mathcal{C}\boxtimes\mathcal{D}^\text{rev}\cong\mathcal{C}\boxtimes_\mathcal{E}\mathcal{B}^\text{rev}\cong \mathcal{Z}(\mathcal{A}_1, \mathcal{E}). \end{align*} On the other hand, \cite[Theorem 3.13]{DrGNO2} says that $\mathcal{Z}(\mathcal{A}_1)\cong\mathcal{D}^\text{rev}\boxtimes\mathcal{A}$, where $\mathcal{A}$ is the centralizer of $\mathcal{D}^\text{rev}$ in $\mathcal{Z}(\mathcal{A}_1)$, and then $\mathcal{A}$ is also non-degenerate. By \cite[Theorem 3.10]{DrGNO2}, \begin{align*} \text{FPdim}(\mathcal{Z}(\mathcal{A}_1))=\text{FPdim}(\mathcal{E})\text{FPdim}(\mathcal{Z}(\mathcal{A}_1, \mathcal{E})), \end{align*} hence we have the equalities \begin{align*} \text{FPdim}(\mathcal{A})=\frac{\text{FPdim}(\mathcal{Z}(\mathcal{A}_1))}{\text{FPdim}(\mathcal{D})}=\frac{\text{FPdim}(\mathcal{E})\text{FPdim}(\mathcal{Z}(\mathcal{A}_1, \mathcal{E}))}{\text{FPdim}(\mathcal{D})}=\text{FPdim}(\mathcal{E})\text{FPdim}(\mathcal{C}). \end{align*} We claim that $\mathcal{A}$ is a minimal extension of $\mathcal{C}$. Note that we have an injective braided tensor functor from $\mathcal{C}\boxtimes \mathcal{D}^\text{rev}$ to $\mathcal{Z}(\mathcal{A}_1)$. By definition, $\mathcal{C}$ and $\mathcal{D}^\text{rev}$ centralize each other in the braided fusion category $\mathcal{C}\boxtimes \mathcal{D}^\text{rev}$, hence they still centralize each other in $\mathcal{Z}(\mathcal{A}_1)$. Therefore, we have an inclusion $\mathcal{D}^\text{rev}\subseteq\mathcal{C}'_{\mathcal{Z}(\mathcal{A}_1)}$, where $\mathcal{C}'_{\mathcal{Z}(\mathcal{A}_1)}$ is the centralizer of $\mathcal{C}$ in $\mathcal{Z}(\mathcal{A}_1)$. Since the centralizer of $\mathcal{D}^\text{rev}$ in $\mathcal{Z}(\mathcal{A}_1)$ is $\mathcal{A}$, we get $\mathcal{C} \subseteq\mathcal{A}$ by \cite[Corollary 3.11]{DrGNO2}. Let $A$ be the connected \'{e}tale algebra such that $(\mathcal{E}\boxtimes\mathcal{E})_A\cong\mathcal{E}$, and denote $\mathcal{M}:=\mathcal{E}\boxtimes\mathcal{A}$. Since $A\cap\mathcal{E}=I$, the category $\mathcal{M}_A^0$ is also non-degenerate over $\mathcal{E}$ by \cite[Corollary 4.6]{DNO}, and we have the following braided equivalence \begin{align*} \mathcal{M}\boxtimes_\mathcal{E}(\mathcal{M}_A^0)^\text{rev}\cong\mathcal{Z}(\mathcal{M}_A,\mathcal{E}). \end{align*} In particular, $\text{FPdim}(\mathcal{M}_A^0)=\frac{\text{FPdim}(\mathcal{M})}{\text{FPdim}(\mathcal{E})^2}=\text{FPdim}(\mathcal{C})$.
Since $\mathcal{C}\cong\text{Vec}\boxtimes\mathcal{C}$ is a fusion subcategory of $\mathcal{M}$ and $\mathcal{C}\cap A=I$, there is an injective braided tensor functor $F:\mathcal{C}\to\mathcal{M}_A^0$ \cite[Proposition 3.4]{LKW}, which must be an equivalence by \cite[Proposition 6.3.3]{EGNO}. That is, \begin{align*} \mathcal{M}\boxtimes_\mathcal{E}\mathcal{C}^\text{rev}\cong\mathcal{Z}(\mathcal{M}_A,\mathcal{E}), \end{align*} so $[\mathcal{E}\boxtimes\mathcal{A}]=[\mathcal{C}]$ by definition. This finishes the proof. \end{proof} \begin{remk} Let $\mathcal{C}$ and $\mathcal{D}$ be Witt equivalent braided fusion categories. If $\mathcal{C}$ has a minimal extension $\mathcal{A}$, then Theorem \ref{theo1sec3} implies that $\mathcal{D}$ is also Witt equivalent to $\mathcal{E}\boxtimes\mathcal{A}$; hence $\mathcal{D}$ has a minimal extension. Consequently, whether a braided fusion category admits a minimal extension is a Witt invariant. However, if $\mathcal{C}$ is a pre-modular fusion category, we do not know whether its minimal extensions $\mathcal{A}$ admit a spherical structure. \end{remk} Therefore, proving the minimal extension conjecture for non-degenerate fusion categories over a symmetric fusion category $\mathcal{E}$ is equivalent to showing that the group homomorphism $S_\mathcal{E}:\mathcal{W}\to \mathcal{W}(\mathcal{E})$ is surjective, where \begin{align*} S_\mathcal{E}:[\mathcal{A}]\mapsto[\mathcal{E}\boxtimes\mathcal{A}],~ \text{for all}~[\mathcal{A}]\in\mathcal{W}, \end{align*} and $[\mathcal{A}]$ is the Witt equivalence class of a non-degenerate fusion category $\mathcal{A}$; see also \cite[Question 5.15]{DNO}. When $\mathcal{E}=\text{sVec}$, it is well known that $\text{ker}(S_\text{sVec})=\mathcal{W}_\text{Ising}\cong\mathbb{Z}_{16}$ \cite[Proposition 5.14]{DNO}, where $\mathcal{W}_\text{Ising}\subseteq\mathcal{W}$ is the subgroup generated by the Witt equivalence class of an Ising fusion category. Assume that $\mathcal{E}=\text{Rep}(G)$ is a Tannakian fusion category. Then for any non-degenerate fusion category $\mathcal{C}$ over $\mathcal{E}$, $\mathcal{C}_G$ is a non-degenerate fusion category \cite[Proposition 4.56]{DrGNO2}. Thus, there is a well-defined group homomorphism $\phi_\mathcal{E}$ between the Witt groups $\mathcal{W}(\mathcal{E})$ and $\mathcal{W}$, where \begin{align*} \phi_\mathcal{E}:\mathcal{W}(\mathcal{E})\to\mathcal{W}, ~[\mathcal{C}]\mapsto[\mathcal{C}_G],~ \text{for all}~ [\mathcal{C}]\in\mathcal{W}(\mathcal{E}); \end{align*} see \cite[Remark 5.8]{DNO}. For any non-degenerate fusion category $\mathcal{A}$, note that $(\mathcal{E}\boxtimes\mathcal{A})_G\cong\mathcal{A}$ as braided fusion categories, so $\phi_\mathcal{E}(S_\mathcal{E}([\mathcal{A}]))=[\mathcal{A}]$. Therefore, \begin{coro}Let $\mathcal{E}=\text{Rep}(G)$ be a Tannakian fusion category. Then $\phi_\mathcal{E}\circ S_\mathcal{E}=\text{id}$, so there exists a split exact sequence \begin{align*} 1\to \text{ker}(\phi_\mathcal{E})\overset{i}{\to} \mathcal{W}(\mathcal{E})\underset{S_\mathcal{E}}{\overset{\phi_\mathcal{E}}{\rightleftarrows} }\mathcal{W}\to 1. \end{align*} \end{coro} Hence $S_\mathcal{E}$ is injective for all Tannakian fusion categories $\mathcal{E}$. However, Drinfeld's counterexample \cite{Drinfeld} shows that, in general, $S_\mathcal{E}$ need not be surjective for Tannakian categories $\mathcal{E}$. In \cite{OsYu}, we will further study the structures of the subgroup $\text{ker}(\phi_\mathcal{E})$ and the Witt group $s\mathcal{W}$.
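Since Witt groups are abelian \cite{DMNO,DNO}, the split exact sequence above can equivalently be stated as a direct sum decomposition (a routine consequence, recorded here only for clarity):
\begin{align*}
\mathcal{W}(\mathcal{E})\cong \text{ker}(\phi_\mathcal{E})\oplus S_\mathcal{E}(\mathcal{W})\cong \text{ker}(\phi_\mathcal{E})\oplus \mathcal{W}.
\end{align*}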
Recall that a braided fusion category $\mathcal{C}$ is said to be anisotropic if $\mathcal{C}$ does not contain any non-trivial Tannakian subcategory, while $\mathcal{C}$ is weakly anisotropic if $\mathcal{C}$ contains no non-trivial Tannakian subcategory that is invariant under all braided auto-equivalences of $\mathcal{C}$ \cite{DrGNO2}. \begin{theo}\label{theo2sec3}Let $\mathcal{C}$ be a slightly degenerate weakly group-theoretical fusion category. Then $\mathcal{C}$ admits a minimal extension. \end{theo} \begin{proof} By Theorem \ref{theo1sec3}, it suffices to show that $\mathcal{C}$ is Witt equivalent to a split slightly degenerate fusion category. If $\mathcal{C}$ is a pointed fusion category, then \cite[Proposition 2.6]{ENO3} implies $\mathcal{C}\cong\text{sVec}\boxtimes\mathcal{D}$, where $\mathcal{D}$ is a pointed non-degenerate fusion category. Assume below that $\mathcal{C}$ is not a pointed fusion category. On the one hand, if $\mathcal{C}$ is weakly anisotropic, then $\mathcal{C}$ is braided equivalent to $\mathcal{B}\boxtimes\mathcal{D}$ by \cite[Theorem 1.1]{Na}, where $\mathcal{B}\cong \text{Vec}$ or $\mathcal{B}$ is equivalent to an Ising category $\mathcal{I}$, and $\mathcal{D}$ is a pointed fusion category. So $\mathcal{B}$ is non-degenerate and $\mathcal{D}$ is slightly degenerate. Then \cite[Proposition 2.6]{ENO3} implies $\mathcal{D}\cong \text{sVec}\boxtimes\mathcal{D}_0$, where $\mathcal{D}_0$ is a pointed non-degenerate fusion category, and therefore $\mathcal{C} \cong \text{sVec}\boxtimes(\mathcal{B}\boxtimes\mathcal{D}_0)$. That is, $\mathcal{C}$ splits and admits a minimal extension. On the other hand, if $\mathcal{C}$ is not a weakly anisotropic fusion category, then $\mathcal{C}$ contains a maximal Tannakian subcategory $\text{Rep}(G)$ such that the slightly degenerate fusion category $\mathcal{C}_G^0$ is weakly anisotropic by \cite[Corollary 5.19]{DrGNO2}. By \cite[Corollary 4.6]{DNO}, we have a braided tensor equivalence $\mathcal{C}\boxtimes_{\text{sVec}}(\mathcal{C}_G^0)^\text{rev}\cong\mathcal{Z}(\mathcal{C}_G, \text{sVec})$. Thus, $\mathcal{C}$ and $\mathcal{C}_G^0$ are Witt equivalent by definition, and consequently $\mathcal{C}$ admits a minimal extension. \end{proof} Hence, if a slightly degenerate fusion category $\mathcal{D}$ is Witt equivalent to a weakly group-theoretical fusion category, then Theorem \ref{theo1sec3} and Theorem \ref{theo2sec3} together show that $\mathcal{D}$ admits a minimal extension. \begin{remk}In fact, the proof of Theorem \ref{theo1sec3} implies that the minimal extension $\mathcal{A}$ in Theorem \ref{theo2sec3} is a braided fusion subcategory of $\mathcal{Z}(\mathcal{C}_G)$. Moreover, $\mathcal{A}$ is also a weakly group-theoretical fusion category \cite[Proposition 4.1]{ENO3}. In particular, $\mathcal{A}$ is integral if $\mathcal{C}$ is integral. \end{remk} \section{Classifications of certain non-degenerate weakly group-theoretical fusion categories}\label{section4} In this section, we study the structure of non-degenerate and slightly degenerate weakly group-theoretical fusion categories of particular Frobenius-Perron dimensions. Let $G$ be a finite group. Given a $G$-graded fusion category $\mathcal{C}=\oplus_{g\in G}\mathcal{C}_g$, assume $\mathcal{C}_e\cong\text{Vec}_N^\omega$, where $N$ is a finite group and $\omega\in Z^3(N,\mathds{k}^*)$ is a $3$-cocycle.
By definition, the $G$-grading structure of $\mathcal{C}$ induces an action of the group $N$ on the set $\mathcal{O}(\mathcal{C}_g)$ for every $g\in G$. The following lemma is elementary; we include it for the reader's convenience. \begin{lemm}\label{lemm1sec4}Let $\mathcal{C}=\oplus_{g\in G}\mathcal{C}_g$ be a faithful $G$-graded fusion category such that $\mathcal{C}_e\cong\text{Vec}_N^\omega$ is pointed, where $G,N$ are finite groups and $\omega\in Z^3(N,\mathds{k}^*)$ is a $3$-cocycle. Then for any element $g\in G$, $N$ acts transitively on $\mathcal{O}(\mathcal{C}_g)$. \end{lemm} \begin{proof} Let $g\in G$ and let $X\in\mathcal{C}_g$ be an arbitrary simple object. By definition, there exists a subgroup $H$ of $N$ such that $X\otimes X^*\cong\oplus_{h\in H}h$, so $\text{FPdim}(X)^2=|H|$. Note that the orbit of $X$ has size $[N:H]$, while by \cite[Proposition 8.20]{ENO1} \begin{align*} \text{FPdim}(\mathcal{C}_g)=\text{FPdim}(\mathcal{C}_e)=|N|=[N:H]\,\text{FPdim}(X)^2, \end{align*} which shows that the orbit of $X$ is exactly $\mathcal{O}(\mathcal{C}_g)$. \end{proof} Now we are ready to give our first classification theorem for non-degenerate fusion categories. \begin{theo}\label{theo1sec4}Let $d$ be a square-free positive integer. Assume that $\mathcal{C}$ is an integral non-degenerate weakly group-theoretical fusion category such that $\text{FPdim}(\mathcal{C})=nd$ and $(n,d)=1$. Then $\mathcal{C}\cong \mathcal{C}(\mathbb{Z}_d,q)\boxtimes \mathcal{C}(\mathbb{Z}_d,q)_\mathcal{C}'$ as braided fusion categories. \end{theo} \begin{proof} Assume that $\mathcal{C}$ is not pointed, otherwise the result is trivial. Since $\mathcal{C}$ is a weakly group-theoretical integral fusion category, $\mathcal{C}$ contains a non-trivial Tannakian subcategory by \cite[Theorem 3.1]{Na}. Let $\mathcal{E}=\text{Rep}(G)$ be a maximal Tannakian subcategory of $\mathcal{C}$. Then $\mathcal{C}_G$ is a $G$-crossed braided fusion category whose trivial component is the braided fusion category $\mathcal{C}_G^0$. Moreover, the $G$-grading of $\mathcal{C}_G$ is faithful and $\mathcal{C}_G^0$ is a non-degenerate fusion category by \cite[Proposition 4.56]{DrGNO2}. Meanwhile, \cite[Corollary 5.19]{DrGNO2} says that $\mathcal{C}_G^0$ is weakly anisotropic; therefore \cite[Theorem 1.1]{Na} implies that $\mathcal{C}_G^0$ is a pointed fusion category. So $\mathcal{C}_G^0$ contains a pointed non-degenerate fusion subcategory $\mathcal{C}(\mathbb{Z}_d,q)$: indeed, since $|G|^2$ divides $\text{FPdim}(\mathcal{C})=nd$, $d$ is square-free and $(n,d)=1$, the integer $d$ divides $\text{FPdim}(\mathcal{C}_G^0)=nd/|G|^2$. Let $\mathcal{B}:=\mathcal{C}_G$ and let $\mathcal{B}=\oplus_{g\in G}\mathcal{B}_g$ be the $G$-grading of $\mathcal{B}$. Then $\mathcal{B}_\text{ad}\subseteq\mathcal{B}_e$ by the universal property of $\mathcal{B}_\text{ad}$; since $\mathcal{B}_e$ is pointed, $\mathcal{B}$ is nilpotent. Hence, for any $g\in G$ and any object $X\in\mathcal{O}(\mathcal{B}_g)$, we have that $\text{FPdim}(X)^2$ divides $\text{FPdim}(\mathcal{B}_e)$ \cite[Corollary 5.3]{GN}, so $(\text{FPdim}(X),d)=1$. By Lemma \ref{lemm1sec4}, $\text{rank}(\mathcal{B}_g)=\frac{\text{FPdim}(\mathcal{B}_e)}{\text{FPdim}(X)^2}$; in particular, $d\mid \text{rank}(\mathcal{B}_g)$. Since $\mathcal{B}_e=\mathcal{C}_G^0$ is a non-degenerate fusion category, by \cite[Lemma 10.7]{Kir} $\text{rank}(\mathcal{B}_g)$ is equal to the cardinality of $\mathcal{O}(\mathcal{C}^0_G)^g$, the set of isomorphism classes of simple objects of $\mathcal{C}^0_G$ that are fixed by $g$.
Note that $g$ acts as a braided tensor equivalence on $\mathcal{C}_G^0$, thus $\mathcal{O}(\mathcal{C}_G^0)^g$ is an abelian group and contains $\mathbb{Z}_d$ as a subgroup, which then implies that $G$ acts trivially on $\mathcal{C}(\mathbb{Z}_d,q)$. Since $(d,|G|)=1$ and $H^i(G,\mathbb{Z}_d)=0$ for all positive integers $i$, the fusion category $\mathcal{E}'=(\mathcal{C}_G^0)^G\subseteq\mathcal{C}$ contains a fusion subcategory $\mathcal{C}(\mathbb{Z}_d,q)^G\cong \text{Rep}(G)\boxtimes \mathcal{C}(\mathbb{Z}_d,q)$. Therefore, $\mathcal{C}$ contains a non-degenerate fusion subcategory $\mathcal{C}(\mathbb{Z}_d,q)$. By \cite[Theorem 3.13]{DrGNO2} we have a braided tensor equivalence $\mathcal{C}\cong\mathcal{C}(\mathbb{Z}_d,q)\boxtimes\mathcal{D}$, where $\mathcal{D}$ is the centralizer of $\mathcal{C}(\mathbb{Z}_d,q)$ in $\mathcal{C}$, and $\mathcal{D}$ is also a non-degenerate fusion category. \end{proof} Next we generalize Theorem \ref{theo1sec4} to strictly weakly integral fusion categories. In this case, it was shown in \cite[Theorem 1.1]{Na} that a weakly anisotropic braided weakly group-theoretical fusion category $\mathcal{C}$ need not be pointed; hence the proof of Theorem \ref{theo1sec4} fails in general. Let $\mathcal{E}=\text{Rep}(G)$ be a Tannakian fusion category. Assume that \begin{align*} B:=\mathcal{I}(I)=\oplus_{X\in\mathcal{O}(\mathcal{E})}X\boxtimes X^* \end{align*} is the connected \'{e}tale algebra such that $(\mathcal{E}\boxtimes\mathcal{E})_B\cong\mathcal{E}$ as braided fusion categories \cite[Remark 2.11]{DNO}. Let $\mathcal{B}$ and $\mathcal{C}$ be braided fusion categories containing $\mathcal{E}$ as a fusion subcategory. Since $\mathcal{E}\boxtimes\mathcal{E}\subseteq \mathcal{C} \boxtimes\mathcal{B}$, we have a braided fusion category $(\mathcal{C}\boxtimes\mathcal{B})_B^0$ by \cite[Proposition 4.30]{DrGNO2}; to simplify notation, we denote it by $\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B}$. Given two $G$-crossed braided fusion categories $\mathcal{C}_G$ and $\mathcal{B}_G$, notice that the fusion category $\mathcal{C}_G\boxtimes\mathcal{B}_G$ is graded by the group $G\times G$ with trivial component $\mathcal{C}_G^0\boxtimes\mathcal{B}_G^0$. Let $\mathcal{C}_G\boxtimes_G\mathcal{B}_G$ denote the fusion subcategory of $\mathcal{C}_G\boxtimes\mathcal{B}_G$ graded by the subgroup $H$ of diagonal elements of $G\times G$, so that $H\cong G$. Then there exists a well-defined action of $G$ on the braided fusion subcategory $\mathcal{C}_G^0\boxtimes\mathcal{B}_G^0$, and naturally $\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B}\cong (\mathcal{C}_G\boxtimes_G\mathcal{B}_G)^G$ as braided fusion categories by \cite[Theorem 7.12]{ENO2}. In addition, $\mathcal{C}\boxtimes_\mathcal{E}\mathcal{B}$ contains a Tannakian subcategory $(\mathcal{E}\boxtimes\mathcal{E})_B\cong \text{Rep}(G)$; it then follows from \cite[Lemma 4.32]{DrGNO2} that there is a braided equivalence of fusion categories \begin{align*} (\mathcal{C}\boxtimes_\mathcal{E}\mathcal{B})_G^0\cong(\mathcal{C}\boxtimes\mathcal{B})_{G\times G}^0, \end{align*} which is then equivalent to $\mathcal{C}^0_G\boxtimes\mathcal{B}^0_G$. \begin{prop}\label{prop1sec4}Let $A,B$ be two connected \'{e}tale algebras in a non-degenerate fusion category $\mathcal{C}$.
If $A\cap B=I$ and, moreover, $A$ and $B$ centralize each other, then we have the following braided equivalences: \begin{align*} (\mathcal{C}_B^0)_{A\otimes B}^0\cong \mathcal{C}_{A\otimes B}^0\cong(\mathcal{C}_A^0)_{A\otimes B}^0. \end{align*} \end{prop} \begin{proof}Since the \'{e}tale algebras $A,B$ centralize each other, $A\otimes B$ is also a commutative algebra; and since \'{e}tale algebras are self-dual \cite[Remark 3.4]{DMNO}, $\text{Hom}_\mathcal{C}(I,A\otimes B)\cong\text{Hom}_\mathcal{C}(A,B)$ is one-dimensional, hence $A\otimes B$ is connected. It is easy to see that $A\otimes B$ is a commutative algebra over $A$, hence we have an abelian category equivalence $(\mathcal{C}_A)_{A\otimes B}\cong\mathcal{C}_{A\otimes B}$, so $A\otimes B$ is a connected \'{e}tale algebra by \cite[Proposition 2.7, Proposition 3.16]{DMNO}. By \cite[Corollary 3.30, Corollary 3.32]{DMNO}, $(\mathcal{C}_A^0)_{A\otimes B}^0$ and $(\mathcal{C}_B^0)_{A\otimes B}^0$ are non-degenerate fusion categories with the same Frobenius-Perron dimension. It follows from \cite[Proposition 6.3.3]{EGNO} and \cite[Corollary 3.26]{DMNO} that $\mathcal{C}_{A\otimes B}^0\cong(\mathcal{C}_A^0)_{A\otimes B}^0$ as braided fusion categories, and then $(\mathcal{C}_B^0)_{A\otimes B}^0\cong \mathcal{C}_{A\otimes B}^0\cong (\mathcal{C}_A^0)_{A\otimes B}^0$. \end{proof} \begin{lemm}\label{lemm2sec4}Let $\mathcal{C},\mathcal{D}$ be braided fusion categories. Assume that there exists a Tannakian subcategory $\mathcal{E}=\text{Rep}(G)$ of both such that $\mathcal{C}_G^0\cong\mathcal{D}_G^0$. Then $\mathcal{C}\cong\mathcal{D}\widehat{\boxtimes}_\mathcal{E}\mathcal{Z}(\text{Vec}_G^\omega)$ for some $\omega\in Z^3(G,\mathds{k}^*)$. \end{lemm} \begin{proof}Since we have a braided equivalence $\mathcal{C}_G^0\cong\mathcal{D}_G^0$, the $G$-crossed braided fusion categories $\mathcal{C}_G$ and $\mathcal{D}_G$ differ by a $3$-cocycle $\omega\in Z^3(G,\mathds{k}^*)$ \cite[Theorem 8.9]{ENO2}. By definition, the associator of $\mathcal{D}_G\boxtimes_G \text{Vec}_G^\omega$ differs from that of $\mathcal{D}_G$ by the $3$-cocycle $\omega$. That is, we have an equivalence of $G$-crossed braided fusion categories $\mathcal{D}_G\boxtimes_G \text{Vec}_G^\omega\cong\mathcal{C}_G$. Consequently, we have the following braided tensor equivalences \begin{align*} \mathcal{D}\widehat{\boxtimes}_\mathcal{E}\mathcal{Z}(\text{Vec}_G^\omega)\cong(\mathcal{D}_G\boxtimes_G \text{Vec}_G^\omega)^G\cong (\mathcal{C}_G)^G\cong\mathcal{C}, \end{align*} as desired. This finishes the proof of the lemma. \end{proof} For any braided fusion category $\mathcal{C}$, the equivalence classes of invertible $\mathcal{C}$-module categories form a group, called the Picard group $\text{Pic}(\mathcal{C})$ of $\mathcal{C}$ \cite[section 4]{ENO2}. In addition, if $\mathcal{C}$ is a non-degenerate fusion category, then \cite[Theorem 5.2]{ENO2} says that the group $\text{Aut}^\text{br}_\otimes(\mathcal{C})$ of braided tensor equivalences of $\mathcal{C}$ is isomorphic to $\text{Pic}(\mathcal{C})$. \begin{theo}\label{theo2sec4} Let $\mathcal{C}$ be a non-degenerate fusion category with $\text{FPdim}(\mathcal{C})=nd$, where $n$ is a positive integer and $d$ is a square-free integer such that $(n,d)=1$. Assume that $\mathcal{C}$ contains a Tannakian subcategory $\mathcal{E}=\text{Rep}(G)$ satisfying $\mathcal{C}_G^0\cong\mathcal{C}(\mathbb{Z}_d,q)\boxtimes\mathcal{A}$, and moreover that $(\text{FPdim}(V)^2,d)=1$ for any $V\in\mathcal{O}(\mathcal{C})$.
Then $\mathcal{C}\cong\mathcal{C}(\mathbb{Z}_d,q)\boxtimes\mathcal{C}(\mathbb{Z}_d,q)_\mathcal{C}'$ as braided fusion categories. \end{theo} \begin{proof} Note that $G$ acts by braided equivalences on the non-degenerate fusion category $\mathcal{C}_G^0$; hence it preserves the group $\mathbb{Z}_d$ and therefore the subcategory $\mathcal{C}(\mathbb{Z}_d,q)$, which induces a group homomorphism \begin{align*}\rho: G\to \text{Pic}(\mathcal{C}(\mathbb{Z}_d,q))\cong \text{Aut}^\text{br}_{\otimes}(\mathcal{C}(\mathbb{Z}_d,q)). \end{align*} Since $|G|^2$ divides $\text{FPdim}(\mathcal{C})$ and $d$ is square-free, we have $(|G|,d)=1$, and then $H^m(G,\mathbb{Z}_d)=0$ for all positive integers $m$. In particular, the $3$-obstruction $o_3(\rho)$ in $H^3(G,\mathbb{Z}_d)$ vanishes, so we can lift $\rho$ to a $2$-homomorphism $\underline{\rho}:\underline{G}\to \underline{\text{Pic}(\mathcal{C}(\mathbb{Z}_d,q))}$ by \cite[Corollary 7.9]{ENO2}. Meanwhile, the fusion subcategory $\mathcal{A}$ is exactly the centralizer of $\mathcal{C}(\mathbb{Z}_d,q)$ in $\mathcal{C}_G^0$, hence $G$ also preserves $\mathcal{A}$, and we have another well-defined $2$-homomorphism $\underline{G}\to \underline{\text{Pic}(\mathcal{A})}$. Then there exists an action of $G\times G$ on $\mathcal{C}(\mathbb{Z}_d,q)\boxtimes\mathcal{A}$, given by $(g,h)\mapsto g\boxtimes h$, $g,h\in G$. To obtain a $G$-crossed extension of $\mathcal{C}(\mathbb{Z}_d,q)$, by \cite[Theorem 7.12]{ENO2} we have to lift $\rho$ to a $3$-homomorphism $\underline{\underline{\rho}}: \underline{\underline{G}}\to \underline{\underline{\text{Pic}(\mathcal{C}(\mathbb{Z}_d,q))}}$, which meets a $4$-obstruction $o_4(\rho)$ in the group $H^4(G,\mathds{k}^*)$; this obstruction vanishes by \cite[Theorem 8.16]{ENO2}. Let $\widetilde{\mathcal{C}(\mathbb{Z}_d,q)}$ be the faithful braided $G$-crossed extension of $\mathcal{C}(\mathbb{Z}_d,q)$ determined by $\underline{\underline{\rho}}$. Then it follows from \cite[Proposition 4.56]{DrGNO2} that $\widetilde{\mathcal{C}(\mathbb{Z}_d,q)}^G$ is a non-degenerate fusion category. Let $\mathcal{B}:=\widetilde{\mathcal{C}(\mathbb{Z}_d,q^{-1})}^G$, where $q^{-1}$ is the inverse quadratic form of $q$ on $\mathbb{Z}_d$, that is, $q^{-1}(v):=q(v)^{-1}$ for all $v\in\mathbb{Z}_d$. Obviously, $\mathcal{C}(\mathbb{Z}_d,q^{-1})\cong\mathcal{C}(\mathbb{Z}_d,q)^\text{rev}$ and $\mathcal{B}^\text{rev} \cong\widetilde{\mathcal{C}(\mathbb{Z}_d,q)}^G$, where $\mathcal{C}(\mathbb{Z}_d,q)^\text{rev}$ and $\mathcal{B}^\text{rev}$ are the corresponding non-degenerate fusion categories with the reverse braiding. Next we show that there exists a non-degenerate fusion category $\mathcal{D}$ of FP-dimension $nd$ which also contains $\mathcal{C}(\mathbb{Z}_d,q)$ and satisfies $\mathcal{D}_G^0\cong\mathcal{C}_G^0$. To do this, we first show that there exists a non-degenerate fusion category $\widetilde{\mathcal{A}}$ such that $\widetilde{\mathcal{A}}_G^0\cong\mathcal{A}$. Since $\mathcal{E}\boxtimes\mathcal{E}\subseteq \mathcal{C} \boxtimes\mathcal{B}$, we have a non-degenerate fusion category $\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B}:=(\mathcal{C}\boxtimes\mathcal{B})_B^0$ by \cite[Corollary 3.30]{DMNO}. Note that $\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B}$ contains a Tannakian subcategory $\mathcal{E}\cong \text{Rep}(G)$; moreover, we show that $\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B}$ contains a $G$-equivariant connected \'{e}tale algebra $A$ of FP-dimension $d$.
In fact, by definition $\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B}\cong(\mathcal{C}_G\boxtimes_G\mathcal{B}_G)^G$, and $$(\mathcal{C}^0_G\boxtimes\mathcal{B}^0_G)\cong\mathcal{A}\boxtimes \mathcal{C}(\mathbb{Z}_d,q) \boxtimes \mathcal{C}(\mathbb{Z}_d,q^{-1})\cong\mathcal{A}\boxtimes\mathcal{Z}(\mathcal{C}(\mathbb{Z}_d,q)).$$ The invertible diagonal simple objects of $\mathcal{C}(\mathbb{Z}_d,q) \boxtimes \mathcal{C}(\mathbb{Z}_d,q^{-1})$ generate a Tannakian subcategory $\mathcal{H}\cong \text{Rep}(\mathbb{Z}_d)$, which admits a $G$-equivariant structure by \cite[Theorem 9.5]{ENO2}. Hence we have an exact sequence of finite groups $1\to\mathbb{Z}_d\to \widetilde{G}\to G\to 1$, which splits as $(d,|G|)=1$, so $\widetilde{G}\cong\mathbb{Z}_d\rtimes G$. Consequently, there is a braided equivalence $\mathcal{H}^G\cong \text{Rep}(\widetilde{G})$, and the \'{e}tale algebra $A$ is exactly the regular algebra $\text{Fun}(\widetilde{G}/G)$, which is a $G$-equivariant connected \'{e}tale algebra. Then there is another non-degenerate fusion category $(\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B})_A^0$ by \cite[Corollary 3.30]{DMNO}. We claim that the non-degenerate fusion category $\widetilde{\mathcal{A}}=(\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B})_A^0$ is exactly the fusion category we want. In fact, \cite[Lemma 4.32]{DrGNO2} says that there are braided equivalences of non-degenerate fusion categories \begin{align*} (\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B})_G^0\cong (\mathcal{C}\boxtimes\mathcal{B})_{G\times G}^0\cong \mathcal{C}^0_G\boxtimes\mathcal{B}^0_G\cong\mathcal{A}\boxtimes\mathcal{C}(\mathbb{Z}_d,q)\boxtimes\mathcal{C}(\mathbb{Z}_d,q^{-1}). \end{align*} Since $A\in(\mathcal{C}^0_G\boxtimes\mathcal{B}^0_G)^G$ is a $G$-equivariant \'{e}tale algebra and the centralizer of $\mathcal{E}=\text{Rep}(G)$ inside $\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B}$ is $(\mathcal{C}^0_G\boxtimes\mathcal{B}^0_G)^G$, the subcategory $\mathcal{E}$ and the algebra $A$ centralize each other. Hence, as braided fusion categories, $((\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B})_A^0)_G^0$ is equivalent to $((\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B})_G^0)_A^0$ by Proposition \ref{prop1sec4}. Then we have braided equivalences of non-degenerate fusion categories \begin{align*} ((\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B})_A^0)_G^0\cong((\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B})_G^0)_A^0\cong(\mathcal{A}\boxtimes\mathcal{C}(\mathbb{Z}_d,q) \boxtimes\mathcal{C}(\mathbb{Z}_d,q^{-1}))_A^0\cong\mathcal{A}. \end{align*} Take $\mathcal{D}:=(\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B})_A^0\widehat{\boxtimes}_\mathcal{E}\mathcal{B}^\text{rev}$; then \cite[Corollary 3.30]{DMNO} shows that $\mathcal{D}$ is also a non-degenerate fusion category. Moreover, we have the following braided equivalences of non-degenerate fusion categories \begin{align*} ((\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B})_A^0\widehat{\boxtimes}_\mathcal{E}\mathcal{B}^\text{rev})_G^0\cong ((\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B})_A^0\boxtimes \mathcal{B}^\text{rev})_{G\times G}^0\cong ((\mathcal{C}\widehat{\boxtimes}_\mathcal{E}\mathcal{B})_A^0)_G^0\boxtimes (\mathcal{B}^\text{rev})_G^0\cong \mathcal{A}\boxtimes\mathcal{C}(\mathbb{Z}_d,q). \end{align*} By Lemma \ref{lemm2sec4}, $\mathcal{C}\cong\mathcal{D}\widehat{\boxtimes}_\mathcal{E}\mathcal{Z}(\text{Vec}_G^\omega)$ for some $\omega\in Z^3(G,\mathds{k}^*)$.
Finally, we show that $\mathcal{D}$, and hence $\mathcal{C}$, contains $\mathcal{C}(\mathbb{Z}_d,q)$ as a fusion subcategory. First, we show that $G$ acts trivially on $\mathcal{C}(\mathbb{Z}_d,q)$. If not, then by \cite[Lemma 10.7]{Kir} there exists a non-trivial component of the $G$-crossed grading of $\widetilde{\mathcal{C}(\mathbb{Z}_d,q)}$ of rank less than $d$. Since $\mathcal{C}(\mathbb{Z}_d,q)$ is the trivial component of $\widetilde{\mathcal{C}(\mathbb{Z}_d,q)}$ and the $G$-grading of $\widetilde{\mathcal{C}(\mathbb{Z}_d,q)}$ is faithful, there exist non-invertible simple objects of $\widetilde{\mathcal{C}(\mathbb{Z}_d,q)}$ of FP-dimension $\sqrt{t}$ by \cite[Corollary 5.3]{GN}, where $t>1$ is an integer such that $t\mid d$. The equivalence $\mathcal{C}\cong\mathcal{D}\widehat{\boxtimes}_\mathcal{E}\mathcal{Z}(\text{Vec}_G^\omega)$ obtained above then implies that $\mathcal{C}$ contains a simple object $V$ of FP-dimension $s\sqrt{t}$ for some algebraic integer $s$ with $s^2\in\mathbb{Z}$, which contradicts the assumption $(\text{FPdim}(V)^2,d)=1$. Hence $G$ acts trivially on $\mathcal{C}(\mathbb{Z}_d,q)$, and each component of the $G$-crossed grading of $\widetilde{\mathcal{C}(\mathbb{Z}_d,q)}$ has rank $d$ by \cite[Lemma 10.7]{Kir}. Consequently, $\widetilde{\mathcal{C}(\mathbb{Z}_d,q)}\cong \mathcal{C}(\mathbb{Z}_d,q)\boxtimes \text{Vec}_G$ as a $G$-crossed fusion category by \cite[Remark 7.11]{ENO2}, and then $\mathcal{B}^\text{rev}\cong\mathcal{C}(\mathbb{Z}_d,q)\boxtimes\mathcal{Z}(\mathcal{E})$ as braided fusion categories. By the definition of $\mathcal{D}$, it follows that $\mathcal{C}(\mathbb{Z}_d,q)\subseteq\mathcal{D}$ as a fusion subcategory. Therefore, $\mathcal{C}$ also contains the non-degenerate fusion subcategory $\mathcal{C}(\mathbb{Z}_d,q)$, and then \cite[Theorem 3.13]{DrGNO2} implies that we have a braided equivalence $\mathcal{C}\cong\mathcal{C}(\mathbb{Z}_d,q)\boxtimes\mathcal{C}(\mathbb{Z}_d,q)_\mathcal{C}'$, as required. \end{proof} \begin{remk} In Theorem \ref{theo2sec4}, the condition that $(\text{FPdim}(V)^2,d)=1$ for all simple objects $V\in \mathcal{C}$ cannot be dropped. For example, let $d>1$ be a square-free odd integer. There exists a non-degenerate fusion category $\mathcal{C}$ of FP-dimension $4d$ that is braided equivalent to a $\mathbb{Z}_2$-equivariantization of a Tambara-Yamagami fusion category $\mathcal{TY}(\mathbb{Z}_d,\tau,\mu)$. It was proved that $\mathcal{C}$ contains a simple object of FP-dimension $\sqrt{d}$ and that $\mathcal{C}_\text{pt}=\text{Rep}(\mathbb{Z}_2)$; see \cite[Theorem 3.1]{BGNPRW} for details. Obviously, $\mathcal{C}\ncong\mathcal{C}(\mathbb{Z}_d,q)\boxtimes\mathcal{C}(\mathbb{Z}_d,q)_\mathcal{C}'$ as braided fusion categories. \end{remk} \begin{coro}Let $\mathcal{C}$ be a weakly group-theoretical non-degenerate fusion category, and let $d$ be a square-free integer. Assume that $\text{FPdim}(\mathcal{C})=nd$ and $(n,d)=1$. If $(\text{FPdim}(X)^2,d)=1$ for all $X\in\mathcal{O}(\mathcal{C})$, then $\mathcal{C}\cong\mathcal{C}(\mathbb{Z}_d,q)\boxtimes\mathcal{C}(\mathbb{Z}_d,q)_\mathcal{C}'$ as braided fusion categories. \end{coro} \begin{proof} Assume $\mathcal{C}$ is not pointed, otherwise the result is trivial. If $\mathcal{C}$ is a weakly anisotropic fusion category, it follows from \cite[Theorem 1.1]{Na} that $\mathcal{C}$ is braided equivalent to a Deligne tensor product of an Ising category and a pointed fusion category. Otherwise, let $\mathcal{E}=\text{Rep}(G)$ be a maximal Tannakian subcategory of $\mathcal{C}$.
By \cite[Corollary 5.19]{DrGNO2}, $\mathcal{C}_G^0$ is a weakly anisotropic fusion category, as $|G|^2$ divides $\text{FPdim}(\mathcal{C})$; thus \cite[Theorem 1.1]{Na} implies that $\mathcal{C}_G^0$ satisfies the condition $\mathcal{C}_G^0\cong\mathcal{C}(\mathbb{Z}_d,q)\boxtimes\mathcal{A}$ of Theorem \ref{theo2sec4}, hence $\mathcal{C}$ contains a non-degenerate fusion subcategory $\mathcal{C}(\mathbb{Z}_d,q)$, and the conclusion follows from \cite[Theorem 3.13]{DrGNO2} as before. \end{proof} Moreover, by using Theorem \ref{theo2sec3} and Theorem \ref{theo2sec4}, we have the following direct consequence. \begin{coro}\label{coro1sec4}Let $n$ be an integer and $d$ an odd square-free integer such that $(n,d)=1$, and let $\mathcal{C}$ be a slightly degenerate weakly group-theoretical fusion category with $\text{FPdim}(\mathcal{C})=nd$. If there exists a minimal extension $\mathcal{A}$ of $\mathcal{C}$ such that $(\text{FPdim}(V)^2,d)=1$ for any $V\in\mathcal{O}(\mathcal{A})$, then $\mathcal{C}\cong \mathcal{C}(\mathbb{Z}_d,q)\boxtimes\mathcal{C}(\mathbb{Z}_d,q)_\mathcal{C}'$ as braided fusion categories. \end{coro} Recall that a braided fusion category $\mathcal{C}$ of FP-dimension $p^ar^bd$ is weakly group-theoretical \cite[Proposition 3.14]{Yu}, where $p,r$ are primes, $a,b$ are non-negative integers, and $d$ is a square-free integer such that $(pr,d)=1$; for non-degenerate fusion categories, this conclusion was first proved in \cite[Corollary 5.4]{Na}. Therefore, if $\mathcal{C}'\subseteq \text{sVec}$ and $\mathcal{C}$ is integral, then for all simple objects $X$ of $\mathcal{C}$, $\text{FPdim}(X)^2$ divides $\text{FPdim}(\mathcal{C})$ by \cite[Theorem 2.11]{ENO3} and \cite[Corollary 3.4]{Yu}, hence $(\text{FPdim}(X),d)=1$. Thus, Theorem \ref{theo1sec4} shows that $\mathcal{C}\cong\mathcal{C}(\mathbb{Z}_m,q)\boxtimes\mathcal{D}$, where $\mathcal{D}$ is a braided fusion category with M\"{u}ger center $\mathcal{D}'\subseteq \text{sVec}$, and $m=d$ or $\frac{d}{2}$. In particular, we have the following corollary, which generalizes the conclusions of \cite[Theorem 4.7]{DongNa} and \cite[Proposition 3.8, Corollary 3.12]{Yu}. \begin{coro}\label{coro2sec4}Let $\mathcal{C}$ be an integral braided fusion category of FP-dimension $p^nd$ with $\mathcal{C}'\subseteq \text{sVec}$, where $p$ is a prime, $n$ is a non-negative integer and $d$ is a square-free integer such that $(p,d)=1$. Then $\mathcal{C}$ is nilpotent and group-theoretical. \end{coro} \begin{proof} By Theorem \ref{theo2sec4}, if $\mathcal{C}'=\text{Vec}$, then $\mathcal{C}\cong \mathcal{D}\boxtimes \mathcal{C}(\mathbb{Z}_d,q)$ as braided fusion categories, and $\text{FPdim}(\mathcal{D})=p^n$. If $\mathcal{C}'=\text{sVec}$, then $\mathcal{C}$ is nilpotent by \cite[Corollary 3.12]{Yu} when $p$ is odd; if $p=2$, then the previous argument shows that $\mathcal{C}$ contains $\mathcal{C}(\mathbb{Z}_d,q)$ as a fusion subcategory, and consequently $\mathcal{C}\cong \mathcal{D}\boxtimes \mathcal{C}(\mathbb{Z}_d,q)$, where the M\"{u}ger center $\mathcal{D}'\subseteq \text{sVec}$ and $\text{FPdim}(\mathcal{D})=2^n$. Since $\mathcal{C}$ is braided equivalent to a Deligne tensor product of fusion subcategories of prime power FP-dimension, it is nilpotent by \cite[Theorem 8.28]{ENO1}, and $\mathcal{C}$ is group-theoretical \cite[Corollary 6.8]{DrGNO1}. \end{proof}
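As a concrete illustration of Corollary \ref{coro2sec4} (a routine specialization, recorded only for the reader's convenience): take $p=2$, $n=2$ and $d=3$, so that $\text{FPdim}(\mathcal{C})=12$. An integral non-degenerate fusion category $\mathcal{C}$ of FP-dimension $12$ is weakly group-theoretical by \cite[Corollary 5.4]{Na}, and Theorem \ref{theo1sec4} then gives a braided equivalence
\begin{align*}
\mathcal{C}\cong\mathcal{C}(\mathbb{Z}_3,q)\boxtimes\mathcal{D}, \qquad \text{FPdim}(\mathcal{D})=4,
\end{align*}
so $\mathcal{C}$ is a Deligne tensor product of fusion subcategories of prime power FP-dimension; in particular, it is nilpotent and group-theoretical.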
{ "timestamp": "2021-05-06T02:07:30", "yymm": "2105", "arxiv_id": "2105.01814", "language": "en", "url": "https://arxiv.org/abs/2105.01814" }
\section{Introduction} \IEEEPARstart{W}{ith} the increasing penetration of renewable energies in the power system, more Voltage Source Converters (VSC) for High Voltage Direct Current (HVDC) have been used in applications such as long transmission links and cable lines \cite{book_oriol}. Due to its improved efficiency, easier scalability to high voltage levels and inherent redundancy, the modular multilevel converter (MMC) has become the preferred converter choice for VSC-HVDC applications \cite{visionary_paper_oriol, lesnicar_paper}. Compared to classic two- and three-level converters, controlling the MMC is more complex since it has additional degrees of freedom that can be used to improve the converter performance \cite{joan_adria,daniel_iecon,daniel_tpwd,7482715}. For proper operation, several quantities of the MMC must be regulated (e.g. the AC and DC network currents, the circulating currents and the sub-module capacitor voltages). Regarding the internal energy of the converter, different control strategies have been proposed to regulate the total stored energy, to balance the energies between the arms and phase-legs, and to maintain similar voltage levels within the sub-module capacitors \cite{b_williams,j_pou}. Under balanced grid conditions, energy balancing is not a major issue, as all the MMC's sub-modules remain close to their nominal voltage values. Nevertheless, energy deviations may occur during power changes in the network, and these must be compensated to maintain proper operation of the system \cite{enric,Enric_2,6863645}. On the other hand, during unbalanced AC grid faults, the internal energy deviations are larger due to uneven power exchanges between the converter's arms and phase-legs, and they must be quickly compensated to avoid tripping the converter. Relevant reference calculation approaches and control strategies have been proposed to analyze and mitigate the effects of unbalanced AC network voltages. In \cite{sul_2}, controllers were derived targeting the horizontal (between phase-legs) energy regulation of the converter during an AC single-line-to-ground fault. Still focusing on the phase-leg balancing of the MMC, \cite{wang} improves it through AC zero-sequence voltage injection, while \cite{ou} analyzes the energy dynamic response for different DC circulating current scenarios. By considering not only the horizontal balancing but also the energy transfer between the upper and lower arms, further improvements can be achieved in the transient response of the converter \cite{Leon}. References \cite{Liang,bergna_3,6877696} derive distinct methods to perform the vertical energy balancing of the MMC, but they only consider the DC characteristics of the circulating current. The authors in \cite{Moez_2,PRIETOARAUJO2017424,Moez_1} employ both the AC and DC components of the circulating current in their control design. In \cite{Moez_2}, the MMC's AC circulating current reference calculation is performed based only on the positive sequence of the AC grid voltage, whereas \cite{PRIETOARAUJO2017424} also uses the negative-sequence component. Still, the former proposals neglect the impact that the equivalent impedance of the MMC has on the arms' applied voltages. Such influence is considered in \cite{Moez_1}, in which an optimization algorithm based on a linear matrix inequality approach is proposed to calculate the circulating current reference.
Although the previous methods are capable of controlling the MMC under unbalanced AC network conditions, in unbalanced scenarios where the positive- and negative-sequence components of either the AC grid voltages or the MMC's internal voltages are equal, they might result in singularities in the current reference calculation. This problem was addressed for two-level VSCs in \cite{adria}, and possible solutions were proposed for MMC applications under such AC grid voltage sags in \cite{edu_tran}. To the best of the authors' knowledge, a control strategy capable of dealing with different unbalanced AC voltage sags (in particular, internal singular voltage sags and singular AC network faults) while keeping the upper and lower arm energies balanced has not been proposed yet. Another research gap addressed in this paper is the inclusion of the internal impedance effects on the AC additive voltage during the derivation of the MMC's circulating current references. Next, the main contributions of this paper are highlighted: \begin{itemize} \item Comparison among different AC additive current reference calculations and their potential usage during singular voltage conditions. \item The DC differential zero-sequence voltage component $U_{diff}^{0DC}$ is employed to enhance the power balancing between the upper and lower arms throughout the operation of the converter (balanced or unbalanced). \item Improvements to the solutions proposed in \cite{edu_tran} through the addition of $U_{diff}^{0DC}$ and the MMC's equivalent impedance. \item The degrees of freedom of the MMC are fully exploited by the current reference calculations. \item A comprehensive additive current reference calculation able to operate under any grid voltage condition. \end{itemize} The proposed reference calculation is compared with different methods by means of time-domain simulation results for distinct AC grid and internal singular voltage sag conditions. In addition, the effects of $U_{diff}^{0DC}$ are also analyzed according to the reference calculation approach employed. \vspace{-0.3cm} \section{MMC system description and modelling} \vspace{-0.1cm} The model of the three-phase MMC is shown in Fig. \ref{fig:MMC}. The converter consists of three legs, one per phase, where each leg has two stacks of $N_{arm}$ sub-modules (SMs), known as the upper and lower arms. The topology employed in the sub-modules varies according to the application requirements; the half-bridge structure is widely used due to its simplicity and lower cost \cite{mmc_book}. \begin{figure}[!h] \vspace{-0.3cm} \centerline{\includegraphics[width=0.85\linewidth]{./FIGURES/MMC_NLC_COMPLETE.pdf}} \vspace{-0.1cm} \caption{Complete model of the MMC converter.} \label{fig:MMC} \vspace{-0.3cm} \end{figure} For steady-state modelling purposes, the phasor notation $\underline{X}^k = X_r^k + j X_i^k = X^k \phase{\theta^k}$ will be adopted, with $x(t) = X^k \text{Re}\{e^{j( \omega t + \theta^k)}\}$ and $k \in \{a,b,c\}$.
Therefore, the main quantities for each phase are: the AC grid voltages $\underline{U}_{g}^k$; the upper and lower arm applied voltages $\underline{U}_{u,l}^k$; the upper and lower DC grid voltages $U_{u,l}^{DC}$; the voltage between the 0 DC reference node and the neutral $n$ of the AC three-phase system $\underline{U}_{0n}$; the upper and lower arm currents $\underline{I}_{u,l}^k$; the AC grid current $\underline{I}_{s}^k$; the arm impedances $R_a$ and $L_a$; the grid equivalent resistance and inductance $R_s$ and $L_s$; and the sub-module capacitors $C_{SM}$. To ease the understanding of the MMC circuit and the derivation of the control strategies, the following quantities are defined: \vspace{-0.1cm} \begin{equation} \small \left\{ \begin{array}{l} \underline{U}_{diff} \triangleq \dfrac{1}{2}(-\underline{U}_{u}+\underline{U}_{l})\\ \underline{U}_{sum} \triangleq \underline{U}_{u}+\underline{U}_{l}\\ \underline{I}_{sum} \triangleq \dfrac{1}{2}(\underline{I}_{u}+\underline{I}_{l})\\ \end{array}\right. \quad \left\{ \begin{array}{l} Z_{arm}\triangleq R_a+j\omega L_a\\ Z_{s}\triangleq R_s+j\omega L_s \\ Z_{eq}\triangleq Z_s + \dfrac{Z_{arm}}{2} \\ \end{array}\right. \label{eq:varchange2} \end{equation} \noindent with $k \in \{a,b,c\}$, where $\underline{U}_{diff}^k$, $\underline{U}_{sum}^k$ and $\underline{I}_{sum}^k$ are the differential and additive applied voltages and the circulating currents of the converter, respectively. As will be described later, the additive currents play an important role in regulating the power transfer between the HVDC network and the MMC, and in keeping the internal power of the converter balanced, which is achieved by exchanging power among phase-legs and between the upper and lower arms. \vspace{-0.5cm} \section{MMC control system} \vspace{-0.1cm} In this section, the overall control system to regulate the MMC and the methodology to calculate the additive and AC network current references are presented. The employed control scheme, shown in Fig. \ref{fig:control_scheme}, uses the design procedures derived in \cite{PRIETOARAUJO2017424} and can be divided into two main parts: the AC grid current control and the circulating current control. For the AC grid current stage, the current references are calculated based on the active and reactive power set-points required by the Transmission System Operator (TSO) and the magnitude of the positive-sequence component of the AC network voltage. Then, these references are fed into the grid-side current control loops. The energy controllers are designed to maintain the internal energy balance of the converter. This is achieved through six different control loops, which regulate the MMC's total internal energy $E_t$, the energy differences between the converter's phase-legs, $E_{a\to b}$ and $E_{a\to c}$, and the energy mismatches between the converter's arms, $E_{u\to l}^k$. These energy regulators provide the power references necessary to calculate the AC and DC inner current set-points, highlighted in yellow in Fig. \ref{fig:control_scheme}. The current references, which are later tracked by the additive current controllers, can be obtained through different methods according to the quantities used in the calculation (see the method selection in Fig. \ref{fig:control_scheme} and Sections \ref{sec:comparison} and \ref{sec:proposed}).
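To make the variable change in \eqref{eq:varchange2} concrete, the following minimal Python sketch (illustrative only; the function name and all numerical values are hypothetical and not taken from this paper) computes the internal quantities of one phase from arm-level phasors represented as complex numbers:

\begin{verbatim}
import numpy as np

def mmc_internal_quantities(U_u, U_l, I_u, I_l,
                            R_a, L_a, R_s, L_s, w):
    """Variable change defined in the text for one phase.

    U_u, U_l, I_u, I_l are complex phasors of the upper/lower
    arm voltages and currents; returns the differential and
    additive voltages, the circulating current and Z_eq.
    """
    U_diff = 0.5 * (-U_u + U_l)   # differential applied voltage
    U_sum = U_u + U_l             # additive applied voltage
    I_sum = 0.5 * (I_u + I_l)     # additive (circulating) current
    Z_arm = R_a + 1j * w * L_a    # arm impedance
    Z_s = R_s + 1j * w * L_s      # grid impedance
    Z_eq = Z_s + Z_arm / 2        # equivalent impedance
    return U_diff, U_sum, I_sum, Z_eq

# Hypothetical phase-a values, for illustration only:
U_diff, U_sum, I_sum, Z_eq = mmc_internal_quantities(
    U_u=160e3 * np.exp(1j * 0.10), U_l=161e3 * np.exp(-1j * 0.05),
    I_u=400 + 50j, I_l=380 - 60j,
    R_a=0.5, L_a=50e-3, R_s=0.3, L_s=30e-3, w=2 * np.pi * 50)
\end{verbatim}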
\begin{figure}[!h] \vspace{-0.3cm} \centerline{\includegraphics[width=1\linewidth]{./FIGURES/MMC_OVERALL_CONTROL_DSPIER_WITH_SWITHC_time.pdf}} \vspace{-0.1cm} \caption{Overall control scheme of the MMC converter for grid-following applications.} \label{fig:control_scheme} \vspace{-0.5cm} \end{figure} For the sake of completeness, the reference calculation procedures for the AC grid currents and for the DC component of the additive currents are briefly described next; more details can be found in \cite{PRIETOARAUJO2017424}. In contrast, a comprehensive analysis of the different methods to calculate the current references for the AC component of the additive current is given, and these approaches are compared regarding their applicability during faults and their exploitation of the MMC's degrees of freedom. \vspace{-0.5cm} \subsection{AC network current reference calculation} Under balanced conditions, the AC grid currents present a symmetrical profile. However, during unbalanced AC voltage sags, the three-phase system may have different voltage levels in each phase (due to the presence of negative-sequence components), which might result in unbalanced currents circulating through the AC network. In general, the AC grid current references are calculated by considering only the positive-sequence component of the AC grid voltages, for both balanced and unbalanced scenarios \cite{Akagi2007InstantaneousPT}. \vspace{-0.5cm} \subsection{Additive current reference calculation} Generally, the DC components of the additive current are used to regulate the power transfer among the phase-legs of the converter, whereas the AC components are employed to control the power exchanged between the MMC's upper and lower arms (vertical balancing) \cite{Leon}. \subsubsection{DC component of the additive current} The DC terms of the additive current can be applied in the regulation of the energy exchanged horizontally. In addition, notch filters must be used in order to eliminate the line- and double-line-frequency power components coming from the energy controllers \cite{PRIETOARAUJO2017424}. \subsubsection{AC component of the additive current} To calculate the AC additive current references, it is necessary to obtain a mathematical expression relating the power difference between the MMC's upper and lower halves to their respective applied voltages. On the one hand, the arms' applied voltages can be considered equal to the AC network voltage (assuming that the equivalent impedance of the converter is small). The main advantage of such an approach is its simplicity, since it directly uses measurements from the AC system. However, this method presents a discontinuity when the positive- and negative-sequence components of the AC grid voltages are equal or almost equal (singular condition) \cite{edu_tran}. As a consequence, the additive current references saturate, compromising the vertical energy balancing controller. Another candidate solution would be to employ the differential voltages resulting from the grid-side current control as the arms' applied voltages, but, as will be demonstrated later, this strategy also fails if the internal voltages present singular characteristics. \vspace{-0.2cm} \section{Comparison among different AC additive current reference calculation strategies} \label{sec:comparison} In this section, different methods to calculate the AC additive current references are presented.
These approaches vary according to the voltages that are considered to be applied to the MMC's arms (AC grid or differential voltages). For balanced and several unbalanced network conditions, the distinct strategies can maintain the converter stable and provide proper current references. However, certain unbalanced AC grid voltage sags may lead to internal singular voltage conditions that must also be addressed during the derivation of the additive current references to avoid discontinuities in the system. \vspace{-0.3cm} \subsection{Internal singular voltage sag analysis} \label{sec:internal} An internal singular voltage event occurs when the positive-sequence component of the AC differential voltage is equal to the negative one $(\underline{U}_{diff}^+ = \underline{U}_{diff}^-)$. To derive the expression for such a fault, let us first assume that the AC differential voltages are calculated as \cite{Enric_2} \begin{subequations} \vspace{-0.2cm} \small \begin{equation} \underline{U}_{diff}^{+} = \underline{U}_{g}^{+} + \underline{Z}_{eq} \underline{I}_s^{+} \label{eq:KVL_pos} \end{equation} \begin{equation} \underline{U}_{diff}^{-} = \underline{U}_{g}^{-} + \underline{Z}_{eq} \underline{I}_s^{-} \label{eq:KVL_neg} \end{equation} \label{eq:KVL} \vspace{-0.5cm} \end{subequations} The scenario where $\underline{U}_{diff}^+ = \underline{U}_{diff}^-$ is obtained by equating \eqref{eq:KVL_pos} and \eqref{eq:KVL_neg}. As a result, an expression that describes the condition under which the converter's applied voltages present singular behavior is given as follows \vspace{-0.2cm} \begin{equation} \small \underline{U}_{g}^{-} = \underline{U}_{g}^{+} + \underbrace{\underline{Z}_{eq}\left(\underline{I}_s^{+} -\underline{I}_s^{-} \right) }_\text{internal factor} \label{eq:int_sing} \end{equation} As mentioned in Section III-A, the AC grid controllers are designed to inject only the positive-sequence current component into the grid, thus $\underline{I}_s^- = 0$. Furthermore, it can be observed that this type of fault not only depends on the AC grid voltage characteristics but is also affected by the interaction between the MMC's equivalent impedance and the AC grid currents. In this paper, the internal factor is considered constant, since the controllers keep injecting the same positive-sequence current levels into the AC grid throughout the converter's operation. \vspace{-0.3cm} \subsection{Method 0 - Initial approach $\underline{U}_{u,l}^k = \underline{U}_{g}^k$} This methodology is the most straightforward way to calculate the AC additive current references of the MMC. It assumes that the equivalent impedance of the converter is small, so that the AC arm voltage can be considered equal to the AC grid voltage. However, during AC grid faults where the positive-sequence voltage component is equal to the negative one, this method fails \cite{PRIETOARAUJO2017424}, since the current reference calculation presents a discontinuity at this operating point. As a consequence, it tries to impose very high AC additive currents through the MMC, which must then be disconnected to avoid damaging the converter. \vspace{-0.3cm} \subsection{Method 1 - Method 0 considering $U_{diff}^{0DC}$} This reference calculation applies the techniques developed in \cite{edu_tran} employing $U_{diff}^{0DC}$. Next, the method's working principle along with the calculation and regulation of $U_{diff}^{0DC}$ are described.
\subsubsection{Working principle} This method removes one of the degrees of freedom from the AC additive current that does not contribute to the power exchanged within the MMC arms. By doing so, the discontinuity is avoided, but a sustained constant energy deviation between the phase-legs of the converter remains throughout the fault. This approach can be further improved by replacing the degree of freedom that was lost with the DC differential zero-sequence voltage component $U_{diff}^{0DC}$. \subsubsection{Regulation of $U_{diff}^{0DC}$} During balanced conditions, this voltage is equal to zero, as both upper and lower arms have the same DC voltage level. Under transients, on the other hand, $U_{diff}^{0DC}$ is different from zero and its magnitude can be applied to the MMC's arms to improve the energy balancing between them. This degree of freedom can be obtained as \vspace{-0.1cm} \begin{equation} \small U_{diff}^{0DC} = \dfrac{P_{u \to l}^a +P_{u \to l}^b + P_{u \to l}^c}{3I_{sum}^{0DC}} \label{eq:Udiff_0DC} \vspace{-0.1cm} \end{equation} \noindent where $P_{u \to l }^{abc}$ are the power differences between the upper and lower arms and $I_{sum}^{0DC} \neq 0$ (to avoid discontinuities). By using this degree of freedom, the energy deviations are eliminated and controlled power can be transmitted within the MMC's arms, enhancing the converter response during singular AC grid voltage conditions (see Section VI). In Fig. \ref{fig:Udiff_0DC_control}, the control structure employed in the regulation of $U_{diff}^{0DC}$ is depicted. As mentioned above, the unregulated value of $U_{diff}^{0DC}$ is calculated based on \eqref{eq:Udiff_0DC} and compared to its desired magnitude (set to zero), resulting in the error \textbf{\textit{e}}. This error is fed to a PI controller, which is designed to quickly compensate any voltage disparity caused by the zero-sequence power mismatch between the upper and lower arms of the converter. By doing so, the sustained energy deviations observed when other control methods are used, as pointed out by \cite{edu_tran}, can be eliminated. The controller gains employed in this paper are set to $k_p=0.25$ and $k_i=12$, selected based on the response of $U_{diff}^{0DC}$ for different AC and internal singular voltage sag conditions. Finally, a saturation block is added as a safety measure to prevent high values of $U_{diff}^{0DC}$, which would result in overmodulation. \begin{figure}[!h] \vspace{-0.3cm} \centerline{\includegraphics[width=0.7\linewidth]{./FIGURES/U_diff_DC_control.pdf}} \vspace{-0.1cm} \caption{$U_{diff}^{0DC}$ control structure.} \label{fig:Udiff_0DC_control} \vspace{-0.3cm} \end{figure} \vspace{-0.3cm} \subsection{Method 2 - Arm voltages equal to the DC and AC differential voltages $\underline{U}_{u,l}^{+-} = \underline{U}_{diff}^{+-} + U_{diff}^{0DC}$} An alternative solution to the former problems would be to replace the AC grid voltages by the positive- and negative-sequence components of the internal differential voltages of the converter. Consequently, the MMC equivalent impedance is not neglected, yielding a more realistic AC voltage level in the converter's arms. For this strategy, however, the differential voltage controllers should be fast enough to avoid inaccuracies due to the interactions between the two regulators \cite{harnefors}.
Nevertheless, even if such requirements are fulfilled, in a situation where the MMC's internal differential voltages are equal, $U_{diff}^{+} = U_{diff}^-$, the discontinuity also occurs and the vertical energy balancing of the converter is compromised. \vspace{-0.3cm} \subsection{Method 3 - Method 1 with $\underline{U}_{u,l}^{+-} = \underline{U}_{diff}^{+-} + U_{diff}^{0DC}$} Now, the principles of Method 1 are extended by considering that the arms' applied voltages are equal to the differential ones. If only the AC differential terms were employed, a sustained energy deviation would also appear during fault events where $U_{diff}^{+} = U_{diff}^-$; this deviation is compensated with the zero-sequence DC differential voltage. However, as will be shown in Section \ref{sec:Results}, the energy drifts cause the $U_{diff}^{0DC}$ controllers to saturate due to the limited voltage application range. Consequently, the vertical energy regulators are not able to compensate the power transferred within the converter's arms, leading to the eventual disconnection of the system. \vspace{-0.3cm} \section{Method 4 - Proposed approach considering the additive and differential voltage components in the arm} \label{sec:proposed} The previous methods present distinct strategies to calculate the AC additive current by using different quantities as the applied voltages in the MMC's arms. However, they share a common limitation: the additive voltage effects in the arm are neglected. At first sight, it seems that this voltage cannot be part of the AC additive current references without an iterative control method. However, using a simple mathematical substitution, which requires neither optimization nor iterative control and does not violate any constraint imposed in the steady-state analysis, it is possible to obtain an expression that can be employed as the reference calculation for any AC grid or arm singular voltage sag condition. To do so, let us first consider that the applied voltages in the six arms are \vspace{-0.2cm} \begin{subequations} \small \begin{equation} \underline{U}_{u,l}^{k} = \mp \underline{U}_{diff}^{k} +\dfrac{\underline{U}_{sum}^{k}}{2} \label{eq:U_upper_simp} \end{equation} \vspace{-0.5cm} \begin{align} \label{eq:U_ul_abc} &u_{u,l}^{k}(t) =\sqrt{2}\Bigg( \mp U_{diff}^{+}\cos\left(\omega t +\theta_{diff}^{+} + \alpha^k\right) \mp \\& \nonumber \mp U_{diff}^{-}\cos\left(\omega t +\theta_{diff}^{-} - \alpha^k\right) +\dfrac{U_{sum}^{+}}{2}\cos{\left(\omega t + \theta_{sum}^{+} + \alpha^k \right)} + \\& \nonumber+ \dfrac{U_{sum}^{-}}{2}\cos{\left(\omega t + \theta_{sum}^{-} -\alpha^k \right)}\Bigg) +\left(\dfrac{U_{sum}^{kDC}}{2}\mp U_{diff}^{0DC}\right) \end{align} \end{subequations} \noindent where the sign $\mp$ indicates that the differential voltage components for the upper arms are negative, $u_u^k=-u_{diff}^k + \frac{u_{sum}^k}{2}$, while for the lower arms the differential voltages are positive, $u_l^k=u_{diff}^k + \frac{u_{sum}^k}{2}$. In addition, $\alpha^a = 0, \alpha^b = -\frac{2\pi}{3}, \alpha^c =\frac{2\pi}{3}$ and $k\in \{a,b,c\}$. $\theta_{diff}^{+}$ and $\theta_{diff}^{-}$ are the phase-angles of the positive- and negative-sequence components of the differential voltages, whereas $\theta_{sum}^{+}$ and $\theta_{sum}^{-}$ are the phase-angles of the positive- and negative-sequence additive voltages.
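Before introducing the arm currents, the sequence-component reconstruction in \eqref{eq:U_ul_abc} can be illustrated with a brief numeric sketch. All magnitudes and phase-angles below are assumed illustrative values, not the case-study operating point:

\begin{verbatim}
import numpy as np

# Illustrative sequence-domain quantities in pu (assumed values)
U_diff_p, th_diff_p = 0.65, np.deg2rad(0.0)    # positive-seq. differential
U_diff_n, th_diff_n = 0.03, np.deg2rad(30.0)   # negative-seq. differential
U_sum_p,  th_sum_p  = 0.02, np.deg2rad(-80.0)  # positive-seq. additive
U_sum_n,  th_sum_n  = 0.01, np.deg2rad(100.0)  # negative-seq. additive
U_sum_dc, U_diff_0dc = 2.0, 0.0                # DC terms

omega = 2 * np.pi * 50.0
alpha = {"a": 0.0, "b": -2 * np.pi / 3, "c": 2 * np.pi / 3}

def u_arm(t, k, upper=True):
    """Time-domain arm voltage; the upper arms take the minus sign on the
    differential terms and on U_diff^{0DC}."""
    s = -1.0 if upper else 1.0
    ac = np.sqrt(2) * (
        s * U_diff_p * np.cos(omega * t + th_diff_p + alpha[k])
        + s * U_diff_n * np.cos(omega * t + th_diff_n - alpha[k])
        + 0.5 * U_sum_p * np.cos(omega * t + th_sum_p + alpha[k])
        + 0.5 * U_sum_n * np.cos(omega * t + th_sum_n - alpha[k]))
    return ac + U_sum_dc / 2 + s * U_diff_0dc

t = np.linspace(0.0, 0.04, 801)    # two 50 Hz cycles
u_ua = u_arm(t, "a", upper=True)   # upper-arm voltage of phase a
print(f"min(u_u^a) = {u_ua.min():.3f} pu")  # non-negative: feasible for half-bridge SMs
\end{verbatim}

The same function with \texttt{upper=False} gives the lower-arm voltage, where the differential terms enter with a positive sign.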
The upper and lower arm currents can be described as follows \vspace{-0.1cm} \begin{subequations} \small \begin{equation} \underline{I}_{u,l}^{k} = \pm \dfrac{\underline{I}_{s}^{k}}{2} +\underline{I}_{sum}^{k} \label{eq:I_ul_s} \end{equation} \vspace{-0.5cm} \begin{align} &i_{u,l}^{k}(t) =\sqrt{2}\Bigg( \pm \dfrac{I_{s}^{+}}{2}\cos\left(\omega t +\phi_{s}^{+} + \alpha^k \right) \pm \nonumber\\& \pm \dfrac{I_{s}^{-}}{2}\cos\left(\omega t + \phi_{s}^{-} -\alpha^k \right) + I_{sum}^{+}\cos{\left(\omega t + \phi_{sum}^{+} + \alpha^k \right)}+\nonumber\\& + I_{sum}^{-}\cos{\left(\omega t + \phi_{sum}^{-} -\alpha^k \right)}\Bigg)+ I_{sum}^{kDC} \label{eq:Iul_abc} \end{align} \end{subequations} \begin{figure*}[!t] \footnotesize \begin{subequations} \begin{equation} P_{u\to l}^{a}= \begin{bmatrix} U_{diff}^{+} \\ U_{diff}^{-} \\ I_{s}^{+} \\ I_{s}^{-} \\ \end{bmatrix}^T \begin{bmatrix} -2\cos{(\theta_{diff}^{+} - \phi_{sum}^{+})} & -2\cos{(\theta_{diff}^{+} - \phi_{sum}^{-})} & 0 & 0 \\ -2\cos{(\theta_{diff}^{-} - \phi_{sum}^{+})} & -2\cos{(\theta_{diff}^{-} - \phi_{sum}^{-})} & 0 & 0 \\ 0 & 0 & \dfrac{\cos{(\theta_{sum}^{+} - \phi_{s}^{+})}}{2} & \dfrac{\cos{(\theta_{sum}^{-} - \phi_{s}^{+})}}{2} \\ 0 & 0 & \dfrac{\cos{(\theta_{sum}^{+} - \phi_{s}^{-})}}{2} & \dfrac{\cos{(\theta_{sum}^{-} - \phi_{s}^{-})}}{2}\\ \end{bmatrix} \begin{bmatrix} I_{sum}^{+}\\ I_{sum}^{-}\\ U_{sum}^{+}\\ U_{sum}^{-} \end{bmatrix} -2U_{diff}^{0DC}I_{sum}^{aDC} \label{eq:Pu_l_a} \end{equation} \begin{align} P_{u\to l}^{b}= \begin{bmatrix} U_{diff}^{+} \\ U_{diff}^{-} \\ I_{s}^{+} \\ I_{s}^{-} \\ \end{bmatrix}^T \begin{bmatrix} -2\cos{(\theta_{diff}^{+} - \phi_{sum}^{+})} & -2\cos{(\theta_{diff}^{+} - \phi_{sum}^{-} + \frac{2\pi}{3})} & 0 & 0 \\ -2\cos{(\theta_{diff}^{-} - \phi_{sum}^{+} - \frac{2\pi}{3} )} & -2\cos{(\theta_{diff}^{-} - \phi_{sum}^{-})} & 0 & 0 \\ 0 & 0 & \dfrac{\cos{(\theta_{sum}^{+} - \phi_{s}^{+})}}{2} & \dfrac{\cos{(\theta_{sum}^{-} - \phi_{s}^{+}-\frac{2\pi}{3})}}{2} \\ 0 & 0 & \dfrac{\cos{(\theta_{sum}^{+} - \phi_{s}^{-} + \frac{2\pi}{3})}}{2} & \dfrac{\cos{(\theta_{sum}^{-} - \phi_{s}^{-})}}{2}\\ \end{bmatrix} \begin{bmatrix} I_{sum}^{+}\\ I_{sum}^{-}\\ U_{sum}^{+}\\ U_{sum}^{-} \end{bmatrix}- \\ \nonumber -2U_{diff}^{0DC}I_{sum}^{bDC} \label{eq:Pu_l_b} \end{align} \begin{align} P_{u\to l}^{c}= \begin{bmatrix} U_{diff}^{+} \\ U_{diff}^{-} \\ I_{s}^{+} \\ I_{s}^{-} \\ \end{bmatrix}^T \begin{bmatrix} -2\cos{(\theta_{diff}^{+} - \phi_{sum}^{+})} & -2\cos{(\theta_{diff}^{+} - \phi_{sum}^{-} - \frac{2\pi}{3})} & 0 & 0 \\ -2\cos{(\theta_{diff}^{-} - \phi_{sum}^{+} + \frac{2\pi}{3} )} & -2\cos{(\theta_{diff}^{-} - \phi_{sum}^{-})} & 0 & 0 \\ 0 & 0 & \dfrac{\cos{(\theta_{sum}^{+} - \phi_{s}^{+})}}{2} & \dfrac{\cos{(\theta_{sum}^{-} - \phi_{s}^{+}+\frac{2\pi}{3})}}{2} \\ 0 & 0 & \dfrac{\cos{(\theta_{sum}^{+} - \phi_{s}^{-} - \frac{2\pi}{3})}}{2} & \dfrac{\cos{(\theta_{sum}^{-} - \phi_{s}^{-})}}{2}\\ \end{bmatrix} \begin{bmatrix} I_{sum}^{+}\\ I_{sum}^{-}\\ U_{sum}^{+}\\ U_{sum}^{-} \end{bmatrix}- \\ \nonumber -2U_{diff}^{0DC}I_{sum}^{cDC} \label{eq:Pu_l_c} \end{align} \label{eq:Pu_l_abc} \end{subequations} \vspace*{-0.35cm} \hrulefill \vspace*{-0.5cm} \end{figure*} \noindent where, similarly to the arms' applied voltages, the sign $\pm$ indicates that $i_u^k = \frac{i_{s}^k}{2} + i_{sum}^k$ and $i_l^k = -\frac{i_{s}^k}{2} + i_{sum}^k$.
Furthermore, $\phi_{s}^{+}$ and $\phi_{sum}^{+}$ are the phase-angles of the positive-sequence AC grid and additive currents, respectively, while $\phi_{s}^{-}$ and $\phi_{sum}^{-}$ are the phase-angles of the corresponding negative-sequence current components. Having defined the upper and lower arm voltages and currents in terms of the additive and differential quantities, it is possible to describe the power difference between the upper and lower arms as \begin{equation} \small P_{u\to l}^{k}= \underline{U}_{u}^{k}\underline{I}_{u}^{k}-\underline{U}_{l}^{k}\underline{I}_{l}^{k}, \quad k\in \{a,b,c\} \label{eq:Pul_tot_abc} \end{equation} Replacing \eqref{eq:U_ul_abc} and \eqref{eq:Iul_abc} in \eqref{eq:Pul_tot_abc}, the power differences are obtained in matrix form as shown in \eqref{eq:Pu_l_abc}. It can be observed that the left vector consists of the AC grid currents and differential voltages, whereas the far-right vector contains the additive voltages and currents. Furthermore, the phase-angle differences in each power element indicate an interaction between the additive and differential quantities. Although \eqref{eq:Pul_tot_abc} can express the power transfer between the upper and lower arms of the converter, employing it in a control strategy to calculate the AC components of the additive current references might be challenging, as it would require iterative calculation methods. A simple approach would be to neglect the additive voltages and consider only the differential terms in the reference calculation. However, as discussed in Section IV, this approximation fails during internal singular voltage sag conditions. To overcome this issue and increase the operating range of the converter, the proposed reference calculation substitutes $\underline{U}_{sum}^{+-}$ with an expression containing the arm impedance and the additive current, yielding \vspace{-0.1cm} \begin{subequations} \small \begin{equation} \underline{U}_{sum}^{+} =-2 Z_{arm}I_{sum}^{+}\phase{\rho+\phi_{sum}^{+}} \end{equation} \begin{equation} \underline{U}_{sum}^{-} =-2 Z_{arm}I_{sum}^{-}\phase{\rho+\phi_{sum}^{-}} \end{equation} \label{eq:Usum_new} \vspace{-0.3cm} \end{subequations} \noindent where $\rho$ is the phase-angle of the arm impedance $\underline{Z}_{arm}$. Replacing the additive voltages in \eqref{eq:Pu_l_abc} with the new expressions from \eqref{eq:Usum_new}, the final equations for the vertical power transfer are obtained and expressed in \eqref{eq:Pu_l_abc_simp}.
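The substitution in \eqref{eq:Usum_new} simply expresses the additive voltage as the negated, doubled voltage drop that the additive current produces across the arm impedance. A minimal numeric check of this identity, with assumed illustrative per-unit values, is:

\begin{verbatim}
import numpy as np

# Illustrative arm impedance and positive-sequence additive current (pu)
Z_arm = 0.01 + 1j * 0.15
I_sum_p = 0.05 * np.exp(1j * np.deg2rad(-40.0))  # magnitude and phi_sum^+

# Polar form used in the derivation: -2|Z_arm| I_sum^+ at angle rho + phi_sum^+
rho, phi = np.angle(Z_arm), np.angle(I_sum_p)
U_sum_polar = -2 * abs(Z_arm) * abs(I_sum_p) * np.exp(1j * (rho + phi))

# Equivalent complex product: the voltage drop across both arm reactors
U_sum_drop = -2 * Z_arm * I_sum_p

assert np.isclose(U_sum_polar, U_sum_drop)
print(f"U_sum^+ = {abs(U_sum_drop):.4f} pu at "
      f"{np.angle(U_sum_drop, deg=True):.1f} deg")
\end{verbatim}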
\begin{figure*}[!t] \footnotesize \begin{subequations} \begin{equation} P_{u\to l}^{a}= \begin{bmatrix} U_{diff}^{+} \\ U_{diff}^{-} \\ I_{s}^{+} \\ I_{s}^{-} \\ \end{bmatrix}^T \begin{bmatrix} -2\cos{(\theta_{diff}^{+} - \phi_{sum}^{+})} & -2\cos{(\theta_{diff}^{+} - \phi_{sum}^{-})}\\ -2\cos{(\theta_{diff}^{-} - \phi_{sum}^{+})} & -2\cos{(\theta_{diff}^{-} - \phi_{sum}^{-})} \\ -Z_{arm}\cos(\rho + \phi_{sum}^{+}-\phi_{s}^{+}) & -Z_{arm}\cos(\rho + \phi_{sum}^{-} - \phi_{s}^{+})\\ -Z_{arm}\cos(\rho + \phi_{sum}^{+}-\phi_{s}^{-}) & -Z_{arm}\cos(\rho + \phi_{sum}^{-}-\phi_{s}^{-})\\ \end{bmatrix} \begin{bmatrix} I_{sum}^{+}\\ I_{sum}^{-}\\ \end{bmatrix} -2U_{diff}^{0DC}I_{sum}^{aDC} \label{eq:Pu_l_a_simp} \end{equation} \begin{equation} P_{u\to l}^{b}= \begin{bmatrix} U_{diff}^{+} \\ U_{diff}^{-} \\ I_{s}^{+} \\ I_{s}^{-} \\ \end{bmatrix}^T \begin{bmatrix} -2\cos{(\theta_{diff}^{+} - \phi_{sum}^{+})} & -2\cos{(\theta_{diff}^{+} - \phi_{sum}^{-}+ \frac{2\pi}{3})}\\ -2\cos{(\theta_{diff}^{-} - \phi_{sum}^{+}-\frac{2\pi}{3})} & -2\cos{(\theta_{diff}^{-} - \phi_{sum}^{-})} \\ -Z_{arm}\cos(\rho + \phi_{sum}^{+}-\phi_{s}^{+}) & -Z_{arm}\cos(\rho + \phi_{sum}^{-} - \phi_{s}^{+}+\frac{2\pi}{3})\\ -Z_{arm}\cos(\rho + \phi_{sum}^{+}-\phi_{s}^{-}-\frac{2\pi}{3}) & -Z_{arm}\cos(\rho + \phi_{sum}^{-}-\phi_{s}^{-})\\ \end{bmatrix} \begin{bmatrix} I_{sum}^{+}\\ I_{sum}^{-}\\ \end{bmatrix} -2U_{diff}^{0DC}I_{sum}^{bDC} \label{eq:Pu_l_b_simp} \end{equation} \begin{equation} P_{u\to l}^{c}= \begin{bmatrix} U_{diff}^{+} \\ U_{diff}^{-} \\ I_{s}^{+} \\ I_{s}^{-} \\ \end{bmatrix}^T \begin{bmatrix} -2\cos{(\theta_{diff}^{+} - \phi_{sum}^{+})} & -2\cos{(\theta_{diff}^{+} - \phi_{sum}^{-}- \frac{2\pi}{3})}\\ -2\cos{(\theta_{diff}^{-} - \phi_{sum}^{+}+\frac{2\pi}{3})} & -2\cos{(\theta_{diff}^{-} - \phi_{sum}^{-})} \\ -Z_{arm}\cos(\rho + \phi_{sum}^{+}-\phi_{s}^{+}) & -Z_{arm}\cos(\rho + \phi_{sum}^{-} - \phi_{s}^{+}-\frac{2\pi}{3})\\ -Z_{arm}\cos(\rho + \phi_{sum}^{+}-\phi_{s}^{-}+\frac{2\pi}{3}) & -Z_{arm}\cos(\rho + \phi_{sum}^{-}-\phi_{s}^{-})\\ \end{bmatrix} \begin{bmatrix} I_{sum}^{+}\\ I_{sum}^{-}\\ \end{bmatrix} -2U_{diff}^{0DC}I_{sum}^{cDC} \label{eq:Pu_l_c_simp} \end{equation} \label{eq:Pu_l_abc_simp} \end{subequations} \vspace*{-0.15cm} \hrulefill \vspace*{-0.5cm} \end{figure*} Comparing \eqref{eq:Pu_l_abc} and \eqref{eq:Pu_l_abc_simp}, it can be noted that the substitution does not change the number of terms in the new power transfer equations; thus, the degrees of freedom of the converter are still being fully exploited. The three vertical power quantities $\left(P_{u\to l}^{a}, P_{u\to l}^{b}, P_{u\to l}^{c}\right)$ are adjusted based on four parameters $\left(I_{sum}^{+},I_{sum}^{-}, \phi_{sum}^{+}, \phi_{sum}^{-} \right)$, since the AC grid current and the differential voltage magnitudes are given values that are regulated independently of the internal powers. 
By choosing the reactive component of the positive-sequence additive current to be equal to zero $(\sin(\phi_{sum}^+) = 0)$, \eqref{eq:Pu_l_abc_simp} can be reduced to \vspace{-0.4cm} \begin{equation} \small \hspace{-0.2cm} \underbrace{\begin{bmatrix} P_{u\to l}^{a}\\P_{u\to l}^{b}\\P_{u\to l}^{c} \end{bmatrix}}_\text{P} = \underbrace{\begin{bmatrix} M_{11}&M_{12}&M_{13}\\ M_{21}&M_{22}&M_{23}\\ M_{31}&M_{32}&M_{33}\\ \end{bmatrix}}_\text{M} \underbrace{\begin{bmatrix} I_{sum}^{-}\cos{\phi_{sum}^{-}}\\I_{sum}^{-}\sin{\phi_{sum}^{-}}\\I_{sum}^{+}\cos{\phi_{sum}^{+}} \end{bmatrix}}_\text{$I^{AC}$} -2U_{diff}^{0DC} \underbrace{\begin{bmatrix} I_{sum}^{aDC}\\I_{sum}^{bDC}\\I_{sum}^{cDC} \end{bmatrix}}_\text{$I^{DC}$} \label{eq:matrix} \vspace{-0.3cm} \end{equation} \noindent where, \vspace{-0.3cm} {\small \begin{align*}\nonumber &M_{11} = Z_{arm}\left(-I_{s}^{+}\cos{\left(\rho-\phi_{s}^{+}\right)}-I_{s}^{-}\cos{\left(\rho-\phi_{s}^{-}\right)}\right)\nonumber-\\ &\nonumber -2U_{diff}^+\cos{\left(\theta_{diff}^{+}\right)}-2U_{diff}^-\cos{\left(\theta_{diff}^{-}\right)} \end{align*} \vspace{-0.5cm} \begin{align*}\nonumber &M_{12} = Z_{arm}\left(I_{s}^{+}\sin{\left(\rho -\phi_{s}^{+}\right)}+I_{s}^{-}\sin{\left(\rho-\phi_{s}^{-}\right)}\right)\nonumber-\\& \nonumber -2U_{diff}^+\sin{\left(\theta_{diff}^{+}\right)}-2U_{diff}^-\sin{\left(\theta_{diff}^{-}\right)} \end{align*} \vspace{-0.3cm} \begin{align*}\nonumber &M_{13} = M_{11} \end{align*} \vspace{-0.3cm} \begin{align*}\nonumber &M_{21} = Z_{arm}\left(I_{s}^{+}\cos{\left(\rho-\phi_{s}^{+} - \dfrac{2\pi}{3}\right)} -I_{s}^{-}\cos{\left(\rho-\phi_{s}^{-}\right)}\right)-\\& \nonumber-2U_{diff}^+\cos{\left(\theta_{diff}^{+}+\frac{2\pi}{3}\right)}-2U_{diff}^-\cos{\left(\theta_{diff}^{-}\right)} \end{align*} \vspace{-0.3cm} \begin{align*}\nonumber &M_{22} = Z_{arm}\left(-I_{s}^{+}\cos{\left(\rho-\phi_{s}^{+} - \dfrac{\pi}{6}\right)}+I_{s}^{-}\sin{\left(\rho-\phi_{s}^{-}\right)}\right)-\\& \nonumber-2U_{diff}^+\cos{\left(\theta_{diff}^{+}+\dfrac{\pi}{6}\right)}-2U_{diff}^-\sin{\left(\theta_{diff}^{-}\right)} \end{align*} \vspace{-0.3cm} \begin{align*}\nonumber &M_{23} = Z_{arm}\left(-I_{s}^{+}\sin{\left(\rho-\phi_{s}^{+}\right)}-I_{s}^{-}\cos\left(\rho-\phi_{s}^{-} + \frac{2\pi}{3}\right)\right)-\\& \nonumber - 2U_{diff}^-\cos{\left(\theta_{diff}^{-}-\frac{2\pi}{3}\right)}-2U_{diff}^+\cos{\left(\theta_{diff}^{+}\right)} \end{align*} \vspace{-0.3cm} \begin{align*}\nonumber &M_{31} = Z_{arm}\left(-I_{s}^{+}\cos{\left(\rho-\phi_{s}^{+} + \dfrac{2\pi}{3}\right)}-I_{s}^{-}\cos{\left(\rho-\phi_{s}^{-}\right)}\right)-\\& \nonumber-2U_{diff}^+\cos{\left(\theta_{diff}^{+}-\dfrac{2\pi}{3}\right)}-2U_{diff}^-\cos{\left(\theta_{diff}^{-}\right)} \end{align*} \vspace{-0.3cm} \begin{align*}\nonumber &M_{32} = Z_{arm}\left(I_{s}^{+}\cos{\left(\rho-\phi_{s}^{+}+\dfrac{\pi}{6}\right)}+I_{s}^{-}\sin{\left(\rho-\phi_{s}^{-}\right)}\right)+\\& \nonumber+2U_{diff}^+\cos{\left(\theta_{diff}^{+}-\dfrac{\pi}{6}\right)}-2U_{diff}^-\sin{\left(\theta_{diff}^{-}\right)} \end{align*} \vspace{-0.3cm} \begin{align*}\nonumber &M_{33} = Z_{arm}\left(-I_{s}^{+}\cos{\left(\rho-\phi_{s}^{+}\right)}-I_{s}^{-}\cos{\left(\rho-\phi_{s}^{-} - \dfrac{2\pi}{3}\right)}\right)-\\& \nonumber-2U_{diff}^-\cos{\left(\theta_{diff}^{-}+\dfrac{2\pi}{3}\right)}-2U_{diff}^+\cos{\left(\theta_{diff}^{+}\right)} \end{align*} } Based on the vertical power references provided by the energy controllers, the AC additive current references can be obtained from \eqref{eq:matrix} as \vspace{-0.3cm} {\small \begin{align}
\label{eq:final_Ref} &\begin{bmatrix} I_{sum}^{-}\cos{\phi_{sum}^{-}}\\I_{sum}^{-}\sin{\phi_{sum}^{-}}\\I_{sum}^{+}\cos{\phi_{sum}^{+}} \end{bmatrix} = 2U_{diff}^{0DC} \begin{bmatrix} I_{sum}^{aDC}\\I_{sum}^{bDC}\\I_{sum}^{cDC} \end{bmatrix}\nonumber + \dfrac{1}{\det{M}} \\& \footnotesize{\begin{bmatrix} M_{22}M_{33}-M_{23}M_{32}&M_{13}M_{32}-M_{12}M_{33}&M_{12}M_{23}-M_{13}M_{22}\\ M_{23}M_{31}-M_{21}M_{33}&M_{11}M_{33}-M_{13}M_{31}&M_{11}M_{23}-M_{13}M_{21}\\ M_{21}M_{32}-M_{22}M_{31}&M_{12}M_{31}-M_{11}M_{32}&M_{11}M_{22}-M_{12}M_{21}\\ \end{bmatrix}}\nonumber\\& \begin{bmatrix} P_{u\to l}^{a}\\P_{u\to l}^{b}\\P_{u\to l}^{c} \end{bmatrix} \end{align}} The determinant of matrix $M$, during an internal singular voltage sag condition and considering that the AC grid current consists only of the positive-sequence component, is equal to \vspace{-0.3cm} {\small \begin{align} \label{eq:sing} & \det M = -\dfrac{3Z_{arm}^3I_{s}^{+^3}\sqrt{3}\cos(\rho-\phi_{s}^{+})}{2} - \\& \nonumber -3U_{diff}^+\sqrt{3}I_s^{+^2}Z_{arm}^2\cos(2\rho-2\phi_{s}^{+}+\theta_{diff}^{+})-\\& \nonumber-6I_s^+\sqrt{3}U_{diff}^{+^2}Z_{arm}\cos(2\theta_{diff}^{+}+\rho-\phi_{s}^{+})-\\& \nonumber {-6I_s^+\cos(\rho-\phi_{s}^{+})\sqrt{3}U_{diff}^{+^2}Z_{arm} -6I_s^{+^2}\cos(\theta_{diff}^{+})\sqrt{3}U_{diff}^+Z_{arm}^2} \end{align} } As some TSOs demand the injection of reactive currents to provide voltage support \cite{ELIA, 124} or active currents for frequency support \cite{NG} to the faulted phases throughout voltage sag events, the positive-sequence component of the AC grid current $I_s^+$ will generally be different from zero. Therefore, the suggested reference calculation does not present any discontinuities during internal singular voltage sag conditions, as can be noted from \eqref{eq:sing}. Finally, the main characteristics of the different AC additive current reference calculation methods are summarized in Table \ref{tab:method_comp}. \vspace{-0.3cm} \begin{table}[ht] \centering \small \caption{Methods summary}\renewcommand\arraystretch{1} \vspace{-0.3cm} \begin{tabular}[c]{cccccc} \hline\hline \multirow{2}{*}{\textbf{Characteristics}} & \multicolumn{5}{c}{\textbf{Method}}\\ & 0 & 1 & 2 & 3 & 4 \\ \hline \multirow{1}{*}{MMC equivalent impedance}&\ding{53} & \ding{53} & \ding{51} & \ding{51} & \ding{51} \\\hline \multirow{1}{*}{No energy drifts among phase-legs}& \ding{51} & \ding{53} & \ding{51} & \ding{53} & \ding{51} \\\hline \multirow{1}{*}{Used for any voltage sag condition}& \ding{53} & \ding{53} & \ding{53} & \ding{53} & \ding{51} \\\hline \multirow{1}{*}{Additive voltage effects}& \ding{53} & \ding{53} & \ding{53} & \ding{53} & \ding{51} \\ \hline \multirow{1}{*}{Degrees of freedom are fully exploited}& \ding{53} & \ding{53} & \ding{53} & \ding{53} & \ding{51} \\ \hline\hline \end{tabular} \label{tab:method_comp} \vspace{-0.8cm} \end{table} \section{Case study} \label{sec:Results} In this section, time-domain simulations are carried out in MATLAB\textsuperscript{\textregistered} Simulink to analyze the performance of the different reference calculation methods during AC grid (Section \ref{sec:AC_grid}) and internal singular (Section \ref{sec:int_grid}) voltage sag conditions. The simulations are performed considering an accelerated model of the MMC \cite{Xu} and employing the Nearest Level Control (NLC) technique to calculate the number of active sub-modules in each arm \cite{5673482}. In addition, the converter is considered to be operating under balanced AC grid conditions when the fault occurs.
Both fault events last three seconds (starting at $t = 2$ s and cleared at $t = 5$ s)\footnote{Note that for real networks, the maximum allowed fault ride-through time would be 250 ms \cite{Entso-e}.} in order to verify whether the methods are able to keep the converter operational and to highlight the differences among them. Table \ref{tab:param_v2} details the system parameters for the case studies. \vspace{-0.3cm} \begin{table}[ht] \centering \small \caption{System parameters}\renewcommand\arraystretch{1} \vspace{-0.3cm} \begin{tabular}[c]{lccl} \hline\hline \textbf{Parameter} & \textbf{Symbol} & \textbf{Value} & \textbf{Units} \\ \hline Rated power & $S$ & 1000 & MVA \\ Rated power factor & $\cos \phi$ & 0.95 (capacitive) & - \\ AC-side rated voltage & $\underline{U}_{g}$& 325 & kV \\ HVDC link voltage & $U^{DC}$ & $\pm$320 & kV \\ Phase reactor impedance & $\underline{Z}_s$ & 0.005+j 0.18 & pu \\ Arm reactor impedance & $\underline{Z}_{arm}$ & 0.01+j 0.15 & pu \\ Converter modules per arm & $N_{u,l_{arm}}^k$ & 433 & - \\ Sub-module capacitance & $C_{SM}$ & 9.5 & mF \\ \hline\hline \end{tabular} \label{tab:param_v2} \vspace{-0.5cm} \end{table} \vspace{-0.2cm} \subsection{AC grid singular voltage condition} \label{sec:AC_grid} This case study illustrates the different dynamic behaviors that the converter presents, according to the AC additive current reference calculation method used, during an AC grid singular voltage condition of type C \cite{edu_tran}. Under such a fault event, the positive- and negative-sequence AC network voltage components have the same magnitude and phase-angle. In Fig. \ref{fig:energy_ACsing}, the MMC's internal energy transfer is shown for each phase throughout the simulation. It can be observed that only Method 0 leads to the disconnection of the converter, which is in agreement with the theoretical analysis (see Section IV). Furthermore, when the fault occurs, a sustained drift in the energy transfer between the arms of phase $a$ $(E_{u \to l}^a)$ is noted for Methods 1 and 3. This happens because the DC differential zero-sequence voltage $U_{diff}^{0DC}$ controller saturates while attempting to eliminate the energy difference between the phase-legs of the converter, as shown in Fig. \ref{fig:Udiff_0DC_ACsing}. Without this saturation, the modulation strategy might result in negative voltage levels being applied to the SMs, which is not possible considering that half-bridge topologies are employed. If full-bridge SMs were considered, negative voltages could be imposed, improving the dynamic response of the internal energy balancing. \vspace{-0.3cm} \begin{figure}[!h] \centerline{\includegraphics[width=1\linewidth]{./FIGURES/ALL_ENERGIES_AC_SING_NOV_4.pdf}} \vspace{-0.2cm} \caption{Energy difference between the MMC arms during AC grid voltage singular condition.} \label{fig:energy_ACsing} \vspace{-0.5cm} \end{figure} \begin{figure}[!h] \centerline{\includegraphics[width=0.75\linewidth]{./FIGURES/ALL_Udiff0DC_MARCH_3.pdf}} \vspace{-0.2cm} \caption{$U_{diff}^{0DC}$ levels during AC grid singular voltage sag for the different reference calculation methods.} \label{fig:Udiff_0DC_ACsing} \vspace{-0.2cm} \end{figure} In Fig. \ref{fig:power_balance}, the time-domain waveforms for the upper and lower power mismatches obtained employing \eqref{eq:Pu_l_abc_simp} are shown.
The vertical power transfer associated with $U_{diff}^{0DC}$ is depicted in green, the average AC power mismatch between the upper and lower arms is highlighted in red, and the total value is represented by the continuous blue line. Among the fault occurrence and clearance transients highlighted in the figure, the DC component contribution is most evident during the fault event for phase $a$. Due to the characteristics of the fault and the AC grid current controller design, the active power injected in the faulted phases $b$ and $c$ is reduced, which is reflected inside the converter as a reduction in the DC additive circulating current levels for those phases. As a result, even though the DC zero-sequence voltage component is common to all three phases, its effect is more significant for phase $a$, since its DC additive current level is maintained constant during the fault event. Finally, it can be noted during the fault occurrence transient for phase $a$ that $U_{diff}^{0DC}$ provides a negative power component, which reduces the high oscillations caused by the AC power term in the total vertical power transfer. \begin{figure}[!h] \centerline{\includegraphics[width=0.9\linewidth]{./FIGURES/P_ul_tot_junction.pdf}} \vspace{-0.2cm} \caption{Vertical power transfer during AC network singular voltage sag condition.} \label{fig:power_balance} \vspace{-0.5cm} \end{figure} \subsection{Internal singular voltage condition} \label{sec:int_grid} The objective of this case study is to show that the proposed AC additive current reference calculation can avoid the discontinuity of the system even during internal singular voltage sags. The AC grid voltages for this fault are calculated based on the expression given in \eqref{eq:int_sing}, assuming an internal factor equal to $\underline{Z}_{eq}\underline{I}_s^{+}=0.24 \phase{87.75^\circ}$ pu and $\underline{U}_g^+ = 0.5\phase{0^\circ}$ pu, resulting in positive- and negative-sequence AC differential voltages equal to $\underline{U}_{diff}^+=\underline{U}_{diff}^-=0.56\phase{25.49^\circ}$ pu. The energy transfer profiles for each phase are shown in Fig. \ref{fig:energy_internal} for all the different reference calculation methods. It can be noted that, although Method 0 fails for the AC grid singular voltage sag, it is capable of handling the internal singular voltage condition along with Method 4. \begin{figure}[!h] \centerline{\includegraphics[width=1\linewidth]{./FIGURES/ALL_ENERGIES_INT_SING_NOV_4.pdf}} \vspace{-0.2cm} \caption{Energy difference between the MMC arms during internal singular voltage sag condition.} \label{fig:energy_internal} \vspace{-0.2cm} \end{figure} Regarding the DC zero-sequence differential voltage, Method 0 does not use it, whereas all the other methods present either short or long saturation periods, as can be seen in Fig. \ref{fig:Udiff_0_INTsing}. Method 2 saturates at the maximum and minimum voltage levels, but it is not able to improve the energy regulation, leading to the disconnection of the system. Methods 1 and 3 result in a sustained saturation, but they also fail to track the desired energy references. Although $U_{diff}^{0DC}$ also saturates when Method 4 is used, it recovers quickly. \begin{figure}[!h] \centerline{\includegraphics[width=0.75\linewidth]{./FIGURES/ALL_Udiff_0DC_MARCH_3_INTERNAL.pdf}} \vspace{-0.2cm} \caption{$U_{diff}^{0DC}$ levels during internal singular voltage sag for the different methods.} \vspace{-0.2cm} \label{fig:Udiff_0_INTsing} \vspace{-0.5cm} \end{figure}
In Fig. \ref{fig:internal_waveforms}, the waveforms of the MMC quantities are presented, showing the converter's dynamic behavior during the fault event and after it is cleared. It can be noted that the arm applied voltages for phases $b$ and $c$ become equal for this type of fault. In addition, all the voltages applied to the converter remain above zero $(0 \leq U_{u,l}^k)$, which is obtained because $U_{diff}^{0DC}$ is saturated. Finally, Method 4 was the only AC additive current reference calculation approach that avoided tripping the converter for both the AC grid and the internal singular voltage sag conditions. \vspace{-0.2cm} \begin{figure}[!htb] \centerline{\subfigure[Transition from normal operation to fault event.]{\includegraphics[width=1\linewidth]{./FIGURES/METHOD_4_NORMAL_FAULT_NOV_4.pdf}}} \centerline{\subfigure[Fault to normal.]{\includegraphics[width=1\linewidth]{./FIGURES/METHOD_4_FAULT_NORMAL_NOV_4.pdf}}} \vspace{-0.2cm} \caption{MMC waveforms during fault transients when Method 4 is employed. a) Fault is applied to the system and b) fault event is cleared.} \label{fig:internal_waveforms} \vspace{-0.5cm} \end{figure} \subsection{Other singular fault scenarios} In this section, the proposed method is compared with the other approaches for different types of internal and AC network singular voltage sag conditions. The simulations focus on the upper and lower arms' energy mismatch throughout the operation of the converter in order to further validate the proposed method. In Figs. \ref{fig:energy_AC_D} to \ref{fig:energy_AC_G}, the results obtained during AC grid singular voltage sags of types D to G \cite{edu_tran} are shown, whereas Figs. \ref{fig:energy_internal_D} to \ref{fig:energy_internal_G} depict similar voltage sags reflected to the applied arm voltages of the converter, characterized by $U_{diff}^+=U_{diff}^-$. The results confirm the conclusions drawn for the type C singular voltage sags. Regarding the AC grid singular conditions, it can be noted that Method 0 results in the eventual disconnection of the converter (faults C to F), but it is able to marginally maintain the system operating during a type G fault. However, this method results in undesired sustained energy drifts during the aforementioned fault. Regarding Methods 1 and 3, during faults E and G specifically, sustained energy drifts are observed for phase $a$, while the remaining phases present slow dynamics. Methods 2 and 4 have faster dynamics, quickly compensating the energy deviations. During internal singular voltage conditions, Methods 1 to 3 are unable to regulate the converter, resulting in eventual protection trips (between 1 and 1.5 s after the fault's occurrence for Methods 1 and 3, and within 100 ms for Method 2). On the other hand, Methods 0 and 4 are capable of managing the energy drifts caused by these faults. Finally, it should be mentioned that for all types of fault conditions presented, the proposed Method 4 was the only approach able to compensate these events, allowing the converter to safely reach steady-state conditions.
\begin{figure}[!h] \centerline{\includegraphics[width=1\linewidth]{./FIGURES/ALL_ENERGIES_SING_D_MAR_9.pdf}} \vspace{-0.2cm} \caption{Energy mismatches for AC network voltage singular type D.} \label{fig:energy_AC_D} \vspace{-0.6cm} \end{figure} \begin{figure}[!h] \centerline{\includegraphics[width=1\linewidth]{./FIGURES/ALL_ENERGIES_SING_E_MAR_9.pdf}} \vspace{-0.2cm} \caption{Energy mismatches for AC network voltage singular type E.} \label{fig:energy_AC_E} \end{figure} \begin{figure}[!h] \centerline{\includegraphics[width=1\linewidth]{./FIGURES/ALL_ENERGIES_SING_F_MAR_9.pdf}} \vspace{-0.2cm} \caption{Energy mismatches for AC network voltage singular type F.} \label{fig:energy_AC_F} \end{figure} \begin{figure}[!h] \centerline{\includegraphics[width=1\linewidth]{./FIGURES/ALL_ENERGIES_SING_G_MAR_9.pdf}} \vspace{-0.2cm} \caption{Energy mismatches for AC network voltage singular type G.} \label{fig:energy_AC_G} \vspace{-0.5cm} \end{figure} \begin{figure}[!h] \centerline{\includegraphics[width=1\linewidth]{./FIGURES/ALL_ENERGIES_INT_SING_D_MAR_9.pdf}} \vspace{-0.2cm} \caption{Energy mismatches for internal voltage singular type D.} \label{fig:energy_internal_D} \end{figure} \begin{figure}[!h] \centerline{\includegraphics[width=1\linewidth]{./FIGURES/ALL_ENERGIES_INT_SING_E_MAR_9.pdf}} \vspace{-0.2cm} \caption{Energy mismatches for internal voltage singular type E.} \label{fig:energy_internal_E} \end{figure} \begin{figure}[!h] \centerline{\includegraphics[width=1\linewidth]{./FIGURES/ALL_ENERGIES_INT_SING_F_MAR_9.pdf}} \vspace{-0.2cm} \caption{Energy mismatches for internal voltage singular type F.} \label{fig:energy_internal_F} \vspace{-0.5cm} \end{figure} \begin{figure}[!h] \centerline{\includegraphics[width=1\linewidth]{./FIGURES/ALL_ENERGIES_INT_SING_G_MAR_9.pdf}} \vspace{-0.2cm} \caption{Energy mismatches for internal voltage singular type G.} \label{fig:energy_internal_G} \vspace{-0.5cm} \end{figure} \subsection{Internal parameter deviations} In this case study, the performance of the proposed Method 4 is analyzed considering parameter deviations in the arm impedances of the MMC during an internal singular voltage sag condition of type D (see Section \ref{sec:internal} and \cite{edu_tran}). For the different internal parameter set-ups simulated, both the reference calculations and the controller gain design are based on the parameters given in Table \ref{tab:param_v2}. The effects of the arm impedance deviations are analyzed in two different set-ups. In the first, unbalanced errors within $\pm5\%$ are considered, whereas in the second the asymmetry can reach errors of around $\pm10\%$. The arm impedance values for the different asymmetric cases are given in Table \ref{tab:impedances}. Finally, the analysis is performed through time-domain simulations of the main quantities of the converter, as well as its internal energy. In Figs. \ref{fig:internal_waveforms_error5} and \ref{fig:energy_5e}, the waveforms for the $\pm5\%$ deviations are depicted. It can be noted that the arm impedance errors do not interfere with the proposed method, since it is still capable of maintaining the proper operation of the system even during the fault. Next, the errors are increased to $\pm10\%$ and the results are shown in Figs. \ref{fig:internal_waveforms_error10} and \ref{fig:energy_10e}.
It can be observed that although the arm impedances present highly asymmetric values, leading to 100 Hz oscillations in the AC-side power of the converter, such asymmetry does not affect the DC side. If the reference calculations and the controllers were not properly designed, undesired 50 Hz oscillations would appear in the DC-side current under such an impedance condition. Finally, comparing the energy plots for the two unbalance scenarios, it is clear that the most severe case presents a higher energy deviation among the phase-legs of the converter, but such deviation can be compensated in steady state and does not affect the overall performance of the MMC. \vspace{-0.3cm} \begin{table}[ht] \centering \small \caption{Arm impedance values for Case study D}\renewcommand\arraystretch{1} \vspace{-0.3cm} \begin{tabular}[c]{ll} \hline\hline Deviation of $\pm5\%$ &Deviation of $\pm10\%$ \\ \hline $Z_{u}^{a} = Z_{arm} - 0.05Z_{arm}$ & $Z_{u}^{a} = Z_{arm} - 0.015Z_{arm}$ \\ $Z_{u}^{b} = Z_{arm} - 0.01Z_{arm}$ & $Z_{u}^{b} = Z_{arm} - 0.1Z_{arm}$ \\ $Z_{u}^{c} = Z_{arm} + 0.02Z_{arm}$ & $Z_{u}^{c} = Z_{arm} + 0.13Z_{arm}$\\ $Z_{l}^{a} = Z_{arm} + 0.03Z_{arm}$ & $Z_{l}^{a} = Z_{arm} + 0.05Z_{arm}$ \\ $Z_{l}^{b} = Z_{arm} + 0.015Z_{arm}$ & $Z_{l}^{b} = Z_{arm} + 0.1Z_{arm}$ \\ $Z_{l}^{c} = Z_{arm} + 0.025Z_{arm}$ & $Z_{l}^{c} = Z_{arm} - 0.08Z_{arm}$ \\ \hline\hline \end{tabular} \label{tab:impedances} \end{table} \noindent where $Z_{u,l}^k$ are the upper and lower arm impedances, with $k \in \{a,b,c\}$. \vspace{-0.2cm} \begin{figure}[!htb] \centerline{\subfigure[Transition from normal operation to fault event.]{\includegraphics[width=1\linewidth]{./FIGURES/NORMAL_FAULT_5e.pdf}}} \centerline{\subfigure[Fault to normal.]{\includegraphics[width=1\linewidth]{./FIGURES/FAULT_NORMAL_5e.pdf}}} \vspace{-0.2cm} \caption{MMC waveforms during an internal singular voltage sag of type D considering unbalanced arm impedance conditions within $\pm 5\%$ error. a) Fault is applied to the system and b) fault event is cleared.} \label{fig:internal_waveforms_error5} \vspace{-0.2cm} \end{figure} \begin{figure}[!h] \centerline{\includegraphics[width=0.75\linewidth]{./FIGURES/ENERGY_5e.pdf}} \vspace{-0.2cm} \caption{Energy mismatches considering arm impedance unbalances within $\pm5\%$.} \label{fig:energy_5e} \vspace{-0.5cm} \end{figure} \vspace{-0.2cm} \begin{figure}[!htb] \centerline{\subfigure[Transition from normal operation to fault event.]{\includegraphics[width=1\linewidth]{./FIGURES/NORMAL_FAULT_10e.pdf}}} \centerline{\subfigure[Fault to normal.]{\includegraphics[width=1\linewidth]{./FIGURES/FAULT_NORMAL_10e.pdf}}} \vspace{-0.2cm} \caption{MMC waveforms during an internal singular voltage sag of type D considering unbalanced arm impedance conditions within $\pm 10\%$ error. a) Fault is applied to the system and b) fault event is cleared.} \label{fig:internal_waveforms_error10} \vspace{-0.2cm} \end{figure} \begin{figure}[!h] \centerline{\includegraphics[width=0.75\linewidth]{./FIGURES/ENERGY_10e.pdf}} \vspace{-0.2cm} \caption{Energy mismatches considering arm impedance unbalances within $\pm10\%$.} \label{fig:energy_10e} \vspace{-0.5cm} \end{figure} \subsection{Distinctions among the reference calculation methods} In this section, the main differences among the presented methods and the requirements for their implementation in a real system are discussed.
The fundamental disparities among Methods 0 to 4 concern the consideration of the arm and AC network impedance effects and the usage of $U_{diff}^{0DC}$. Method 0 neglects both impedances and is the only one that does not consider $U_{diff}^{0DC}$ in its vertical power equations. Method 1 requires complex mathematical techniques to remove the degrees of freedom that do not contribute to the power transfer, but it still does not account for the impedance effects. Methods 2 and 3 extend the techniques applied in Methods 0 and 1, respectively, by considering the impedance impacts in the differential voltages. In Method 4, the impedance contributions are accounted for not only in the differential voltages but also in the additive ones. From an implementation perspective, the previous methods present contrasting degrees of complexity. Regarding hardware requirements (e.g. sensors for measurements), all methods share similar control structures and would require similar measurements. For the methods regulating the DC differential zero-sequence voltage, no extra sensors are needed, since this quantity is calculated from existing measurements. Considering the computational aspect, the implementation of Method 0 is the easiest among the presented approaches, as the most complex mathematical operation required is the inversion of a 3x3 matrix. Method 2 presents a slightly higher complexity level than Method 0 due to two main factors: 1) the usage of the differential quantities and the DC additive currents (the magnitude and phase-angle of the differential voltages, as well as the DC current magnitudes, can be obtained through basic operations and digital filters performed internally by the micro-controller); and 2) the regulation of $U_{diff}^{0DC}$. Although Methods 1 and 3 use different voltages in the calculations (the AC grid and the AC differential voltages, respectively), both require computing the Moore-Penrose pseudoinverse to obtain their current references. Such a complex mathematical operation is not required by Method 4. In this method, the same procedures employed in Method 2 to obtain the differential voltages and DC additive currents are used, requiring only an extra operation in the micro-controller to obtain the magnitudes and phase-angles of the AC additive currents. With these values, the last operation required by the proposed Method 4 is to solve \eqref{eq:final_Ref}. In Table \ref{tab:Differences_methods}, the different measurements and the mathematical operations required by each method are highlighted. In summary, Method 0 is the most straightforward and simplest to implement. Methods 2 and 4 do not significantly increase the computational burden, whereas Methods 1 and 3 require the highest computational burden due to the Moore-Penrose pseudoinverse calculation. Finally, only Method 4 was capable of handling all the different fault case scenarios analyzed.
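As an illustration of this final step, the sketch below solves \eqref{eq:matrix} for the AC additive current references. All numeric values are hypothetical; in a real controller, $M$ is assembled from the measured sequence quantities and $Z_{arm}$ following the $M_{ij}$ expressions above, while the power references and DC additive currents come from the energy regulators:

\begin{verbatim}
import numpy as np

# Hypothetical inputs (pu); in practice these come from measurements
# and from the energy controllers
M = np.array([[-1.90, -0.15, -1.90],
              [ 0.92, -1.70,  1.05],
              [ 0.98,  1.64,  0.85]])
P = np.array([0.010, -0.004, -0.006])  # vertical power references
I_dc = np.array([0.33, 0.33, 0.33])    # DC additive currents
U_diff_0dc = 0.002                     # DC zero-seq. differential voltage

# Rearranging P = M @ I_ac - 2*U_diff_0dc*I_dc gives a single 3x3 solve
I_ac = np.linalg.solve(M, P + 2 * U_diff_0dc * I_dc)

# I_ac = [I^- cos(phi^-), I^- sin(phi^-), I^+ cos(phi^+)], with sin(phi^+) = 0
I_neg = np.hypot(I_ac[0], I_ac[1])
phi_neg = np.degrees(np.arctan2(I_ac[1], I_ac[0]))
I_pos = I_ac[2]
print(f"I_sum^- = {I_neg:.4f} pu at {phi_neg:.1f} deg, "
      f"I_sum^+ = {I_pos:.4f} pu")
\end{verbatim}

Methods 1 and 3 would replace this plain solve with a Moore-Penrose pseudoinverse (e.g. \texttt{np.linalg.pinv}), which is the main source of their higher computational burden.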
\begin{table}[!h] \centering \small \caption{Distinction among Methods} \renewcommand\arraystretch{1} \begin{tabular}{cccccc} \hline\hline \multirow{2}{*}{\textbf{Additional operation and control}} & \multicolumn{5}{c}{\textbf{Method}} \\ & 0 & 1 & 2 & 3 & 4 \\ \hline Control of $U_{diff}^{0DC}$ & \ding{53} & \ding{51} & \ding{51} & \ding{51} & \ding{51} \\ \hline \begin{tabular}[c]{@{}c@{}}Magnitude and phase-angle \\ of $u_{diff}^{+-}$\end{tabular} & \ding{53} & \ding{53} & \ding{51} & \ding{51} & \ding{51} \\\hline \begin{tabular}[c]{@{}c@{}}Calculation of the Moore-Penrose\\ pseudoinverse\end{tabular} & \ding{53} & \ding{51} & \ding{53} & \ding{51} & \ding{53} \\\hline \begin{tabular}[c]{@{}c@{}}Extra matrix to remove\\ the AC additive current component\end{tabular} & \ding{53} & \ding{51} & \ding{53} & \ding{51} & \ding{53} \\\hline Arm impedance value & \ding{53} & \ding{53} & \ding{53} & \ding{53} & \ding{51} \\\hline Magnitude and phase-angle of $u_g^{+-}$ & \ding{51} & \ding{51} & \ding{53} & \ding{53} & \ding{53} \\\hline \begin{tabular}[c]{@{}c@{}}Inversion and solution \\ of 3x3 matrix\end{tabular} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} \\ \hline\hline \end{tabular} \label{tab:Differences_methods} \vspace{-0.5cm} \end{table} \section{Conclusions} An improved inner current reference calculation method for MMCs operating under normal and unbalanced network conditions has been presented, with particular focus on the MMC operation during AC grid or internal singular voltage sag conditions. Such fault events are challenging to handle, as they might lead to the eventual disconnection of the system through the imposition of excessively high inner current references. The reference calculation has been formulated in the $abc$ additive and differential reference frames, and it enables the calculation of the arms' energy transfer considering all the degrees of freedom of the MMC; thus, the effects of the MMC and AC-side impedances are taken into account. This is achieved through a mathematical substitution, requiring neither iterative calculation methods nor optimization, whereby the AC additive voltage is replaced by the voltage drop across the arm impedance caused by the additive currents. Simulation results validate that the proposed reference calculation technique is able to provide adequate converter references and to keep the converter operating during both AC grid and internal singular voltage conditions. \bibliographystyle{IEEEtran}
{ "timestamp": "2021-05-06T02:12:05", "yymm": "2105", "arxiv_id": "2105.01908", "language": "en", "url": "https://arxiv.org/abs/2105.01908" }
\section{Introduction} \label{sec:intro} Airborne LiDAR is one of the most detailed and accurate methods for surveying large geographic areas quickly. It is able to capture topography data through vegetation and is more stable across illumination changes than photographic methods. Because of these benefits, along with the increasing availability of LiDAR sensors and the demand to produce high-quality surveys, more organizations are relying on this technology. In addition to providing highly accurate point data, LiDAR also provides an intensity measurement. LiDAR intensity is recorded as the return strength or amplitude of the return signal. As intensity is directly related to surface reflectance and other surface characteristics, it has applications in feature extraction, classification, segmentation, surface analysis, object detection, and recognition \cite{intensityoverview}. However, collecting airborne LiDAR over large areas can be very time consuming and frequently requires multiple flights, either simultaneous or sequential, to adequately map entire regions. This produces inconsistencies in the intensity measurement, as intensity itself depends on the sensor's calibration as well as on environmental factors such as humidity, temperature, or wetness~\cite{intensityoverview}. By extension, intensities between adjacent or overlapping scans can be vastly different, which is problematic when trying to use this measurement in many applications. Traditional methods for harmonizing the intensity involve multiple stages of processing. Starting from the raw intensity measurements, a correction model is typically employed to adjust the intensity values, reducing variation caused by parameters such as range or angle of incidence. Secondly, most intensity processing systems utilize a normalization method that uses scaling or shifting to adjust the overall brightness, improving harmonization with neighboring tiles or overlapping regions~\cite{intensityoverview}. These processing methods can be difficult to apply consistently over the course of long collection campaigns. We propose a novel method for point cloud intensity harmonization using a deep neural network, which is capable of harmonizing scans from many different sources. We compare this method to several baselines, including interpolation-based methods as well as histogram matching. We show that our method is comparable to the best baseline in the simplest case, and surpasses it when there are distinct regions with unique physical brightness distributions present in the scan collection. Our method requires only the point cloud information from each scan and sufficient overlap between scans. \section{Related Works} \label{sec:Related Works} \textbf{Radiometric Calibration} Scene radiance can be modeled as a nonlinear response function of the image brightness~\cite{kimthesis}. By using radiometric calibration, the brightness of the image can be mapped to a standardized unit, which makes it easier to compare images over a period of time. Radiometric calibration is used by many vendors working in airborne LiDAR to harmonize intensities, but this process is expensive. \textbf{Image Harmonization} There is significant research seeking to harmonize images for various contexts. For example, compositing is a technique that combines two or more images into a single image, often to create the illusion that the images are from the same scene.
Several deep neural networks~\cite{qi2016pointnet, qi2017pointnet, luan2018deep} have been proposed to accomplish more seamless compositing between images. These methods harmonize composite images by translating the foreground into the domain of the background. Other methods~\cite{DBLP:colortransfer1, HWANG20191} seek only to harmonize the color between two images. A harmonization method for LiDAR would reduce or altogether eliminate the need for radiometric calibration. However, image harmonization techniques are not directly usable on point clouds, which, unlike images, provide no inherent grid structure. \textbf{Neural Network Models for Point Clouds.} There have been many recent advancements in point cloud reasoning. The family of PointNet models~\cite{qi2016pointnet, qi2017pointnet} is designed to work on point clouds directly without imposing any additional structure. In contrast, some research has explored ways to understand point clouds using convolution, which is used to extract features from images. One example, KPConv~\cite{thomas2019kpconv}, uses a deformable set of kernel points that act in much the same way as kernels in image convolution. Other approaches explore transforming point clouds into more familiar structures. One such approach is Basis Point Sets~\cite{bps}, which encodes a point cloud directly into a feature vector. In our work, we explore how a deep neural network might provide accurate intensity transfer across multiple scans. \section{LiDAR Harmonization} \label{sec:LiDAR Harmonization} We address the problem of harmonizing the intensity values of a set of overlapping point clouds, each of which is captured by sensors potentially featuring different models, calibrations, or capture conditions. Given a set of source clouds, we want to adjust the intensities of each point such that the source cloud intensity distribution matches the intensity distribution of the target cloud, but still conforms to the physical brightness distribution of the scan area. Similar to~\cite{kimthesis}, we define the relationship between a source point cloud intensity and the harmonized source point cloud intensity to be the nonlinear response function $I_x = f(H_x)$, where $f$ represents the added linear and nonlinear differences that are inherent in the source sensor. We model this as a monotonic function, which is therefore invertible. Obtaining this inverse function provides a mapping from the source intensities to the target intensities, thereby harmonizing the source and target: \begin{equation} \label{eqn:inverse} H_x = g(I_x), \quad g = f^{-1} \end{equation} We propose a deep neural network based regression model capable of performing this task. Our network relies only on a sufficiently large overlap region between source and target scans. Our model is also not greatly affected by shifts in physical brightness distributions, such as transitions from dense urban areas to forested regions, both of which will have noticeably different intensity distributions. \subsection{Architecture} An overview of our harmonization approach can be seen in \figref{architecture}. Given a point cloud neighborhood from a source scan with a central point $x$ and intensity $\hat{I_x}$, we want to predict the harmonized intensity of $x$ relative to a target scan, $\hat{H_x}$. In the ideal case, training would be as simple as finding source and target points with the same coordinates and building $g$ from \eqnref{inverse} through any function approximation algorithm.
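As a minimal sketch of this ideal case (a synthetic illustration assuming exactly co-located point pairs, which real scans do not provide), $g$ could be fit as a monotone regression of target intensities on source intensities:

\begin{verbatim}
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Synthetic matched pairs: H are "true" harmonized intensities and I the
# source intensities after an unknown monotone sensor response f
rng = np.random.default_rng(0)
H = rng.uniform(0.0, 1.0, 5000)
I = np.clip(H ** 0.6 + rng.normal(0.0, 0.01, H.size), 0.0, 1.0)

# Fit the inverse mapping g = f^{-1} with isotonic (monotone) regression
g = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True,
                       out_of_bounds="clip")
g.fit(I, H)

print(f"MAE of recovered mapping: {np.abs(g.predict(I) - H).mean():.4f}")
\end{verbatim}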
Having source-target point pairs with the same coordinates isolates the differences between sensors, and would make this task fairly trivial. However, it is rare for individual points from different scans to have exactly the same coordinates. To address this, our architecture leverages a standard PointNet~\cite{qi2016pointnet} to accurately interpolate the intensity $I_x$ at the target point location within the source scan. Additionally, our architecture allows for harmonizing multiple sources by encoding the point source into a lower-dimensional space. Specifically, we encode the point source into a 3-dimensional embedding with a dictionary size of 45. We learn this embedding lookup for each sensor. Our architecture then predicts the correct mapping $g$ for each source by passing the interpolated value, concatenated with the difference between the source and target sensor embeddings, into a multilayer perceptron (MLP), producing a harmonized value $H_x$. The loss for our model is formulated by the following equations, where $I_x$ is the output of the PointNet, $H_x$ is the output of the MLP, and $\rho$ is the loss function: \begin{flalign} \ell(\psi)_I &= \sum \rho(I_x, \hat{I_x}) \\ \ell(\phi)_H &= \sum \rho(H_x, \hat{H_x}) \\ L(\Theta)_T &= \ell(\psi)_I + \:\ell(\phi)_H \end{flalign} \subsection{Dataset} The NYU DublinCity LiDAR dataset~\cite{zolanvari2019dublincity} is a high-resolution collection of 41 LiDAR scans over an area of Dublin, Ireland. These scans are already well harmonized, and so they provide a convenient ground truth by which to evaluate our method. From DublinCity we form a new dataset to train our method. After selecting a target scan $P_t$, all other scans with sufficient overlap in the target scan area are collected. The average scan size in DublinCity is around 30 million points. We define sufficient overlap to be at least 200 thousand points in the overlap region. Neighborhoods are then built by picking target points from $P_t$ and taking the closest 150 neighbors within 1 meter in the source scan. Using a set of monotonically increasing response functions from Columbia's Database of Response Functions~\cite{CAVE_0039}, we randomly assign response functions as synthetic corruption to each source scan. Examples are created from these source neighborhood-target point pairs, and corruption is applied to the neighborhoods based on their source. The target point is the harmonization ground truth, $\hat{H_x}$. The corruption transformation for each example is also applied to a copy of the target point and saved as the interpolation ground truth for that example, $\hat{I_x}$. In addition to these examples, we also sample points outside the overlap region for each scan. From these points, we build more neighborhoods as above. Since we are no longer in the overlap region, there is no target harmonization point. We utilize the same embedding since we are mapping within the same scan. While this seems like it should be a trivial operation, we found that it improved overall performance. We suspect this is because it improves the network's ability to interpolate. We apply the pre-assigned corruption to these neighborhoods as well. We define this collection of examples, together with the collection of examples from within the overlap region, as the ``no shift'' dataset. DublinCity has a consistent intensity distribution for each scan. However, we wish to model regions that have physical brightness shifts, which are common in large LiDAR collections.
A second dataset is created as an exact copy of the first. However, before applying the corruption, a global shift transformation is applied along the x-axis of the DublinCity LiDAR dataset. This transformation lowers the intensities over the left half of the region, simulating an area with physical brightness differences. In our implementation, we use a sigmoid transformation to achieve this effect: \begin{flalign*} \label{sigmoid} I_{x_{\text{shift}}} = \frac{s\,I_x}{1+e^{-l(x-h)}}+v \end{flalign*} where $x$ is the normalized x-component of the point being shifted. This transformation produces a noticeable shift along the x-axis, with a transition zone that quickly ends the shift and returns to the original intensity distribution. To produce this significant shift in brightness, we use the values $h=.5$, $v=.3$, $l=100$, and $s=.5$. We define this new dataset as the ``with shift'' dataset. Stratified random sampling is used to balance the number of neighborhoods that come from the different source scans, as some scans have much larger overlap areas, and to balance the target intensities. In addition, we oversample the training dataset so that we have a balance of examples across the entire range of intensities for each scan. \subsection{Implementation Details} The PyTorch~\cite{pytorch} framework is used to implement and train our model. We use the Adam~\cite{adam} optimizer with a cyclical learning rate~\cite{cyclical}, with maximum learning rate $10^{-3}$ and minimum learning rate $10^{-7}$. The learning rate is stepped up from the minimum to the maximum and back down each epoch, and the maximum learning rate is lowered by 20 percent each epoch. We use a batch size of 50 and train for 40 epochs. Our MLP uses a hidden layer size of 100, with ReLU activations and a dropout rate of 0.3. \section{Evaluation} \label{sec:Evaluation} We evaluate our PointNet-interpolation method for LiDAR harmonization by comparing harmonized source scans to their original ground truth values. We report the mean absolute error (MAE) on the two datasets described in the previous section, and we compare our method to several baseline methods. \subsection{Quantitative Analysis} We evaluate our method as well as several baselines by comparing the harmonized output to the original ground truth values of each scan. Each baseline consists of one interpolation method paired with one harmonization method. The interpolation methods are linear, nearest-neighbor, and cubic-spline interpolation. For harmonization, least squares approximation (``Linear'') and the same MLP used in the PointNet-interpolation model are used. Finally, we compare our method with an entirely different approach: histogram matching. Histogram matching does not rely on the physical geometry of the point cloud but instead depends only on the entire distribution of intensities. Given a source and a reference distribution, histogram matching transforms the source distribution to match the reference. This technique is often used in image processing to balance contrast across an image (histogram equalization, for instance, maps the image's brightness distribution to uniform). We evaluate our model and the baselines by performing harmonization on the corrupted source scans. Since the scans are large, an evaluation tile is generated, chosen from an area outside the overlap region. We evaluated numerous neighborhood sizes and found that smaller neighborhood sizes were more effective.
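As a point of reference for the strongest non-geometric baseline, histogram matching between two intensity sets can be sketched with standard quantile mapping; this is an illustration, not the implementation used in our experiments.

\begin{verbatim}
import numpy as np

def histogram_match(source_I, reference_I):
    """Map source intensities so their empirical distribution matches
    the reference distribution. Point geometry is ignored entirely."""
    src_sorted = np.sort(source_I)
    ref_sorted = np.sort(reference_I)
    # Empirical CDF position of each source value, then inversion of
    # the reference CDF at those quantiles.
    q = np.searchsorted(src_sorted, source_I, side="right") / len(src_sorted)
    return np.interp(q, np.linspace(0, 1, len(ref_sorted)), ref_sorted)
\end{verbatim}

Because this mapping depends only on the global intensity distribution, it is pulled toward the target distribution wherever the scene's physical brightness changes, which is precisely the failure mode the ``with shift'' dataset is designed to expose.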
An overview of our results can be seen in \tblref{LiDARerror}. For all results, we use a neighborhood size of five. \subsection{Qualitative Analysis} As \tblref{LiDARerror} shows, our method substantially outperforms all baselines on the ``With Shift'' dataset. We visualize the difference in performance between histogram matching and our method in \figref{qualitative}. The target scan is shown in (e), and the shifted region is visible on the left. The source scan (b) comes from this region, but histogram matching is biased towards the target distribution, as seen in (d). Training requires a large number of samples from across the range of intensities. Since we depend only on data from the overlap region, it can be challenging to find intensities in certain ranges. This degrades our method's performance, which is noticeable in (c): our model was unable to find an adequate sample of source neighborhoods with target pairs in the middle and upper intensity ranges. \small{ \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{figures/qual.png} \caption{Qualitative results of our method compared to histogram matching. Color represents the intensity measurement. Our method is not affected by shifts in physical brightness distributions.} \label{fig:qualitative} \end{figure}} \small{ \begin{table}[t] \centering {\renewcommand{\arraystretch}{1.2}% \begin{tabular}{|cc| c c |} \hline \multicolumn{2}{|c|}{Method} & \multirow{2}{*}{No Shift} & \multirow{2}{*}{With Shift}\\ Interpolation & Harmonization & & \\\hline \multirow{2}{*}{Linear} & MLP & 0.077 & 0.073 \\ & Linear & 0.078 & 0.079 \\\hline \multirow{2}{*}{Cubic} & MLP & 0.073 & 0.075 \\ & Linear & 0.073 & 0.075 \\\hline \multirow{2}{*}{Nearest} & MLP & 0.079 & 0.086 \\ & Linear & 0.073 & 0.090 \\\hline PointNet & MLP & \textbf{0.052} & \textbf{0.040} \\\hline \multicolumn{2}{|c|}{Histogram Matching} & 0.053 & 0.138 \\\hline \end{tabular}} \caption{Quantitative harmonization results for different methods on DublinCity with and without a global shift. Results are given as mean absolute error (MAE).} \label{tab:LiDARerror} \end{table}} \section{Conclusion} \label{sec:Conclusion} We proposed an approach for LiDAR dataset harmonization that takes inspiration from approaches for image harmonization. The key challenge in this task is the lack of truly matched pairs, which we addressed with a point cloud neural network architecture. Our approach is able to incorporate a variety of input features and is more accurate and robust than the baseline approaches.
{ "timestamp": "2021-05-06T02:06:23", "yymm": "2105", "arxiv_id": "2105.01793", "language": "en", "url": "https://arxiv.org/abs/2105.01793" }
\section{{\textsc{DeepRT}\xspace} Workflow} \label{sec:workflow} This section presents an overview of the {\textsc{DeepRT}\xspace} system, shown in Figure \ref{fig:overview}. {\textsc{DeepRT}\xspace} is built on top of the scheduling scheme presented in the previous section and aims to provide a soft real time inference service for CNN models on GPU. First, {\textsc{DeepRT}\xspace} has a Performance Profiler, which performs offline performance analysis to obtain the execution times of batched job instances of different batch sizes and categories. When a new request arrives, {\textsc{DeepRT}\xspace} first routes it to a two-phase Admission Control Module, which tests whether the pending request is schedulable under the current system workload, based on the data provided by the Performance Profiler. If the new request together with the current workload is schedulable, {\textsc{DeepRT}\xspace} begins processing the request by passing its frames to a DisBatcher, where the frames are batched with frames from other requests and queued. Then, an EDF scheduler commands a Worker on GPU to start processing the job instances. We also have an Adaptation Module, which monitors the performance of the Worker, feeds online performance measurements back to the Performance Profiler, and adjusts the execution plan when necessary. \section{GPU Execution Characteristics} \label{sec:char} \begin{figure*} \centering \begin{subfigure}{.28\textwidth} \centering \vspace{-12pt} \includegraphics[width = 1\linewidth]{figures/sec2-same-latency-v3.png} \caption{Median execution time when executing multiple instances of the same model.} \label{fig:same-latency} \end{subfigure} \hspace{15pt} \begin{subfigure}{.28\textwidth} \centering \vspace{-12pt} \includegraphics[width = 1\linewidth]{figures/sec2-same-thru-v4.png} \caption{Overall throughput when executing multiple instances of the same model.} \label{fig:same-thru} \end{subfigure} \hspace{15pt} \begin{subfigure}{.28\textwidth} \centering \vspace{-12pt} \includegraphics[width = 1\linewidth]{figures/sec2-batch-latency-v2.png} \caption{Median execution time when processing data in batches.} \label{fig:batch-latency} \end{subfigure} \begin{subfigure}{.28\textwidth} \centering \vspace{-3pt} \includegraphics[width = 1\linewidth]{figures/sec2-batch-thru-v3.png} \caption{Overall throughput when processing data in batches.} \label{fig:batch-thru} \end{subfigure} \hspace{15pt} \begin{subfigure}{.28\textwidth} \centering \vspace{-3pt} \includegraphics[width = 1\linewidth]{figures/sec2-comp-latency-v3.png} \caption{Comparison of execution time between concurrent execution and batch processing.} \label{fig:comp-latency} \end{subfigure} \hspace{15pt} \begin{subfigure}{.28\textwidth} \centering \vspace{-3pt} \includegraphics[width = 1\linewidth]{figures/sec2-comp-thru-v3.png} \caption{Comparison of throughput between concurrent execution and batch processing.} \label{fig:comp-thru} \end{subfigure} \vspace{2pt} \caption{These figures show the execution time and throughput performance under different concurrency and batch size conditions. In (e) and (f), ``Cx By'' means running $x$ model instances concurrently, with each instance processing batches of size $y$.
``RN'' refers to ResNet, ``V'' refers to VGG, ``Inc.'' refers to Inception.}\label{fig:sec2} \vspace{-15pt} \end{figure*} While there has been a tremendous amount of research and many industrial solutions aimed at optimizing the processing latency and throughput of a single deep learning request on GPU, \emph{e.g.}, model pruning, fewer works focus on handling multiple concurrent requests, which is in fact an important scenario in edge computing. In order to design an edge based system that provides soft real time inference services for multiple clients, we first need to understand how the GPU behaves under multiple requests. In this section, we show the performance characteristics of processing multiple deep learning requests on GPU, which form the foundation of our soft real time GPU scheduler. \subsection{Experimental Settings for the Analyses} We mainly focus on the latency and throughput performance of CNN inference. When batching is enabled, the latency $l$ of performing CNN inference upon some input data consists of two parts: \begin{equation} \vspace{-2pt} l = l_q + l_e, \vspace{-2pt} \label{eq:latency} \end{equation} where $l_e$ is the real execution time of performing CNN inference upon the input data on GPU, and $l_q$ is the queuing time spent by input data that arrive early while waiting for input data that arrive late. $l_q$ hinges on the specific design of the inference system, so throughout this section we only measure $l_e$ and call it \emph{execution time} to avoid ambiguity. All performance measurements are carried out using a mature cloud and edge inference solution, Triton Inference Server \cite{triton}, developed by NVIDIA. The hardware setting is introduced in Section \ref{sec:impl}. We measure execution time and throughput as follows. We use Triton's \texttt{perf\_analyzer} to send requests to the Triton server and record the median execution time and throughput. Each time we send one or several requests to the server for inference, depending on the concurrency number, and each request may contain one image or a batch of multiple images; when an inference result is received from the server, we immediately send out another request. This process is repeated over a fixed time interval of $20$ seconds (see the sketch below). We use images sized $3\times 224\times 224$ (RGB channels $\times$ height $\times$ width), which is the default image size in Triton. We choose $6$ widely used deep learning models -- ResNet50, ResNet101, ResNet152, VGG16, VGG19, and Inception-v3. They belong to $3$ types: ResNet \cite{he2016deep}, VGGNet \cite{simonyan2014very}, and Inception-v3 \cite{szegedy2016rethinking}. In our setting, the ResNet and VGGNet models are built upon the ONNX framework \cite{bai2019}, and Inception-v3 is in the GraphDef format \cite{tensorflow}, which TensorFlow uses to represent models. Our setting covers different types of models, different model sizes within each type, and different frameworks, to show the universality of our conclusions.
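The closed-loop measurement methodology can be summarized by the following sketch. It is not the implementation of \texttt{perf\_analyzer}: the real tool reports server-side execution time, whereas this sketch measures client-observed latency, and \texttt{send\_fn} stands for a synchronous call to the inference server.

\begin{verbatim}
import statistics
import threading
import time

def measure(send_fn, duration_s=20.0, concurrency=1):
    """Keep `concurrency` requests in flight; whenever a response
    returns, immediately send another. Returns (throughput, median
    latency) over a fixed measurement interval."""
    latencies = []
    stop = time.monotonic() + duration_s
    def loop():
        while time.monotonic() < stop:
            t0 = time.monotonic()
            send_fn()                  # one request: an image or a batch
            latencies.append(time.monotonic() - t0)
    threads = [threading.Thread(target=loop) for _ in range(concurrency)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return len(latencies) / duration_s, statistics.median(latencies)
\end{verbatim}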
\vspace{-5pt} \subsection{Concurrent Execution of Models} We first study the performance characteristics of executing multiple models concurrently on GPU. To be more specific, multiple model instances are loaded on the GPU, and these instances process image frames received from clients at the same time; we study how concurrency affects the execution of each model. This analysis can be further divided into two parts: concurrent execution of multiple instances of the same model on GPU, and concurrent execution of different models. \textbf{Concurrent execution of the same model.} When different clients request to process their data with the same model, besides batching the data and processing them together in one model instance, another common approach taken by Triton and some other platforms is to replicate the model on GPU to obtain several model instances and to process each request with one of the model instances concurrently. In this part we study the performance of running different numbers of duplicate instances of the same model concurrently (see Figures \ref{fig:same-latency} and \ref{fig:same-thru}). For Inception we show its performance up to a concurrency number of $6$, as more concurrent instances overload the system. We have also recorded the variance across executions, but the variances are too small to display. From Figure \ref{fig:same-latency} we can see the linear relationship between execution time and the number of concurrent instances. We think this is due to how CUDA schedules multiple programs. When multiple programs run concurrently on GPU, CUDA schedules their warps (a warp contains multiple CUDA threads which can be executed in parallel) with a time sliced scheduler \cite{wang2017quality}. Warps from different contexts cannot be executed simultaneously, and some warps have to wait in a queue for their time slice share. When we execute multiple instances of the same model, since all instances have exactly the same warps but the warps are in different contexts, the execution time of each warp grows linearly with the concurrency number. Figure \ref{fig:same-thru} shows how throughput changes with the concurrency number. We can see that if we increase the concurrency number to $2$, there is a slight increase in throughput, but beyond that the throughput stays at a stable value. A crucial conclusion we draw in this part is that, although increasing concurrency to a certain level can improve inference throughput by a small margin, execution time increases linearly as the concurrency number grows. Let us look at an example to see how this observation affects scheduling algorithm design. Imagine there are two images to be processed with a specified model, say, ResNet50, and a single inference takes time $T$. Under concurrent execution in two model instances, both images finish at about $2T$ (average latency $2T$); processing the two images one by one inside a single model instance finishes them at $T$ and $2T$ (average latency $1.5T$). Sequential processing thus achieves a slightly lower throughput but reduces the average latency by $25\%$: the first image finishes early because its execution is not affected by the concurrent execution of the second image, so its latency is halved, while the latency of the second image remains approximately the same as when running concurrently. \begin{center} \begin{table} \footnotesize \centering \begin{tabular}{ c| C{16pt} C{18pt} C{19pt} C{19pt} C{18pt} C{18pt} C{18pt} } \hline \multicolumn{8}{c}{Execution time (ms)} \\ \hline & - & RN50 & RN101 & RN152 & V16 & V19 & Inc.
\\ \hline RN50 & 3.5 {\scriptsize(0.0)} & 6.4 {\scriptsize(0.0)} & 6.9 {\scriptsize(0.6)} & 7.1 {\scriptsize(0.4)} & 11.2 {\scriptsize(0.0)} & 11.6 {\scriptsize(0.6)} & 5.8 {\scriptsize(0.8)} \\ \hline RN101 & 6.4 {\scriptsize(0.0)} & 11.1 {\scriptsize(0.4)} & 11.8 {\scriptsize(0.0)} & 12.0 {\scriptsize(0.5)} & 18.0 {\scriptsize(0.1)} & 20.3 {\scriptsize(0.1)} & 9.0 {\scriptsize(1.3)} \\ \hline RN152 & 9.0 {\scriptsize(0.1)} & 15.5 {\scriptsize(0.3)} & 16.4 {\scriptsize(0.5)} & 16.8 {\scriptsize(0.1)} & 24.6 {\scriptsize(0.1)} & 27.6 {\scriptsize(0.1)} & 14.6 {\scriptsize(0.4)}\\ \hline V16 & 4.5 {\scriptsize(0.0)} & 5.9 {\scriptsize(0.2)} & 6.0 {\scriptsize(0.4)} & 6.3 {\scriptsize(0.2)} & 8.1 {\scriptsize(0.0)} & 8.8 {\scriptsize(0.0)} & 5.2 {\scriptsize(0.5)}\\ \hline V19 & 5.3 {\scriptsize(0.1)} & 6.9 {\scriptsize(0.4)} & 7.0 {\scriptsize(0.4)} & 7.4 {\scriptsize(0.2)} & 8.8 {\scriptsize(0.2)} & 9.6 {\scriptsize(0.0)} & 6.1 {\scriptsize(0.5)}\\ \hline Inc & 9.3 {\scriptsize(1.4)} & 25.3 {\scriptsize(0.6)} & 29.0 {\scriptsize(0.5)} & 28.9 {\scriptsize(0.4)} & 37.6 {\scriptsize(1.3)} & 42.9 {\scriptsize(0.8)} & 15.2 {\scriptsize(0.5)}\\ \hline \end{tabular} \vspace{6pt} \begin{tabular}{ c|C{18pt} C{20pt} C{24pt} C{24pt} C{18pt} C{18pt} C{18pt} } \hline \multicolumn{7}{c}{Throughput (img/s)} \\ \hline & - & RN50 & RN101 & RN152 & V16 & V19 & Inc. \\ \hline RN50 & 282.1 & 155.8 & 145.4 & 143.4 & 89.2 & 87.1 & 174.7 \\ \hline RN101 & 154.9 & 90.4 & 84.7 & 82.1 & 55.6 & 51.0 & 103.1 \\ \hline RN152 & 111.6 & 64.2 & 60.9 & 59.6 & 40.7 & 36.8 & 69.1\\ \hline V16 & 222.8 & 178.0 & 166.8 & 162.7 & 123.4 & 113.1 & 187.4\\ \hline V19 & 190.5 & 148.3 & 144.4 & 142.9 & 113.1 & 103.8 & 162.1\\ \hline Inc & 105.9 & 39.7 & 34.4 & 34.5 & 26.6 & 23.6 & 65.5\\ \hline \end{tabular} \vspace{5pt} \caption{Execution time and throughput when running two different model instances concurrently.} \label{tb:sec2-diff} \vspace{-15pt} \end{table} \end{center} \vspace{-15pt} \textbf{Concurrent execution of different models.} A common practice of Triton Inference server and other solutions when different clients send requests to process data with different models is to execute the models simultaneously. This part analyzes the performance characteristics when different models run concurrently on GPU. Among the $6$ models introduced previously, in each experimental run we choose two models and execute them concurrently\footnote{In this analysis, we study how the execution of a model is affected by another model instance. We don't analyze the situations where there are three or more instances since the current setting already shows the complicated interference between model executions.}. The other experimental setups are the same as in the previous part. In Table \ref{tb:sec2-diff}, we show the median execution time and throughput when executing a model specified by the leftmost column concurrently with another model specified by the uppermost row. As a comparison, we use the columns marked by ``-'' to show the performance when a model is executed alone. The data show that when a model M is executed along with different other models, its execution time and throughput performance vary greatly\footnote{There is an observable trend, however, that when a model is executed concurrently with models of the same type (\emph{e.g.}, a ResNet101 model and a ResNet152 model), the performance tend to be similar, but performance discrepancy still exists. This trend is a piece of evidence for our following hypothesis. 
the kernels of models belonging to the same type have similar sizes and thus cause similar amounts of interference.}. Explaining such an observation requires a detailed study of how internal scheduling happens at a low level on the GPU, which is very difficult since GPU drivers are not open source. Our hypothesis is that the different kernel sizes of different models cause different slowdowns (a kernel is a CUDA function). CUDA uses a time sliced scheduler to schedule programs from different contexts (corresponding to different models), and kernels are the smallest scheduling unit; they are scheduled non-preemptively on the GPU. Different models are composed of distinct numbers and types of kernels; when a model M is executed concurrently with a model N and the kernels of M are larger in size but fewer in number, model M will have a larger GPU time share and thus exhibit higher throughput and lower execution time. The observation we make from this part is that concurrently executing multiple instances of different models results in complex interference between the models. If we want to design a real time system, we need to figure out the interference within every subset of the admitted models in order to obtain their worst-case execution times. When there are $z$ models, there are ${z\choose 2}$ combinations of models to profile if we only execute two model instances concurrently. In fact, if we allow any number of concurrent model instances, we need to profile $\sum_{k=1}^z {z\choose k} = 2^z-1$ combinations of models, which is practically impossible. However, if we manage to execute the different model instances sequentially instead of concurrently, there is no interference between the sequential model executions. \vspace{-5pt} \subsection{Inference in Batches} A well-known conclusion about GPU-based deep learning model training and inference is that batch processing can boost throughput. In this part we show the performance characteristics of inference in batches. The experimental setup is the same as in the previous two parts, except that each time we execute only one model instance instead of two or more concurrent instances. The comparison between different batch sizes is shown in Figures \ref{fig:batch-latency} and \ref{fig:batch-thru}. As expected, batching increases throughput at the cost of higher execution time. \vspace{-5pt} \subsection{Concurrent Execution vs Batch Processing} Since concurrent execution of multiple instances and batching inputs into larger tensors can both process multiple requests simultaneously, in this part we compare how these two approaches affect system performance.
On each of the aforementioned $6$ models, we fix the number of concurrently processed requests (concurrent model instances $\times$ batch size) and observe how the execution time and throughput vary. The results are shown in Figures \ref{fig:comp-latency} and \ref{fig:comp-thru}, where we compare concurrency--batch combinations of $4\times 1$, $2\times 2$, and $1\times 4$. We can see that batch processing should be favored over concurrent model instance execution in terms of increasing system throughput and reducing request execution time. \subsection{Summary of Observations} In summary, we make three observations: \begin{itemize} \item Executing two or more models simultaneously does not notably improve system throughput, and it increases the latency of each request due to increased execution time. \item When executing multiple instances of different models on the same GPU, there exists interference between the concurrent executions, making it cumbersome to profile or estimate worst-case execution times and hindering the design of a real time system. \item Processing input data in batches increases system throughput, much more than concurrent model execution does, at the cost of increased request execution time. \end{itemize} Two key takeaways from these observations are: (1) Concurrently executing multiple model instances does not provide high value toward our goal of guaranteeing maximum latency while maintaining high throughput; instead, the interference it introduces makes designing a real time system difficult, so we execute model instances sequentially instead of concurrently to avoid interference. (2) Batching increases throughput, but we need to make sure that the increased latency (increased execution time plus the queuing time an image frame spends waiting for other frames belonging to the same batch) does not cause deadline misses. \section{Conclusion} \label{sec:conc} We present {\textsc{DeepRT}\xspace}, a soft real time scheduler for performing CNN inference on the edge. {\textsc{DeepRT}\xspace} consists of $5$ modules -- a Performance Profiler, an Admission Control Module, a DisBatcher, an Execution Worker, and an Adaptation Module. {\textsc{DeepRT}\xspace} uses time windows, whose lengths are determined by the requests' deadlines, to batch input data, and processes the batched data sequentially. Our evaluation results show that {\textsc{DeepRT}\xspace} is able to provide guarantees on inference latency while maintaining high inference throughput. \section{{{{\textsc{DeepRT}\xspace}}} System Design} \label{sec:design} In this section we present the design of the {\textsc{DeepRT}\xspace} system, shown in Figure \ref{fig:overview}. {\textsc{DeepRT}\xspace} is a scheduling system built on top of the scheduling scheme presented in the previous section, aiming at providing a soft real time inference service for CNN models on GPU. {\textsc{DeepRT}\xspace} consists of $5$ parts: a Performance Profiler, a two-phase Admission Control Module, a DisBatcher, an Execution Worker, and an Adaptation Module.
\vspace{-5pt} \subsection{Performance Profiler} \label{ssec:perf_prof} Our Performance Profiler works offline. For each deep learning model that we want to execute at the edge server, for each frame shape that {\textsc{DeepRT}\xspace} permits as a legitimate shape, and for different batch sizes, the Performance Profiler executes each batch of frames on GPU multiple times and records the execution time of each run. For each setting, we obtain a list of running times and take the worst-case running time\footnote{In practice we take the 99th percentile running time to filter out outliers.}. In this way, we create a lookup table containing the execution times of different sized batches of frames, with different shapes, processed by different deep learning models. Whenever a new request comes to the system, we look up this table, find the corresponding model and shape, and feed the results to the Admission Control Module to make admission decisions. \vspace{-5pt} \subsection{Admission Control Module} When a new request arrives at {\textsc{DeepRT}\xspace}, it is first routed to the Admission Control Module. Since we target building a soft real time system, {\textsc{DeepRT}\xspace} is selective about the requests it accepts, lest too many requests cause serious deadline misses. The Admission Control Module decides whether a pending request is admitted to {\textsc{DeepRT}\xspace}. As we discussed in Section \ref{sec:model}, {\textsc{DeepRT}\xspace} uses the DisBatcher to transform frames into task instances, which are intrinsically non-preemptive multiframe tasks. Therefore, performing admission control for the DisBatcher based {\textsc{DeepRT}\xspace} is equivalent to performing admission control for non-preemptive multiframe tasks. Some past works propose to perform admission control for EDF under the non-preemptive multiframe workload scenario using demand-bound functions \cite{baruah2010non}\cite{baruah2010preemptive}\cite{baruah2006schedulability}\cite{chakraborty2002schedulability}. A demand bound function represents the maximum execution demand of a task set in any time interval of a given length; this approach compares the demand bound with the available resources to decide whether a task set is schedulable. It suffers from pseudo-polynomial complexity and from the inaccuracy of the approximate algorithms used to calculate the demand bound functions. Another approach performs simulation based feasibility analysis \cite{moyo2010schedulability}. Basically, this approach represents time with a clock variable. When the clock reaches the arrival time of a job, the job is released to a deadline queue.
It simulates the execution of the job by simply incrementing the clock by the job's worst-case execution time. Then it compares the virtual completion time of the job, which is the current value of the clock variable, with the job's deadline to determine whether there is a deadline miss. Since different tasks may have different initial release times, this approach uses a tree to represent all possible execution sequences, making its time complexity non-polynomial. Since the goal of {\textsc{DeepRT}\xspace} is to provide real time inference while maintaining high throughput, the Admission Control Module should admit as many requests as possible, but not so many that they overload the system. Therefore, it has to perform an exact analysis of schedulability. {\textsc{DeepRT}\xspace} adopts the simulation based approach as it is an exact analysis, and the time complexity of this approach can be greatly reduced in {\textsc{DeepRT}\xspace} to linear with respect to the number of frames. The reason is that in {\textsc{DeepRT}\xspace} the requests for inference all have specific release times, because users' video frames occur at specific times instead of arbitrary times, and the release times are communicated to the Admission Control Module. Since we also know when every time window starts, we know the release times of all job instances. In this way, we know exactly when and in what order each job instance ``arrives'' at the GPU, so we can build an exact execution schedule in linear time instead of building a tree of execution sequences. Note that in building such an execution schedule, {\textsc{DeepRT}\xspace} requires the execution times of the different job instances obtained in Section \ref{ssec:perf_prof}; the assumption behind the exact schedulability analysis is thus accurate job instance execution time profiling. To further reduce the complexity of the Admission Control Module, before we run the simulation based schedulability test, we first use a utilization based test to filter out obviously infeasible requests. The goal of the utilization based Phase 1 test is to reject, as fast as possible, a pending request that would obviously cause deadline misses if accepted. The simulation based Phase 2 test is an exact test, which refines the results of the Phase 1 test and ultimately decides whether the new request is admitted. \textbf{Phase 1.} In Phase 1, {\textsc{DeepRT}\xspace} uses the utilization of task instances to reject a pending request that would obviously cause deadline violations. We calculate and evaluate the utilization of task instances because task instances are what the GPU actually executes. We define the average utilization of a task instance $s$ to be \vspace{-5pt} \begin{equation} U_s = \frac{\sum_{i=1}^{N_s}E_i}{N_sP_s}, \label{eq:ut} \vspace{-3pt} \end{equation} where $N_s$ denotes the total number of job instances in task instance $s$, and $P_s$ is the period of $s$, which equals the time window length used to generate $s$. Within each period there is at most one job instance. $E_i$ denotes the execution time of a job instance $i$ belonging to $s$. We further define the average utilization of a task instance set $\Sigma$ to be $U = \sum_{s\in \Sigma}U_s$. The task instance set comprises the task instances generated from all existing requests and the new pending request.
Naturally, in order not to overload the system, we should have $U \leq 1$, or $\sum_{s\in \Sigma}\frac{\sum_{i=1}^{N_s}E_i}{N_sP_s} \leq 1$. The complexity of calculating $\sum_{i=1}^{N_s}E_i$ is linear with respect to $N_s$, since we need to estimate the number of frames that fall inside each time window and look up the profiled execution time table to decide $E_i$. In order to boost the speed of Phase 1, we make an approximation when calculating $U_s$. As the complexity of calculating $\sum_{i=1}^{N_s}E_i$ comes from the fact that we do not know how many frames fall into each batch, we use an average number of frames over all time windows to estimate the exact number of frames in each time window. We denote the set of requests of the same category by $I^g$. The average number of frames inside a time window of a request $I^g_m\in I^g$ is $\frac{W_g}{p^g_m}$, where $W_g$ is the period of the corresponding time window, and $p^g_m$ is the period of request $I^g_m$. Therefore, for all the requests of category $g$, the average number of frames in one time window, denoted by $n_g$, can be represented by $ n_g = \lfloor\sum_{I^g_m\in I^g}\frac{W_g}{p^g_m}\rfloor. $ We look up the execution time $E^{n_g}$ of a batch of $n_g$ frames from the table and obtain an estimate of $U_s$: \vspace{-5pt} \begin{equation} \vspace{-5pt} \Tilde{U_s} = \frac{E^{n_g}}{P_s}. \vspace{-1pt} \end{equation} Note that by underestimating the total workload, Phase 1 may accept more pending requests than {\textsc{DeepRT}\xspace} can handle, but it does not reject feasible requests. The reasons for this underestimation are twofold. First, a total utilization no larger than 1 is the sufficient and necessary condition for a set of periodic preemptive tasks to be schedulable, but it is only a necessary condition for preemptive multiframe tasks \cite{mok1997multiframe}, let alone the non-preemptive multiframe tasks in our scenario. Second, we use an estimated average utilization of a task instance, which does not consider peak utilization, and we use a floor operator in calculating $n_g$ above. In this way, Phase 1 admission control gives prompt responses to some clients and reduces the workload of Phase 2, as sketched below.
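In code, the Phase 1 test reduces to a few lines. The sketch below uses hypothetical field names; the execution time lookup comes from the profiler table of Section \ref{ssec:perf_prof}.

\begin{verbatim}
def phase1_admits(categories):
    """Approximate utilization-based Phase 1 test. Each category g
    carries its time window period W (= P_s), the periods of its
    requests, and a lookup exec_time(n) giving the profiled execution
    time of a batch of n frames."""
    U = 0.0
    for cat in categories:
        n_g = int(sum(cat["W"] / p for p in cat["periods"]))  # floor
        if n_g == 0:
            continue                 # no frames expected in a window
        U += cat["exec_time"](n_g) / cat["W"]  # estimated U_s per category
    return U <= 1.0
\end{verbatim}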
\textbf{Phase 2.} While {\textsc{DeepRT}\xspace}'s Phase 1 test admits requests generously, the Phase 2 test performs an exact schedulability analysis to control admission to the system. The Phase 2 test consists of three sequential steps --- system state recording, pseudo job instance generation, and an EDF imitator algorithm. \begin{algorithm}[t] \SetAlgoLined \KwIn{Sorted deadline queue $Q$, list of job instances $L$ ordered by release times} \KwOut{Whether the jobs are schedulable with EDF} Initialization: $t \gets 0$\; \While{$Q$ not empty {\normalfont \textbf{or}} $L$ not empty}{ \eIf{$Q$ is empty}{ release job $i$ from $L$ to $Q$\; $t\gets R_i$\; }{pop job $k$ from $Q$\; $t\gets t+E_k$\; \If{$t>R_k+D_k$}{\Return not schedulable} \While{$L$ not empty {\normalfont \textbf{and}} $R_{L[1]}<t$}{ release job $i$ from $L$ to $Q$\;} }} \Return schedulable \label{al:edfimi} \caption{EDF imitator algorithm} \end{algorithm} In the first step, system state recording, the Admission Control Module captures the current system state, which includes four parts: (1) the number of frames of each category that have already arrived at {\textsc{DeepRT}\xspace} and are waiting to be batched by the DisBatcher, (2) the already batched job instances in the deadline queue waiting to be processed by the GPU, (3) the periods of all time windows, and (4) the period and number of remaining frames of each request. Essentially, these four parts describe the existing system workload and how image frames are batched into job instances. Once the Admission Control Module learns the current system state, it proceeds to the second step, where it simulates the DisBatcher process and generates pseudo job instances from all the requests, including the pending request being tested. It implements a virtual representation of the DisBatcher mechanism introduced in Section \ref{sec:model}, where both time and workload are simulated. For each request, the Admission Control Module estimates the arrival time of each frame using its period, and compares the arrival time with the start and end times of the time windows to see which window each frame falls into. In this way the Admission Control Module knows the number of frames to be batched in each window. The Admission Control Module looks up the execution time of each batched job instance from the execution time table to obtain a list of virtual job instances. This list contains the ``future'' job instances from all task instances, ordered by their release times, i.e., the times at which the job instances are pushed to the deadline queue. It is built by simultaneously running the aforementioned DisBatcher simulator over all categories of requests and each time appending the job instance with the smallest release time to the list. With the current system state captured in Step 1 and the future virtual job instances obtained in Step 2, the Admission Control Module moves to Step 3, where it uses an EDF imitator algorithm to determine whether these job instances can be scheduled by EDF. The EDF imitator algorithm is shown in Algorithm $1$. In this algorithm, $Q$ is a sorted deadline queue storing job instances, and $L$ is the list of job instances obtained in Step 2 which are released in the ``future''; $L$ is already ordered by release times. We use $R_i$ to denote the release time of a job $i$, $E_i$ its execution time, $D_i$ its relative deadline, and $L[1]$ the first element of list $L$. The complexity of this algorithm is $O(N)$, where $N$ is the total number of frames.
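For concreteness, Algorithm 1 transcribes into Python almost line for line; the tuple encoding of job instances below is ours.

\begin{verbatim}
import heapq

def edf_imitator(released, future):
    """released: job instances already in the deadline queue, as
    (release_time, exec_time, rel_deadline) tuples; future: the virtual
    job instances from Step 2, sorted by release time."""
    Q = [(r + d, r, e, d) for (r, e, d) in released]  # absolute deadlines
    heapq.heapify(Q)
    t, i = 0.0, 0
    while Q or i < len(future):
        if not Q:                    # idle: jump to the next release
            r, e, d = future[i]; i += 1
            heapq.heappush(Q, (r + d, r, e, d))
            t = r
        else:                        # run the earliest-deadline job
            dl, r, e, d = heapq.heappop(Q)
            t += e                   # non-preemptive execution
            if t > dl:               # completion time past R_k + D_k
                return False         # not schedulable
            while i < len(future) and future[i][0] < t:
                r2, e2, d2 = future[i]; i += 1
                heapq.heappush(Q, (r2 + d2, r2, e2, d2))
    return True                      # schedulable
\end{verbatim}

(With a binary heap each queue operation costs $O(\log N)$; the linear bound above counts the number of simulated events, as each job instance is released and popped exactly once.)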
\vspace{-12pt} \subsection{DisBatcher Module and Execution Worker} The DisBatcher is the core component of {\textsc{DeepRT}\xspace}. It is responsible for transforming frames received from admitted user requests into job instances that are suitable for execution on GPU; it is an implementation of the batching approach presented in Section \ref{sec:model}. Once a request is admitted, the Admission Control Module sends request-related metadata, including frame shape, requested model, period, and relative deadline, to the DisBatcher, and the client then sends frames directly to the DisBatcher. The DisBatcher keeps track of all admitted requests in {\textsc{DeepRT}\xspace}, together with their metadata. The DisBatcher manages a frame queue for each category of requests. These queues store frames which arrive during their time windows and wait to be aggregated into batches of frames, or \emph{tensors}. The DisBatcher uses recurrent countdown timers to implement time windows: it keeps a timer for each category of requests, with the timer's countdown interval equal to the time window length. When a timer expires, the DisBatcher batches all the frames in its corresponding queue into a tensor and immediately restarts the timer countdown. Whenever a new request is admitted, the DisBatcher updates the countdown interval of the corresponding timer if the new request's relative deadline is smaller than the current smallest deadline. The DisBatcher wraps a tensor inside a new job instance, whose relative deadline equals the corresponding time window length, and pushes this job instance onto an execution queue. A Worker on GPU subscribes to this execution queue and processes the job instances according to EDF. We implement the execution queue with a priority queue whose priority is determined by the job instances' absolute deadlines (release time + relative deadline). The Worker is the execution engine which actually processes the batched job instances with the requested model on GPU. It repeatedly consumes the execution queue mentioned above whenever there are job instances inside, processing the job instances one after another. The Worker is also responsible for monitoring the performance of execution. It detects and records deadline misses; it also detects whether the execution time of a job instance is larger than the profiled worst-case execution time and reports overruns to the Adaptation Module. We employ an optimization technique to further reduce frame latency. Occasionally the GPU is idle while there are frames which have already arrived but are waiting to be batched by the DisBatcher. When {\textsc{DeepRT}\xspace} detects this situation, it batches the frames before their timer expires and immediately sends the batched job instance to the GPU for processing. In this way, {\textsc{DeepRT}\xspace} reduces the latency of these frames and meanwhile increases the utilization of the GPU. A sketch of the DisBatcher mechanism follows.
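The following sketch captures the timer-and-queue mechanism for a single category; names and structure are illustrative, and synchronization around the shared execution queue is elided.

\begin{verbatim}
import heapq
import threading
import time
from itertools import count

_tie = count()   # tie-breaker so heap entries never compare batches

class CategoryBatcher:
    """One DisBatcher category: frames accumulate in a queue and a
    recurrent countdown timer flushes them into a batched job instance
    on the EDF-ordered execution queue."""
    def __init__(self, window_s, execution_queue):
        self.window_s = window_s            # window = relative deadline
        self.execution_queue = execution_queue
        self.frames = []
        self.lock = threading.Lock()

    def add_frame(self, frame):
        with self.lock:
            self.frames.append(frame)

    def run(self):
        while True:
            time.sleep(self.window_s)       # countdown timer
            with self.lock:
                batch, self.frames = self.frames, []
            if batch:
                release = time.monotonic()
                abs_deadline = release + self.window_s
                heapq.heappush(self.execution_queue,
                               (abs_deadline, next(_tie), batch))
\end{verbatim}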
\vspace{-5pt} \subsection{Adaptation Module} Since the hardware devices upon which {\textsc{DeepRT}\xspace} operates are commercial off-the-shelf computing devices without any hard deadline guarantees, {\textsc{DeepRT}\xspace} is only able to provide soft real time services. The response time of processing the same request under the same setting may vary between runs, occasionally leading to job overruns. As the GPU executes jobs non-preemptively and we use EDF, a single job overrun can cause unpredictable deadline misses for many other jobs in the system. When overruns happen, we need a method to ``punish'' the overrun job and, more importantly, to avoid deadline misses as much as possible. In {\textsc{DeepRT}\xspace}, each job instance category has a penalty initialized to $0$. When the Worker observes that the execution time of a job instance exceeds the profiled execution time, the Adaptation Module increases the penalty of the job instance category by the excess. Meanwhile, the Adaptation Module instructs the DisBatcher to decrease the shape (resolution) of tensors belonging to that category. These tensors are not batched with other tensors of the same reduced shape, so as not to disturb the job instances' priorities. The Worker records the execution time of each new job instance and subtracts the saved execution time from the penalty. When the penalty becomes non-positive, the Adaptation Module commands the DisBatcher to restore the original shape of the overrun job instance category and resets its penalty to $0$. \section{Evaluation} \label{sec:eval} We evaluate {\textsc{DeepRT}\xspace} by answering these questions: \begin{itemize} \item How well does {\textsc{DeepRT}\xspace} perform in terms of meeting deadline requirements compared to state-of-the-art latency-centric CNN inference scheduling approaches? \item Is {\textsc{DeepRT}\xspace} able to provide high throughput while guaranteeing soft real time services? \item How effective is the Admission Control Module in making schedulability decisions for new requests? What is the overhead of running this module? \item How robust is {\textsc{DeepRT}\xspace} against overruns, and how quickly can it bring the system back to normalcy? \end{itemize} \vspace{-10pt} \subsection{Experimental Dataset} We use the DAVIS dataset \cite{perazzi2016benchmark} as the workload. This dataset consists of video frames of 480p ($480\times 854$) and 1080p ($1080\times 1920$) resolution. We also downsample the video frames to various resolution formats in order to enrich the request data. On the desktop computer with an RTX 2080 card, we set the request video data to have 3 resolution formats: $1080\times 1920$, $480\times 854$, and $240\times 352$. On the Jetson TX2, as its computing power is lower, we use frames of $360\times 640$, $240\times 352$, and $224\times 224$. All videos are in color with 3 RGB channels. It is worth mentioning that {\textsc{DeepRT}\xspace} is agnostic to video content, since different videos exhibit the same characteristics when processed by classification models. \vspace{-5pt} \subsection{{\textsc{DeepRT}\xspace} vs. Existing Inference Systems } \label{ssec:vs-state} In this part, we compare the performance of {\textsc{DeepRT}\xspace} and state-of-the-art CNN inference systems with respect to their abilities to meet the latency requirements specified by user applications. \textbf{Baseline.} We compare {\textsc{DeepRT}\xspace} against the following approaches, which enable batching or adaptive batching to achieve low-latency high-throughput inference: \begin{itemize} \item AIMD is an implementation of the dynamic batching scheme used by Clipper \cite{crankshaw2017clipper} and MArk \cite{zhang2019mark}.
As the name suggests, when the inference latency does not exceed the latency objective, the batch size increases additively; if the latency objective is violated, the batch size is reduced multiplicatively. \item BATCH is the scheme used by Triton Inference Server \cite{triton}. It performs batching over request data with a fixed batch size determined empirically. \item BATCH-Delay is another scheme provided by Triton Inference Server. Apart from imposing a fixed batch size, BATCH-Delay also imposes a time limit on each model. This scheme batches input data either when the number of frames in a batch reaches the configured batch size or when the time limit is reached, whichever occurs first. \end{itemize} It is worth mentioning that all of these approaches process multiple requests concurrently under multitenancy. For BATCH and BATCH-Delay, we set the batch size as small as possible in order to reduce the latency of each job. However, a small batch size can sometimes drain GPU memory, since too many jobs are then executed concurrently on the GPU; if this happens, we increase the batch size until the memory problem is alleviated. \textbf{Request traces.} Each time we run {\textsc{DeepRT}\xspace} or one of the aforementioned inference systems, we feed the system multiple synthesized requests to perform inference on their data. The requests are independent of each other and arrive at the system one at a time. We use tweet traces from Twitter \cite{twitter} as a reference to determine the intervals between request arrivals. Each request contains a video with a fixed number of frames, and each frame is released periodically according to its frame rate. In order to demonstrate the universality of {\textsc{DeepRT}\xspace} across various kinds of applications, we randomly set the period and relative deadline of the frames in a video: the period and relative deadline are sampled independently from a Gamma distribution. We choose the Gamma distribution because its samples are positive and it is a common distribution in queuing theory. The shape parameter $k$ and the scale parameter $\theta$ of the Gamma distribution are set to $2$ and $5$, respectively; we then scale the samples to appropriate values (see the sketch below). For each request, we randomly choose an input shape and a model from the $6$ models listed in Section \ref{sec:char} plus MobileNet-v2 \cite{sandler2018mobilenetv2}, and we limit the number of categories of requests. \begin{center} \begin{table} \footnotesize \centering \begin{tabular}{ c|c|c|c } \hline \hline \multicolumn{4}{c}{Mean values of period and relative deadline (ms)} \\ \hline & Trace 1 & Trace 2 & Trace 3 \\ \hline Desktop & 50 & 150 & 250 \\ \hline Jetson TX2 & 300 & 450 & 600\\ \hline \end{tabular} \vspace{5pt} \caption{Mean values of frame period and relative deadline when generating request traces.} \label{tb:trace} \vspace{-20pt} \end{table} \end{center} \vspace{-15pt} On both the desktop computer and the Jetson TX2, we synthesize $3$ traces of requests using the aforementioned approach. Each trace contains $20$ to $30$ requests. The periods and relative deadlines of all requests in the same trace are obtained by scaling the randomly sampled values with the same factor. The mean values of the periods and relative deadlines of these traces are shown in Table \ref{tb:trace}.
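A sketch of how one request's timing parameters are drawn (our reading of the setup above; the mean values come from Table \ref{tb:trace}):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_period_deadline(mean_ms, k=2.0, theta=5.0):
    """Sample a frame period and a relative deadline independently from
    Gamma(k, theta), then rescale so the expected value matches the
    trace's target mean."""
    raw = rng.gamma(shape=k, scale=theta, size=2)  # E[raw] = k*theta = 10
    period_ms, deadline_ms = raw * (mean_ms / (k * theta))
    return period_ms, deadline_ms

# e.g., Trace 1 on the desktop computer uses a 50 ms mean.
period, deadline = sample_period_deadline(50)
\end{verbatim}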
In each run, we feed an inference system one trace of requests, wait until all frames are processed, and record frame deadline misses as an indication of the system's ability to perform real time inference. Since {\textsc{DeepRT}\xspace}'s Admission Control Module admits requests selectively while the other approaches have no admission control measures, to guarantee a fair comparison we record the requests accepted by {\textsc{DeepRT}\xspace} and feed these requests to the other systems. Moreover, we disable {\textsc{DeepRT}\xspace}'s Adaptation Module, which potentially reduces frame shapes. \begin{figure*} \vspace{-10pt} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width = 0.65\linewidth]{figures/eval-ddl-miss-server.png} \caption{Desktop computer.} \label{fig:miss-rate-server} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width = 0.65\linewidth]{figures/eval-ddl-miss-jetson.png} \caption{Jetson TX2.} \label{fig:miss-rate-jetson} \end{subfigure} \vspace{3pt} \caption{Comparison of deadline miss rates between {\textsc{DeepRT}\xspace} and state-of-the-art inference systems on 3 synthesized request traces on the desktop computer and Jetson TX2.} \label{fig:miss-rate} \vspace{-13pt} \end{figure*} \begin{figure*} \centering \begin{subfigure}{0.32\textwidth} \includegraphics[width = 0.85\linewidth]{figures/ddl-server-3.png} \caption{Trace 1 on desktop computer.} \label{fig:ddl-cdf-server-1} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width = 0.85\linewidth]{figures/ddl-server-2.png} \caption{Trace 2 on desktop computer.} \label{fig:ddl-cdf-server-2} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width = 0.85\linewidth]{figures/ddl-server-1.png} \caption{Trace 3 on desktop computer.} \label{fig:ddl-cdf-server-3} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width = 0.85\linewidth]{figures/ddl-jetson-1.png} \caption{Trace 1' on Jetson TX2.} \label{fig:ddl-cdf-jetson-1} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width = 0.85\linewidth]{figures/ddl-jetson-2.png} \caption{Trace 2' on Jetson TX2.} \label{fig:ddl-cdf-jetson-2} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width = 0.85\linewidth]{figures/ddl-jetson-3.png} \caption{Trace 3' on Jetson TX2.} \label{fig:ddl-cdf-jetson-3} \end{subfigure} \vspace{3pt} \caption{CDF of overdue time under the synthesized request traces of Figure \ref{fig:miss-rate} on the desktop computer and Jetson TX2.} \label{fig:ddl-cdf} \vspace{-15pt} \end{figure*} \textbf{Results on achieving real time inference.} We first show the deadline miss rates of {\textsc{DeepRT}\xspace} and the other inference systems in Figure \ref{fig:miss-rate}. To distinguish the traces on the Jetson TX2 from the traces on the desktop computer, we use ``Trace x$'$'' to indicate a Jetson TX2 trace. We can see that for all $6$ traces {\textsc{DeepRT}\xspace} shows the lowest deadline miss rates. Even when the mean values of the period and relative deadline are $50ms$, the deadline miss rate of {\textsc{DeepRT}\xspace} stays at $5\%$ while it handles $4$ concurrent requests and a total of $10$ requests. {\textsc{DeepRT}\xspace} exhibits the lowest deadline miss rates because its design focuses on meeting requests' deadlines and takes into account the special GPU characteristics discussed in Section \ref{sec:char}.
The results demonstrate {\textsc{DeepRT}\xspace}'s ability to perform soft real time inference services for multiple requests. Note that {\textsc{DeepRT}\xspace} does not completely avoid deadline misses, due to job instance overruns. Interestingly, AIMD (the scheme of Clipper) shows the highest deadline miss rates in all runs. We think the reason is that the AIMD based adaptive batching scheme assumes abundant resources and is more suitable for cloud-scale inference. \begin{figure} \centering \begin{subfigure}{0.23\textwidth} \includegraphics[width = 1.0\linewidth]{figures/eval-memory-server.png} \caption{Desktop computer.} \label{fig:eval-memory-server} \end{subfigure} \begin{subfigure}{0.23\textwidth} \includegraphics[width = 1.0\linewidth]{figures/eval-memory-jetson.png} \caption{Jetson TX2.} \label{fig:eval-memory-jetson} \end{subfigure} \vspace{3pt} \caption{Peak memory usage of {\textsc{DeepRT}\xspace} vs. state-of-the-art approaches under the request traces of Figure \ref{fig:miss-rate}.} \label{fig:eval-memory} \vspace{-15pt} \end{figure} \begin{figure*} \vspace{-10pt} \centering \begin{subfigure}{0.24\textwidth} \includegraphics[width = 0.95\linewidth]{figures/eval-req-server.png} \caption{Number of admitted requests under 3 request traces on desktop computer.} \label{fig:req-server} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width = 0.95\linewidth]{figures/eval-thru-server.png} \caption{Throughput under 3 request traces on desktop computer.} \label{fig:thru-server} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width = 0.95\linewidth]{figures/eval-req-jetson.png} \caption{Number of admitted requests under 3 request traces on Jetson TX2.} \label{fig:req-jetson} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width = 0.95\linewidth]{figures/eval-thru-jetson.png} \caption{Throughput under 3 request traces on Jetson TX2.} \label{fig:thru-jetson} \end{subfigure} \vspace{3pt} \caption{Throughput comparison between {\textsc{DeepRT}\xspace} and Sequential EDF on desktop computer and Jetson TX2.} \label{fig:throughput} \vspace{-15pt} \end{figure*} We also analyze the frames that finish processing after their deadlines. In soft real time systems, jobs whose deadlines are missed still provide some value if they are completed as early as possible. We examine the distribution of overdue times for each inference approach and show the results as CDFs in Figure \ref{fig:ddl-cdf}. We can see that {\textsc{DeepRT}\xspace} performs the best in terms of deadline overdue time, due to its use of the EDF algorithm. \textbf{Peak GPU memory usage.} We also measure the peak GPU memory usage of the $4$ approaches under the different traces. Peak GPU memory usage is an important metric, since computations on GPU are memory demanding: an inference system consuming too much GPU memory can exhaust it, leading to memory allocation errors. On the desktop computer we use \texttt{nvidia-smi} to measure GPU memory usage, and on the Jetson TX2 we use \texttt{tegrastats}. The results are shown in Figure \ref{fig:eval-memory}. \vspace{-10pt} \subsection{Throughput Performance of {\textsc{DeepRT}\xspace}} In this part we evaluate the throughput performance of {\textsc{DeepRT}\xspace}. Since there is no existing real time scheduler for CNN inference on the edge, we implement a real time scheduler, Sequential EDF (SEDF), and compare the throughput performance of {\textsc{DeepRT}\xspace} against SEDF.
We would like to examine whether {\textsc{DeepRT}\xspace} can offer higher inference throughput than SEDF while meeting latency requirements. As its name suggests, SEDF processes input frames from multiple requests one by one according to the frames' deadlines. It neither executes multiple models concurrently nor performs batching. We also implement an EDF imitator as the admission control policy of SEDF. It is worth mentioning that we do not compare {\textsc{DeepRT}\xspace} with the baseline approaches of Section \ref{ssec:vs-state} here, since those approaches are not real time schedulers: they provide no latency guarantees and have no admission control modules to reject requests that would cause deadline misses. We therefore compare {\textsc{DeepRT}\xspace} with the soft real time scheduler SEDF to guarantee a fair comparison. We would like to see (1) how many concurrent requests each inference system can handle and (2) what throughput each system achieves. We use the same method as in Subsection \ref{ssec:vs-state} to generate request traces, except that we increase the frequency of request arrivals to saturate the inference systems. Another difference is that we feed the two systems the same pending requests but let each of them determine which requests to admit. The number of concurrent requests each approach can handle and the average throughput each approach achieves are shown in Figure \ref{fig:throughput}. We observe that on all the traces {\textsc{DeepRT}\xspace} performs better than or as well as SEDF, because the novel batching approach of {\textsc{DeepRT}\xspace} leverages the batching capability of the GPU and achieves high throughput while providing latency guarantees. On the third traces of both devices, {\textsc{DeepRT}\xspace} largely outperforms SEDF, while the differences on the first traces are smaller. As the mean relative deadlines of the requests in the third traces are larger, more frames can be batched thanks to our DisBatcher design, boosting the throughput of {\textsc{DeepRT}\xspace}. On the first traces, fewer frames are batched, so {\textsc{DeepRT}\xspace}'s gain over SEDF is smaller. \vspace{-5pt} \subsection{Evaluating the Admission Control Module} In this part we evaluate the performance of the Admission Control Module. Specifically, we would like to examine (1) whether the Admission Control Module can accurately model the system to make admission decisions, and (2) what the running time of the Admission Control Module is. \textbf{Accuracy of the EDF imitator.} Naturally, we would like the Admission Control Module to admit as many requests as possible to increase throughput while not violating any deadline requirements. That is why we employ an EDF imitator as an exact analysis tool to determine schedulability. We evaluate how accurate the EDF imitator is in estimating future job instance executions. We generate $3$ traces of requests with the method of Subsection \ref{ssec:vs-state}; the only difference lies in the mean values of the periods and relative deadlines. For the first trace, we set the mean period to $100ms$ and the mean relative deadline to $300ms$. For the second trace, we set both values to $200ms$, and for the third trace we set them to $300ms$ and $100ms$, respectively. We use these configurations because we would like to examine the EDF imitator under various batch sizes and various deadlines.
We only perform this experiment on the desktop computer, as the effectiveness of the EDF imitator is the same across all platforms as long as the profiled worst-case job instance execution times are accurate. \begin{figure} \centering \begin{minipage}{.485\linewidth} \centering \includegraphics[width = 1.\linewidth]{figures/eval-accuracy.png} \caption{CDF of the differences between latencies predicted by the EDF imitator and latencies measured in real executions.} \label{fig:eval-accuracy} \end{minipage}% \hspace{4pt} \begin{minipage}{.485\linewidth} \centering \includegraphics[width = 1.\linewidth]{figures/eval-running-time.png} \caption{Median running time of the Admission Control Module when the requests contain different numbers of frames.} \label{fig:eval-run-time} \end{minipage}% \vspace{-20pt} \end{figure} We use the difference between the latency of frame inference estimated by the EDF imitator and the actual frame latency measured during real executions as the metric of accuracy, since the major goal of {\textsc{DeepRT}\xspace} is to provide latency guarantees for users. The result is shown in Figure \ref{fig:eval-accuracy}. We can see that the difference is smallest on the trace with the smallest deadline, and \emph{vice versa}. On the first trace, which corresponds to the largest relative deadline ($300ms$), the difference can be as large as $250ms$. We find that the large latency differences occur on later frames in a request's frame sequence. When the EDF imitator runs on a set of requests, it simulates all frames in those requests, so latency estimation errors accumulate over the frame sequences. But large latency differences are rare and still smaller than the corresponding relative deadlines ($250ms<300ms$). Overall, the EDF imitator is sufficiently accurate to predict whether the deadline of a frame will be missed. \textbf{Admission Control Module running time.} As mentioned in Section \ref{sec:design}, the complexity of the EDF imitator is linear in the number of frames in all requests. We evaluate the running time of the Admission Control Module on both devices under different numbers of frames. Specifically, we generate $4$ request traces for both devices using the previous method; the requests in the $4$ traces contain videos with $10^2$, $10^3$, $10^4$, and $10^5$ frames, respectively. The running times (see Figure \ref{fig:eval-run-time}) are all below $1$ second except when Jetson TX2 processes requests with $10^5$ frames, where the running time is $5.9$ seconds. If we consider the normal frame rate of a video to be $30fps$, $10^5$ frames correspond to a video of approximately one hour. In fact, if {\textsc{DeepRT}\xspace} is used to perform inference on long videos, we can calculate the least common multiple of the requests' periods and run the EDF imitator over twice that time period, significantly reducing the running time.
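To make the last point concrete, the following minimal Python sketch (ours, purely illustrative; the function names are hypothetical and not part of {\textsc{DeepRT}\xspace}'s actual code) bounds the EDF imitator's simulation horizon by twice the hyperperiod, i.e., twice the least common multiple of the request periods, instead of simulating the full frame sequences.
\begin{verbatim}
from math import gcd
from functools import reduce

def lcm(a, b):
    # least common multiple of two positive integers
    return a * b // gcd(a, b)

def simulation_horizon_ms(periods_ms):
    """Bound the EDF-imitator simulation horizon by twice the
    hyperperiod (the LCM of all request periods, in ms)."""
    hyperperiod = reduce(lcm, periods_ms)
    return 2 * hyperperiod

# Example: requests with periods 100 ms, 200 ms, and 300 ms have a
# hyperperiod of 600 ms, so simulating 1200 ms suffices instead of
# replaying, say, 10^5 frames per request.
print(simulation_horizon_ms([100, 200, 300]))  # -> 1200
\end{verbatim}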
\vspace{-5pt} \subsection{Adapting to Overruns} We evaluate how quickly {\textsc{DeepRT}\xspace} reacts to job instance overruns and eliminates the deadline misses they cause. We generate request traces with periods and relative deadlines of $200ms$ for the desktop computer and $600ms$ for Jetson TX2. In each run we manually inject a short waiting time into $5$ consecutive job instances and measure the number of deadline misses caused by the injected waiting time. The faster a method reacts to overruns, the fewer deadline misses it incurs. We run this experiment on both the desktop computer and Jetson TX2, with the lengths of the waiting times set to $100ms$, $200ms$, $500ms$, and $1000ms$. We compare the number of deadline misses with the Adaptation Module enabled and disabled in Figure \ref{fig:eval-adapt}. We can see that even without the Adaptation Module, {\textsc{DeepRT}\xspace} is still able to bring the system back to normalcy after experiencing some deadline misses. The reason is that {\textsc{DeepRT}\xspace}, being a real time system, does not drive the GPU to $100\%$ utilization: the idle time between job instance executions acts as a buffer that absorbs the overruns. The Adaptation Module enhances this ability to absorb overruns. \section{Implementation} \label{sec:impl} This section presents the implementation details of {\textsc{DeepRT}\xspace}. All the scheduling actions of {\textsc{DeepRT}\xspace} occur on the CPU except the executions of CNN inference job instances. To make sure that {\textsc{DeepRT}\xspace} gives prompt scheduling decisions, we assign DisBatcher the highest Linux user-space priority by setting its \texttt{nice} value. We implement {\textsc{DeepRT}\xspace} in Python with the deep learning framework PyTorch; the mechanism of {\textsc{DeepRT}\xspace}, however, is completely framework agnostic. For communication between {\textsc{DeepRT}\xspace} modules, we use the lightweight messaging library ZeroMQ. We use two edge devices equipped with GPUs to run and evaluate \textsc{DeepRT}\xspace. The first device has a GeForce RTX 2080 graphics card with $2944$ CUDA cores, an Intel i7-9700K 8-core CPU, and $64$GB of memory. The second device is an NVIDIA Jetson TX2 Developer Kit, a computer specially designed to provide deep learning inference services on the edge; it is equipped with a GPU with $256$ CUDA cores and a 6-core CPU with $8$GB of memory. \section{Introduction} \label{sec:introduction} Ubiquitous smartphones and Internet of Things (IoT) platforms, such as smart home solutions \cite{smartthings} and modern scientific experiment frameworks \cite{nguyen20174ceed}, produce a tremendous amount of data every day, especially video data.
Meanwhile, we are witnessing the rapid development of deep neural networks, especially convolutional neural networks (CNNs), and of hardware accelerators that empower fast and large-scale neural network training and inference. These two trends have given rise to a wide variety of computer vision applications, spanning self-driving cars \cite{bojarski2016end}, mobile augmented reality \cite{jain2015overlay}\cite{liu2018edge}, mobile adaptive video streaming \cite{yeo2018neural}, and vehicle re-identification in urban surveillance \cite{liu2016deep}, to name a few. These applications benefit from CNNs' excellent predictive performance by incorporating inference with trained CNN models into their system designs. However, using CNN models to build vision applications does not come for free. Smartphones and IoT devices usually do not have sufficient computing or memory resources to support prompt CNN inference in place. While offloading CNN computations to cloud servers is an option \cite{crankshaw2017clipper}\cite{gujarati2017swayam}, many works propose to perform deep learning inference on edge servers \cite{fang2019serving}\cite{hsu2019couper}\cite{zhou2019adaptive}. The reasons are twofold: (1) Some data that need to be processed by CNN models, \emph{e.g.}, images taken by smartphone cameras, contain private or proprietary information, and users are reluctant to upload them to a public cloud server for processing. (2) Many of the aforementioned applications are sensitive to latency and require real time CNN inference. For example, an interactive application typically requires a response time of less than $100$ milliseconds \cite{miller1968response}, but the wide area network links between users and cloud servers exhibit the notorious issue of unbounded delay and jitter, which can greatly undermine user experience. In this work, we limit our scope to handling these latency sensitive applications on the edge. Specifically, we focus on soft real time inference requests that desire real time responses but can tolerate occasional deadline misses, such as mobile augmented reality, mobile neural adaptive video streaming, and path planning in self-driving \cite{liu2020removing}. In order to guarantee real time services, we could certainly dedicate the deep learning accelerator on an edge server to a single application, but that wastes the precious accelerator resources. Supporting multitenancy and sharing the computation resources among the multiple applications that have access to the edge decreases the cost for each client, but it inevitably affects the latency of each application, as edge servers do not have unlimited resources due to space and budget limits. Faced with this trade-off between reducing application latency and increasing system throughput, a number of research approaches have been proposed. In \cite{crankshaw2017clipper}, the authors propose a cloud based prediction serving system that employs adaptive batching to maximize throughput while trying to reach a query latency target. However, cloud based solutions cannot be directly migrated to the edge paradigm, for two reasons. First, cloud based solutions assume that resources are abundant. Second, cloud servers aim to provide services for a large number of users, so they usually make throughput their primary goal, whereas clients seeking edge based services care most about low latency guarantees.
In \cite{fang2017qos}, an adaptive batching algorithm is proposed to increase GPU utilization, which both increases system throughput and decreases the average latency of tasks, but it does not target soft real time inference requests. DeepQuery \cite{fang2019serving} considers real time tasks, but its major focus is to optimize the non-real time tasks while totally isolating the real time ones. As far as we know, none of the existing works propose a soft real time CNN inference scheduler for GPU on the edge. We would like to ask this question: is it possible to provide soft real time inference services to clients of an edge server while preserving high throughput? \begin{figure} \centering \includegraphics[width=1\linewidth]{figures/overview-v3.pdf} \caption{DeepRT system overview.} \label{fig:overview} \vspace{-5mm} \end{figure} Designing such a system is challenging. First, in order to guarantee soft real time services while maintaining throughput performance, it is necessary to inspect how different scheduling factors, including the number of concurrent models and batching, affect the latency and throughput of the requests. Second, given the complicated relationship between scheduling factors and system performance, and the fact that GPU operations are non-preemptive \cite{elliott2013gpusync}, how should we schedule the processing of requests to meet their deadlines while maintaining high throughput? Specifically, if we employ adaptive batching that batches data from different requests to increase throughput, we need to be aware that data from different requests arrive independently, and that data arriving early have to be queued to wait for data from other requests in the same batch. How do we determine batch sizes so that the deadlines of all requests are met? And once we obtain the batches, how do we schedule their non-preemptive execution on the GPU? Third, after determining the scheduling algorithm, we need to design an appropriate admission control test so as not to overload the system, and if some tasks do not proceed as expected and cause deadline misses, we need a mechanism to tackle the overruns and resolve the deadline misses as fast as possible. To handle these challenges, we have conducted a comprehensive analysis of how inference performs under different factors, and we propose a GPU inference scheduler, \textsc{DeepRT}\xspace, which is able to provide soft real time inference services for multiple requests made from edge clients. Each request is to perform inference with a client-specified CNN model on a video consisting of a series of video frames that arrive at the system periodically. Specifically, we first use an edge inference platform developed by NVIDIA, Triton Inference Server \cite{triton}, to study the latency and throughput characteristics when multiple CNN model instances are executed and when request data are batched into different batch sizes. We have two key findings: (1) Executing multiple model instances concurrently on the GPU does not significantly improve throughput; on the contrary, it greatly increases latency and, under some circumstances, makes it very difficult to estimate a worst-case latency. (2) Batching increases system throughput more effectively than concurrent model instance execution, but it also sacrifices inference latency.
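As a concrete illustration of the second finding, the following PyTorch micro-benchmark is a simplified sketch of the kind of profiling involved (ours, for illustration only; it is not the Triton-based profiling code, and the model choice is arbitrary). It measures per-batch latency and throughput of a stand-in CNN at several batch sizes; throughput rises with batch size, but so does the latency of each batched inference.
\begin{verbatim}
import time
import torch
import torchvision.models as models

device = torch.device("cuda")
model = models.resnet18().eval().to(device)

for batch_size in (1, 2, 4, 8, 16):
    x = torch.randn(batch_size, 3, 224, 224, device=device)
    with torch.no_grad():
        for _ in range(5):   # warm-up iterations
            model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    n_iters = 20
    with torch.no_grad():
        for _ in range(n_iters):
            model(x)
    torch.cuda.synchronize()  # wait for all GPU work to finish
    latency = (time.perf_counter() - start) / n_iters
    print(f"batch {batch_size:2d}: {latency * 1e3:6.1f} ms/batch, "
          f"{batch_size / latency:7.1f} frames/s")
\end{verbatim}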
In light of these observations, we design the {\textsc{DeepRT}\xspace} system (see Figure \ref{fig:overview}). Within {\textsc{DeepRT}\xspace} we put forward a batching mechanism, DisBatcher, which batches as much data from different requests into one batch as possible, and we propose to execute the inference CNN models over these batched data sequentially instead of concurrently. The order of execution is determined by the Earliest Deadline First (EDF) algorithm, since EDF is optimal in non-idling non-preemptive scheduling. We also propose a two-phase Admission Control Module, which determines whether new requests should be admitted, and an Adaptation Module, which makes adaptation decisions in case of job overruns. We show that DisBatcher guarantees real time processing of all requests admitted by the Admission Control Module. Our {\textsc{DeepRT}\xspace} design also makes it easy to support non-real time requests, by batching them with DisBatcher and assigning the batched data from non-real time requests a low priority. Overall, this paper makes the following contributions: \begin{itemize} \item We perform a systematic analysis of the latency and throughput performance of CNN inference under multitenancy. \item We propose a complete set of solutions for a soft real time CNN inference scheduler for GPU on the edge, including (1) a Performance Profiler, (2) a two-phase Admission Control Module, (3) a DisBatcher mechanism which batches image frames from multiple requests, (4) EDF scheduling of the batched jobs in sequence, and (5) an Adaptation Module. As far as we know, we are the first to design a soft real time CNN inference system over GPU on the edge. \item We conduct comprehensive experiments to validate the performance of this system. \end{itemize} In this work we assume that each request is a video. With minor modifications our system can also support the processing of other types of data on GPU, \emph{e.g.}, IoT sensory data. It is also worth mentioning that we target building a GPU scheduler for edge servers whose GPUs use the CUDA framework, as this is the most common hardware setting in edge based inference systems. Support for other accelerators such as FPGA and TPU and for other frameworks such as OpenGL is left to future work. \section{Acknowledgments} \begin{acks} This work is supported by the National Science Foundation under grant NSF 1827126. \end{acks} \bibliographystyle{acm} \balance \section{{\textsc{DeepRT}\xspace} Scheduling Scheme} \label{sec:model} \begin{figure*} \centering \includegraphics[width=1\linewidth]{figures/timewindow-v2.4.pdf} \vspace{-15pt} \caption{An illustration of our batching mechanism.
We show how two requests of category \emph{a} are batched into a task instance. The dashed lines represent the arrival times of frames, and the arrows pointing upward represent their deadlines. Frames of category \emph{a} are batched into a job instance if they arrive during the same time window. The job instance is then pushed to a deadline queue for processing.} \label{fig:tw} \vspace{-5mm} \end{figure*} Based upon the observations made in the previous section, we propose the soft real time scheduling scheme of {\textsc{DeepRT}\xspace}, which is proven to meet the deadlines of admitted requests while being able to process input data in batches to boost inference throughput. Specifically, we propose a batching approach called DisBatcher, which batches input data from multiple requests according to their release times and deadlines and pushes the batched data to a deadline queue to be processed by the GPU. We propose to schedule the execution of the batches with EDF. We then prove that DisBatcher ensures that all deadlines of the input data are met if the batches can be scheduled with EDF. \subsection{System Model} \textbf{Data Model.} This paper assumes that the GPU resources on the edge are shared by multiple users. Each user sends a request to {\textsc{DeepRT}\xspace}, and each request corresponds to a video, the \emph{frames} of which need to be processed by a user-specified CNN model. Different users may request different CNN models. When a request comes, if {\textsc{DeepRT}\xspace} confirms that the new request and all the existing requests in the system can be scheduled, {\textsc{DeepRT}\xspace} will admit the new request\footnote{In this paper, we use admitted requests and requests interchangeably, and we call requests that wait to be verified pending requests.}. The video frames of an admitted request arrive at {\textsc{DeepRT}\xspace} in an online manner, and the interval between two frames is determined by a frame rate, or \emph{period}\footnote{We only consider the scenario where all frames are raw, uncompressed video frames. Decompression of compressed frames should be coordinated by a CPU scheduler and is outside the scope of {\textsc{DeepRT}\xspace}. Also, we assume that the frames in a video arrive one by one, but {\textsc{DeepRT}\xspace} can also handle the situation where a chunk of frames arrives at a time, as used by video coding techniques like H.264.}. Each request also has a \emph{relative deadline}, indicating the desired maximum latency of performing inference on each of the video frames. The relative deadline of a request is not necessarily equal to its period. Different videos may have different input \emph{shapes}, which equal the number of channels (most videos use RGB channels, so this number is $3$) $\times$ frame height $\times$ frame width. \textbf{Execution Model.} The two key takeaways from Section \ref{sec:char} drive us to avoid concurrent model execution and to leverage batching to increase system throughput. We propose the following execution model for {\textsc{DeepRT}\xspace}. When a frame from a request arrives, {\textsc{DeepRT}\xspace} does not process this frame immediately. Instead, {\textsc{DeepRT}\xspace} waits for frames of the same shape requiring the same CNN model, batches these frames together, and processes the batched data on the GPU.
In fact, all frames that are of the same shape and require the same model can be batched together regardless of which request they belong to, while frames with different shapes or requiring different models should not be batched, because the GPU cannot execute the same kernels for such frames in parallel. Since a batch of frames from multiple requests is what the GPU actually executes, we call the job of processing a batch of frames a \emph{job instance}. {\textsc{DeepRT}\xspace} processes multiple job instances \emph{one at a time} instead of concurrently. In {\textsc{DeepRT}\xspace}, frames with the same shape requiring the same model are said to be of the same \emph{category}. Likewise, requests containing the same category of frames are said to be of the same category. Executing job instances sequentially instead of concurrently turns Equation \ref{eq:latency} into $l=l_{q_b}+l_{q_j}+l_e$, where $l_{q_b}$ denotes the time a frame spends queuing for other frames in the same batch, and $l_{q_j}$ denotes the queuing time of a job instance. In order to provide a guarantee on $l$, two questions need to be answered: (a) How many frames should be in each batched job instance, i.e., how does {\textsc{DeepRT}\xspace} determine which frames should be put into the same batch? (b) GPU operations are non-preemptive, in the sense that operations already launched on the GPU cannot be preempted; how should the non-preemptive batched job instances be scheduled? \vspace{-10pt} \subsection{{\textsc{DeepRT}\xspace} Batching Approach} This subsection answers the first question. There are already research efforts \cite{crankshaw2017clipper}\cite{fang2019serving} and industrial solutions \cite{triton} which concurrently process batches containing fixed or adaptive numbers of images from multiple requests. While these approaches are capable of increasing inference throughput and reducing latency, they do not support real time services, since they lack a deadline-centric soft real time design. A real time system with batching enabled has to ensure that the time an early-arriving frame spends waiting for later frames in the same batch does not cause the early frame to miss its deadline. In order to guarantee real time processing of frames, instead of answering the question ``what is the right number of frames inside a batch'', we try to answer another question: ``when should a frame be put inside a batch''. We propose the DisBatcher approach to answer this question. DisBatcher divides time into contiguous time intervals of the same length, called \emph{time windows}. The end time of a window coincides with the start time of the next time window. Frames that fall into the same time window are batched together into one job instance regardless of which requests the frames belong to, as long as the frames are of the same category, and the batching action happens at the end of a time window. DisBatcher sets the relative deadline of a job instance to be the length of one time window, which means that a job instance generated at the end of one time window should be completed before the end of the next time window. For simplicity, we call the end time of a time window, which is also the start time of the next time window, a time window \emph{joint}. As we previously mentioned, frames of different categories should not be batched together; DisBatcher therefore creates independent time windows for each category of frames.
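The following Python sketch (ours; the class and method names are illustrative rather than {\textsc{DeepRT}\xspace}'s actual API) captures the time-window logic just described: frames of one category that arrive within the same window are merged into one job instance at the window joint, and that job instance's deadline is the next joint.
\begin{verbatim}
class TimeWindowBatcher:
    """Minimal sketch of per-category time-window batching.
    `window` is the time-window length W_g for this category,
    set to half the smallest relative deadline among its requests."""

    def __init__(self, window):
        self.window = window
        self.pending = []  # frames that arrived in the current window

    def add_frame(self, frame):
        self.pending.append(frame)

    def close_window(self, joint_time):
        """Called at every time-window joint: merge all pending frames
        into one job instance whose deadline is the next joint."""
        if not self.pending:
            return None  # this window yields an empty job instance
        job = {"frames": self.pending,
               "release": joint_time,
               "deadline": joint_time + self.window}
        self.pending = []
        return job
\end{verbatim}
A job instance returned at a joint would then be pushed onto the deadline-ordered queue whose scheduling is discussed in the next subsection.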
With this time window approach, we transform the processing of one category of requests into a series of non-preemptive job instances, or a \emph{task instance}. A task instance's relative deadline is equal to its period, which is the length of the time window used to generate the task instance. This task instance is not strictly a periodic task, since each job instance has a different execution time due to the different numbers of frames inside the batches, and some job instances have $0$ as their execution time (no frames in the batch). An example of using time windows to batch frames is shown in Figure \ref{fig:tw}. Setting appropriate time window lengths so that no frame's latency exceeds its deadline remains a challenge. DisBatcher utilizes a deadline-centric design: for one category of requests, DisBatcher sets the time window length to half of the smallest relative deadline among all requests, regardless of other parameters such as request arrival periods. In fact, we have the following theorem. \vspace{-5pt} \theoremstyle{definition} \begin{theorem} Given a set of deep learning inference requests $I$. Each request $I^g_m, g\in \Gamma, m\in M_g$ consists of a series of frames which need to be processed by GPU, where $\Gamma$ denotes the set of all request categories and $M_g$ denotes the set of all requests in category $g$. Each request also has a relative deadline $d^g_m$. A request is schedulable if the latency of each frame is smaller than or equal to $d^g_m$. Likewise, we call a job instance schedulable if its latency is no larger than its deadline. If we use the time window scheme to batch process the frames with the time window length of each category $W_g$ set to \vspace{-2pt} \begin{equation} \vspace{-2pt} W_g = \frac{1}{2}\min_{m\in M_g} d^g_m, g\in \Gamma, \end{equation} all requests in $I$ are schedulable if the periodic batched job instances are schedulable. \end{theorem} \begin{proof} When the length of the time windows of a request category is equal to half of the smallest relative deadline in this category, then for every frame in every request of this category there are at least two time window joints between the arrival time and the deadline of the frame (inclusive). This argument is illustrated in Figure \ref{fig:tw}, where the length of the time windows is set to be smaller than half of the smallest relative deadline. All frames of the same category that arrive in the same time window are batched at the end of that time window, which is also the first time window joint they encounter. Since the batched job instance takes the next time window joint as its deadline, all frames finish processing before the second time window joint if the batched job instance meets its deadline. Since there are at least two time window joints between the arrival time and the deadline of a frame, the deadline of a job instance will be earlier than the deadlines of all its corresponding frames. Therefore, if the job instances are schedulable, so are the frames. \end{proof} \vspace{-10pt} \subsection{Job Instance Scheduling with EDF} With the DisBatcher approach, we transform the problem of how to batch and schedule frames in real time into the problem of scheduling a set of non-preemptive periodic task instances. The new task instances are not strictly traditional non-preemptive periodic tasks, since job instances of the same task instance have varying execution times.
Non-preemptive periodic tasks whose jobs have varying execution times are called non-preemptive multiframe tasks \cite{moyo2010schedulability}\cite{chakraborty2002schedulability}\cite{baruah2010preemptive}; non-preemptive periodic tasks with fixed execution times are a special case of them. There are two types of scheduling algorithms for non-preemptive multiframe tasks --- algorithms that do not permit the processor to be idle when there are jobs that have been released but have not completed execution (non-idling), and algorithms that allow idle times (idling). For {\textsc{DeepRT}\xspace}, we choose non-idling Earliest Deadline First (EDF) as the scheduling algorithm, for two reasons. First, inserted idle times waste precious GPU computation power, sacrificing the total throughput of the system. Second, although some idling algorithms can perform better than non-idling algorithms in terms of the number of schedulable tasks \cite{ekelin2006clairvoyant}, they often depend on complicated heuristics, and their performance gains are only demonstrated through simulations. In fact, finding an optimal schedule in the idling non-preemptive context is an NP-hard problem \cite{gary1979computers}. On the contrary, EDF has been shown to be an optimal scheduling algorithm for non-preemptive multiframe tasks in the non-idling context \cite{george1995optimality}\cite{baruah2006schedulability}\cite{chakraborty2002schedulability}. In Section \ref{sec:design}, we will discuss how {\textsc{DeepRT}\xspace} performs admission control to make sure that all job instances are schedulable. It is worth mentioning that the scheduling scheme of {\textsc{DeepRT}\xspace} makes it easy to support non-real time requests, which we treat similarly to real time requests: we also use DisBatcher to batch non-real time requests and transform their frames into task instances. However, for performance isolation we do not batch non-real time requests together with real time requests; we use a large time window for non-real time requests to make sure they obtain a low deadline priority, and we impose on them a large arrival period regardless of their true arrival periods so that they do not aggregate into large batches and cause too much priority inversion. \section{Related Work} \label{sec:relatedwork} \textbf{\ \ \ Deep Learning Inference on the Edge.} There has been a surge of industrial and research efforts to develop deep learning inference systems on the cloud or the edge. Tensorflow-Serving \cite{olston2017tensorflow} and Triton Inference Server \cite{triton} are two industrial general-purpose inference platforms. Clipper \cite{crankshaw2017clipper} is a cloud based, throughput and latency oriented model serving system with a modular design. Mainstream \cite{jiang2018mainstream} enables work sharing among different vision applications to increase throughput. In \cite{fang2017qos}, the authors propose to use deep reinforcement learning to adaptively select the model and batch size to optimize QoS, defined as a combination of accuracy and latency. Swayam \cite{gujarati2017swayam} is a cloud based machine learning serving system which autoscales resources to meet SLA goals. In \cite{zhou2019adaptive}, the authors propose to partition CNN models across multiple IoT devices to speed up inference.
Most of the above efforts aim to improve throughput and latency performance, but either (1) they assume abundant cloud resources and achieve their performance goals through scaling the resources, or (2) they improve latency performance but offer no soft real time guarantees. DeepQuery \cite{fang2019serving} co-schedules real time and non-real time tasks on GPU, but its primary focus is to optimize the performance of non-real time tasks. \begin{figure} \centering \begin{subfigure}{0.23\textwidth} \includegraphics[width = 1.0\linewidth]{figures/eval-adapt-server.png} \caption{On desktop computer.} \label{fig:eval-adapt-server} \end{subfigure} \begin{subfigure}{0.23\textwidth} \includegraphics[width = 1.0\linewidth]{figures/eval-adapt-jetson.png} \caption{On Jetson TX2.} \label{fig:eval-adapt-jetson} \end{subfigure} \vspace{3pt} \caption{Comparison of the number of deadline misses caused by manually injected overruns between enabling and disabling the Adaptation Module.} \label{fig:eval-adapt} \vspace{-15pt} \end{figure} Meanwhile, a few works study the performance characteristics of deep learning inference on the edge. In \cite{hanhirova2018latency} the authors study the latency and throughput performance of object recognition and object detection deep learning models, focusing on the latency-throughput trade-off under different batch sizes. In \cite{liang2020ai} the authors characterize different AI applications using their specially built edge accelerators and evaluate the benefit of model splitting. \textbf{Augmenting GPU with Real Time Features.} Some researchers look deeper into how GPUs work and try to add real time features to GPU based computations. In \cite{wang2017quality}, the authors propose to provide QoS support for GPU applications through fine-grained management of GPU resources such as registers, memory, and computation cycles. GPUSync \cite{elliott2013gpusync} is a real time management framework supporting multiple scheduling policies, such as rate-monotonic and EDF, using synchronization-based management. In \cite{liu2020removing}, the authors propose to separate CNN input data into regions of different importance and to prioritize critical tasks by optimizing over the importance of regions. There have also been works providing GPUs with preemption ability by implementing GPU context switches \cite{tanasic2014enabling}\cite{park2015chimera}\cite{wang2016simultaneous}\cite{wu2017flep}. All these works differ from ours in that they provide real time features to GPU processing by manipulating lower level components such as the GPU driver. \textbf{Latency-centric IoT Data Processing.} Apart from processing computer vision application requests using GPU, researchers have proposed various scheduling systems to perform traditional processing on video or IoT contents. In \cite{chu1999cpu}, the authors propose a CPU service class for multimedia real time processing and put forward scheduling algorithms to process different service classes on the CPU. Janus \cite{rivas2010janus} provides a cross-layer CPU scheduling architecture for virtual machine monitors to schedule soft real time multimedia processing tasks. VideoStorm \cite{zhang2017live} has an offline profiler which generates videos' resource-quality profiles, and it uses these profiles to jointly optimize processing quality and latency. Miras \cite{yang2019miras} proposes a reinforcement learning based scheduling scheme for scientific workflow data on the cloud, minimizing the average response time of workflow requests.
{ "timestamp": "2021-05-06T02:07:02", "yymm": "2105", "arxiv_id": "2105.01803", "language": "en", "url": "https://arxiv.org/abs/2105.01803" }
\section{Introduction} Recent years have seen renewed interest in triangular-lattice antiferromagnets featuring anisotropic interactions and other traits conducive to exotic quantum ground states, particularly in the hunt for experimental realizations of quantum spin liquids. Mapping the phase diagrams of these materials is thus of paramount importance, as the variation in magnetic anisotropy, relative exchange coupling strengths, and corresponding magnetic Hamiltonians offered by different materials is fundamental to the pursuit of such states. One system that sparked an explosion of experimental and theoretical interest is the triangular-lattice antiferromagnet YbMgGaO$_4$\xspace, which has led to a broad research effort that currently involves a number of related compounds \cite{sanders2017magnetism,baenitz2018planar,sichelschmidt2020effective,sichelschmidt2019electron,cevallos2018anisotropic,li2018absence,liu2018rare,ding2019gapless,ranjith2019field,ranjith2019anisotropic,xing2019field,bordelon2019field,xing2019synthesis,xing2019class,scheie2020crystal,bastien2020long}. Two important physical aspects of this group of materials have become clear from recent insights. First, because of the localized nature of the $f$-orbitals, the ranges of the effective spin-spin interactions in these insulating materials are strongly limited, implying that all of the Kramers-ion-based materials are expected to be closely described by the same nearest-neighbor anisotropic-exchange model with parameters permitted by triangular-lattice symmetry \cite{li2016anisotropic,rau2018frustration,zhu2018topography,maksimov2019anisotropic}. This reasonably compact model should provide a consistent interpretation of current and future experiments and give important new insights into the fundamental properties of these materials \cite{li2020spin}. The second generic feature is the strong effect of disorder observed in some of the well-studied representatives of these compounds. It has been argued theoretically that even a benign form of bond disorder necessarily generates perturbations that are relevant in the Imry-Ma sense \cite{imry1975random,zhu2017disorder,parker2018finite}, making a consideration of defects an inevitable and essential part of a realistic description of most anisotropic-exchange magnets. Empirically, many of the newly synthesized materials seem to show no magnetic ordering \cite{sanders2017magnetism,baenitz2018planar,sichelschmidt2020effective,sichelschmidt2019electron,cevallos2018anisotropic,li2018absence,liu2018rare,ding2019gapless,ranjith2019field,ranjith2019anisotropic,xing2019field,bordelon2019field,xing2019synthesis,xing2019class,scheie2020crystal,bastien2020long}. Given the strong disorder effects, it is then an open question whether the non-magnetic ground states of these materials are due to a genuine quantum-disordered spin-liquid (SL) phase, or due to a scenario similar to the disorder-induced ``spin-liquid mimicry'' suggested for YbMgGaO$_4$\xspace \cite{zhu2017disorder}. There is also an intriguing broader question of whether the disorder-induced spin-liquid-like behavior retains any of the unique and desired properties of intrinsic spin liquids, thus potentially turning disorder into a feature rather than an obstacle \cite{kimchi2018valence,andrade2018cluster,bilitewski2017jammed}.
A counterintuitive example of the role of disorder in a related material is the case of NaYbO$_2$, where introducing Na$^+$ site vacancies leads to an antiferromagnetic transition at a few kelvin; disorder thus supports an ordered phase \cite{guo2020magnetic}. Further understanding of and insight into the role of disorder are needed to make progress in this direction. One of the persistent issues in the studies of anisotropic-exchange magnets in general, and of the rare-earth family in particular, is the identification of their model parameters \cite{maksimov2020rethinking}. In the case of the disorder-induced pseudo-SL state, this problem is further aggravated, as it is not clear what state the disorder-free system would have assumed. In this work, we propose that experimental and theoretical investigations of the \emph{field-induced phases} offer a powerful instrument to significantly narrow the allowed parameter space to a region that is consistent with the material's phenomenology. In YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace, the source of disorder is the $R\bar{3}m$ symmetry, which leads to fifty-fifty site mixing of the non-magnetic cations Mg$^{2+}$ or Zn$^{2+}$ with Ga$^{3+}$. Efforts to determine the exchange parameters, needed for placing YbMgGaO$_4$\xspace in proposed phase diagrams and for comparing to the numerous theoretical investigations in order to affirm or deny a QSL state, were obstructed by various broadening effects and the consequently enhanced uncertainty of the measurements. Several studies concentrated their efforts on further refining measurements of the exchange parameters \cite{zhang2018hierarchy,steinhardt2020fieldinduced}. In this work, we provide a detailed study of the field-induced effects and characteristics of YbZnGaO$_4$ and offer a comparison with the results for YbMgGaO$_4$ \cite{steinhardt2020fieldinduced}. Informed by measurements from high resolution magnetometry and a variety of neutron scattering techniques, we put forward a theoretical analysis of the structure of their field-induced phase diagram and propose a parameter region for both materials that is compatible with our empirical findings. \section{Results} \begin{figure*}[t] \includegraphics[width=\linewidth]{TDO_SQUID_AC_letteredc.eps} \vskip -0.2cm \caption{Tunnel diode oscillation (TDO) frequency and ac susceptibility. (a) and (b) TDO ($\Delta f/f \propto \Delta \textbf{M}/ \Delta \textbf{H}$) shows an anomaly for $\textbf{H}\parallel \textbf{c}$ and $\textbf{H}\perp\textbf{c}$, respectively, which weakens as temperature increases (curves offset for clarity in (b)). (c) The anomaly's response to applied field shows anisotropy. (e) $d\textbf{M}/d\textbf{H}$ measured with SQUID corroborates the TDO measurement. (f) Integrating the TDO frequency shift from 0 to the approximate saturation field yields a curve further corroborated by dc susceptibility measurements. (g) Comparison of the critical temperatures of YbZnGaO$_4$ measured in this work to earlier measurements by Ma et al. \cite{ma2018spin}.} \label{TDO_SQUID} \end{figure*} \subsection{Preliminary characterization} Susceptibility measurements on a 1.81 mg single crystal sample of YbZnGaO$_4$, conducted using an in-house Cryogenic S700X SQUID magnetometer (with a $^3$He probe), yield low-temperature fits suggesting Curie-Weiss temperatures of $\Theta_{CW}=-2.67$ K and $\Theta_{CW}=-2.62$ K for fields parallel and perpendicular to the sample $\vec{c}$ axis, respectively (see Supplementary Figure 4).
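For context, a Curie-Weiss temperature of this kind is typically extracted from a linear fit of the inverse susceptibility, $\chi^{-1} = (T - \Theta_{CW})/C$, over the low-temperature paramagnetic regime. A minimal Python sketch of such a fit (with synthetic data and illustrative constants, not our measured values) is:
\begin{verbatim}
import numpy as np

# Synthetic Curie-Weiss susceptibility: chi = C / (T - Theta_CW)
T = np.linspace(2.0, 10.0, 40)    # temperature (K)
C, theta_cw = 1.9, -2.67          # illustrative constants
chi = C / (T - theta_cw)

# Fit 1/chi = T/C - Theta_CW/C: slope = 1/C, intercept = -Theta_CW/C
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
C_fit = 1.0 / slope
theta_fit = -intercept * C_fit
print(f"C = {C_fit:.2f}, Theta_CW = {theta_fit:.2f} K")  # -> -2.67 K
\end{verbatim}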
These values are more isotropic than those reported for YbMgGaO$_4$\xspace or for YbZnGaO$_4$\xspace in earlier works (though YbZnGaO$_4$\xspace was also more isotropic than YbMgGaO$_4$\xspace in those measurements) \cite{ma2018spin}. We emphasize that while the difference in the Curie-Weiss temperature for in-plane vs out-of-plane fields initially suggested a rather strong easy-plane character of YbMgGaO$_4$\xspace \cite{paddison2017continuous}, subsequent spectroscopic studies have hinted at a rather moderate $XXZ$ anisotropy, yielding a nearly Heisenberg value of $\Delta\!=\!0.8$--0.9 \cite{zhang2018hierarchy}. In the case of YbZnGaO$_4$\xspace, the Curie-Weiss temperatures for the in-plane and out-of-plane fields from our measurements and those of earlier works \cite{ma2018spin} are much closer to the Heisenberg limit. Given the trend, this indicates that the anisotropy in YbZnGaO$_4$\xspace may, in fact, be of the easy-axis type. A more direct demonstration is offered by the results discussed in Sec.~II.B. A unique opportunity afforded by a close comparison between YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace is the qualitative contrast of the effect of cation substitution on the disorder in the two materials. One obvious consideration is the differing ionic radii of Mg$^{2+}$ and Zn$^{2+}$, which are approximately 72 and 74 pm, respectively (a 2.7$\%$ difference). This small difference yields slightly smaller lattice parameters for YbMgGaO$_4$\xspace compared to YbZnGaO$_4$\xspace, as shown by a comparison of the parameters given in references \cite{li2015gapless} and \cite{ma2018spin}, respectively. This, in turn, may yield marginally stronger exchange for YbMgGaO$_4$\xspace, also explaining YbZnGaO$_4$\xspace's smaller Curie-Weiss temperatures and the lower field onset of the anomaly discussed in Sec.~II.B. This is consistent with the observation that the related compound NaYbO$_2$, with an in-plane lattice parameter smaller by only about 1.8$\%$ than that of YbMgGaO$_4$, shows a significantly larger Curie-Weiss temperature ($\Theta_{CW} = -10.38$ K \cite{bordelon2019field} as opposed to approximately $-4$ K for YbMgGaO$_4$\xspace \cite{steinhardt2020fieldinduced}). Furthermore, both Zn$^{2+}$ and Ga$^{3+}$ have $d^{10}$ electronic configurations, whereas Mg$^{2+}$ is $p^6$. While displacement of Yb$^{3+}$ can still be expected based on the charge difference between the cations, leading to the observed broadening in inelastic neutron scattering studies of the single magnon dispersion and crystal electric field levels \cite{li2017crystalline}, the local environment may be more homogeneous in YbZnGaO$_4$\xspace, which may be related to the difference in anisotropic response under field. However, further studies will be required to compare the effective role of disorder in the two systems. \subsection{Tunnel diode oscillator technique and SQUID magnetization} The first indications of the magnetic transitions in YbZnGaO$_4$\xspace, which are similar to the ones we previously identified in YbMgGaO$_4$\xspace \cite{steinhardt2020fieldinduced}, were found using the tunnel diode oscillator (TDO) technique (see methods). As the applied magnetic field changes, the derivative of the sample magnetization $\textbf{M}(\vec{H})$ with respect to field changes as well, altering the inductance of the coil and the measured resonant frequency of the circuit and yielding a signal proportional to $\chi (\vec{H})$.
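Since the TDO signal is proportional to $dM/dH$, the magnetization curve it implies can be recovered, up to a scale factor, by numerically integrating the frequency shift over field; the comparison with the SQUID magnetization below rests on exactly this idea. A schematic Python sketch with synthetic data (arbitrary units, not our measured trace):
\begin{verbatim}
import numpy as np

# Synthetic stand-in for a TDO trace: frequency shift vs field,
# proportional to chi(H) = dM/dH (arbitrary units).
H = np.linspace(0.0, 8.0, 400)  # applied field (T)
delta_f = 1.0 / np.cosh((H - 1.0) / 0.3) ** 2 + 0.2

# Cumulative trapezoidal integration gives M(H) up to a scale factor.
M = np.concatenate(([0.0], np.cumsum(
    0.5 * (delta_f[1:] + delta_f[:-1]) * np.diff(H))))
M /= M[-1]  # normalize to the approximate saturation value
\end{verbatim}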
From Figure \ref{TDO_SQUID} (a), where the change in resonant frequency is plotted versus the applied field, a clear nonlinearity is apparent beginning just below 1 T. This nonlinearity persists to at least 2 T for the field parallel to the sample $\vec{c}$ axis. Upon raising the temperature, the feature is completely suppressed at about 4 K, affirming its magnetic origin. This behavior is also consistent with a similar feature in YbMgGaO$_4$\xspace \cite{steinhardt2020fieldinduced}. As the sample orientation is rotated with respect to the field, the anisotropic response becomes apparent, with the feature encompassing a broader range of fields for $\textbf{H}\perp\textbf{c}$. This feature is confirmed by more conventional magnetization and susceptibility measurements carried out using the in-house SQUID magnetometer with a $^3$He probe at temperatures down to 300 mK. The same distinct plateau-like feature is apparent in $\chi(\vec{H})$ in both the TDO and SQUID measurements, as is clear from Figures \ref{TDO_SQUID} (d) and (e). Integrating the change in frequency with respect to applied field yields a curve consistent with the magnetization measured by SQUID (see Figure \ref{TDO_SQUID} (c)). We note that the anomaly in YbZnGaO$_4$\xspace measured via TDO and SQUID magnetization occurs at a slightly lower field than the analogous feature measured in YbMgGaO$_4$\xspace \cite{steinhardt2020fieldinduced} (see Supplementary Figure 5). The features in the magnetization derivative and TDO measurements in both in-plane and out-of-plane fields are reminiscent of the plateau-like behavior expected in the canonical Heisenberg or $XXZ$ nearest-neighbor triangular-lattice magnets \cite{starykh2015unusual}. Importantly, the TDO and magnetization measurements in YbZnGaO$_4$\xspace and YbMgGaO$_4$\xspace, together with the Curie-Weiss extrapolations from the susceptibility mentioned above, suggest an important distinction between the two materials. The in-plane and out-of-plane responses behave differently as a function of the field orientation (compare Figure~\ref{TDO_SQUID}(b) of this work and Figure~1(c) in Ref.~\citep{steinhardt2020fieldinduced}). Specifically, the relative severity of the anomaly in the TDO data of YbZnGaO$_4$\xspace for $H\!\parallel\! c$ resembles that in the $H\!\perp\! c$ data of YbMgGaO$_4$\xspace, and vice versa. This suggests that YbZnGaO$_4$\xspace and YbMgGaO$_4$\xspace correspond to different types of $XXZ$ anisotropy: easy-axis and easy-plane, respectively. \subsection{ac susceptibility and disorder} From our measurements of the ac susceptibility of YbZnGaO$_4$\xspace (see Figure \ref{TDO_SQUID} (f)), we find that the critical temperatures corresponding to the characteristic cusps of the ac susceptibility occur at substantially lower temperatures than previously measured \cite{ma2018spin}. Indeed, our characteristic temperatures are around $20\%$--$30\%$ lower for all comparable frequencies (see Figure \ref{TDO_SQUID} (g)). Consequently, for our measurement of $\Delta P = \frac{\Delta T_f}{T_f \Delta \log(\omega)}$, a quantitative measure of the relative shift of the freezing temperature per decade of frequency, we find a value of $\Delta P = 0.139(4)$, substantially larger than the $\Delta P = 0.053(4)$ of previous work. This value of $\Delta P$ is typically associated with superparamagnetic behavior \cite{mydosh1993spin} as opposed to spin glasses.
That being said, insulating spin glasses typically show greater frequency dependence~\cite{mydosh1993spin}. If this $\Delta P$ \emph{is} interpreted as indicative of superparamagnetic behavior, it may be understood as a consequence of many microscopic domains with insignificant cooperative freezing. This phenomenon is likely due to disorder, though the further suppression of the freezing temperature could be attributed to the high degree of frustration in addition to disorder. The percentage of frozen spins is estimated to be approximately 16\% (see Supplementary Figure 9), comparable to the previous study \cite{ma2018spin} and to the case of YbMgGaO$_4$\xspace \cite{paddison2017continuous}. General questions about the effects of disorder in spin systems persist, especially in light of its aforementioned potential relationship to QSL states in some materials \cite{kimchi2018valence,andrade2018cluster,bilitewski2017jammed}. The origin and effects of disorder related to the observed spin-liquid features of YbMgGaO$_4$\xspace have been addressed earlier \cite{zhu2017disorder,li2017crystalline}. \subsection{Neutron scattering} \begin{figure*} \includegraphics[width=\linewidth]{CORELLI_YZGO.eps} \vskip -0.2cm \caption{Diffuse neutron scattering. The first seven panels show the evolution of the diffuse neutron scattering with increasing applied field for $\vec{H} \parallel \vec{c}$, for data integrated over $-0.5 < L < 0.5$. The eighth panel shows integrations of line cuts across the first BZ edge, where the uncertainty represents one standard deviation. The sample temperature was 130 mK, and a 20 K background was subtracted to isolate magnetic contributions.} \label{CORELLI} \end{figure*} For this work, diffuse neutron scattering data were collected at CORELLI at Oak Ridge National Laboratory in total scattering mode (see Figure \ref{CORELLI}). The sample was aligned with the $[h,k,0]$ scattering plane and the applied field along the sample $\vec{c}$ axis. Data were collected at 130 mK for 0, 1, 1.5, 2, 3, 4, and 5 T. Color maps of the neutron scattering intensity after subtracting the 20 K data as background reveal the evolution of the magnetic structure, which is qualitatively comparable to the YbMgGaO$_4$\xspace data. With no applied field, the intensity largely resides at the high-symmetry M points on the edges of the Brillouin zone (see also Supplementary Figure 6, left panel). At 1 T the scattering intensity is almost completely uniform along the zone edge, while at 1.5 T the intensity is found predominantly at the high-symmetry K points. Intensity at the M points is well established to correspond to stripe-ordered states in long-range ordered triangular systems, while intensity at the K points is generally suggestive of 120$^\circ$-type ordering or other three-sublattice states \cite{maksimov2019anisotropic}. As in the case of YbMgGaO$_4$\xspace, this migration of the scattering intensity with applied field corresponds to the anomaly observed in the magnetometry data. The changes in intensity were further confirmed by measurements with the triple-axis spectrometer at SPINS (see Supplementary Figure 7). We further measured inelastic neutron scattering (INS) from a YbZnGaO$_4$\xspace single crystal sample at CNCS \cite{ehlers2011cncs} at Oak Ridge National Laboratory in applied field (see Figure \ref{CNCS}). Here we again see a clear evolution of the intensity as a function of energy and $Q$ with increasing applied field. We note that at low field the intensity is notably concentrated on the M points.
The intensity has no clear dispersion up to about 3 T, instead consisting of a broad continuum similar to the 0 T data, with weakening intensity as the spins are presumably canted further out of the scattering plane with increasing field. At 4 T, the scattering remains broad in energy, but a dispersion is faintly visible at low energy along the zone edge. This dispersion has minima at the zone edges and rises as it approaches the $\Gamma$ points. Its shape closely resembles that measured in the polarized state at 8 T, indicating that the system is approaching polarization. Measurements at DCS \cite{COPLEY2003477} (see Supplementary Figure 6) and SPINS (see Supplementary Figure 7) at the National Institute of Standards and Technology confirm the features shown in Figure \ref{CNCS} across a variety of backgrounds and instrument setups. At 8 T the system is nearly completely polarized, and a clear dispersion is evident. As in the case of YbMgGaO$_4$\xspace \cite{paddison2017continuous}, the dispersion is broadened, likely due to the disorder and the resulting distribution of exchange parameters and $g$-factors. Additional measurements to characterize YbZnGaO$_4$\xspace's response to applied field were conducted using polarized neutrons at BT7 \cite{lynn2012bt7} at the National Institute of Standards and Technology, with a vertical guide field setup (see Supplementary Figure 8). After subtracting background measurements (40 K) from base temperature ones (0.3 K) and correcting for the polarization rate, comparison of the spin flip (SF) channel, which measures the in-plane component, shows a stronger in-plane component along the zone edge at 0 T compared to 2 T, with particularly high intensity in the vicinity of the high-symmetry M point. This likely affirms increased canting of the spins with increasing field, but may also indicate a reduced spin component parallel to the zone edge (pointing to nearest neighbors in real space). The greater intensity near the zone edge in the SF scattering at 0 T can also be seen in the orthogonal cut from M to $\Gamma$. \subsection{Model and zero-field phases} The interplay of the crystal field and spin-orbit coupling on the magnetic moment of the Kramers ion results in the splitting of its levels into a well-separated doublet structure built from a mix of various spin and orbital states. The exchange interactions of the lowest doublets (pseudo-spins-$\frac{1}{2}$) are constrained only by the discrete lattice symmetry. For the triangular lattice of YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace, this general anisotropic-exchange nearest-neighbor model is given by \begin{align} \label{HJpm} {\cal H}=&\sum_{\langle ij\rangle}\Big\{J \Big(S^{x}_i S^{x}_j+S^{y}_i S^{y}_j+\Delta S^{z}_i S^{z}_j\Big)\\ +&2 J_{\pm \pm} \Big[ \Big( S^x_i S^x_j - S^y_i S^y_j \Big) \cos\tilde{\varphi}_\alpha- \Big( S^x_i S^y_j+S^y_i S^x_j\Big)\sin\tilde{\varphi}_\alpha \Big]\nonumber\\ +&J_{z\pm}\Big[ \Big( S^y_i S^z_j +S^z_i S^y_j \Big) \cos\tilde{\varphi}_\alpha -\Big( S^x_i S^z_j+S^z_i S^x_j\Big)\sin\tilde{\varphi}_\alpha \Big]\Big\},\nonumber \end{align} where the bond angles $\tilde{\varphi}_\alpha$ are the angles of the primitive vectors of the lattice with the $x$ axis, $\tilde{\varphi}_\alpha\!=\!\{0,2\pi/3,-2\pi/3\}$. The $J_{\pm \pm}$ and $J_{z\pm}$ bond-dependent terms are due to the strong spin-orbit coupling \cite{li2016anisotropic}.
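To make the classical analysis below concrete, the following sketch (ours; a single-bond energy evaluation, not the full minimization code used for this work) computes the classical energy of the nearest-neighbor model \eqref{HJpm} on one bond for three-component unit spins; the energy of a candidate sublattice configuration is then assembled by summing such terms over all bonds.
\begin{verbatim}
import numpy as np

def bond_energy(Si, Sj, phi, J, Delta, Jpp, Jzp):
    """Classical energy of Eq. (1) on a single bond with angle phi,
    for unit spins Si, Sj (components [Sx, Sy, Sz])."""
    xxz = J * (Si[0]*Sj[0] + Si[1]*Sj[1] + Delta*Si[2]*Sj[2])
    pp = 2*Jpp * ((Si[0]*Sj[0] - Si[1]*Sj[1]) * np.cos(phi)
                  - (Si[0]*Sj[1] + Si[1]*Sj[0]) * np.sin(phi))
    zp = Jzp * ((Si[1]*Sj[2] + Si[2]*Sj[1]) * np.cos(phi)
                - (Si[0]*Sj[2] + Si[2]*Sj[0]) * np.sin(phi))
    return xxz + pp + zp

# Example: two antiparallel in-plane spins on a phi = 0 bond,
# with illustrative couplings in units of J.
S1 = np.array([1.0, 0.0, 0.0])
S2 = np.array([-1.0, 0.0, 0.0])
print(bond_energy(S1, S2, 0.0, J=1.0, Delta=1.1, Jpp=-0.03, Jzp=0.3))
\end{verbatim}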
The zero-field phase diagram of the model \eqref{HJpm} with an additional second-nearest-neighbor exchange $J_2$ has been studied extensively \cite{li2016anisotropic,zhu2018topography,maksimov2019anisotropic,liu2016semiclassical,luo2017ground,rousochatzakis2016kitaev,iaconis2018spin}, and its 3D classical version is shown in Figure~\ref{fig_3d} in the $J_2$-$J_{\pm\pm}$-$J_{z\pm}$ axes for $\Delta=1$, with all couplings in units of $J>0$. There are three main ordered states in the antiferromagnetic limit of the $XXZ$ interaction: a coplanar three-sublattice $120^\circ$ state, which corresponds to the ordering vector at the $K$ points; a collinear two-sublattice stripe-\textbf{x} state, with spins pointing along the nearest-neighbor bonds and the ordering vector at the $M$ points; and a second collinear stripe-\textbf{yz} state with the same ordering vector, but with spins tilted out of the lattice plane and perpendicular to the nearest-neighbor bonds. We should note that the phase diagram in Figure~\ref{fig_3d} is simplified. The simplification comes from taking only the single-\textbf{Q} spiral ansatz, which does not include more complicated multi-\textbf{Q} states \cite{luo2017ground,iaconis2018spin}. Moreover, the quantum version of the phase diagram also has a spin-liquid phase \cite{zhu2018topography,maksimov2019anisotropic}, which is located along the tricritical boundary between the stripe and $120^\circ$ states for a limited range of the $XXZ$ anisotropy near the Heisenberg limit. \subsection{Exploring the XXZ parameter space} One of the puzzling features observed in some of the first experiments on YbMgGaO$_4$\xspace \cite{li2015gapless} was an indication of field-induced phase crossovers seen in the magnetic susceptibility. Recent susceptibility and TDO measurements of YbMgGaO$_4$\xspace have further supported these observations \cite{steinhardt2020fieldinduced}. Here we present a variety of measurements on high-quality single-crystal samples to show that very similar features, indicating field-induced crossovers for both $H\!\parallel\! c$ and $H\!\perp\! c$, also occur in YbZnGaO$_4$\xspace. The neutron scattering measurements in the out-of-plane magnetic field $H\!\parallel\! c$ \cite{steinhardt2020fieldinduced} also indicated a field-induced crossover and brought another piece of evidence to light. Neutron diffraction showed that the field-induced crossover is accompanied by a shift of magnetic intensity from the $M$ points at lower fields to the $K$ points at higher fields. In an ordered state, such an intensity shift would correspond to a transition from a four-sublattice to a three-sublattice state. Our key finding is that this feature alone allows one to put strong boundaries on the exchange parameters of the system when the phase diagram of the relevant parameters is considered. From the susceptibility data presented above, and as established by earlier work in the case of YbMgGaO$_4$\xspace, YbZnGaO$_4$\xspace can be characterized as having easy-axis anisotropy, while YbMgGaO$_4$\xspace is easy-plane. Therefore, in the following we use two representative values of the $XXZ$ anisotropy for the model (\ref{HJpm}): the easy-plane case $\Delta\!=\!0.8$, related to YbMgGaO$_4$\xspace, and the easy-axis case $\Delta\!=\!1.1$, related to YbZnGaO$_4$\xspace. First, we explore the parameter region that would allow for a transition from the four-sublattice to the three-sublattice state at some finite value of the out-of-plane magnetic field $H_c$ using classical energy minimization.
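Schematically, such a minimization can be organized as in the Python sketch below. This is a minimal illustration rather than the code behind Figure~\ref{HcHsJzpJp}: it assumes classical spins of length $S\!=\!1/2$, the single-${\bf Q}$ three-sublattice ($\sqrt{3}\times\sqrt{3}$) and four-sublattice ($2\times2$) ans\"atze described in Methods, an $XXZ$ form of $J_2$ with the same anisotropy $\Delta$, and a Zeeman term in units where $g\mu_{\rm B}\!=\!1$; the parameter point is the illustrative one used for the SpinW calculations, with $J_2\!=\!0.05J$ as fixed next.

\begin{verbatim}
# Sketch only: classical energy minimization of three- vs
# four-sublattice ansatze of Eq. (1) in a field H || c.
import numpy as np
from scipy.optimize import minimize

PHI = (0.0, 2*np.pi/3, -2*np.pi/3)   # bond angles of Eq. (1)

def jmat(J, Delta, Jpp, Jzp, phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[J + 2*Jpp*c, -2*Jpp*s,    -Jzp*s],
                     [-2*Jpp*s,    J - 2*Jpp*c,  Jzp*c],
                     [-Jzp*s,      Jzp*c,        J*Delta]])

def spins(x, S=0.5):
    # (theta, phi) pairs -> classical spins of length S
    th, ph = x[0::2], x[1::2]
    return S*np.stack([np.sin(th)*np.cos(ph),
                       np.sin(th)*np.sin(ph), np.cos(th)], 1)

def e3(x, mats, j2m, H):
    # sqrt(3) x sqrt(3) ansatz: every forward NN bond maps
    # sublattice s -> (s+2) mod 3, and all J2 bonds are
    # intra-sublattice (3 forward bonds per site).
    Sp = spins(x)
    e = sum(Sp[s] @ mats[a] @ Sp[(s+2) % 3]
            for s in range(3) for a in range(3))/3
    e += sum(Sp[s] @ j2m @ Sp[s] for s in range(3))
    return e - H*Sp[:, 2].mean()

def e4(x, mats, j2m, H):
    # 2 x 2 ansatz; sublattice index is (m mod 2, n mod 2).
    Sp = spins(x).reshape(2, 2, 3)
    e = 0.0
    for p in range(2):
        for q in range(2):
            for (dp, dq), a in [((1,0),0), ((0,1),1), ((1,1),2)]:
                e += Sp[p,q] @ mats[a] @ Sp[(p+dp)%2, (q+dq)%2]
            for dp, dq in [(1,1), (1,0), (0,1)]:   # J2 bonds
                e += Sp[p,q] @ j2m @ Sp[(p+dp)%2, (q+dq)%2]
    return e/4 - H*Sp[..., 2].mean()

def gs(efun, nspin, args, ntry=20, seed=1):
    # crude global minimum via random restarts
    rng = np.random.default_rng(seed)
    return min(minimize(efun, rng.uniform(0, 2*np.pi, 2*nspin),
                        args=args).fun for _ in range(ntry))

J, Delta, Jpp, Jzp, J2 = 1.0, 1.1, -0.03, 0.3, 0.05
mats = [jmat(J, Delta, Jpp, Jzp, p) for p in PHI]
j2m = J2*np.diag([1.0, 1.0, Delta])
for H in np.arange(0.0, 5.01, 0.5):   # units: g*mu_B = 1
    E3, E4 = gs(e3, 3, (mats, j2m, H)), gs(e4, 4, (mats, j2m, H))
    print(f"H={H:4.1f}  E3={E3:+.4f}  E4={E4:+.4f}  "
          + ("3-sublattice" if E3 < E4 else "4-sublattice"))
\end{verbatim}

Scanning $H$ on a fine grid and recording where the three-sublattice branch first becomes the ground state yields the $H_c/H_s$ maps discussed next.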
We fix the second-nearest-neighbor coupling $J_2$ to the value $0.05J$ (red line in Figure~\ref{fig_3d}) and the $XXZ$ anisotropy in the model (\ref{HJpm}) to the two values discussed above. That leaves the bond-dependent anisotropies $J_{\pm\pm}$ and $J_{z\pm}$ and the field as the parameters to scan through. We highlight our findings in Figures~\ref{HcHsJzpJp} (a) and (b) in the form of an intensity plot of the field $H_c$ of such a four-to-three-sublattice transition in units of the saturation field, $H_c/H_s$, in the $J_{\pm\pm}$--$J_{z\pm}$ axes (in units of $J$). The $120^\circ$ phase is a three-sublattice state already at zero field and remains such for all fields, while most of the stripe-phase regions remain four-sublattice throughout the entire field range. Our central result is illustrated by the gradient-color regions interpolating between the ``only-three-'' and ``only-four-sublattice'' regions in Figures~\ref{HcHsJzpJp} (a) and (b). They demonstrate that, already at the level of our classical energy analysis, the four-to-three-sublattice transition indeed takes place at some value of $H_c\!<\!H_s$ in a surprisingly extended region that emanates from the $120^\circ$ part of the phase diagram into the stripe-\textbf{yz} phase and extends up to $J_{z\pm} \!\sim\! J$ in both cases, with the intensity emphasizing how far this transition is from the saturation field $H_s$. It should be noted that for $S=1/2$, quantum effects in zero field are known to broaden the stability region of the $120^\circ$ phase beyond the boundaries of the classical consideration \cite{zhu2018topography}. Therefore, one may also expect the region of the four-to-three-sublattice transition to extend beyond the classical predictions of this work. While one can expect that quantum effects will further stabilize and extend the field-induced three-sublattice states, we note that experimentally the ``4-to-3'' (or M-to-K) transition in both YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace occurs at a rather low field, $H_c \!< \!0.5 H_s$, which provides further restrictions on the possible parameter ranges. At the level of our approximations, this constraint puts YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace in close proximity to the $120^\circ$ phase boundary, giving the upper bound $J_{z\pm}/J \!\alt\! 0.4$. As one can see in Figure~\ref{HcHsJzpJp} (a), there is a narrow region inside the stripe-\textbf{x} phase where the transition of interest also occurs, but it involves another transition back to the four-sublattice state at higher fields, so we do not consider it a region relevant to the materials in question here. \begin{figure*} \includegraphics[width=\linewidth]{CNCS_2magnon_fixedlabels.PNG} \vskip -0.2cm \caption{Inelastic neutron scattering. Panels (a-e) show the evolution of the inelastic neutron scattering with increasing applied field for $\vec{H} \parallel \vec{c}$ for integer fields from 0 to 4 T; the 8 T data (in which the magnetic excitations have been lifted above 1.2 meV) have been used for background subtraction. Panels (f) and (g) are calculated using SpinW for 0 T (stripe-\textbf{yz}) and 2.5 T (V state), respectively, with a summation over the possible magnetic domain orientations to best compare to the short-range order observed in experiment. Parameters used for the SpinW calculations are (in meV) $J=0.15$, $J_{\pm\pm} = -0.0045$, $J_{z\pm} = 0.045$, $J_{zz}=0.165$, $J_2=0.0075$, $J_{2,zz}=0.00825$, $g_{\parallel}=3.82$, such that $J_{\pm\pm}/J = -0.03$ and $J_{z\pm}/J=0.3$.
The dashed lines in (f) and (g) indicate the minimum energy of the two-magnon continuum for the possible domains. Panel (h) shows the 8 T dispersion, with a dotted line indicating the curve calculated from LSWT for the parameters described above. Experimental data are shown for an integration perpendicular to the path direction of width $\Delta Q = 0.36$~\r{A}$^{-1}$. Cuts of the experimental data were generated using Horace\cite{Ewings2016132}. } \label{CNCS} \end{figure*} \begin{figure}[t] \includegraphics[width=0.7\linewidth]{3DPhD} \caption{A simplified classical zero-field $J_2$-$J_{\pm\pm}$-$J_{z\pm}$ phase diagram of the model (\ref{HJpm}). The dashed red line marks the representative value of the $J_2$ term used for all panels of Figure~\ref{HcHsJzpJp}.} \label{fig_3d} \end{figure} \begin{figure*}[b] \includegraphics[width=0.75\linewidth]{combined_fig.png} \vskip -0.2cm \caption{(a) Intensity plot of $H_c/H_s$ in the $J_{z\pm}$--$J_{\pm\pm}$ axes for $\Delta =1.1$, $J_2=0.05J$. $H_c$ is the critical field of the transition from the four- to the three-sublattice state, and $H_s$ is the saturation field. The dot represents the parameter set used in the SpinW calculations. $H$ was scanned in steps of 0.1 in units of $g\mu_{\rm B} J$, equivalent to steps of 0.021 in terms of $H_c/H_s$ for $H_s = 4.815$. (b) Same as in (a) for $\Delta =0.8$, where the step size in $H_c/H_s$ was 0.026, corresponding to $H_s = 3.87$. (c) Intensity plot of the magnetic susceptibility, $\chi=dM/dH$, in the $J_{z\pm}$--$H$ plane for $\Delta =1.1$, $J_2=0.05J$, and $J_{\pm\pm}=0$. Singularities correspond to phase transitions. (d) Same as in (c) for $\Delta =0.8$.} \label{HcHsJzpJp} \vskip -0.2cm \end{figure*} \subsection{Field-induced states} To provide further insights into the effects of the field, we explore the new phases it can produce. In some cases, the field-induced transformation of the spin structures involves simple canting toward the field until full saturation is reached at some $H_s$. However, in frustrated spin systems, or in systems with anisotropic interactions, the field evolution is more complicated. The case of the triangular-lattice Heisenberg antiferromagnet is paradigmatic in this respect \cite{starykh2015unusual}, showcasing a well-known and much-studied sequence of transitions from the ``Y'' state to the ``up-up-down'' (UUD) plateau and the ``V'' state in its field evolution toward saturation. The $XXZ$ extension of the same model also includes non-coplanar ``umbrella'' and coplanar ``fan'' states \cite{yamamoto2014quantum}. Next-nearest-neighbor interactions and anisotropies also introduce a wider variety of four-sublattice field-induced states \cite{ye2017half,seabra2011competition}. However, the field evolution of the phases of the anisotropic-exchange model \eqref{HJpm}, which combines the frustration of the bond-dependent terms with that of the triangular-lattice geometry, remains largely unexplored. In this work, we offer some essential understanding and make significant steps in such an exploration. We use the same representative values of $J_2\!=\!0.05J$ and of the easy-axis and easy-plane $XXZ$ anisotropies, $\Delta\!=\!1.1$ and $\Delta\!=\!0.8$, as in Figures~\ref{HcHsJzpJp} (a) and (b) to provide insight into the rich phase diagram of the model \eqref{HJpm} in a field. In Figures~\ref{HcHsJzpJp} (c) and (d), we present intensity plots of the magnetic susceptibility $\chi\!=\!dM/dH$ in the $J_{z\pm}$--$H$ plane.
They are obtained from the classical energy minimization of the four- and three-sublattice states in a field for the two values of the $XXZ$ anisotropy and for $J_{\pm\pm}\!=\!0$. Since singularities in $\chi$ correspond to phase transitions, these figures, in fact, constitute the 2D $J_{z\pm}$--$H$ phase diagrams of the model (\ref{HJpm}) at fixed $\Delta$ and $J_2$ along the constant-$J_{\pm\pm}$ cut shown in Figures~\ref{HcHsJzpJp} (a) and (b). Similar vertical cuts along $J_{z\pm}$ for other values of $J_{\pm\pm}$ and $J_2$ are also presented below to provide an understanding of the field evolution of the different states across the other dimensions of the phase diagram. In the case of the easy-axis $XXZ$ anisotropy in Figure~\ref{HcHsJzpJp} (c), at lower values of $J_{z\pm}$ one observes the expected canonical sequence of the Y-UUD-V phase transitions of the three-sublattice states in the triangular-lattice $XXZ$ model \cite{starykh2015unusual,yamamoto2014quantum}. The corresponding spin structures are sketched in the figure. The field-induced behavior of the stripe states also includes multiple transitions. At the lowest fields, the collinear stripe-\textbf{yz} spin configuration, in which the spins are tilted off the basal plane of the lattice, is deformed into a non-coplanar four-sublattice state with all four spins on the elementary plaquette having different tilt angles. There is a broad crossover from this state to a structure with three spins forming an ``umbrella'' and the fourth strictly antiparallel to the magnetic field (see the sketches of the spin order in Figure~\ref{HcHsJzpJp} (c)). This latter state is stable in a wide field region. As the field increases further, at not-too-small $J_{z\pm}$ there is a spin-flop-like transition to a similar state, an umbrella with the fourth spin parallel to the field. For yet larger values of $J_{z\pm}$, the transition to saturation occurs directly from this ``umbrella+up'' state via a first-order transition. The main feature of both Figures~\ref{HcHsJzpJp} (c) and (d), central to our analysis, is that the region of stability of the three-sublattice states, related to the experimentally observed intensity at the $K$ point, \textit{expands} at larger values of the magnetic field. Therefore, there is a region of the model parameters where the evolution from the four-sublattice to the saturated state \emph{necessarily} proceeds via a high-field three-sublattice state. For the easy-axis case of Figure~\ref{HcHsJzpJp} (c), this high-field state is the coplanar ``V'' state. For the easy-plane $XXZ$ anisotropy case of Figure~\ref{HcHsJzpJp} (d), the four-sublattice phases and all the discussed trends are the same, while the three-sublattice region, classically, is a single non-coplanar ``umbrella'' state, which is a canted $120^\circ$ structure. We note that in the quantum case, and for not-too-small $XXZ$ anisotropy $\Delta$, this umbrella state is replaced by the same sequence of Y-UUD-V phases as in Figure~\ref{HcHsJzpJp} (c) \cite{yamamoto2014quantum}. In contrast with the large-$J_{z\pm}$ first-order transition from the four-sublattice state to saturation, the transitions from both the V and the umbrella states to saturation are second-order. We note that the discussed transitions agree with the prior classical Monte-Carlo simulations \cite{steinhardt2020fieldinduced}, conducted over a narrower range of parameters in the context of the parameter search for YbMgGaO$_4$\xspace.
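As a quick orientation point for the canonical sequence above, the UUD state has two spins along the field and one against it per three-site magnetic unit cell, so its plateau sits at one third of the saturation magnetization; this is a standard result for the nearest-neighbor triangular-lattice model \cite{starykh2015unusual}, quoted here for context rather than derived anew:
\begin{equation*}
M_{\rm UUD}=\frac{g\mu_{\rm B}\,(S+S-S)}{3}=\frac{g\mu_{\rm B}S}{3}=\frac{M_{\rm sat}}{3}.
\end{equation*}
Plateau-like features of this kind are precisely what the nonlinearities in the magnetometry data discussed earlier are reminiscent of.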
\begin{figure*} \includegraphics[width=\linewidth]{d11_j2_jpp} \vskip -0.2cm \caption{Intensity plots of the magnetic susceptibility for the easy-axis case, $\Delta =1.1$, and for various $J_2$ and $J_{\pm\pm}$. The axes and color scale are the same as in Figure~\ref{HcHsJzpJp} (c), which is also the central panel here.} \label{fig_d11_j2_jpp} \end{figure*} \subsection{Further evolution of the phases} \begin{figure*} \includegraphics[width=\linewidth]{d08_j2_jpp} \caption{Same as in Figure~\ref{fig_d11_j2_jpp} for $\Delta =0.8$. Phase diagrams without three-sublattice states are marked with a red frame.} \label{fig_d08_j2_jpp} \end{figure*} Here we present further details on the field evolution of the various phases of the model (\ref{HJpm}) for different choices of $J_2$ and $J_{\pm\pm}$. Figures~\ref{fig_d11_j2_jpp} and \ref{fig_d08_j2_jpp} show the intensity plots of the magnetic susceptibility in the $J_{z\pm}$--$H$ plane for the same choices of the $XXZ$ anisotropy, $\Delta=1.1$ and $\Delta=0.8$, as in Figures~\ref{HcHsJzpJp} (c) and (d), respectively. Each row of graphs corresponds to a different constant-$J_{\pm\pm}$ cut of Figures~\ref{HcHsJzpJp} (a) and (b), and each column corresponds to a different constant-$J_2$ cut of the 3D phase diagram in Figure~\ref{fig_3d}. While the variations of the $J_{\pm\pm}$ term simply give an elaborate dissection of what happens underneath the projected view of the intensity plots of Figures~\ref{HcHsJzpJp} (a) and (b) along the $J_{\pm\pm}=\pm 0.05J$ cuts, it is clear that the next-nearest-neighbor interaction $J_2$ works strongly against the field-induced three-sublattice states. This is in accord with Figure~\ref{fig_3d}, which shows that at $J_2\approx0.125J$ the $120^\circ$ state is completely eliminated from the zero-field phase diagram. The sets of parameters for which no three-sublattice state is observed at any field are highlighted with a red frame in Figure~\ref{fig_d08_j2_jpp}. Other changes include the shrinking or expansion of the phases already described, as well as the appearance of a new canted stripe state for large $J_2$ and negative $J_{\pm\pm}$, which occurs at $J_{z\pm}\lesssim 0.2J$ and larger fields for $\Delta\!=\!1.1$, and at all fields for $\Delta\!=\!0.8$ (see the upper right panels of Figures~\ref{fig_d11_j2_jpp} and \ref{fig_d08_j2_jpp}). The main takeaway from Figures~\ref{fig_d11_j2_jpp} and \ref{fig_d08_j2_jpp} is that the region of the four- to three-sublattice transition is limited by the extent of $J_{\pm\pm}$ already shown in Figures~\ref{HcHsJzpJp} (a) and (b), and more strongly by the next-nearest-neighbor interaction $J_2$. Therefore, the experimental observation that the critical field of such a transition satisfies $H_c\!\alt\!H_s$ in both YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace puts strong bounds on the anisotropic-exchange parameters. This analysis also limits the next-nearest-neighbor interaction to $J_2\!\alt\!0.1J$, as only four-sublattice states survive for larger $J_2$. \subsection{$S({\bf Q},\omega)$ for select parameters} In order to relate the above theoretical analysis to our experimental results for YbZnGaO$_4$\xspace and YbMgGaO$_4$\xspace, we used SpinW \cite{toth2015linear} to calculate the linear spin-wave theory (LSWT) $S({\bf Q},\omega)$ for representative sets of parameters from the identified regions of the phase diagram, shown by dots in Figures~\ref{HcHsJzpJp} (a) and (b). Since the phase diagrams are expressed with the parameters in units of the overall scale $J$, the latter is set by matching the LSWT dispersion at high fields.
Comparison of the data to calculations from SpinW is inherently challenging, owing to the implicit assumption in the SpinW calculations of the absence of disorder, while disorder is necessary to reproduce the broadening in energy and $Q$ observed in experiment. Nonetheless, for the parameters indicated in Figure \ref{HcHsJzpJp} (a), we find very good qualitative agreement between the disorder-free calculations and the data (see Figure \ref{CNCS}). The SpinW optimization algorithm optmagsteep was used with iterative manual adjustment of the spin state until a stable spin state (free of imaginary modes) was found\footnote{We should note that the ``V'' state spectrum is unstable for 2.5 T near the $\Gamma$ point for the chosen set of parameters. The instability is weak and may signify a transition to a more complicated multi-$\mathbf{Q}$ state. While the manual tweaking allows us to find a quasi-stable state with a gapped spectrum, the stable ``V'' state spectrum is gapless, as is any three-sublattice state of \eqref{HJpm} with broken continuous symmetry; see also Ref.~\cite{maksimov2019anisotropic} for the $120^\circ$ state spectrum.}. In particular, comparing the diffuse scattering of Figure \ref{CNCS} (a) at 0 T and the resolution-convoluted dispersion calculated from SpinW in Figure \ref{CNCS} (f), we note the enhanced intensity in the vicinity of M and K clearly present in both. Furthermore, the greater intensity residing in a range of energy from near the elastic line to about 0.2 meV in the calculation is clearly present, albeit undoubtedly broadened by disorder, in the experimental data. As discussed in previous works \cite{paddison2017continuous,ma2018spin,li2017crystalline}, disorder ``smears'' the intensity in $Q$ and $E$, which would also account for the intensity visible at higher energies in the INS data. In Figure \ref{CNCS} (g) we show the dispersion calculated using SpinW at 2.5 T, where the spins reside in the ``V'' state. The dispersion has again been convoluted with the resolution of the instrument for best comparison to the experimental data. Note that the intensity at the $\Gamma$ point (though suppressed, as discussed below) is at approximately 0.5 meV, as expected from comparison to the 2 and 3 T data. Furthermore, the modes above the Brillouin-zone edge (which, we emphasize, follow from the superposition of three domains in the clean limit) offer a hint as to the origins of the broad continuum apparent in the data. As the field is raised, the contributions from modes above the zone edge in the SpinW calculation are reduced (compare the intensity from modes above M and K at 0 and 2.5 T). When the system is fully saturated and the ``V'' state gives way to the field-induced ferromagnetic state, the modes above the zone edge are completely absent (see the curve calculated from the analytic expression following from linear spin-wave theory, superimposed on the 8 T INS data in Figure \ref{CNCS} (h)). This evolution of intensity is qualitatively consistent with that observed in the data, where broadening due to disorder smears out the modes. That is, the modes residing above the zone edge contribute to the continuum at low to intermediate fields, but are reduced in intensity as the system enters the (subject-to-disorder) ``V'' state, and are replaced by the broadened, otherwise singular mode when the system is fully polarized.
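The resolution-convolution step referred to above can be illustrated schematically. The short Python sketch below broadens a calculated $S({\bf Q},\omega)$ along the energy axis with a Gaussian kernel; the 0.1 meV FWHM and the toy single-branch spectrum are assumed, illustrative inputs, not the measured CNCS resolution function or our calculated spectra.

\begin{verbatim}
# Sketch only: Gaussian energy-resolution broadening of a
# calculated S(Q, E) before comparison to the data.
import numpy as np
from scipy.ndimage import gaussian_filter1d

dE = 0.01                                 # energy step (meV)
E = np.arange(0.0, 1.2, dE)
fwhm = 0.1                                # assumed FWHM (meV)
sigma = fwhm/(2*np.sqrt(2*np.log(2)))     # Gaussian sigma

# Toy sharp spectrum: one singular branch on a (Q, E) grid.
nQ = 200
sqw = np.zeros((nQ, E.size))
branch = 0.5 + 0.4*np.sin(np.linspace(0, np.pi, nQ))
sqw[np.arange(nQ), np.searchsorted(E, branch)] = 1.0

# Convolve along the energy axis (sigma in grid units); a
# Q-resolution kernel could be applied likewise along axis 0.
sqw_conv = gaussian_filter1d(sqw, sigma/dE, axis=1)
\end{verbatim}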
We also point out that the apparent visibility of the magnon modes in the LSWT $S({\bf Q},\omega)$ in the vicinity of the $\Gamma$ point at low fields, Figures~\ref{CNCS} (f) and (g), and the lack thereof in the experimental data, Figures~\ref{CNCS} (a-c), can be explained by a significant interaction of the single-magnon branch with the two-magnon continuum. While the quantitative calculation of such effects in anisotropic-exchange models is quite involved \cite{Kopietz2020, winter2017breakdown} and is beyond the scope of the present work, we nevertheless provide intuitive insight by showing the bottom of the two-magnon continuum for the three magnetic domains in Figures~\ref{CNCS} (f) and (g). As shown by the dashed lines in Figure~\ref{CNCS} for both 0 T and 2.5 T, the bottom of the continuum lies lower in energy than the one-magnon modes over a large portion of the Brillouin zone, including the $\Gamma$ point. For $S=1/2$ systems, such an overlap can lead to strong decays and a near-complete disappearance of the well-defined magnon excitations in the corresponding range of momenta, together with other phenomena such as a strong renormalization of the single-magnon branches \cite{zhitomirsky2013colloquium,chernyshev2009spin}. These effects require significant coupling between the one- and two-magnon sectors, an inevitable consequence of the anisotropic-exchange terms in \eqref{HJpm}, which are allowed by the presence of strong spin-orbit coupling in the rare-earth-based and some transition-metal compounds \cite{maksimov2020rethinking,winter2017models,Kopietz2020}. We have also calculated the 0 T $S({\bf Q},\omega)$ for the parameters identified as appropriate for the easy-plane character of YbMgGaO$_4$\xspace, indicated by a dot in Figure~\ref{HcHsJzpJp} (b) (see Supplementary Figure 10); it is in excellent qualitative agreement with Figure 2 (a) of Ref.~\cite{paddison2017continuous}. We further calculated $S({\bf Q},\omega)$ for the field-induced polarized state, where the overall scale of the model (given by $J$) was chosen for best agreement. Within the uncertainty of the broad scattering observed in experiment, this calculation also yields very good qualitative agreement (Supplementary Figure 11). \section{Discussion} The hunt for the QSL state has yielded numerous studies of interesting new compounds and underlying physical phenomena, even though the ultimate goal may remain elusive. The search for a QSL state in the context of the YbZnGaO$_4$\xspace and YbMgGaO$_4$\xspace materials seems \emph{likely} to be nearing its end, although one cannot yet claim with absolute certainty that the issue is completely settled \cite{wu2020exact}. In the present work, we have established a potentially close description of YbZnGaO$_4$\xspace and YbMgGaO$_4$\xspace in a hypothetical \emph{clean} limit, from which the implications for the disordered, real-world materials follow. While we may be denied the coveted QSL state, a deeper understanding of the rich and diverse phase diagram of the triangular-lattice antiferromagnets is still a highly satisfying reward that will likely serve as a road map for future studies, in particular for designing new materials or advancing these studies to explore this rich phase diagram. The present work seeks to advance our understanding of the phase diagram describing YbMgGaO$_4$\xspace, YbZnGaO$_4$\xspace, and many related systems.
We note that our specific conclusions about these materials are part of a greater effort to narrow down the possible magnetic exchange parameters in these systems \cite{zhang2018hierarchy,bachus2020field,li2020reinvestigation,steinhardt2020fieldinduced}, and they provide a guideline on how to address the issue of chemical disorder in real-world materials. In a very recent work \cite{steinhardt2020fieldinduced} by some of the same authors, phase crossovers similar to those shown in Figure \ref{TDO_SQUID} and Figure \ref{CORELLI} were used to constrain the parameters of YbMgGaO$_4$\xspace, with the focus on reproducing them in the disorder-free limit, but \emph{without} a broader map of the phase diagram beyond the observations offered by an optimization algorithm. Indeed, the methods used in that prior work suggest a general approach to finding exchange parameters for disordered systems. By contrast, the goal of \emph{this} work is to provide an expanded context of the phase diagram in which the observed crossovers \emph{can occur in principle}, offering a common description of the phenomena observed in both systems. As such, the present work should serve as an invaluable map in the search for QSL and other intriguing states and phenomena in frustrated triangular-lattice compounds, and will importantly benefit the materials-design efforts in this field. With this last point in mind, recent experiments on the ytterbium-based chalcogenides also show promising QSL features and numerous phase transitions induced by an applied magnetic field \cite{liu2018rare,ding2019gapless,ranjith2019anisotropic,ranjith2019field,baenitz2018planar,xing2019field}. The precise nature of these transitions is not yet known and needs to be analyzed in the manner and within the framework suggested in the present study. These recent studies also highlight the further insight into YbZnGaO$_4$\xspace and YbMgGaO$_4$\xspace that could be brought by experiments with the field along different directions. Altogether, we have demonstrated the power of combined experimental and theoretical insights in identifying the relevant parameter spaces of YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace and, potentially, of other related materials. We have investigated their field-induced phases and have shown that, despite their disorder-induced pseudo-SL ground states, we can significantly narrow the allowed regions of their phase diagram that are compatible with the phenomenologies of these materials. Similar considerations can be applied to other materials, such as the chalcogenides. More experimental and theoretical investigations, such as targeted materials design, neutron scattering for in-plane field directions augmented by analytical and numerical studies of this setting, and the use of alternative tuning parameters such as external pressure, could all be useful for exploring the diverse phase diagram of this group of compounds and shedding further light on their rich physics. \section{Methods} \subsection{Synthesis} Samples were synthesized from finely mixed Yb$_2$O$_3$ (99.9$\%$), ZnO (99.9$\%$), and Ga$_2$O$_3$ (99.999$\%$) powders at 1350 $\degree$C. High-quality single crystals (such as the one pictured in Supplementary Figure 2) were grown using the optical floating-zone technique. A typical growth was conducted in a 1 MPa O$_2$ atmosphere with a speed ranging from 4 to 10 mm/hour.
Single-crystal quality was confirmed via Laue X-ray diffraction (Supplementary Figure 3), while powder X-ray diffraction (PXRD) was used to confirm the correct phase at every step of the synthesis (see Supplementary Figure 1). \subsection{High Resolution Magnetization Measurements} High-resolution measurements of the magnetization were achieved with the complementary tunnel diode oscillator (TDO) technique \cite{steinhardt2020fieldinduced}. In a TDO measurement, a tunnel diode is biased to operate in the ``negative resistance'' region of the IV curve. This provides the power that maintains the resonance of an LC circuit at a frequency in the range of 10 to 50 MHz. A nearly single-crystal sample with dimensions of $\sim$2 mm in length and $\sim$1 mm in diameter was placed inside a detection coil, with the $\textbf{c}$ axis of the sample aligned with the coil axis. The sample and coil constitute the inductor of the LC circuit. With the application of field, the change in the sample magnetization induces a change in the inductance, and thus shifts the resonance frequency. This technique enables highly sensitive detection of changes of magnetic moments of $\sim 10^{-15}$ Am$^2$ \cite{van1975tunnel}. The magnetization was also measured directly via an in-house Cryogenic S700X SQUID magnetometer at temperatures down to 300 mK using a $^3$He probe. A 1.81 mg sample was mounted on a silver straw with vacuum grease in the $\textbf{H} \parallel \textbf{c}$ and $\textbf{H}\perp\textbf{c}$ orientations. ac-susceptibility measurements were carried out on a long, approximately rectangular sample with the field perpendicular to the sample $c$ axis in a dilution refrigerator with a base temperature of 20 mK. \subsection{Neutron Scattering} \subsubsection{Diffuse magnetic scattering at CORELLI} Diffuse neutron scattering data were collected at the CORELLI spectrometer at the Spallation Neutron Source, Oak Ridge National Laboratory \cite{rosenkranz2008corelli}. This instrument is a quasi-Laue TOF instrument equipped with a 2D detector with -20$\degree$ to +150$\degree$ in-plane coverage. The incident neutron energy was between 10 meV and 200 meV. A superconducting magnet was used to provide a vertical magnetic field up to 5 T, which constrained the out-of-plane coverage to $\pm$8$\degree$. A $\sim$0.8 g single crystal was mounted on a Cu plate in a dilution refrigerator. The sample was aligned with the $(h, k, 0)$ plane horizontal and the magnetic field along the $[0,0,l]$ direction. Neutron-absorbing Cd was used to shield the sample holder to reduce the background scattering. Experiments were conducted in applied fields at the base temperature of 130 mK by rotating the crystal through 180$\degree$ in 3$\degree$ steps, and then at 20 K in the same fields for background subtraction. The data were reduced using Mantid for the Lorentz and spectrum corrections \cite{michels2016expanding}. To account for the temperature factor in our background subtraction in total-scattering mode, we compared the ratio of the integrated intensities of a rectangular volume of reciprocal space at 5 T for both temperatures. The region was bounded by $-0.45 < h < -0.35$, $0.5 < k < 0.6$, and $-2 < l < 2$. We used this region, away from the zone edge, and our 5 T data to ensure that the comparison was unaffected by diffuse magnetic scattering. This integration approximates the ratio of the Bose population factors; our scaling factor was $1.01\pm0.005$. To improve statistics, we used symmetry operations.
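For orientation, the scaling estimate just described reduces to the ratio of two box-integrated intensities with propagated uncertainties. A minimal sketch of this arithmetic is given below; the counts are placeholders chosen to land near the quoted value, not the measured intensities.

\begin{verbatim}
# Sketch only: background scale factor from intensities
# integrated over the same reciprocal-space box at 5 T for
# the two temperatures, with simple error propagation.
import numpy as np

def box_ratio(I_base, dI_base, I_bkg, dI_bkg):
    r = I_base/I_bkg
    dr = r*np.sqrt((dI_base/I_base)**2 + (dI_bkg/I_bkg)**2)
    return r, dr

r, dr = box_ratio(10100., 36., 10000., 35.)   # placeholders
print(f"scale = {r:.3f} +/- {dr:.3f}")
\end{verbatim}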
All analysis and visualization were performed using Mantid and Python. \subsubsection{Inelastic neutron scattering at CNCS, DCS, and SPINS} We conducted inelastic neutron scattering experiments at the Cold Neutron Chopper Spectrometer\cite{ehlers2011cncs} (CNCS) at Oak Ridge National Laboratory, and at the Disk Chopper Spectrometer\cite{COPLEY2003477} (DCS) and the Spin Polarized Inelastic Neutron Spectrometer (SPINS) at the National Institute of Standards and Technology. The same $\sim$1.4 g single-crystal sample was aligned to use the $[h,k,0]$ scattering plane, with the field parallel to the sample $\vec{c}$ axis (along the [001] direction). All measurements were carried out in dilution refrigerators, with the sample mounted on copper sample mounts such as the one pictured in Supplementary Figure 2. The base temperatures for the CNCS, DCS, and SPINS measurements were 50 mK, 70 mK, and 60 mK, respectively. For CNCS and DCS, the sample was rotated through approximately 180$\degree$. The incident energies used at CNCS, DCS, and SPINS were 3.9 meV, 3.55 meV, and 3.7 meV, respectively. Analysis of the CNCS data was carried out in large part using the Horace software package\cite{Ewings2016132}. Analysis of the DCS data was carried out in large part using the DAVE software package\cite{azuah2009dave}. \subsubsection{Polarized neutron scattering at BT7} We performed polarized-neutron measurements at the BT7\cite{lynn2012bt7} beamline at the National Institute of Standards and Technology. We measured with a fixed final energy of 14.7 meV at an energy transfer of about 0.5 meV, such that the FWHM of the collimated beam, 1.08 meV, encompassed the energy range of the continuum, as pictured for low fields in Figure \ref{CNCS}. The samples were again aligned to use the $[h,k,0]$ scattering plane, and measurements were conducted using a vertical guide field (parallel to the sample $c$ axis). A $^3$He cryostat was used. The 0 T measurements were performed in the absence of a magnet, and a 7 T magnet was then added to perform the measurements at 2 T. Flipping ratios ranged from 19 to 33 for the 0 T measurements and from $\sim$17 to $\sim$28 for the 2 T measurements. The polarization correction was performed using the \emph{pbcor} software. \label{sec:methods} \subsubsection{Theory} For the LSWT $S({\bf Q},\omega)$ we used SpinW calculations \cite{toth2015linear}. The global phase diagram was obtained using classical energies for the single-${\bf Q}$ states, see Refs.~\cite{zhu2018topography,maksimov2019anisotropic}, and for the field-induced phases, classical energy minimization of the three- and four-sublattice structures was used. \section{Data availability} All relevant data are available from the authors upon reasonable request. This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). \begin{acknowledgments} The work of A.~L.~C. was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Awards No. DE-FG02-04ER46174 and No. DE-SC0021221. P.~A.~M.
acknowledges support from the JINR grant for young scientists 20-302-03. A.~L.~C. would like to thank the Kavli Institute for Theoretical Physics (KITP), where this work was advanced. KITP is supported by the National Science Foundation under Grant No. NSF PHY-1748958. A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by the National Science Foundation Cooperative Agreements No. DMR-1157490 and No. DMR-1644779, the State of Florida, and the U.S. Department of Energy. A portion of this research used resources at the Spallation Neutron Source, a DOE Office of Science User Facility operated by Oak Ridge National Laboratory. We acknowledge the support of the National Institute of Standards and Technology, U.S. Department of Commerce, in providing the neutron research facilities used in this work. The identification of any commercial product or trade name does not imply endorsement or recommendation by the National Institute of Standards and Technology. \end{acknowledgments} \section{Author Contributions} Research conceived by S.H.; samples synthesized by W.S., C.M., and S.H.; magnetic measurements performed and analyzed by Z.S., W.S., D.G., and S.H.; neutron scattering measurements performed and analyzed by W.S., S.D., N.P.B., A.P., Y.L., G.X., Y.Z., J.W.L., and S.H.; theoretical calculations performed by P.A.M. and A.L.C.; manuscript written by W.S., P.A.M., A.L.C., and S.H.; project supervised by A.L.C. and S.H. All authors commented on the manuscript. \section{Competing Interests} The Authors declare no Competing Financial or Non-Financial Interests. \bibliographystyle{apsrev4-1}
The second generic feature is the strong effect of disorder observed in some of the well-studied representatives of these compounds. It has been argued theoretically that even a benign form of bond disorder necessarily generates perturbations that are relevant in the Imry-Ma sense \cite{imry1975random,zhu2017disorder,parker2018finite}, making a consideration of the defects an inevitable and essential part of a realistic description of most anisotropic-exchange magnets. Empirically, many of the newly synthesized materials seem to show no magnetic ordering \cite{sanders2017magnetism,baenitz2018planar,sichelschmidt2020effective,sichelschmidt2019electron,cevallos2018anisotropic,li2018absence,liu2018rare,ding2019gapless,ranjith2019field,ranjith2019anisotropic,xing2019field,bordelon2019field,xing2019synthesis,xing2019class,scheie2020crystal,bastien2020long}. Then, given the strong disorder effects, it is a question whether the non-magnetic ground states of these materials are due to a genuine quantum-disordered spin-liquid (SL) phase, or due to a scenario similar to the disorder-induced “spin-liquid mimicry,” suggested for YbMgGaO$_4$\xspace \cite{zhu2017disorder}. There is also an intriguing broader question on whether the disorder-induced spin-liquid-like behavior retains any of the unique and desired properties of the intrinsic spin liquids, thus potentially turning disorder into a feature rather than an obstacle \cite{kimchi2018valence,andrade2018cluster,bilitewski2017jammed}. A counterintuitive example of the role of disorder in a related material is the case of NaYbO$_2$, where introducing Na$^+$ site vacancies leads to an antiferromagnetic transition at a few Kelvin and thus disorder supports an ordered phase\cite{guo2020magnetic}. Further understanding and insight into the role of disorder are needed to make progress in this direction. One of the persistent issues in the studies of the anisotropic-exchange magnets in general and of the rare-earth family in particular is the identification of their model parameters \cite{maksimov2020rethinking}. In the case of the disorder-induced pseudo-SL state, this problem is further aggravated, as it is not clear what state the disorder-free system would have assumed. In this work, we propose that the experimental and theoretical investigations of the \emph{field-induced phases} offers a powerful instrument to significantly narrow the allowed parameter space to a region that is consistent with the material's phenomenology. In YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace, the source of disorder is due to the $R\bar{3}m$ symmetry, leading to fifty-fifty site mixing of the non-magnetic cations Mg$^{2+}$ or Zn$^{2+}$ with the Ga$^{3+}$. Efforts to determine the exchange parameters for placing YbMgGaO$_4$\xspace in proposed phase diagrams and otherwise comparing to the numerous theoretical investigations to affirm or deny a QSL state were obstructed by the various broadening effects and consequentially enhanced uncertainty in the measurements. Several studies concentrated their efforts on further refining measurements of the exchange parameters \cite{zhang2018hierarchy,steinhardt2020fieldinduced}. In this work, we provide a detailed study of the field-induced effects and characteristics of YbZnGaO$_4$ and offer comparison with the results in YbMgGaO$_4$ \cite{steinhardt2020fieldinduced}. 
Informed by measurements from high resolution magnetometry and a variety of neutron scattering techniques, we put forward a theoretical analysis of the structure of their field-induced phase diagram and propose a parameter region for both materials that is compatible with our empirical findings. \section{Results} \begin{figure*}[t] \includegraphics[width=\linewidth]{TDO_SQUID_AC_letteredc.eps} \vskip -0.2cm \caption{Tunnel diode oscillation (TDO) frequency and ac susceptibility. (a) and (b) TDO ($\Delta f/f \propto \Delta \textbf{M}/ \Delta \textbf{H}$) shows an anomaly for $\textbf{H}\parallel c$ and $\textbf{H}\perp\textbf{c}$ respectively which weakens as temperature increases (curves offset for clarity in (b)). (c) Anomaly’s response to applied field shows anisotropy. (e) $d\textbf{M}/d\textbf{H}$ measured with SQUID corroborates TDO measurement. (f) integrating TDO $\Delta$frequency from 0 to the approximate saturation is further corroborated by dc susceptibility measurements. (g) Comparison of critical temperatures measured in this work for YbZnGaO$_4$ to earlier measurements by Ma et al. \cite{ma2018spin}} \label{TDO_SQUID} \end{figure*} \subsection{Preliminary characterization} Susceptibility measurements on a 1.81 mg single crystal sample of YbZnGaO$_4$ conducted using an in-house Cryogenic S700X SQUID magnetometer (with 3He probe) yield low-temperature fits suggesting Curie-Weiss temperatures of $\Theta_{CW}=-2.67$ K and $\Theta_{CW}=-2.62$ K for parallel and perpendicular to the sample $\vec{c}$ axis, respectively (see Supplementary Figure 4). These values are more isotropic than those reported for YbMgGaO$_4$\xspace or for YbZnGaO$_4$\xspace in earlier works though YbZnGaO$_4$\xspace was also more isotropic than YbMgGaO$_4$\xspace in those measurements) \cite{ma2018spin}. We emphasize that while the difference in the Curie-Weiss temperature for the in-plane vs out-of-plane field has initially suggested a rather strong easy-plane character of YbMgGaO$_4$\xspace \cite{paddison2017continuous}, the subsequent spectroscopic studies have hinted at a rather moderate $XXZ$ anisotropy, yielding a nearly Heisenberg value of $\Delta\!=\!0.8$--0.9 \cite{zhang2018hierarchy}. In the case of YbZnGaO$_4$\xspace, Curie-Weiss temperatures for the in-plane and out-of-plane fields from our measurements and those of earlier works \cite{ma2018spin} are much closer to the Heisenberg limit. Given the trend, this indicates that the anisotropy in YbZnGaO$_4$\xspace may, in fact, be of the easy-axis type. A more direct demonstration of that is offered by the results that are discussed in Sec.~II.B. A unique opportunity afforded by a close comparison between YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace is the qualitative contrast of the effect of the cation substitution on the disorder between the two materials. One obvious consideration is the differing ionic radii of the Mg$^{2+}$ and Zn$^{2+}$, which are approximately 72 and 74 pm respectively (a 2.7$\%$ difference). This small difference yields slightly smaller lattice parameters for YbMgGaO$_4$\xspace when compared to YbZnGaO$_4$\xspace, as shown by comparison of parameters given by references \cite{li2015gapless} and \cite{ma2018spin}, respectively, which, in turn, may yield marginally stronger exchange for YbMgGaO$_4$\xspace, also explaining the smaller Curie-Weiss temperatures and lower field onset for the anomaly discussed in Sec.~II.B for YbZnGaO$_4$\xspace. 
This is consistent with the observation that the related compound NaYbO$_2$, with an in-plane lattice parameter smaller by only about 1.8$\%$ from YbMgGaO$_4$, shows a significantly larger Curie-Weiss temperature ($\Theta_{CW} = -10.38$ K\cite{bordelon2019field} as opposed to ~$-4$ K for YbMgGaO$_4$\xspace)\cite{steinhardt2020fieldinduced}. Furthermore, both Zn$^{2+}$ and Ga$^{3+}$ have $d^{10}$ electronic configurations, whereas Mg$^{2+}$ is $p^6$. While the displacement of Yb$^{3+}$ can still be expected based on the charge difference between the cations, leading to the observed broadening in inelastic neutron scattering studies of the single magnon dispersion and crystal electric field levels \cite{li2017crystalline}, the local environment may be more homogeneous for YbZnGaO$_4$\xspace, and may be related to the difference in anisotropic response under field. However, further studies will be required to compare the effective role of disorder between the two systems. \subsection{Tunnel diode oscillator technique and SQUID magnetization} The first indications of the magnetic transitions in YbZnGaO$_4$\xspace, which are similar to the ones we have previously identified in YbMgGaO$_4$\xspace \cite{steinhardt2020fieldinduced}, were found using the tunnel diode oscillator (TDO) technique (see methods). As the applied magnetic field is changed, the sample magnetization $M (\vec{H})$ with respect to field is altered, thus changing the inductance of the coil and the measured resonant frequency of the circuit, yielding a signal proportional to $\chi (\vec{H})$. From Figure \ref{TDO_SQUID} (a), where the change in resonant frequency is plotted versus the applied field, a clear nonlinearity is apparent beginning just below 1 T. This nonlinearity persists to at least 2 T for the field parallel to sample $\vec{c}$ axis. Upon raising the temperature, the feature is completely suppressed at about 4 K, affirming its magnetic origin. This behavior is also consistent with a similar feature in YbMgGaO$_4$\xspace \cite{steinhardt2020fieldinduced}. As the sample orientation is rotated with respect to the field, the anisotropic response is made apparent, with the feature encompassing a broader range of the field for $\textbf{H}\perp\textbf{c}$. This feature is confirmed by more conventional magnetization and susceptibility measurements carried out using the in-house SQUID magnetometer with a 3He probe in temperatures down to 300 mK. The same distinct plateau-like feature is apparent in the $\chi(\vec{H})$ in both TDO and SQUID measurements, as is clear from Figures \ref{TDO_SQUID} (d) and (e). Integrating the change in frequency with respect to applied field yields a curve consistent with magnetization as measured in SQUID (see Figure \ref{TDO_SQUID} (c)). We note that the anomaly in YbZnGaO$_4$\xspace measured via TDO and SQUID magnetization occurs at a slightly lower field compared to the analogous feature measured in YbMgGaO$_4$\xspace \cite{steinhardt2020fieldinduced} (see Supplementary Figure 5). The features in the magnetization derivative and tunnel diode oscillator technique (TDO) measurements in both in-plane and out-of-plane fields are reminiscent of the plateau-like behavior that is expected in the canonical Heisenberg or $XXZ$ nearest-neighbor triangular-lattice magnets \cite{starykh2015unusual}. 
Importantly, the TDO and magnetization measurements in YbZnGaO$_4$\xspace and YbMgGaO$_4$\xspace, together with the Curie-Weiss extrapolations from the susceptibility mentioned above, suggest an important distinction between the two materials. The effects of the in-plane and the out-of-plane field directions show different behavior as a function of the field orientation - compare Figure~\ref{TDO_SQUID}(b) of this work and Figure~1(c) in Ref.~\citep{steinhardt2020fieldinduced}. Specifically, the relative severity of the anomaly in TDO data in YbZnGaO$_4$\xspace in $H\!\parallel\! c$ are resemblant of that in $H\!\perp\! c$ data of YbMgGaO$_4$\xspace, and vice versa. This suggests that YbZnGaO$_4$\xspace and YbMgGaO$_4$\xspace correspond to different types of the $XXZ$ anisotropy, the easy-axis and the easy-plane, respectively. \subsection{ac susceptibility and disorder} From our measurements of ac susceptibility of YbZnGaO$_4$\xspace (see Figure \ref{TDO_SQUID} (f)), we find critical temperatures corresponding to the characteristic cusps of the ac susceptibility occur at substantially lower temperatures than previously measured \cite{ma2018spin}. Indeed, our characteristic temperatures are around $20\%-30\%$ lower for all comparable frequencies - see Figure \ref{TDO_SQUID} (g). Consequently, for our measurement of $\Delta P = \frac{\Delta T_f}{T_f \Delta \log(\omega)}$, a quantitative measure of the freezing temperatures per decade of characteristic temperatures, we find a value of $\Delta P = 0.139(4)$, substantially larger than the $\Delta P = 0.053(4)$ of previous work. This value of $\Delta P$ is typically associated with superparamagnetic behavior \cite{mydosh1993spin} as opposed to spin glasses. That being said, insulating spin glasses typically show greater frequency dependence~\cite{mydosh1993spin}. If this $\Delta P$ \emph{is} interpreted as indicative of superparamagentic behavior, it may be understood as a consequence of many microscopic domains with insignificant cooperative freezing. This phenomenon is likely due to disorder, but further suppression of the freezing temperature could be attributed to the high degree of frustration as well as disorder. The percentage of frozen spins is estimated to be approximately 16\% (see Supplementary Figure 9), comparable to the previous study \cite{ma2018spin} and to the case of YbMgGaO$_4$\xspace \cite{paddison2017continuous}. General questions about the effects of disorder in spin systems persist, especially in light of its aforementioned potential relationship to QSL states for some materials \cite{kimchi2018valence,andrade2018cluster,bilitewski2017jammed}. The origin and effects of disorder related to the observed spin-liquid features of YbMgGaO$_4$\xspace have been addressed earlier \cite{zhu2017disorder,li2017crystalline}. \subsection{Neutron scattering} \begin{figure*} \includegraphics[width=\linewidth]{CORELLI_YZGO.eps} \vskip -0.2cm \caption{Diffuse neutron scattering. First seven panels show evolution of diffuse neutron scattering with increasing applied field for $\vec{H} \parallel \vec{c}$ for data integrated over $0.5 < L < 0.5$. The eighth panel shows integrations of line cuts across the first BZ edge, where the uncertainty represents one standard deviation. 
Sample temperature was 130 mK and a 20 K background was subtracted to isolate magnetic contributions.} \label{CORELLI} \end{figure*} For this work, diffuse neutron scattering data were collected at CORELLI at Oak Ridge National Laboratory in total scattering mode (see Figure \ref{CORELLI}). The sample was aligned with the $[h,k,0]$ scattering plane, and applied field along the sample $\vec{c}$ axis. Data were collected at 130 mK for 0, 1, 1.5, 2, 3, 4, and 5 T. Color maps of the neutron scattering intensity after subtracting the 20 K as background reveal the evolution of the magnetic structure, which is qualitatively comparable to the YbMgGaO$_4$\xspace data. With no applied field, the intensity largely resides at the high-symmetry M points on the edges of the Brillouin zone (see also Supplementary Figure 6, left panel). At 1 T the scattering intensity is almost completely uniform along the zone edge, while at 1.5 T the intensity is found predominantly at the high-symmetry K points. Intensity at the M points is well established to correspond to stripe-ordered states in long-range ordered triangular systems, while intensity at the K point is generally suggestive of 120$^\circ$-type ordering or other three-sublattice states \cite{maksimov2019anisotropic}. As in the case of YbMgGaO$_4$\xspace, this migration of the scattering intensity with applied field corresponds to the anomaly observed in the magnetometry data. The changes in intensity were further confirmed by measurements with the triple-axis spectrometer at SPINS (see Supplementary Figure 7). We further measured inelastic neutron scattering (INS) from a YbZnGaO$_4$\xspace single crystal sample at CNCS\cite{ehlers2011cncs} in applied field(see Figure \ref{CNCS}) at Oak Ridge National Laboratory. Here we again see a clear evolution of the intensity as a function of energy and $Q$ with increasing applied field. We note that at low field the intensity is notably concentrated on the M points. The intensity has no clear dispersion up to about 3 T, but instead consists of a broad continuum, similar to the 0 T with weakening intensity as the spins are presumably canted further from the scattering plane with increased field. At 4 T, the scattering remains broad in energy, but a dispersion is faintly visible at low energy along the zone edge. This dispersion has minima at the zone edges and rises as it approaches the $\Gamma$ points. The shape of this dispersion closely resembles what is measured in the polarized state measured at 8 T, indicating that the system is approaching polarization. Measurements at DCS\cite{COPLEY2003477} (see Supplementary Figure 6) and SPINS (see Supplementary Figure 7) at the National Institute of Standards and Technology confirm the features shown in Figure \ref{CNCS} in a diversity of backgrounds and instrument setups. At 8 T the system is nearly completely polarized, and a clear dispersion is evident. As in the case of YbMgGaO$_4$\xspace \cite{paddison2017continuous}, the dispersion is broadened, likely due to the disorder and the resulting distribution of exchange parameters and $g$-factors. Additional measurements to characterize YbZnGaO$_4$\xspace's response to applied field were conducted using polarized neutrons at BT7\cite{lynn2012bt7} at the National Institute of Standards and Technology, with a vertical guide field set up (see Supplementary Figure 8). 
After subtracting background measurements (40 K) from base (0.3 K) and correcting for the polarization rate, comparison of the spin flip (SF, measuring the in-plane component) shows a stronger in-plane component along the zone edge at 0 T, compared to 2 T, with particularly high intensity in the vicinity of the high-symmetry M point, likely affirming increased canting of the spins with increasing field, but also possibly indicating a reduced spin component parallel to the zone edge (pointing to nearest neighbors in real space). This greater intensity near the zone edge for the SF scattering at 0 T can also be seen in the orthogonal cut from M to $\Gamma$. \subsection{Model and zero-field phases} The interplay of the crystal field and spin-orbit coupling on the magnetic moment of the Kramers ion results in the splitting of its levels into a well-separated doublet structure built from a mix of various spin and orbital states. The exchange interactions of the lowest doublets (pseudo-spins-$\frac{1}{2}$) are constrained only by the discrete lattice symmetry. For the triangular lattice of YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace, this general anisotropic-exchange nearest-neighbor model is given by \begin{align} \label{HJpm} {\cal H}=&\sum_{\langle ij\rangle}\Big\{J \Big(S^{x}_i S^{x}_j+S^{y}_i S^{y}_j+\Delta S^{z}_i S^{z}_j\Big)\\ +&2 J_{\pm \pm} \Big[ \Big( S^x_i S^x_j - S^y_i S^y_j \Big) \cos\tilde{\varphi}_\alpha- \Big( S^x_i S^y_j+S^y_i S^x_j\Big)\sin\tilde{\varphi}_\alpha \Big]\nonumber\\ +&J_{z\pm}\Big[ \Big( S^y_i S^z_j +S^z_i S^y_j \Big) \cos\tilde{\varphi}_\alpha -\Big( S^x_i S^z_j+S^z_i S^x_j\Big)\sin\tilde{\varphi}_\alpha \Big]\Big\},\nonumber \end{align} where bond angles $\tilde{\varphi}_\alpha$ are that of the primitive vectors of the lattice with the $x$ axis, $\tilde{\varphi}_\alpha\!=\!\{0,2\pi/3,-2\pi/3\}$. The $J_{\pm \pm}$ and $J_{z\pm}$ bond-dependent terms are due to the strong spin orbit coupling\cite{li2016anisotropic}. The zero-field phase diagram of the model \eqref{HJpm} with an additional second-nearest-neighbor exchange $J_2$ has been studied extensively \cite{li2016anisotropic,zhu2018topography,maksimov2019anisotropic,liu2016semiclassical,luo2017ground,rousochatzakis2016kitaev,iaconis2018spin}, and its 3D classical version is shown in Figure~\ref{fig_3d} in the $J_2$-$J_{\pm\pm}$-$J_{z\pm}$ axes for $\Delta=1$, with all couplings in units of $J>0$. There are three main ordered states in the antiferromagnetic limit of the $XXZ$ interaction: a coplanar three-sublattice $120^\circ$ state, which corresponds to the ordering vector at $K$ points, a collinear stripe-\textbf{x} two-sublattice state with spins pointing along the nearest-neighbor bonds and the ordering vector at $M$ points, and a second collinear stripe-\textbf{yz} state with the same ordering vector but spins tilted out of the lattice plane and perpendicular to the nearest-neighbor bonds. We should note that the phase diagram in Figure~\ref{fig_3d} is simplified. The simplification comes from only taking the single-\textbf{Q} spiral ansatz that does not include more complicated multi-\textbf{Q} states \cite{luo2017ground,iaconis2018spin}. Moreover, the quantum version of the phase diagram also has a spin-liquid phase \cite{zhu2018topography,maksimov2019anisotropic}, which is located along the tricritical boundary between stripe and $120^\circ$ states for a limited range of the $XXZ$ anisotropy near the Heisenberg limit. 
\subsection{Exploring the XXZ parameter space} One of the puzzling features observed in some of the first experiments in YbMgGaO$_4$\xspace \cite{li2015gapless} was an indication of field-induced phase crossovers, seen in the magnetic susceptibility. Recent susceptibility and TDO measurements of YbMgGaO$_4$\xspace have further supported these observations \cite{steinhardt2020fieldinduced}. Here we present a variety of measurements on high-quality single-crystal samples to show that very similar features, indicating field-induced crossovers for both $H\!\parallel\! c$ and $H\!\perp\! c$, also occur in YbZnGaO$_4$\xspace. The neutron scattering measurements in the out-of-plane magnetic field $H\!\parallel\! c$ \cite{steinhardt2020fieldinduced} also indicated a field-induced crossover and brought another piece of evidence to light. Neutron diffraction has shown that the field-induced crossover is accompanied by a shift of magnetic intensity from the $M$ points at lower fields to the $K$ points at higher fields. In an ordered state, such an intensity shift would correspond to a transition from a four-sublattice to a three-sublattice state. Our key finding is that this feature alone allows one to put strong boundaries on the exchange parameters of the system when the phase diagram of the relevant parameters is considered. From the susceptibility data presented above, and as established by earlier work in the case of YbMgGaO$_4$\xspace, YbZnGaO$_4$\xspace can be characterized as having easy-axis anisotropy while YbMgGaO$_4$\xspace is easy-plane. Therefore, in the following we use two representative values of the $XXZ$ anisotropy for the model (\ref{HJpm}): the easy-plane case $\Delta\!=\!0.8$ related to YbMgGaO$_4$\xspace, and the easy-axis case $\Delta\!=\!1.1$ related to YbZnGaO$_4$\xspace. First, we explore the parameter region that would allow for a transition from the four-sublattice to the three-sublattice state at some finite value of the out-of-plane magnetic field $H_c$, using classical energy minimization. We fix the second-nearest-neighbor coupling $J_2$ to the value $0.05J$ (red line in Figure~\ref{fig_3d}) and the $XXZ$ anisotropy in the model (\ref{HJpm}) to the two values discussed above. That leaves the bond-dependent anisotropies $J_{\pm\pm}$ and $J_{z\pm}$ and the field as the parameters to scan through. We highlight our findings in Figure~\ref{HcHsJzpJp} (a) and (b) in the form of an intensity plot of the field $H_c$ of such a four-to-three sublattice transition in units of the saturation field, $H_c/H_s$, in the $J_{\pm\pm}$--$J_{z\pm}$ axes (in units of $J$). The $120^\circ$ phase is a three-sublattice state already at zero field and remains such for all fields, while most of the stripe phase regions remain four-sublattice throughout the entire field range. This key finding is illustrated by the gradient-color regions interpolating between the ``only-three-'' and ``only-four-sublattice'' regions in Figure~\ref{HcHsJzpJp} (a) and (b). They demonstrate that, already at the level of our classical energy analysis, the four-to-three sublattice transition indeed takes place at some value of $H_c\!<\!H_s$ in a surprisingly extended region that emanates from the $120^\circ$ part of the phase diagram into the stripe-\textbf{yz} phase and extends up to $J_{z\pm} \!\sim\! J$ in both cases, with the intensity emphasizing how far this transition is from the saturation field $H_s$.
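A rough numerical version of such a scan can be set up with a general-purpose minimizer. The sketch below is ours and purely illustrative, not the production code behind Figure~\ref{HcHsJzpJp}; it reuses the \texttt{exchange\_matrix} helper from the previous sketch, omits $J_2$ for brevity, and classifies each minimizer by its dominant Bragg peak:

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Sketch (ours, illustrative only): classical energy minimization on a
# small periodic triangular cluster, scanning the out-of-plane field H to
# locate the four- to three-sublattice crossover via the dominant Bragg
# peak. Reuses exchange_matrix() from the previous sketch; J2 is omitted
# for brevity, and g*mu_B = 1 so that H is in units of J.
rng = np.random.default_rng(0)
L = 6                                # commensurate with 2- and 3-sublattice orders
phis = [0.0, 2*np.pi/3, -2*np.pi/3]  # the three bond angles
steps = [(1, 0), (-1, 1), (0, -1)]   # neighbor steps (m1, m2) for each bond
Jmats = [exchange_matrix(1.0, 1.1, -0.03, 0.3, p) for p in phis]

def spins(angles):
    th, ph = angles.reshape(2, L, L)
    return np.stack([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th)])

def energy(angles, H):
    S = spins(angles)
    E = -H * S[2].sum()                      # Zeeman term, field along z
    for (d1, d2), Jm in zip(steps, Jmats):   # each bond counted once
        Sn = np.roll(np.roll(S, -d1, axis=1), -d2, axis=2)
        E += np.einsum('aij,ab,bij->', S, Jm, Sn)
    return E

def dominant_peak(angles):
    # In-plane structure factor; meaningful below saturation. Peak indices
    # in multiples of L/2 signal M-type (stripe) order; multiples of L/3
    # signal K-type (three-sublattice) order.
    S = spins(angles)
    F = np.abs(np.fft.fft2(S[0] + 1j*S[1]))**2
    F[0, 0] = 0.0                            # drop the uniform component
    return np.unravel_index(F.argmax(), F.shape)

for H in np.linspace(0.0, 5.0, 26):          # crude field scan
    best = min((minimize(energy, rng.uniform(0, 2*np.pi, 2*L*L), args=(H,))
                for _ in range(5)), key=lambda r: r.fun)
    print(round(H, 2), dominant_peak(best.x), best.fun / L**2)
\end{verbatim}

In practice, symmetry-restricted three- and four-sublattice ans\"atze (as used for the figures) are far cheaper and avoid the local-minimum issues of an unconstrained cluster search.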
It should be noted that for $S=1/2$, quantum effects in zero field are known to broaden the stability region of the $120^\circ$ phase beyond the classical-model boundaries \cite{zhu2018topography}. Therefore, one may also expect the region of the four-to-three sublattice transition to extend beyond the classical predictions in this work. While one can expect that quantum effects will further stabilize and extend the field-induced three-sublattice states, we note that experimentally the ``4-to-3'' (or M-to-K) transition in both YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace occurs at a rather low field, $H_c \!< \!0.5 H_s$, which provides further restrictions on the possible parameter ranges. At the level of our approximations, this constraint puts YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace in close proximity to the $120^\circ$ phase boundary, giving the upper bound $J_{z\pm}/J \!\alt\! 0.4$. As one can see in Figure~\ref{HcHsJzpJp} (a), there is a narrow region inside the stripe-\textbf{x} phase where the transition of interest also occurs, but it involves another transition back to the four-sublattice state at higher fields, so we do not consider it a region relevant to the materials in question here. \begin{figure*} \includegraphics[width=\linewidth]{CNCS_2magnon_fixedlabels.PNG} \vskip -0.2cm \caption{Inelastic neutron scattering. Panels (a-e) show the evolution of inelastic neutron scattering with increasing applied field for $\vec{H} \parallel \vec{c}$ for integer fields from 0 to 4 T, where the 8 T data (in which the magnetic excitations have been lifted above 1.2 meV) have been used for background subtraction. Panels (f) and (g) are calculated using SpinW for 0 T (stripe-\textbf{yz}) and 2.5 T (V state), respectively, with a summation over the possible magnetic domain orientations to best compare to the short-range order observed in experiment. Parameters used for the SpinW calculations are (in meV) $J=0.15$, $J_{\pm\pm} = -0.0045$, $J_{z\pm} = 0.045$, $J_{zz}=0.165$, $J_2=0.0075$, $J_{zz,2}=0.00825$, $g_{\parallel}=3.82$, such that $J_{\pm\pm}/J = -0.03$ and $J_{z\pm}/J=0.3$. The dashed lines in (f) and (g) indicate the minimum energy of the two-magnon continuum for the possible domains. (h) shows the 8 T dispersion, with a dotted line indicating the curve calculated from LSWT for the parameters described above. Experimental data are shown for an integration perpendicular to the path direction of width 0.36 \r{A}$^{-1}$ in $Q$. Cuts of the experimental data were generated using Horace\cite{Ewings2016132}. } \label{CNCS} \end{figure*} \begin{figure}[t] \includegraphics[width=0.7\linewidth]{3DPhD} \caption{A simplified classical zero-field $J_2$-$J_{\pm\pm}$-$J_{z\pm}$ phase diagram of the model (\ref{HJpm}). The dashed red line marks the representative value of the $J_2$ term used for all panels of Figure~\ref{HcHsJzpJp}.} \label{fig_3d} \end{figure} \begin{figure*}[b] \includegraphics[width=0.75\linewidth]{combined_fig.png} \vskip -0.2cm \caption{(a) Intensity plot of $H_c/H_s$ in the $J_{z\pm}$--$J_{\pm\pm}$ axes for $\Delta =1.1$, $J_2=0.05J$. $H_c$ is the critical field of the transition from the four- to the three-sublattice state, and $H_s$ is the saturation field. The dot represents the parameter set used in the SpinW calculations. $H$ was scanned in steps of 0.1 in units of $g\mu_{\rm B} J$, equivalent to steps of 0.021 in terms of $H_c/H_s$ for $H_s = 4.815$. (b) Same as in (a) for $\Delta =0.8$, where the step size in $H_c/H_s$ was 0.026, corresponding to $H_s = 3.87$.
(c) Intensity plot of the magnetic susceptibility, $\chi=dM/dH$, in the $J_{z\pm}$--$H$ plane for $\Delta =1.1$, $J_2=0.05J$, and $J_{\pm\pm}=0$. Singularities correspond to phase transitions. (d) Same as in (c) for $\Delta =0.8$.} \label{HcHsJzpJp} \vskip -0.2cm \end{figure*} \subsection{Field-induced states} To provide further insight into the effects of the field, we explore the new phases it can produce. In some cases, the field-induced transformation of the spin structures involves simple canting toward the field until full saturation is reached at some $H_s$. However, in frustrated spin systems, or in systems with anisotropic interactions, the field evolution is more complicated. The case of the triangular-lattice Heisenberg antiferromagnet is paradigmatic in this respect \cite{starykh2015unusual}, showcasing a well-known and much-studied sequence of transitions from the ``Y'' state to the ``up-up-down'' (UUD) plateau and the ``V'' state in its field evolution toward saturation. The $XXZ$ extension of the same model also includes non-coplanar ``umbrella'' and coplanar ``fan'' states \cite{yamamoto2014quantum}. Next-nearest-neighbor interactions and anisotropies also introduce a wider variety of four-sublattice field-induced states \cite{ye2017half,seabra2011competition}. However, the field evolution of the phases of the anisotropic-exchange model \eqref{HJpm}, which combines frustration from the bond-dependent terms with that of the triangular-lattice geometry, remains largely unexplored. In this work, we offer some essential understanding and take significant steps in such an exploration. We use the same representative values of $J_2\!=\!0.05J$ and of the easy-axis and easy-plane $XXZ$ anisotropies, $\Delta\!=\!1.1$ and $\Delta\!=\!0.8$, as in Figs.~\ref{HcHsJzpJp} (a) and (b) to provide insight into the rich phase diagram of the model \eqref{HJpm} in a field. In Figure~\ref{HcHsJzpJp} (c) and (d), we present intensity plots of the magnetic susceptibility $\chi\!=\!dM/dH$ in the $J_{z\pm}$--$H$ plane. They are obtained from the classical energy minimization of the four- and three-sublattice states in a field for the two values of the $XXZ$ anisotropy and for $J_{\pm\pm}\!=\!0$. Since singularities in $\chi$ correspond to phase transitions, these figures in fact constitute the 2D $J_{z\pm}$--$H$ phase diagrams of the model (\ref{HJpm}) at fixed $\Delta$ and $J_2$ along the constant-$J_{\pm\pm}$ cut shown in Figs.~\ref{HcHsJzpJp} (a) and (b). Similar vertical cuts along $J_{z\pm}$ for other values of $J_{\pm\pm}$ and $J_2$ are also presented below to provide an understanding of the field evolution of the different states across the other dimensions of the phase diagram. In the case of the easy-axis $XXZ$ anisotropy in Figure~\ref{HcHsJzpJp} (c), at lower values of $J_{z\pm}$ one can observe the expected canonical sequence of Y-UUD-V phase transitions of the three-sublattice states in the triangular-lattice $XXZ$ model \cite{starykh2015unusual,yamamoto2014quantum}. Their corresponding spin structures are sketched in the figure. The field-induced behavior of the stripe states also includes multiple transitions. At the lowest fields, the collinear stripe-\textbf{yz} spin configuration, in which spins are tilted off the basal plane of the lattice, is deformed into a non-coplanar four-sublattice state with all four spins on the elementary plaquette having different tilt angles.
There is a broad crossover from this state to a structure with three spins forming an ``umbrella'' and the fourth strictly antiparallel to the magnetic field; see the sketches of the spin order in Figure~\ref{HcHsJzpJp} (c). This latter state is stable in a wide field region. As the field increases further, at not-too-small $J_{z\pm}$ there is a spin-flop-like transition to a similar state, an umbrella with the fourth spin parallel to the field. For yet larger values of $J_{z\pm}$, the transition to saturation occurs directly from this ``umbrella+up'' state via a first-order transition. The main feature in both Figs.~\ref{HcHsJzpJp} (c) and (d), which is also our key finding, is that the region of stability of the three-sublattice states, related to the experimentally observed intensity at the $K$ point, \textit{expands} at larger values of the magnetic field. Therefore, there is a region of the model parameters where an evolution from the four-sublattice to the saturated state \emph{necessarily} proceeds via a high-field three-sublattice state. For the easy-axis case of Figure~\ref{HcHsJzpJp} (c), this high-field state is a coplanar ``V'' state. For the easy-plane $XXZ$ anisotropy case of Figure~\ref{HcHsJzpJp} (d), the four-sublattice phases and all the discussed trends are the same, while the three-sublattice region, classically, is a single non-coplanar ``umbrella'' state, which is a canted $120^\circ$ structure. We note that in the quantum case, and for not-too-small $XXZ$ anisotropy $\Delta$, this umbrella state is replaced by the same sequence of Y-UUD-V phases as in Figure~\ref{HcHsJzpJp} (c) \cite{yamamoto2014quantum}. In contrast with the large-$J_{z\pm}$ first-order transition from the four-sublattice state to saturation, the transitions from both the V and umbrella states to saturation are second-order. We note that the discussed transitions agree with prior classical Monte Carlo simulations \cite{steinhardt2020fieldinduced} conducted in the context of the parameter search for YbMgGaO$_4$\xspace for a narrower range of parameters. \begin{figure*} \includegraphics[width=\linewidth]{d11_j2_jpp} \vskip -0.2cm \caption{Intensity plots of the magnetic susceptibility for the easy-axis case, $\Delta =1.1$, and for various $J_2$ and $J_{\pm\pm}$. The axes and color scale are the same as in Figure~\ref{HcHsJzpJp} (c), which is also the central panel here.} \label{fig_d11_j2_jpp} \end{figure*} \subsection{Further evolution of the phases} \begin{figure*} \includegraphics[width=\linewidth]{d08_j2_jpp} \caption{Same as in Figure~\ref{fig_d11_j2_jpp} for $\Delta =0.8$. Phase diagrams without three-sublattice states are marked with red frames.} \label{fig_d08_j2_jpp} \end{figure*} Here we present further details on the field evolution of the various phases of the model (\ref{HJpm}) for different choices of $J_2$ and $J_{\pm\pm}$. Our Figs.~\ref{fig_d11_j2_jpp} and \ref{fig_d08_j2_jpp} show the intensity plots of the magnetic susceptibility in the $J_{z\pm}$--$H$ plane for the same choices of the $XXZ$ anisotropy, $\Delta=1.1$ and $\Delta=0.8$, as in Figs.~\ref{HcHsJzpJp} (c) and (d), respectively. Each row of graphs corresponds to a different constant-$J_{\pm\pm}$ cut of Figs.~\ref{HcHsJzpJp} (a) and (b), and each column corresponds to a different constant-$J_2$ cut of the 3D phase diagram in Figure~\ref{fig_3d}.
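In the same spirit as the scan sketched earlier, each such susceptibility panel can be emulated by sweeping the field and differentiating the magnetization from the classical minimization. A minimal continuation of the cluster sketch above (ours, illustrative only):

\begin{verbatim}
# Sketch (ours), continuing the cluster minimizer above: magnetization
# curve M(H) and susceptibility chi = dM/dH; singular features in chi
# mark the phase transitions seen in the intensity plots.
Hs = np.linspace(0.0, 5.0, 101)
M = []
for H in Hs:
    best = min((minimize(energy, rng.uniform(0, 2*np.pi, 2*L*L), args=(H,))
                for _ in range(5)), key=lambda r: r.fun)
    M.append(spins(best.x)[2].mean())   # out-of-plane magnetization per site
chi = np.gradient(M, Hs)                # dM/dH locates the transitions
\end{verbatim}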
While the variations of the $J_{\pm\pm}$ term simply give an elaborate dissection of what happens underneath the projected view of the intensity plots of Figure~\ref{HcHsJzpJp} (a) and (b) for the $J_{\pm\pm}=\pm 0.05J$ cuts, it is clear that the next-nearest-neighbor interaction $J_2$ works strongly against the field-induced three-sublattice states. This is in accord with Figure~\ref{fig_3d}, which shows that at $J_2\approx0.125J$ the $120^\circ$ state is completely eliminated from the zero-field phase diagram. The sets of parameters for which no three-sublattice state is observed at any field are highlighted with red frames in Figure~\ref{fig_d08_j2_jpp}. Other changes include the shrinking or expansion of the phases already described, and the appearance of a new canted stripe state for large $J_2$ and negative $J_{\pm\pm}$, which occurs at $J_{z\pm}\lesssim 0.2J$ and larger fields for $\Delta\!=\!1.1$ and at all fields for $\Delta\!=\!0.8$; see the upper right panels of Figs.~\ref{fig_d11_j2_jpp} and \ref{fig_d08_j2_jpp}. The main takeaway from Figs.~\ref{fig_d11_j2_jpp} and \ref{fig_d08_j2_jpp} is that the region of the four- to three-sublattice transition is limited by the extent of $J_{\pm\pm}$ already shown in Figs.~\ref{HcHsJzpJp} (a) and (b), and more strongly by the next-nearest-neighbor interaction $J_2$. Therefore, the experimental observation that the critical field of such a transition satisfies $H_c\!\alt\!H_s$ in both YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace puts strong bounds on the anisotropic-exchange parameters. This analysis also limits the next-nearest-neighbor interaction to $J_2\!\alt\!0.1J$, as only four-sublattice states survive for larger $J_2$. \subsection{$S({\bf Q},\omega)$ for select parameters} In order to relate the above theoretical analysis to our experimental results for YbZnGaO$_4$\xspace and YbMgGaO$_4$\xspace, we used SpinW \cite{toth2015linear} to calculate the linear spin-wave theory (LSWT) $S({\bf Q},\omega)$ for representative sets of parameters from the identified regions of the phase diagram, shown by dots in Figs.~\ref{HcHsJzpJp} (a) and (b). Since the phase diagrams are for the parameters in units of the overall scale $J$, the latter is set to match the LSWT dispersion at high fields. Comparison of the data to calculations from SpinW is inherently challenging owing to the implicit assumption of zero disorder in the SpinW calculations (whereas disorder is necessary to reproduce the broadening in energy and $Q$ observed in experiment). Nonetheless, for the parameters indicated in Figure \ref{HcHsJzpJp} (a) we find very good qualitative agreement between the disorder-free calculations and the data (see Figure \ref{CNCS}). The SpinW optimization routine \emph{optmagsteep} was used with iterative manual adjustment of the spin state until a stable spin state (free of imaginary modes) was found\footnote{We should note that the ``V'' state spectrum is unstable for 2.5 T near the $\Gamma$ point for the chosen set of parameters. The instability is weak and may signify a transition to a more complicated multi-$\mathbf{Q}$ state. While manual tweaking allows us to find a quasi-stable state with a gapped spectrum, the stable ``V'' state spectrum is gapless, as is that of any three-sublattice state of \eqref{HJpm} with broken continuous symmetry; see also Ref.~\cite{maksimov2019anisotropic} for the $120^\circ$ state spectrum.}.
In particular, comparing the 0 T diffuse scattering in Figure \ref{CNCS} (a) and the resolution-convolved dispersion calculated from SpinW in Figure \ref{CNCS} (f), we note the enhanced intensity in the vicinity of M and K clearly present in both. Furthermore, the greater intensity residing in the range of energies from near the elastic line to about 0.2 meV in the calculation is also clearly present, albeit undoubtedly broadened by disorder, in the experimental data. As discussed in previous works \cite{paddison2017continuous,ma2018spin,li2017crystalline}, disorder ``smears'' the intensity in $Q$ and $E$, which would also account for the intensity visible at higher energies in the INS data. In Figure \ref{CNCS} (g) we show the dispersion calculated using SpinW at 2.5 T, where the spins reside in the `V' state. The dispersion has again been convolved with the instrument resolution for best comparison to the experimental data. Note that the intensity at the $\Gamma$ point (though suppressed, as discussed below) is at approximately 0.5 meV, as expected from comparison to the 2 and 3 T data. Furthermore, the modes above the Brillouin zone edge (which, we emphasize, follow from the superposition of three domains in the clean limit) offer a hint as to the origins of the broad continuum apparent in the data. As the field is raised, the contributions from modes above the zone edge in the SpinW calculation are reduced (compare the intensity from modes above M and K at 0 and 2.5 T). When the system is fully saturated and the `V' state gives way to the field-induced ferromagnetic state, the modes above the zone edge are completely absent (see the curve calculated from the analytic LSWT expression superimposed on the 8 T INS data in Figure \ref{CNCS} (h)). This evolution of intensity is qualitatively consistent with that observed in the data, where broadening due to disorder smears out the modes. That is, the modes residing above the zone edge contribute to the continuum at low to intermediate fields, but are reduced in intensity as the system enters the (subject-to-disorder) `V' state, and are replaced by the broadened, otherwise singular mode when the system is fully polarized. We also point out that the apparent visibility of the magnon modes in the LSWT $S({\bf Q},\omega)$ in the vicinity of the $\Gamma$ point at low fields, Figure~\ref{CNCS} (f),(g), and the lack thereof in the experimental data, Figure~\ref{CNCS} (a-c), can be explained by a significant interaction of the single-magnon branch with the two-magnon continuum. While the quantitative calculation of such effects in anisotropic-exchange models is quite involved \cite{Kopietz2020, winter2017breakdown} and is beyond the scope of the present work, we nevertheless provide an intuitive insight into it by showing the bottom of the two-magnon continuum for the three magnetic domains in Figure~\ref{CNCS} (f) and (g). As shown by the dashed lines in Figure~\ref{CNCS} for both 0 T and 2.5 T, the bottom of the continuum has lower energy than the one-magnon modes over a large portion of the Brillouin zone, including the $\Gamma$ point. For $S=1/2$ systems, such an overlap can lead to strong decays and the near-complete disappearance of well-defined magnon excitations in the corresponding range of momenta, together with other phenomena such as strong renormalization of the single-magnon branches \cite{zhitomirsky2013colloquium,chernyshev2009spin}.
These effects require significant coupling between the one- and two-magnon sectors, an inevitable consequence of the anisotropic-exchange terms in \eqref{HJpm}, which are allowed by the presence of strong spin-orbit coupling in the rare-earth-based and some transition-metal compounds\cite{maksimov2020rethinking,winter2017models,Kopietz2020}. We have also calculated the 0 T $S(\vec{Q},\omega)$ for the parameters identified as appropriate for the easy-plane character of YbMgGaO$_4$\xspace, marked by a dot in Figure~\ref{HcHsJzpJp} (b) (see Supplementary Figure 10); it is in excellent qualitative agreement with Figure 2 (a) of \cite{paddison2017continuous}. We further calculated the $S(\vec{Q},\omega)$ for the field-induced polarized state, where the overall scale of the model (given by $J$) was chosen for best agreement. Within the uncertainty of the broad scattering observed in experiment, this calculation also yields very good qualitative agreement (Supplementary Figure 11). \section{Discussion} The hunt for the QSL state has yielded numerous studies of interesting new compounds and underlying physical phenomena, even though the ultimate goal may remain elusive. The search for a QSL state in YbZnGaO$_4$\xspace and YbMgGaO$_4$\xspace seems \emph{likely} to be nearing its end, although one cannot yet claim with absolute certainty that the issue is completely settled \cite{wu2020exact}. In the present work, we have established a potentially close description of YbZnGaO$_4$\xspace and YbMgGaO$_4$\xspace in a hypothetical \emph{clean} limit, allowing the implications for the disordered, real-world materials to follow. While possibly denied the coveted QSL state, a deeper understanding of the rich and diverse phase diagram of the triangular-lattice antiferromagnets is still a highly satisfying reward, one that will likely serve as a road map for future studies, in particular for designing new materials to explore this rich phase diagram. The present work seeks to advance our understanding of the phase diagram describing YbMgGaO$_4$\xspace, YbZnGaO$_4$\xspace, and many related systems. We note that our specific conclusions about these materials are part of a greater effort to narrow down the possible magnetic exchange parameters in these systems \cite{zhang2018hierarchy,bachus2020field,li2020reinvestigation,steinhardt2020fieldinduced}, and provide a guideline on how to address the issue of chemical disorder in real-world materials. In a very recent work \cite{steinhardt2020fieldinduced} by some of the same authors, phase crossovers similar to those shown in Figure \ref{TDO_SQUID} and Figure \ref{CORELLI} were used to constrain the parameters of YbMgGaO$_4$\xspace, with the focus on reproducing them in the disorder-free limit, but \emph{without} a broader map of the phase diagram beyond the observations offered by an optimization algorithm. Indeed, the methods used in that prior work suggest a general approach for finding exchange parameters of disordered systems. By contrast, the goal of \emph{this} work is to provide an expanded context of the phase diagram in which the observed crossovers \emph{can occur in principle}, offering a common description for the phenomena observed in both systems. As such, the present work will serve as an invaluable map in the search for QSL and other intriguing states and phenomena in frustrated triangular-lattice compounds, and will importantly benefit the materials design efforts in this field.
With this last point in mind, recent experiments in the ytterbium-based chalcogenides also show promising QSL features and numerous phase transitions induced by an applied magnetic field \cite{liu2018rare,ding2019gapless,ranjith2019anisotropic,ranjith2019field,baenitz2018planar,xing2019field}. The precise nature of these transitions is not yet known and needs to be analyzed in the manner and within the framework suggested in the present study. These recent studies also suggest that experiments with the field along different directions could bring a deeper understanding of YbZnGaO$_4$\xspace and YbMgGaO$_4$\xspace. Altogether, we have demonstrated the power of combined experimental and theoretical insights in identifying the relevant parameter spaces of YbMgGaO$_4$\xspace and YbZnGaO$_4$\xspace and, potentially, other related materials. We have investigated their field-induced phases and have shown that, despite their disorder-induced pseudo-SL ground states, we can significantly narrow the allowed regions of their phase diagram that are compatible with the phenomenologies of these materials. Similar considerations can be applied to other materials, such as the chalcogenides. More experimental and theoretical investigations, such as targeted materials design, neutron scattering for the in-plane field directions augmented by analytical and numerical studies for this setting, and the use of alternative tuning parameters such as external pressure, could all be useful for exploring the diverse phase diagram of this group of compounds and shedding further light on their rich physics. \section{Methods} \subsection{Synthesis} Samples were synthesized from finely mixed Yb$_2$O$_3$ (99.9$\%$), ZnO (99.9$\%$), and Ga$_2$O$_3$ (99.999$\%$) powders at 1350 $\degree$C. High-quality single crystals (such as that pictured in Supplementary Figure 2) were grown using the optical floating-zone technique. A typical growth was conducted in a 1 MPa O$_2$ atmosphere with a speed ranging from 4 to 10 mm/hour. Single-crystal quality was confirmed via Laue X-ray diffraction (Supplementary Figure 3), while powder X-ray diffraction (PXRD) was used to confirm the correct phase at every step of synthesis (see Supplementary Figure 1). \subsection{High Resolution Magnetization Measurements} High-resolution measurements of magnetization were achieved with the complementary tunnel diode oscillator (TDO) technique \cite{steinhardt2020fieldinduced}. In a TDO measurement, a tunnel diode is biased to operate in the ``negative resistance'' region of the $I$-$V$ curve. This provides power that maintains the resonance of an LC circuit at a frequency in the range between 10 and 50 MHz. A nearly single-crystal sample with dimensions of $\sim\!2$ mm in length and $\sim\!1$ mm in diameter was mounted inside a detection coil, with the $\textbf{c}$ axis of the sample aligned with the coil axis. The sample and coil constitute the inductor of the LC circuit. With the application of field, the change in sample magnetization induces a change in the inductance, and thus shifts the resonance frequency. This technique enables highly sensitive detection of changes in magnetic moment of $\sim\!10^{-15}$ A\,m$^2$ \cite{van1975tunnel}. Magnetization was also directly measured via an in-house Cryogenic S700X SQUID magnetometer at temperatures down to 300 mK using a $^3$He probe. A 1.81 mg sample was mounted on a silver straw with vacuum grease in the $\textbf{H} \parallel \textbf{c}$ and $\textbf{H}\perp\textbf{c}$ orientations.
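To make the TDO detection principle described above concrete: the resonance frequency of the circuit is $f=1/(2\pi\sqrt{LC})$, so a small change in the sample susceptibility shifts the coil inductance and hence the frequency. The following is a minimal numerical sketch (ours, with illustrative circuit values and an assumed filling factor $\eta$, not the actual circuit parameters):

\begin{verbatim}
import numpy as np

# Sketch (ours, illustrative values): TDO resonance f = 1/(2*pi*sqrt(L*C)),
# with coil inductance L = L0*(1 + eta*chi) for an assumed filling factor eta.
L0, C, eta = 1.0e-6, 4.0e-11, 0.1   # H, F -> f0 ~ 25 MHz (10-50 MHz range)

def f_res(chi):
    return 1.0 / (2.0*np.pi*np.sqrt(L0*(1.0 + eta*chi)*C))

f0 = f_res(0.0)
dchi = 1.0e-6                        # small field-induced susceptibility change
print(f0, f_res(dchi) - f0)          # shift ~ -f0*eta*dchi/2, here ~ -1 Hz
\end{verbatim}

The linearized shift $\Delta f/f_0 \approx -\eta\,\Delta\chi/2$ is what makes the technique sensitive to such small moment changes.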
ac-susceptibility measurements were carried out on a long, approximately rectangular sample with the field perpendicular to the sample $c$ axis in a dilution refrigerator with a base temperature of 20 mK. \subsection{Neutron Scattering} \subsubsection{Diffuse magnetic scattering at CORELLI} Diffuse neutron scattering data were collected at the CORELLI spectrometer at the Spallation Neutron Source, Oak Ridge National Laboratory \cite{rosenkranz2008corelli}. This instrument is a quasi-Laue TOF instrument equipped with a 2D detector, with $-20\degree$ to $+150\degree$ in-plane coverage. The incident neutron energy was between 10 meV and 200 meV. A superconducting magnet was used to provide a vertical magnetic field up to 5 T, which constrained the out-of-plane coverage to $\pm 8\degree$. A $\sim\!0.8$ g single crystal was mounted on a Cu plate in a dilution refrigerator. The sample was aligned with the $(h,k,0)$ plane horizontal and the magnetic field along the $[0,0,l]$ direction. Neutron-absorbing Cd was used to shield the sample holder to reduce the background scattering. Experiments were conducted with applied fields at the base temperature of 130 mK by rotating the crystal through 180$\degree$ in 3$\degree$ steps, and then at 20 K in the same fields for background subtraction. The data were reduced using Mantid for the Lorentz and spectrum corrections \cite{michels2016expanding}. To account for the temperature factor in our background subtraction in total-scattering mode, we compared the ratio of integrated intensities of a rectangular volume of reciprocal space at 5 T for both temperatures. The region was bounded by $-0.45 < h < -0.35$, $0.5 < k < 0.6$, and $-2 < l < 2$. We used this region, away from the zone edge, and our 5 T data to ensure the comparison was unaffected by diffuse magnetic scattering. This integration approximates the ratio of Bose population factors; the resulting scaling factor was $1.01\pm0.005$. To improve statistics, we applied symmetry operations. All analysis and visualization were performed using Mantid and Python. \subsubsection{Inelastic neutron scattering at CNCS, DCS, and SPINS} We conducted inelastic neutron scattering experiments at the Cold Neutron Chopper Spectrometer\cite{ehlers2011cncs} (CNCS) at Oak Ridge National Laboratory, and at the Disk Chopper Spectrometer\cite{COPLEY2003477} (DCS) and the Spin Polarized Inelastic Neutron Spectrometer (SPINS) at the National Institute of Standards and Technology. The same $\sim\!1.4$ g single-crystal sample was aligned to use the $[h,k,0]$ scattering plane, with the field parallel to the sample $\vec{c}$ axis (along the $[0,0,1]$ direction). All measurements were carried out in dilution refrigerators, with the sample mounted on copper sample mounts such as those pictured in Supplementary Figure 2. The base temperatures for the CNCS, DCS, and SPINS measurements were 50 mK, 70 mK, and 60 mK, respectively. For CNCS and DCS, the sample was rotated through approximately 180$\degree$. The incident energies used at CNCS, DCS, and SPINS were 3.9 meV, 3.55 meV, and 3.7 meV, respectively. Analysis of the CNCS data was carried out in large part using the HORACE software package\cite{Ewings2016132}. Analysis of the DCS data was carried out in large part using the DAVE software package\cite{azuah2009dave}. \subsubsection{Polarized neutron scattering at BT7} We measured using polarized neutrons at the BT7\cite{lynn2012bt7} beamline at the National Institute of Standards and Technology.
Measurements were performed with a fixed final energy of 14.7 meV at an energy transfer of about 0.5 meV, such that the FWHM of the collimated beam (1.08 meV) encompassed the energy range of the continuum as pictured for low fields in Figure \ref{CNCS}. Samples were again aligned to use the $[h,k,0]$ scattering plane, and measurements were conducted using a vertical guide field (parallel to the sample $c$ axis). A $^3$He cryostat was used. The 0 T measurements were performed in the absence of a magnet; a 7 T magnet was then added to perform the measurements at 2 T. Flipping ratios ranged from 19 to 33 for the 0 T measurements and from $\sim\!17$ to $\sim\!28$ for the 2 T measurements. Polarization correction was performed using the \emph{pbcor} software. \label{sec:methods} \subsubsection{Theory} For the LSWT $S({\bf Q},\omega)$ we used SpinW calculations \cite{toth2015linear}. The global phase diagram was obtained using classical energies for the single-${\bf Q}$ states, see Refs.~\cite{zhu2018topography,maksimov2019anisotropic}, and for the field-induced phases, classical energy minimization of the three- and four-sublattice structures was used. \section{Data availability} All relevant data are available from the authors upon reasonable request. This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). \begin{acknowledgments} The work of A.~L.~C. was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Awards No. DE-FG02-04ER46174 and DE-SC0021221. P.~A.~M. acknowledges support from JINR Grant for young scientists 20-302-03. A.~L.~C. would like to thank the Kavli Institute for Theoretical Physics (KITP), where this work was advanced. KITP is supported by the National Science Foundation under Grant No. NSF PHY-1748958. A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreements No. DMR-1157490 and DMR-1644779, the State of Florida, and the U.S. Department of Energy. A portion of this research used resources at the Spallation Neutron Source, a DOE Office of Science User Facility operated by Oak Ridge National Laboratory. We acknowledge the support of the National Institute of Standards and Technology, U.S. Department of Commerce, in providing the neutron research facilities used in this work. The identification of any commercial product or trade name does not imply endorsement or recommendation by the National Institute of Standards and Technology. \end{acknowledgments} \section{Author Contributions} Research conceived by S.H.; Samples synthesized by W.S., C.M.
and S.H.; Magnetic measurements performed and analyzed by Z.S., W.S., D.G., and S.H.; Neutron scattering measurements performed and analyzed by W.S., S.D., N.P.B., A.P., Y.L., G.X., Y.Z., J.W.L., and S.H.; Theoretical calculations performed by P.A.M. and A.L.C.; Manuscript written by W.S., P.A.M., A.L.C., and S.H.; Project supervised by A.L.C. and S.H.; All authors commented on the manuscript. \section{Competing Interests} The authors declare no competing financial or non-financial interests. \bibliographystyle{apsrev4-1}
{ "timestamp": "2021-05-06T02:06:03", "yymm": "2105", "arxiv_id": "2105.01790", "language": "en", "url": "https://arxiv.org/abs/2105.01790" }
\section{Introduction} While turbulent flows are different from one another at large scales, they are universal at small scales. Understanding the nature of small-scale turbulence is at the center of turbulence research \cite{frisch1995turbulence, sreenivasan1997phenomenology,johnson2018predicting}. The usual starting point is the Richardson cascade \cite{richardson2007weather, kolmogorov1941local}, according to which large-scale eddies break into small-scale eddies, and small-scale eddies break into lesser-scale eddies. This eddy breakup process is self-similar in the inertial range, where neither viscosity nor flow geometry plays an important role in determining the flow's dynamics. While most authors acknowledge the eddy breakup process as being self-similar, how one models the eddy breakup process differs, and that has led to vastly different speculations about the nature of small-scale turbulence. Kolmogorov \cite{kolmogorov1941local} models eddy breakup as an even partition of the mother eddy's turbulent kinetic energy. It follows from Kolmogorov that eddies' population densities are scale-invariant, and relatively small-scale turbulence is no different from relatively large-scale turbulence. On the other hand, Frisch \cite{frisch1978simple} argues that turbulence occupies less space as the cascade process continues to small scales, and small-scale turbulence consists of bursts of velocity fluctuations. According to Frisch, eddies' population densities are scale-dependent, and the probability of observing turbulence diminishes at small scales. This picture was adopted in the study of vortex filaments: vortex filaments occupy less physical space at smaller scales \cite{jimenez1993structure,jimenez1998characteristics}. Besides Kolmogorov and Frisch, many have proposed models for the Richardson cascade \cite{benzi1984multifractal, meneveau1987simple, benzi1991multifractality, sreenivasan1991fractals, biferale2004multifractal}. Like Kolmogorov and Frisch, while they all invoke the Richardson cascade, their models lead to different speculations about the nature of small-scale turbulence. The Richardson cascade being self-similar says very little about eddies' population densities and the nature of small-scale turbulence. The self-similar Richardson cascade requires that the eddy population density scale as $P(S_i(l))\sim l^{\zeta_i}$, but with no further requirement on the $\zeta_i$'s values. Here, $S_i(l)$ is a given type of $l$-scaled eddy, $i$ indexes all types of eddies, $P(S_i(l))$ is the probability density function for observing $S_i(l)$, and $\zeta_i$ is a non-negative number. In fact, for any $\zeta_i$, $P(S_i(l))$'s variation from one scale $l$ to the next scale $l/2$ is \begin{equation} 1-\frac{P(S_i(l/2))}{P(S_i(l))}=1-\frac{1}{2^{\zeta_i}}, \end{equation} i.e., not a function of $l$ and therefore self-similar irrespective of $\zeta_i$'s value. Here, the length scale $l$ in the scaling $P(S_i)\sim l^{\zeta_i}$ needs normalization. Following convention, if a process leads to a scaling that is an increasing function of $l$, i.e., if $\zeta>0$, the proper normalization length scale is the Kolmogorov length scale $\eta$. The resulting scaling would be $(l/\eta)^{\zeta_i}$. Consequently, the integral length scale would not be a part of the scaling. Here, $\zeta_i\geq 0$, and therefore $P(S_i)\sim (l/\eta)^{\zeta_i}$. In the following, we will omit $\eta$ for brevity.
Unlike the Richardson cascade, which is insensitive to $\zeta_i$'s value, the nature of small-scale turbulence critically depends on whether the $\zeta_i$'s are zero. Consider two eddy types, $i$ and $j$. If $\zeta_i\neq \zeta_j$, the fact that $\lim_{l/\eta\to \infty, Re\to \infty}l^{\zeta_i}/l^{\zeta_j}$ is either 0 or infinity suggests that one eddy type dominates the other at small scales. Here, $\eta$ is the Kolmogorov length scale, and $Re$ is the Taylor microscale Reynolds number. Hence, if $\zeta_i\not \equiv 0$, one or a few eddy types dominate at small scales. On the other hand, if $\zeta_i\equiv 0$, eddies' population densities are invariant across the inertial range, and there would be as many types of eddies at small scales as at large scales. Eddies' population density being scale-invariant in the inertial range is, to date, an unconfirmed speculation about small-scale turbulence. It is also a fundamental property of fractal interpolation \cite{scotti1997fractal,scotti1999fractal,basu2004synthetic,ding2010synthetic} (and an implied property of turbulence in Refs.~\cite{de2013multiscale,wu2020high}). When applying fractal interpolation, one re-scales the large-scale flows and populates them at small scales, which results in scale-invariant eddy population densities. However, those fractal models lack {\it a priori} validation, and the question remains open as to what is the true nature of small-scale turbulence. To answer the above question, we need to measure eddies' population densities, $P(S_i(l))$. Directly measuring eddies' population densities $P(S_i(l))$ as a function of $l$ is very difficult, if not impossible, as there are many eddy types. In this letter, we infer $P(S_i(l))$'s $l$ scaling by studying the statistical properties of ``equivalent eddy classes''. After a five-step derivation, we will come to the conclusion $P(S_i(l))\sim l^0$. First, we define eddies and eddy classes. We begin by defining an observation window. Denote a point in the turbulent flow field as ${\bf x}=(x_1,x_2,x_3)$. We define $\Omega(l,{\bf x})$ to be a one-dimensional observation window in an arbitrary direction (note that small-scale turbulence is isotropic). The size of the observation window is $l$, and the point ${\bf x}$ belongs to $\Omega$. How the size of the observation window is measured can be somewhat arbitrary. For this discussion, we may think of the observation window as a lens centered at ${\bf x}$ with its length being $l$. We define a turbulent eddy as the velocity segment within an observation window, i.e., $\{\left.{\bf u}({\bf x}')\right|{\bf x}'\in \Omega(l,{\bf x})\}$. Thus-defined eddies exist everywhere in the flow (as opposed to vortex filaments, which occupy a fraction of the physical space). The ensemble of velocity segments at all locations and all scales (all $l$) contains all eddies. The definition concerns eddies in the spatial domain only. We can also define eddies in the temporal domain, and if Taylor's hypothesis holds, we should come to the same conclusions. Second, we define geometric equivalence. Consider two velocity segments $\{{\bf u}({\bf x'}_1),{\bf x'}_1$ $\in \Omega(l_1,{\bf x}_1)\}$ and $\{{\bf u}({\bf x'}_2),{\bf x'}_2\in\Omega(l_2,{\bf x}_2)\}$. We say that the two velocity segments are equivalent if there exist a constant velocity vector ${\bf u_0}$ and a constant positive real number $c$ such that for all ${\bf x}_1'\in \Omega(l_1,{\bf x}_1)$, we have \begin{equation} {\bf u}({\bf x}_2+l_2({\bf x}'_1-{\bf x}_1)/l_1)=c{\bf u}({\bf x'}_1)+{\bf u_0}.
\label{sim} \end{equation} The notion of equivalence makes it possible for us to split velocity segments into equivalence classes \cite{devlin2003sets}, and we denote these equivalent eddy classes as $S_i$, $i=1$, 2, 3, ... Per our definition, we have: first, any velocity segment must belong to some equivalent eddy class; second, two velocity segments that are equivalent must belong to the same equivalent eddy class; third, one velocity segment cannot belong to two equivalent eddy classes, i.e., equivalent eddy classes are mutually exclusive. It therefore follows that \begin{equation} P\left(\cup_{i\in I} S_i(l)\right)= \sum_{i\in I}P\left(S_i(l)\right), \label{eq:P} \end{equation} for any union of eddy classes $I$. Note that the above definition does not concern eddies' dynamics \cite{johnson2016large,johnson2020energy}, and we do not study interactions among eddy classes. Figure \ref{fig:eec} shows a few velocity segments that belong to the same equivalent eddy class. \begin{figure} \includegraphics[width=0.49\textwidth]{eec.pdf} \caption{Three velocity segments that belong to the same equivalent eddy class. (b) is (a) compressed and displaced, i.e., $u_2(x_2+l_2(x'-x_1)/l_1)=u_1(x')+\Delta U_2$. (c) is (a) stretched and displaced, i.e., $u_3(x_3+l_3(x'-x_1)/l_1)=u_1(x')+\Delta U_3$. } \label{fig:eec} \end{figure} Third, we define $S_i(l)$'s ``$n$-point equivalent eddy class'', {\small $S_i^{(n)}(l)$}. Given a velocity segment that belongs to $S_i(l)$ and $n$ sampling points on the segment, $S_i(l)$'s $n$-point equivalent eddy class contains all velocity segments that match the given velocity segment at these $n$ sampling points (up to a constant displacement and a multiplying factor). For example, given the velocity segment in figure \ref{fig:npt} (a) and its equivalent eddy class $S_i$, the velocity segments in figure \ref{fig:npt} (b, c) belong to $S_i(l)$'s 5-point and 21-point equivalent eddy classes, respectively. \begin{figure} \includegraphics[width=0.49\textwidth]{npoint.pdf} \caption{\label{fig:npt} (a) A velocity segment in a given $S_i(l)$ and (b, c) $S_i^{(n)}$ for $n=$ 5 and 21. } \end{figure} Considering that two sampling points are practically one if the distance between them is less than one Kolmogorov length scale, we require that any two of the $n$ points have a distance of at least one Kolmogorov length scale. Considering that the flow is isotropic, we can think of the $n$ sample points as evenly spaced. Per the above definition, we have: first, $S_i(l)$'s $n$-point equivalent eddy class contains $S_i(l)$ itself; second, as $n$ increases, $S_i(l)$'s $n$-point equivalent eddy class approaches $S_i(l)$ itself; third, $S_i(l)$ and $S_j(l)$ give rise to the same $n$-point equivalent eddy class if the velocity segments in the two eddy classes $S_i(l)$ and $S_j(l)$ match at the $n$ sampling points; conversely, if the velocity segments in the two eddy classes $S_i(l)$ and $S_j(l)$ do not match at the $n$ sampling points, their $n$-point equivalent eddy classes, i.e., {\small $S_i^{(n)}(l)$} and {\small$S_j^{(n)}(l)$}, are two different sets; fourth, the union of all $S_i^{(n)}(l)$ contains all possible velocity segments at the scale $l$.
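The equivalence condition of Eq.~\eqref{sim} is straightforward to test numerically for one velocity component. The following minimal sketch is ours (the function name \texttt{equivalent} is purely illustrative): it resamples two segments to a common rescaled coordinate and fits the scale $c>0$ and offset $u_0$ by least squares:

\begin{verbatim}
import numpy as np

# Sketch (ours): test the equivalence condition for one velocity component
# of two segments sampled on their own windows, by resampling to a common
# rescaled coordinate and least-squares fitting the scale c > 0 and offset u0.
def equivalent(u1, u2, n=64, tol=1e-8):
    s = np.linspace(0.0, 1.0, n)
    v1 = np.interp(s, np.linspace(0.0, 1.0, len(u1)), u1)
    v2 = np.interp(s, np.linspace(0.0, 1.0, len(u2)), u2)
    A = np.column_stack([v1, np.ones(n)])
    (c, u0), res, *_ = np.linalg.lstsq(A, v2, rcond=None)
    ok = (c > 0) and (res.size == 1) and (res[0] < tol)
    return ok, c, u0

x = np.linspace(0.0, 1.0, 100)
u1 = np.sin(2*np.pi*x)
print(equivalent(u1, 2.5*u1 + 0.3))   # equivalent: c = 2.5, u0 = 0.3
\end{verbatim}

For vector-valued velocity segments, the same fit applies with all components stacked into one least-squares system.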
In practice, to determine whether a given velocity segment belongs to {\small $S_i^{(n)}(l)$}, we compute the following $(n-2)$-by-1 feature vector, {\small $\pmb{\theta}^{(n)}(l)$}, whose $i$th component is \begin{equation} \theta_i^{(n)}(l)=\tan^{-1}\frac{u(x_{i+2})-u(x_{i+1})}{u(x_{i+1})-u(x_i)}, \end{equation} where $x_{i}$, $i=1$, ..., $n$, is the $i$th sampling point on the velocity segment, $\tan^{-1}$ is the inverse of the tangent function, and we define $\tan^{-1}(\pm\infty)=\pm \pi/2$. Two velocity segments that give rise to the same $\pmb{\theta}^{(n)}(l)$ belong to the same $n$-point equivalent eddy class $S_i^{(n)}(l)$. Figure \ref{fig:theta} sketches how one may compute $\pmb{\theta}^{(n)}(l)$ for $n=3$ and $n=5$. \begin{figure} \includegraphics[width=0.49\textwidth]{theta.pdf} \caption{\label{fig:theta}Schematic of two velocity segments, and their 3-point and 5-point feature vectors $\pmb{\theta}^{(n)}(l)$. (a) $\pmb{\theta}^{(3)}(l)$, (b) $\pmb{\theta}^{(5)}(l)$. Computing $\theta_i^{(n)}(l)$ involves $u(x_i)-u(x_{i-1})$ and $u(x_{i-1})-u(x_{i-2})$. Here, we color $u(x_i)-u(x_{i-1})$ yellow if $u(x_i)>u(x_{i-1})$ and blue if $u(x_i)<u(x_{i-1})$. Given the definition of $\tan^{-1}$, $\theta_i^{(n)}(l)$ is positive if $\left(u(x_i)-u(x_{i-1})\right)\left(u(x_{i-1})-u(x_{i-2})\right)>0$, and $\theta_i^{(n)}(l)$ is negative if $\left(u(x_i)-u(x_{i-1})\right)\left(u(x_{i-1})-u(x_{i-2})\right)<0$. For the two velocity segments here, $\theta_1^{(3)}(l)$ in (a) and $\theta_2^{(5)}(l)$ in (b) are negative; $\theta_1^{(5)}(l)$ and $\theta_3^{(5)}(l)$ in (b) are positive.} \end{figure} Fourth, we compute $\pmb{\theta}^{(n)}(l)$'s statistics, knowledge of which will allow us to infer $P(S_i^{(n)}(l))$'s $l$ scaling. Formally, given a function $f(\pmb{\theta}^{(n)}(l))$, its ensemble average is its $P(S_{i}^{(n)}(l))$-weighted sum over all possible $S_{i}^{(n)}(l)$'s, and therefore we have \begin{equation} \footnotesize \left<f\left(\pmb{\theta}^{(n)}(l)\right)\right> =\sum_{i'}P\left(S_{i'}^{(n)}(l)\right)\left<f\left(\pmb{\theta}^{(n)}(l)\right)\right>_{S_{i'}^{(n)}(l)}, \label{eq:ftht} \end{equation} where the summation is over the mutually exclusive $n$-point equivalent eddy classes, and $\left<\cdot\right>_{S_{i'}^{(n)}(l)}$ is the ensemble average given only velocity segments in $S_{i'}(l)$'s $n$-point equivalent eddy class ${S_{i'}^{(n)}(l)}$. While it is not the focus of this work, we can compute any statistic by summing up the contributions from all eddy classes. For example, the $p$th-order velocity structure function is \begin{equation} \begin{split} &\left<(u(x+l)-u(x))^p\right>\\ =&\sum_i P(S_i(l))\left<(u(x+l)-u(x))^p\right>_{S_i(l)}. \end{split} \end{equation} Here, we compute {\small $f(\pmb{\theta}^{(3)}(l))=(\theta_1^{(3)}(l))^2$} according to Eq. \eqref{eq:ftht}. Let us say that the flow has only two mutually exclusive 3-point equivalent eddy classes: {\small $S_{1}^{(3)}(l)$} and {\small $S_{2}^{(3)}(l)$}. The velocity segments in {\small $S_{1}^{(3)}(l)$} and {\small $S_{2}^{(3)}(l)$} correspond to the feature vectors $\theta'$ and $\theta''$, respectively. (For $n=3$, the feature vector has only one component.) The eddy population densities are {\small $P(S_{1}^{(3)}(l))\sim l^{\zeta_1}$} and {\small $P(S_{2}^{(3)}(l))\sim l^{\zeta_2}$}, as required by the Richardson cascade. It follows from Eq.
\eqref{eq:ftht} that \begin{equation} \begin{split} \left<\left(\theta_1^{(3)}(l)\right)^2\right> &\sim c_1(l/\eta)^{\zeta_1}\theta'^2+c_2(l/\eta)^{\zeta_2}\theta''^2, \end{split} \label{eq:f3} \end{equation} where $c_1$ and $c_2$ are two constants. If $\zeta_1\neq \zeta_2$, one of the two terms in Eq. \eqref{eq:f3} dominates at sufficiently high Reynolds numbers. Without loss of generality, let us say $\zeta_1\geq \zeta_2\geq 0$. For a given $l/L$, we have \begin{equation} \begin{split} \small &\lim_{Re\to \infty}\left<\left(\theta_1^{(3)}(l)\right)^2\right>\\ \sim &\lim_{l/\eta\to \infty} \left({l}/{\eta}\right)^{\zeta_1}\left[1+c_3\left({l}/{\eta}\right)^{\zeta_2-\zeta_1}\right]\\ = &(l/\eta)^{\zeta_1}. \end{split} \label{eq:f3-1} \end{equation} In this case, {\small $\left<(\theta_1^{(3)}(l))^2\right>\sim l^0$} if and only if $\zeta_1=0$. Also, because $\zeta_1\geq \zeta_2\geq 0$, if $\zeta_1=0$, we would have $\zeta_1=\zeta_2=0$, and the population densities of the two 3-point equivalent eddy classes would have $l^0$ scaling. On the other hand, if $\zeta_1=\zeta_2(=\zeta)$, Eq. \eqref{eq:f3} becomes \begin{equation} \left<\left(\theta_1^{(3)}(l)\right)^2\right> \sim l^{\zeta}(\theta'^2+\theta''^2). \label{eq:f3-2} \end{equation} Again, {\small $\left<(\theta_1^{(3)}(l))^2\right>\sim l^0$} if and only if $\zeta=0$. The above argument relies on {\it a priori} knowledge of $\zeta_i$'s sign. In the appendix, we present a derivation that does not rely on knowledge of $\zeta$'s sign. The idea is to consider two of $\pmb{\theta}$'s statistics; we can then determine $\zeta_{1,2}$'s values directly from the two scalings (it is like solving for two unknowns from two equations). Generalizing the above derivation to an arbitrary number of 3-point equivalent eddy classes, Eq. \eqref{eq:f3} becomes \begin{equation} \left<\left(\theta_1^{(3)}(l)\right)^2\right> \sim l^{\zeta_1}\theta'^2+l^{\zeta_2}\theta''^2+l^{\zeta_3}\theta'''^2+... \end{equation} Following the same logic, we conclude that if {\small $\left<(\theta_1^{(3)}(l))^2\right>\sim l^0$}, the eddy population density scales as {\small $P(S_i^{(3)}(l))\sim l^0$}. We now examine the data to see if {\small $\left<(\theta_1^{(3)}(l))^2\right>$} scales as $l^0$. Figure \ref{fig:tht1} shows {\small $\left<(\theta_1^{(3)}(l))^p\right>$} for $p=2, 4, 6, 8$ in a $Re_\lambda=433$ isotropic turbulent flow. Here, $Re_\lambda$ is the Taylor-scale Reynolds number. The data are from a DNS of isotropic turbulence in a periodic box. The grid size is $1024^3$, and the domain size is $(2\pi)^3$. Further details of the DNS data can be found in Ref.~\cite{cao1999statistics}. We see that not only does {\small $\left<(\theta^{(3)}_1)^2\right>$} scale as $l^0$ in the inertial range, but so do the higher-order even moments. This allows us to conclude that, for any $i$, \begin{equation} \small P\left(S_i^{(3)}(l)\right)\sim l^0. \label{eq:P3} \end{equation} \begin{figure} \begin{center} \includegraphics[height=1.7in]{tht1.pdf} \caption{ \label{fig:tht1} $\left<\left(\theta_1^{(3)}(l)\right)^p\right>$ for $p=2, 4, 6, 8$. $L$ is the length of the periodic computational box in one of the three Cartesian directions. The dashed lines are at the grid cutoff. The solid lines encompass the inertial range. } \end{center} \end{figure} Next, we consider $n$-point equivalent eddy classes $S_i^{(n)}$, whose feature vectors are of size $(n-2)$-by-1.
We have {\small $\left<(\theta_k^{(n)}(l))^2\right>$}: \begin{equation} \small \left<\left(\theta_k^{(n)}(l)\right)^2\right> \sim c_1l^{\zeta_1}\theta'^2_k+c_2l^{\zeta_2}\theta''^2_k+..., \label{eq:fn} \end{equation} for $k=1,~2,~3, ...,~n-2$. Following the same logic, if the data are such that {\small $\left<(\theta_k^{(n)}(l))^2\right>\sim l^0$} for $k=1$, 2, 3, ..., $n-2$, we would be able to conclude $P(S_i^{(n)}(l))\sim l^0$. To prove {\small $\left<(\theta_k^{(n)}(l))^2\right>\sim l^0$}, we invoke the following two facts: first, because the flow is homogeneous, for evenly spaced sampling points we have {\small $\left<(\theta_{k'}^{(n)}(l))^p\right>=\left<(\theta_{k''}^{(n)}(l))^p\right>$} for any $k'$, $k''$, and $p$; second, per our definition, the segment between the first and the third sampling points of a velocity segment in $S^{(n)}_i(l)$ is a velocity segment in $S^{(3)}_i(2l/(n-1))$, and therefore \begin{equation} \left<\left(\theta_1^{(n)}(l)\right)^2\right>\equiv\left<\left(\theta_1^{(3)}\left(\frac{2l}{n-1}\right)\right)^2\right>. \label{eq:n-3} \end{equation} Hence, to show {\small $\left<(\theta_k^{(n)}(l))^2\right>\sim l^0$} for $k=1$, 2, 3, ..., $n-2$, we only need to show {\small $\left<(\theta_1^{(3)}(l))^2\right>\sim l^0$}, which is the result in figure \ref{fig:tht1}. Fifth (and last), we show {\small $P(S_i(l))\sim l^0$}. This is now trivial. Because $S_i^{(n)}(l)$ becomes $S_i(l)$ itself for sufficiently many sampling points, the fact that {\small $P(S_i^{(n)}(l))\sim l^0$} for any $n$ readily guarantees \begin{equation} P\left(S_i(l)\right)\sim l^0, \label{eq:Pn} \end{equation} and we come to our conclusion. To summarize, we have shown that eddies' population density is scale-invariant across the inertial range, i.e., $P(S_i(l))\sim l^0$. The result shows that there are as many types of eddies at small scales as at large scales. \section{Acknowledgement} We thank C. Meneveau for fruitful discussions. Y.-P. Shi is supported by Project 91752202 from the National Natural Science Foundation of China. \section{Appendix: a more rigorous derivation} Let us say that the flow has only two mutually exclusive 3-point equivalent eddy classes, $S_1^{(3)}(l)$ and $S_2^{(3)}(l)$, whose feature vectors are $(\theta')$ and $(\theta'')$ and whose eddy population densities scale as $P(S_1^{(3)}(l))\sim l^{\zeta_1}$ and $P(S_2^{(3)}(l))\sim l^{\zeta_2}$. In order to arrive at the conclusion $\zeta_1=\zeta_2=0$, we assumed $\zeta_{1,2}\geq 0$ in the main text. In this appendix, we present a derivation that does not rely on any assumption about $\zeta_{1,2}$'s sign. We consider two of $\pmb{\theta}$'s statistics, i.e., {\small $\left<(\theta_1^{(3)}(l))^2\right>$} and {\small $\left<(\theta_1^{(3)}(l))^4\right>$}, at two arbitrary length scales, $l_1$ and $l_2$: \begin{equation} \small \begin{split} \left<(\theta_1^{(3)}(l_1))^2\right> = P(S_1^{(3)}(l_1))\theta'^2+P(S_2^{(3)}(l_1))\theta''^2, \\ \left<(\theta_1^{(3)}(l_1))^4\right> = P(S_1^{(3)}(l_1))\theta'^4+P(S_2^{(3)}(l_1))\theta''^4, \end{split} \label{eq1} \end{equation} and \begin{equation} \small \begin{split} \left<(\theta_1^{(3)}(l_2))^2\right> = P(S_1^{(3)}(l_2))\theta'^2+P(S_2^{(3)}(l_2))\theta''^2, \\ \left<(\theta_1^{(3)}(l_2))^4\right> = P(S_1^{(3)}(l_2))\theta'^4+P(S_2^{(3)}(l_2))\theta''^4. \end{split} \label{eq2} \end{equation} Rewriting Eqs.
\eqref{eq1} and \eqref{eq2}, we have \begin{equation} \begin{bmatrix} \theta'^2 & \theta''^2\\ \theta'^4 & \theta''^4 \end{bmatrix} \begin{bmatrix} P(S_1^{(3)}(l_1))\\ P(S_2^{(3)}(l_1)) \end{bmatrix} = \begin{bmatrix} \left<(\theta_1^{(3)}(l_1))^2\right>\\ \left<(\theta_1^{(3)}(l_1))^4\right> \end{bmatrix} \label{eq12} \end{equation} and \begin{equation} \begin{bmatrix} \theta'^2 & \theta''^2\\ \theta'^4 & \theta''^4 \end{bmatrix} \begin{bmatrix} P(S_1^{(3)}(l_2))\\ P(S_2^{(3)}(l_2)) \end{bmatrix} = \begin{bmatrix} \left<(\theta_1^{(3)}(l_2))^2\right>\\ \left<(\theta_1^{(3)}(l_2))^4\right> \end{bmatrix}. \label{eq22} \end{equation} The determinant of the $2\times 2$ matrix \begin{equation*} \begin{bmatrix} \theta'^2 & \theta''^2\\ \theta'^4 & \theta''^4 \end{bmatrix} \end{equation*} is \begin{equation} \theta'^2\theta''^4-\theta'^4\theta''^2 = \theta'^2\theta''^2\left(\theta''^2-\theta'^2\right)\neq 0, \end{equation} provided $\theta'$ and $\theta''$ are nonzero and $\theta'^2\neq \theta''^2$. Now, if {\small $\left<(\theta_1^{(3)}(l))^2\right>\sim l^0$} and {\small $\left<(\theta_1^{(3)}(l))^4\right>\sim l^0$}, the right-hand sides of Eqs. \eqref{eq12} and \eqref{eq22} are equal. As a result, \begin{equation} \begin{bmatrix} P(S_1^{(3)}(l_1))\\ P(S_2^{(3)}(l_1)) \end{bmatrix} = \begin{bmatrix} P(S_1^{(3)}(l_2))\\ P(S_2^{(3)}(l_2)) \end{bmatrix}, \end{equation} for arbitrary $l_1$ and $l_2$, i.e., $P(S_1^{(3)}(l))\sim l^0$ and $P(S_2^{(3)}(l))\sim l^0$, leading to the conclusion $\zeta_1=\zeta_2=0$. If we have $n$ mutually exclusive 3-point equivalent eddy classes, we need to show that $\left<(\theta_1^{(3)}(l))^{2m}\right>\sim l^0$ for $m=1$, ..., $n$, which is shown in figure 4 of the main text. In fact, the data show that any statistic of $\pmb{\theta}$ is scale-invariant within the inertial range. Figure \ref{fig:thtn} shows a few of $\pmb{\theta}^{(n)}(l)$'s statistics. We see that the statistics scale as $l^0$ in the inertial range. \begin{figure}[htb!] \centering \includegraphics[width=0.44\textwidth]{thtn.pdf} \caption{Here, $f_1=\left<|\theta_1|^6\right>$, $f_2=\left<|\theta_1|^3|\theta_2|^3\right>$, $f_3=\left<|\theta_1|^2|\theta_2|^2|\theta_3|^2\right>$, $f_4=\left<|\theta_1|^2|\theta_2|^2|\theta_3||\theta_4|\right>$, and $f_5=\left<|\theta_1|^2|\theta_2||\theta_3||\theta_4||\theta_5|\right>$ for $n=7$ in a $Re_\lambda=344$ isotropic turbulent flow. The dashed line is at the grid cutoff. The two solid lines enclose the scales within which the energy spectrum follows a $-5/3$ scaling.} \label{fig:thtn} \end{figure}
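As a practical illustration of how such feature-vector statistics can be estimated, the following minimal sketch (ours) computes $\left<(\theta_1^{(3)}(l))^p\right>$ from a periodic one-dimensional signal; a synthetic Gaussian surrogate with a $-5/3$ energy spectrum stands in here for the DNS data used in the main text:

\begin{verbatim}
import numpy as np

# Sketch (ours): estimate <(theta_1^(3)(l))^p> from a periodic 1D signal u
# on a uniform grid, for windows of l = 2*m grid spacings (three evenly
# spaced sampling points x, x+m, x+2m per window).
def theta3_moment(u, m, p):
    d1 = np.roll(u, -m) - u                    # u(x_2) - u(x_1)
    d2 = np.roll(u, -2*m) - np.roll(u, -m)     # u(x_3) - u(x_2)
    with np.errstate(divide='ignore', invalid='ignore'):
        theta = np.arctan(d2 / d1)             # tan^{-1}(+-inf) = +-pi/2
    return np.mean(np.nan_to_num(theta)**p)    # 0/0 cases set to 0

# synthetic Gaussian surrogate with a -5/3 energy spectrum (illustration only)
rng = np.random.default_rng(0)
N = 2**16
k = np.abs(np.fft.fftfreq(N))
amp = np.where(k > 0, k, 1.0)**(-5.0/6.0)
amp[0] = 0.0                                   # drop the mean mode
u = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(N)) * amp))
for m in [4, 16, 64, 256]:
    print(2*m, theta3_moment(u, m, 2))         # flat in m  <=>  l^0 scaling
\end{verbatim}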
\section{Acknowledgements}
This work was supported by the U.S.~Department of Energy under contracts DE-SC0009286 and DE-SC0019303 and carried out at the University of Texas at Austin. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper. URL: http://www.tacc.utexas.edu.
\section{Posterior correlation matrices}
Posterior correlation matrices for the three cases discussed in detail in \Cref{sec:case_1,sec:case_2} are presented in \Cref{fig:corr_mats}. Interestingly, the structure of the correlation matrices is quite different between the three cases. The correlation matrices for all spatial-series cases resemble \Cref{fig:case1_n512_corr_mat}, regardless of observation frequency or time of observation. Similarly, the correlation matrices for all time-series cases resemble \Cref{fig:case1_n32_corr_mat}, regardless of observation frequency or location of observation. The posterior correlation matrix for the inference performed in \Cref{sec:case_2} using ensemble-averaged observations of the high-fidelity model \eqref{eq:detailedConsMass}-\eqref{eq:darcy} also exhibits strong positive and negative correlations between the eigenvalues, even though spatial-series observations were used in this case. The causes and interpretation of the differences in the posterior correlations across these cases are left for future study.
\begin{figure}[h] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[scale=0.75, trim=2em 0 0 0 ]{{rawfigs/case1/spatial_t_max_0.5_n_obs_512_corr_mat}.pdf} \caption{Correlation matrix of posterior samples from inference with 512 spatial observations of the solution to \eqref{eq:FRADE} equally spaced over the spatial domain, taken at time $t=0.5$. } \label{fig:case1_n512_corr_mat} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[scale=0.75, trim=2em 0 0 0 ]{{rawfigs/case1/time_x_max_2.0_n_obs_32_corr_mat}.pdf} \caption{Correlation matrix of posterior samples from inference using 32 time-series observations of the solution of \eqref{eq:FRADE} taken at $x=2.0$ and uniformly spaced over $[0,4]$ in time.} \label{fig:case1_n32_corr_mat} \end{subfigure} \begin{subfigure}[b]{0.9\textwidth} \centering \includegraphics[scale=0.75, trim=2em 0 0 0 ]{{rawfigs/case2/corr_mat}.pdf} \caption{Correlation matrix of posterior samples from the inference defined in \Cref{sec:case_2}. } \label{fig:case2_corr_mat} \end{subfigure} \caption{Correlation matrices computed using posterior samples from the three inference scenarios discussed in detail in \Cref{sec:case_1,sec:case_2}. The upper-left quadrant of the correlation matrix contains correlations between the real parts of the eigenvalues, the bottom-right quadrant contains the correlations between the imaginary parts, and the top-right and bottom-left quadrants contain the correlations between the real and imaginary parts.
} \label{fig:corr_mats} \end{figure} \FloatBarrier
\section{Case 1: Data from Fractional Advection-Diffusion Equation}\label{sec:case_1}
To study how successful Bayesian inference can be in the case where $\Dcal$ could exactly represent the underlying operator, data was generated using a 1D fractional advection-diffusion equation (FRADE), defined as
\begin{equation} \begin{aligned} \diffp{\meanc(x,t)}{t} + \meanu \diffp{\meanc(x,t)}{x} = \nu \diffp[\alpha]{\meanc(x,t)}{x},& \quad x\in (0,4), \quad \alpha \in [1,2], \\ \meanc(0, t) = \meanc(4, t), &\\ \meanc(x,0) = \e{ -\frac{(1-x)^2}{2 \beta^2} }&\qquad\mbox{with $\beta=0.1$}. \end{aligned} \label{eq:FRADE} \end{equation}
The data used for inference in this section was generated with $\meanu=1$, $\alpha=1.5$, and $\nu=0.05$ and taken at space-time points $(x_i,t_i)$ from a Fourier-series solution of \eqref{eq:FRADE} on a 512-point regular spatial grid, with Fourier coefficients defined as
\begin{eqnarray*} \mean{\chat_k}(t_i) = \mean{\chat_k}(0) \e{ \left[ \nu(ia_k)^\alpha - \meanu (ia_k) \right]t_i }. \end{eqnarray*}
Fractional PDEs can be seen as limiting forms of solutions of continuous-time random-walk models, which are popular representations of anomalous diffusion in heterogeneous porous media \cite{berkowitz2006ctrw}. An example of the time evolution of the concentration field generated from this model is shown in Figure \ref{fig:FRADEevolution}.
\begin{figure}[h] \centering \includegraphics[scale=0.8]{rawfigs/case1/frade_evolution.pdf} \caption{The evolution of a Gaussian initial condition with the FRADE at $t=1$ and $t=2$, or $\sfrac{1}{4}$ and $\sfrac{1}{2}$ of a flowthrough time, respectively.} \label{fig:FRADEevolution} \end{figure}
In this case it is known \textit{a priori} that the true eigenvalues of $\Dcal$ are $\mu_k = \nu (ia_k)^\alpha$, making it possible to study whether the true values are recovered in different data scenarios.
\subsection{Likelihood}
Data was generated by sampling the FRADE solution over a range of times and locations. Random noise distributed according to $\Ncal(0,\sigma^2)$ was added to the model evaluations to simulate measurement error. A measurement standard deviation of $\sigma=0.005$, corresponding to a $1\%$ standard error in the maximum concentration $\mean{c}=1$, was used, so that the likelihood takes the form
\begin{eqnarray*} \begin{aligned} p(\dvec | \Thetavec ) = \frac{1}{(2\pi \sigma^2)^{N/2}}\e{ -\frac{1}{2\sigma^2}\norm{ \dvec - \mean{\cvec} }_2^2 }, \quad \sigma=0.005. \end{aligned} \end{eqnarray*}
\subsection{Results}\label{sec:case1_results}
The eigenvalues of $\mathcal{D}$ were inferred using spatial- and time-series data with 32, 64, or 512 observations, taken at regular intervals across the entire spatial domain, or in time from $t=0$ to $4$, which is the time required to advect the length of the domain at velocity $\mean{u}=1$. In both cases, the entirety of the pulse and its tails was observed. Global variance-based sensitivity analysis was performed to determine how many eigenvalues to infer for each data scenario. The number of eigenvalues whose Sobol' indices exceeded the tolerance of $10^{-4}$ in each scenario is reported in \Cref{tab:sensitive_eigenvalues}. The number of sensitive eigenvalues did not depend on the number of observations taken over the range investigated (from 32 to 512), so only the observation location or time for time-series or spatial-series data is reported.
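For concreteness, this data-generation step can be sketched in a few lines of Python (a minimal sketch: the grid size, parameters, noise level, and coefficient threshold are as stated in this section, while the FFT conventions and the $1/N$ normalization are implementation assumptions):
\begin{verbatim}
import numpy as np

L, N = 4.0, 512
u, alpha, nu, beta = 1.0, 1.5, 0.05, 0.1

x = np.linspace(0.0, L, N, endpoint=False)
c0 = np.exp(-(1.0 - x)**2 / (2.0 * beta**2))
c_hat0 = np.fft.fft(c0)

a_k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)  # wavenumbers a_k
growth = np.zeros(N, dtype=complex)             # nu*(i a_k)^alpha - u*(i a_k)
nz = a_k != 0.0
growth[nz] = nu * (1j * a_k[nz])**alpha - u * 1j * a_k[nz]

def frade_solution(t):
    # c_hat_k(t) = c_hat_k(0) * exp(growth_k * t), mapped back to x-space
    return np.real(np.fft.ifft(c_hat0 * np.exp(growth * t)))

rng = np.random.default_rng(0)
data = frade_solution(0.5) + rng.normal(0.0, 0.005, N)  # noisy observations

# Count of initially excited modes (1e-13 threshold on the coefficient
# modulus with a 1/N normalization): roughly the 47 quoted below.
print(np.sum(np.abs(np.fft.rfft(c0) / N) > 1e-13))
\end{verbatim}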
The decay of the higher-wavenumber coefficients in the Fourier series solution of $\mean{c}$ as a function of time means that an upper bound on the number of possibly sensitive eigenvalues is set by the number of modes excited in the initial condition. For the initial condition specified in \Cref{eq:FRADE}, the first 47 modes were excited, based on a threshold of $10^{-13}$ for the modulus of the coefficient. Additionally, the decay of the coefficients with time means that the number of modes that remain excited, and thus the number of eigenvalues to which $\mean{c}$ is sensitive, decreases accordingly. This is demonstrated by the spatial-data cases, where the number of sensitive eigenvalues decreased with increasing time. Cases with spatial data observed at a single time were sensitive to more eigenvalues than cases with time-series data observed at a single location. The maximum number of eigenvalues that were informed over all the cases considered was 10, significantly fewer than the number of modes excited in the initial condition.
\begin{table}\centering \begin{tabular}{@{}cccc@{}}\toprule \multicolumn{2}{c}{\textbf{Spatial series}} & \multicolumn{2}{c}{\textbf{Time series}} \\ Observation time & \# sensitive eigenvalues & Observation location & \# sensitive eigenvalues \\\hline 0.5 & 10 & 2.0 & 5 \\ 1.0 & 8 & 3.0 & 5 \\ 2.0 & 7 & 4.0 & 5 \\ \bottomrule \end{tabular} \caption{The number of sensitive eigenvalues for different data scenarios, with data drawn from solutions of the FRADE (\ref{eq:FRADE}) with $\alpha=1.5$ and $\nu=0.05$. Observations are equally spaced in space or time, over the whole spatial domain or over $t\in[0,4]$. } \label{tab:sensitive_eigenvalues} \end{table}
KL divergences of the posterior relative to the prior for inference using spatial-series data are presented in \Cref{fig:kl_div_t_1} and \Cref{fig:spatial_kl_div_nobs_512}. In the case of \Cref{fig:kl_div_t_1}, the frequency of observation was varied. In the case of \Cref{fig:spatial_kl_div_nobs_512}, the time at which the data was collected was varied. As shown in \Cref{fig:kl_div_t_1}, increased frequency of observation in the spatial domain led to higher information gain in the eigenvalues that were informed by the data, as indicated by a higher KL divergence. The number of eigenvalues that were informed by the data depended on the time at which the spatial observations were made, as seen in \Cref{fig:spatial_kl_div_nobs_512}. For successively later times, the solution was sensitive to fewer and fewer eigenvalues. It is interesting to note that the information gain for the lowest-wavenumber eigenvalues increases with later observation times, but that the decay in the KL divergence as a function of $k$ was more rapid for later observations. The more rapid decay for later observations is not surprising, since the timescales on which the Fourier modes are damped vary inversely with their wavenumber. The more rapid damping of high-wavenumber modes makes them more difficult to observe at later times. Conversely, the lowest wavenumbers evolve slowly, which makes their evolution more difficult to observe at early times. This is presumably why the KL divergence for the lowest-wavenumber modes increases with observation time.
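The information-gain values in this discussion are Kullback-Leibler divergences estimated from MCMC samples with a Gaussian approximation of each marginal posterior, as defined in the analysis of the MCMC results later in the paper. A minimal Python sketch of that estimator (the samples and prior below are synthetic, purely for illustration):
\begin{verbatim}
import numpy as np
from scipy import stats

def kl_divergence(samples, log_prior):
    """Monte Carlo estimate of D(posterior || prior) for one parameter,
    with the marginal posterior approximated by a Gaussian fit."""
    mu, sd = np.mean(samples), np.std(samples, ddof=1)
    return np.mean(stats.norm.logpdf(samples, mu, sd) - log_prior(samples))

# Synthetic example: a narrow "posterior" against a standard-normal prior.
rng = np.random.default_rng(1)
samples = rng.normal(0.2, 0.05, 20000)
print(kl_divergence(samples, lambda t: stats.norm.logpdf(t)))
\end{verbatim}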
\begin{figure}[h] \centering \includegraphics[scale=0.8]{rawfigs/case1/spatial_fixed_time_kl_divergences.pdf} \caption{KL divergences of posteriors relative to priors for Bayesian inference from spatial-series data with a varying number of observations $N_{obs}$ evenly spaced throughout the spatial domain of the solution of (\ref{eq:FRADE}).} \label{fig:kl_div_t_1} \end{figure}
\begin{figure}[h] \centering \includegraphics[scale=.8]{rawfigs/case1/spatial_kl_divergences_n_obs_512.pdf} \caption{KL divergences of posteriors relative to priors for Bayesian inference from spatial-series data with varying observation time $t_{obs}$ of the solution of (\ref{eq:FRADE}).} \label{fig:spatial_kl_div_nobs_512} \end{figure}
It is infeasible in realistic applications to have abundant spatial observations of the concentration field, since obtaining data from each location would require the creation of a different well. Instead, it is more likely that one would have access to time-series observations of concentration at a limited number of locations. To reflect this, time-series data at one location was also used in the Bayesian inference of the eigenvalues. Each time series of $\mean{c}$ observations was sensitive to only the first 5 eigenvalues of $\mathcal{D}$. The time-series data may be sensitive to fewer eigenvalues than the spatial-series data because higher-wavenumber modes are more rapidly damped, so that only data from the early times in the time series are sensitive to them. Once again, increased frequency of observation increased information gain for the informed eigenvalues, as shown in \Cref{fig:kl_div_x_4.0}. The information gain was less sensitive to the location at which the time-series data was collected than it was to the time at which spatial-series data was collected, as shown in \Cref{fig:time_kl_div_nobs_512}. This may be due to the observation locations not being far enough apart to significantly change the information available from the time series. While fewer eigenvalues were informed by time-series data than by spatial-series data, the information gain in those that were informed is similar.
\begin{figure}[h] \centering \includegraphics[scale=0.8]{rawfigs/case1/time_fixed_loc_kl_divergences.pdf} \caption{KL divergences of posteriors relative to priors for Bayesian inference from time-series data with a varying number of observations $N_{obs}$ of the solution of (\ref{eq:FRADE}), evenly spaced over the time period $[0,4]$.} \label{fig:kl_div_x_4.0} \end{figure}
\begin{figure}[h] \centering \includegraphics[scale=0.8]{rawfigs/case1/time_kl_divergences_n_obs_512.pdf} \caption{KL divergences of posteriors relative to priors for Bayesian inference from time-series data at varying observation locations $x_{obs}$ of the solution of (\ref{eq:FRADE}).} \label{fig:time_kl_div_nobs_512} \end{figure}
Both for time-series and spatial-series data, the posterior distributions for cases with abundant ($N_{obs}\!=\!512$) data contained the true values of the eigenvalues in their high-probability regions (see, e.g.~\Cref{fig:case1_n512_posteriors}). Note that the support of the posterior is so concentrated relative to the prior distribution that the prior appears flat at the scale of the posterior. Furthermore, statistics of $\mean{c}$ evaluated using posterior samples of $\Dcal$ and evolved outside the regime of the inference data were consistent with the true evolution, as shown in \Cref{fig:extrapolated_spatial_stats}.
For the sparsest data ($N_{obs}\!=\!32$), the posterior marginal distributions also largely contained the true value of the eigenvalues in their high-probability regions (see, e.g.~\Cref{fig:case1_n32_posteriors}). However, statistics of $\mean{c}$ evaluated using posterior samples of $\Dcal$ and evolved outside the regime of the inference data can yield nonphysical oscillations in the tails of $\mean{c}$ and negative concentrations, as shown in \Cref{fig:oscillating_push_forward}. The likelihood can only penalize oscillations that induce large misfits with the data. Sparse observations allow for oscillations to occur between the data points. Time-series data is especially ill-equipped to penalize oscillations, since they can occur anywhere in the spatial domain, as long as they are not evident when the solution crosses the observation point. This can be seen, for instance, in \Cref{fig:oscillating_push_forward}, where oscillations in the tails are present downstream of the observation point at $x=2.0$. This result highlights the need to impose as many constraints as possible in data-sparse situations, especially when the available data cannot penalize a particular nonphysical behavior. The MLE FRADE solutions from the first-pass optimization used to seed the MCMC chains were observed to be positive in all data scenarios, so the oscillatory behavior and negative concentrations are induced when the set of sensitive eigenvalues is calibrated separately from the rest of the spectrum. The positivity of the PDE solution depends on the spectrum as a whole, so it is postulated that a correlation structure among the eigenvalues could be imposed that would guarantee positivity; however, such a relationship was not determined as part of this work. Correlation matrices computed using posterior samples for the inference using $512$ spatial-series observations at time $0.5$ and for the inference using $32$ time-series observations at $x=2.0$ are presented for the interested reader in \Cref{fig:case1_n512_corr_mat} and \Cref{fig:case1_n32_corr_mat}, respectively.
\begin{figure}[h] \centering \includegraphics[scale=0.8]{{rawfigs/case1/spatial_t_max_0.5_n_obs_512_priors_vs_posteriors}.pdf} \caption{The posterior and prior marginal probabilities for the real and imaginary parts of the first two eigenvalues, inferred from 512 spatial observations of the solution of \eqref{eq:FRADE} equally spaced over the spatial domain, taken at time $t=0.5$. \label{fig:case1_n512_posteriors} } \end{figure}
\begin{figure}[h] \centering \includegraphics[scale=.9]{{rawfigs/case1/spatial_t_max_0.5_n_obs_512_extrapolated_fn_stats}.pdf} \caption{ Statistics of $\mean{c}$ evolved using posterior samples from inference with 512 equally-spaced spatial observations of the solution of (\ref{eq:FRADE}) collected at $t=0.5$.
} \label{fig:extrapolated_spatial_stats} \end{figure}
\begin{figure}[h] \centering \includegraphics[scale=.8]{{rawfigs/case1/time_x_max_2.0_n_obs_32_priors_vs_posteriors}.pdf} \caption{The posterior and prior marginal probability densities for the real and imaginary parts of the first two eigenvalues of $\Dcal$, inferred using 32 time-series observations of the solution of (\ref{eq:FRADE}) taken at $x=2.0$ and uniformly spaced over $[0,4]$ in time.} \label{fig:case1_n32_posteriors} \end{figure}
\begin{figure}[h] \centering \includegraphics[scale=.9]{{rawfigs/case1/time_x_max_2.0_n_obs_32_extrapolated_fn_stats}.pdf} \caption{ Statistics of $\mean{c}$ evolved using posterior samples from inference using 32 time-series observations of the solution of (\ref{eq:FRADE}) taken at $x=2.0$ and uniformly spaced over $[0,4]$ in time.} \label{fig:oscillating_push_forward} \end{figure} \FloatBarrier
\section{Case 2: Data from the direct numerical computation of $\meanc$}\label{sec:case_2}
The study in \Cref{sec:case_1} indicates that it is possible to infer the eigenvalues to which $\mean{c}$ is sensitive, at least when the true operator lies within the class of operators being inferred. Based on this positive result, the same procedure is used in this section to infer the uncertain eigenvalues using data generated from the detailed evolution equations for $c$, \eqref{eq:detailedConsMass}-\eqref{eq:darcy}. The full specification and implementation of this high-fidelity model are detailed in Appendices A, B, and C of \cite{portone_thesis}. Observations of $\mean{c}$ are produced by averaging over an ensemble of evolutions of $c$, generated by solving the high-fidelity model defined in \eqref{eq:detailedConsMass}-\eqref{eq:darcy} for an ensemble of independent, identically distributed samples of the permeability field $\kappa$. The goal of this study is to examine the spectrum of the uncertain linear differential operator inferred using data exhibiting anomalous diffusion, and to compare its spectrum to that of the operators typically used as models of the phenomenon. Whether the inferred spectrum is valid for times not included in the inference is also considered.
\subsection{Data and measurement error}
The data used for the inference was collected as an ensemble average of depthwise-averaged solutions of the high-fidelity model defined in \eqref{eq:detailedConsMass}-\eqref{eq:darcy} with initial condition
\begin{eqnarray*} \begin{aligned} c_0(x,y) = \e{ -\frac{(1-x)^2}{2\beta^2} } \qquad\mbox{with $\beta=0.1$} \end{aligned} \end{eqnarray*}
on the domain $[0,4]\times[0,1]$. The ensemble average was computed over the space of permeability fields distributed according to
\begin{equation*} \begin{aligned} \ln \kappa &\sim \Ncal( 0, C(\mathbf{x},\mathbf{x}') ), \\ C(\mathbf{x},\mathbf{x}') &\equiv \sigma^2 \e{ -\frac{(x - x')^2}{2\ell_x^2} - \frac{( y - y' )^2}{2 \ell_y^2} }. \end{aligned} \end{equation*}
An ensemble of 576 evolutions of $c$ was generated by sampling $\kappa$ and solving \eqref{eq:detailedConsMass}-\eqref{eq:darcy}. These 2D evolutions were averaged in $y$, and the sample mean of these depth-averaged solutions $\mean{c}_y$ was computed and used as data for inference, denoted here $\mean{c}_N$, $N=576$. Observations were taken at all 512 points on the grid in the streamwise direction at time $t=0.4$, or $1/10$ of a flowthrough time ($\sfrac{L_x}{\mean{u}}=4$).
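A single draw from this permeability model can be sketched as follows (a minimal Python sketch on a coarse grid; the variance $\sigma^2$ and correlation lengths $\ell_x$, $\ell_y$ are not specified here, so the values below are placeholders):
\begin{verbatim}
import numpy as np

nx, ny = 64, 16                     # coarse grid, for illustration only
sigma2, lx, ly = 1.0, 0.2, 0.2      # placeholder hyperparameters

X, Y = np.meshgrid(np.linspace(0, 4, nx), np.linspace(0, 1, ny),
                   indexing="ij")
pts = np.column_stack([X.ravel(), Y.ravel()])

# Squared-exponential covariance C(x, x') from the text.
dx = pts[:, :1] - pts[:, :1].T
dy = pts[:, 1:] - pts[:, 1:].T
C = sigma2 * np.exp(-dx**2 / (2 * lx**2) - dy**2 / (2 * ly**2))

# ln(kappa) ~ N(0, C): jittered Cholesky factor times standard normals.
Lc = np.linalg.cholesky(C + 1e-6 * np.eye(len(pts)))
rng = np.random.default_rng(2)
kappa = np.exp(Lc @ rng.standard_normal(len(pts))).reshape(nx, ny)
\end{verbatim}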
\subsection{Likelihood}
The sampling error in the sample mean $\mean{c}_N$ was assumed to be consistent with the multidimensional Central Limit Theorem:
\begin{equation*} \begin{aligned} \dvec = \mean{\cvec}_N + \eps, \quad\quad \eps \sim \Ncal\left( 0, \frac{\Sigma}{N} \right), \end{aligned} \end{equation*}
where $\Sigma$ is the covariance matrix of the distribution of $\mean{c}$, which was estimated using the sample covariance matrix $S_N$. In the tails of the ensemble-averaged concentration field, far from the mode of the pulse, the sample variance approached zero. To avoid numerical issues in inverting the covariance matrix for the likelihood computation, a minimum variance of $10^{-6}$ was imposed, yielding the likelihood
\begin{equation*} \begin{aligned} p( \mathbf{d} | \Thetavec ) &\propto \e{ -\frac{1}{2} \norm{ \mean{\mathbf{c}}( \Thetavec ) - \dvec }^2_{S^{-1/2}} }, \\ S_{ij} &= \fndef{ \max\left( \sfrac{(S_N)_{ij}}{N}, 10^{-6} \right), & i=j, \\ \sfrac{(S_N)_{ij}}{N}, & i\not=j. } \end{aligned} \end{equation*}
\subsection{Results}\label{sec:case2_results}
The eigenvalues of $\Dcal$ were inferred using abundant spatial data to replicate the most successful conditions for inference from \Cref{sec:case_1}. This abundance of spatial observations penalizes nonphysical oscillations in $\mean{c}$ through the likelihood, since the operator is not constrained against this behavior directly. As in \Cref{sec:case_1}, it is expected that if sparse time-series data were used, the same oscillatory behavior could arise. However, the data in this case is smoother than in \Cref{sec:case_1}, where it was corrupted with uncorrelated noise; with sparse data, the likelihood in \Cref{sec:case_1} would be maximized by evolutions of $\mean{c}$ with small fluctuations that fit the noise in the data, whereas no such oscillations occur in the data in this case. Because of this, the oscillations in $\mean{c}$ would likely be much less significant than in \Cref{sec:case_1}, though this possibility was not explored here. The solution $\mean{c}$ from the generalized ADE was sensitive to the first 11 eigenvalues, using the same sensitivity analysis procedure as in \Cref{sec:case_1}. The KL divergences for the informed eigenvalues are shown in \Cref{fig:case2_kl_divs}. The KL divergences are lower than in \Cref{sec:case_1}, presumably because the correlation in the data makes it less informative. The correlation matrix computed using posterior samples is presented for the interested reader in \Cref{fig:case2_corr_mat}. The resulting evolutions of $\mean{c}$ evaluated using posterior samples indicate good agreement with the data used in the inference, as shown in \Cref{fig:case2_function_stats}. However, the solution for $\meanc$ at later times, as in \Cref{fig:case2_extrapolated_stats}, makes it clear that the inferred eigenvalues for $\Dcal$ do not capture the evolution of $\mean{c}$. To reproduce the evolution of $\mean{c}$, the eigenvalues of $\Dcal$ must be time dependent. Since $\Dcal \equiv \nu_p \sfrac{\partial^2}{\partial x^2}+ \Lcal $, the only possible source of this time dependence is $\Lcal$. As discussed in \Cref{sec:operator_formulation}, the eigenvalues of the deterministic operator $\Lcal$ that would reproduce the effects of dispersion on the mean are time dependent, so this does not come as a surprise.
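For reference, the variance-floored likelihood defined at the start of this section can be sketched as follows (a minimal Python sketch; the model and data vectors are placeholders, and only the $1/N$ scaling and the $10^{-6}$ floor on the diagonal follow the text):
\begin{verbatim}
import numpy as np

def log_likelihood(c_model, d, S_N, N, floor=1e-6):
    """Gaussian log-likelihood (up to a constant) with covariance S_N / N
    and a minimum variance imposed on the diagonal, following the text."""
    S = S_N / N
    S[np.diag_indices_from(S)] = np.maximum(np.diag(S), floor)
    r = c_model - d
    return -0.5 * r @ np.linalg.solve(S, r)
\end{verbatim}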
\begin{figure}[h] \centering \includegraphics[scale=.9]{rawfigs/case2/kl_divergences.pdf} \caption{KL divergence of posteriors relative to priors for Bayesian inference from spatial-series data generated from the high-fidelity model.} \label{fig:case2_kl_divs} \end{figure}
\begin{figure}[h] \centering \includegraphics[scale=.9]{rawfigs/case2/function_statistics.pdf} \caption{ Evolutions of $\mean{c}$ evaluated using posterior samples inferred from spatial-series data with 512 observations of the high-fidelity model at $t=0.4$, compared to the inference data. } \label{fig:case2_function_stats} \end{figure}
\begin{figure}[h] \centering \includegraphics[scale=.9]{rawfigs/case2/extrapolated_function_stats_vs_data.pdf} \caption{ Evolutions of $\mean{c}$ at $t=0.5$ and $1.0$ evaluated using posterior samples inferred from a spatial series of 512 observations of the high-fidelity model at $t=0.4$, compared to the high-fidelity model. } \label{fig:case2_extrapolated_stats} \end{figure}
Though it cannot successfully extrapolate in time, the operator was general enough that its posterior was consistent with the data used to infer its eigenvalues. In comparison, the fractional derivative that maximized the likelihood (the FRADE MLE) was not consistent with the data, as shown in \Cref{fig:frade_opt_vs_data}.
\begin{figure}[h] \centering \includegraphics[scale=0.9]{rawfigs/case2/frade_optimization_results_vs_data.pdf} \caption{FRADE solution $\mean{c}$, using the maximum likelihood estimator values of $\nu$ and $\alpha$ for the fractional derivative, compared to the calibration data from the high-fidelity model. } \label{fig:frade_opt_vs_data} \end{figure}
As shown in \Cref{fig:posterior_mean_vs_frade}, the mean of the inferred spectrum of $\Dcal$ exhibits a more complex dependence on wavenumber than would be captured with a fractional derivative. While the magnitude of the fractional derivative eigenvalues grows as a fixed power of $k$, this is not true for the inferred eigenvalues. Additionally, the rate of growth as a function of $k$ is different between the real and imaginary parts of the eigenvalues. Finally, note that the imaginary parts of the inferred eigenvalues are the same order of magnitude as the real parts, in contrast to a gradient-diffusion model of dispersion, which predicts real eigenvalues. These findings indicate that more complex wavenumber dependence, as well as time dependence, are important to the development of an adequate closure model for anomalous diffusion.
\begin{figure}[h] \centering \includegraphics[scale=0.9]{rawfigs/case2/posterior_mean_vs_FRADE.pdf} \caption{The real and imaginary parts of $\mu_k$, computed using the posterior mean values of $r_k$ and $u_k$, compared to the eigenvalues of the fractional derivative with maximum likelihood estimator values of $\nu$ and $\alpha$.} \label{fig:posterior_mean_vs_frade} \end{figure} \FloatBarrier
\section{Conclusions}\label{sec:conclusions}
In this paper, a Bayesian inverse problem was posed to infer the spectrum of an infinite-dimensional differential operator appearing as a closure term in a model for mean contaminant transport through a heterogeneous porous medium. Observations of the state variable on which the operator acts were used as inference data. The operator was parametrized using its eigendecomposition, and physics-based constraints were mathematically imposed on its eigenfunctions and eigenvalues. In this case the operator's eigenfunctions were known to be the Fourier modes.
Remaining uncertainties in its eigenvalues were represented as probability distributions, which were updated using Bayes's theorem. The parameterization of the operator using its eigendecomposition provided a useful insight into its action on the state variable. Most of the relevant physical constraints in the problem translated into straightforward constraints on the eigendecomposition. However, a simple, constructive method to enforce positivity preservation on the operator was not available. This is presumably because of the nonpositivity of the Fourier mode eigenfunctions, rather than because of the infinite-dimensional operator formulation itself. As shown in \Cref{sec:case_1}, with enough spatial data to penalize nonphysical behavior, the lack of this constraint can be overcome to infer an operator that produces physical evolutions. However, for data that does not penalize nonphysical behavior, such as sparse time-series data, the constraint would be necessary. This demonstrates the importance of enforcing as much prior information as possible in scenarios with limited data. In the case where the uncertain operator could exactly represent the underlying dynamics of the problem, as in \Cref{sec:case_1}, the inferred eigenvalues converged to their true values with increasing frequency of observations in time or space. The inherent dimension of the Bayesian inverse problem is limited by the spectral content of the state, as was illustrated through the global sensitivity analysis performed in the different data scenarios. This dimension is independent of the discretization of the problem. Because of this, although the operator is defined to be infinite dimensional, the effective dimensionality of the problem in all cases was relatively small, not exceeding 10 eigenvalues in any of the data scenarios studied. In \Cref{sec:case_2}, the operator's eigenvalues were inferred using data generated from a high-fidelity model that exhibits anomalous diffusion. The inference used observations of the state variable at a single time and at every point in the computational domain. Though the model evaluations using samples from the posterior of the inferred operator were consistent with the calibration data, they were not consistent with observations of the state at later times. This is not surprising since, as discussed in \Cref{sec:operator_formulation}, the operator's eigenvalues would need to be time dependent to fully capture the time-dependent relationship between $\mean{c}$ and $\mean{u'c'}$. The operator formulation posed here is more general than the fractional-derivative and gradient-diffusion models, which are common closures for dispersion, since it does not require that the eigenvalues grow according to a fixed power of $k$. However, even this more general operator could not successfully extrapolate in time. This suggests that a successful closure representation of anomalous diffusion must account for the time dependence of the process, perhaps through use of a richer state description, e.g.,~modeling the evolution of the variance of $c$. This work is an initial step in assessing the feasibility of inferring an uncertain operator appearing in a PDE-based physical model using limited data. It does not address the well-posedness of the infinite-dimensional inference problem; however, the generalization of existing theory for infinite-dimensional Bayesian inverse problems to operators is an interesting future research direction.
The inverse problem was cast in terms of the operator's eigendecomposition, which enabled physical constraints to be encoded deterministically and in a straightforward manner. Additionally, qualitative physical information was encoded through the prior distribution used in the inference. The eigendecomposition formulation of the inverse problem exposed the inherent dimensionality of the problem, based on the number of eigenvalues informed by the data, a quantity that is independent of the discretization. Given the generality of the operator's form, enforcing any known physical constraints is essential to achieving physically meaningful results in cases of sparse data. Nevertheless, the success in inferring the operator's spectrum in \Cref{sec:case_1} suggests this approach is promising. There are several potential extensions to this formulation. First, a straightforward extension to a three-dimensional high-fidelity model with two homogeneous directions would be possible using the presented eigendecomposition parameterization. The eigendecomposition approach is not limited to stochastically-upscaled models; it can be applied to any model in which an invariance to a continuous transformation exists and can be exploited to determine the eigendecomposition of the operator. In this case, translation invariance was exploited to identify the Fourier modes as the eigenfunctions of the operator, but systems with rotational invariance would admit an eigendecomposition in terms of spherical harmonics. More generally, the approach of augmenting limited data with qualitative physical information through the prior distribution of a tractably-parametrized uncertain operator is applicable in the case of nonlinear problems as well, including nonlinear uncertain operators. For weakly nonlinear problems, it may be possible to employ the given formulation on the linearized system along with a low-dimensional parametrization of a nonlinear correction. For strongly nonlinear problems, methods to transform nonlinear equations into more tractable forms using variable transformations and introductions of auxiliary variables, as in \cite{gu2011} and \cite{kramer2019}, could yield tractable nonlinear operators that are amenable to imposing prior physical constraints. For example, prior constraints were placed on a low-dimensional nonlinear operator in \cite{morrison}, although the operator was finite-dimensional rather than infinite-dimensional as was considered here.
\section{Uncertain operator formulation}\label{sec:operator_formulation}
First, deterministic constraints on the operator's form are imposed to encode prior information and to enable inference via the operator's parametrization. These constraints are based on physical and mathematical characteristics of the problem that should not be violated by the introduction of the uncertain operator. For instance, the mean advection-diffusion equation is linear in $\mean{c}$. It is also shift-invariant because of the statistical homogeneity of the underlying medium. Finally, it is an expression of conservation of mass. The deterministic formulation of $\Lcal$ must be defined to respect these constraints. To respect linearity in $\mean{c}$, $\Lcal$ is defined to be a linear operator.
Substituting \eqref{eq:L_def} into the mean evolution equation for $\mean{c}$ yields the system
\begin{eqnarray} \begin{aligned} \diffp{\mean{c}(x,t)}{t} + \mean{u} \diffp{\mean{c}(x,t)}{x} = \nu_p\diffp[2]{\meanc(x,t)}{x} + \Lcal\mean{c}(x,t), \quad x \in (0,L_x), \\ \mean{c}(0,t) = \mean{c}(L_x,t), \\ \mean{c}(x,0) = c_{\,0}(x). \end{aligned} \label{eq:composite_ADE} \end{eqnarray}
Because $\Lcal$ is linear and defined on a finite domain, it can be specified by its eigenvalues and eigenfunctions, $\lambda_k$ and $f_k$, $k\in\Z$. Assuming its eigenfunctions form a basis for the solution space of \eqref{eq:composite_ADE}, its action on $\mean{c}$ can be expressed as
\begin{eqnarray} \begin{aligned} \Lcal \mean{c}(x,t) = \sum_{k=-\infty}^{\infty} \lambda_k c_k(t) f_k(x),\\ \end{aligned} \label{eq:L_eigendecomp} \end{eqnarray}
where $c_k$ are the expansion coefficients of $\mean{c}$. This parametrization enables further constraints to be applied to $\Lcal$ via $\lambda_k$ and $f_k$. The second constraint is shift invariance, which means $\Lcal$ must commute with the spatial shift operator $\Scal_{x'} f(x) = f(x+x')$ for all $x'$. The solution space of \eqref{eq:composite_ADE} is the set of continuously-differentiable periodic functions on the bounded domain $[0,L_x]$. On this domain the shift operator's eigenfunctions are the Fourier modes, $\e{i a_k x},$ where $a_k = 2\pi k / L_x,$ $k\in\Z$. This implies that the Fourier modes are the eigenfunctions of $\Lcal$ as well, since operators that commute share eigenfunctions. Let the Fourier coefficients of $\mean{c}$ be denoted $\mean{\chat_k}$ (note that the averaging operator is applied to the coefficients because it commutes with the Fourier transform). Then the action of $\Lcal$ on $\meanc$ can be expressed in a Fourier series as
\begin{eqnarray*} \begin{aligned} \Lcal\mean{c}(x,t) = \sum_{k=-\infty}^{\infty} \lambda_k \mean{\chat_k}(t)\e{ia_k x}, \quad a_k \equiv \frac{2\pi k}{L_x}. \end{aligned} \end{eqnarray*}
Since the eigenfunctions of $\Lcal$ are known, only its eigenvalues $\lambda_k$ are uncertain and are constrained further. The advection-diffusion equation is a statement of mass conservation, so $\lambda_k$ must be defined so that $\Lcal$'s action does not add mass to the system. Then
\begin{eqnarray*} \begin{aligned} 0 &= \diff{}{t}\int_0^{L_x} \mean{c}(x,t) \d x \\ &= \int_0^{L_x} \diffp{\mean{c}(x,t)}{t} \d x \\ &= \int_0^{L_x} \nu_p \diffp[2]{\mean{c}(x,t)}{x} + \Lcal \mean{c}(x,t) - \mean{u} \diffp{\mean{c}(x,t)}{x}\d x\\ &= \sum_{k=-\infty}^{\infty}\int_0^{L_x} \left(-\nu_p a_k^2 + \lambda_k - \mean{u}ia_k\right)\mean{\chat_k}(t)e^{ia_k x} \d x\\ &= L_x\,\lambda_0 \mean{\chat_0}, \end{aligned} \end{eqnarray*}
where all integrals in the sum vanish except the $k=0$ term because of periodicity. Thus it is sufficient to require $\lambda_0=0$ to conserve mass. Additional constraints based on expected physical behavior can also be imposed. For instance, solutions of this physical system are known to decay with time to a uniform $\mean{c}\equiv \mean{\chat_0}$ as the contaminant is diffused throughout the domain. To ensure this behavior, it is sufficient to require that $\abs{\mean{\chat_k}}$ decay with time $\forall k\not=0$. The Fourier coefficients of the solution $\mean{c}$ to \eqref{eq:composite_ADE} are
\begin{eqnarray*} \mean{\chat_k}(t) = \mean{\chat_k}(0)\e{\left(-\nu_pa_k^2 + \lambda_k - \meanu ia_k\right)t \vphantom{\diff{}{x}}}, \quad k \in \Z.
\end{eqnarray*}
Separating this into its real and imaginary parts yields
\begin{eqnarray*} \mean{\chat_k}(t) = \mean{\chat_k}(0)\e{\left(-\nu_pa_k^2 + \Re\left[\lambda_k\right]\right)t +i\left(\Im\left[\lambda_k\right] - \meanu a_k\right)t\vphantom{\diffp{}{x}}}, \quad k \in \Z. \end{eqnarray*}
Only the real part of the argument in the exponential affects the coefficients' magnitude, so
\begin{eqnarray*} \begin{aligned} \abs{ \mean{\chat_k}(t) } &= \abs{ \mean{\chat_k}(0)}\abs{\;\e{\left(-\nu_p a_k^2 + \Re\left[\lambda_k\right]\right)t} }. \end{aligned} \end{eqnarray*}
Then it is sufficient to require that
\begin{eqnarray} \begin{aligned} -\nu_p a_k^2 + \Re\left[\lambda_k\right] \leq 0 \end{aligned} \end{eqnarray}
to guarantee that the solution fluctuations will not grow with time. An additional property of this system is that the mean concentration should be propagated downstream. The imaginary part of the operator affects the advection of $\meanc$, which can be seen by rearranging the evolution equation of $\mean{\chat_k}$:
\begin{eqnarray*} \begin{aligned} \diff{\mean{\chat_k}}{t} + i\Big( \mean{u} a_k - \Im[\lambda_k]\Big)\mean{\chat_k} = \Big(-\nu_p a_k^2 + \Re[\lambda_k]\Big)\mean{\chat_k}. \end{aligned} \end{eqnarray*}
The velocity at which Fourier mode $k$ propagates is $\mean{u}-\Im[\lambda_k]/a_k$, so to guarantee downstream propagation it is sufficient to require
\begin{eqnarray} \begin{aligned} \mean{u}a_k - \Im[\lambda_k] > 0 \end{aligned} \end{eqnarray}
for all $k>0$ (negative wavenumbers follow from conjugate symmetry). To this point, the physical constraints placed on the operator have resulted in simple constraints on its structure that were easy to impose. A further constraint on $\Lcal$ is that $\meanc$ must remain positive. Determining a constructive constraint to enforce this property is challenging. The typical approach of reformulating the problem in terms of $\log \meanc$ makes the governing equations nonlinear, which precludes the use of the eigenfunction expansion of $\meanc$ to parametrize the action of $\Lcal$. Conditions on the Fourier expansion of a function to guarantee positivity, a likely means of deriving such constraints for $\Lcal$, are an open area of inquiry. Due to these challenges, a positivity constraint was not enforced here. Note that $\Lcal$ should exactly represent the effects of dispersion on the evolution of $\mean{c}$, since, by definition,
\begin{eqnarray*} \begin{aligned} \Lcal\mean{c}(x,t) = - \diffp{\mean{u'c'}(x,t)}{x}. \end{aligned} \end{eqnarray*}
In terms of the Fourier series solution of $\mean{c}$, this equates to $\lambda_k \mean{\chat_k}(t) = - (ia_k) \mean{\widehat{\left(u'c'\right)}_k}(t)$. Solving for $\lambda_k$ yields
\begin{eqnarray*} \begin{aligned} \lambda_k &= \frac{-(ia_k)\mean{\widehat{\left(u'c'\right)}_k}(t)}{\mean{\chat_k}(t)}. \end{aligned} \end{eqnarray*}
This highlights that, unless the ratio $\mean{\widehat{\left(u'c'\right)}_k}(t)/\mean{\chat_k}(t)$ is constant in time, an exact representation would in general require time dependence in $\lambda_k$. This time dependence cannot be recovered directly, however, because $\mean{u'c'}$ cannot be observed. For this initial study, $\lambda_k$ are assumed constant in time for simplicity. It should be noted that this assumption induces inadequacy in the formulation, which will limit its ability to successfully extrapolate in time. This completes the deterministic formulation of $\Lcal$ used in this work.
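To illustrate how these constraints act in practice, the following Python sketch evolves $\mean{c}$ spectrally under a placeholder spectrum chosen to satisfy them ($\lambda_0=0$, $-\nu_p a_k^2+\Re[\lambda_k]\leq 0$, and downstream propagation); the particular eigenvalues are illustrative assumptions, not inferred values:
\begin{verbatim}
import numpy as np

L, N, u_bar, nu_p, beta = 4.0, 512, 1.0, 0.01, 0.1
x = np.linspace(0.0, L, N, endpoint=False)
a = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)        # wavenumbers a_k
c_hat0 = np.fft.fft(np.exp(-(1.0 - x)**2 / (2.0 * beta**2)))

# Placeholder admissible spectrum: lambda_0 = 0, Re <= 0, and an
# imaginary part small enough that every mode propagates downstream.
lam = -0.02 * np.abs(a)**1.5 + 0.3j * u_bar * a

def evolve(t):
    # c_hat_k(t) = c_hat_k(0) exp[(-nu_p a_k^2 + lambda_k - i u_bar a_k) t]
    return np.real(np.fft.ifft(c_hat0 * np.exp((-nu_p * a**2 + lam
                                                - 1j * u_bar * a) * t)))

print(np.isclose(evolve(1.0).mean(), evolve(0.0).mean()))  # mass conserved
\end{verbatim}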
\section{Introduction}
In the past decade, a greater understanding of infinite-dimensional Bayesian inference has developed, but it has largely been in the context of inferring functions \cite{hosseini,stuart}. On other fronts, Bayesian inference of operators has been considered in several contexts, but the problem is either finite-dimensional or can be recast as a field inversion problem. For instance, in \cite{levin2009}, the kernel of a convolution operator was inferred along with an image in a blind deconvolution problem, converting operator inference to field inversion. Much work has focused on inference of the covariance matrix of a multivariate Gaussian distribution, for example to quantify uncertain measurement errors \cite{daniels1999,huang2013}. These matrices are finite-dimensional and do not directly affect the dynamics of the state, only their presumed measurement error. Furthermore, in both of these examples the operator does not affect the dynamics of the problem. In \cite{morrison}, an operator affecting the state dynamics was inferred, but the operator was finite-dimensional. Non-Bayesian methods for inferring an operator affecting state dynamics from observations of the state variable have recently been developed. For instance, in \cite{peherstorfer2016} the operators in a reduced-order model are inferred deterministically using data generated from a higher-fidelity model's output, taken at a variety of times, locations, and model parameter values. The types of operators in question appear in dynamical systems and are often discretizations of differential operators, but the inference is in a deterministic setting. In large part, the operator inference problems mentioned here focus on inferring the elements of a finite-dimensional operator's matrix representation directly. In contrast, in this work prior knowledge of invariance in the modeled system (specifically, translation invariance) is exploited to determine the eigendecomposition of an unknown linear operator, and the inference problem is cast in terms of the operator's eigenvalues. A Bayesian inverse problem is defined to infer the infinite-dimensional differential operator's spectrum using observations of the state variable whose dynamics it affects. A favorable property of this approach is that the dimension of the inverse problem does not depend on the discretization of the problem, as it does when inferring the matrix representation of the discretized operator. Instead, the dimension of the inverse problem is determined by the spectral content of the solution to the inverse problem; that is, how many eigenvalues are informed by the observational data as determined by a global sensitivity analysis. In \cite{pinns,wu2020data}, an operator inference problem is formulated in terms of a differential operator whose symbol in Fourier space is parametrized using a neural network, and deterministic constraints based on physical properties are placed on the Fourier symbol. The problem formulations of \cite{peherstorfer2016,pinns,wu2020data} are most similar to the Bayesian inverse problem posed here; however, all of these methods use the full space- and time-varying evolution of the state variable(s), possibly for multiple initial conditions and/or model parameterizations, to infer the unknown operator. This work instead focuses on the case of limited data, where either a snapshot in time of the spatially varying state or a time-series observation of the state at a specific location is observed.
Data sparsity is a common issue in realistic physical applications. This work is a first step in investigating the feasibility of inferring uncertain or unknown dynamics governing physical phenomena with limited observations. To mitigate the effect of sparse data, physical constraints are imposed deterministically on the operator's formulation. Additionally, qualitative information about the behavior of the system is imposed through the prior distributions defined on the operator parameterization. How the solution to the Bayesian inverse problem depends on the type of data (whether observations are a spatial series or a time series) and on the frequency of observations is explored. With limited data to constrain the operator, the need to encode prior information into its formulation is especially important. How prior information about physical realizability can be encoded in the operator's form is demonstrated here. The Bayesian inverse problem is focused on inferring an uncertain differential operator representing dispersion in a field-scale model of contaminant transport. Development of closure models for this phenomenon is an ongoing effort, so this work also has potential applications as a novel method of deriving such closures. While both practical and theoretical aspects of Bayesian inference of an infinite-dimensional operator must be explored, this work focuses on the practicalities of the problem. Conditions under which it is possible to parametrize the inference problem and challenges that arose during the process will be discussed. \section{Bayesian inference problem specification}\label{sec:ip_formulation} In \Cref{sec:operator_formulation}, the uncertain operator $\Lcal$ was parametrized by its eigenvalues and eigenfunctions, and its eigenfunctions were determined to be the Fourier modes. The only remaining uncertainty in $\Lcal$ is in its eigenvalues $\lambda_k$, for which a Bayesian inverse problem will be defined. Observations of the mean concentration $\mean{c}$ at different times and locations constitute the data for the inverse problem. How many eigenvalues can be inferred, and how precisely, is assessed as a function of the frequency of observation and as a function of whether observations are collected in a time series or across the spatial domain. Samples of the posterior distribution are generated using Markov Chain Monte Carlo (MCMC). Let the right-hand side of \eqref{eq:composite_ADE} be denoted $\Dcal$ so that \begin{eqnarray} \mathcal{D}\mean{c} \equiv \left(\nu_p\diffp[2]{}{x} + \Lcal\right)\mean{c}. \label{eq:D} \end{eqnarray} The uncertainty of $\Lcal$ induces uncertainty in $\Dcal$, whose eigenvalues are denoted $\mu_k$. As discussed in \Cref{sec:operator_formulation}, the eigenfunctions of $\Dcal$ are the Fourier modes because of the shift invariance of $\Lcal$ and the second derivative. The eigenvalues of $\Dcal$ are therefore given by \begin{eqnarray} \mu_k = -\nu_p a_k^2 + \lambda_k, \label{eqn:muLambda} \end{eqnarray} so that inference of $\mu_k$ is equivalent to inference of $\lambda_k$. Inference will be formulated in terms of $\mu_k$ for two reasons. First, it is simpler to enforce the constraint $\Re\left[ \mu_k \right] \leq 0$ to guarantee that $\abs{\mean{\chat_k}}$ decay with time, as discussed in \Cref{sec:operator_formulation}. 
Second, it will allow for direct comparison with another popular model of contaminant transport through heterogeneous media, the fractional advection-diffusion equation (FRADE) \cite{berkowitz2006ctrw,schumer2009fractional}. Given the parametrization of the uncertain operator using its eigendecomposition \eqref{eq:L_eigendecomp}, the goal is to infer the uncertain eigenvalues of $\Dcal$, which forms the right-hand side of a generalized diffusion equation: \begin{eqnarray} \begin{aligned} \diffp{\mean{c}(x,t)}{t} + \mean{u} \diffp{\mean{c}(x,t)}{x} = \Dcal\mean{c}(x,t), \\ \mean{c}(0,t) = \mean{c}(L_x,t), \\ \mean{c}(x,0) = c_0(x). \end{aligned} \label{eq:generalized_ADE} \end{eqnarray} \subsection{Prior specification} Prior distributions are defined on the real and imaginary parts of $\mu_k$, and their posterior distributions are inferred using observations of $\mean{c}$ in a Bayesian inverse problem. Because $\mean{c}$ is real, its Fourier coefficients are conjugate symmetric; that is, $\mean{\chat_{-k}} = \overline{\mean{\chat_k}}$, where $\overline{(\cdot)}$ represents the complex conjugate. As a result, the action of $\Dcal$ on $\mean{c}$ can be expressed using just the eigenvalues associated with positive wavenumbers: \begin{eqnarray*} \begin{aligned} \Dcal\mean{c}(x,t) =2 \,\Re\left[\sum_{k=1}^{\infty} \mean{\chat_k}(t) \mu_k \e{ia_k x} \right], \end{aligned} \end{eqnarray*} where the series is indexed from $1$ because $\mu_0=0$ to conserve mass. An upper bound on the number of eigenvalues to be inferred is $N_k$, the number of terms in a similar Fourier expansion needed to resolve the initial condition $c_0(x)$. This is because the Fourier coefficients of $\meanc$ decay with increasing $k$ and time, so the number of terms needed to resolve $\meanc$ will never exceed those required to resolve $c_0$. It should be noted that for problems that exhibit this property, the dimensionality of the inverse problem for the operator is determined by the spectral content of the solution, not by the discretization of the problem, as it would be if the discretized values of the operator were inferred directly. The prior distributions for the real and imaginary parts of the eigenvalues of $\Dcal$ are defined using the deterministic constraints discussed in \Cref{sec:operator_formulation} and considerations of what values they would plausibly take. For the real part of the eigenvalues, recall that $\Re[\mu_k] \leq 0$ to ensure $\abs{\mean{\chat_k}}$ decay with time. This is a hard upper bound for the real parts that cannot be violated. A plausible lower bound for the real parts is determined by observing that the value of an eigenvalue cannot be inferred from the data if the corresponding Fourier mode is too rapidly damped. The plausible lower bound is therefore set as the eigenvalues of a diffusion operator with diffusion coefficient $\nu_{max}$ that ensures that at least the lowest wavenumber Fourier coefficient does not decay by more than some factor $A$ in magnitude while propagating the length of the domain $L_x$ at the velocity $\meanu$. In this case, the value of $\nu_{max}$ is given by \begin{equation} \nu_{max}=\frac{L_x\meanu\ln A}{4\pi^2}. \end{equation} For the values of $L_x$ and $\meanu$ used in \Cref{sec:case_1,sec:case_2} ($L_x=4$, $\meanu=1$) and $A=10^{10}$ one obtains $\nu_{max}\approx 2.5$, which is used to define the priors below. 
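As a quick arithmetic check of this bound with the stated values ($L_x=4$, $\meanu=1$, $A=10^{10}$):
\begin{verbatim}
import numpy as np
nu_max = 4.0 * 1.0 * np.log(1e10) / (4.0 * np.pi**2)
print(nu_max)  # ~2.33, i.e., roughly the 2.5 quoted above
\end{verbatim}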
The posteriors determined in \Cref{sec:case_1,sec:case_2} are dominated by the likelihood, so the plausible lower bounds defined here have no impact on the inference. The imaginary parts of $\mu_k$ are identical to those of $\lambda_k$, so they must satisfy the same condition to guarantee downstream propagation for this problem:
\begin{eqnarray*} \begin{aligned} \mean{u}a_k - \Im[\mu_k] > 0 \implies \Im[\mu_k] < \mean{u}a_k. \end{aligned} \end{eqnarray*}
This is a hard upper bound on the imaginary parts of the eigenvalues. A plausible lower bound of $\Im[\mu_k] \geq - \mean{u}a_k$ is imposed. This bound implies the contribution to transport from dispersion will be less than or equal to the contribution from advection, which is expected in practice. As with the plausible bound for the real parts of the eigenvalues, the prior is dominated by the likelihood in the posterior, so this specification was determined not to have an effect on the results. For both the real and imaginary parts of the eigenvalues a hard bound is specified on one side of the domain, while a plausible bound has been placed on the other. The hard bound should not be violated, while there is no physical reason that an eigenvalue cannot go outside the plausible bounds that have been set, though exceeding the bounds is considered improbable. To reflect this state of knowledge, the prior distributions for the real and imaginary parts will be defined as exponential distributions, with the hard bounds corresponding to the lower bound of the exponential distribution and the plausibility bounds used to define the scaling coefficients. Let $\Re[\mu_k]\equiv R_k$. The bounds on $R_k$ are $-\nu_{max} a_k^2 \lesssim R_k < 0,$ where the $\lesssim$ denotes the plausible bound. The negative real parts $-R_k$ are then represented using the exponential distribution:
\begin{eqnarray*} \begin{aligned} p(-R_k ) = \e{-(-R_k)/\beta_k}/\beta_k. \end{aligned} \end{eqnarray*}
The scaling coefficients $\beta_k$ are defined so that $95\%$ of the probability mass for each $-R_k$ falls between 0 and $\nu_{max} a_k^2$. This is done using the CDF of $-R_k$, $P(-R_k) = 1 - \e{-(-R_k)/\beta_k}$:
\begin{eqnarray*} \begin{aligned} 0.95 &= P(\nu_{max} a_k^2) = 1-\e{-\frac{\nu_{max} a_k^2}{\beta_k}} \\ &\Downarrow\\ \beta_k &= -\nu_{max} a_k^2 / \ln{0.05}. \end{aligned} \end{eqnarray*}
The negative real parts are bounded from below by zero. This can harm mixing for MCMC algorithms that employ Gaussian proposal distributions, which can generate many samples outside the parameter domain. To avoid this bound, the transformed variables $r_k=\log(-R_k)$ were inferred instead. Their prior distributions can be computed analytically using a variable transformation and are defined as
\begin{eqnarray} \begin{aligned} r_k = \log(-R_k), \\ p(r_k) = \frac{\e{-e^{r_k}/\beta_k + r_k}}{\beta_k}, \end{aligned} \label{eq:real_part_prior} \end{eqnarray}
where $\beta_k$ are the same as for the distributions of $R_k$. This prior distribution and the distribution of $-R_1$ are shown in \Cref{fig:real_prior} for reference.
\begin{figure}[h] \centering \includegraphics[scale=.8]{rawfigs/real_prior_plots.pdf} \caption{The prior distribution for $-R_1$ and $r_1$.} \label{fig:real_prior} \end{figure}
Let $\Im[\mu_k]\equiv I_k.$ The bounds on $I_k$ are $ -\mean{u}a_k \lesssim I_k < \mean{u} a_k$, where again $\lesssim$ denotes the plausible bound. To derive a variable that is consistent with the exponential distribution, let $U_k = \mean{u}a_k - I_k$.
Then $ 0 < U_k \leq 2\mean{u}a_k$, and the scaling coefficients of the exponential distribution are defined so that 95\% of the probability mass falls between these bounds, as was done for $-R_k$. To avoid the hard lower bound on $U_k$, the transformed variables $u_k=\log(U_k)$ were inferred instead. Inference of the parameters $\Thetavec\equiv[r_1,r_2, \cdots, u_1, u_2,\cdots]$ is performed. The real and imaginary parts of the eigenvalues are then recovered using the transformations
\begin{eqnarray} \begin{aligned} R_k &= - e^{r_k}, \\ I_k &= \mean{u}a_k - e^{u_k}. \end{aligned}\label{eq:parameter_mappings} \end{eqnarray}
All parameters are assumed independent, so the infinite-dimensional prior density is defined as
\begin{eqnarray*} p(\thetavec)\equiv \prod_{k=1}^{\infty} p(r_k)p(u_k). \end{eqnarray*}
Recall, however, that the number of inferred eigenvalues is limited to at most $N_k$, where $N_k$ is the number of Fourier modes required to resolve the Fourier expansion of the initial condition. The truncated version of the prior is thus defined as
\begin{eqnarray} p(\thetavec)\equiv \prod_{k=1}^{K} p(r_k)p(u_k), \label{prior} \end{eqnarray}
where $K\leq N_k$. The value of $K$ is determined by a global sensitivity analysis, described in \Cref{sec:inference_implementation}.
\subsection{Likelihood specification}
Several data sets will be considered, but in all cases an additive, normally-distributed measurement error is assumed. The data model is denoted
\begin{eqnarray*} \begin{aligned} d_{i} = \mean{c}(x_i, t_i; \Thetavec) + \eps_i, \quad i = 1, \cdots, N_{obs}, \quad \eps_i \sim \mathcal{N}( 0, \Sigma ), \end{aligned} \end{eqnarray*}
where $\Sigma$ is the measurement error covariance matrix. Then the likelihood is defined as
\begin{equation} \begin{aligned} p(\dvec | \Thetavec ) & = \frac{ \e{ -\frac{1}{2} \norm{ \dvec - \mean{\cvec}}_{\Sigma^{-1/2}}^2 } }{ (2\pi)^{N_{obs} / 2}|\Sigma|^{1/2} }. \end{aligned} \label{eq:likelihood} \end{equation}
\subsection{Inference implementation}\label{sec:inference_implementation}
Before inference, a global variance-based sensitivity analysis is performed to determine to which eigenvalues $\mean{c}$ is most sensitive \cite{gbvsa1,gbvsa2,gbvsa3}. The sensitivity is assessed by computing the Sobol' total-effect index, a measure of the contribution to variance in $\mean{c}$ from varying an eigenvalue alone as well as from its variation along with other eigenvalues. Any eigenvalues whose Sobol' total-effect indices exceed a heuristic threshold are included in the inference, and the rest are fixed at a reasonable value as described below. This heuristic threshold was determined by studying mixing of Markov chains at different threshold values and selecting the value that produced the best mixing. The sensitivity analysis is performed using the Python software package SALib \cite{salib}. To generate samples of the posterior distributions of the eigenvalues using Markov Chain Monte Carlo (MCMC), the Delayed Rejection Adaptive Metropolis (DRAM) algorithm \cite{DRAM}, implemented in Version 1 of the MIT UQ Library (MUQ1) \cite{MUQ}, is used. The starting point of the Markov chain is determined by performing two deterministic optimizations, also using MUQ. The first optimization is performed with the assumption that $\Dcal$ is of the form
\begin{eqnarray*} \begin{aligned} \Dcal = \nu \diffp[\alpha]{}{x}, \end{aligned} \end{eqnarray*}
and $\nu$, $\alpha$ are optimized to maximize the likelihood density.
To generate samples of the posterior distributions of the eigenvalues using Markov chain Monte Carlo (MCMC), the Delayed Rejection Adaptive Metropolis (DRAM) algorithm \cite{DRAM}, implemented in Version 1 of the MIT UQ Library (MUQ1) \cite{MUQ}, is used. The starting point of the Markov chain is determined by performing two deterministic optimizations, also using MUQ. The first optimization is performed with the assumption that $\Dcal$ is of the form \begin{eqnarray*} \begin{aligned} \Dcal = \nu \diffp[\alpha]{}{x}, \end{aligned} \end{eqnarray*} and $\nu$, $\alpha$ are optimized to maximize the likelihood density. The second optimization is initialized at the solution to the first optimization and relaxes the assumption on the form of $\Dcal$, optimizing over $r_k$ and $u_k$ to maximize the posterior. Only the eigenvalues to which $\mean{c}$ is sensitive are optimized and included in the Bayesian inference. To ensure the uninferred eigenvalues are set at reasonable values, they are fixed at the solution of the first optimization. By fixing the insensitive eigenvalues at those of the fractional derivative, the Bayesian inference can be interpreted as finding a correction to the fractional derivative for the sensitive eigenvalues. \subsection{Analysis of MCMC results} Chains of length $3\times 10^5$ were run, and the first $1\times10^5$ samples were discarded as burn-in for each of the cases discussed. The Kullback-Leibler divergence (DKL or KL divergence) is a natural measure of how much information was gained through inference \cite{kl_div}, since it is a measure of how different two probability distributions are from each other. Of interest for this work is the KL divergence between the marginal prior and posterior for each parameter, since this provides an assessment of how much information is gained about the eigenvalues as a function of $k$. The DKL between prior and posterior for a single parameter $\Theta_k$ is defined as \begin{eqnarray*} \begin{aligned} D\Big( p(\Theta_k|\dvec) \;\Big|\Big|\; p(\Theta_k ) \Big) &= \int \ln\left( \frac{p(\Theta_k|\dvec)}{p(\Theta_k)} \right) p(\Theta_k | \dvec) \d \Theta_k. \end{aligned} \end{eqnarray*} Larger values of the KL divergence indicate greater information gain from prior to posterior. If an analytical expression for the marginal posterior were available, this integral could be approximated using Monte Carlo integration by \begin{eqnarray*} \begin{aligned} D\Big( p\left(\Theta_k|\dvec\right) \;\Big|\Big|\; p\left(\Theta_k\right) \Big) \approx \frac{1}{N_s}\sum_{i=1}^{N_s} \ln\left[p\left(\Theta_k^{(i)}\Big|\dvec\right)\right] - \ln\left[p\left(\Theta_k^{(i)}\right)\right], \; \Theta_k^{(i)} \sim p\left(\Theta_k|\dvec\right), \end{aligned} \end{eqnarray*} where $N_s$ is the number of samples used in the sample mean. Because an analytical expression is unavailable, an approximation of the posterior distribution must be provided. It is common to approximate the posterior using a Kernel-Density Estimate (KDE) approximation \cite{kde}, built using samples from the posterior generated using MCMC. However, in this case, MCMC samples of the posterior indicate a nearly-Gaussian posterior, so a Gaussian approximation was used. The KL divergence between a single parameter $\Theta_k$'s marginal posterior and prior is thus approximated by \begin{eqnarray} \begin{aligned} D\Big(p(\Theta_k|\dvec) \,\Big|\Big|\, p(\Theta_k) \Big) \approx \frac{1}{N_s}\sum_{i=1}^{N_s} \ln\left[ p_{GA}\left(\left.\Theta_k^{(i)} \,\right|\, \dvec\right) \right] - \ln\left[ p\left(\Theta_k^{(i)}\right)\right], \; \Theta_k^{(i)}\sim p(\Theta_k | \dvec ), \end{aligned} \label{eq:DKL} \end{eqnarray} where $p_{GA}$ denotes a Gaussian approximation.
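Given the MCMC output, this estimator takes only a few lines; below is a minimal sketch (assuming \texttt{samples} holds the post-burn-in draws of one parameter and \texttt{log\_prior} evaluates the log of its marginal prior density, e.g., the log of \eqref{eq:real_part_prior}):

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def kl_posterior_vs_prior(samples, log_prior):
    # Gaussian approximation p_GA fitted to the posterior samples
    mu, sigma = samples.mean(), samples.std(ddof=1)
    log_post = norm.logpdf(samples, loc=mu, scale=sigma)
    # Monte Carlo estimate of E_post[ log p_GA - log p_prior ]
    return np.mean(log_post - log_prior(samples))
\end{verbatim}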
\section{Application problem description}\label{applicationProblemDescription} When modeling a physical phenomenon, the first step is to bring reliable theory such as conservation laws to bear. Conservation laws generally contain unclosed terms for which models must be introduced to close the equations. Often the correct form for such closure models is unknown, given the information available to the modeler. This paper recasts the closure problem as a Bayesian inverse problem, where the closure model is represented as an uncertain operator acting on the state variable. The application problem studied for this work is field-scale transport of a contaminant through a heterogeneous porous medium. For the purposes of this discussion, the 2D advection-diffusion equation is considered an accurate representation of the relevant physics, with the velocity governed by Darcy's law and assumed incompressible in a medium with uniform porosity. For $\mathbf{x}\equiv(x,y)\in[0,L_x]\times[0,L_y]\equiv\Omega$, let \begin{eqnarray} \diffp[]{c(\mathbf{x}, t)}{t} + \div{\ub(\mathbf{x})c(\mathbf{x},t)} = \nu_p \grad^2 c(\mathbf{x},t), \label{eq:detailedConsMass}\\ \div{\ub} = 0, \label{eq:detailedContinuity}\\ \ub(\mathbf{x}) = -\kappa(\mathbf{x})\grad p(\mathbf{x}), \label{eq:darcy} \end{eqnarray} where $c \in C^{\infty}(\Omega)$ is the concentration field of the contaminant; $\uvec \in C^1(\Omega)$ is the velocity; $\nu_p$ represents pore-scale diffusivity; $\kappa(\mathbf{x})$ represents permeability, a measure of how easily fluid travels through the medium; and $p$ is the pressure field. All fields are assumed periodic in $x$, and $c$ and $\mathbf{u}$ are assumed to satisfy zero-Neumann boundary conditions in $y$. The initial condition and all parameters are assumed known, except for $\kappa$. Because the velocity depends on $\kappa$ through Darcy's law, the permeability indirectly determines the transport of the contaminant. If $\kappa$ were known throughout the entire computational domain, the transport of the contaminant would be completely predictable. In realistic problems, the structure of a permeability field is not available over the entire span of the domain due to limitations in sensing technology. Instead, it is possible to collect samples of the porous medium and study small sections of the domain in a laboratory. Viewing the permeability as a random field, the samples may be used to determine a mean and correlation structure. Assuming $\kappa$ is statistically homogeneous---that is, that its statistics do not depend on absolute location---the statistics determined in the lab are representative of its statistics over the whole domain. Thus, although the detailed behavior of the permeability field is not known, its statistics, and those of the velocity and other quantities that depend on it, can still be predicted. It is common practice to make such assumptions and perform statistical averaging to derive an equation for the transport of the mean contaminant concentration field \cite{bear2010modeling}, and this approach is taken here. Additionally, depthwise ($y$-direction) averaging is performed for two reasons. First, although the evolution of the contaminant is assumed to occur in 2D, observations of the contaminant are limited to a depthwise average due to mixing that occurs when drawing fluid from a well for measurement. Second, the depthwise variation of the contaminant's concentration is not generally the relevant quantity of interest; of more concern is when the average depthwise concentration of the contaminant exceeds a safe threshold downstream of some contaminant source.
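To make the statistical description of $\kappa$ concrete, one common approach (a sketch under assumed parameters, not the configuration used in this work) is to model $\log \kappa$ as a homogeneous Gaussian random field with a prescribed correlation length and to sample realizations spectrally:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
nx, ny, Lx, Ly = 128, 64, 1.0, 0.5     # illustrative grid and domain
ell, sigma2 = 0.05, 1.0                # correlation length, log-variance

kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
ky = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
KX, KY = np.meshgrid(kx, ky, indexing="ij")
# Spectral weights for a squared-exponential covariance (unnormalized)
S = np.exp(-(KX**2 + KY**2) * ell**2 / 4)
noise = rng.normal(size=(nx, ny))
g = np.real(np.fft.ifft2(np.sqrt(S) * np.fft.fft2(noise)))
g *= np.sqrt(sigma2) / g.std()         # rescale to the target log-variance
kappa = np.exp(g)                      # one log-normal permeability sample
\end{verbatim}

Each such realization yields a velocity field through \eqref{eq:detailedContinuity}--\eqref{eq:darcy}, and ensembles of realizations provide the sample statistics used in the averaging below.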
To obtain a set of equations for the statistically- and spatially-averaged concentration, let \begin{eqnarray*} \mean{ f(x,y) } \equiv \frac{1}{L_y}\int_0^{L_y} \mathbb{E}_{\kappa}\left[ f(x,y) \right] \d y \end{eqnarray*} for a random field $f$, where $\mathbb{E}_\kappa$ signifies an expectation over the probability space of $\kappa$. The random field $f$ can thus be written as the sum of its mean and its deviation from that mean: \begin{eqnarray*} f = \mean{f} + f'. \end{eqnarray*} Substituting this decomposition of $c$ and $\mathbf{u}=[u,v]$ into the high-fidelity equations \eqref{eq:detailedConsMass} and \eqref{eq:detailedContinuity} and applying the averaging operator to the equations gives \begin{eqnarray} \begin{aligned} \diffp[]{\meanc(x,t)}{t} + \mean{u}\diffp[]{\meanc(x,t)}{x} + \diffp[]{\mean{u'c'}(x,t)}{x} = \nu_p \diffp[2]{\mean{c}(x,t)}{x}, \\ \mean{c}(0,t) = \mean{c}(L_x,t), \\ \mean{c}(x,0) = c_0(x). \end{aligned} \label{eq:averaged} \end{eqnarray} Note that $\meanc$ is assumed periodic with period $L_x$, which enables the computationally-efficient solution of \eqref{eq:averaged} using a Fourier-series expansion. The periodicity assumption is valid provided the velocity fluctuations are homogeneous with correlation lengths small compared to the period (in this work correlation lengths did not exceed 10\% of $L_x$), and the contaminant is confined to a region that is small compared to the period (in this work simulations were stopped before the contaminant pulse reached the edges of the domain). Furthermore, $\mean{u}$ is constant, since by continuity \begin{eqnarray*} \begin{aligned} 0 = \diffp{\mean{u}}{x} + \diffp{\mean{v}}{y} = \diffp{\mean{u}}{x}. \end{aligned} \end{eqnarray*} This system of equations is exact but unclosed because of the second-order fluctuating term $\mean{u'c'}$. This term is often called the dispersive flux, and $\sfrac{\partial \mean{u'c'}}{\partial x}$ is herein called dispersion. A typical closure model for $\mean{u'c'}$ is gradient diffusion \cite{bear2010modeling}. However, it is well known that gradient diffusion can be an inadequate model for field-scale transport through heterogeneous media when dispersion dominates pore-scale diffusion and strong heterogeneities induce anomalous diffusion \cite{levy2003, heterogeneousFlow, neuman2009perspective}. This work investigates the possibility of deriving a more general closure model for dispersion in scenarios where anomalous diffusion is significant. This is done by defining the problem so that it is in a high-P\'eclet number regime, to ensure dispersion dominates pore-scale diffusion in \eqref{eq:detailedConsMass}. In all cases herein the P\'eclet number is $\text{Pe}\equiv \langle u\rangle L_y / \nu_p = (1)(1)/(0.01) = 100$. Additionally, the permeability fields $\kappa$ are specified with sufficient heterogeneity that anomalous diffusion is observed. Further details on how the problem was defined to produce anomalous diffusion are provided in \cite{portone_thesis}. \subsection{Uncertain operator as closure model} As an alternative to gradient diffusion, here we pursue the Bayesian inference of an uncertain operator acting on $\mean{c}$ to represent dispersion, defined such that \begin{eqnarray} \begin{aligned} \Lcal \mean{c} &= - \diffp{\mean{u'c'}}{x}. \end{aligned} \label{eq:L_def} \end{eqnarray} Because $\Lcal$ acts on $\mean{c}$ and appears in its evolution equation, it affects the dynamics of the mean evolution.
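With the periodic setting, each Fourier mode of $\mean{c}$ evolves independently once a closure is specified. The sketch below illustrates one plausible reading of these per-mode dynamics, assuming $\Lcal$ acts diagonally on the modes $e^{i a_k x}$ with eigenvalues $\mu_k$ (so that negative real parts damp and imaginary parts below $\mean{u}a_k$ preserve downstream propagation, consistent with the bounds imposed earlier); all parameter values are illustrative:

\begin{verbatim}
import numpy as np

Lx, u_mean, nu_p = 1.0, 1.0, 0.01          # illustrative parameters
K = 32
a = 2 * np.pi * np.arange(1, K + 1) / Lx   # wavenumbers a_k
mu = -0.1 * a**2 + 0.0j                    # placeholder eigenvalues of L

def advance(c_hat, t):
    # Per-mode dynamics: d c_k / dt = (-i <u> a_k - nu_p a_k^2 + mu_k) c_k,
    # i.e., mean advection, pore-scale diffusion, and the closure term L.
    lam = -1j * u_mean * a - nu_p * a**2 + mu
    return c_hat * np.exp(lam * t)
\end{verbatim}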
The aim of this work is to assess the feasibility of inferring $\Lcal$ using only observations of $\mean{c}$ at varying locations and times. To enable inference and to encode prior information in the operator's form, $\Lcal$ is parametrized by its eigendecomposition, and relevant physical constraints are enforced on its eigenfunctions and eigenvalues in \Cref{sec:operator_formulation}. Remaining uncertainties in the operator's parametrization are represented using prior distributions, discussed in \Cref{sec:ip_formulation}. In \Cref{sec:case_1}, Bayesian inference is performed using data generated from a model for which the operator's form is known \textit{a priori} to assess the feasibility of inferring the operator from data. Based on the findings of \Cref{sec:case_1}, in \Cref{sec:case_2} the operator is inferred using sample statistics of $\mean{c}$ computed from solutions of the detailed model \eqref{eq:detailedConsMass}-\eqref{eq:darcy}. The validity of the inferred operator as a closure model for $\sfrac{\partial \mean{u'c'}}{\partial x}$ is also assessed. Finally, conclusions and future work are discussed in \Cref{sec:conclusions}.
{ "timestamp": "2021-09-27T02:01:28", "yymm": "2105", "arxiv_id": "2105.01807", "language": "en", "url": "https://arxiv.org/abs/2105.01807" }
\section{Introduction} \label{sec:intro} The tree path minimum query problem is to find the minimum weight along the simple path from one node to another on a tree. It contributes notably to the problem of minimum spanning tree verification \citep{komlos1985linear,king1997simpler}, which in turn has been shown to imply an efficient randomized minimum spanning tree algorithm \citep{karger1995randomized}. To verify whether a spanning tree is minimum, we only need to check, for each edge not on the spanning tree, that its weight is at least the maximum weight along the simple path between its two endpoints on the spanning tree. This only requires an `offline' solution to path minimum query, by which we mean that all queries are given as a large batch and the algorithm is allowed to process them simultaneously. For this reason, this `offline' version of the problem has been studied extensively in the literature. The first linear algorithm was presented by \citet{komlos1985linear}, but it is linear only in the number of comparisons used; the algorithm itself is far from linear-time. \citet{dixon1992verification} gave the first \emph{truly} linear-time algorithm, but this initial proposal is hard to implement. Motivated by this, several simplifications were later made based upon Komlos's algorithm, including King's algorithm based on Boruvka trees \citep{king1997simpler} and Hagerup's algorithm based on set theory \citep{hagerup2009even}. However, if the queries come in \emph{one-by-one}, and we are required to prepare an oracle to answer these queries online, then none of the above algorithms still work. As a special case, the famous range minimum query problem admits a linear-time solution in word RAM which can answer the queries in constant time \citep{bender2000lca}. However, this algorithm builds upon the Cartesian tree of the sequence, which cannot be built in linear time on a general tree (as we will discuss in Section \ref{sec:pre:cartesian}). In fact, answering these queries online is intrinsically difficult, in the sense that a lower bound is known: $\Omega(n \log \lambda_k(n))$ pre-processing time is necessary to answer queries within $k$ comparisons \citep{pettie2006inverse}, where $\lambda_k(n)$ is the inverse of the Ackermann function along the $k$-th row. We will formally define this function in Section \ref{sec:pre:ackermann}. Beyond this result, algorithms are known that nearly match this lower bound. Building upon Yao's algorithm for partial sums in a linear list \citep{yao1982space}, \citet{alon1987optimal} proposed an algorithm that builds an oracle in $O(n \lambda_k(n))$ time and space to answer the queries within $4k$ comparisons. Another approach was presented by \citet{chazelle1987computing} in the same year, with the same preprocessing time and $2k+O(1)$ query comparisons\footnote{In fact, these two algorithms work for a more general setting called semi-group queries, where we query the sum of all weights on a path in a semi-group.}. Based on the two algorithms above, \citet{pettie2006inverse} claimed an oracle which can answer each query in $2k$ comparisons and can be constructed in $O(n \log \lambda_k(n))$ time and space. This is the best known result for this problem, but it gives no guarantee on the query time of the oracle. In this paper, we present the first algorithm which, while keeping the best known preprocessing time and number of query comparisons, gives a near-optimal bound on the query time required in word RAM.
Particularly, our algorithm constructs an oracle in $O(n \log \lambda_k(n))$ time and space which is able to answer the queries within $2k$ comparisons and in $O(k + \log \lambda_k(n))$ time. If $O(k)$ query time is intended, then we need to loosen either the preprocessing time to $O(n \lambda_k(n))$ or the number of comparisons to $2k + 2$. Moreover, our algorithm is, in our eyes, much simpler than the previous ones, in both the algorithm itself and its analysis. \subsection{Intuition} Our algorithm is based on the Boruvka trees introduced by \citet{king1997simpler}. Briefly speaking, the Boruvka tree is the structure built during Boruvka's maximum spanning tree algorithm. It has the beautiful property of preserving path minimum queries: the minimum weight on the path between any two nodes in a tree is exactly equal to the minimum weight on the path between the two corresponding nodes in its Boruvka tree. Boruvka trees also have some additional useful properties, making it easier to handle path minimum queries on them than on general trees. These properties are similar to those of full binary trees: all leaves of a Boruvka tree have the same depth, and we will show in Section \ref{sec:alg:balanced-boruvka} that we can further make the number of children of each internal node between $2$ and some constant $c$. These properties give us an upper bound on the number of vertices with small depth. Hence we can preprocess all vertices with small depth (depth smaller than a threshold $s$) via a trivial algorithm, which is affordable since the number of such vertices is not too large. Then we divide the rest of the tree into many smaller parts, each of which can be solved recursively. For any query, we can then divide the path into three segments, with the middle one having depth less than $s$ and the other two lying in smaller subtrees. By carefully setting the threshold $s$, we can construct, in $O(n \lambda_1(n))$ time, an oracle which can answer the queries with $2 \lambda_1(n)$ comparisons. With a simple trick at preprocessing time, the number of comparisons can be reduced to $2$. Moreover, the queries can be answered in constant time. We can then repeat this process, replacing the simple algorithm used to preprocess the small-depth cases with the above non-trivial algorithm, which has $O(n \lambda_1(n))$ preprocessing time and $2$ query comparisons. By re-choosing the thresholds, we can construct an oracle in $O(n \lambda_2(n))$ time to answer the queries with $4$ comparisons. Repeating this for $k$ steps, we get an oracle that answers the queries with $2k$ comparisons and can be constructed in $O(n \lambda_k(n))$ time; the query time is bounded by $O(k)$. By analyzing the bottleneck of constructing the oracle, the preprocessing time can be further reduced to $O(n \log \lambda_k(n))$ by sacrificing either $2$ additional comparisons or $O(\log \lambda_k(n))$ query time. \section{Preliminaries} \label{sec:pre} In this paper, we mainly focus on edge-weighted trees $T = \code{V, E, W}$, where $V$ is the set of vertices, $E$ is the set of edges, and $W$ corresponds to the weights of the tree. We usually use $n$ to denote the number of vertices in the tree. For convenience, we use $fa_u$ to denote the father of $u$ and $ch_u$ to denote the \emph{children set} of $u$. For rooted trees, we can further define the \emph{depth} of a node to be the number of vertices on the simple path from the root to it, denoted by $dep_u$. In particular, the root has depth $1$.
We call the maximum depth of all nodes the \emph{height} of the tree. We also define the lowest common ancestor of two nodes $u$ and $v$ in the usual way, denoted by $LCA(u,v)$. The problem we consider can be formalized as follows: for a fixed tree $T$, construct a data structure which, on a query of the form $(u,v)$, answers the minimum weight of all edges on the unique simple path from $u$ to $v$. We note here that this edge-weighted version is in fact equivalent to the node-weighted variant, up to a linear pre-processing overhead, since we can reduce the latter to the former by taking the weight of each edge to be the smaller of the weights of its two endpoints, and conversely by adding dummy nodes on the edges. What is critical in this problem is that the queries arrive one by one, and the data structure we construct needs to answer them \emph{online}. For the sake of simpler notation, if an algorithm requires $T_1(n)$ preprocessing time and can then answer any query within $T_2(n)$ comparisons and $T_3(n)$ additional time in the word RAM model, then we denote its complexity by $\onlinecompf{T_1(n)}{T_2(n)}{T_3(n)}$. For example, the algorithm we are going to propose has complexity $\loglambdacompf{2k}{O(k+\log \lambda_k(n))}$. \subsection{Boruvka trees} \label{sec:pre-boruvka} The key component we utilize is the Boruvka tree, introduced by \citet{king1997simpler}, which is the hierarchical structure constructed during the process of Boruvka's algorithm. Initially, a leaf is constructed in the Boruvka tree for each node in the original tree $T$. Then we perform one iteration of Boruvka's algorithm, which marks the minimum edge incident to each node and shrinks the connected components formed by marked edges into hypernodes. For each shrunk connected component, say $C$, we create a node $v_C$ for it in the Boruvka tree; then for any $v \in C$, we set the father of $v$ in the Boruvka tree to be $v_C$, with the edge weight equal to the weight of the incident edge selected by $v$ in this iteration. We then use the $v_C$'s to represent the hypernodes created in this iteration, and repeat the iterations until only one hypernode, containing the whole tree, is left. Since the time complexity of Boruvka's algorithm on trees is $O(n)$, we can construct the Boruvka tree in linear time. Unless stated otherwise, we consider the \emph{maximum spanning tree} variant of Boruvka's algorithm in the rest of this paper. For convenience, for a tree $T$, we denote the corresponding Boruvka tree (given by the \emph{maximum spanning tree} variant of Boruvka's algorithm) by $B(T)$. For each node $u$ in $T$, we denote the corresponding leaf in $B(T)$ by $B(u)$. The key property of such a Boruvka tree is captured in \citep{king1997simpler}, which relates the path minimum query problems on $T$ and $B(T)$. \begin{lemma}[Theorem 1 of \citep{king1997simpler}] \label{lmm:boruvka-eq} Let $T$ be any edge-weighted tree, and $B(T)$ be its corresponding Boruvka tree (given by the \emph{maximum spanning tree} variant of Boruvka's algorithm); then for any pair of vertices $u, v$ in $T$, the minimum weight on the path between $u$ and $v$ in $T$ is equal to the minimum weight on the path between $B(u)$ and $B(v)$ in $B(T)$. \end{lemma} This lemma tells us that in order to answer a path minimum query on $T$, we only need the minimum weight on the corresponding path in $B(T)$, which reduces path minimum query on \emph{general trees} to \emph{Boruvka trees}.
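To make the construction concrete, the following is a short Python sketch of building $B(T)$ for the maximum variant (illustrative and not the optimized linear-time implementation):

\begin{verbatim}
def build_boruvka_tree(n, edges):
    # edges: list of (a, b, w) forming a tree on vertices 0..n-1.
    # Returns (parent, wpar): Boruvka-tree parent pointers and the weight
    # on the edge to the parent; leaves 0..n-1 are the original vertices,
    # internal nodes are numbered from n upward.
    parent, wpar = {}, {}
    active, cur_edges, nxt = list(range(n)), list(edges), n
    while len(active) > 1:
        # each hypernode marks its maximum-weight incident edge
        best = {}
        for a, b, w in cur_edges:
            if a not in best or w > best[a][0]: best[a] = (w, b)
            if b not in best or w > best[b][0]: best[b] = (w, a)
        # components formed by marked edges, found via union-find
        root = {v: v for v in active}
        def find(x):
            while root[x] != x:
                root[x] = root[root[x]]
                x = root[x]
            return x
        for v in active:
            root[find(v)] = find(best[v][1])
        comp = {}
        for v in active:
            r = find(v)
            if r not in comp:
                comp[r] = nxt
                nxt += 1
            parent[v], wpar[v] = comp[r], best[v][0]
        # shrink: keep only edges joining distinct hypernodes
        cur_edges = [(parent[a], parent[b], w)
                     for a, b, w in cur_edges if parent[a] != parent[b]]
        active = sorted(comp.values())
    return parent, wpar
\end{verbatim}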
In the meantime, Boruvka trees have two wonderful properties: \begin{itemize} \item all leaves in a Boruvka tree have the same depth, \item all internal nodes in a Boruvka tree have at least two children. \end{itemize} Both properties can be shown directly from the definition, so a detailed proof is omitted. The point is that they allow us to bound the number of vertices near the root. \begin{lemma} \label{lmm:boruvka-subtree} For a Boruvka tree $B$ of height $h$, the number of nodes with depth not larger than $k$ is at most $\frac{n}{2^{h-k}}$. \end{lemma} \begin{proof} Since each internal node has at least two children, for a node $u$ at depth $k$, the subtree rooted at $u$ has size at least $2^{h-k+1}-1$. Also, as in a full binary tree, the number of nodes with depth $< k$ is smaller than the number of nodes with depth exactly $k$. Suppose that the number of vertices with depth less than $k$ is $a$ and the number of vertices with depth exactly $k$ is $b$; then $a < b$, and the number of vertices with depth not smaller than $k$ is at least $(2^{h-k+1}-1)b$. Hence, the maximum number of vertices with depth not larger than $k$ is given by the following linear program: \begin{align*} \max.\ \ & a + b \\ s.t.\ \ & a < b \\ & a + (2^{h-k+1}-1)b \le n. \end{align*} Solving this completes the proof. \end{proof} As a simple corollary, it follows that any Boruvka tree has height $O(\log n)$. \subsection{Cartesian trees} \label{sec:pre:cartesian} Another useful data structure is the Cartesian tree, introduced by \citet{vuillemin1980unifying} to capture the order of elements in a permutation. This idea can naturally be generalized to trees. For simplicity, in the rest of this sub-section, we consider node-weighted trees; as we have noted earlier, this can easily be translated into the edge-weighted version. For an unrooted, node-weighted tree $T$, we can define the corresponding Cartesian tree $C(T)$ as a rooted tree satisfying the following three properties: \begin{itemize} \item the nodes of $C(T)$ are in bijective correspondence with the nodes of $T$, \item for each subtree of $C(T)$, all of its nodes form a connected subgraph in $T$, \item $C(T)$ satisfies the heap property, i.e., the weight of each node cannot be greater than the weights of its children. \end{itemize} If the weights of the vertices are pairwise different, then the Cartesian tree is unique, since to satisfy the heap property we must recursively select the vertex with the smallest weight as the root. Assuming that we already know the order of all weights (for example, the weights are given as a permutation), the Cartesian tree of a tree can be constructed in linear time \citep{chazelle1987computing} using the linear-time union-find on trees given by \citet{gabow1985linear}. With Cartesian trees, we can then easily answer tree path minimum queries, since the node with the minimum weight on the path between $u$ and $v$ is exactly the lowest common ancestor of $u$ and $v$ in $C(T)$. The lowest common ancestor can be computed in constant time after linear-time preprocessing \citep{bender2000lca}, so we get a simple $\onlinecompf{O(n \log n)}{0}{O(1)}$ algorithm for tree path minimum query, with the preprocessing bottleneck being sorting all the weights while building the Cartesian tree. In fact, this sorting is unavoidable in building Cartesian trees for trees.
For any sequence $a_1, a_2, \dots, a_n$, we can construct a tree of size $n+1$ in the following manner: the $i$-th vertex (for $1 \le i \le n$) has weight $a_i$ and has an edge to the $(n+1)$-th vertex, which has a weight larger than any of the other $n$ elements. Building the Cartesian tree of this tree then solves sorting, hence giving an $\Omega(n \log n)$ lower bound for building Cartesian trees. \subsection{Ackermann function} \label{sec:pre:ackermann} The Ackermann function \citep{ackermann1928hilbertschen} is widely used in complexity analysis. Here we give a slightly modified definition of this function and its inverses: \begin{definition}[Ackermann Function] \label{def:ackermann} \begin{equation*} A(m,n) = \begin{cases} 2^n & m = 0 \\ A(m-1,1) & m \ge 1, n = 0 \\ A(m-1,A(m,n-1)) & m \ge 1, n \ge 1 \end{cases}. \end{equation*} \end{definition} \begin{definition}[Inverse of Ackermann Function along Columns] \label{def:ackermann-inverse-column} \begin{equation*} \alpha (m,n) = \min \left\{ i \ge 1: A \left(i, \left\lfloor \frac{m}{n} \right\rfloor\right) \ge n \right\}. \end{equation*} For simplicity, we further use $\alpha(n)$ as a shorthand for $\alpha(n,n)$. \end{definition} \begin{definition}[Inverse of Ackermann Function along Rows] \label{def:ackermann-inverse-row} \begin{equation*} \lambda_k (n) = \min \left\{ j \ge 1: A(k,j) \ge n \right\}. \end{equation*} \end{definition} From the definition, it can be seen immediately that \begin{equation} \label{eq:ackermann-inverse-eq} \lambda_{\alpha(n)}(n) = 1. \end{equation}
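For intuition, these definitions translate directly into code; the sketch below computes $\lambda_k(n)$ and $\alpha(n)$ by brute force (values of $A$ explode so quickly that only tiny arguments are ever evaluated in practice):

\begin{verbatim}
def A(m, n):
    # A(0, n) = 2^n; A(m, 0) = A(m-1, 1); A(m, n) = A(m-1, A(m, n-1))
    if m == 0:
        return 2 ** n
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

def lam(k, n):
    # lambda_k(n) = min { j >= 1 : A(k, j) >= n }
    j = 1
    while A(k, j) < n:
        j += 1
    return j

def alpha(n):
    # alpha(n) = alpha(n, n) = min { i >= 1 : A(i, floor(n/n)) >= n }
    i = 1
    while A(i, 1) < n:
        i += 1
    return i
\end{verbatim}

For example, \texttt{lam(1, n)} grows like the iterated logarithm $\log^* n$, and \texttt{lam(alpha(n), n)} evaluates to $1$, matching \eqref{eq:ackermann-inverse-eq}.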
\section{Main algorithm on Boruvka trees} \subsection{Balanced Boruvka trees} \label{sec:alg:balanced-boruvka} We first present a modified version of Boruvka's algorithm. Recall that in each round, the algorithm picks, for each vertex, the edge incident to it with the maximum weight (again we emphasize that we are finding the maximum spanning tree), and shrinks all the connected components formed by picked edges into hypernodes. In our modified version, we take a constant $c$ and, at the beginning of each round, first make the degree of each node in the tree not larger than $c$ by splitting nodes with large degrees. This introduces at most $\frac{n}{c-2}$ additional nodes. Then, after picking the edges with maximum weights, we repeatedly drop the middle edge of any simple path of length $3$ formed by picked edges. This makes the diameter of every remaining connected component not greater than $2$, while preserving the invariant that there is at least one picked edge adjacent to each node. Then we shrink the connected components in the normal way. \begin{algorithm}[htb] \caption{A modified Boruvka's algorithm} \label{alg:modified-boruvka} \begin{algorithmic} \While{The graph has more than $1$ node} \While{There is a node $u$ with degree larger than $c$} \State Split $u$ into two nodes both with weights equal to $u$ \EndWhile \State For each vertex, mark the edge incident to it with maximum weight \While{There is a path $u-a-b-t$ of length $3$ formed by marked edges} \State Unmark the edge $(a,b)$ \EndWhile \State Shrink all connected components formed by marked edges into hypernodes \EndWhile \end{algorithmic} \end{algorithm} The reason why we apply this modification is that it ensures the size of each shrunk connected component is not larger than $c+1$, since its diameter is not greater than $2$ and the degree of each node is not greater than $c$. In the meantime, since we still guarantee that each vertex has at least one marked edge incident to it, the number of vertices is at least halved after the shrinkage. Hence the size of the shrunk tree after each round is bounded by \begin{equation*} \frac{1}{2} \left( n + \frac{n}{c-2} \right) = \frac{c-1}{2c-4} n. \end{equation*} By picking $c \ge 4$, we have $\frac{c-1}{2c-4} < 1$, so the algorithm is still linear, meaning that the size of the corresponding Boruvka tree is linear as well. The tree generated by this modified Boruvka's algorithm has all the properties in Section \ref{sec:pre-boruvka}. In addition, the number of children of each node can never be greater than $c+1$ (i.e., $\lvert ch_u \rvert \le c+1$). Thus the degree of each node is bounded by a constant, which gives us the tight bound $h = \Theta(\log n)$, where $h$ is the height of the tree and $n$ is the number of vertices in it. We call it the \emph{balanced Boruvka tree} of the original tree $T$, denoted by $B'(T)$. \subsection{Basic algorithm} \label{sec:alg:basic} By Lemma \ref{lmm:boruvka-eq}, we can translate the problem on $T$ to the equivalent problem on $B'(T)$ in linear time, so we now only consider the queries on $B'(T)$. Suppose that the tree has $n$ vertices and height $h$; then $h = \Theta(\log n)$. In the balanced Boruvka tree $B'(T)$, the size of $ch_u$ for all nodes $u$ is bounded by a constant $c' = c + 1$. We now present two simple ways of getting an $\onlinecompf{O(nh)}{0}{O(1)}$ algorithm on such a balanced Boruvka tree $B'(T)$. The first one applies the Cartesian tree. As discussed in Section \ref{sec:pre:cartesian}, we can achieve $\onlinecompf{O(n \log n)}{0}{O(1)}$ by constructing the corresponding Cartesian tree. Since $h = \Theta(\log n)$, $O(n \log n)$ is equivalent to $O(nh)$. The other method is more straightforward. At each node $u$, we maintain the sorted order of the answers from $u$ to every node $v$ in the subtree of $u$ in $B'(T)$. By applying merge sort on this tree structure, the time complexity to process all the nodes is (here we abuse notation: $ch_v$ and $dep_v$ are defined on $B'(T)$, and $B'(T)(u)$ denotes the subtree of $B'(T)$ rooted at $u$) \begin{equation*} \sum_{v \in V} \lvert ch_v \rvert \cdot \lvert B'(T)(v) \rvert \le \sum_{v \in V} c' \cdot \lvert B'(T)(v) \rvert = c' \sum_{v \in V} dep_v \le c'nh = O(nh). \end{equation*} A query can then be answered by comparing the precomputed result from $u$ to $LCA(u,v)$ with that from $v$ to $LCA(u,v)$. This gives us an $\onlinecompf{O(nh)}{0}{O(1)}$ algorithm. \subsection{Recursion to speed up preprocessing} \label{sec:alg:recursion-1} Our intuition is to somehow reduce either $n$ or $h$ to make the algorithm affordable. Let us set a threshold $s$, and solve the cases where both ends of the query have $dep \le s$ with the basic algorithm. By Lemma \ref{lmm:boruvka-subtree}, the number of such nodes is $O \left(\frac{n}{2^{h-s}}\right)$. Setting $s = h - \log h$, the complexity of this preprocessing becomes $O \left( \frac{n}{2^{\log h}} \cdot h \right) = O(n)$, which is efficient. We then solve the remaining cases recursively. Formalizing the idea above, we set $m$ thresholds $s_1, s_2, \dots, s_m$, where $s_i = h - \log^{(i)} h$, with \begin{equation*} \log^{(i)} h = \begin{cases} h, & i = 0 \\ \log \log^{(i-1)} h, & i > 0 \end{cases}. \end{equation*} Then $m = \log^* h = \lambda_1(h)$. For simplicity, we assume that $s_0 = 0$.
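A toy sketch of this threshold computation (assuming base-2 logarithms and rounding up, which the asymptotic analysis is insensitive to):

\begin{verbatim}
import math

def thresholds(h):
    # s_i = h - log^{(i)} h, stopping once the iterated log reaches 1;
    # the number of layers is m = log* h = lambda_1(h)
    s, x = [], h
    while x > 1:
        x = math.log2(x)
        s.append(h - math.ceil(x))
    return s

# e.g. thresholds(16) == [12, 14, 15]: three layers, log*(16) = 3
\end{verbatim}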
For each layer $i$, we process all the nodes within the depth range from $s_{i-1}$ to $s_i$ using the basic algorithm. Since the number of nodes in each layer is bounded by Lemma \ref{lmm:boruvka-subtree}, the complexity of each layer is \begin{equation*} O \left( \frac{n}{2^{\log^{(i)} h}} \cdot \log^{(i-1)} h \right) = O(n). \end{equation*} Since there are $m = \lambda_1(h)$ layers, the total preprocessing time complexity is $O\left(n \lambda_1(h)\right)$. In addition, for each node $u$ and layer $i$, let $p_{u,i}$ be the ancestor of $u$ with depth exactly $s_i$; we compute the minimum weight on the path from $u$ to $p_{u,i}$ in advance. This can also be done in $O\left(n \lambda_1(h)\right)$ time and space. To answer a query $(u,v)$ with the above information, we first find the lowest common ancestor $l = LCA(u,v)$. Suppose that $l$ is in layer $i$; we then split the full path into three segments: the part in layer $i$, and the parts from $u$ and from $v$ to layer $i$, respectively. The answers for all these segments can be found directly in the preprocessed information, so we only need $2$ comparisons to find the minimum among them. In this way, we obtain a solution with time and space complexity $\onlinecompf{O \left( n \lambda_1(h) \right)}{2}{O(1)}$. \subsection{Recursion of recursions} \label{sec:alg:recursion-2} We call the algorithm described in the previous sub-section the first step, and denote by $f_1(n,h)$ its preprocessing time and space complexity; then $f_1(n,h) = O \left( n \lambda_1(h) \right)$. Our intuition is to repeat this process with different thresholds. For $k \ge 2$, suppose that the complexity of step $k-1$ is $f_{k-1}(n,h) = O \left( n \lambda_{k-1}(h) \right)$; we now consider the $k$-th step. We set the thresholds $s_i$ to be $h - \lambda_{k-1}^{(i)} (h)$, with \begin{equation*} \lambda_{k-1}^{(i)}(h) = \begin{cases} h, & i = 0 \\ \lambda_{k-1}(\lambda_{k-1}^{(i-1)}(h)), & i > 0 \end{cases}, \end{equation*} so the number of layers is $m = \lambda_k (h)$. We process each layer in the same way as in Section \ref{sec:alg:recursion-1}, except that we handle the nodes with depth exactly $s_i$ individually. Specifically, we preprocess the nodes within the depth range from $s_{i-1}+1$ to $s_i-1$ using the algorithm of step $k-1$, whose preprocessing time complexity is $f_{k-1}(n,h)$, and precompute the answers from each node to its ancestors with depth exactly $s_i$ or $s_i-1$. By dealing with the layer boundaries carefully at query time, the query complexity stays the same. In this way, the number of nodes to be preprocessed at layer $i$ using the algorithm of step $k-1$ is bounded by $\frac{n}{2^{h-s_i+1}}$. Denoting by $n_i$ the number of nodes in the depth range from $s_{i-1}+1$ to $s_i-1$, we have \begin{equation*} n_1 + n_2 + \dots + n_i \le \frac{n}{2^{\lambda_{k-1}^{(i)}(h)+1}} \quad \forall\ i. \end{equation*} Taking $i = m$ gives $n_1 + n_2 + \dots + n_m \le \frac{n}{2}$. The preprocessing time complexity of this step is \begin{equation} \label{eq:preprocess} f_k(n,h) = \sum_{i=1}^m f_{k-1} \left( n_i, \lambda_{k-1}^{(i-1)} (h) \right) + O \left( n \lambda_k (h) \right). \end{equation} By solving this recurrence, we get a bound on the preprocessing time complexity of step $k$. \begin{theorem} \label{thm:preprocess} For any $k$, which may depend on $n$, $f_k(n,h) = O \left( n \lambda_k (h) \right)$.
\end{theorem} \begin{proof} The proof is by induction on $k$. The statement is trivial for $k = 1$. For any $k \ge 2$, suppose that $f_{k-1}(n,h) \le c_1 n + c_2 n \lambda_{k-1} (h)$ for some constants $c_1, c_2$. Then, \begingroup \allowdisplaybreaks \begin{align*} f_k(n,h) & = \sum_{i=1}^m f_{k-1} \left( n_i, \lambda_{k-1}^{(i-1)} (h) \right) + O \left( n \lambda_k (h) \right) \\ & \le \sum_{i=1}^m \left( c_1 n_i + c_2 n_i \lambda_{k-1}^{(i)} (h) \right) + O \left( n \lambda_k (h) \right) \\ & = c_1 \sum_{i=1}^m n_i + c_2 \sum_{i=1}^m n_i \lambda_{k-1}^{(i)} (h) + O \left( n \lambda_k (h) \right) \\ & \le c_1 \frac{n}{2} + c_2 \sum_{i=1}^m \frac{n}{2^{\lambda_{k-1}^{(i)}(h)+1}} \lambda_{k-1}^{(i)}(h) + O \left( n \lambda_k (h) \right) \\ & \le c_1 \frac{n}{2} + c_2 \frac{n}{2} \sum_{i=1}^{\infty} \frac{i}{2^i} + O \left( n \lambda_k (h) \right) \\ & = \left( \frac{c_1}{2} + c_2 \right) n + O \left( n \lambda_k (h) \right). \end{align*} \endgroup From the result, we can see that by choosing $c_1 \ge 2c_2$ the leading constant does not increase with $k$, so we can absorb the first term and get $f_k(n,h) = O(n \lambda_k(h))$. \end{proof} Note that $h = \Theta(\log n)$, so this also guarantees that the preprocessing time is in $O \left( n \lambda_k(n) \right)$. At each step, the query path from $u$ to $v$ is split into three parts, so $2$ additional comparisons are needed per step. Therefore, the number of comparisons needed for each query is $2k$, with query complexity $O(k)$. Combining with the basic algorithm described in Section \ref{sec:alg:basic}, which can be considered the $k=0$ case, we obtain an $\lambdacompf{2k}{O(k)}$ algorithm. By Theorem \ref{thm:preprocess}, this complexity holds even if $k$ depends on $n$. Taking $k = \alpha(n)$ and combining with Equation \eqref{eq:ackermann-inverse-eq}, the algorithm becomes $\onlinecompf{O(n)}{2\alpha(n)}{O\left(\alpha(n)\right)}$. \subsection{Further improvements} \label{sec:alg:improvement} The bottleneck of the current algorithm is the $O \left( n \lambda_k (h) \right)$ term in Equation \eqref{eq:preprocess}, which is the time needed, at the last step, to preprocess the answers from each node to its ancestors on the layer borders. This part is essentially a leaf-to-ancestor query problem on a tree with $n$ nodes and height $O \left( \lambda_k (h) \right)$. Komlos's algorithm for minimum spanning tree verification \citep{komlos1985linear} provides an $\onlinecompf{O(n \log h)}{0}{O(\log h)}$ algorithm for this problem: it pre-computes the answer of each node by traversing the tree once with a stack, and we can use a persistent balanced binary search tree (e.g., a treap or a red-black tree) to maintain the stack. This implies an $\loglambdacompf{2k}{O\left(k + \log \lambda_k (n)\right)}$ algorithm for our problem, which is faster to preprocess while requiring more query time, although the number of comparisons to answer a query remains the same. This is in fact a trade-off among the preprocessing time, the number of comparisons needed to answer a query, and the query time. Indeed, an $\loglambdacompf{2k+2}{O(k)}$ solution can also be obtained by applying long-path decomposition at this last step. \section{Open problems} As discussed above, our result suffers from a trade-off among the preprocessing time, the number of comparisons needed to answer a query, and the additional query time needed in the word RAM model. One direct open problem is to find an $\loglambdacompf{2k}{O(k)}$ algorithm.
To solve this problem, an $\onlinecompf{O(n \log h)}{0}{O(1)}$ solution for leaf-to-ancestor queries would be sufficient. Such a result would be appealing, since it would give an algorithm matching the best known algorithm in the number of comparisons while also being practically optimal in word RAM. Although the performance of our algorithm matches previous works, there is still a gap between the complexity of our algorithm and the proven lower bound for this problem in \citep{pettie2006inverse}. Closing this gap would also be a very interesting future direction. Indeed, our algorithm provides a novel way of handling this problem compared with previous ones based on \citep{yao1982space}; there may be opportunities to combine these ideas to obtain a better solution. \paragraph{Acknowledgement.} I am grateful to Zhiyuan Fan, Jiatu Li and Yiding Zhang for many useful discussions throughout this work. \printbibliography \end{document}
{ "timestamp": "2021-05-06T02:09:35", "yymm": "2105", "arxiv_id": "2105.01864", "language": "en", "url": "https://arxiv.org/abs/2105.01864" }
{ "timestamp": "2021-05-06T02:11:11", "yymm": "2105", "arxiv_id": "2105.01892", "language": "en", "url": "https://arxiv.org/abs/2105.01892" }
\section{Introduction} The ICDAR 2021 competition on scientific literature parsing, task B, requires reconstructing a table image into HTML code. In this competition, the PubTabNet dataset (v2.0.0)~\cite{zhong2019image} is provided as the official evaluation data, and the Tree-Edit-Distance-based Similarity (TEDS) metric is used for evaluation. The PubTabNet data set consists of 500,777 training samples, 9,115 validation samples, 9,138 samples for the development stage, and 9,064 samples for the final evaluation stage. For the training and validation data, the ground-truth HTML code and the positions of non-empty table cells are provided to the participants. Participants of this competition need to develop a model that can convert images of tabular data into the corresponding HTML code, which should correctly represent the structure of the table and the content of each cell. The labels of the samples for the development and final evaluation stages are withheld by the organizers. We divide this task into four sub-tasks: table structure recognition, text line detection, text line recognition, and box assignment. Several tricks are applied to improve the model. The details of each sub-task are discussed in the following section. \section{Method} \label{sec:headings} In this section, we present these four sub-tasks in order. \begin{figure}[t] \centering \includegraphics[width=0.85\textwidth]{image/MASTER_vs_TableStructureMASTER.png} \caption{(a) Architecture of vanilla MASTER; (b) Architecture of table structure MASTER} \label{fig:structureR} \end{figure} \subsection{Table Structure Recognition} The task of table structure recognition is to reconstruct the HTML sequence items and their corresponding locations on the table, while ignoring the text content in each item. Our model structure is shown in Figure~\ref{fig:structureR}(b). It is customized based on MASTER~\cite{lu2019master}, a powerful image-to-sequence model originally designed for scene text recognition. Different from the vanilla MASTER shown in Figure~\ref{fig:structureR}(a), our model has two branches. One branch predicts the HTML item sequence, and the other conducts the box regression. Instead of splitting the model into two branches at the last layer, we decouple the sequence prediction and the box regression after the first transformer decoder layer. \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{image/alphabet2.png} \caption{39 classes used in the table structure MASTER.} \label{fig:alphabetM} \end{figure} To represent an HTML sequence, we need to define an alphabet for it. As shown on the left of Figure~\ref{fig:alphabetM}, we define 39 class labels for the sequence prediction. For the pairs \emph{<thead>} and \emph{</thead>}, \emph{<tbody>} and \emph{</tbody>}, and \emph{<tr>} and \emph{</tr>}, other control characters may appear between these pairs, so we define an individual class for each of them. We set the maximum ``colspan'' and ``rowspan'' to 10; thus we use 9 labels for each of them. There are two forms of \emph{<td></td>}: empty content or non-empty content between \emph{<td>} and \emph{</td>}. We use one class to denote the whole of \emph{<td>[content]</td>}. It should be noted that using one label, instead of defining two individual labels for \emph{<td>} and \emph{</td>}, largely reduces the length of the sequence. For the form of \emph{<td></td>} with empty content, we identify 11 special forms.
As shown on the right of Figure~\ref{fig:alphabetM}, each form is represented by a special class label. With this encoding, the HTML sequences of 99.6\% of the tables in the PubTabNet data set are shorter than 500 tokens. For the sequence prediction, we use the standard cross-entropy loss. For the box regression, we employ the L1 loss to regress the coordinates $[x,y,w,h]$, normalized to $[0,1]$. For the box regression head, we use a \emph{Sigmoid} activation function before the loss calculation. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{image/token.png} \caption{Example of table structure prediction. Predicted bounding boxes are marked in yellow.} \label{fig:structureMasterExample} \end{figure} In Figure~\ref{fig:structureMasterExample}, we show an example result of sequence prediction and box regression. We can see that the structure MASTER predicts the box coordinates correctly. \subsection{Text Line Detection} PSENet is an efficient text detection algorithm that can be considered an instance segmentation network. It has two advantages. Firstly, as a segmentation-based method, PSENet is able to localize text of arbitrary shape. Secondly, its progressive scale expansion mechanism can successfully separate adjacent text instances. PSENet thus not only adapts to text detection at arbitrary angles but also works better for adjacent text segmentation. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{image/new_pse.png} \caption{Visualization of text line detection.} \label{fig:PSENetTextLineDetection} \end{figure} Text detection in printed documents is an easy task compared to text detection in natural scenes. In training PSENet, there are three key points needing attention: the input image size, the minimum area, and the minimum kernel size. To avoid false negatives, especially for small regions (such as dashed lines), the resolution of the input image should be large, and the minimum area size should be set small. In Figure~\ref{fig:PSENetTextLineDetection}, we visualize a detection result by PSENet. \subsection{Text Line Recognition} We also use MASTER as our text line recognition algorithm. MASTER is powerful and can be freely adapted to different tasks according to different data forms, e.g., curved text prediction, multi-line text prediction, vertical text prediction, and multilingual text prediction. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{image/single_line_vs_multi_line2.png} \caption{Examples of text line images cropped from the training data of the PubTabNet data set: (a) single-line text image; (b) multi-line text image} \label{fig:ocrtrainingsamples} \end{figure} The position annotations in the PubTabNet dataset (v2.0.0) are cell-level, so text images cropped according to these annotations contain both single-line and multi-line text. We construct a text line recognition database according to the position information provided in the annotation file. This database contains about 32 million samples cropped from the 500k training images, from which we split out 20k text line images as a validation set for checkpoint selection. Some training samples are shown in Figure~\ref{fig:ocrtrainingsamples}; some texts are blurry, and some are black while others are grey. The maximum sequence length is set to 100 in our MASTER OCR, and text lines longer than 100 characters are discarded.
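As an illustration, the construction of this database can be sketched as follows (hedged: the field names follow our reading of the PubTabNet annotation format, and the paths are placeholders):

\begin{verbatim}
import json
from PIL import Image

samples = []
with open("PubTabNet_2.0.0.jsonl") as f:        # placeholder path
    for line in f:
        ann = json.loads(line)
        if ann["split"] != "train":
            continue
        img = Image.open("train/" + ann["filename"])
        for cell in ann["html"]["cells"]:
            if "bbox" not in cell:              # empty cells carry no box
                continue
            x0, y0, x1, y1 = cell["bbox"]
            text = "".join(cell["tokens"])      # may span multiple lines
            if 0 < len(text) <= 100:            # drop overlong transcriptions
                samples.append((img.crop((x0, y0, x1, y1)), text))
\end{verbatim}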
It should be noted that, in the training stage, our algorithm is trained on a database mixing single-line and multi-line text images, but in the test stage only single-line text images are fed in. Text line recognition yields the text content of each text line image; these contents are then merged into the non-empty \emph{<td></td>} items of the HTML sequence, as detailed in the next subsection. \subsection{Box Assignment} According to the above three subsections, we have obtained the table structure together with the box of each cell, and the box of each text line together with its corresponding text content. To generate the complete HTML sequence, we need to assign each text line box to its corresponding table structure cell. In this subsection, we introduce our matching rules in detail. There are three matching rules used in our method, which we call the \emph{Center Point Rule}, the \emph{IOU Rule}, and the \emph{Distance Rule}. \subsubsection{Center Point Rule} In this matching rule, we first calculate the central coordinate of each box obtained by PSENet. If the coordinate lies in the rectangular region of a regressed box obtained by structure prediction, we call them a matching pair, and the content of the text line is filled into the corresponding \emph{<td></td>}. It is important to note that one table structure cell can be associated with several PSENet boxes, because one table structure cell may contain multiple text lines. \subsubsection{IOU Rule} If the above rule is not satisfied, we compute the IOU between the box of the chosen text line and all structure cell boxes. The structure cell box with the maximum IOU value is selected, and the text content is filled into the chosen structure cell. \subsubsection{Distance Rule} Finally, if both of the above rules fail, we calculate the Euclidean distances between the box of the chosen text line and all structure cell boxes. Similar to the \emph{IOU Rule}, the structure cell with the minimum Euclidean distance is chosen. \subsubsection{Matching Pipeline} The three rules above are applied in order. Firstly, most boxes detected by PSENet are assigned to their corresponding structure cells by the \emph{Center Point Rule}. Owing to deviations in the structure prediction, a few central points of PSENet boxes fall outside the rectangular regions of the structure cell boxes. Secondly, some PSENet boxes left unmatched by the \emph{Center Point Rule} get matched under the \emph{IOU Rule}. In these two steps, we use the PSENet boxes to find their corresponding structure items. If some structure items remain unmatched, we reverse the direction and use the structure items to find the remaining PSENet boxes; to do this, the \emph{Distance Rule} is applied. \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{image/new_match_2.png} \caption{Example of box assignment visualization. On the left side, boxes detected by PSENet are marked in different colors. On the right side, the boxes generated by structure prediction are marked.} \label{fig:matchingresults} \end{figure} A visualization example of the matching results is shown in Figure~\ref{fig:matchingresults}. For aesthetic effect, we only show part of the boxes. On the left side of Figure~\ref{fig:matchingresults}, the boxes detected by PSENet are marked in different colors; on the right side, the boxes generated by structure prediction are marked. Each box on the left is assigned to the structure cell box of the same color.
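A condensed sketch of the three rules applied in order, with boxes as axis-aligned \texttt{(x0, y0, x1, y1)} tuples (helper names are illustrative; ties and reading order are ignored for brevity):

\begin{verbatim}
def center_in(box, cell):
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    return cell[0] <= cx <= cell[2] and cell[1] <= cy <= cell[3]

def iou(a, b):
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    inter = max(w, 0) * max(h, 0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def dist(a, b):   # Euclidean distance between box centers
    dx = (a[0] + a[2] - b[0] - b[2]) / 2
    dy = (a[1] + a[3] - b[1] - b[3]) / 2
    return (dx * dx + dy * dy) ** 0.5

def assign(text_boxes, cell_boxes):
    match, used = {}, set()
    for i, tb in enumerate(text_boxes):
        # Rule 1: center point; Rule 2: maximum IOU as a fallback
        hit = next((j for j, cb in enumerate(cell_boxes)
                    if center_in(tb, cb)), None)
        if hit is None:
            ious = [iou(tb, cb) for cb in cell_boxes]
            hit = max(range(len(cell_boxes)), key=ious.__getitem__)
            if ious[hit] == 0:
                continue   # left for Rule 3
        match.setdefault(hit, []).append(i)
        used.add(i)
    # Rule 3: unmatched cells claim the nearest remaining text box
    for j in range(len(cell_boxes)):
        if j not in match:
            free = [i for i in range(len(text_boxes)) if i not in used]
            if free:
                i = min(free, key=lambda i: dist(text_boxes[i], cell_boxes[j]))
                match.setdefault(j, []).append(i)
                used.add(i)
    return match   # structure-cell index -> list of text-line indices
\end{verbatim}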
\section{Experiment} \label{sec:others} In this section, we describe the implementation of our table recognition system in detail. {\bf{Dataset.}} We use the PubTabNet dataset (v2.0.0), which contains 500,777 training samples and 9,115 validation samples, plus 9,138 samples for the development stage and 9,064 samples for the final evaluation stage. Except for the provided training data, no extra data is used for training. To get text-line-level annotations for all text boxes, 2k training images are relabeled for PSENet training; in fact, we only need to split the multi-line cell annotations into single-line box annotations. {\bf{Implementation Details.}} In PSENet training, 8 Tesla V100 GPUs are used with a batch size of 10 on each GPU. The input image is resized isotropically, keeping the long side at a resolution of 1280. RandomFlip and RandomCrop are used for data augmentation, with a $640\times 640$ region cropped from each image. The Adam optimizer is applied, with an initial learning rate of 0.001 and step learning rate decay. In table structure training, 8 Tesla V100 GPUs are used with a batch size of 6 on each GPU. The input image size is $480\times 480$, and the maximum sequence length is 500. Synchronized BN~\cite{zhang2018context} and the Ranger optimizer~\cite{lessw2019ranger} are applied in this experiment, with an initial learning rate of 0.001 and step learning rate decay. In the training of text line recognition, 8 Tesla V100 GPUs are used with a batch size of 64 on each GPU. The input size is $256\times 48$, and the maximum length is 100. Synchronized BN and the Ranger optimizer are also applied, with the same hyper-parameter settings as the table structure training. All models are trained with our own FastOCR toolbox. \subsection{Ablation Studies} We made many attempts in this competition; in this subsection, we discuss the useful tricks and omit the unsuccessful attempts. {\bf{Ranger}} is a synergistic optimizer combining RAdam (Rectified Adam)~\cite{liu2019radam}, LookAhead~\cite{zhang2019lookahead}, and GC (gradient centralization)~\cite{yong2020gradient}. We observe that the Ranger optimizer performs better than Adam in this competition, and it is applied in both table structure prediction and text line recognition. We use the default Ranger settings. A comparison between Adam and Ranger is shown in Table~\ref{tab:ablationstudy}(a). {\bf{Synchronized Batch Normalization (SyncBN)}} is an effective batch normalization approach suitable for multi-GPU or distributed training. In standard batch normalization, statistics are computed only over the samples on each GPU device, whereas SyncBN normalizes over the whole mini-batch. SyncBN is ideal for situations where the batch size on each GPU is relatively small, and it is applied in our experiments. {\bf{Feature Concatenation of Layers in Transformer Decoder.}} In the structure MASTER and the text recognition MASTER, three successive transformer layers~\cite{lu2019master} are used as the decoder. Different from the original MASTER, we concatenate the outputs of each transformer layer~\cite{dou2018exploiting} and then apply a linear projection to the concatenated feature.
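A minimal sketch of this design in PyTorch-style pseudocode (hedged: \texttt{layer\_factory} and the hidden size stand in for the actual MASTER configuration):

\begin{verbatim}
import torch
import torch.nn as nn

class ConcatDecoder(nn.Module):
    # Run the decoder layers sequentially, concatenate all layer outputs
    # along the feature dimension, and project back to d_model.
    def __init__(self, layer_factory, num_layers=3, d_model=512):
        super().__init__()
        self.layers = nn.ModuleList(
            [layer_factory() for _ in range(num_layers)])
        self.proj = nn.Linear(num_layers * d_model, d_model)

    def forward(self, tgt, memory):
        outs, x = [], tgt
        for layer in self.layers:
            x = layer(x, memory)   # each layer attends to encoder memory
            outs.append(x)
        return self.proj(torch.cat(outs, dim=-1))
\end{verbatim}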
{\bf{Label Encoding in Structure Prediction.}} After inspecting the training data of the PubTabNet data set (v2.0.0), we find some ambiguous annotations for empty table cells. Some empty table cells are labeled as \emph{<td></td>}, whereas others are labeled as \emph{<td> </td>}, with one space character inserted, although these two kinds of table cells look identical visually. According to our statistics, the ratio between \emph{<td></td>} and \emph{<td> </td>} is around 4:1. In our experiment, we encode these two different cells as different tokens; our motivation is to let the model discover the intrinsic visual features through training. \begin{table*}[htbp] \centering \subfloat[Comparison of optimizer. \label{tab:chapter5:1a}]{ \normalsize \centering \begin{tabular}{l|c} \hline \textbf{Optimizer} & \multicolumn{1}{l}{\textbf{Structure prediction Acc.}} \\ \hline Adam & 0.7784 \\ \hline Ranger & \textbf{0.7826} \\ \hline \end{tabular} } \hspace{10pt} \subfloat[Comparison of with or without feature concatenation.\label{tab:chapter5:1b}]{ \normalsize \centering \begin{tabular}{c|c} \hline \textbf{Feature Concatenation} & \textbf{Text line recognition Acc.} \\ \hline No & 0.9313 \\ \hline Yes & \textbf{0.9347} \\ \hline \end{tabular} } \subfloat[Evaluation of label encoding, SyncBN and feature concatenation. \label{tab:chapter4:1f}]{ \normalsize \centering \begin{tabular}{cc|c} \hline \textbf{SyncBN} & \textbf{FC} & \textbf{Structure prediction Acc.} \\ \hline & & 0.7734 \\ \hline \checkmark & & 0.7750 \\ \hline \checkmark & \checkmark & \textbf{0.7785} \\ \hline \end{tabular} } \caption{Evaluation of different tricks on table recognition task. (a). comparison of Ranger and Adam. (b). comparison of with or without feature concatenation. (c). evaluation of label encoding.}\label{tab:chapter4:1} \label{tab:ablationstudy} \end{table*} In this competition, we conducted several evaluations and recorded the results, shown in Table~\ref{tab:ablationstudy}. From these results, we have the following observations: \begin{itemize}[leftmargin=.1in] \item The Ranger optimizer consistently outperforms the Adam optimizer. A similar observation is made in our companion report~\cite{he2021ICDAR} on the ICDAR 2021 Competition on Scientific Table Image Recognition to LaTeX~\cite{Pratik2021ICDAR}. In our evaluation on standard benchmarks, we also find that Ranger can improve the average accuracy by around 1\%. \item SyncBN improves the performance slightly. We also observe that SyncBN shows better performance than standard BN in the ICDAR 2021 competition on Mathematical Formula Detection. \item Feature concatenation can improve the accuracy of structure prediction on this task. It should be noted that in~\cite{he2021ICDAR} we do not observe a performance improvement.
{\bf{Label Encoding in Structure Prediction.}} After inspecting the training data of the PubTabNet dataset (v2.0.0), we found some ambiguous annotations for empty table cells. Some empty cells are labeled as \emph{<td></td>}, whereas others are labeled as \emph{<td> </td>}, in which one space character is inserted. However, these two kinds of table cells look identical visually. According to our statistics, the ratio between \emph{<td></td>} and \emph{<td> </td>} is around 4:1. In our experiment, we encode these two kinds of cells as different tokens. Our motivation is to let the model discover the intrinsic visual features by training. \begin{table*}[htbp] \centering \subfloat[Comparison of optimizers. \label{tab:chapter5:1a}]{ \normalsize \centering \begin{tabular}{l|c} \hline \textbf{Optimizer} & \multicolumn{1}{l}{\textbf{Structure prediction Acc.}} \\ \hline Adam & 0.7784 \\ \hline Ranger & \textbf{0.7826} \\ \hline \end{tabular} } \hspace{10pt} \subfloat[Comparison with and without feature concatenation.\label{tab:chapter5:1b}]{ \normalsize \centering \begin{tabular}{c|c} \hline \textbf{Feature Concatenation} & \textbf{Text line recognition Acc.} \\ \hline No & 0.9313 \\ \hline Yes & \textbf{0.9347} \\ \hline \end{tabular} } \subfloat[Evaluation of label encoding, SyncBN and feature concatenation. \label{tab:chapter4:1f}]{ \normalsize \centering \begin{tabular}{cc|c} \hline \textbf{SyncBN} & \textbf{FC} & \textbf{Structure prediction Acc.} \\ \hline & & 0.7734 \\ \hline \checkmark & & 0.7750 \\ \hline \checkmark & \checkmark & \textbf{0.7785} \\ \hline \end{tabular} } \caption{Evaluation of different tricks on the table recognition task. (a) Comparison of Ranger and Adam. (b) Comparison with and without feature concatenation. (c) Evaluation of label encoding, SyncBN and feature concatenation.}\label{tab:chapter4:1} \label{tab:ablationstudy} \end{table*} In this competition, we conducted several evaluations and recorded the results, which are shown in Table~\ref{tab:ablationstudy}. According to Table~\ref{tab:ablationstudy}, we have the following observations: \begin{itemize}[leftmargin=.1in] \item The Ranger optimizer consistently outperforms the Adam optimizer. A similar observation is reported in our companion report~\cite{he2021ICDAR} on the ICDAR 2021 Competition on Scientific Table Image Recognition to LaTeX~\cite{Pratik2021ICDAR}. In our evaluation on standard benchmarks, we also find that Ranger improves the average accuracy by around 1\%. \item SyncBN improves the performance slightly. We also observe that SyncBN performs better than standard BN in the ICDAR 2021 competition on Mathematical Formula Detection. \item Feature concatenation improves the accuracy of structure prediction on this task. It should be noted that in~\cite{he2021ICDAR} we did not observe such an improvement. \end{itemize} \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \textbf{TLD} & \multicolumn{3}{c|}{\textbf{TSR}} & \textbf{TLR} & \textbf{BA} & \multirow{2}{*}{\textbf{ME}} & \multirow{2}{*}{\textbf{ForC}} & \multirow{2}{*}{\textbf{TEDS}} \\ \cline{1-6} PSE & ESB & SyncBN & FeaC & FeaC & Extra Insert & & & \\ \hline \checkmark & & & & & & & & 0.9385 \\ \hline \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & & & 0.9621 \\ \hline \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & & & 0.9626 \\ \hline \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & & 0.9635 \\ \hline \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \textbf{0.9684} \\ \hline \end{tabular} \caption{End-to-end evaluation on the validation set with TEDS as the indicator. TLD: text line detection; TSR: table structure recognition; TLR: text line recognition; BA: box assignment; ME: model ensemble; ESB: empty space box encoding; SyncBN: synchronized BN; FeaC: feature concatenation of transformer layer outputs; ForC: format correction.} \label{tab:endtoendEval} \end{table} \subsection{End-to-end Evaluation on the Validation Set} We generate the final HTML code by merging structure prediction, text line detection, text line recognition, and box assignment, and we evaluate several tricks in these stages. Results are shown in Table~\ref{tab:endtoendEval}, with TEDS as the indicator. We draw the following overall conclusions from this competition: \begin{itemize}[leftmargin=.1in] \item ESB (empty space box encoding) is important for the final TEDS indicator. \item FeaC (feature concatenation) is effective for both table structure recognition and text line recognition. \item ME (model ensemble) improves the performance slightly. An ensemble of three models in TSR improves the end-to-end TEDS score by around 0.2\%, while an ensemble of three models in text line recognition improves it by only around 0.03\%. We use only one PSENet model. \item SyncBN is effective for both TSR and TLR. \item ForC (format correction) helps the final indicator. Our format correction ensures that all content between \emph{<thead>} and \emph{</thead>} is in bold font (a small sketch is given at the end of this section). \end{itemize} \begin{figure}[h] \centering \includegraphics[width=0.90\textwidth]{image/problem2.png} \caption{An example of wrong table structure prediction.} \label{fig:wrongexample} \end{figure} {\bf{Discussion.}} From this competition, we draw several reflections. For end-to-end table recognition to HTML code, structure prediction is an extremely important stage, especially for the TEDS indicator. As shown in Figure~\ref{fig:wrongexample}, although all text line information is correctly recognized, our method obtains a very low TEDS score (0.423) due to a wrong structure prediction. Although the provided dataset is large, we believe that a larger-scale dataset covering more templates may further improve structure prediction. Secondly, text line detection and text line recognition are easy tasks, considering that all table images are printed. Thirdly, there are some labeling inconsistency issues, such as \emph{<td></td>} and \emph{<td> </td>}. Finally, the box assignment sub-task could be conducted by a Graph Neural Network (GNN)~\cite{chen2020learning} instead of hand-crafted rules.
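For concreteness, the format correction step can be sketched as follows (a minimal, hedged illustration: we assume simple \emph{<td>} cells without attributes, and our reading is that header content must be wrapped in \emph{<b>} tags):
\begin{verbatim}
import re

def correct_thead_format(html: str) -> str:
    """Wrap the content of every cell between <thead> and </thead>
    in <b> tags, assuming plain <td> cells without attributes."""
    def fix(match):
        head = re.sub(r"<td>(?!<b>)(.*?)</td>",
                      r"<td><b>\1</b></td>", match.group(1))
        return "<thead>" + head + "</thead>"
    return re.sub(r"<thead>(.*?)</thead>", fix, html, flags=re.S)
\end{verbatim}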
\section{Conclusion} \label{sec:conclusion} In this paper, we present our solution for the ICDAR 2021 competition on Scientific Literature Parsing task B: table recognition to HTML. We divide the table recognition system into four sub-tasks: table structure prediction, text line detection, text line recognition, and box assignment. Our system achieves a TEDS score of 96.84 on the validation set in the development phase and a TEDS score of 96.324 in the final evaluation phase. \bibliographystyle{unsrt}
{ "timestamp": "2021-05-06T02:09:08", "yymm": "2105", "arxiv_id": "2105.01848", "language": "en", "url": "https://arxiv.org/abs/2105.01848" }
\section{Introduction} Despite advances in health care and technology, most elder care is still provided by informal caregivers, i.e. friends and family members. According to predictions, however, this type of care will decrease in the future, which is why studies encourage society to concentrate on improving the lifestyle of the elderly, helping them remain independent for a longer period of time~\cite{Willcox2014HealthyAD}. In particular, the focus should be on the external and internal difficulties of the elderly, offering arrangements and facilities to support active aging. Indeed, socio-behavioral and environmental conditions are a crucial factor affecting longevity~\cite{kirkwood2005}, which to some extent explains the variations found in the aging process, ranging from active and positive to feeble and dependent. We believe that four principles promote active aging, namely dignity, autonomy, participation, and joint responsibility. Information and Communication Technologies (ICT) are expected to make such principles possible, allowing the elderly to stay active members of the societal community while helping them remain independent and self-sufficient~\cite{brinkschulte2018empathic}. Consequently, the EMPATHIC (\textit{Empathic, Expressive, Advanced Virtual Coach to Improve Independent Healthy-Life-Years of the Elderly}) project\footnote{http://www.empathic-project.eu/} aims to contribute to technological progress in this area by researching, innovating and validating new interaction paradigms and platforms for future generations of personalized Virtual Coaches (VC) to promote active aging. It is centred around the development of the EMPATHIC-VC, a non-obtrusive, emotionally-expressive virtual coach whose aim is to engage senior users in enjoying a healthier lifestyle concerning diet, physical activity, and social interactions. This way, they actively minimize their risk of chronic diseases, which contributes to their ability to maintain a pleasant and autonomous life, while in turn it helps their carers. The main goal of the VC is to create a link between one's body and emotional well-being. To do so, it will perceive and identify users' social and emotional states by means of multi-modal face, eye gaze and speech analytics modules. Furthermore, it will learn and understand users' requirements and expectations, and adaptively respond to their needs through novel spoken dialogue systems and intelligent computational models. Such a combination of modules will allow for real-time user-coach interaction, thus promoting empathy in the user. In this paper, we describe our mid-term achievements, explaining where we currently stand with these goals, 18 months into the project. Section~\ref{sec:system} describes the current status of the system components, with particular emphasis on the most robust modules to date. The integration of these modules is presented in Section~\ref{sec:technology}. Lessons learned from the preliminary human-coach interaction studies are explained in Section~\ref{sec:interaction}, while Section~\ref{sec:summary} concludes the paper. \section{Status: System Components} \label{sec:system} The EMPATHIC VC is based on the following system components, each of which is researched, built and evaluated independently.
\subsection{Automatic Speech Recognition} The Automatic Speech Recognition (ASR) component turns speech from a continuous stream of audio into structured data containing the likely words and their alternatives, each labelled with a confidence level, a start time (as an offset from the stream start), and a duration. So far, our main achievements with this component are: \begin{itemize} \item The development of ASR.online, a new method of feeding data into the ASR engine, which uses the open-source GStreamer framework\footnote{https://github.com/GStreamer/gstreamer} to continuously stream the audio through the ASR executable. Compared with our previous approach of buffering audio and processing small chunks, ASR.online reduces the latency between the speech and the transcription. The average latency has been reduced to approximately 500 milliseconds, as a result of both the overall performance improvements shown in Table~\ref{table:asrperformance} and the immediate transmission of transcription data. \item The training of acoustic models for French, Spanish and Norwegian, the languages which will be used in the EMPATHIC field trials. The acoustic models of the ASR component were trained using the Kaldi ASpIRE recipe\footnote{https://github.com/kaldi-asr/kaldi/tree/master/egs/aspire}. The training data consists of 1067 hours of Spanish speech, 271 hours of French and 228 hours of Norwegian, augmented at the DNN-HMM training stage by the addition of noise from the RWCP, AIR and Reverb2014 databases using the reverberation algorithm implemented in the Kaldi framework. The training process took approximately two weeks and used two systems with 32 CPU cores and 64GB RAM each, with a total of 6 NVIDIA Pascal architecture GPUs. 3-gram language models were adapted using transcription data from the training set. These models contain a vocabulary of approximately 67,000 words for Spanish, 60,000 for French and 48,000 for Norwegian. The Norwegian model uses Bokm{\aa}l, one of the two official written standards of Norwegian. The models were tested using the NIST SCLITE utility\footnote{https://github.com/usnistgov/SCTK}, which scores the best-path output against the ground truth for correct words, substitutions, deletions and insertions, and computes an overall word error rate. The results are shown in Table~\ref{table:scliteresults}. \end{itemize}
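To make the shape of this structured output concrete, the following minimal sketch models one recognized word (the field names are illustrative only and do not reflect the component's actual schema):
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List

@dataclass
class WordHypothesis:
    word: str          # most likely word
    confidence: float  # confidence level, between 0 and 1
    start: float       # offset from the stream start, in seconds
    duration: float    # duration of the word, in seconds
    # alternative word hypotheses for the same time span
    alternatives: List[str] = field(default_factory=list)
\end{verbatim}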
\begin{table}[] \caption{The relative performance of the original (ASR) and new (ASR.online) approaches. All values are in seconds.} \begin{tabular}{ccccc} \hline Audio duration & ASR CPU & ASR GPU & ASR.online CPU & ASR.online GPU \\ \hline 2.0 & 6.0 & 6.0 & 2.5 & 1.5 \\ 5.0 & 7.0 & 6.0 & 3.0 & 1.5 \\ 10.0 & 7.0 & 6.0 & 4.6 & 2.0 \\ 15.0 & 10.0 & 6.0 & 6.5 & 2.5 \\ 20.0 & 13.0 & 6.0 & 8.0 & 4.0 \\ 25.0 & 15.0 & 6.0 & 9.5 & 4.5 \\ 40.0 & 22.0 & 8.0 & 17.0 & 5.5 \\ 70.0 & 35.0 & 9.0 & 26.0 & 5.5 \\ 100.0 & 47.0 & 10.0 & 38.5 & 7.0 \\ 130.0 & 63.0 & 12.0 & 45.0 & 7.5 \\ 600.0 & 279.0 & 25.0 & 210.0 & 16.5 \\ 1200.0 & 550.0 & 64.0 & 443.0 & 30.0 \\ \hline \end{tabular} \label{table:asrperformance} \end{table} \begin{table}[] \caption{SCLITE benchmarking for supported languages.} \begin{tabular}{lccccc} \hline Language & {COR} & {SUB} & {DEL} & {INS} & {WER} \\ \hline English & 87.7 & 9.2 & 3.1 & 4.0 & 16.3 \\ French & 73.9 & 22.7 & 3.4 & 10.1 & 36.3 \\ Spanish & 78.1 & 12.7 & 9.3 & 4.4 & 26.3 \\ Norwegian & 55.9 & 19.8 & 24.4 & 7.5 & 51.5 \\ \hline \end{tabular} \label{table:scliteresults} \end{table} \subsection{Natural Language Understanding} The Natural Language Understanding (NLU) component translates the output of the ASR into semantic units to be processed by the Dialog Management (DM) component. The main mid-term achievements in the conception of the NLU component concern the development of two multi-lingual methods for topic classification. First, we had to detect the user's end-of-turn pauses. Following previous approaches (e.g. \cite{roddy2018investigating,shannon2017improved}), we addressed this question as a classification problem. As features for classification, we use the temporal profile of speech and the syntactic and semantic information encoded in the utterances. As classifiers, we used different variants of deep neural models. A distinguishing feature of our approach is that we implemented an ASR simulator that allows us to evaluate the sensitivity of the End-of-Turn Detection (EOTD) with respect to particular characteristics of the speaker (speech profiles), and to errors in the ASR output.
This validation step is essential, since in the implementation of dialogue systems an early mistake in any of the system components can produce a negative cascading effect on the performance of the subsequent elements of the pipeline. Our NLU component is expected to handle user utterances in four languages: Spanish, French, Norwegian, and English. In the literature, such systems are usually referred to as multi-lingual models, and different strategies have been proposed to develop them. Learning is a challenging task for a multi-lingual system, since languages differ in their grammar. Furthermore, the availability and quality of the corpora with which machine learning models are usually trained are not equal across languages. With this in mind, we focused on topic classification. Our approach was to create a modular system where information available in one language is transferred or exploited while learning models for the other languages. We implemented two strategies: the first based on the use of the WordNet semantic network and its synsets \cite{Miller:1995}, the second based on parallel corpora. WordNet synsets encapsulate information about the different senses of commonly used words. This information is language independent and can thus be used not only to obtain the equivalent word in another language, but also to obtain a set of sense-connected synsets, by computing the closure set over the hypernymy relations. Thus, once we have a group of sense-connected synsets, we can generate the set of words that represent them for each language. Our second strategy focuses on training one model for one particular language for which we have quality labeled data. By means of parallel corpora, we then extrapolate the obtained labels to a second language, so as to later train a language-specific model. Labeling is tedious work, yet thanks to this strategy we only have to label in one language while still obtaining a corpus for each of the languages.
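The first strategy can be illustrated with the following minimal sketch based on NLTK's WordNet interface (an assumption on our part for illustration; it requires the \texttt{wordnet} and \texttt{omw-1.4} corpora, and uses Open Multilingual Wordnet language codes such as \texttt{spa}, \texttt{fra} and \texttt{nob}):
\begin{verbatim}
from nltk.corpus import wordnet as wn

def topic_words(lemma, lang="spa"):
    """Expand an English topic lemma into a set of sense-connected
    words in the target language, via synsets and their hypernym
    closure."""
    words = set()
    for synset in wn.synsets(lemma):
        related = [synset] + list(synset.closure(lambda s: s.hypernyms()))
        for s in related:
            words.update(s.lemma_names(lang))
    return words

# Example: topic_words("food", "fra") collects French words that are
# sense-connected to the topic 'food'.
\end{verbatim}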
\subsection{Dialogue Management} \label{sec:dm} The Dialogue Management (DM) component maintains the state and manages the flow of the conversation or, in other words, determines the action the system has to perform at each step. For the EMPATHIC VC we used a DM providing an advanced management structure based on distributed software agents. It enforces a clear separation between the domain-dependent and the domain-independent aspects of the dialogue control logic. The domain-specific aspects are defined by a dialogue task specification, and a domain-independent dialogue engine executes the given dialogue task to manage the dialogue. The dialogue task specification is defined by a tree of dialogue agents, where each agent is responsible for managing a sub-part of the dialogue. For instance, Figure~\ref{fig:dm} shows the high-level dialogue task structure of the EMPATHIC VC, where the ``Introduction'' agent is responsible for handling the introductory dialogue of the EMPATHIC VC, the ``Nutrition'' agent is responsible for handling the nutrition dialogues, etc. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{dm.png} \caption{High level dialogue structure employed by the DM of the EMPATHIC VC.} \Description{High level dialogue structure employed by the DM of the EMPATHIC VC} \label{fig:dm} \end{figure} The aim of the system is to have a virtual coach that helps users in certain aspects of their lives. To this end, we have embedded a GROW (Goal - Reality - Obstacle - Will)~\cite{Sayas2018} coaching model in the DM's dialogue strategy. The GROW model is a structured method based on problem solving, goal setting and goal-orientation. The model is divided into four phases that propose four questions to guide the user towards obtaining and achieving a goal. These questions are asked in a pre-established order. In the first session, this order must be respected to help the user follow the thread and explore their goal and the steps necessary to achieve it. In the following sessions, the order can be changed or specific phases can be chosen. Using the GROW dialogue model, the following dialogue topics have been implemented (a small structural sketch follows this list): \begin{itemize} \item \textbf{Introduction} dialogues to make the users feel comfortable with the system and to obtain some basic information. This dialogue is carried out the very first time a user interacts with the system. \item \textbf{Sport and Leisure} \cite{Sayas_leisure2018} \cite{Sayas_physical2018} dialogues based on users' leisure time activities. The aim of these dialogues is to explore users' leisure time activities and, if necessary, nudge them towards a more active lifestyle. \item \textbf{Nutrition} \cite{Sayas_nutrition2018} dialogues focused on users' nutritional habits. The goal of these dialogues is to explore the nutritional routines of the users and, if necessary, encourage the users to vary those routines in order to form healthier nutritional habits. \end{itemize}
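As a minimal illustration of the two ideas above, the agent tree and the ordered GROW phases can be sketched as follows (illustrative only; the actual engine executes a full dialogue task specification):
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List

# The four GROW phases, asked in a pre-established order.
GROW_PHASES = ["Goal", "Reality", "Obstacle", "Will"]

@dataclass
class DialogueAgent:
    """One node of the dialogue task tree; each agent manages
    a sub-part of the dialogue."""
    name: str
    children: List["DialogueAgent"] = field(default_factory=list)

root = DialogueAgent("EMPATHIC-VC", [
    DialogueAgent("Introduction"),
    DialogueAgent("Sport and Leisure"),
    DialogueAgent("Nutrition"),
])
\end{verbatim}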
\subsection{Natural Language Generation} The Natural Language Generation (NLG) component maps the abstract dialogue acts provided by the DM to natural language constructs (in Spanish, French or Norwegian), written in orthographic form. So far, the NLG component has been developed from a reduced database of coaching turns. The data have been extracted from video recordings of real user sessions with a professional coach and from some handcrafted dialogues created by a professional coach. In the process of labeling, two types of labels have been used: (1) based on the GROW coaching model~\cite{alexander2010behavioural}, and (2) based on linguistic features needed to construct the text. Given the restrictions in the amount of training data, the NLG is currently a rule-based system using a template-based approach~\cite{oh2000stochastic}. Future work, however, aims to design a new NLG component based on a seq2seq neural network~\cite{duvsek2015training} (note: this plan of implementing a seq2seq neural model only concerns the NLG part of the EMPATHIC VC and will not affect the other components). \subsection{Text-to-Speech Synthesis} A Text-To-Speech (TTS) component converts any text into a spoken message. For EMPATHIC, the TTS speaking styles should be fully compatible with the role of the VC communicating with elderly users. The Acapela TTS system employs a range of internally developed technologies, such as unit selection or parametric synthesizers based on Hidden Markov Models (HMMs) or Deep Neural Networks (DNNs). During the project, all of these technologies are being adapted for the EMPATHIC communication and compared in evaluations with end-users. Professional speakers will be recorded to capture the communicative role of the VC. The aim is to reflect the expressive possibilities of the dialogue system through audio responses that are coherent with the user's emotional state, so as to support the credibility, naturalness and adaptability of the full dialogue chain. So far, Acapela has recorded a Spanish professional speaker enacting a VC for the elderly, and has trained a unit selection system as well as initial DNN synthesis systems on this corpus. We have run evaluations of this elderly coach style TTS in terms of naturalness and intelligibility. A Spanish coaching TTS voice is already available and integrated in the mid-term version of the EMPATHIC VC. Upon the evaluation of the Spanish system, Acapela will fine-tune the process and develop the French and Norwegian voices. \subsection{Emotional Agents} \label{sec:agent} For initial prototyping purposes (cf. Section~\ref{sec:woz}), we used a multi-step creation process (cf. Figure~\ref{fig:WorkflowVC}) to build five virtual agent coaches (3 female and 2 male) named Natalie, Alice, Lena, Christian and Adam. The first of the steps depended on the origin of the 3D model. The coaches Alice and Adam were created based on 2D images. For this, we had to create their 3D models from a 2D image using CrazyTalk\footnote{https://www.reallusion.com/crazytalk/} and then export them to the RLhead format. For the coaches Christian, Natalie and Lena, who were created based on 3D models predefined by iClone\footnote{https://www.reallusion.com/iclone/}, we skipped the CrazyTalk step. Second, we imported the RLhead model into Character Creator. This tool helped us create realistic-looking 3D human models: we fixed the 3D model design, imported 3D clothing designs and generated humanoid animations with extensive customization tools. After that, a model was imported into iClone. At this step, we used iClone to blend character creation, animation and scene design in a real-time engine, and to edit the models in 3DXchange. Next, we exported the model and animation from 3DXchange into the FBX format. Then, we imported the FBX file as a new resource into Unity3D\footnote{https://unity3d.com/fr}. The transition from iClone to Unity degrades the appearance of the model; to solve this, we had to correct the texture and optimize the shader of the model. Still in this step, we added lip-synchronization using the SALSA plugin. Instead of pre-processing or mapping shapes to audio markers, mouth movements are procedurally applied to a minimal set of four basic mouth shapes: SALSA combines waveform analysis with these mouth shapes to produce a high-quality lip-sync approximation with natural variation. The last step was to build the WebGL format within the Unity environment. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{workflow_VC.png} \caption{Workflow to generate 3D virtual coaches.} \Description{Workflow to generate 3D virtual coaches} \label{fig:WorkflowVC} \end{figure} \section{Status: Technology Integration} \label{sec:technology} To integrate the different software components of the EMPATHIC-VC while meeting the challenges of a low-latency interactive system with the additional security, confidentiality and privacy requirements for health information, we took a multi-stage approach.
First, we defined up front a model for development which uses fully separated containers for each component, with communication between components over sockets and messaging via a global message queue. Next, a review process was conducted for each of the components, covering the component container layout and requirements, capabilities and testing approach. Subsequently, a dedicated integration environment was set up with supporting tools such as the container orchestration and the message queue. Finally, each component was tested independently in the integration environment, first with initial smoke tests and then by testing component inputs and outputs. Once all components are validated in the integration environment according to the procedure described above, we will be ready to test the entire system. For the purpose of testing we have split the system into four sub-systems: \begin{itemize} \item An ``inbound'' system, using pre-recorded input, joining ASR, NLU and DM. \item An ``outbound'' system, using pre-generated DM output, joining NLG, TTS and the virtual agents. \item A ``user interaction'' system, joining the Web UI, the Web A/V proxy and also test recordings. \item A ``human sensing'' system, joining the emotion detection for speech, text, face and gaze, and the biometric authentication. \end{itemize} In addition to the integration of the VC components, we started to work on the provision of secure cloud connectivity. For this we have designed a review process covering: \begin{itemize} \item the secure connection over WebRTC between participant devices and the EMPATHIC-VC system; \item the secure configuration of host servers; \item secure remote administration; \item secure software development practices; \item secure storage and encryption of participant data; \item and physical security at field trial hosting sites. \end{itemize} \section{Status: User Interaction} \label{sec:interaction} In order to ensure acceptance of the EMPATHIC VC, we have to show an added value for the end-user. The creation of this value proposition is an iterative and contrasted process that involves interdisciplinary collaboration~\cite{pagliari2007design}. To start this, we analyzed relevant target groups in Spain, France and Norway. From those studies we were able to extract general as well as country-specific end-user traits, leading to an archetypical end-user definition with some specific factors that may vary from country to country. \subsection{Defining the Target Population} As a common definition, we can consider our target population as ``Young Olds'', aged 65 to 79 years, with a healthy and active life, characterized by the continuation of their former lifestyle after retirement, yet focused more on enjoyment and leisure activities. It is important to highlight the two main indicators used for this definition. The first indicator concerns the healthy life years at birth; it shows the average number of life years lived free of disease. The second indicator concerns the healthy life expectancy based on self-perceived health; it shows the population's self-perception of health. Thus, it includes factors such as economic status, emotional problems and social relations. This is a subjective indicator based on surveys, perception records and/or self-assessments\footnote{https://ec.europa.eu/eurostat/data/database}. Therefore, the term ``Young Olds'' refers to people older than 65 who perceive their health as good or very good.
Yet although their self-perception is good, ``Young Olds'' may already have some type of disease or sickness\footnote{Encuesta Europea de Salud en Espa\~{n}a 2017, Pub. Instituto Nacional de Estad\'{i}stica (INE)}. \subsection{Understanding the ``Young Olds''} Before describing the priorities defined by our analysis, it is worth highlighting four basic recommendations to promote user acceptance of ICT by seniors, defined by the European Active and Assisted Living (AAL) programme. Those recommendations are\footnote{http://www.aal-europe.eu/wp-content/uploads/2015/02/AALA\_Knowledge-Base\_YOUSE\_online.pdf}: \begin{itemize} \item Provide a clear additional value and benefit of the solution; \item Balance between supporting the users and activating them; \item Keep the interaction between user and solution simple; \item Provide joyful experiences. \end{itemize} Those recommendations act as a starting point for integrating priorities into a solution design. Regarding our analysis, family has been revealed as the main priority across the different countries. On average, more than two thirds of seniors with children see them several times per month. This ratio is even higher when the children live close by. In addition, the use of ICT tools is significantly higher when they are used for communication with family members (especially children and grandchildren). Therefore, involving family members as part of the solution will strongly increase the acceptance and usability of the solution. Another key point to promote ICT use among the elderly is to show the clear benefit of a solution. For example, seniors are willing to learn to use ICT tools if this provides more interaction with their younger family members. As mentioned before, more than half of our target population perceives their health as ``good'' or ``very good'', although a common fear is the physical decline associated with ageing. This could be one strong reason why, independent of the country, more than half of the people we talked to perform some type of physical activity on a weekly basis (note: most commonly walking). Supporting those activities by providing motivational strategies, gamification tools and/or professional advice may thus be seen as an opportunity to increase the acceptability of ICT. Even better, it would not only support and enrich a leisure activity that is already common among the target population, but also promote ``healthy habits'' and ``well-being''. Following a similar approach, nutrition is an area that combines ``joyful'' experiences and ``healthy habits''. Cooking, planning meals, going out to restaurants, increasing expenditure on food, etc. are related activities that are more prominent among representatives of the target population than among younger people. In that sense, changes in nutrition habits can be seen as an indicator of physical or physiological changes. On the other hand, promoting and motivating healthier nutrition habits can positively influence a person's physical and emotional status. While motivational factors as well as physical and nutritional habits were similar across our target populations in Spain, France and Norway, we also found differences. Those mainly relate to the familiarity with and use of the Internet. Here it was shown that seniors from Norway include a significantly higher percentage of people who use the Internet on a regular basis, particularly when compared to Spain (note: France lies in between).
This information is relevant, as personal experience with technology is an important factor influencing technology acceptance. \subsection{Aspects of User Acceptance} \label{sec:acceptance} One aspect to focus on in human-agent interaction is the user's level of technology acceptance. The concept was introduced by Davis~\cite{davis1989perceived} in an attempt to explain people's acceptance (or rejection) of an interactive system. It led to the development of the Technology Acceptance Model (TAM), a questionnaire where acceptance is assessed in terms of a user's perceived usefulness and perceived ease of use of the system~\cite{davis1989perceived}. TAM was extended into TAM2 in 2000~\cite{venkatesh2000theoretical}, adding two theoretical constructs that accounted for a user's social influence and for how well a user's work goals are supported by the interactive system. TAM2 evolved into the Unified Theory of Acceptance and Use of Technology (UTAUT)~\cite{venkatesh2003user} and later into UTAUT2, where hedonic motivations (the fun or pleasure derived from using a technology), price values (the trade-off between perceived benefits and monetary costs), and habits were added to the original questionnaire as further determinants theorized to affect users' behavioral intentions and use behavior~\cite{venkatesh2012consumer}. Finally, the Almere questionnaire was developed as a further evolution of UTAUT2, objecting that the latter was developed without accounting for variables that relate to social interaction with robots or virtual agents and without considering seniors as potential users~\cite{tsiourti2014virtual, heerink2010assessing}. Following the same reasoning, Hassenzahl developed the AttrakDiff questionnaire\footnote{AttrakDiff(tm) Internet Resource -- http://www.attrakdiff.de.}~\cite{hassenzahl2018thing, hassenzahl2004interplay}, a four-cluster test where each cluster, composed of 7 items, assesses a desired user requirement. It must be noted that, currently, all the theoretical formulations of questionnaires aiming to assess the user experience of an interactive system are, to a certain extent, dated, since such systems have become increasingly complex, showing humanoid appearance and human features. Thus, even though the theory for defining user experience is still valid, new concepts have to be accounted for in order to have a fair assessment of modern interactive systems. This is why, to date, there are no systematic investigations devoted to assessing the role of virtual agents' features exploiting the above-mentioned questionnaires. Furthermore, seniors have only been involved in a very limited number of studies on virtual agents. In the few they have been involved in, it has been shown that they clearly enjoy interacting with a speaking synthetic voice produced by a static female agent (note: these were seniors aged 65+ in good health~\cite{cordasco2014assessing}), and that such seniors are less enthusiastic than impaired people in recognizing the agent's usefulness~\cite{yaghoubzadeh2013virtual}. The only comparison across user age groups we know of was conducted by Stra\ss mann \& Kr\"{a}mer~\cite{strassmann2017categorization}. The study was \textit{``a qualitative interview study with five seniors and six students''} and showed that senior users prefer embodied human-like agents over machine- or animal-like ones.
No information was obtained on the gender of the agents or on their pragmatic and hedonic features, as advocated by the TAM, UTAUT, and AttrakDiff questionnaires discussed above. Therefore, the EMPATHIC project aims to \textit{``develop causal models of [agent] coach-user interactional exchanges that engage elders in emotionally believable interactions keeping off loneliness, sustaining health status, enhancing quality of life and simplifying access to future telecare services''}. To this end, an initial research step was the development of an ad-hoc questionnaire to assess the pragmatic and hedonic features of the to-be-developed EMPATHIC VC. The questionnaire was developed through an iterative process that involved several experiments, the exploitation of the theoretical concepts already advocated by the authors of TAM2, UTAUT2, and AttrakDiff, and the inclusion of new theoretical considerations regarding our users' age. The goal of the questionnaire was also to provide information on the users' preferences regarding the agent's physical and social features, including its face, voice, hairdo, age, gender, eyes, dressing mode, attractiveness, and personality. The first experiment with the aim to build and evaluate such a questionnaire (and start collecting said data) was conducted in Italy and involved 45 healthy seniors (50\% female), aged 65+~\cite{esposito2018seniors}. As this study was conducted before any of the agents described in Section~\ref{sec:agent} were built, our stimuli were based upon the four conversational agents proposed by the Semaine project\footnote{http://www.semaine-project.eu/}, each possessing different personality features able to arouse specific emotional states in users, i.e., Poppy (female, expressing optimism), Obadiah (male, expressing pessimism), Spike (male, expressing aggression) and Prudence (female, expressing a high degree of pragmatism)~\cite{ochs2010virtual}. For each agent, a video-clip was extracted from the videos available on the Semaine website. In order to contextualize them to the Italian culture, they were renamed using names very popular in the local area (i.e. Serena, Gerardo, Pasquale, and Francesca). The agents' names and the video durations were carefully assessed by four people. The final set of stimuli consisted of 4 video clips, each 10 seconds long and showing the agent's half torso, all of the same dimensions, acting as if they were speaking while the audio was muted. The preferences the target group had towards each of the proposed agents were assessed through a first version of our questionnaire, structured in 4 clusters, each containing 7 items devoted to assessing the practicality, pleasure, feelings, and attractiveness experienced by participants while watching the agent video-clips. The items proposed in each cluster exploited the theoretical foundation inherent to the UTAUT2, Almere, and AttrakDiff questionnaires. Results showed seniors' positive tendency to initiate an interaction with the agent, with a strong preference towards agents with a positive personality. That is, Francesca and Serena always scored significantly higher than Pasquale and Gerardo for the pragmatic, hedonic and attractiveness features. Although the seniors had not been informed of the agents' personalities, they somehow perceived negativity or positivity from the dynamics of the agents' facial expressions, which triggered a behavior of acceptance/rejection, suggesting that participants have preferences for positive facial dynamics.
These results, however, required deeper investigation, as all agents showing positive facial dynamics were female, hinting towards a potential gender influence on the processing of emotional facial expressions (cf.~\cite{marsh2005effects, rotteveel2004automatic}). Furthermore, it must be noted that in this first study the agents were dynamically moving their lips as if they were speaking. Their voice was, however, muted, which might have had an effect on people's perceptions. These aspects were accounted for in a subsequent experiment, conducted again in Italy~\cite{esposito2018bseniors}, but this time with agents created with BOTLIBRE\footnote{http://www.botlibre.com}. These agents, which were selected by 3 experts, also showed half their torso, with distinct clothing. Again, so as to contextualize, we named them in accordance with typical names for the region, i.e. Michele, Edoardo, Giulia and Clara. Each agent was provided with a different synthetic voice, producing the following sentence pronounced in Italian: \textit{``Hi, my name is Clara / Edoardo / Giulia / Michele. If you want, I would like to assist you with your daily activities!''}. The synthetic voices were created through the website Natural Reader\footnote{http://www.naturalreaders.com}. The voices (recorded using the software Audacity\footnote{https://www.audacityteam.org/}) were embedded into each agent's video-clip, which had an average duration of approx. 6 seconds. The proposed agents did not show a particular personality. Care was taken to ensure that no emotion was depicted by their faces; they were shown on the same background, avoiding exaggerated clothing colours or facial features. This second experiment, again devoted to assessing seniors' preferences towards each of the proposed agents, exploited a new version of our questionnaire. It had 6 clusters: \begin{itemize} \item \textbf{Cluster 1:} 10 items focusing on the usefulness, usability, and task accomplishment of the proposed system, i.e. the system's pragmatic qualities (PQ). High scores in the PQ dimensions indicate that the users perceive the system as well structured, clear, controllable, efficient and practical. \item \textbf{Cluster 2:} 10 items focusing on motivations, i.e. the reasons why a user should own and use such an interactive system; these capture the system's stimulating hedonic qualities (HQS). A system receiving high scores in the HQS dimensions is considered original, creative, and captivating. \item \textbf{Cluster 3:} 10 items focusing on how captivating and tasteful the system appears, i.e. the system's hedonic qualities of feeling (HQF). A system receiving high scores in the HQF dimensions is considered presentable, professional, of good taste, and capable of bringing users closer to each other. \item \textbf{Cluster 4:} 10 items on the subjective perception of the system's attractiveness (ATT), which is the hedonic dimension that gives rise to behaviors such as increased use or dissent, as well as emotions such as happiness, engagement, or frustration. \item \textbf{Cluster 5:} 4 items assessing the types of professions seniors would endorse for the proposed agents, among which were welfare, housework, security, and front-desk jobs. \item \textbf{Cluster 6:} 3 items assessing preferences regarding the agents' age range. \end{itemize} Each questionnaire item required a response given on a 5-point Likert scale ranging from 1=strongly agree to 5=strongly disagree (3=I don't know).
Since all clusters contained positive and negative items evaluated on a 5-point Likert scale, scores from negative items were reverse-coded. This implies that low scores correspond to positive evaluations, whereas high scores correspond to negative ones (a small scoring sketch is given at the end of this subsection). Each participant was first asked to provide answers to the items related to demographic information and user technology savviness; then they were asked to watch each agent's video-clip and, immediately after, to complete the items from the remaining 6 clusters. \paragraph{Final Results} Our results show that seniors clearly expressed their preference to interact with the female rather than the male agents. This was true for the pragmatic, hedonic stimulation, hedonic feeling and attractiveness dimensions, and independent of their gender and technology savviness (note: between the two proposed female agents, the seniors' preferences for Giulia scored statistically significantly higher than those attributed to Clara). Consequently, the data suggests that seniors' willingness to be assisted by a virtual agent is strongly affected by the gender of the proposed agent, and that, up to now, female agents seem to be the preferred choice. In addition, these two preliminary experiments allowed the definition of a questionnaire on virtual agent acceptance, which also contains a demographic information section and a section on user technology savviness. We call this the Virtual Agent Acceptance Questionnaire (VAAQ). So far, it has been translated from English into French, Norwegian, Spanish, Italian, and German.
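The scoring scheme described above can be made concrete with the following minimal sketch (illustrative only, with invented example values, not the actual VAAQ scoring implementation):
\begin{verbatim}
def score_item(response, negative):
    """Score a 5-point Likert response (1..5). Negative items are
    reverse-coded so that low scores always mean positive evaluations."""
    assert 1 <= response <= 5
    return 6 - response if negative else response

# A positive item answered "strongly agree" (1) and a negative item
# answered "strongly disagree" (5) both yield the best score, 1.
assert score_item(1, negative=False) == 1
assert score_item(5, negative=True) == 1
\end{verbatim}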
\subsection{A Simulated Virtual Coach} \label{sec:woz} Various modules of the EMPATHIC VC will require ongoing development and improvement, e.g. to advance speech recognition and language understanding, or to foster user experience and acceptance. In order to simulate such modules while they are being developed, the goal was to build and subsequently use a Wizard of Oz (WOZ) component. WOZ constitutes a prototyping method that uses a human operator (i.e., the so-called wizard) to simulate non- or only partly-existing system functions. In language-based interaction scenarios, like the ones envisioned by EMPATHIC, WOZ is usually used to explore user responses and the consequent handling of the dialogue, to test different dialogue strategies, or simply to collect the language resources (i.e., corpora) needed to train technology components. In EMPATHIC, however, the goal is to use WOZ beyond this traditional prototyping stage and make it a fallback safety net for situations in which the automated coach may be unable to respond. That is, the goal is to develop a system component which initially serves as a prototyping tool supporting the research on language-based interaction and dialogue policies, but then becomes an always-on backup channel dealing with those user requests the system is incapable of handling by itself. A first version of this tool has been built and subsequently used in several user studies. \subsubsection{Technical Setup} Since the specification and integration of the EMPATHIC VC are still ongoing (cf. Section~\ref{sec:technology}), yet user feedback is urgently needed to inform the design of the technology components, we decided to work on two separate WOZ systems. The first one, referred to as the EMPATHIC WOZ Platform, acts as a stand-alone tool which is being developed independently from the EMPATHIC VC architecture, despite being a testbed for feasibility evaluations regarding the different technologies to be used. In doing so, we were able to perform initial user studies before final decisions on the EMPATHIC architecture were taken. In contrast, the second WOZ system, referred to as the EMPATHIC WOZ Component, is meant to become a component of the final EMPATHIC coach, for which it is implemented and adapted in accordance with the overall EMPATHIC system architecture. This component is currently in development and thus not yet used for user studies. \subsubsection{The EMPATHIC WOZ Platform} Several researchers have worked on WOZ tools before (e.g., \cite{munteanu2000mdwoz, fiedler2002supporting, hundhausen2007woz, davis2007sketchwizard, smeddinck2010quickwoz, lu2011polonius, villano2011domer}), but only a few of those tools are openly available for implementation and adaptation. One such tool is the WebWOZ Wizard of Oz Prototyping Platform~\cite{schlogl2013webwoz}\footnote{https://github.com/stephanschloegl/WebWOZ}, which has already been employed by a number of previous projects (e.g., vAssist~\cite{schlogl2014designing}, Roberta Ironside~\cite{lee2017first}). Building upon the experience of these projects, we used the platform as a core system and implemented the following EMPATHIC-specific adaptations: \begin{itemize} \item \textbf{Audio Transmission and Recording:} Previous versions of WebWOZ required a separate audio channel to transfer a study participant's voice input to the wizard. Usually, this was achieved using some sort of Voice-over-IP tool such as Skype or Google Talk. To avoid third-party tool usage, we directly integrated audio transmission between a study participant and a wizard via a connection channel based on the WebRTC standard. In our adapted WebWOZ Platform, WebRTC not only serves as a communication channel but also handles the recording of sessions, allowing for the collection of relevant language resources needed to inform the design of future EMPATHIC language components (e.g., ASR, NLU, DM, etc.). In addition, the technology will be used as a core technology for the final EMPATHIC VC architecture, for which its implementation into the WebWOZ Platform acted as a test case evaluating its feasibility. \item \textbf{Video Transmission and Recording:} In addition to the audio transmission channel described above, we integrated video transmission and recording between participant and wizard. Here, too, WebRTC acted as the core technology. The video link, which was added to the wizard interface, allows the wizard to see the participant's face, providing important contextual information and thus supporting the cognitively rather demanding task of simulating machine behaviour (cf. Figure~\ref{fig:wizardinterface}). \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{wizardinterface.png} \caption{EMPATHIC WOZ Platform wizard interface incl. live video feed and flow-chart graph.} \Description{EMPATHIC WOZ Platform wizard interface} \label{fig:wizardinterface} \end{figure} \item \textbf{Flow-chart in Wizard Interface:} In order to help the wizard follow a systematic interaction path, we further added a flow-chart graph to the wizard interface (see Figure \ref{fig:wizardinterface}). This graph shows the optimal flow of dialogue steps and thus acts as an interaction manual supporting the selection of pre-defined utterances to be sent to a study participant. \item \textbf{Web-based Scenario Upload Mechanism:} The definition of pre-defined utterances to be sent to a study participant counts as a key feature of a language-based WOZ tool.
In order to speed up the definition of these utterances and at the same time allow developers and interaction designers to work with standard tools, we integrated an Excel (.csv) import feature. A similar feature was already available in earlier implementations of WebWOZ \cite{schlogl2014designing}, yet it required root access to the server infrastructure where the platform was hosted. We extended this feature with a simple web-based upload mechanism, so that such user rights are no longer required (a minimal sketch of the import step is given after this list). \item \textbf{Agent Interface incl. Text-to-Speech Synthesis:} The client interface of the original WebWOZ Prototyping Platform does not offer any agent or avatar feature. Hence, in order to obtain feedback on our EMPATHIC virtual agent designs, we implemented five different agent prototypes (3 female, 2 male) and connected them to the wizard interface (cf. Section~\ref{sec:agent}). All agents use similar core features (size, background, facial features, etc.) and integrate, depending on the environment setup, Spanish, French and Norwegian (as well as German, Italian and English) text-to-speech synthesis provided by Acapela (note: female and male agents use different voices). \end{itemize}
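The import step can be sketched as follows (a minimal illustration assuming a simple semicolon-separated file with one dialogue state and one utterance per row; the actual WebWOZ file format may differ):
\begin{verbatim}
import csv

def load_utterances(path):
    """Read pre-defined wizard utterances from a .csv file and group
    them by dialogue state, e.g. a row 'greeting;Hello, nice to see
    you!' is added to the 'greeting' group."""
    utterances = {}
    with open(path, newline="", encoding="utf-8") as f:
        for state, text in csv.reader(f, delimiter=";"):
            utterances.setdefault(state, []).append(text)
    return utterances
\end{verbatim}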
With this setup, a human wizard is currently able to remotely control a virtual agent in any of these languages. Lessons learned from first tests using this setup are reported next. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{clientinterface.png} \caption{EMPATHIC WOZ Platform client interface featuring five different agents to interact with.} \Description{EMPATHIC WOZ Platform client interface} \label{fig:clientinterface} \end{figure} \subsection{Lessons Learned from User Studies} A total of 176 WOZ user studies (each comprising 2 sessions) have so far been conducted (i.e., 68 in Spanish, 54 in French, and 54 in Norwegian). The following insights are based on feedback provided by the wizards, who simulated the EMPATHIC VC, as well as by study participants. \subsubsection{Insights Regarding the Study Setup} Experience has shown that at least two people are required to realistically conduct a WOZ user study -- one who acts as the human simulator, i.e. the wizard (usually sitting in a different location), and one who acts as a facilitator, greeting study participants, introducing them to the study purpose, administering questionnaires, and helping the participants in cases of confusion. A setup without a facilitator, i.e. without a second person, did not seem feasible in our case. From a procedural point of view, we further found it imperative that, once the interaction with the simulated agent has started, the facilitator leaves the room, because otherwise the participant tends to look at and talk to the facilitator instead of conversing with the actual agent. This behaviour may be explained by a participant's lack of reassurance when interacting with a novel technology. Taking the additional person out of the room helps eliminate this potential distraction, yet it may also increase a participant's level of anxiety. \subsubsection{Insights Concerning Study Participants} In general, we found that the concept of a virtual agent seems rather frightening to many people of the targeted age group (i.e. aged 65 or older). While we did use face-to-face meetings to overcome this fear as much as possible, it should be noted that for this type of technology, anxiety poses a significant challenge, particularly when it comes to the recruitment of study participants. Consequently, recruitment via flyers/posters was difficult (even when conducted in senior centres or elderly homes). However, we found that recommendations coming from other participants who had already taken part and enjoyed the study helped mitigate the problem. Still, a lot of personal coaching was usually required to make people feel comfortable. Here, our experience has shown that participants needed approx. 10 minutes to `lose their fear' of the technology -- in particular when studies took place somewhere away from people's homes or familiar living environments. A technical setup which would allow studies to be mobile and, thus, be brought to potential study participants may therefore help circumvent some of this felt uneasiness. With respect to the study inclusion criteria, the studies have shown that elderly people are rather pessimistic when evaluating their personal health status. That is, while initially we were searching for `healthy' participants aged 65 or older, we had to realize that most representatives of this group would not include themselves due to minor health issues they perceived as preclusive (e.g. minor hearing problems). A slight change in wording helped tackle this problem. What remained difficult, however, was the recruitment of people with depression. As for the interaction, it seemed important that participants thought they would interact with a prototypical system. This helped keep the expectations regarding speed and accuracy low. In this context, the speed with which a simulated system responds may be seen as a particular challenge, especially in cases where the wizard could not use a pre-defined utterance and thus had to type a response. An additional challenge with this generation of on-the-fly utterances concerns the great potential for typos and other mistakes, which are forwarded to the TTS and, consequently, spoken out loud to a study participant. However, being aware of the prototypical status of the system, study participants were rather tolerant towards these types of issues. \subsubsection{Insights Concerning the Dialogue} With respect to the scenarios, participants were usually pre-informed about some of the content to be addressed by the coach, so that they could think about relevant topics in advance (e.g. they were told to think about certain health goals they would like to achieve before starting the conversation). This was necessary to keep the interaction going and reduce the number of ``yes/no'' answers. Still, in particular with respect to the nutrition scenario, it was difficult to keep the conversation flowing, as the scenario was looking for personal goals, yet people were often satisfied with their status quo and, thus, did not find much to talk about. This somewhat restricted the number of available interaction turns, which is why we had to slightly shift participants' foci to other nutrition-relevant topics so as to keep the conversation alive. To this end, the pre-defined utterances that were prepared for the wizard seemed rather limited in scope as well as in variation, which significantly increased the use of the (arguably much slower and more error-prone) free-text feature of the platform. A way of making the conversation more flexible may be found in a speech input interface for the wizard. Yet such a feature, while under consideration for the EMPATHIC WOZ Component, has so far not been integrated. Changing the conversational focus due to missing participant goals also caused some side effects.
Particularly in France, participants often felt insufficiently `coached'; i.e. they had the impression that the virtual coach wanted their information, yet did not provide them with any advice on what they should do in order to change their habits. Finally, from a conversational point of view, we found that different types of back-channelling (i.e. approving a participant's input) had a significant influence on the `smoothness' of the conversation. That is, while rather basic approval utterances such as ``interesting'' or ``good'' seemed to distort the conversation, other strategies which re-used participants' words or sentence structures (e.g. Participant: ``I like to walk 2 hours every day''; Agent: ``You walk 2 hours every day?'') helped keep participants engaged and, consequently, the conversation flowing. \subsubsection{Insights Regarding Administered Questionnaires} Regarding the study closure and debriefing phase, study participants perceived the number and length of the administered questionnaires as excessive (note: this also included our own questionnaire, whose development was presented in Section~\ref{sec:acceptance}). In addition, some of the questions were hard to understand or potentially ambiguous and, thus, study participants found them rather difficult to answer (note: this led to a high number of ``I don't know'' answers). In particular, questions concerning more abstract concepts or terms, as well as feelings not usually connected to a human-agent dialogue (e.g. hedonic features), seemed to pose problems. Often it was the wording that played a significant role here, which might thus be adapted in future studies. Finally, unexpectedly high depression scores found with healthy study participants caused some doubts regarding the scales used, which will require further investigation. \subsubsection{Insights Regarding Technical Issues} From a technical point of view, a recurring problem seemed to be connected to the session management, which influenced the connection between the wizard and the respective client. The issue is currently under investigation, with a viable solution expected to be implemented in the upcoming weeks. A second challenge concerned the technical support during user studies. Given that the entire EMPATHIC WOZ Platform is currently hosted in a location (i.e. the UK) different from where the studies are conducted, and technical support is provided from yet another location (i.e. Spain), technical problems often caused long delays. Local setups incl. respective technical support -- as planned for the final EMPATHIC VC -- may thus be considered in future studies. \section{Conclusion} \label{sec:summary} In this paper we reported on the mid-term achievements of the H2020 EMPATHIC (\textit{Empathic, Expressive, Advanced Virtual Coach to Improve Independent Healthy-Life-Years of the Elderly}) project. Those achievements include, on the one hand, significant efforts put into the development and integration of the various technical components required to run a modern, virtual agent based dialogue system geared towards supporting elderly people in their daily activities; on the other hand, a number of user studies aimed at understanding said user group (i.e., healthy seniors aged 65+) and their preferences with respect to agent acceptance. Results of these efforts are manifested in a working WOZ prototyping tool for simulating human-agent interaction, a multi-lingual (i.e.
Spanish, Norwegian, French, Italian, German and English) questionnaire assessing virtual agent acceptance, and a better understanding of particular technical challenges inherent to the provision of a web-based, secure, responsive and reliable agent platform. \section{Acknowledgments} The research presented in this paper is conducted as part of the project EMPATHIC that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 769872. \bibliographystyle{ACM-Reference-Format}
{ "timestamp": "2021-05-06T02:10:19", "yymm": "2105", "arxiv_id": "2105.01878", "language": "en", "url": "https://arxiv.org/abs/2105.01878" }
\section{Introduction} The induced metric of a parallel mean curvature surface of a general type in the complex hyperbolic plane has the form of a special type of the Liouville metric (Kenmotsu \cite{ken2}). In this paper, we present another necessary condition for parallel mean curvature surfaces in complex space forms. In Section 2 of this paper, we prove that the Gaussian curvature of a parallel mean curvature surface in a two-dimensional complex space form satisfies an elliptic differential equation that is intrinsic and similar to the Ricci condition of the classical surface theory (cf. Lawson \cite{laws}). The main purpose of this study is to examine the converse of these two facts. Given a domain in the Euclidean two-plane $\mathbb{R}^{2}$, we consider a special type of the Liouville metric such that the Gaussian curvature satisfies the differential equation, which is the same as that obtained in Section 2. Then, in Section 3, we prove that this metric is explicitly determined by an elliptic function. In the last section, as an application of the result of Section 3, we prove that a domain in $\mathbb{R}^{2}$ with the special type of the Liouville metric such that the Gaussian curvature satisfies the differential equation obtained in Section 2 can be isometrically immersed in the complex hyperbolic plane with parallel mean curvature vector. \section{Ricci condition on parallel mean curvature surfaces } In the 19th century, Ricci observed an intrinsic characterisation of constant mean curvature surfaces in $\mathbb{R}^{3}$, and in the late 20th century, it was extended to surfaces in three-dimensional space forms by Lawson (Lawson \cite{laws}). Later, this work was generalised to parallel mean curvature surfaces in four-dimensional space forms by Eschenburg and Tribuzy \cite{esctri} and by Sakaki \cite{sakaki}, and to minimal surfaces in spheres of higher codimension by Vlachos \cite{vlacho}. In this section, we demonstrate that the Gaussian curvature of a parallel mean curvature surface in a complex two-dimensional complex space form satisfies an elliptic differential equation. This differential equation is intrinsic and similar to the Ricci condition for constant mean curvature surfaces in $\mathbb{R}^{3}$ (cf. Lawson \cite{laws}). For further studies on the Ricci condition, we refer the reader to a recent paper by Tsouri and Vlachos \cite{tsou-vlach}. Let $\overline{M}[4\rho]$ denote a complex two-dimensional complex space form of constant holomorphic sectional curvature $4\rho$, and $M$ denote an oriented and connected real two-dimensional Riemannian manifold. Let $x\!:\!M\longrightarrow \overline{M}[4\rho]$ represent an isometric immersion from $M$ to $\overline{M}[4\rho]$ with the Kaehler angle $\alpha$ such that the mean curvature vector $H$ is non-zero and parallel for the normal connection on the normal bundle of the immersion. Because the length of the mean curvature vector is constant, we may set $|H| =2b >0$. This immersion is called a parallel mean curvature surface. The second fundamental form of $x$ is expressed by two complex-valued functions, $a$ and $c$ (Chern and Wolfson \cite{cherwolf}). If the Kaehler angle function is not constant and $a$ is not real-valued, then the immersion $x$ is said to be of a general type (Kenmotsu \cite{ken2}). Let $(u,v)$ represent a system of local coordinates in a neighbourhood of $M$ and $f_{1}(u)\ (\mbox{resp}. 
\, f_{2}(v))$, smooth real-valued functions of one variable $u\ (\mbox{resp}.\, v)$, which are considered as functions on the neighbourhood, with $f_{1}(u) +f_{2}(v) >0$ everywhere. Then, the metric $ds^{2}= (f_{1}(u) +f_{2}(v))(du^{2} +dv^{2})$ is called a Liouville metric. We refer the reader to the studies by Kiyohara \cite{kiyo} and by Knauf, Sinai, and Baladi \cite{ksb} for results on Liouville metrics. In particular, if $f_{2}(v) =0$ everywhere, then such a metric is called a special Liouville metric. Although the first fundamental form of any surface of revolution in $\mathbb{R}^{3}$ is isometric to a special Liouville metric, we provide new insights into special Liouville metrics. Namely, we prove the following: \begin{thm} Under the notation above, if $x\!: \!M\longrightarrow \overline{M}[4\rho]$ is of a general type, then the Riemannian metric of $M$ is a special Liouville metric, the Gaussian curvature $K$ of this metric satisfies $ K \leq 4b^{2}+2\rho $ and, at points where $K <4b^{2}+2\rho $, \begin{equation} \Delta\log\sqrt{4b^{2}+2\rho-K}=2K, \end{equation} where $\Delta$ denotes the Laplace operator on $M$ with respect to the metric. \end{thm} Proof. We follow the notation and formulas in \cite{ken2}. By Lemma 2.4 of \cite{ken2}, there exists a complex-valued function $\mu(u)$ of a real variable $u$ such that $\phi = \mu(u)(du+ i dv) $, where $ds^{2} = \phi \bar{\phi} = |\mu(u)|^{2}(du^{2} + dv^{2})$. This proves the first part of Theorem 1. By (2.3) and (2.6) of \cite{ken2}, the Gauss equation of $x$ is \begin{equation} |c|^{2} = \frac{1}{4}(4b^{2} + 2 \rho - K) \ \geq 0, \end{equation} which proves the second part of Theorem 1. We remark that if $x$ is of a general type, then $\alpha \neq 0$ and $ \alpha \neq \pi$; hence $\sin \alpha \neq 0$. The Codazzi equation of $x$ is, by (2.5) of \cite{ken2}, $ dc\wedge\bar{\phi}=2c(a - b)\cot\alpha \, \phi\wedge\bar{\phi}$. At points where $|c|>0$, we have $ \partial\log c =2(a-b) \cot\alpha \, \phi. $ Its exterior differentiation with $(2.1),\ (2.2), \ (2.3), \ (2.4)$ of \cite{ken2} shows \[ d(\partial\log c)= \Big\{-\frac{K}{2}+2b(a-\bar{a})\frac{(1+\cos^{2}\alpha)} {\sin^{2}\alpha} \Big\}\phi\wedge \bar{\phi}. \] Putting $c=|c|e^{\sqrt{-1}\eta}$, this implies, at points where $|c| >0$, \begin{equation} \Delta\log|c| = 2K, \end{equation} where the formulas $4\partial \bar{\partial} \log |c| = \Delta \log |c| \, \phi \wedge \bar{\phi}$ and $\partial \bar{\partial} = -\bar{\partial} \partial$ are used. The last part of Theorem 1 follows from $(2)$ and $(3)$, proving Theorem 1. \vspace{0.5cm} We provide three remarks about Theorem 1. \begin{remark} $(1)$ \ The first part of { \rm Theorem 1} holds under the assumption that $x$ is of a general type. If $x$ is not of a general type, then such parallel mean curvature surfaces have been classified by {\rm Hirakawa \cite{hirakawa}}. $(2)$\ It is proved by {\rm Kenmotsu \cite{ken3}} that non-trivial parallel mean curvature surfaces of a general type exist only when $\rho = -3b^{2}$, so the Ricci condition $(1)$ now reads $\Delta\log\sqrt{-2b^{2}-K}=2K, \ (K < -2b^{2})$. $(3)$\ The Ricci condition $(1)$ implies that the induced metric of a parallel mean curvature surface of a general type $x\!: \!M\longrightarrow \overline{M}[4\rho]$ can be locally realized on a minimal surface in a $3$-dimensional real space form, $\mathbb{Q}^{3}_{\bar{c}}$, of curvature $\bar{c}=4b^{2} + 2 \rho$. 
This establishes a sort of Lawson correspondence between parallel mean curvature surfaces of a general type in $\overline{M}[4\rho]$ and minimal surfaces in $\mathbb{Q}^{3}_{\bar{c}}$. \end{remark} \section{Special Liouville metric with the Ricci condition } In this section, we study the converse of Theorem 1. Let $D$ denote an open, simply connected domain in $\mathbb{R}^{2}$. In Theorem 2 and Corollary 1 of this section, we prove that the Ricci condition implies a certain algebraic relation between a conformal factor and the Gaussian curvature on $D$. Moreover, if the metric is a special Liouville metric, then Theorem 3 states that it is explicitly given by an elliptic function. \begin{thm} Let $ds^{2}$ denote a Riemannian metric on $D$ and $b>0$ a positive number such that the Gaussian curvature of this metric satisfies $K < -2b^{2}$ and the Ricci condition \begin{equation} \Delta\log\sqrt{-2b^{2}-K}=2K. \end{equation} Then, there exists a Riemannian metric $|\mu|^{2}|dw|^{2}$ on $D$ which is isometric to $ds^{2}$ and satisfies \begin{equation} |\mu|^{2}\sqrt{-2b^{2}-K} =\mbox{constant} >0. \end{equation} \end{thm} Proof. The Euclidean two-plane $\mathbb{R}^{2}$ is identified with the Gaussian plane $\mathbb{C}$, so a point $(u,v)$ of $D$ is written as $z=u+iv \in \mathbb{C}, \ (i^{2}=-1)$. Let $z \in D$ denote an isothermal coordinate with $ds^{2}= |\nu|^{2}|dz|^{2}$, where $\nu$ is a non-zero complex-valued function of $u$ and $v$. By $2K= -\Delta \log |\nu|^{2}$ and (4), we have $\Delta \log( |\nu|^{2} \sqrt{-2b^{2}-K}) =0$. Since $ |\nu|^{2} \sqrt{-2b^{2}-K}$ is not zero on $D$, we can apply Lemma 3.12 of Eschenburg, Guadalupe and Tribuzy \cite{estri} to this function, and therefore there exists a holomorphic function $g(z)$ on $D$ such that $|g(z)| = |\nu|^{2}\sqrt{-2b^{2}-K} >0\ (z \in D)$. For any positive number $c>0$, let us define a holomorphic transformation on $D$ by $w=c^{-1/2} \int g^{1/2}dz$. Then we have $\nu dz =\mu dw$, where $\mu = \nu c^{1/2} g^{-1/2}$, and there exists a holomorphic function $k(w)$ such that $ |k(w)| =|\mu| ^{2} \sqrt{-2b^{2}-K}$, because $K$ is invariant under holomorphic transformations of $D$. By the definition of $\mu$, we have $|k(w)| =c |g|^{-1} |g| = c $, proving Theorem 2. \begin{coro} In {\rm Theorem 2}, if $ds^{2}=|\nu|^{2}|dz|^{2}$ is a special Liouville metric, then so is $|\mu|^{2}|dw|^{2}$. \end{coro} Proof. We write $|\nu|^{2}= |\nu|^{2}(u)$. Then $ \Delta \log( |\nu|^{2}(u)\sqrt{-2b^{2}-K}) = 0 $ implies $ d^{2}\log ( |\nu|^{2}(u)\sqrt{-2b^{2}-K})/du^{2}=0$, because $K$ is a function of $u$ alone. So, we can write $ |\nu|^{2}(u) \sqrt{-2b^{2}-K} = c_{1}\exp(c_{2}u)$, for some $c_{1}\!>\!0$ and $c_{2} \in \mathbb{R}$. If $c_{2}=0$, then there is nothing to prove. When $c_{2}\neq 0$, we can take $g(z) = c_{1}\exp(c_{2}z/2)$ and $w= \int g^{1/2}dz$. Then, we have $\mu =\nu c_{1}^{-1/2}\exp(-c_{2} z/4)$, so $|\mu|^{2} =| \nu|^{2} c_{1}^{-1}\exp(-c_{2}(z+\bar{z})/4) = |\nu|^{2}(u) c_{1}^{-1} \exp (-c_{2} u/2)$, which proves Corollary 1. Now we write the isothermal coordinate $w$ in Corollary 1 as $w=u+iv, (i^{2}=-1)$, and put $ds^{2}=|\mu|^{2}|dw|^{2} =\lambda^{2}(u)(du^{2}+dv^{2}), \ (\lambda(u) >0, (u,v) \in D)$. By (5), we have $\lambda^{4}(u)(-2b^{2}-K) = c_{1} , \ (c_{1} >0)$. It follows from $K=(-\lambda \lambda'' + \lambda'^{2})/\lambda^{4}$ that $$ \lambda''(u) \lambda(u) - \lambda'(u)^{2} = c_{1} + 2b^{2} \lambda(u)^{4}, \ (c_{1} >0). $$
It is integrated once to give \begin{equation} \lambda'(u)^{2} = -c_{1} + c_{2}\lambda(u)^2 + 2b^{2}\lambda(u)^{4}, \ (c_{2} \in \mathbb{R}), \end{equation} where $c_{2}$ is a constant of integration. The right-hand side of (6) is decomposed as $\lambda'(u)^{2} = 2b^{2}(\lambda^{2} -\lambda_{-})(\lambda^{2} - \lambda_{+})$, where $\lambda_{+}$ and $\lambda_{-}$ are real numbers such that $\lambda_{+} >0> \lambda_{-}$ because of $c_{1} >0$. Hence it holds that $\lambda^{2} \geq \lambda_{+}>0$. Let us define $\theta =\theta(u)$ by \begin{equation} \cos \theta =\frac{ \sqrt{\lambda_{+}}}{\lambda(u)}, \end{equation} which yields, by (6), $$ \theta'(u)^{2} = 2b^{2}(\lambda_{+} - \lambda_{-})(1+ \frac{\lambda_{-}}{\lambda_{+}-\lambda_{-}} \sin^{2}\theta). $$ In other words, \begin{equation} \theta'(u)^{2} = \sqrt{c_{2}^{2}+ 8b^{2}c_{1}}- \left(\frac{c_{2}+ \sqrt{c_{2}^{2}+ 8b^{2}c_{1}}}{2} \right)\sin^{2}\theta . \end{equation} Now we introduce the constant $k^{2}$ and two functions $\tilde{u},\ \tilde{\theta}(\tilde{u})$: $$ k^{2}= \frac{c_{2}+\sqrt{c_{2}^{2}+ 8b^{2}c_{1}}}{2\sqrt{c_{2}^{2}+ 8b^{2}c_{1}}}, \ \tilde{u}= (c_{2}^{2}+ 8b^{2}c_{1})^{\frac{1}{4}}u, \ \tilde{\theta}(\tilde{u})= \theta( (c_{2}^{2}+ 8b^{2}c_{1})^{-\frac{1}{4}}\tilde{u}). $$ Then, we have $(d\tilde{\theta}/d\tilde{u})^{2} = 1 - k^{2} \sin^{2}\tilde{\theta}$, that is, \begin{equation} \theta(u) = am ((c_{2}^{2} + 8b^{2}c_{1})^{\frac{1}{4}}u,k), \end{equation} where $am(\cdot,k)$ is Jacobi's amplitude with the modulus $k$. Hence, we have, by (7) and (9), \begin{equation} \lambda(u)= \frac{ \sqrt{\lambda_{+}}}{\cos\theta(u)}=\frac{ \sqrt{\lambda_{+}}}{cn((c_{2}^{2}+8b^{2}c_{1})^{1/4} u,k)} , \end{equation} where $4b^2\lambda_{+}= -c_{2}+\sqrt{c_{2}^{2}+8b^{2}c_{1}}$ and $cn(\cdot,k)$ is the Jacobi elliptic function with the modulus $k$. Thus, we obtain a two-parameter family of conformal factors given by Jacobi elliptic functions. The main result of this study is the following: \begin{thm} Let $D$ denote an open, simply connected domain in $\mathbb{R}^{2}$ and $ds^{2}$ a special Liouville metric on $D$. Suppose that the Gaussian curvature of this metric satisfies $K < -2b^{2}$ everywhere for some $b>0$ and the Ricci condition $(4)$. Then, there exists a special Liouville metric on $D$ which is isometric to $ds^{2}$ and whose conformal factor is explicitly determined by a Jacobi elliptic function and two real constants. \end{thm} \section{Application to parallel mean curvature surfaces} In this section, we apply the results of the previous section to obtain immersions from a domain $D$ in $\mathbb{R}^{2}$ to a complex two-dimensional complex space form. For simplicity, we assume that $6b^{2}=1$. Let $\mathbb{CH}^{2}[-2]$ be the complex hyperbolic plane of constant holomorphic sectional curvature $-2$. \begin{thm} Let $D$ denote an open, simply connected domain in $\mathbb{R}^{2}$ and $ds^{2}$ a special Liouville metric on $D$ such that the Gaussian curvature of this metric satisfies $K < -1/3$ and $\Delta\log\sqrt{-1/3-K}=2K$. Then, there exists a family of isometric immersions from $(D, ds^{2})$ into $\mathbb{CH}^{2}[-2]$ such that the mean curvature vector $H$ of each immersion in the family is parallel for the normal connection of the normal bundle of the immersion and $|H| = 2/\sqrt{6}$. \end{thm} Proof. We find a certain one-parameter sub-family of the Jacobi elliptic functions obtained in Theorem 3. For a given $c_{1}>0$, put $c_{2}=c_{1}/6-2$. Then, the formula (8) becomes $\theta'(u)^{2} = 2+c_{1}/6 - c_{1}/6 \sin^{2}\theta$. 
Therefore, when $0<c_{1}<3/2$, setting $p=c_{1}/6$ and $\theta = \gamma$, we have the equation (4.3) of \cite{ken3}. Take $\alpha$ such that $3\cos \alpha= -\sin \gamma $. Using this $\alpha$ and putting $b=1/\sqrt{6}$, let us define $\mu, a,$ and $c$ by (4.8) of \cite{ken3}. Then the metric defined by $\mu$ is isometric to $ds^{2}$ and, by Theorem 4.1 of \cite{ken3}, Theorem 4 is proved when $0<c_{1}<3/2$. When $c_{1} > 3/2$, setting $c_{2}= 2-c_{1}/6, \ p=c_{1}/6$ and $\theta =\tilde{\gamma}$, we have the equation (4.5) of \cite{ken3}. In the same way as above, Theorem 4 is proved. \section{Added in proof} \leftline{ {\bf Surface of revolution in $\mathbb{R}^{3}$}} \quad The first fundamental form of a surface of revolution in $\mathbb{R}^{3}$ is isometric to a special Liouville metric. In fact, let $I = ds^{2} + y(s)^{2}dv^{2}, \ (y(s)>0)$ denote the first fundamental form of a surface of revolution $(x(s),y(s)\cos v,y(s)\sin v) \subset \mathbb{R}^{3}$, where $s$ is the arc length parameter of the profile curve. Then we have $I=\lambda(u)^{2}(du^{2}+dv^{2})$, where $u=u(s) := \int ds/y(s)$, $s=s(u)$ is its inverse function, and $\lambda(u) := y(s(u))$. We remark that the conformal factor $\lambda(u)$ must satisfy $\lambda^{2}(u) \geq \lambda'(u)^{2}$ everywhere because of $y'(s)^{2}\leq 1$. Conversely, a special Liouville metric $I=\lambda(u)^{2}(du^{2}+dv^{2})$ with $\lambda^{2}(u) \geq \lambda'(u)^{2}$ naturally induces a surface of revolution in $\mathbb{R}^{3}$, where the profile curve is defined by $y(u):= \lambda(u),\ x(u) := \int \sqrt{\lambda^{2}(u) - \lambda'^{2}(u)}\,du$. In particular, if the special Liouville metric satisfies the Ricci condition, then $\lambda(u)$ is determined by (10). The shape of the surface resulting from the profile curve resembles the front part of a trumpet, because both coordinate functions of the profile curve are monotone increasing on an interval of $u>0$. \vspace{0.3cm} \leftline{{\bf Open problem}} The results by Eschenburg and Tribuzy \cite{esctri} indicate the problem of determining whether the hypothesis in Theorem 4 that the metric is a special Liouville metric is necessary.
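\vspace{0.3cm} \leftline{{\bf Numerical illustration}} \quad The conformal factor $(10)$ can be evaluated numerically. The following minimal sketch (assuming Python with NumPy and SciPy; the constants are illustrative choices satisfying $6b^{2}=1$, $0<c_{1}<3/2$ and $c_{2}=c_{1}/6-2$, as in the proof of Theorem 4) computes $\lambda(u)$ via Jacobi's $cn$ and checks the first integral $(6)$ by finite differences.
\begin{verbatim}
import numpy as np
from scipy.special import ellipj

b = 1 / np.sqrt(6)                 # normalisation 6 b^2 = 1 (Section 4)
c1 = 1.0                           # illustrative choice, 0 < c1 < 3/2
c2 = c1 / 6 - 2                    # as in the proof of Theorem 4
r = np.sqrt(c2**2 + 8 * b**2 * c1)
k2 = (c2 + r) / (2 * r)            # modulus squared k^2
lam_plus = (-c2 + r) / (4 * b**2)  # from 4 b^2 lambda_+ = -c2 + r

u = np.linspace(0.0, 0.5, 2001)    # stay before the first zero of cn
_, cn, _, _ = ellipj(np.sqrt(r) * u, k2)  # ellipj takes m = k^2
lam = np.sqrt(lam_plus) / cn       # conformal factor, formula (10)

# check the first integral (6): lam'^2 = -c1 + c2 lam^2 + 2 b^2 lam^4
dlam = np.gradient(lam, u)
res = dlam**2 - (-c1 + c2 * lam**2 + 2 * b**2 * lam**4)
print(np.abs(res[5:-5]).max())     # small, up to discretisation error
\end{verbatim}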
{ "timestamp": "2021-05-06T02:11:00", "yymm": "2105", "arxiv_id": "2105.01887", "language": "en", "url": "https://arxiv.org/abs/2105.01887" }
\section{Introduction} \subsection{Magnetic Resonance Imaging} Through offering repeatable, non-invasive measures of tissue structure and function, magnetic resonance imaging (MRI) has transformed clinical medical imaging and medical science. The sensitivity of the image to tissue properties can be greatly varied with MRI, either by changing the timing with which MR signals are obtained (e.g., the echo time---TE and the repetition time---TR), or by using magnetisation preparation or contrast agents. These so-called multimodal or multi-sequence MRI methods can not only provide contrast in traditional anatomical or structural MRI but can also quantify the function of most tissues and organs of the human body in clinical and pre-clinical laboratory environments. Invasive methods, such as tissue biopsy or radionuclide tests, have therefore become less needed as a result of the prosperity of MRI and other non-invasive medical imaging technologies \cite{Hollingsworth2015}. Since MRI is non-invasive, it can be used to provide longitudinal and quantitative imaging biomarkers in therapy trials. MR methods have begun to represent gold standard measurements for clinical research, despite the fact that the modality is still relatively underutilised \cite{Hollingsworth2015}. Why is this so? ``MRI is complex and pricey'' is a commonly heard lament. \subsection{Limitations of Magnetic Resonance Imaging} Although MRI is a revolutionary non-invasive diagnostic imaging technique that provides high-resolution definition of the structural and functional information of most body tissues and organs, one significant drawback of MRI is its slow rate of image acquisition, which results in a longer scanning period as compared to other imaging modalities \cite{Lustig2007,Yang2018}. In MRI, raw data is acquired in the \textit{k}-space, which includes information about spatial frequencies within the image, rather than collected directly in the image space \cite{Suetens2009}. The Fourier transformation links the image space and the \textit{k}-space. Conventionally, once we have defined the field-of-view (FOV) and spatial resolution of the image that we want to obtain, the Nyquist criterion defines the \textit{k}-space information that must be acquired. The distance between \textit{k}-space neighbours is inversely proportional to the field of view in each direction. The highest frequency obtained in each direction is inversely proportional to the desired resolution. Data encoded with pulsed magnetic field gradients are acquired to fill the \textit{k}-space. We may obtain a line of \textit{k}-space points very quickly in one direction, known as the read direction, using either a spin or gradient echo in one repetition time. Further directions, on the other hand, must be phase-encoded, and it takes one repetition time to encode one line of \textit{k}-space \cite{Hollingsworth2015,Suetens2009}. This must then be repeated for all combinations of the phase encoding steps needed in the anterior-posterior and foot-head directions. As a result, MRI acquisitions can be time-consuming, particularly when a high resolution or large FOV is needed. This drawback not only raises the cost of imaging but also limits its use in emergency situations. Furthermore, in order to maintain image consistency, patients must lie still during the acquisition. 
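To see why phase encoding dominates the scanning time, consider a rough worked example (the numbers are purely illustrative and not taken from any specific protocol): a conventional Cartesian acquisition spends one repetition time per phase-encode line, so a single 2-D slice with $N_{\mathrm{PE}}$ phase-encode lines costs approximately \[ T_{\mathrm{scan}} \approx N_{\mathrm{PE}} \times TR, \qquad \mbox{e.g.}\ 256 \times 500~\mathrm{ms} \approx 2~\mathrm{minutes}, \] and three-dimensional or multi-average protocols multiply this time further. Such durations also make patient cooperation demanding. 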
For abdominal/thoracic imaging, patients must hold their breath, which can be problematic for paediatric patients, obese patients, and those with respiratory compromise \cite{Hollingsworth2015}. As a result, many patients can experience anxiety and claustrophobia during the MR scanning procedure \cite{Hollingsworth2015}. In order to minimise scanning costs and increase patient throughput, the MR image acquisition process must be sped up. Various MR acceleration methods rely on taking measurements of several lines per repetition time, allowing for quicker traversal of the \textit{k}-space. Examples include echo planar imaging \cite{Mansfield1977}, rapid acquisition with relaxation enhancement \cite{Hennig1986}, and fast low angle shot imaging \cite{Haase1986}. \subsection{Conventional Acceleration Using Compressive Sensing} It is possible to attain a higher degree of acceleration by sampling the \textit{k}-space only partially, i.e., not collecting all lines of measurements in the phase encoding direction(s). The original \textit{k}-space information can then be inferred from the undersampled measurements. As a consequence, the acceleration factor is the reciprocal of the sampled fraction: for example, if half of the \textit{k}-space is sampled, the acceleration factor is two. Undersampling methods thereby aim to circumvent the Nyquist-Shannon sampling criterion \cite{Lustig2007}. Compressed sensing (CS) is a promising undersampling approach that may allow for more aggressive undersampling and acceleration \cite{Lustig2007}. The CS principle is similar to the concept of compressing signals for transmission and then decompressing them \cite{Zisselman2018}, as seen in the JPEG, JPEG2000, MPEG, and MP3 standards \cite{Hollingsworth2015}. According to CS, if undersampled signals or images can be compressed accurately, they can also be decompressed or recovered accurately \cite{Fair2015}. Hereby, CS imposes three conditions on the MRI reconstruction: \begin{enumerate} \item The image or signal must be compressible. In other words, the MRI images must be sparse, with the bulk of their pixel values being zeros, either in their native domain or in an appropriate transformation domain, such as the wavelet or frequency domain. \item The undersampling pattern should be incoherent, e.g., using random undersampling, to avoid aliasing artefacts. \item A non-linear reconstruction algorithm must be used. \end{enumerate} Following these three criteria, it is possible to recover the original MRI images from their undersampled measurements. Previously published research on using CS as an MR acceleration approach employs iterative non-linear optimisation algorithms that enforce sparsity and reconstruction fidelity. Typical examples include total variation (TV) \cite{Lustig2007} and dictionary learning based methods such as DLMRI \cite{Ravishankar2011}, as well as RecPF \cite{Yang2010} and BM3D \cite{Eksioglu2016}. However, there are four major issues with these approaches: \begin{enumerate} \item Iterative optimisation can be time-consuming \cite{Hollingsworth2015,Hu2014}. \item These algorithms tend to generate an artificially smoothed image appearance \cite{Hollingsworth2015}. \item The reconstruction results can have blocky artefacts \cite{Liu2015,Kayvanrad2014,Guerquin-Kern2011}. \item They reconstruct each image as an individual event, failing to account for the expected anatomical features in MR images that may be used to improve the reconstruction accuracy \cite{Hammernik2018}. 
\end{enumerate} \subsection{Deep Learning Based Fast MRI} Deep learning based approaches have recently achieved performance dividends in a variety of medical image processing problems by using ``big data'' and advancements in computational power. To date, however, the majority of research studies have concentrated on downstream medical image interpretation and post-processing activities, such as anatomical segmentation \cite{chen2021jas,wu2021fast,wu2021automated,jin20213d,zhou2020systematic,liu2020exploring,ferreira2020automating,li2020mv,liu2019automatic,zhuang2019evaluation,mo2018deep}, lesion segmentation \cite{zhang2021me,yang2020simultaneous,li2020atrial,zhang2019automatic,yang20186,bakas2018identifying}, co-registration \cite{mok2020fast,de2017end,wu2015scalable}, synthesis \cite{xu2021synthesis,wang2021dicyc}, and multimodal data detection \cite{gao2020salient,wang2019saliencygan,ali2020novel,yang2020deep,li2018deep,dong2018holistic,dong2017automatic}, for disease identification \cite{hu2020weakly,cao2020multiparameter,zhang2019deep}, prognosis \cite{roberts2020machine,soltaninejad2017mri}, and treatment prediction \cite{jin2021predicting,nielsen2018prediction}. To increase the precision of these post-processing operations, imaging methods must be improved, which can also be aided by deep learning \cite{chen2021wavelet,lv2021pic,lvgan,yuan2020sara,guo2020deep,schlemper2018stochastic,seitzer2018adversarial}. Since its principle was developed in 2006, CS has had a long history in fast imaging applications, including its embodiment in MRI reconstruction \cite{donoho2006compressed}. However, the associated, less efficient iterative optimisation can stymie further deployment. Although deep learning based tomographic reconstruction technology has only been around for a few years, there is a lot of interest in this area, and there are many ongoing advances and exciting applications, including MRI. Deep learning based approaches can successfully overcome the majority of the aforementioned shortcomings of earlier CS methods. A deep learning model, e.g., a convolutional neural network (CNN), is made up of many layers of nodes. To learn the mapping from undersampled MR images to their corresponding fully sampled ones, the weights of the node connections between layers are optimised. The process of optimising the weights is known as training the model. Once trained, the model is capable of reconstructing original images from undersampled measurements. In terms of reconstruction accuracy, speed, and visual consistency, deep learning based methods have been shown to consistently outperform non-deep learning based ones \cite{Quan2018,Mardani2019,Yang2018,Schlemper2018,Huang2019,Eo2018}. \subsection{GAN Powered Fast MRI} Generative Adversarial Networks, or GAN for short, represent a type of generative modelling technique that employs deep learning methods, e.g., CNNs. Generative modelling is an unsupervised learning task in machine learning that entails automatically finding and learning the regularities or patterns in input data such that the model can be used to produce new examples that could plausibly have been drawn from the original dataset. GAN employs a clever method for training a generative model by posing the problem as a supervised learning problem with two sub-models: the generator model, which we train to produce new examples, and the discriminator model, which attempts to classify examples as either real (from the original domain) or fake (generated). 
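Formally, following Goodfellow et al. \cite{Goodfellow2014}, this two-player game can be summarised by the min-max objective \begin{equation*} \min_{G}\max_{D}\; \mathbb{E}_{x\sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z\sim p_{z}}\big[\log\big(1-D(G(z))\big)\big], \end{equation*} where $D(\cdot)$ outputs the probability that a sample is real. In the CS-MRI setting discussed below, the generator input is, loosely speaking, the undersampled (zero-filled) image rather than random noise $z$, and $G$ plays the role of the reconstruction network. 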
The two models are trained in an adversarial zero-sum game until the discriminator model is fooled about half of the time, indicating that the generator model is producing plausible instances. GAN is a fascinating and quickly evolving area that delivers on the promise of generative models by producing plausible instances in a variety of problems, most notably in image-to-image conversion tasks such as translating image styles, and in generating photo-realistic images of objects, scenes, and individuals that even humans cannot tell are fake. An important type of deep learning based CS-MRI reconstruction method uses GAN, which was first proposed by Yang et al. in 2017 \cite{yu2017deep,Yang2018}. In the context of CS-MRI, GAN entails training a generator to recreate the original image from undersampled measurements and a discriminator to produce the likelihood of whether the generated image matches the original, i.e., fully sampled, measurements. The discriminator output, in turn, guides the learning of the generator \cite{Goodfellow2014}. As a consequence, the generator learns to produce photo-realistic images \cite{Deng2019}. In terms of reconstruction accuracy and efficiency, GAN based methods \cite{Yang2018,Quan2018} outperform non-GAN based deep learning methods, e.g., the deep ADMM-net. One GAN based approach \cite{Mardani2019} also claims to generate fewer blurring and aliasing artefacts than non-deep learning based methods. As a result, GAN based approaches have the capability to produce state-of-the-art CS-MRI reconstruction results. ~\\ In this book chapter, \begin{itemize} \item we will perform a mini topical review on GAN powered fast MRI, including the original Deep De-Aliasing Generative Adversarial Networks (DAGAN) method and other more advanced and recently proposed GAN based models; \item we will analyse and explain different GAN models, and compare the results obtained by different GAN based models; \item we will provide a comparison study on different datasets, e.g., MRI for various anatomical scans; \item we will highlight the recent development and discuss future directions. \end{itemize} \section{Methods} \subsection{Fundamentals of MRI Reconstruction} \input{sections/fundamental_MRI_recon} \subsection{CNN Based MRI Reconstruction} \input{sections/CNN_MRI_recon} \subsection{GAN Based MRI Reconstruction} \subsubsection{General GAN} \input{sections/Method_GAN} \subsubsection{DAGAN} \input{sections/Method_DAGAN} \subsubsection{KIGAN} \input{sections/Method_KIGAN} \subsubsection{ReconGAN/RefineGAN} \input{sections/Method_ReconRefineGAN} \subsection{Evaluation Methods} Generally, the evaluation methods include objective methods for fidelity quality assessment and subjective methods for perceptual quality assessment. In this section, we review the most popular metrics for fast MRI quality evaluation. \subsubsection{Fidelity Quality Assessment} First, we introduce the Peak Signal-to-Noise Ratio (PSNR), which is the most commonly used evaluation criterion for image transformation tasks (e.g., reconstruction, super-resolution, de-noising). It compares the data range with the pixel-level Mean Squared Error (MSE): \begin{equation}\label{eqt:psnr} \mathrm{PSNR}(I_{\mathrm{rec}}, I_{\mathrm{gt}}) = 10 \cdot \log_{10}(\frac{L^2}{\frac{1}{N}\sum_{i=1}^{N}(I_{\mathrm{rec}}(i)-I_{\mathrm{gt}}(i))^2}), \end{equation} where $L$ denotes the data range (generally $L = 1.0$ in MRI reconstruction tasks), and $N$ is the number of pixels in $I_{\mathrm{rec}}$ and $I_{\mathrm{gt}}$. 
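As a minimal illustration, the PSNR defined above can be computed as follows (a sketch assuming Python with NumPy; the function name is ours and not taken from any specific reconstruction codebase):
\begin{verbatim}
import numpy as np

def psnr(rec, gt, data_range=1.0):
    # peak signal-to-noise ratio in dB, following the PSNR formula above
    rec = np.asarray(rec, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    mse = np.mean((rec - gt) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
\end{verbatim}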
PSNR represents the pixel-wise accuracy of the reconstruction, regardless of the acquisition sequences of the multimodal MRI. Besides, considering the importance of image structural information, such as brightness, contrast and structure, the Structural SIMilarity index (SSIM) is defined as: \begin{equation}\label{eqt:ssim} \mathrm{SSIM}(x, y) = \frac{2\mu_{x}\mu_{y} + \kappa_1}{\mu_x^2 + \mu_y^2 + \kappa_1} \cdot \frac{\sigma_{xy}+\kappa_2}{\sigma_x^2 + \sigma_y^2 + \kappa_2}, \end{equation} where $x, y$ denote two images, $\mu$ and $\sigma^2$ are the mean and variance, $\sigma_{xy}$ is the covariance between $x$ and $y$, and $\kappa_1, \kappa_2$ are constant relaxation terms. \subsubsection{Perceptual Quality Assessment} The perceptual quality of an image represents how realistic it looks. In MRI reconstruction tasks, the most reliable perceptual quality assessment is the mean opinion score (MOS), which asks experienced radiologists to rate the reconstructed images. Typically, the images are rated from 0 to 4 depending on the reconstructed image quality (i.e., non-diagnostic, poor, fair, good, and excellent), and the final MOS is calculated as the arithmetic mean of the scores of all raters. In some cases, the raters may also mark low perceptual quality features such as a low SNR and motion artefacts. Although the MOS seems to be faithful, it has limitations, such as inter-/intra-rater bias, variance of the rating criteria, and time-consuming scoring. Thus, the Frechet Inception Distance (FID) \cite{FID}, a learning based perceptual quality assessment, is becoming more commonly used for evaluation in GAN based image reconstruction tasks. It considers the high-level global features of a group of images (e.g., the reconstructed images) as a multidimensional Gaussian distribution $\mathcal{N}(\mu, \Sigma)$, and measures the differences between the two distributions of the reconstructed images $\mathbb{I}_{\mathrm{rec}}$ and the ground truth images $\mathbb{I}_{\mathrm{gt}}$. It first converts each group of images into a distribution of 2048 features in the latent space of the pre-trained image classification model Inception-V3 \cite{InceptionV3}. Then, the FID between these two distributions is calculated as: \begin{equation}\label{eqt:fid} \mathrm{FID}(\mathbb{I}_{\mathrm{rec}}, \mathbb{I}_{\mathrm{gt}}) = \left \|\mu_{\mathrm{gt}} - \mu_{\mathrm{rec}}\right \|^2 + \mathrm{Tr}(\Sigma_{\mathrm{gt}} +\Sigma_{\mathrm{rec}} -2(\Sigma_{\mathrm{gt}}\Sigma_{\mathrm{rec}})^{1/2}). \end{equation} The FID has become a popular metric for image perceptual quality assessment in GAN based image generation tasks because it is fully automatic and the features extracted from Inception-V3 are close to real-world object classification problems, which tend to mimic human perceptual similarity for images. \section{Benchmarking} \begin{figure}[!h] \centering\includegraphics[width=5in]{Results/brain_cartes.pdf} \caption{Brain reconstruction results using the Cartesian mask. From top to bottom: 2$\times$, 4$\times$ and 6$\times$ acceleration, respectively. From left to right: Zero-Filled (ZF), DAGAN, KIGAN, ReconGAN, RefineGAN, Ground Truth (GT).} \label{brain_cartes} \end{figure} In this chapter, we benchmark four GAN based algorithms, i.e., DAGAN, KIGAN, ReconGAN and RefineGAN, for fast MRI. Figure \ref{brain_cartes} shows the brain reconstruction results using different acceleration factors (2$\times$, 4$\times$ and 6$\times$). 
It is obvious that the zero-filled (ZF) image has strong artefacts inside the brain tissue. Over the entire image, DAGAN effectively removes the artefacts present in the ZF reconstruction. However, in the zoomed-in areas, there still exist some residual artefacts. For the reconstructions produced by KIGAN, blurring artefacts still exist. Although ReconGAN shows a significant reduction of aliasing artefacts, the edge details are not reconstructed clearly enough. It can be seen that the reconstructed details of RefineGAN are relatively fine, and the reconstruction quality is close to that of the ground truth. \begin{figure}[!h] \centering\includegraphics[width=5in]{Results/knee_cartes.pdf} \caption{Knee reconstruction results using the Cartesian mask. From top to bottom: 2$\times$, 4$\times$ and 6$\times$ acceleration, respectively. From left to right: Zero-Filled (ZF), DAGAN, KIGAN, ReconGAN, RefineGAN, Ground Truth (GT).} \label{knee_cartes} \end{figure} Besides, knee reconstruction results using different Cartesian masks are shown in Figure \ref{knee_cartes}. It can be seen that, except for the ZF images, all methods can reconstruct acceptable MR images. As the acceleration factor increases, obvious aliasing artefacts appear in the DAGAN images. As can be observed in the zoomed-in areas and the corresponding error maps, KIGAN cannot restore clear vessels. ReconGAN and RefineGAN show better reconstruction results with higher PSNR and SSIM. In addition, the quantitative values of RefineGAN are superior to those of the other methods. \begin{figure}[!h] \centering\includegraphics[width=5in]{Results/brain_radial.pdf} \caption{Brain reconstruction results using the radial mask. From top to bottom, the sampling rate (SR) is 50\%, 30\%, and 20\%, respectively. From left to right: Zero-Filled (ZF), DAGAN, KIGAN, ReconGAN, RefineGAN, Ground Truth (GT).} \label{brain_radial} \end{figure} \begin{figure}[!h] \centering\includegraphics[width=5in]{Results/knee_radial.pdf} \caption{Knee reconstruction results using the radial mask. From top to bottom, the sampling rate (SR) is 50\%, 30\%, and 20\%, respectively. From left to right: Zero-Filled (ZF), DAGAN, KIGAN, ReconGAN, RefineGAN, Ground Truth (GT).} \label{knee_radial} \end{figure} Furthermore, we also used radial and spiral masks for training and testing each GAN based method. The sampling rate (SR) of each mask is 50\%, 30\% and 20\%. Figures \ref{brain_radial} and \ref{knee_radial} show the brain and knee reconstruction results using radial masks. We can see that the image reconstructed by the ZF method under the radial mask has strong blurring artefacts, and the details in the brain and knee cannot be distinguished clearly. When SR=20\%, from the error maps, we can see that there are still obvious blurring artefacts and obscure blood vessels in the results of DAGAN and KIGAN. However, both ReconGAN and RefineGAN can restore sharper vessel edges and finer textures compared to the other methods. Besides, RefineGAN has better PSNR and SSIM quantification. \begin{figure}[!h] \centering\includegraphics[width=5in]{Results/barin_spiral.pdf} \caption{Brain reconstruction results using the spiral mask. From top to bottom, the sampling rate (SR) is 50\%, 30\%, and 20\%, respectively. From left to right: Zero-Filled (ZF), DAGAN, KIGAN, ReconGAN, RefineGAN, Ground Truth (GT).} \label{brain_spiral} \end{figure} \begin{figure}[!h] \centering\includegraphics[width=5in]{Results/knee_spiral.pdf} \caption{Knee reconstruction results using the spiral mask. 
From top to bottom, the sampling rate (SR) is 50\%, 30\%, and 20\%, respectively. From left to right: Zero-Filled (ZF), DAGAN, KIGAN, ReconGAN, RefineGAN, Ground Truth (GT).} \label{knee_spiral} \end{figure} The results are similar using spiral masks. Figures \ref{brain_spiral} and \ref{knee_spiral} show the brain and knee reconstruction results of each method using different spiral masks. Through the error maps, we can intuitively see that the reconstructed image of RefineGAN has fewer errors, and the reconstructed details are also better. The image reconstructed by RefineGAN can clearly show the details of the gray matter in the brain and of the blood vessels in the knee. \begin{table}[] \caption{The quantitative metrics (PSNR, SSIM, and RMSE($\times10^{-2}$)) of the brain using different GAN based methods. The bold numbers indicate the best results.} \resizebox{\textwidth}{!}{% \begin{tabular}{cccccccc} \hline Mask & AF/SR & Metric & ZF & DAGAN & KIGAN & ReconGAN & \textbf{RefineGAN} \\ \hline \multirow{9}{*}{Cartesian} & \multirow{3}{*}{2X} & PSNR & 30.94±2.75 & 33.79±1.88 & 33.90±2.55 & 39.08±1.34 & \textbf{39.40±1.33} \\ \cline{3-8} & & SSIM & 0.92±0.02 & 0.93±0.01 & 0.96±0.01 & \textbf{0.97±0.00} & \textbf{0.97±0.00} \\ \cline{3-8} & & RMSE & 1.57±0.01 & 0.72±0.43 & 0.78±0.53 & 0.20±0.09 & \textbf{0.19±0.08} \\ \cline{2-8} & \multirow{3}{*}{4X} & PSNR & 23.69±3.02 & 28.76±1.95 & 28.14±1.84 & 32.07±1.65 & \textbf{32.67±1.56} \\ \cline{3-8} & & SSIM & 0.79±0.03 & 0.86±0.02 & 0.88±0.02 & 0.92±0.01 & \textbf{0.93±0.01} \\ \cline{3-8} & & RMSE & 8.86±7.01 & 2.35±1.50 & 2.67±1.40 & 1.05±0.54 & \textbf{0.91±0.08} \\ \cline{2-8} & \multirow{3}{*}{6X} & PSNR & 19.47±2.31 & 25.4±1.57 & 27.91±1.57 & 29.23±1.68 & \textbf{29.95±1.61} \\ \cline{3-8} & & SSIM & 0.66±0.04 & 0.77±0.03 & 0.86±0.02 & 0.88±0.02 & \textbf{0.89±0.02} \\ \cline{3-8} & & RMSE & 21.3±14.1 & 4.89±2.33 & 2.75±1.31 & 2.03±1.00 & \textbf{1.71±0.83} \\ \hline \multirow{9}{*}{Radial} & \multirow{3}{*}{50\%} & PSNR & 34.28±1.03 & 35.24±1.14 & 38.93±1.10 & 37.89±0.93 & \textbf{39.38±0.88} \\ \cline{3-8} & & SSIM & 0.87±0.02 & 0.91±0.01 & 0.96±0.01 & 0.95±0.01 & \textbf{0.97±0.00} \\ \cline{3-8} & & RMSE & 1.94±0.22 & 1.74±0.22 & 1.01±0.12 & 1.28±0.13 & \textbf{0.88±0.11} \\ \cline{2-8} & \multirow{3}{*}{30\%} & PSNR & 28.83±0.72 & 30.99±2.33 & 33.97±1.05 & 34.35±0.99 & \textbf{35.21±1.05} \\ \cline{3-8} & & SSIM & 0.73±0.02 & 0.82±0.02 & 0.91±0.01 & 0.91±0.01 & \textbf{0.92±0.01} \\ \cline{3-8} & & RMSE & 3.10±0.30 & 2.01±0.27 & 1.60±0.18 & 1.93±0.21 & \textbf{1.02±0.17} \\ \cline{2-8} & \multirow{3}{*}{20\%} & PSNR & 26.47±0.64 & 28.96±1.90 & 31.87±0.94 & 31.96±0.87 & \textbf{32.83±0.94} \\ \cline{3-8} & & SSIM & 0.65±0.02 & 0.78±0.02 & 0.84±0.03 & 0.87±0.02 & \textbf{0.89±0.02} \\ \cline{3-8} & & RMSE & 6.08±0.36 & 2.47±0.45 & 2.56±0.27 & 2.54±0.25 & \textbf{1.56±0.32} \\ \hline \multirow{9}{*}{Spiral} & \multirow{3}{*}{50\%} & PSNR & 34.87±1.01 & 38.08±0.99 & 41.56±1.00 & 43.69±0.58 & \textbf{44.36±0.57} \\ \cline{3-8} & & SSIM & 0.90±0.01 & 0.95±0.01 & 0.96±0.01 & 0.97±0.01 & \textbf{0.98±0.00} \\ \cline{3-8} & & RMSE & 1.62±0.18 & 1.25±0.01 & 0.84±0.10 & 0.72±0.66 & \textbf{0.59±0.61} \\ \cline{2-8} & \multirow{3}{*}{30\%} & PSNR & 29.55±0.71 & 34.87±2.79 & 36.71±1.09 & 37.93±0.78 & \textbf{38.61±0.82} \\ \cline{3-8} & & SSIM & 0.83±0.01 & 0.87±0.01 & 0.91±0.02 & 0.95±0.01 & \textbf{0.95±0.00} \\ \cline{3-8} & & RMSE & 3.93±0.28 & 1.72±0.41 & 1.65±0.20 & 1.27±0.11 & \textbf{0.94±0.19} \\ 
\cline{2-8} & \multirow{3}{*}{20\%} & PSNR & 26.62±0.63 & 28.87±2.29 & 32.43±0.72 & 35.02±0.81 & \textbf{35.11±0.85} \\ \cline{3-8} & & SSIM & 0.73±0.01 & 0.80±0.04 & 0.88±0.03 & 0.91±0.01 & \textbf{0.92±0.01} \\ \cline{3-8} & & RMSE & 7.12±0.36 & 2.29±0.31 & 2.40±0.20 & 1.78±0.16 & \textbf{1.52±0.27} \\ \hline \end{tabular}% } \label{table_brain} \end{table} \begin{table}[] \caption{The quantitative metrics (PSNR, SSIM, and RMSE($\times10^{-2}$)) of the knee using different GAN based methods. The bold numbers indicate the best results.} \resizebox{\textwidth}{!}{% \begin{tabular}{cccccccc} \hline Mask & AF/SR & Metric & ZF & DAGAN & KIGAN & ReconGAN & \textbf{RefineGAN} \\ \hline \multirow{9}{*}{Cartesian} & \multirow{3}{*}{2X} & PSNR & 34.66±2.98 & 38.91±1.59 & 38.53±2.51 & 42.37±1.60 & \textbf{42.41±1.98} \\ \cline{3-8} & & SSIM & 0.95±0.01 & 0.94±0.01 & 0.96±0.01 & 0.97±0.00 & \textbf{0.98±0.00} \\ \cline{3-8} & & RMSE & 1.64±1.32 & 0.52±0.31 & 0.83±1.03 & \textbf{0.23±0.09} & 0.24±0.16 \\ \cline{2-8} & \multirow{3}{*}{4X} & PSNR & 27.31±3.23 & 34.35±1.77 & 34.70±1.74 & 34.88±1.96 & \textbf{35.58±1.74} \\ \cline{3-8} & & SSIM & 0.84±0.02 & 0.86±0.02 & 0.89±0.02 & 0.90±0.02 & \textbf{0.91±0.02} \\ \cline{3-8} & & RMSE & 10.30±11.00 & 1.55±1.20 & 1.49±1.27 & 1.37±0.93 & \textbf{1.14±0.66} \\ \cline{2-8} & \multirow{3}{*}{6X} & PSNR & 25.15±3.37 & 32.67±1.89 & 30.83±2.09 & 32.34±2.16 & \textbf{33.36±1.81} \\ \cline{3-8} & & SSIM & 0.79±0.03 & 0.82±0.03 & 0.84±0.02 & 0.86±0.02 & \textbf{0.87±0.02} \\ \cline{3-8} & & RMSE & 18.2±22.3 & 2.36±2.09 & 4.08±4.29 & 2.62±2.19 & \textbf{2.00±1.52} \\ \hline \multirow{9}{*}{Radial} & \multirow{3}{*}{50\%} & PSNR & 35.17±1.37 & 36.70±1.37 & 37.17±1.37 & 38.38±1.22 & \textbf{38.91±1.21} \\ \cline{3-8} & & SSIM & 0.90±0.02 & 0.91±0.01 & 0.92±0.01 & \textbf{0.93±0.01} & \textbf{0.93±0.01} \\ \cline{3-8} & & RMSE & 2.37±0.50 & 1.97±0.59 & 1.53±0.18 & 1.22±0.16 & \textbf{1.14±0.15} \\ \cline{2-8} & \multirow{3}{*}{30\%} & PSNR & 32.69±1.13 & 34.02±1.39 & 35.27±1.40 & 35.96±1.27 & \textbf{36.21±1.29} \\ \cline{3-8} & & SSIM & 0.80±0.03 & 0.87±0.02 & 0.88±0.01 & 0.88±0.02 & \textbf{0.89±0.02} \\ \cline{3-8} & & RMSE & 4.87±0.89 & 2.23±0.76 & 1.39±0.67 & 1.61±0.22 & \textbf{1.36±0.32} \\ \cline{2-8} & \multirow{3}{*}{20\%} & PSNR & 30.77±1.15 & 33.69±1.51 & 33.49±1.54 & 34.48±1.26 & \textbf{34.72±1.30} \\ \cline{3-8} & & SSIM & 0.75±0.03 & 0.84±0.02 & 0.85±0.02 & \textbf{0.86±0.02} & \textbf{0.86±0.02} \\ \cline{3-8} & & RMSE & 7.13±1.24 & 2.76±0.87 & 2.68±0.73 & 1.91±0.27 & \textbf{1.61±0.34} \\ \hline \multirow{9}{*}{Spiral} & \multirow{3}{*}{50\%} & PSNR & 35.62±1.27 & 39.37±1.39 & 41.39±1.37 & 42.53±0.75 & \textbf{43.04±0.77} \\ \cline{3-8} & & SSIM & 0.91±0.02 & 0.94±0.01 & 0.94±0.02 & 0.96±0.01 & \textbf{0.97±0.02} \\ \cline{3-8} & & RMSE & 1.78±0.54 & 1.06±0.10 & 0.97±0.14 & 0.73±0.15 & \textbf{0.61±0.07} \\ \cline{2-8} & \multirow{3}{*}{30\%} & PSNR & 33.48±1.12 & 36.19±1.35 & 36.26±1.18 & 38.20±1.31 & \textbf{38.49±1.33} \\ \cline{3-8} & & SSIM & 0.86±0.02 & 0.90±0.01 & 0.90±0.02 & \textbf{0.92±0.01} & \textbf{0.92±0.01} \\ \cline{3-8} & & RMSE & 3.71±0.91 & 1.67±0.72 & 1.64±0.35 & 1.24±0.18 & \textbf{0.98±0.23} \\ \cline{2-8} & \multirow{3}{*}{20\%} & PSNR & 30.97±1.14 & 34.88±1.39 & 33.83±1.50 & 36.51±1.23 & \textbf{36.75±1.27} \\ \cline{3-8} & & SSIM & 0.81±0.03 & 0.88±0.02 & 0.88±0.01 & \textbf{0.90±0.02} & \textbf{0.90±0.02} \\ \cline{3-8} & & RMSE & 5.39±1.26 & 2.29±0.56 & 1.95±0.25 
& 1.51±0.20 & \textbf{1.32±0.14} \\ \hline \end{tabular}% } \label{table_knee} \end{table} The quantitative metrics (PSNR, SSIM and RMSE) of each GAN based method using different under-sampling masks are shown in Tables \ref{table_brain} and \ref{table_knee}. We can draw similar conclusions as the qualitative visualisation results that the image quality after RefineGAN reconstruction is better than other methods. Even at a high acceleration factor or high under-sampling rate, the reconstructed images of RefineGAN still have high SNR. Tables \ref{table_brain} and \ref{table_knee} show the quantitative metrics, including PSNR, SSIM and RMSE for all compared methods. The numbers in Tables \ref{table_brain} and \ref{table_knee} represent the mean and standard deviation values of the corresponding metrics (bold numbers indicate the best performance). Compared to DAGAN, KIGAN and ReconGAN, the RefineGAN framework has outperformed them remarkably at different acceleration factors. \section{Discussion} \input{sections/discussion} \section{Conclusion} We carried out a mini review, benchmarked and compared four different GAN-based network architectures for fast MRI reconstruction in this chapter. Our comparison used the various sampling patterns, different masks on corresponding datasets that have covered commonly used clinical MRI scenarios. For our systematic research and measurement, we used both traditional and newly proposed quantitative tools. The outcomes of qualitative visualization were also examined and compared. To summarise, our mini review and benchmarking have revealed that all GAN-based approaches could obtain promising results at lower acceleration factors. However, when the acceleration factors are high, some GAN-based architectures, such as DAGAN and KIGAN, may not be sufficient for MRI reconstruction applications. Furthermore, as compared to other GAN-based methods, the RefineGAN has improved reconstruction accuracy and perceptual efficiency. Future development incorporating MR physics and XAI into GAN based models will provide promising pathways for clinical deployment with more transparency of the reconstruction algorithms. \newpage \section{Synopsis} \subsection{Limitations of Magnetic Resonance Imaging (MRI)} Magnetic resonance imaging (MRI) is a revolutionary non-invasive medical imaging technique, offering high resolution definition of the structure and function of most body tissues and organs. However, a major limitation of MRI is its slow rate of image acquisition \cite{Lustig2007,Yang2018}, resulting in a prolonged scanning time compared to other imaging modality. This limitation not only increases its scanning cost, but restricts its application in emergency settings. Furthermore, patients have to `lie still throughout an acquisition in order not to degrade the quality of the images' \cite{Hollingsworth2015}. Patients must hold their breadth for abdominal/thoracic imaging, which may be difficult for `children, obese individuals and those with respiratory compromise' \cite{Hollingsworth2015}. Hence, the MR scanning process can bring feelings of discomfort and claustrophobia to many patients. Therefore, to reduce the scanning cost and improve patient experience, it is necessary to accelerate the MR image acquisition process. The reason why MR imaging process is slow is that unlike other imaging techniques, MR images are obtained from the k-space, i.e. the spatial frequency domain. 
The image domain information is recovered from the k-space data by inverse Fourier transform \cite{Suetens2009}. The measurements in the k-space are acquired sequentially, i.e. line by line, one line per repetition time across the phase encoding direction(s) \cite{Hollingsworth2015,Suetens2009}. Various MR acceleration techniques focus on taking measurements of multiple lines per repetition time, thus traversing through the k-space at a faster rate \cite{Hollingsworth2015}. Examples include echo planar imaging \cite{Mansfield1977}, rapid acquisition with relaxation enhancement \cite{Hennig1986}, and fast low angle shot imaging \cite{Haase1986}. \subsection{Conventional Acceleration Using Compressive Sensing} To achieve a higher degree of acceleration, it is possible to sample the k-space only partially, i.e. not obtaining all lines of measurements across the phase encoding direction(s). The original k-space information can be inferred from the undersampled measurement. The result is an acceleration factor inversely proportional to the undersampling ratio. For instance, if 50\% of the k-space is sampled, the acceleration factor is 2-fold. Undersampling techniques therefore attempt to bypass the Nyquist-Shannon sampling criterion \cite{Lustig2007}. A promising undersampling method is compressed sensing (CS) \cite{Lustig2007}, potentially allowing for more aggressive undersampling and acceleration. The theory of CS is related to the idea of compressing signals for transmission, and decompressing them afterwards \cite{Zisselman2018}, as applied to the JPEG, JPEG2000, MPEG, and MP3 standards \cite{Hollingsworth2015}. CS assumes that if undersampled signals or images can be compressed accurately, they could be decompressed or reconstructed accurately \cite{Fair2015}. Hence, CS imposes 3 criteria on the reconstruction process: 1) The signal or the image must be compressible. In other words, the image must be sparse, i.e. with the majority of its pixel values being zeros, either in itself or in a suitable transformation domain, e.g. the wavelet or frequency domain. 2) The undersampling pattern should be incoherent, such as being random, to prevent aliasing artefacts. 3) The reconstruction algorithm should be non-linear. Under these 3 criteria, it is possible to reconstruct the original image from its undersampled measurements. Earlier work on applying CS as an MR acceleration method involves iterative non-linear optimisation algorithms, which enforce sparsity and reconstruction fidelity. Examples include total variation (TV) \cite{Lustig2007}, dictionary learning such as DLMRI \cite{Ravishankar2011}, RecPF \cite{Yang2010}, and BM3D \cite{Eksioglu2016}. There are however 4 main problems associated with these techniques: \begin{enumerate} \item Iterative optimisation algorithms are time-consuming \cite{Hollingsworth2015,Hu2014}. \item These algorithms tend to cause over-generalisation, leading to an artificially smoothed appearance \cite{Hollingsworth2015}. \item The reconstruction results show blocky artefacts \cite{Liu2015,Kayvanrad2014,Guerquin-Kern2011}. \item They reconstruct each image as an isolated case, and thus fail to take into account expected anatomical features in MR images to enhance performance \cite{Hammernik2018}. \end{enumerate} \subsection{Deep Learning Based Fast MRI} Recently, deep learning based methods have gained performance dividends in many medical image analysis problems by using `big data' and advances in computing capacity. 
However, to date, the majority of the work has focused on downstream medical image analysis and post-processing tasks, e.g., imaging data segmentation, co-registration and multimodal data classification, for disease diagnosis, prognosis and treatment prediction. Pivoting towards higher accuracy of these post-processing tasks requires an improvement of imaging techniques, which can also benefit from deep learning. CS has a long history in fast imaging applications, including the embodiment of MRI reconstruction, since its theory was established in 2006 \cite{donoho2006compressed}. Nevertheless, the associated, less efficient iterative optimisation procedure might hinder its further deployment. While deep learning based research in tomographic reconstruction emerged only a few years ago, there is great interest in this research field, and there are many ongoing developments and promising applications, including MRI. Most of the above limitations of the earlier CS methods are, however, effectively addressed by deep learning based methods. A deep learning model consists of multiple layers of nodes. The weights of the node connections between layers are optimised to learn the mapping from the undersampled MR images to their corresponding fully sampled ones. This process of weight optimisation is referred to as training the model. Once trained, the model can reconstruct original images from their undersampled measurements. Deep learning based CS methods have been shown to consistently outperform non-deep learning based ones, in terms of reconstruction accuracy, speed and visual quality \cite{Quan2018,Mardani2019,Yang2018,Schlemper2018,Huang2019,Eo2018}. \subsection{GAN Powered Fast MRI} Generative Adversarial Networks, or GAN for short, are an approach to generative modelling using deep learning methods, such as convolutional neural networks. Generative modelling is an unsupervised learning task in machine learning that involves automatically discovering and learning the regularities or patterns in input data in such a way that the model can be used to generate or output new examples that plausibly could have been drawn from the original dataset. GAN is a clever way of training a generative model by framing the problem as a supervised learning problem with two sub-models: the generator model that we train to generate new examples, and the discriminator model that tries to classify examples as either real (from the domain) or fake (generated). The two models are trained together in an adversarial zero-sum game, until the discriminator model is fooled about half the time, meaning the generator model is generating plausible examples. GAN is an exciting and rapidly changing field, delivering on the promise of generative models in their ability to generate realistic examples across a range of problem domains, most notably in image-to-image translation tasks such as translating photos of summer to winter or day to night, and in generating photo-realistic photos of objects, scenes, and people that even humans cannot tell are fake. An important type of deep learning based CS-MRI reconstruction method uses GAN, which was first proposed by Yang et al. in 2017 in the following two papers (by the first author of this book chapter). \begin{itemize} \item Yang, Guang, et al. ``DAGAN: Deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction.'' IEEE Transactions on Medical Imaging 37.6 (2017): 1310-1321. 
\end{itemize} In the context of CS-MRI, GAN involves training a generator to reconstruct the original image from its undersampled measurements and a discriminator to output the probability of whether the generated image resembles the original, i.e. fully sampled, ones. The discriminator output in turn modifies the learning of the generator \cite{Goodfellow2014}. This results in the generator producing photo-realistic images \cite{Deng2019}. GAN-based methods \cite{Yang2018,Quan2018} outperform the non-GAN based deep learning method---deep ADMM-net---both in reconstruction accuracy and speed. One GAN-based method \cite{Mardani2019} also claims to produce fewer blurry and aliasing artefacts than the non-deep learning-based ones. Hence, GAN-based methods can potentially achieve state-of-the-art CS-MRI reconstruction results. ~\\ In this book chapter, \begin{itemize} \item we will perform a topical review on GAN powered fast MRI, including the original DAGAN method and other more advanced and recently proposed GAN based models; \item we will analyse and explain different GAN models, and compare the results obtained by different GAN based models; \item we will provide a comprehensive study on different datasets, e.g., MRI for various anatomical scans; \item we will highlight the recent development and discuss future directions. \end{itemize}
Once the field-of-view (FOV) and spatial resolution of the desired image have been defined, the Nyquist criterion conventionally determines the \textit{k}-space sampling that must be satisfied. The distance between \textit{k}-space neighbours is inversely proportional to the field of view in each direction, and the highest frequency acquired in each direction is inversely proportional to the desired resolution. Data encoded with pulsed magnetic field gradients are acquired to fill the \textit{k}-space. We may obtain a line of \textit{k}-space points very quickly in one direction, known as the read direction, using either a spin or gradient echo within one repetition time. The other directions, however, must be phase-encoded, and encoding one line of \textit{k}-space takes one repetition time \cite{Hollingsworth2015,Suetens2009}. This must then be repeated for all combinations of the phase-encoding steps needed in the anterior-posterior and foot-head directions. As a result, MRI acquisitions can be time-consuming, particularly when a high resolution or a large FOV is needed. This drawback not only raises the cost of imaging but also limits its use in emergency situations. Furthermore, in order to maintain image consistency, patients must lie still during the acquisition. For abdominal/thoracic imaging, patients must hold their breath, which can be problematic for paediatric and obese patients, and those with respiratory compromise \cite{Hollingsworth2015}. As a result, many patients can experience anxiety and claustrophobia during the MR scanning procedure \cite{Hollingsworth2015}. In order to minimise scanning costs and increase patient throughput, the MR image acquisition process must be sped up. Various MR acceleration methods rely on taking measurements of several lines per repetition time, allowing for quicker traversal of the \textit{k}-space. Examples include echo planar imaging \cite{Mansfield1977}, rapid acquisition with relaxation enhancement \cite{Hennig1986}, and fast low angle shot imaging \cite{Haase1986}. \subsection{Conventional Acceleration Using Compressive Sensing} It is possible to attain a higher degree of acceleration by sampling the \textit{k}-space only partially, i.e., not collecting all lines of measurements in the phase encoding direction(s). The original \textit{k}-space information can then be inferred from the undersampled measurements. The acceleration factor is the inverse of the undersampling ratio; for example, if half of the \textit{k}-space is sampled, the acceleration factor is 2. Undersampling methods thus aim to circumvent the Nyquist-Shannon sampling criterion \cite{Lustig2007}. Compressed sensing (CS) is a promising undersampling approach that may allow for more aggressive undersampling and acceleration \cite{Lustig2007}. The CS principle is similar to the concept of compressing signals for transmission and then decompressing them \cite{Zisselman2018}, as seen in the JPEG, JPEG2000, MPEG, and MP3 standards \cite{Hollingsworth2015}. According to CS, if undersampled signals or images can be compressed correctly, they can also be decompressed or recovered accurately \cite{Fair2015}. Hereby, CS sets three conditions on the MRI reconstruction: \begin{enumerate} \item The image or signal must be compressible. In other words, the MRI images must be sparse, with the bulk of their pixel values being zeros, either in the native domain or in an appropriate transformation domain, such as the wavelet or frequency domain.
\item To avoid coherent aliasing artefacts, the undersampling pattern should be incoherent, e.g., using random undersampling. \item A non-linear reconstruction algorithm must be used. \end{enumerate} Following these three criteria, it is possible to recover the original MRI images from their undersampled measurements. Previously published research on using CS as an MR acceleration approach employs iterative non-linear optimisation algorithms that enforce sparsity and reconstruction fidelity. Typical examples include total variation (TV) \cite{Lustig2007}, dictionary learning (e.g., DLMRI \cite{Ravishankar2011}), RecPF \cite{Yang2010}, and BM3D \cite{Eksioglu2016}. However, there are four major issues with these approaches: \begin{enumerate} \item Iterative optimisation can be time-consuming \cite{Hollingsworth2015,Hu2014}. \item These algorithms tend to generate an artificially smoothed image appearance \cite{Hollingsworth2015}. \item The reconstruction results can have blocky artefacts \cite{Liu2015,Kayvanrad2014,Guerquin-Kern2011}. \item They reconstruct each image as an individual event, failing to account for the expected anatomical features in MR images that may be used to improve the reconstruction accuracy \cite{Hammernik2018}. \end{enumerate} \subsection{Deep Learning Based Fast MRI} Deep learning based approaches have recently achieved performance dividends in a variety of medical image processing problems by using `big data' and advancements in computational power. To date, however, the majority of research studies have concentrated on downstream medical image interpretation and post-processing activities, such as anatomical segmentation \cite{chen2021jas,wu2021fast,wu2021automated,jin20213d,zhou2020systematic,liu2020exploring,ferreira2020automating,li2020mv,liu2019automatic,zhuang2019evaluation,mo2018deep}, lesion segmentation \cite{zhang2021me,yang2020simultaneous,li2020atrial,zhang2019automatic,yang20186,bakas2018identifying}, co-registration \cite{mok2020fast,de2017end,wu2015scalable}, synthesis \cite{xu2021synthesis,wang2021dicyc}, and multimodal data detection \cite{gao2020salient,wang2019saliencygan,ali2020novel,yang2020deep,li2018deep,dong2018holistic,dong2017automatic}, for disease identification \cite{hu2020weakly,cao2020multiparameter,zhang2019deep}, prognosis \cite{roberts2020machine,soltaninejad2017mri}, and treatment prediction \cite{jin2021predicting,nielsen2018prediction}. To increase the precision of these post-processing operations, imaging methods must be improved, which can also be aided by deep learning \cite{chen2021wavelet,lv2021pic,lvgan,yuan2020sara,guo2020deep,schlemper2018stochastic,seitzer2018adversarial}. Since its principle was developed in 2006, CS has had a long history in fast imaging applications, including the embodiment of MRI reconstruction \cite{donoho2006compressed}. However, the associated inefficient iterative optimisation can stymie further deployment. Although deep learning based tomographic reconstruction technology has only been around for a few years, there is a lot of interest in this area, and there are many ongoing advances and exciting applications, including MRI. Deep learning based approaches can successfully overcome the majority of the aforementioned shortcomings of earlier CS methods. A deep learning algorithm, e.g., a convolutional neural network (CNN), is made up of many layers of nodes. To learn the mapping from undersampled MR images to their corresponding fully sampled ones, the weights of the node connections between layers are optimised.
The process of optimising the weights is known as training the model. Once trained, the model is capable of reconstructing original images from undersampled measurements. In terms of reconstruction accuracy, speed, and visual consistency, deep learning based methods have been shown to consistently outperform non-deep learning based ones \cite{Quan2018,Mardani2019,Yang2018,Schlemper2018,Huang2019,Eo2018}. \subsection{GAN Powered Fast MRI} Generative Adversarial Networks, or GAN for short, represent a type of generative modelling technique that employs deep learning methods, e.g., CNN. Generative modelling is an unsupervised learning task in machine learning that entails automatically finding and learning the regularities or patterns in input data such that the model can be used to produce or output new examples that could have been plausibly drawn from the original dataset. GAN employs a clever method for training a generative model by posing the problem as a supervised learning problem with two sub-models: the generator model, which we train to produce new examples, and the discriminator model, which attempts to identify examples as either true (from the original domain) or false (generated). The two models are trained in an adversarial zero-sum game until the discriminator model is tricked about half of the time, indicating that the generator model is producing plausible instances. GAN is a fascinating and quickly evolving area that delivers on the promise of generative models by producing plausible instances in a variety of problems, most notably in image-to-image conversion tasks such as translating image styles, and in generating photo-realistic images of objects, scenes, and individuals that even humans cannot recognise as fake. GAN is an important type of deep learning based CS-MRI reconstruction method, which was first proposed by Yang et al. in 2017 \cite{yu2017deep,Yang2018}. In the context of CS-MRI, GAN entails training a generator to recreate the original image from undersampled measurements and a discriminator to produce the likelihood of whether the generated image matches the original, i.e., fully sampled, measurements. The discriminator, in turn, modifies the generator's learning \cite{Goodfellow2014}. As a consequence, the generator generates photo-realistic images \cite{Deng2019}. In terms of reconstruction accuracy and efficiency, GAN based methods \cite{Yang2018,Quan2018} outperform non-GAN based deep learning methods, e.g., deep ADMM-net. One GAN based approach \cite{Mardani2019} also claims to generate fewer blurring and aliasing artefacts than non-deep learning based methods. As a result, GAN based approaches have the capability to produce state-of-the-art CS-MRI reconstruction results.
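As an illustration of how this adversarial game can be set up for CS-MRI, below is a minimal PyTorch sketch. It is our own simplified stand-in, not the DAGAN implementation: the tiny networks, dummy data, and loss weighting are placeholder assumptions, and DAGAN additionally uses perceptual and frequency-domain losses.
\begin{verbatim}
import torch
import torch.nn as nn

# Placeholder networks; a real model would use, e.g., a U-Net generator.
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))
D = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1),
                  nn.LeakyReLU(0.2), nn.Flatten(),
                  nn.Linear(16 * 64 * 64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Dummy batch: x_zf = zero-filled inputs, x_gt = fully sampled targets.
x_zf, x_gt = torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128)

# Discriminator step: fully sampled images -> "real", generated -> "fake".
x_fake = G(x_zf).detach()
loss_d = bce(D(x_gt), torch.ones(4, 1)) + bce(D(x_fake), torch.zeros(4, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the
# ground truth via a pixel-wise fidelity term (the weight 10 is arbitrary).
x_fake = G(x_zf)
loss_g = bce(D(x_fake), torch.ones(4, 1)) \
         + 10.0 * nn.functional.l1_loss(x_fake, x_gt)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
\end{verbatim}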
~\\ In this book chapter, \begin{itemize} \item we will perform a mini topical review on GAN powered fast MRI, including the original Deep De-Aliasing Generative Adversarial Networks (DAGAN) method and other more advanced and recently proposed GAN based models; \item we will analyse and explain different GAN models, and compare the results obtained by different GAN based models; \item we will provide a comparison study on different datasets, e.g., MRI for various anatomical scans; \item we will highlight the recent development and discuss future directions. \end{itemize} \section{Methods} \subsection{Fundamentals of MRI Reconstruction} \input{sections/fundamental_MRI_recon} \subsection{CNN Based MRI Reconstruction} \input{sections/CNN_MRI_recon} \subsection{GAN Based MRI Reconstruction} \subsubsection{General GAN} \input{sections/Method_GAN} \subsubsection{DAGAN} \input{sections/Method_DAGAN} \subsubsection{KIGAN} \input{sections/Method_KIGAN} \subsubsection{ReconGAN/RefineGAN} \input{sections/Method_ReconRefineGAN} \subsection{Evaluation Methods} Generally, the evaluation methods include objective methods for fidelity quality assessment and subjective methods for perceptual quality assessment. In this section, we review the most popular metrics for fast MRI quality evaluation. \subsubsection{Fidelity Quality Assessment} First, we introduce the Peak Signal-to-Noise Ratio (PSNR), which is the most commonly used evaluation criterion for image transformation tasks (e.g., reconstruction, super-resolution, de-noising). It relates the data range to the pixel-level Mean Squared Error (MSE): \begin{equation}\label{eqt:psnr} \mathrm{PSNR}(I_{\mathrm{rec}}, I_{\mathrm{gt}}) = 10 \cdot \log_{10}\left(\frac{L^2}{\frac{1}{N}\sum_{i=1}^{N}(I_{\mathrm{rec}}(i)-I_{\mathrm{gt}}(i))^2}\right), \end{equation} where $L$ denotes the data range (generally $L = 1.0$ in MRI reconstruction tasks), and $N$ is the number of pixels in $I_{\mathrm{rec}}$ and $I_{\mathrm{gt}}$. PSNR represents the pixel-wise accuracy of the reconstruction regardless of the acquisition sequences of the multimodal MRI. Besides, considering the importance of image structural information, such as brightness, contrast and structure, the Structural SIMilarity index (SSIM) is defined as: \begin{equation}\label{eqt:ssim} \mathrm{SSIM}(x, y) = \frac{2\mu_{x}\mu_{y} + \kappa_1}{\mu_x^2 + \mu_y^2 + \kappa_1} \cdot \frac{2\sigma_{xy}+\kappa_2}{\sigma_x^2 + \sigma_y^2 + \kappa_2}, \end{equation} where $x, y$ denote two images, $\mu$ and $\sigma^2$ are the mean and variance, $\sigma_{xy}$ is the covariance between $x$ and $y$, and $\kappa_1, \kappa_2$ are constant relaxation terms.
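A direct NumPy translation of Eqns.~(\ref{eqt:psnr}) and (\ref{eqt:ssim}) is given below as a sketch. Note that the SSIM form above uses global image statistics, whereas popular implementations compute it over local sliding windows and average the result; the constants $\kappa_1, \kappa_2$ below follow the common choice $(0.01L)^2$ and $(0.03L)^2$, which is an assumption on our part.
\begin{verbatim}
import numpy as np

def psnr(rec, gt, L=1.0):
    # Eqn. (PSNR): log-ratio of the data range to the pixel-wise MSE.
    mse = np.mean((rec - gt) ** 2)
    return 10.0 * np.log10(L ** 2 / mse)

def ssim_global(x, y, L=1.0):
    # Eqn. (SSIM) with statistics taken over the whole image.
    kappa1, kappa2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    luminance = (2 * mx * my + kappa1) / (mx ** 2 + my ** 2 + kappa1)
    structure = (2 * cov + kappa2) / (vx + vy + kappa2)
    return luminance * structure
\end{verbatim}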
\subsubsection{Perceptual Quality Assessment} The perceptual quality of an image represents how realistic it looks. In MRI reconstruction tasks, the most reliable perceptual quality assessment is the mean opinion score (MOS), which asks experienced radiologists to rate the reconstructed images. Typically, the images are rated from 0 to 4 depending on the reconstructed image quality (i.e., non-diagnostic, poor, fair, good, and excellent), and the final MOS is calculated as the arithmetic mean of the scores of all raters. In some cases, the raters may also mark low perceptual quality features such as low SNR and motion artefacts. Although the MOS seems to be faithful, it has limitations such as inter-/intra-rater bias and variance of the rating criteria, and the scoring can be time-consuming. Thus, the Fr\'echet Inception Distance (FID) \cite{FID}, a learning based perceptual quality assessment, is becoming more commonly used for evaluation in GAN based image reconstruction tasks. It considers the high-level global features of a group of images (e.g., the reconstructed images) as a multidimensional Gaussian distribution $\mathcal{N}(\mu, \Sigma)$, and measures the differences between the two distributions of the reconstructed images $\mathbb{I}_{\mathrm{rec}}$ and the ground truth images $\mathbb{I}_{\mathrm{gt}}$. It first converts each group of images into a distribution of 2048 features in the latent space of the pre-trained image classification model Inception-V3 \cite{InceptionV3}. Then, the FID between these two distributions is calculated as: \begin{equation}\label{eqt:fid} \mathrm{FID}(\mathbb{I}_{\mathrm{rec}}, \mathbb{I}_{\mathrm{gt}}) = \left \|\mu_{\mathrm{gt}} - \mu_{\mathrm{rec}}\right \|^2 + \mathrm{Tr}(\Sigma_{\mathrm{gt}} +\Sigma_{\mathrm{rec}} -2(\Sigma_{\mathrm{gt}}\Sigma_{\mathrm{rec}})^{1/2}). \end{equation} The FID has become a popular metric for perceptual quality assessment in GAN based image generation tasks because it is fully automatic and the features extracted from Inception-V3 are close to real-world object classification problems, which tend to mimic human perceptual similarity for images. \section{Benchmarking} \begin{figure}[!h] \centering\includegraphics[width=5in]{Results/brain_cartes.pdf} \caption{Brain reconstruction results using the Cartesian mask. From top to bottom: 2$\times$, 4$\times$ and 6$\times$ acceleration, respectively. From left to right: Zero-Filled (ZF), DAGAN, KIGAN, ReconGAN, RefineGAN, Ground Truth (GT).} \label{brain_cartes} \end{figure} In this chapter, we benchmark four GAN based algorithms, i.e., DAGAN, KIGAN, ReconGAN and RefineGAN, for fast MRI. Figure \ref{brain_cartes} shows the brain reconstruction results using different acceleration factors (2$\times$, 4$\times$, 6$\times$). It is obvious that the zero-filled (ZF) image has strong artefacts inside the brain tissue. Over the entire image, DAGAN effectively removes the artefacts of the ZF reconstruction; however, some residual artefacts remain in the zoomed-in areas. The reconstructions produced by KIGAN still exhibit blurring artefacts. Although ReconGAN shows a significant reduction of aliasing artefacts, the edge details are not reconstructed clearly enough. It can be seen that the reconstructed details of RefineGAN are relatively fine, and the reconstruction quality is close to that of the ground truth. \begin{figure}[!h] \centering\includegraphics[width=5in]{Results/knee_cartes.pdf} \caption{Knee reconstruction results using the Cartesian mask. From top to bottom: 2$\times$, 4$\times$ and 6$\times$ acceleration, respectively. From left to right: Zero-Filled (ZF), DAGAN, KIGAN, ReconGAN, RefineGAN, Ground Truth (GT).} \label{knee_cartes} \end{figure} Knee reconstruction results using different acceleration factors of the Cartesian mask are shown in Figure \ref{knee_cartes}. It can be seen that, except for the ZF images, all methods can reconstruct acceptable MR images. As the acceleration factor increases, obvious aliasing artefacts appear in the DAGAN images. As can be observed in the zoomed-in areas and the corresponding error maps, KIGAN cannot restore clear vessels. ReconGAN and RefineGAN show better reconstruction results with higher PSNR and SSIM. In addition, the quantitative values of RefineGAN are superior to those of the other methods. \begin{figure}[!h] \centering\includegraphics[width=5in]{Results/brain_radial.pdf} \caption{Brain reconstruction results using the radial mask. From top to bottom, the sampling rate (SR) is 50\%, 30\%, and 20\%, respectively. From left to right: Zero-Filled (ZF), DAGAN, KIGAN, ReconGAN, RefineGAN, Ground Truth (GT).} \label{brain_radial} \end{figure} \begin{figure}[!h] \centering\includegraphics[width=5in]{Results/knee_radial.pdf} \caption{Knee reconstruction results using the radial mask.
From top to bottom, the sampling rate (SR) is 50\%, 30\%, and 20\%, respectively. From left to right: Zero-Filled (ZF), DAGAN, KIGAN, ReconGAN, RefineGAN, Ground Truth (GT).} \label{knee_radial} \end{figure} Furthermore, we also used radial and spiral masks for training and testing each GAN based method. The sampling rate (SR) of each mask is 50\%, 30\% and 20\%. Figures \ref{brain_radial} and \ref{knee_radial} show the brain and knee reconstruction results using radial masks. We can see that the images reconstructed by the ZF method under the radial mask have strong blurring artefacts, and the details in the brain and knee cannot be distinguished clearly. When SR=20\%, the error maps show that there are still obvious blurring artefacts and obscure blood vessels in the results of DAGAN and KIGAN. However, both ReconGAN and RefineGAN can restore sharper vessel edges and finer textures compared to the other methods. In addition, RefineGAN achieves better PSNR and SSIM scores. \begin{figure}[!h] \centering\includegraphics[width=5in]{Results/barin_spiral.pdf} \caption{Brain reconstruction results using the spiral mask. From top to bottom, the sampling rate (SR) is 50\%, 30\%, and 20\%, respectively. From left to right: Zero-Filled (ZF), DAGAN, KIGAN, ReconGAN, RefineGAN, Ground Truth (GT).} \label{brain_spiral} \end{figure} \begin{figure}[!h] \centering\includegraphics[width=5in]{Results/knee_spiral.pdf} \caption{Knee reconstruction results using the spiral mask. From top to bottom, the sampling rate (SR) is 50\%, 30\%, and 20\%, respectively. From left to right: Zero-Filled (ZF), DAGAN, KIGAN, ReconGAN, RefineGAN, Ground Truth (GT).} \label{knee_spiral} \end{figure} The results are similar using spiral masks. Figures \ref{brain_spiral} and \ref{knee_spiral} show the brain and knee reconstruction results of each method using different spiral masks. From the error maps, we can intuitively see that the images reconstructed by RefineGAN have fewer errors and better-reconstructed details. The images reconstructed by RefineGAN clearly show the details of the grey matter in the brain and of the blood vessels in the knee. \begin{table}[] \caption{The quantitative metrics (PSNR, SSIM, and RMSE ($\times10^{-2}$)) of the brain using different GAN based methods.
The bold numbers indicate the best results.} \resizebox{\textwidth}{!}{% \begin{tabular}{cccccccc} \hline Mask & AF/SR & Metric & ZF & DAGAN & KIGAN & ReconGAN & \textbf{RefineGAN} \\ \hline \multirow{9}{*}{Cartesian} & \multirow{3}{*}{2X} & PSNR & 30.94±2.75 & 33.79±1.88 & 33.90±2.55 & 39.08±1.34 & \textbf{39.40±1.33} \\ \cline{3-8} & & SSIM & 0.92±0.02 & 0.93±0.01 & 0.96±0.01 & \textbf{0.97±0.00} & \textbf{0.97±0.00} \\ \cline{3-8} & & RMSE & 1.57±0.01 & 0.72±0.43 & 0.78±0.53 & 0.20±0.09 & \textbf{0.19±0.08} \\ \cline{2-8} & \multirow{3}{*}{4X} & PSNR & 23.69±3.02 & 28.76±1.95 & 28.14±1.84 & 32.07±1.65 & \textbf{32.67±1.56} \\ \cline{3-8} & & SSIM & 0.79±0.03 & 0.86±0.02 & 0.88±0.02 & 0.92±0.01 & \textbf{0.93±0.01} \\ \cline{3-8} & & RMSE & 8.86±7.01 & 2.35±1.50 & 2.67±1.40 & 1.05±0.54 & \textbf{0.91±0.08} \\ \cline{2-8} & \multirow{3}{*}{6X} & PSNR & 19.47±2.31 & 25.4±1.57 & 27.91±1.57 & 29.23±1.68 & \textbf{29.95±1.61} \\ \cline{3-8} & & SSIM & 0.66±0.04 & 0.77±0.03 & 0.86±0.02 & 0.88±0.02 & \textbf{0.89±0.02} \\ \cline{3-8} & & RMSE & 21.3±14.1 & 4.89±2.33 & 2.75±1.31 & 2.03±1.00 & \textbf{1.71±0.83} \\ \hline \multirow{9}{*}{Radial} & \multirow{3}{*}{50\%} & PSNR & 34.28±1.03 & 35.24±1.14 & 38.93±1.10 & 37.89±0.93 & \textbf{39.38±0.88} \\ \cline{3-8} & & SSIM & 0.87±0.02 & 0.91±0.01 & 0.96±0.01 & 0.95±0.01 & \textbf{0.97±0.00} \\ \cline{3-8} & & RMSE & 1.94±0.22 & 1.74±0.22 & 1.01±0.12 & 1.28±0.13 & \textbf{0.88±0.11} \\ \cline{2-8} & \multirow{3}{*}{30\%} & PSNR & 28.83±0.72 & 30.99±2.33 & 33.97±1.05 & 34.35±0.99 & \textbf{35.21±1.05} \\ \cline{3-8} & & SSIM & 0.73±0.02 & 0.82±0.02 & 0.91±0.01 & 0.91±0.01 & \textbf{0.92±0.01} \\ \cline{3-8} & & RMSE & 3.10±0.30 & 2.01±0.27 & 1.60±0.18 & 1.93±0.21 & \textbf{1.02±0.17} \\ \cline{2-8} & \multirow{3}{*}{20\%} & PSNR & 26.47±0.64 & 28.96±1.90 & 31.87±0.94 & 31.96±0.87 & \textbf{32.83±0.94} \\ \cline{3-8} & & SSIM & 0.65±0.02 & 0.78±0.02 & 0.84±0.03 & 0.87±0.02 & \textbf{0.89±0.02} \\ \cline{3-8} & & RMSE & 6.08±0.36 & 2.47±0.45 & 2.56±0.27 & 2.54±0.25 & \textbf{1.56±0.32} \\ \hline \multirow{9}{*}{Spiral} & \multirow{3}{*}{50\%} & PSNR & 34.87±1.01 & 38.08±0.99 & 41.56±1.00 & 43.69±0.58 & \textbf{44.36±0.57} \\ \cline{3-8} & & SSIM & 0.90±0.01 & 0.95±0.01 & 0.96±0.01 & 0.97±0.01 & \textbf{0.98±0.00} \\ \cline{3-8} & & RMSE & 1.62±0.18 & 1.25±0.01 & 0.84±0.10 & 0.72±0.66 & \textbf{0.59±0.61} \\ \cline{2-8} & \multirow{3}{*}{30\%} & PSNR & 29.55±0.71 & 34.87±2.79 & 36.71±1.09 & 37.93±0.78 & \textbf{38.61±0.82} \\ \cline{3-8} & & SSIM & 0.83±0.01 & 0.87±0.01 & 0.91±0.02 & 0.95±0.01 & \textbf{0.95±0.00} \\ \cline{3-8} & & RMSE & 3.93±0.28 & 1.72±0.41 & 1.65±0.20 & 1.27±0.11 & \textbf{0.94±0.19} \\ \cline{2-8} & \multirow{3}{*}{20\%} & PSNR & 26.62±0.63 & 28.87±2.29 & 32.43±0.72 & 35.02±0.81 & \textbf{35.11±0.85} \\ \cline{3-8} & & SSIM & 0.73±0.01 & 0.80±0.04 & 0.88±0.03 & 0.91±0.01 & \textbf{0.92±0.01} \\ \cline{3-8} & & RMSE & 7.12±0.36 & 2.29±0.31 & 2.40±0.20 & 1.78±0.16 & \textbf{1.52±0.27} \\ \hline \end{tabular}% } \label{table_brain} \end{table} \begin{table}[] \caption{The quantitative metrics (PSNR, SSIM, and RMSE ($\times10^{-2}$)) of the knee using different GAN based methods.
The bold numbers indicate the best results.} \resizebox{\textwidth}{!}{% \begin{tabular}{cccccccc} \hline Mask & AF/SR & Metric & ZF & DAGAN & KIGAN & ReconGAN & \textbf{RefineGAN} \\ \hline \multirow{9}{*}{Cartesian} & \multirow{3}{*}{2X} & PSNR & 34.66±2.98 & 38.91±1.59 & 38.53±2.51 & 42.37±1.60 & \textbf{42.41±1.98} \\ \cline{3-8} & & SSIM & 0.95±0.01 & 0.94±0.01 & 0.96±0.01 & 0.97±0.00 & \textbf{0.98±0.00} \\ \cline{3-8} & & RMSE & 1.64±1.32 & 0.52±0.31 & 0.83±1.03 & \textbf{0.23±0.09} & 0.24±0.16 \\ \cline{2-8} & \multirow{3}{*}{4X} & PSNR & 27.31±3.23 & 34.35±1.77 & 34.70±1.74 & 34.88±1.96 & \textbf{35.58±1.74} \\ \cline{3-8} & & SSIM & 0.84±0.02 & 0.86±0.02 & 0.89±0.02 & 0.90±0.02 & \textbf{0.91±0.02} \\ \cline{3-8} & & RMSE & 10.30±11.00 & 1.55±1.20 & 1.49±1.27 & 1.37±0.93 & \textbf{1.14±0.66} \\ \cline{2-8} & \multirow{3}{*}{6X} & PSNR & 25.15±3.37 & 32.67±1.89 & 30.83±2.09 & 32.34±2.16 & \textbf{33.36±1.81} \\ \cline{3-8} & & SSIM & 0.79±0.03 & 0.82±0.03 & 0.84±0.02 & 0.86±0.02 & \textbf{0.87±0.02} \\ \cline{3-8} & & RMSE & 18.2±22.3 & 2.36±2.09 & 4.08±4.29 & 2.62±2.19 & \textbf{2.00±1.52} \\ \hline \multirow{9}{*}{Radial} & \multirow{3}{*}{50\%} & PSNR & 35.17±1.37 & 36.70±1.37 & 37.17±1.37 & 38.38±1.22 & \textbf{38.91±1.21} \\ \cline{3-8} & & SSIM & 0.90±0.02 & 0.91±0.01 & 0.92±0.01 & \textbf{0.93±0.01} & \textbf{0.93±0.01} \\ \cline{3-8} & & RMSE & 2.37±0.50 & 1.97±0.59 & 1.53±0.18 & 1.22±0.16 & \textbf{1.14±0.15} \\ \cline{2-8} & \multirow{3}{*}{30\%} & PSNR & 32.69±1.13 & 34.02±1.39 & 35.27±1.40 & 35.96±1.27 & \textbf{36.21±1.29} \\ \cline{3-8} & & SSIM & 0.80±0.03 & 0.87±0.02 & 0.88±0.01 & 0.88±0.02 & \textbf{0.89±0.02} \\ \cline{3-8} & & RMSE & 4.87±0.89 & 2.23±0.76 & 1.39±0.67 & 1.61±0.22 & \textbf{1.36±0.32} \\ \cline{2-8} & \multirow{3}{*}{20\%} & PSNR & 30.77±1.15 & 33.69±1.51 & 33.49±1.54 & 34.48±1.26 & \textbf{34.72±1.30} \\ \cline{3-8} & & SSIM & 0.75±0.03 & 0.84±0.02 & 0.85±0.02 & \textbf{0.86±0.02} & \textbf{0.86±0.02} \\ \cline{3-8} & & RMSE & 7.13±1.24 & 2.76±0.87 & 2.68±0.73 & 1.91±0.27 & \textbf{1.61±0.34} \\ \hline \multirow{9}{*}{Spiral} & \multirow{3}{*}{50\%} & PSNR & 35.62±1.27 & 39.37±1.39 & 41.39±1.37 & 42.53±0.75 & \textbf{43.04±0.77} \\ \cline{3-8} & & SSIM & 0.91±0.02 & 0.94±0.01 & 0.94±0.02 & 0.96±0.01 & \textbf{0.97±0.02} \\ \cline{3-8} & & RMSE & 1.78±0.54 & 1.06±0.10 & 0.97±0.14 & 0.73±0.15 & \textbf{0.61±0.07} \\ \cline{2-8} & \multirow{3}{*}{30\%} & PSNR & 33.48±1.12 & 36.19±1.35 & 36.26±1.18 & 38.20±1.31 & \textbf{38.49±1.33} \\ \cline{3-8} & & SSIM & 0.86±0.02 & 0.90±0.01 & 0.90±0.02 & \textbf{0.92±0.01} & \textbf{0.92±0.01} \\ \cline{3-8} & & RMSE & 3.71±0.91 & 1.67±0.72 & 1.64±0.35 & 1.24±0.18 & \textbf{0.98±0.23} \\ \cline{2-8} & \multirow{3}{*}{20\%} & PSNR & 30.97±1.14 & 34.88±1.39 & 33.83±1.50 & 36.51±1.23 & \textbf{36.75±1.27} \\ \cline{3-8} & & SSIM & 0.81±0.03 & 0.88±0.02 & 0.88±0.01 & \textbf{0.90±0.02} & \textbf{0.90±0.02} \\ \cline{3-8} & & RMSE & 5.39±1.26 & 2.29±0.56 & 1.95±0.25 & 1.51±0.20 & \textbf{1.32±0.14} \\ \hline \end{tabular}% } \label{table_knee} \end{table} The quantitative metrics (PSNR, SSIM and RMSE) of each GAN based method using different under-sampling masks are shown in Tables \ref{table_brain} and \ref{table_knee}. Consistent with the qualitative visualisation results, the image quality of the RefineGAN reconstructions is better than that of the other methods.
Even at a high acceleration factor or a low sampling rate, the reconstructed images of RefineGAN still have a high SNR. The numbers in Tables \ref{table_brain} and \ref{table_knee} represent the mean and standard deviation of the corresponding metrics (bold numbers indicate the best performance). Compared to DAGAN, KIGAN and ReconGAN, the RefineGAN framework performs remarkably better at different acceleration factors. \section{Discussion} \input{sections/discussion} \section{Conclusion} In this chapter, we carried out a mini review and benchmarked and compared four different GAN-based network architectures for fast MRI reconstruction. Our comparison used various sampling patterns and masks on corresponding datasets that cover commonly used clinical MRI scenarios. For our systematic study, we used both traditional and newly proposed quantitative metrics. The outcomes of the qualitative visualisation were also examined and compared. To summarise, our mini review and benchmarking have revealed that all GAN-based approaches can obtain promising results at lower acceleration factors. However, when the acceleration factors are high, some GAN-based architectures, such as DAGAN and KIGAN, may not be sufficient for MRI reconstruction applications. Furthermore, compared to the other GAN-based methods, RefineGAN achieves improved reconstruction accuracy and perceptual quality. Future developments incorporating MR physics and explainable AI (XAI) into GAN based models will provide promising pathways for clinical deployment with more transparency of the reconstruction algorithms.
\section{Introduction} Understanding the motion of non-rigidly deforming scenes using a single range sensor lies at the core of many computer vision, AR/VR, and robotics applications. In this context, one fundamental limitation is that a single-view range sensor cannot capture data in occluded regions, leading to incomplete observations of a 3D environment. As a result, existing non-rigid motion tracking methods are restricted to the observable part of the scene. However, the ability to infer complete motion from a partial observation is indispensable for many high-level tasks. For instance, for a nursing robot to safely care for an elderly person (e.g., to predict the person's action and react accordingly), it needs to understand both the complete body shape and how the whole body moves, even if the person is always partially occluded. In order to address these challenges, we pose the question: \textit{how can we infer the motion of the unobserved geometry in a non-rigidly deforming scene?} Existing works such as DynamicFusion~\cite{dynamicfusion} and VolumeDeform~\cite{volumedeform} propose to propagate deformations from the visible surface to the invisible space through a latent deformation graph. Hidden deformations are then determined by optimizing hand-crafted deformation priors such as As-Rigid-As-Possible~\cite{arap} or Embedded Deformation~\cite{embededdeformation}, which enforce that graph vertices locally move in an approximately rigid manner. Such deformation priors have several limitations: 1) they require heavy parameter tuning; 2) they do not always reflect natural deformations; and 3) they often assume a continuous surface. As a result, these priors are mostly used as regularizers for local deformations, but struggle with larger hidden regions. One promising avenue towards solving this problem is to leverage data-driven priors that learn to infer the missing geometry. Very recently, deep learning approaches for 3D shape or scene completion and other generative tasks involving a single depth image or room-scale scans have shown promising results~\cite{3d_epn,SSCNet_shuran,scancomplete,chibane20ifnet,sgnn_cvpr2020}. However, these works primarily focus on static environments. In this paper, we make the first effort to combine geometry completion with non-rigid motion tracking. We argue that the shape and motion of non-rigidly deforming objects are highly entangled data modalities: on the one hand, the ability to infer the geometry of unobserved object parts provides valuable information for motion estimation; on the other hand, motion can be considered as the shape's evolution along the time axis, and similarity in motion patterns is a strong indicator of structural connectivity. To leverage these synergies, we propose 4DComplete, which jointly recovers the missing geometry and predicts motion for both seen and unseen regions. We build 4DComplete on a sparse, fully-convolutional neural network, which facilitates the joint estimation of shape and motion at high resolutions. In addition, we introduce DeformingThings4D, a new large-scale synthetic dataset that captures a variety of non-rigidly deforming objects, including humanoids and animals. Our dataset provides holistic 4D ground truth with color, optical/scene flow, depth, signed distance representations, and volumetric motion fields.
\medskip \noindent In summary, we propose the following contributions: \begin{itemize} \item We introduce 4DComplete, the first method that jointly recovers the shape and motion field from partial observations. \item We demonstrate that these two tasks help each other, resulting in strong 4D feature representations that outperform existing baselines by a significant margin. \item We provide a large-scale non-rigid 4D dataset for training and benchmarking. The dataset consists of 1,972~animation sequences and 122,365~frames. The dataset is available at: \url{https://github.com/rabbityl/DeformingThings4D}. \end{itemize} \section{Related Work} \subsection{Non-Rigid Tracking Using Depth Sensors} \begin{figure*}[!ht] \centering \includegraphics[width= 1\linewidth ]{fig/network.jpg} \caption{ The network architecture of 4DComplete (pink capsule: training loss; $\mathbf{\oplus}$: concatenation; $\mathbf{\otimes}$: filtering by geometry; numbers in brackets: ($n_{in}$, $n_{out}$) feature dimensions). The input partial TSDF and VMF are concatenated together and fed into the 4D encoder. The two decoders predict the complete TSDF and VMF in parallel. There are 4 hierarchical levels. The shape decoder predicts the geometry at each hierarchical level and passes the predicted geometry to the corresponding layer in the motion decoder and the following layer in the shape decoder. Our method is trained on cropped volumes of spatial dimension $96\times96\times128$, which cover about 70 percent of an object. The fully-convolutional nature of our approach enables testing on whole objects of arbitrary sizes. } \label{fig:network_architecture} \end{figure*} Many methods for non-rigid tracking use variations of the Non-rigid Iterative Closest Point (N-ICP) algorithm~\cite{NICP-1,NICP-2,li2008regist,zollhofer2014real}, in which the point-to-point or point-to-plane distances of corresponding points are iteratively minimized. To prevent uncontrolled deformations and resolve motion ambiguities, the N-ICP optimization usually employs deformation regularizers such as As-Rigid-As-Possible (ARAP)~\cite{arap} or embedded deformation~\cite{embededdeformation}. One of the first real-time methods to jointly track and reconstruct non-rigid surfaces was DynamicFusion~\cite{dynamicfusion}. VolumeDeform~\cite{volumedeform} extends the ideas of DynamicFusion by adding sparse SIFT feature matches to improve tracking robustness. Using deep learning, DeepDeform~\cite{deepdeform} replaces the classical feature matching with CNN-based correspondence matching. Li et al.~\cite{Learning2Optimize} go one step further and differentiate through the N-ICP algorithm, thus obtaining a dense feature matching term. A similar direction is taken by Neural Non-Rigid Tracking~\cite{bovzivc2020neural}; however, its focus lies in end-to-end robust correspondence estimation. To handle topology changes, KillingFusion~\cite{killingfusion} directly estimates the motion field given a pair of signed distance fields (SDF). Optical/scene flow~\cite{flownet,RAFT_eccv2020,pwcnet,global_patch_collider,flownet3d,wu2019pointpwc,liu2019meteornet,HPLFlownet,lv2018LearningRigidity} is a closely related technique, and has been used to generate initial guesses for non-rigid tracking in~\cite{dou2016fusion4d,surfelwarp,bovzivc2020neural,motion2fusion,flownet3d++}. Among these works, FlowNet3D~\cite{flownet3d} is one of the first methods that directly estimates scene flow from two sets of point clouds.
While existing methods mainly focus on the visible surface of a scene, we take one step further and model the deformation of the hidden surface. \subsection{Shape and Scene Completion} Completing partial 3D scans is an active research area in geometry processing. Traditional methods, such as Poisson Surface Reconstruction~\cite{PoissonSurfaceReconst}, locally optimize for a surface to fit observed points and work well for small missing regions. Zheng et al.~\cite{zheng2013beyond} predict the unobserved voxels by reasoning about physics, and Halimi et al.~\cite{WholeIsGreaterThanParts} complete partial human scans by deforming human templates. More recently, 3D CNNs have shown promising results for geometry completion of depth scans~\cite{SSCNet_shuran,3d_epn,scancomplete,sgnn_cvpr2020}. These works operate either on a single depth image of a scene, as with SSCNet~\cite{SSCNet_shuran}, or on room- and building-floor-scale scans, as shown by ScanComplete~\cite{scancomplete} and SGNN~\cite{sgnn_cvpr2020}. An alternative line of research for shape completion uses implicit scene representations~\cite{occupancy_network,occupancy_flow,convolutional_occnet,deepsdf,PiFusion,chibane20ifnet,chiyu2020localimplicit}; however, while these approaches achieve stunning results for fitting and interpolation of objects/scenes, they still struggle to generalize across object categories with high geometric variety. While these existing works mainly focus on static scenes, we investigate how to leverage the synergies of shape completion in the dynamic 4D domain. \subsection{Non-Rigid 4D Datasets} Collecting large-scale 4D datasets of deforming objects is a non-trivial task, in particular when the goal is to obtain a sufficiently large number of objects. Non-rigid datasets~\cite{de2008performance,faust,anguelov2005scape,vlasic2008articulated,guo2015robust,ye2012performance,volumedeform,killingfusion,deepdeform,Zheng2019DeepHuman,AMASS_ICCV2019,3D_menagerie} have been widely used, but they are either relatively small, limited to specific scene types, or suffer from occlusion and sensor noise; hence, they are not directly suited for our 4D completion task. Notably, obtaining dense motion field ground truth from real-world 3D scans is quite challenging, as it requires costly per-point correspondence annotations. This is one of the reasons why we have seen many synthetic datasets in the context of dense optical flow methods~\cite{Monka,Sintel,lv2018LearningRigidity,playing_for_benchmarks}. Among them, Sintel~\cite{Sintel} and Monkaa~\cite{Monka} are composed of rendered animations of deforming objects. However, these sequences are relatively short and do not provide complete 3D shapes and motion fields. In order to facilitate learning data-driven deformation priors, we introduce a much larger synthetic dataset with 1,972~animation sequences, spanning a large diversity of objects ranging from humanoids to various animal species (cf. Sec.~\ref{section-dataset}). \section{Method: 4DComplete} Given a single-view depth map observation of a 3D scene and the scene flow computed between the current frame and the next frame, the goal of 4DComplete is to recover the hidden geometry and its motion field. \bigskip \noindent \textbf{Input.} We use a 3D volumetric grid to represent both shape and motion. The input shape is represented as a truncated signed distance field (TSDF), i.e., a sparse set of voxel locations within the truncation band and their corresponding distance values.
The TSDF is computed from a single depth map using volumetric fusion~\cite{volumetric_fusion}; i.e., every voxel is projected into the current depth map and its distance value is updated accordingly. To represent the input motion of the visible surface, we pre-compute the 3D motion vector (in $\mathbb{R}^3$) for each occupied voxel, resulting in a volumetric motion field (VMF) representation. We concatenate the TSDF and VMF and feed them as input to a neural network. \bigskip \noindent \textbf{Scene Flow Field $\Leftrightarrow$ Volumetric Motion Field.} We use FlowNet3D~\cite{flownet3d} to predict the motion of the visible surface, which estimates the scene flow field (SFF) between two point clouds. Because a 3D point does not necessarily lie on a regular 3D grid position, we convert between the SFF and the VMF as follows: given a point cloud $\{ {p_i|i = 1,...,N} \} $, where $p_i \in \mathbb{R}^3$ are the XYZ coordinates of the individual points, the SFF is defined as $ \{ \mathcal{SFF}_i|i = 1,...,N \} $, where $\mathcal{SFF}_i\in \mathbb{R}^3$ are the 3D translational motion vectors of the points. Similarly, given a set of 3D voxel positions $\{ {v_j|j = 1,...,M} \} $, the VMF is defined as $ \{ \mathcal{VMF}_j|j = 1,...,M \} $, where $\mathcal{VMF}_j\in \mathbb{R}^3$ are the 3D translational motion vectors of the voxels. To convert from SFF to VMF, we use the inverse-distance weighted interpolation as defined in \cite{pointnet++}: \begin{equation} \label{eqn:sff_2_vmf} \mathcal{VMF}_j = \sum_{p_i \in knn ( v_j ) } \frac{ \mathcal{SFF}_i \cdot dist(p_i, v_j)^{-1} }{\sum_{p_{i} \in knn ( v_j ) }dist(p_i, v_j)^{-1}}, \end{equation} where $knn()$ is the function that finds the K-Nearest-Neighbors and $dist(\cdot,\cdot)$ computes the Euclidean distance between two positions; we set the number of neighbors to $K=3$. To convert from VMF to SFF, we use tri-linear interpolation: \begin{equation} \label{eqn:vmf_2_sff} \mathcal{SFF}_i = \sum_{v_j \in knn ( p_i ) } \mathcal{VMF}_j \cdot w(p_i, v_j), \end{equation} where $w(\cdot,\cdot)$ computes the linear-interpolation weights, and $K=8$ represents the 8 neighboring corners of the grid cell in which the point lies. \bigskip \noindent \textbf{Network Architecture.} To allow for high-resolution outputs of the shape and motion field, we leverage sparse convolutions~\cite{sparseconv_arxiv,sparseconv_iccv,choy2019minkowski} for our neural network architecture, which makes it computationally efficient in processing 3D volumetric data by operating only on the surface geometry; i.e., our method only processes voxels within the truncation band near the surface and ignores the remaining space. Fig.~\ref{fig:network_architecture} shows an overview of our network architecture. The network consists of a shared 4D encoder and two decoders that estimate shape and motion in parallel. The input sparse tensor is first fed into the~\textit{4D Encoder}, which encodes the data using a series of sparse convolutions, each of which reduces the spatial dimensions by a factor of two. The two decoders are designed in a coarse-to-fine architecture with 4 hierarchical levels. We use skip connections between the 4D encoder and the two decoders to connect feature maps of the same spatial resolution. Since the shape decoder usually generates a larger set of sparse locations than the input, we use a zero feature vector for the locations that do not exist in the input volume.
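As a concrete illustration, the SFF$\to$VMF conversion of Eqn.~(\ref{eqn:sff_2_vmf}) can be sketched as follows (illustrative NumPy/SciPy; the small epsilon guarding against zero distances is our own implementation assumption):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def sff_to_vmf(points, sff, voxels, K=3, eps=1e-8):
    """Scatter per-point scene flow onto voxel centers by inverse-distance
    weighting over the K nearest surface points (Eqn. 1).

    points: (N, 3) surface points; sff: (N, 3) their motion vectors;
    voxels: (M, 3) occupied voxel centers.  Returns the (M, 3) VMF.
    """
    dist, idx = cKDTree(points).query(voxels, k=K)  # both (M, K)
    w = 1.0 / (dist + eps)                          # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)               # normalize per voxel
    return (sff[idx] * w[..., None]).sum(axis=1)    # (M, K, 3) -> (M, 3)
\end{verbatim}
The inverse direction (Eqn.~\ref{eqn:vmf_2_sff}) instead gathers the motion of the 8 surrounding voxel corners with standard tri-linear weights.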
\bigskip \noindent \textbf{Message Passing Between the Two Branches.} At hierarchical level $k$, the shape decoder predicts the voxels' occupancy $O_k$ and TSDF value $S_k$. We keep the voxels with $\mathrm{sigmoid}(O_k (v)) > 0.5$ as the input geometry for the next hierarchical level. Within each hierarchical level, the shape decoder feeds the predicted geometry to the parallel motion decoder to inform it where the motion should be estimated. In return, the motion feature is filtered by the sparse geometry and shared with the shape decoder. \bigskip \noindent \textbf{Shape Loss.} The shape decoder's final output is a sparse TSDF from which a mesh can be extracted by Marching Cubes. Following~\cite{sgnn_cvpr2020}, we apply an $l_1$ loss on the log-transformed TSDF values. Using the log-transformation on the TSDF values helps to shift the loss's attention towards the surface, as larger values far away from the surface are shrunk, thus encouraging more accurate prediction near the surface geometry. We additionally employ proxy losses at each hierarchy level for the outputs $O_k$ and $S_k$, using binary cross-entropy with target occupancies and $l_1$ with target TSDF values, respectively. \bigskip \noindent \textbf{Motion Loss.} The motion decoder estimates the completed volumetric motion field $ \{ \mathcal{VMF}_i|i = 1,...,M \} $. The ground truth motion field at the predicted sparse locations is given by $ \{ \mathcal{VMF}_{i,gt}|i = 1,...,M \} $. We formulate the loss for the motion field on the final predicted sparse locations using the $l_2$ loss: $\sum_{i=1}^M ||\mathcal{VMF}_i - \mathcal{VMF}_{i,gt} ||_2^2$. In addition, we apply the cosine similarity loss $\sum_{i=1}^M (1 - \frac{\mathcal{VMF}_i \cdot \mathcal{VMF}_{i,gt} }{ ||\mathcal{VMF}_i ||\cdot ||\mathcal{VMF}_{i,gt}||})$ on the normalized motion vectors to encourage the directions of the motion to be consistent with the ground truth. \bigskip \noindent \textbf{Progressive Growing.} We train our network in a progressively growing fashion following the ideas of \cite{sgnn_cvpr2020}. There are four hierarchy levels; we progressively introduce a higher-resolution geometry decoder every 2,000 training iterations. To facilitate learning in the motion decoder, we feed the ground-truth geometry (instead of the predicted geometry of the shape decoder) to the motion decoder during the first 10K iterations. \bigskip \noindent \textbf{Training.} We use our newly-constructed DeformingThings4D dataset (cf. Sec.~\ref{section-dataset}) to train our network. At training time, we consider cropped views of scans for efficiency (see Fig.~\ref{fig:network_architecture}); we use random crops of size $96\times 96\times 128$ voxels at the finest level. We crop the volumes at $1$-meter intervals from each training object and discard empty volumes. The resolution drops by a factor of 2 per level, resulting in resolutions of $48\times 48\times 64$, $24\times 24\times 32$, and $12\times 12\times 16$ for the coarser hierarchical levels. The fully-convolutional design of our approach enables testing on whole objects of arbitrary sizes at test time. To learn a viewpoint-invariant motion representation, we apply random rigid rotations to the 3D motion vectors as data augmentation during training. The randomness is drawn from the Haar distribution~\cite{random_orthogonal}, which yields a uniform distribution on SO(3).
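A sketch of this augmentation is given below (illustrative; \texttt{scipy}'s \texttt{Rotation.random} also draws uniformly from the Haar measure on SO(3), and rotating the voxel coordinates about their centroid together with the motion vectors is our own assumption):
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

def rotate_sample(voxel_coords, motion_vectors):
    """Apply one random rigid rotation to a (coords, motions) training pair.

    voxel_coords: (M, 3) voxel positions; motion_vectors: (M, 3) VMF vectors.
    """
    R = Rotation.random().as_matrix()      # uniform (Haar) rotation on SO(3)
    center = voxel_coords.mean(axis=0)     # rotate positions about centroid
    coords = (voxel_coords - center) @ R.T + center
    motions = motion_vectors @ R.T         # direction vectors: rotation only
    return coords, motions
\end{verbatim}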
We train our network using the Adam optimizer with a learning rate of 0.001 and a batch size of 8. \section{DeformingThings4D Dataset} \label{section-dataset} \begin{figure*}[!t] \centering \hspace*{-0.7cm} \includegraphics[width= 1.05\linewidth]{fig/dataset.jpg} \caption{ DeformingThings4D dataset. Left: examples of animated characters. Right: dataset statistics. In total, we collected 147 different characters spanning 31~categories, with a total of 1,972~animations and 122,365~frames. } \label{fig:Dataset_examples} \end{figure*} Training our network requires a sufficient amount of non-rigidly deforming target sequences with ground truth 4D correspondences at the voxel level (i.e., motion and shape). In order to provide such data, we construct a synthetic non-rigid dataset, DeformingThings4D, which consists of a large number of animated characters, including humanoids and animals, with skin mesh, texture, and skeleton. We obtained the characters from Adobe Mixamo\footnote{https://mixamo.com}, where the humanoid motion data was collected using a motion capture system. The animals' skins and motions were designed by CG experts. Generally, these objects are animated by using ``rigging'' and ``skinning'' to blend the skeletal movement onto the surface skin mesh. Fig.~\ref{fig:Dataset_examples} shows examples of characters in the dataset and the dataset statistics. \subsection{Data Generation}\label{section:data_generation} Given an animated 3D mesh, we generate per-frame RGB-D maps, inter-frame scene flow, signed distance fields, and volumetric motion fields; see Fig.~\ref{fig:datagen}. We perform data generation with Blender\footnote{https://www.blender.org/} scripts. \begin{figure}[!h] \centering \includegraphics[width= 1\linewidth]{fig/datagen.jpg} \caption{ Data Generation Process: given an animated 3D mesh (a), virtual cameras are sampled on a sphere. One of the cameras is selected as the input view, for which depth maps (b) are rendered. Depth frames are used to compute the projective TSDF (e) and inter-frame scene flow (f). The ground-truth complete TSDF (c) is computed by integrating the depth images from all virtual cameras. The complete 3D motion field (d) is obtained by blending the mesh vertices' motion onto nearby occupied voxels. } \label{fig:datagen} \end{figure} \medskip \noindent \textbf{RGB-D Map.} To render depth maps, we uniformly sample 42 camera viewpoints on a sphere centered at the target character's mesh. The mesh-to-camera distance ranges within $0.5-2.5m$. We render all the depth maps using the intrinsic parameters of the Azure Kinect camera. We store per-pixel depth in millimeters and render the color channel using Blender's built-in Eevee engine with a principled BSDF shader. \medskip \noindent \textbf{Inter-frame Scene Flow Field (SFF).} The mesh animations run at 25 frames per second. We track the mesh vertices' 3D displacements between a pair of temporally adjacent frames and project the 3D displacements to the camera's pixel coordinates as scene flow. The flow vector for a pixel is computed by interpolating the motions of the 3 vertices of the triangle face that the pixel's cast ray first hits. We generate scene flow ground truth for all observable pixels in the source frame, even if the pixels are occluded in the target frame. To simulate different magnitudes of deformation, we sub-sample the sequences using frame jumps of \{1, 3, 7, 12\}.
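Returning to the camera sampling described in the RGB-D paragraph above: the paper does not specify the exact construction, and since 42 points also matches a once-subdivided icosahedron, the Fibonacci sphere below is only one plausible way to place the viewpoints quasi-uniformly.
\begin{verbatim}
import numpy as np

def sample_viewpoints(n=42, radius=1.5):
    """Place n camera positions quasi-uniformly on a sphere of the given
    radius (meters), centered at the origin (the character's mesh center)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i       # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / n                # uniform in height
    r = np.sqrt(1.0 - z ** 2)
    return radius * np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

# One camera set per character, with a random mesh-to-camera distance.
cams = sample_viewpoints(42, radius=np.random.uniform(0.5, 2.5))
\end{verbatim}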
\medskip \noindent \textbf{Signed Distance Field (SDF).} In order to generate the ground truth SDF, we volumetrically fuse the depth maps from all virtual cameras into a dense regular grid~\cite{volumetric_fusion}, where each voxel stores a truncated signed distance value. We repeat this process independently for the four hierarchy levels, with voxel sizes of $1.0\,cm$, $2.0\,cm$, $4.0\,cm$, and $8.0\,cm$. From the input depth map, we compute the projective SDF with a voxel size of $1.0\,cm$ as network input, setting the truncation to 3$\times$ the voxel size. TSDF values are stored in units of the voxel size, which facilitates testing on volumes with arbitrarily sampled voxel sizes. \medskip \noindent \textbf{Volumetric Motion Field (VMF).} We compute the motion ground-truth for all voxels near the mesh surface, i.e., within the truncation band of 3$\times$ the voxel size. For each valid voxel, we first find its $K$-nearest-neighbor vertices on the mesh surface and then use Dual Quaternion Blending (DQB) to bind the motion of the KNN vertices to the voxel position. Empirically, we set $K=3$. As for the SDF volume, we repeat this process independently for all four resolutions, i.e., with voxel sizes of $1.0\,cm$, $2.0\,cm$, $4.0\,cm$, and $8.0\,cm$. \section{Results} \subsection{Evaluation Metrics} \smallskip \noindent \textbf{Motion Estimation Evaluation Metric.} Following~\cite{flownet3d}, we use 3D end-point-error (EPE) and motion accuracy (ACC) as our motion evaluation metrics. The 3D EPE measures the average Euclidean distance between the estimated and ground-truth motion vectors. The ACC score measures the fraction of points whose estimated motion vectors fall below a specified end-point-error. We report the ACC metric at two different thresholds. Note that throughout the experiments we convert all VMF to SFF (using Eqn.~\ref{eqn:vmf_2_sff}) before doing motion evaluation. \medskip \noindent \textbf{Shape Completion Evaluation Metric.} We use the following metrics to evaluate the reconstructed geometry: Volumetric IoU (IoU), Chamfer Distance (CD) in centimeters, Surface Normal Consistency (SNC), Point to Plane distance (P2P), and $\ell_1$ error of the SDF value. \subsection{Benchmarking Scene Flow} We compare our DeformingThings4D dataset with FlyingThings3D~\cite{Monka}, a large-scale dynamic motion dataset consisting of flying rigid objects. We train FlowNet3D~\cite{flownet3d} on each of the two datasets and evaluate it on the test sets of DeformingThings4D, the DeepDeform~\cite{deepdeform} benchmark, and the KITTI~\cite{kitti} scene flow benchmark. The results are shown in Tab.~\ref{tab:dataset_compare}. DeepDeform~\cite{deepdeform} is a very challenging real-world benchmark for non-rigid motion. The FlowNet3D model trained on our dataset significantly reduces the scene flow error on the real-world DeepDeform benchmark (from 21.07 to 13.08). The KITTI dataset captures street scenes in which mostly rigid cars move around, a setting closer to the flying-things scenario. Our dataset still shows results comparable to FlyingThings3D on KITTI. \begin{figure*}[!h] \centering \includegraphics[width= 0.89\linewidth ]{fig/cow_deform.jpg} \caption{ Surface deformation for the ``Deer'' and ``Dairy Cow'' sequences. The complete shape and the motion of the visible surface (\blue{\textbf{blue}}) are given, and the goal is to estimate the deformation of the hidden surface (\red{\textbf{red}}).
The gray mesh shows the ground truth deformation (\textcolor{mygray}{\textbf{gray}}), which is not available/used for registration. ARAP leads to severe distortion on the neck and head of the deer. The dairy cow's stomach undergoes a contraction movement; ARAP cannot evenly distribute such deformation, leading to unnatural surface folding at the stomach. Our method yields natural deformations for both sequences. Note that our method is trained only on humanoid motions. } \label{fig:cow_deform} \end{figure*} \subfile{Tab-EPE.tex} \begin{table*}[] \centering \small \renewcommand{\arraystretch}{1} \setlength{\tabcolsep}{12pt} \begin{tabular}{|l|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{DeformingThings4D} & \multicolumn{3}{c|}{DeepDeform Dataset~\cite{deepdeform}} \\ \cline{2-7} Method & EPE$\downarrow$ & ACC(5)$\uparrow$ & ACC(10)$\uparrow$ & EPE$\downarrow$ & ACC(5)$\uparrow$ & ACC(10)$\uparrow$ \\ \hline Ours (\textit{w/o shape completion}) &3.82 & 79.02\% & 90.55\% & 13.75 & 26.89\% & 63.42\% \\ \hline Ours (\textit{w/ shape completion}) &\textbf{3.56} & \textbf{85.02}\% & \textbf{91.59}\% & \textbf{13.15} & \textbf{28.57\%} & \textbf{63.66}\% \\ \hline \end{tabular} \caption{ Scene flow estimation results on our DeformingThings4D dataset and the DeepDeform~\cite{deepdeform} dataset. All scores are reported only for the visible surface points. Metrics are end-point-error (EPE) in centimeters, and accuracy (\textless\,5\,$cm$ or 5\%, \textless\,10\,$cm$ or 10\%). } \label{tab:end_point_error_vis} \end{table*} \begin{table}[h] \small \centering \renewcommand{\arraystretch}{1} \setlength{\tabcolsep}{6pt} \begin{tabular}{|c|c|c|c|c|} \hline Method & CD$\downarrow$ & IoU $\uparrow$ & SNC $\uparrow$ & L1 $\downarrow$ \\ \hline Ours (\textit{w/o motion})& 2.66 & 74.98\% & 0.779 & 0.531 \\ \hline Ours (\textit{w/ motion}) & \textbf{2.57} & \textbf{75.72}\% & \textbf{0.812} & \textbf{0.503} \\ \hline \end{tabular} \vspace{0.1cm} \caption{ Surface prediction error on the test set of DeformingThings4D. The metrics are Volumetric IoU (IoU), Chamfer Distance (CD) in centimeters, Surface Normal Consistency (SNC), and $\ell_1$ error of the SDF.} \label{tab:geometry_score} \end{table} \subsection{Motion Prediction for the Hidden Surface} This section evaluates the motion estimation of the hidden surface. We conduct the following experiment: the complete mesh shape, a subset of mesh vertices that are visible from a given camera viewpoint, and the ground truth scene flow for the visible vertices are given, and the goal is to estimate the motion of the hidden vertices of the mesh. We evaluate the following methods: \smallskip \noindent \textbf{\textbullet Rigid Fitting.} This method assumes that the shape undergoes rigid motion. It finds a single rigid transform in $SE(3)$ for the entire shape that best explains the surface motion. \smallskip \noindent \textbf{\textbullet As-Rigid-As-Possible (ARAP) Deformation.} ARAP~\cite{arap} is widely used as a deformation prior in non-rigid reconstruction~\cite{dynamicfusion,volumedeform,zollhofer2014real}. It assumes that, locally, a point is transformed with a rigid transformation. Such rigidity constraints are imposed upon nearby vertices that are connected by edges. ARAP deformation finds for each mesh vertex a local fan-rotation $R\in SO(3)$ and a global translation vector $t\in \mathbb{R}^3$ that best explain the scene flow motion under the local rigidity constraints.
\smallskip \noindent \textbf{\textbullet Motion Complete (Ours).} Given the complete shape and the partial motion on the visible surface, this method predicts the VMF for the complete shape and converts it to SFF to get the motion at the mesh vertices' positions. This method is trained only on humanoid motions and evaluated on an animal-motion subset (to assess how the model generalizes across domains). \smallskip \noindent \textbf{\textbullet Motion Complete + Post Processing (PP) (Ours).} We found that the motion prediction of our Motion Complete model is sometimes noisy. We employ optimization-based post-processing to alleviate the noise: the predicted motion field on the mesh surface is jointly optimized with an ARAP prior that enforces similar motions for nearby vertices. \smallskip Tab.~\ref{tab:end_point_error_inv} reports the motion estimation results for the occluded surface. The test set includes one humanoid sequence and 6 animal sequences with different animations. Note that our method is trained only on the humanoids dataset. Among the baselines, rigid fitting yields significantly larger errors on most sequences, which indicates that the sequences undergo large non-rigid motion. Our Motion Complete overall achieves lower end-point-error than ARAP on most sequences. Motion Complete + PP further improves the numbers. Fig.~\ref{fig:cow_deform} shows qualitative results of surface deformation for the ``Deer'' and ``Dairy Cow'' sequences. The deformed surfaces are obtained by using the estimated motion to warp the source model. Our method yields more plausible deformations than ARAP for the occluded surface. We conclude that 3D sparse ConvNets with large receptive fields learn to capture the global deformation. \subsection{Real-World Results} Fig.~\ref{fig:demo_screenshot} shows that our method, trained only on our synthetic data, generalizes well to real-world RGB-D input captured with an Azure Kinect camera. \subsection{Ablation of Shape and Motion Estimation} This experiment examines how the two tasks, geometry completion and motion estimation, influence each other. To get the scene flow of the visible surface, we re-train FlowNet3D~\cite{flownet3d} using our scene flow dataset. FlowNet3D predicts the SFF given a pair of point clouds subsampled to 2048 points. We convert the sparse SFF to VMF using Eqn.~\ref{eqn:sff_2_vmf} as network input. The voxel positions in the VMF are consistent with the input projective TSDF. As defined in Fig.~\ref{fig:network_architecture}, we alternatively remove the shape completion head or the motion estimation head to examine the synergy of the two tasks. Tab.~\ref{tab:end_point_error_vis} reports the motion prediction results for the visible surface. Though evaluated only on the visible surface, the model trained with the added supervision of the geometry completion task shows improvement over the model trained only on motion prediction. This demonstrates that completing the missing shape is beneficial for non-rigid motion estimation. Tab.~\ref{tab:geometry_score} reports the geometry completion results on our synthetic DeformingThings4D dataset. The whole model shows improvement over the model trained for geometry completion only. This result validates the idea that, in a dynamic scene, understanding the motion is beneficial for achieving better geometric completion.
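For reference, a minimal sketch of the motion metrics used throughout this section (EPE and thresholded accuracy; the combined absolute/relative threshold follows the convention stated in the caption of Tab.~\ref{tab:end_point_error_vis}):
\begin{verbatim}
import numpy as np

def epe_and_acc(pred, gt, abs_thresh=5.0, rel_thresh=0.05):
    """pred, gt: (N, 3) estimated / ground-truth flow, in cm.
    A point is accurate if its error is below abs_thresh cm or
    below rel_thresh of the ground-truth flow magnitude."""
    err = np.linalg.norm(pred - gt, axis=1)   # per-point EPE
    rel = rel_thresh * np.linalg.norm(gt, axis=1)
    acc = np.mean((err < abs_thresh) | (err < rel))
    return err.mean(), acc
\end{verbatim}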
\begin{figure}[!ht] \centering \includegraphics[width= 1.02\linewidth ]{fig/compare_ifnet.jpg} \caption{Shape completion results on real-world RGB-D images. The top 3 rows are images from VolumeDeform~\cite{volumedeform}, and the last row is from Li et al.~\cite{Learning2Optimize}. } \label{fig:sgnn_real} \end{figure} \begin{figure}[!h] \centering \includegraphics[width= 1.12\linewidth ]{fig/sintel.jpg} \caption{ Shape completion on a large scene. The image is from the MPI Sintel~\cite{Sintel} dataset. The maximum depth is set to 10 meters in this Sintel scene. With a fixed volume size ($256^3$), IF-Net~\cite{chibane20ifnet} loses the ability to model details for large scenes. Our fully-convolutional approach better captures all levels of detail. } \label{fig:sgnn_sintel} \end{figure} \subsection{Shape Completion Results} We show qualitative shape completion results of our approach. IF-Nets~\cite{chibane20ifnet} is a state-of-the-art method that reconstructs complete shapes from point clouds or single depth images. At the core of IF-Nets is an implicit function that maps a 3D coordinate to an occupancy score using a multi-layer perceptron. We train both methods on the humanoids dataset and evaluate the completion performance on unseen sequences. Fig.~\ref{fig:sgnn_real} shows shape completion from real-world RGB-D images. Our fully-convolutional approach shows more complete, sharper results than the implicit IF-Net. Tab.~\ref{tab:volumedeform_number} shows quantitative evaluation on VolumeDeform~\cite{volumedeform} sequences. In particular, for large scenes, our approach effectively captures both global and local structures, as shown in Fig.~\ref{fig:sgnn_sintel}. \begin{table}[!h] \centering \small \begin{tabular}{|l|c|c|c|} \hline Methods & IF-Nets & Ours (w/o motion) & Ours \\ \hline P2P (cm)$\downarrow$ & 2.231 & 1.983 & \textbf{1.876} \\ \hline SNC $\uparrow$ & 0.757 & 0.899 & \textbf{0.908} \\ \hline time (s) $\downarrow$ & 14.26 & \textbf{3.19} & 3.45\\ \hline memory (MB) $\downarrow$ & 19,437& \textbf{1,103} & 1,379\\ \hline \end{tabular} \caption{ Quantitative results on VolumeDeform examples. The surface ground-truth is provided by VolumeDeform. The metrics are the point-to-plane (P2P) distance and surface normal consistency (SNC). We also report the average time and memory required for inference on a Tesla V100-SXM2-32GB GPU. } \label{tab:volumedeform_number} \end{table} \paragraph{Limitations} Our approach has several limitations: 1) Estimating the uncertainty of hidden motion is necessary but not handled by our approach. A probabilistic approach (e.g., Huang et al.~\cite{huang2020di}) would be promising for modeling motion uncertainty. 2) Our method does not predict surface colors. The differentiable volumetric rendering approach of~\cite{dai2020spsg} is a potential solution to learn colored deformable objects. 3) DeformingThings4D largely contains articulated objects such as humans and animals. We are planning to expand the dataset with examples of loose clothing or plants that are deformed by external forces. \section{Conclusion} In this work, we present the first method that jointly estimates the invisible shape and deformation from partial depth frame observation. We show that shape completion and motion estimation are mutually complementary tasks, with joint learning benefiting each. Our newly proposed animation dataset allows for cross-domain generalization for both motion and shape.
We believe that our method and new dataset open up a new research avenue for generic non-rigid 4D reconstruction. \section{Acknowledgements} This work was conducted during Yang Li's internship at Tokyo Research Center, Huawei. Matthias Nie{\ss}ner is supported by a TUM-IAS Rudolf M{\"o}{\ss}bauer Fellowship and the ERC Starting Grant Scan2CAD (804724). We thank Angela Dai for the voice-over of the video and for help with setting up the DeformingThings4D dataset.
{ "timestamp": "2021-05-06T02:11:55", "yymm": "2105", "arxiv_id": "2105.01905", "language": "en", "url": "https://arxiv.org/abs/2105.01905" }
\section{Introduction}\label{sec:intro} Blockchain systems have become an integral part of modern financial society, with their use reaching beyond the storage of value and cryptocurrencies into the wider financial market \cite{crosby2016blockchain}. One of the core tenets of these systems is that decisions about what data is immutably written to the blockchain's ledger, and therefore what is made a permanent entry on the chain's state going forwards, are made by consensus between nodes connected to and storing the ledger's information. Although consensus can be achieved utilizing several different methods \cite{8400278}, Proof of Work (PoW) powered blockchains currently account for more than 90\% of the market share \cite{anand2016colored} and include some of the largest cryptocurrencies, such as Bitcoin and Ethereum. These two blockchains alone account for a market cap of over US\$430 billion (approximate as of December 2020) \cite{CoinMarketCap}. This demonstrates that considerable financial assets are stored and maintained by blockchains, their transactions, and therefore the underlying consensus algorithms. In this paper we focus on the PoW mechanisms of blockchains. We show that quantum computers give a quadratic advantage in PoW efficiency, not just for all existing protocols but for any possible PoW protocol that relies on computational work being done. Unlike many other cryptographic standards, blockchain systems intrinsically tie the protected asset (the ledger) to the encryption systems used. It has been previously shown that this makes blockchains particularly vulnerable to quantum attacks \cite{kearney, aggarwal2017quantum}. The main concern is that replacing the cryptographic protocols that build a blockchain with `post-quantum' ones is far more difficult than for more traditional cryptographic uses \cite{kearney, aggarwal2017quantum}. Several predicted timelines \cite{van2013blueprint,mosca} pin the year 2035 as when we can expect quantum computers to reliably be able to break current mainstream cryptographic protocols such as RSA2048 and ECDSA. These two key facts make these concerns timely and pressing. Within most blockchain technologies, PoW underpins the protocol's consensus algorithm, and the consensus algorithm determines which transactions and actions performed on the network are integrated into the chain. A quantum advantage in PoW therefore gives a quantum actor a potentially much stronger ability to control the decision-making in the blockchain. From a cybersecurity perspective, when one actor (or group of actors) can reliably force all decisions in the blockchain, it is called a `51\%' attack \cite{ye2018analysis}. In the first part of this paper, we will describe how quantum actors can much more reliably, and with far fewer resources than any classical counterpart, perform `51\%' attacks. In the second part of this paper, we will consider a much less sinister, and much more profitable, use of quantum resources. Given the quadratic increase in PoW efficiency, one may consider using a quantum computer to \emph{`mine'} Bitcoin or some other cryptocurrency (\emph{mining} is the act of performing PoW in order to help the blockchain arrive at a consensus). Performing this task generally involves economic remuneration to the \emph{`miner'}. A quantum cryptocurrency miner can potentially require fewer clock cycles and far less energy, and dissipate far less heat, in order to mine the same amount of cryptocurrency as a classical counterpart.
Whether this makes the endeavor profitable, of course, will depend on both the initial and operating costs of such a quantum device. We explore these questions in Section \ref{sec:eqn}. First, however, in the following section, we will discuss PoW as it is understood today, in more technical and formal detail. Then, in Section \ref{sec:grover}, we summarize the quantum technologies that can be deployed for PoW, and the advantages in doing so. Finally, we conclude with a summary of results, and a discussion of the future outlook for PoW. \section{Proof of Work}\label{sec:pow} Consensus algorithms within blockchain technologies are critical to the running of the protocol, and PoW is the most commonly utilized mechanism. It is used to ensure miners act honestly according to the rules of the blockchain protocol \cite{antonopoulos2014mastering}. It was adapted as a mechanism for consensus across a blockchain by Satoshi Nakamoto \cite{nakamoto2019bitcoin, back2002hashcash}. PoW is widely used partly because a large number of subsequent projects reuse Bitcoin's technologies and code base, but also because it is a highly secure mechanism for ensuring the honesty of mining nodes and because it lends itself well to distributed networks. Blockchain consensus employs the concept of the longest chain. The longest chain is typically the valid chain that a majority of the network holds as the state of the blockchain. While a miner can trivially create a malicious block and add it to the network, it will not be accepted by a majority of the nodes, as other peers on the network will reject the block and choose an alternate proposed block, therefore excluding the malicious block from the longest chain. If a malicious user controls a majority of the network's computational power, they could potentially overwhelm this consensus mechanism by adding blocks to the chain faster than the rest of the network can, meaning they consistently hold the longest chain. This means that the user could gain overall control of what is included in each block. This is known as a `51\%' attack and is the most damaging threat to a blockchain's integrity. In PoW-based systems a user proposing a new block must perform a computationally-intensive task. The successful completion of this task must be easily verified by other users on the network. Miners must expend a non-trivial amount of resources---usually computational \emph{work} and its associated costs, such as electricity and heat. This incurs a sunk cost for the miner that will be lost if the block they present is malicious or malformed. The Bitcoin PoW algorithm employs an NP-Complete problem in which the goal is to create a hash digest based on a given input string \cite{ren2019analysis}. This hash digest is required to be in a specific form, dictated by a target value (an integer in the range $[0,2^{256}]$): Bitcoin miners must compute a hash digest whose value is equal to or smaller than the target. The target value is determined by the difficulty value of the blockchain network, which is adjusted according to the total computational power on the network, as measured by its current \emph{hash-rate} (leading to the final target value being in the range $[0,(2^{256} - \mathit{difficulty})]$).
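To make the mechanism concrete, the toy Python sketch below searches for a valid nonce (simplified: a bare double SHA-256 over a header string, not Bitcoin's exact header serialization):
\begin{verbatim}
import hashlib

def mine(header: str, target: int, max_nonce: int = 2**32):
    """Find a nonce whose double-SHA-256 digest of header||nonce,
    read as a big-endian integer, is <= target."""
    for nonce in range(max_nonce):
        data = (header + str(nonce)).encode()
        d = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(d, "big") <= target:
            return nonce, d.hex()
    return None  # nonce space exhausted; alter the header instead

# low "difficulty": target 2**240 needs ~2**16 attempts on average
print(mine("toy block header", target=2**240))
\end{verbatim}
Verification requires a single hash evaluation, while the search requires many; this is precisely the asymmetry formalized below.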
Within Bitcoin, the difficulty value is changed according to the current computational power on the network once every 2016 blocks~\cite{antonopoulos2014mastering}, in order to maintain a block time of approximately 10 minutes. While this example is taken from Bitcoin, the same mechanism applies to any network which utilizes PoW. The hash is calculated using the block header, which is constant for a specific block, and a nonce, which is changed repeatedly by the miner, to create different hash digests in the hope of finding a digest that fits the requirements for the block. As noted earlier, this problem is NP-Complete. The best known classical algorithms for solving PoW scale exponentially with the size of the difficulty (which in turn is bounded by the size of the hash itself). It is important to note that while hash-based PoW uses an NP-Complete problem, this does not necessarily have to be the case. It \emph{must} be the case, however, that the miner expend a non-trivial amount of work, and that this expenditure can be verified by other users of the blockchain in a relatively trivial manner. In other words, let $TC_V$ be the time complexity for verification and $TC_S$ be the time complexity for the miner to solve the problem. Then, any PoW mechanism must guarantee that: \begin{equation}\label{eq:hard} TC_V \ll TC_S. \end{equation} Clearly, any NP-Complete problem will satisfy the equation above. More generally, however, any PoW algorithm must satisfy the following requirements: \begin{definition}[Proof of Work]\label{def:pow}A computational problem can be considered as a PoW problem if it satisfies the following two requirements: \begin{enumerate} \item The computational complexity of the problem must satisfy Eq.\ \ref{eq:hard}, \item The difficulty of the problem must be easily \emph{tuneable} with a parameter. \end{enumerate} \end{definition} Requirement 1 has been explained above. Requirement 2 is important for the continued health of the blockchain network over time. As the computational power of miners increases, this parameter needs to be re-tuned to keep PoW as a meaningful deterrent against rogue miners. In the following section we will explore the quantum computational advantage in PoW as described here. We will then explore the cybersecurity threat of quantum attacks on blockchain networks. Finally, we will analyze the possibility, and possible profit, of using this quantum advantage for the more benign purpose of more efficient cryptocurrency mining. \section{Quantum Advantage for PoW}\label{sec:grover} When discussing quantum advantage for computational tasks, two main types of algorithms are most often cited. The first is the subgroup-finding algorithms based on Shor's seminal work\cite{shor1994algorithms}. These algorithms provide an exponential advantage on problems including factoring and the discrete logarithm. Though this is only a relatively small set of problems, it covers a large area of the cryptographic landscape. The other type is the quantum search algorithms based on Grover's algorithm\cite{grover1996fast,GroverOptical}. Whilst quantum search algorithms provide a more modest quadratic advantage over classical ones, their very broad applicability makes them extremely versatile, and central to our discussion. The quantum search algorithm, as its name suggests, allows one to search \emph{any} (including unsorted and unstructured) data-set $S$, of cardinality $N = |S|$, for items that fulfill some condition, i.e., that belong to some subset $C \subseteq S$.
This condition is specified, in the quantum algorithm, as a black box or oracle $O$ that takes as input one register containing an element $x \in S$, and an ancilla qubit, which is set to $1$ if $x \in C$ and $0$ otherwise. The importance of this algorithm is that it runs in \emph{total time} $O(\sqrt{N})$, and makes $O(\sqrt{N})$ queries to $O$. This oracle can be, and in practical uses often is, replaced by a quantum circuit or subroutine program $a$ that computes whether $x$ satisfies the required condition, i.e., whether $x$ is an element of the subset $C$. In particular, one may consider a decision problem $D$ that is NP-Complete. Let $I$ be its input set, and $S \subseteq I$ the solution set. Given that the problem is in NP, there exists an efficient (polynomial-time) algorithm $a$ that can compute, on input $x$, whether $x \in S$. This in turn implies that a quantum search algorithm can solve $D$ in total time $O(\sqrt{N}) = O(\sqrt{2^n})$, where $n$ is the input size in bits. Because $D$ is NP-Hard, there is \emph{no} (known) classical algorithm that can solve $D$ in time substantially better than $O(2^n)$. It should now be clear why the quantum search algorithm is of central importance to any discussion of quantum advantage for PoW. As discussed previously, most PoW systems today require the miner to find, for a pre-determined string, a SHA-x hash that is under a certain value. This problem is NP-Complete. Hence, a quantum computer with a memory register large enough to run Grover's algorithm on the necessary hash size would be able to gain a quadratic advantage over any classical device---including purpose-built ASICs. To illustrate this, we can consider a toy example with a classical brute-force search algorithm that runs in time precisely $2^n$, and a quantum search algorithm that runs in time precisely $\sqrt{2^n}$. On input size $n=2$, the quantum algorithm is only twice as fast as the classical one. On input size $n=256$ the quantum algorithm will run $3.4 \times 10^{38}$ times faster. Compare this to ASIC chips, which typically provide a speed-factor advantage of approximately $1 \times 10^{4}$. We can also perform a more realistic analysis. Running a quantum search algorithm (assuming no error correction) on SHA-256 hashes requires roughly 512 qubits. Estimates by a major quantum computer manufacturer predict such quantum computers will be available in 2023\cite{gambetta_2020}. At today's reported quantum computer clock-speeds\cite{arute_et_al._2019} (barring any major improvements), we can thus expect $4 \times 10^{7}$ calculations performed per second, which, using Grover's algorithm, leads to the equivalent of $1.6 \times 10^{15}$ hashes computed per second (H/s). Fig. \ref{fig:Graph1} plots the Bitcoin network hash rate, using the most current value of $130 \times 10^{18}\,H/s$\cite{BitcoinHashRate}, against a quantum computing technology that starts at 40 MHz\cite{gambetta_2020}, with both increasing over time at the same rate, as dictated by Moore's Law. This gives an estimated timeframe of approximately 27 years until a \emph{single} quantum computer will be capable of completely out-mining the rest of the network, and hence be able to take complete control of it (a successful $51\%$ attack).
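This crossover estimate can be reproduced in a few lines. In the sketch below, the doubling period $\tau$ is an assumption on our part (the $\sim$27-year figure quoted above corresponds to a slightly longer period than the 18 months used here), and the quantum-equivalent hash rate is taken as the square of the clock rate, per Grover's quadratic speedup:
\begin{verbatim}
import math

network_rate = 130e18        # Bitcoin network hash rate, H/s
quantum_clock = 40e6         # quantum clock speed, ops/s
grover_equiv = quantum_clock**2   # equivalent rate: 1.6e15 H/s

# If both clock speeds double every tau years, the Grover-
# equivalent rate (clock^2) gains a net factor 2^(t/tau) on the
# network; solve 2^(t/tau) = network_rate / grover_equiv for t.
tau = 1.5  # assumed doubling period, years
years = tau * math.log2(network_rate / grover_equiv)
print(f"~{years:.0f} years to single-miner parity")  # ~24
\end{verbatim}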
\begin{figure} \includegraphics[width=.5\textwidth]{BitcoinvQuantum} \caption{\textbf{Bitcoin network hash rate \emph{vs.} single quantum computer.} The graph shows the hash rate growth over time of the entirety of the Bitcoin network, compared to that of a \emph{single} quantum computer. Future data-points are extrapolated from current hash-rates, and assume growth-rates for both quantum and classical technologies in line with current Moore's Law trends. See the main text for further details.} \label{fig:Graph1} \end{figure} This prediction, however, is perhaps overly conservative for a couple of reasons. The first is that we assume the same rate of clock-speed increase for both quantum and classical computers. In reality, classical computers are known to be at the tail-end of Moore's Law's\cite{moore1965cramming} logistic-curve rate-of-growth\cite{shalf2020future}. Meanwhile, we can expect quantum computers, which are in their infancy, to exceed this rate-of-growth\cite{ghose_2019}. Furthermore, this comparison has been made on the Bitcoin network, which has, by far, the largest hashing power of all blockchains \cite{coinmetrics_2020}. Other, comparatively smaller, blockchain networks would be vulnerable far sooner than suggested here. For example, if the network hash rates of blockchains such as Monero (1.28 Giga-Hashes per Second ($GH/s$))\cite{klemens_2020} or Ethereum Classic (6.43 Tera-Hashes per Second ($TH/s$))\cite{ethereumclassichashratechart} do not improve in the coming years, we could expect them to be vulnerable to a quantum 51\% attack as soon as there is a quantum computer with sufficient quantum memory, which is predicted to happen roughly in 2023\cite{gambetta_2020}. In short, not only do quantum computers provide an \emph{asymptotic quadratic} efficiency increase for current PoW systems, they do so for any likely possible PoW system as well. Compare this to custom-built ASIC chips, which also provide a speed increase in mining crypto-currencies, but are limited to constant-factor speed-increases. This results in a single quantum computer being able to launch devastating attacks on the cryptocurrency network in the foreseeable future. Of course, this \emph{`single quantum computer'} attack would only work against a cryptocurrency network that is, at least for the most part, composed of classical miners. If a sizable portion of a cryptocurrency's miners were to move to quantum hardware, this would protect the entire network from quantum 51\% attacks. In the next section we explore legitimate uses of quantum technology for PoW-based cryptocurrency mining. As we shall see, there may also be definite profit motives for individual cryptocurrency miners to invest in and adopt quantum technologies. \section{The Profitability of Quantum Cryptocurrency Mining}\label{sec:eqn} In previous sections we studied the cybersecurity threat posed by quantum-led 51\% attacks on blockchain networks. These attacks, while largely inevitable, are time-wise a bit far off---at least for the larger cryptocurrencies such as Bitcoin. The reason for this is that for a successful attack a quantum computer must have as much (or more) PoW computational power as the rest of the network combined. Here, we will study the viability of using a quantum computer for the purpose of legitimately mining a cryptocurrency such as Bitcoin.
In order to do \emph{this} effectively and profitably, a quantum computer doesn't have to be more powerful than the whole network; it only needs to be more efficient (in terms of resource-cost per block approved by the network) than a single classical miner. Hence, we can expect quantum supremacy, in the field of cryptocurrency mining, to be achieved much sooner than the previously discussed dates given for 51\% quantum attack viability. We will first set out to derive a general equation that can be used to calculate the potential profitability of quantum-assisted cryptocurrency mining. We will then apply this equation to various credible scenarios, and give estimates of near-future profitability. \subsection{Profitability Calculation}\label{sec:eqn_main} In this section we set out an equation to calculate whether mining on a classical or a quantum device is more profitable. The primary element to be considered when making this calculation is the income from any device mining blocks on a blockchain. This is based on the probability of mining a block during the time it takes for a new block to be generated. This exact value varies among blockchains, with a new block being generated on average every 600 seconds within Bitcoin \cite{project_2009} and approximately every 15 seconds within Ethereum \cite{io_2015}. However, this value can be generalized, since the relation between block generation and the probability of mining a specific block will be the same across all PoW-based blockchains. This block time is controlled by the difficulty of a particular blockchain in relation to the hash size in bits defined by the blockchain's architecture \cite{garay2015bitcoin}. The difficulty is changed periodically in order to maintain a consistent block time, and so, across larger timescales, the time taken to generate a new block can be averaged per blockchain. Based on these values and the given hash rate of any considered classical miner, we can say that the probability of mining a block is defined as: \begin{equation}\label{eq:1} P_{C} =\frac{H_C t}{\frac{\eta D}{t}} \end{equation} where $P_C$ is the probability of mining a block on a classical device, $H_C$ is the hash rate of the classical device, $t$ is the block time, $D$ is the difficulty of the blockchain network, and $\eta$ is the hash size. The denominator of Eq.~\ref{eq:1} is the total network hash rate \cite{BitcoinHashRate}. The expression can then be simplified to: \begin{equation} P_{C} =\frac{H_C t^{2}}{\eta D}, \end{equation} i.e., the hash rate of any one device divided by the total network hash rate. As discussed in Sections \ref{sec:pow} and \ref{sec:grover}, since blockchain technologies utilize NP-Hard problems for PoW and since $D$ determines the complexity of said problems, $D$ is the value to which the quadratic increase in efficiency applies. Due to this advantage, the probability of mining a block on a quantum device with a given equivalent hash rate can be defined as: \begin{equation} P_{Q} =\frac{H_Q t^{2}}{\eta \sqrt{D}} \end{equation} where $P_Q$ is the probability of mining a block on a quantum device and $H_Q$ is the equivalent hash rate of that device. These probabilities, when taken across any given operational timespan, can then be used to calculate the overall income across said timespan for any given blockchain, taking into account a conversion into fiat currency, defined as a function $f$.
As the exact conversion between cryptocurrencies and real-world fiat currency can vary, this has been abstracted to a single function. The exact reward gained per block mined is another element which varies with the cryptocurrency being considered and the duration of the operating period. When performing the profitability calculations, this needs to be taken into careful consideration as, for some cryptocurrencies, the block reward can change across the lifespan of the chain. For example, Bitcoin halves its block reward every 210,000 blocks, meaning that though it originally rewarded 50 bitcoins (BTC) per block mined \cite{meynkhard2019fair}, the current value is 6.25 BTC. The reward is expected to approach 0 by approximately 2140 \cite{controlled_supply}. Taking these elements into account, the total income over the given timespan can be calculated as the following for classical miners: \begin{equation} I_{C} = f\left(\frac{T}{t} \cdot P_{C} B\right), \end{equation} where $I_C$ is the income for a classical miner across the timespan $T$, and $B$ is the block reward for the considered blockchain. The following holds for quantum miners: \begin{equation} I_{Q} = f\left(\frac{T}{t} \cdot P_{Q} B\right), \end{equation} where $I_Q$ is the income for a quantum miner across $T$. Following this, we can bring in the initial cost of a particular device in order to calculate the point at which it becomes profitable to operate on the network across $T$: once the resulting profit becomes greater than 0, running the miner on the blockchain network is deemed profitable. As discussed in Section \ref{sec:pow}, miners are required to expend energy (in the form of computation) to ensure honesty between parties. This is considered here as the operational cost of any given device. From this, the profit returns for classical miners can be determined as: \begin{equation} R_{C} = I_{C} - (T \cdot O_{C}) - S_{C}, \end{equation} where $R_C$ is the profit, $O_C$ the operating cost per unit time, and $S_C$ the setup cost for the classical device. The profit calculation for quantum miners is as follows: \begin{equation} R_{Q} = I_{Q} - (T \cdot O_{Q})-S_{Q}, \end{equation} where $R_Q$ is the profit, $O_Q$ the operating cost per unit time, and $S_Q$ the setup cost for the quantum device. From these two equations, we can then calculate a profit ratio ($G$): \begin{equation} G = \frac{R_{C}}{R_{Q}}.\label{eq:golden} \end{equation} The above equation is particularly important: $G = 1$ is the break-even point at which quantum and classical technologies are equally profitable. Values of $G$ less than 1 imply that the quantum miner in question is more profitable than a classical one, even after factoring in the initial investment costs considered in the calculation. Eq.\ \ref{eq:golden} can be expanded, using the previous equations, to: \begin{equation}\label{eq:final} G = \frac{f\left(T \cdot \frac{H_C t}{\eta D} \cdot B\right) - (T \cdot O_{C})-S_{C}}{f\left(T \cdot \frac{H_Q t}{\eta \sqrt{D}} \cdot B\right) - (T \cdot O_{Q})-S_{Q}} \end{equation} The above equation has many practical uses. For one, it allows one to \emph{`plug in'} various known values, like research and development and other initial investments necessary to jump-start a quantum crypto-currency mining operation, along with running costs for both classical and quantum mining, and decide whether the investment in quantum mining can pay off.
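As an illustration, Eq.\ \ref{eq:final} can be evaluated directly; in the sketch below the linear fiat conversion is our own simplifying assumption standing in for the abstract function $f$:
\begin{verbatim}
def profit_ratio(H_C, H_Q, t, eta, D, B, T,
                 O_C, O_Q, S_C, S_Q, price):
    """Profit ratio G. Hash rates in H/s, times in seconds,
    operating costs per second, setup costs/price in fiat.
    Assumes a linear conversion f(x) = price * x."""
    f = lambda coins: price * coins
    R_C = f(T * H_C * t / (eta * D) * B) - T * O_C - S_C
    R_Q = f(T * H_Q * t / (eta * D**0.5) * B) - T * O_Q - S_Q
    return R_C / R_Q  # G < 1: quantum mining is more profitable
\end{verbatim}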
Eq.\ \ref{eq:final} can also be used, as we do below, to estimate the timescales at which quantum cryptocurrency mining can become a profitable enterprise. An important fact to emphasize is that Eq.\ \ref{eq:golden} takes into account the introduction of further quantum computing machines onto the network. This is because difficulty is defined at the protocol level of a blockchain as a mechanism to ensure that the block time stays within certain bounds. For example, the Bitcoin blockchain's difficulty operates so that, if the average block time over a period drifts above or below 10 minutes, the difficulty of the PoW problem is corrected to bring the block time back in line with the pre-defined desirable time. The introduction of quantum computers onto the network will therefore decrease the block time, as they have a quadratic advantage over their classical peers, and the Bitcoin protocol will thereby increase the difficulty of the PoW algorithm, potentially dramatically. Because the inclusion of new quantum miners factors into the difficulty, our equation automatically accounts for them. The above is important for various reasons, but of particular import are the \emph{first-mover vs.\ second-mover advantages}. Being a first mover, that is, being the first to enter a market (in this case with a quantum miner), has definite advantages and is of particular interest to entrepreneurs and investors. A common concern among potential investors is that of making a large investment, only to arrive \emph{late} to a market, potentially ruining return-on-investment prospects. As we shall discuss in the last section of this paper, quantum mining has the peculiar property that the more quantum-mining \emph{`competitors'} one has, the more profitable quantum mining may become. \subsection{Scenarios and Forecasts} Using the equation derived earlier, we can analyze some possible near-future scenarios. The general goal will be to determine the profitability of quantum-based cryptocurrency mining. The cryptocurrency which shall be used for this investigation is Bitcoin, as this is currently the blockchain with the highest comparative market value\cite{CoinMarketCap}. This shall be performed utilizing the denominator of Eq.\ \ref{eq:final}, i.e., the quantum profit $R_Q$, for which the profitability condition is: \begin{equation} f\left(T \cdot \frac{H_Q t}{\eta \sqrt{D}} \cdot B\right) - (T \cdot O_{Q})-S_{Q} \geq 0 \end{equation} For our case-analysis scenario, let us consider using a cloud quantum computing service. IBM\cite{ibmquantumexperience}, amongst others, has announced for-profit cloud-based quantum computing services. This is a natural scenario to consider since most quantum computation in the near future is likely to involve cloud-based services\cite{devitt2016performing,castelvecchi2017ibm}. This scenario has a composite advantage as well: it obviates the need for an initial investment, requiring instead only that the potential miner pay the rolling costs of renting quantum CPU time from the cloud provider. It thus allows us, within this analysis, to set $S_Q = 0$. Next, let us consider a time-frame. According to the roadmap set out by IBM, a quantum computer which can run a quantum search algorithm on Bitcoin's hashing function can be expected by roughly 2023\cite{gambetta_2020}.
To be conservative, we consider 2025 to be an estimated `year zero' in which a quantum computer can run a quantum search on hash-based PoW, and so 01/01/2025 shall be used whenever a given date is required. For our case scenario we are focusing on Bitcoin. This sets some further variables in our equation: $t = 600\,s$, $\eta = 2^{32}$\cite{antonopoulos2014mastering,nakamoto2019bitcoin} and $B = 3.125\,BTC$\cite{meynkhard2019fair,buybitcoinworldwide}. As part of the blockchain architecture, the difficulty is recalculated every 2016 blocks in order for the block period to remain relatively constant. To provide a difficulty for this scenario, we plotted the historical difficulties and extrapolated the appropriate difficulty to our given date using a polynomial curve of best fit. This provided a difficulty of $D = 4.2903 \times 10^{18}$. Though there are varying opinions on the future of Bitcoin difficulty\cite{kraft2016difficulty}, this matches the current trends. The value of Bitcoin has had a generally increasing trend year-on-year; however, due to the volatile nature of cryptocurrencies, no single prediction can be made. Therefore, the values shown in Table \ref{tab:table1} account for various BTC to USD conversion rates, including the current price (as of 17/12/2020 this is \$23,536.12)\cite{CoinMarketCap}, the average price over the last 12 months (taken as the average closing price from 01/01/2020 until 17/12/2020, \$10,385.49), a predicted conservative price (\$31,000), and a predicted high-end price (\$100,000). The final element of the equation to be assigned is the hash rate equivalent of the quantum computer. How the hashing power will increase as the development of quantum computers continues is not known. Thus, we consider two possibilities. In the first scenario, we take the clock-speed of (one of) Google's current quantum computers, $H_Q = 40\,MHz$\cite{arute_et_al._2019}, and keep that value constant throughout time. In the second, more plausible, scenario we increase the quantum computer's clock-speed according to Moore's Law. After four doubling cycles, we arrive at a clock-speed of $H_Q = 640\,MHz$. Table \ref{tab:table1} collects the calculations made for the various scenarios. \begin{table}[] \begin{tabular}{|r|r|p{60pt}|} \hline $H_Q$ (MHz) & $f$ (USD) & $O_Q$ (USD/year) \\ \hline 40 & 23,536.12 & 6,258.27 \\ \hline 40 & 10,385.49 & 2,761.51 \\ \hline 40 & 31,000.00 & 8,242.92 \\ \hline 40 & 100,000.00 & 26,590.06 \\ \hline 640 & 23,536.12 & 100,132.28 \\ \hline 640 & 10,385.49 & 44,184.12 \\ \hline 640 & 31,000.00 & 131,886.68 \\ \hline 640 & 100,000.00 & 425,440.90 \\ \hline \end{tabular} \caption{Income generated over the period of a year, in USD, for the specified quantum computer clock speeds and fiat currency conversion rates. The third column lists the break-even annual operating cost $O_Q$, obtained by setting $R_Q = 0$ with $S_Q = 0$, which equals the annual income $I_Q$.} \label{tab:table1} \end{table} From these results, the best-case scenario is found when $H_Q = 640\,MHz$ and the market conversion rate is $f = \$100,000$. In this case, as long as the operational cost of the quantum device (\emph{i.e.}, the quantum cloud CPU time charged by the provider) is below $O_Q = \$425,440.90$ a year, a quantum miner would still be able to turn a profit.
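Up to small rounding differences, the entries of Table \ref{tab:table1} follow directly from these parameter choices, as the short sketch below shows (assuming a 365-day year for $T$):
\begin{verbatim}
t, eta, D, B = 600.0, 2.0**32, 4.2903e18, 3.125
T = 365 * 24 * 3600.0            # one year, in seconds

def annual_income(H_Q, price_usd):
    """Annual income I_Q in USD, equal to the break-even
    operating cost O_Q listed in Table 1."""
    P_Q = H_Q * t**2 / (eta * D**0.5)   # block-win probability
    return (T / t) * P_Q * B * price_usd

print(annual_income(40e6, 23536.12))    # ~6.26e3  (first row)
print(annual_income(640e6, 100000.0))   # ~4.25e5  (last row)
\end{verbatim}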
\subsection{The Effects of Introducing Quantum PoW Technology}\label{sec:eqn_loop} Finally, presented in Figure \ref{fig:graph2} is a cascading \emph{virtuous cycle} that will propagate upon the introduction of quantum computers to a PoW-based blockchain network. This will happen as they become profitable when compared to classical alternatives, according to Eq.\ \ref{eq:final}. Firstly, introducing quantum computers into a PoW-based blockchain will, as discussed, increase the hash-rate of the entire network, thereby shortening the average time it takes for the network to calculate a block. According to the blockchain's protocol, this will cause an increase in the PoW difficulty parameter in order to recalibrate the block-time to the prescribed value. \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{graph2.pdf} \end{center} \caption{\textbf{Self-propagating cycle of increasing quantum advantage on PoW networks.} Adding quantum miners to the cryptocurrency network increases the network's hash-rate. An increased hash-rate will raise the difficulty parameter. An increased difficulty parameter increases the relative quantum-advantage. This, in turn, increases the profitability of quantum-mining, which in turn motivates the introduction of more quantum miners.} \label{fig:graph2} \end{figure} Increasing the difficulty parameter of PoW has been shown to solidify the quadratic advantage of quantum computers as miners. This advantage means that there will be greater incentive for investment into quantum mining technologies, as the profit margin when compared with their classical counterparts will increase. This greater incentive will once again increase the number of quantum miners on the network, thereby decreasing the block-time and increasing the PoW difficulty in turn. This creates a cycle within which quantum computing technologies will, eventually, completely replace classical miners, as the latter cease to be cost-effective. This cascading effect also has a security benefit for the network itself. As soon as (roughly) the majority of the miners are quantum, the network itself becomes impervious to 51\% attacks based on quantum advantage \emph{alone}. It would still be technically possible to mount such an attack, but such an attack would only succeed by using \emph{other} methods such as miner-collusion, rather than by merely leveraging quantum advantage. Over time the increased difficulty parameter of PoW will lead to classical miners being made obsolete. The increase in difficulty will cause the PoW problem to become exponentially harder for both classical and quantum devices. However, the impact on classical miners is quadratically worse, over time, than the impact on quantum miners. Eventually, this will lead to all quantum miners being more cost-effective than classical miners (regardless of their initial setup costs). \section{Conclusion}\label{sec:conc} In closing, quantum computation gives a definite advantage over classical computation for the purpose of calculating PoW for blockchains. As we have seen in Sec.\ \ref{sec:grover}, this quantum advantage can be used by an adversarial party in order to attempt what are called 51\% attacks on the cryptocurrency. The possibility of these types of attacks is, however, in the reasonably distant future. On the other hand, it is very unlikely that there will ever be a quantum-secure---or \emph{post-quantum}---alternative to hashing for the purposes of PoW.
Not only is hashing-based PoW susceptible to quantum advantage, but so are other well-known PoW systems such as Zcash's use of a Birthday Paradox-based computational problem\cite{hopwood2016zcash}. Moreover, it is unlikely that any PoW system can be devised that is not susceptible to some form of quantum advantage. This is because PoW, by definition, requires a problem whose solution is hard to \emph{compute} (to ensure miners are required to do meaningful \emph{work} for their PoW), while being fairly easy to \emph{verify} (to ensure any third party can verify that the work has been performed). These are exactly the types of problems for which quantum search algorithms provide a definite advantage over classical ones. This means that once a quantum computer \emph{does} exist that can attack the network in this way, there will be very little that can be done to safeguard the blockchain network against said attacks. One possible avenue is to drop the use of PoW by the blockchain completely, and move to another consensus mechanism entirely---such as \emph{Proof of Space}\cite{dziembowski2015proofs}, or other alternatives\cite{bentov2014proof,alwen2017scrypt}. Another safeguarding mechanism would be to move the entire cryptocurrency from ASIC miners to quantum miners. In Sec.\ \ref{sec:eqn}, we discussed the possibility of doing this. We showed that mining cryptocurrency using quantum computation can quickly become a profitable proposition. In Sec.\ \ref{sec:eqn_main} we gave a precise formula that allows one to calculate the potential profit of using quantum computation for PoW. How profitable this is will clearly depend on considerations such as the running cost of a quantum computer, and the initial costs of setting one up. This latter cost can be removed if one chooses to use cloud quantum computation. We calculated the revenue that one can expect from mining Bitcoin in 2025 across the period of a year, using the cloud quantum computing predicted to be available at that time, to be between \$44,184.12 and \$425,440.90, depending on whether the most conservative or the most optimistic parameters are used. This range is based on the conversion rate of Bitcoin and the exact hashing power of the quantum device at the time. Whether this is profitable will depend on how much quantum cloud CPU time is charged for at that time. The existence of secure remote quantum computing protocols such as \emph{blind quantum computation}\cite{barz2012demonstration} means that a client can safely use a cloud quantum server for the purposes of mining Bitcoin, or other cryptocurrencies, without any interference from the server. In short, this shows a very likely profitable use of quantum computational resources in the coming decades. As shown in Fig. \ref{fig:graph2}, and described more generally in Sec.\ \ref{sec:eqn_loop}, the introduction of quantum computers into a mining ecosystem will make subsequent use of quantum computers even more profitable, as compared to classical computers---which will in turn become less profitable over time. In closing, we have introduced the mathematical machinery necessary to understand, accurately, the impact of introducing quantum PoW technology into cryptocurrency ecosystems---used both by malicious and non-malicious actors. A clear next step is to branch out the analysis we have done here to other blockchain consensus mechanisms that were outside the scope of this work.
Another clear next step is to take the work we have developed here, together with real-world economic data, and use both to create accurate predictive models. Several useful predictive models could be developed to help inform, say, investment strategies into quantum technologies, hedging strategies for cryptocurrency investors and miners, etc. We have made some simple predictions here, in Sec.\ \ref{sec:eqn_loop}. These simple models are meant, mostly, to showcase the power of the mathematical machinery we have introduced---and to hopefully motivate its use in creating more accurate and powerful predictive models. Even our very simplistic models already suggest a trend, however: we expect all PoW-based cryptocurrency mining to move to quantum platforms in the coming decades. \bibliographystyle{ieeetr}
{ "timestamp": "2021-05-06T02:07:43", "yymm": "2105", "arxiv_id": "2105.01821", "language": "en", "url": "https://arxiv.org/abs/2105.01821" }
\section{Introduction} In unconventional superconductors, not only the gap function, but also the superconducting fluctuations can be quite different from their conventional counterparts (for reviews, see Ref. \citep{varlamov2018,larkin2005book}). Indeed, several high-$T_{c}$ superconductors have strongly anisotropic properties and small coherence lengths, suggestive of a wider temperature range in which fluctuations are important. Moreover, the magnitude of these fluctuations as well as their temperature dependence can also display unusual behaviors \citep{pelc2019}. Signatures of superconducting fluctuations have been widely probed in both conventional and unconventional superconductors, in observables as diverse as specific heat \citep{suzuki1977,tsuboi1977,tallon2011}, linear and nonlinear conductivity \citep{glover1967,strongin1968,ruggiero1980,mircea2009,rullier2011,leridon2016,popcevic2018,Pelc2018}, microwave and THz response \citep{orenstein1999,orenstein2006,grbic2011,bilbro2011,Pelc2018}, susceptibility \citep{geballe1971,gollub1973,ong2010,kokanovic2013,Kasahara2016,yu2019}, and the Nernst coefficient \citep{ong2006,Forget2006,taillefer2012,taillefer2014,behnia2016}. Experimentally, one of the main difficulties is to unambiguously identify contributions that can be uniquely attributed to superconducting fluctuations, since these are usually small compared to the regular normal-state contributions \citep{geballe1971}. Theoretically, modeling contributions of superconducting fluctuations to the magnetic susceptibility and to the conductivity, both phenomenologically and microscopically, dates back several decades \citep{schmidt1968onset,shmidtvv1968,maki1968,thompson1970,schmid1969,prange1970,Abrahams1970,kurkijarvi1972,Aslamazov1974}. More recent studies on superconducting fluctuations have focused on the role of phase fluctuations \citep{Emery1995}, on disordered 2D superconductors \citep{glatz2011}, and on thermal and electric transport properties above $T_{c}$ in cuprates \citep{Ullah1991,Fisher1991,loffe1993,Huse2002,Galistki09,Michaeli2009,Levchenko2020}. Recently, a method to probe superconducting fluctuations based on the third-harmonic magnetic response was put forward in Ref. \citep{pelc2019}. Specifically, an ac magnetic field $H(t)=H_{0}\cos(\omega t)$ is applied and the magnetization is measured at a frequency $3\omega$. This observable, which we hereafter denote by $\overline{M_{3}}$, is related to, but not identical to the standard nonlinear susceptibility $\chi_{3}$. The key point is that the third-harmonic response $\overline{M_{3}}$ is vanishingly small in the normal state. As a result, its magnitude and temperature dependence near the superconducting transition temperature $T_{c}$ should be dominated by superconducting fluctuations. In Ref. \citep{pelc2019}, it was empirically found that $\overline{M_{3}}$ displays an unusual exponential temperature dependence in perovskite-based superconductors such as cuprates, Sr$_{2}$RuO$_{4}$ (SRO) and SrTiO$_{3}$, as opposed to a power-law temperature dependence in standard electron-phonon superconductors. However, the implications of these observations for the nature of superconducting fluctuations in unconventional superconductors remain unsettled. In this paper, we employ a phenomenological approach based on the Lawrence-Doniach (LD) free-energy to compute the contributions to the experimentally-measured quantity $\overline{M_{3}}$ of Ref. \citep{pelc2019} arising from Gaussian superconducting fluctuations. 
The main appeal of such an approach is that, being phenomenological, it is potentially applicable to both conventional and unconventional superconductors. In particular, we perform a quantitative comparison between the theoretical results predicted by the LD formalism and the data on several elemental superconductors (Pb, Nb, V) and on the unconventional superconductor SRO. We find that the LD result provides a good description of the data for elemental superconductors over a wide range of reduced temperature values, $\epsilon\equiv\frac{T-T_{c}}{T_{c}}$, and correctly captures the observed $5/2$ power-law behavior of $\overline{M_{3}}$ for intermediate values of $\epsilon$. The theoretically extracted values for the zero-temperature upper critical field $H_{c2}(0)$ differ by factors of $2$ to $6$ from the experimental ones; we argue that this difference could be an artifact of the LD model, which was developed for layered superconductors rather than cubic systems. Overall, the results demonstrate that measurements of the third-harmonic magnetic response are indeed a powerful probe of superconducting fluctuations. However, in the case of Sr$_{2}$RuO$_{4}$, we find a sharp disagreement between the LD theoretical results and the data for $\overline{M_{3}}$. Not only is the temperature dependence qualitatively different, but the observed magnitude of $\overline{M_{3}}$ near $T_{c}$ is strongly underestimated by the theoretical model. Motivated by the evidence for significant inhomogeneity in several perovskite-based superconductors \citep{Pelc2018,pelc2019,pelc2021}, we modify our LD model for $\overline{M_{3}}$ and include a distribution of $T_{c}$ values. We find that even a modest width of this $T_{c}$ distribution is capable of capturing the typical values of $\overline{M_{3}}$ observed experimentally. However, this modification is not sufficient to explain the exponential temperature dependence reported in Ref. \citep{pelc2019}. We thus conclude that while inhomogeneity at the mean-field level is important to elucidate the behavior of superconducting fluctuations in Sr$_{2}$RuO$_{4}$, it is likely not the sole reason for the observed exponential temperature dependence. One possibility is that such behavior arises from rare-region contributions \citep{dodaro2018,pelc2019,pelc2021} or from non-Gaussian fluctuations, which are absent in the LD model employed here. The paper is organized as follows: in Sec. \ref{sec:Phenomenology}, we employ the LD model to derive an expression for the third-harmonic magnetic response $\overline{M_{3}}$, and discuss the temperature dependence of this quantity in different regimes. Sec. \ref{sec:Comparison} presents a quantitative comparison between the theoretical and experimental results for three conventional superconductors (Pb, Nb, and V) and the unconventional superconductor Sr$_{2}$RuO$_{4}$. We note that some of the data were previously published in Ref. \citep{pelc2019}. An extension of the model presented in Sec. \ref{sec:Phenomenology} that includes the role of inhomogeneity is also introduced. Our conclusions are presented in Sec. \ref{sec:Conclusions}. \section{Phenomenological model for the third-harmonic magnetic response \label{sec:Phenomenology}} In this section, we derive an expression for the third-harmonic magnetic response $\overline{M_{3}}$, measured in the experiments of Ref. \citep{pelc2019}, based on the Lawrence-Doniach (LD) approach. 
We first review the contribution of the superconducting fluctuations to the magnetization in the presence of a static magnetic field within the LD approach. Here we only quote the LD results, which are well known from the literature -- for their derivations, see for instance Refs. \citep{larkin2005book,mishonov2000}. Using the LD results, we then proceed to include an ac field to explicitly calculate $\overline{M_{3}}$, and discuss its temperature dependence in different regimes. \subsection{Linear and nonlinear susceptibilities in the Lawrence-Doniach model} Fluctuations of a superconductor in the presence of an external magnetic field can be modeled within the phenomenological Ginzburg-Landau framework. In a regime close to $T_{c}$, the general superconducting Ginzburg-Landau free-energy functional takes the form: \begin{equation} \begin{aligned}\Delta\mathcal{F}\left[\Psi\left(\mathbf{x}\right)\right]&= \int d^{d}x\left(a\left|\Psi\right|^{2}+\frac{b}{2}\left|\Psi\right|^{4}\right.\\ & \left.+\frac{1}{2m^{*}}\left|\left(\frac{\nabla}{i}-e^{*}\mathbf{A}\right)\Psi\right|^{2}+\frac{1}{8\pi}\left|\nabla\times\mathbf{A}\right|^{2}\right) \end{aligned} \label{eq_GL} \end{equation} Here, $\Psi\left(\mathbf{x}\right)$ is the superconducting order parameter, $m^{*}=2m$ and $e^{*}=2e$ are the mass and charge of a Cooper pair, $\mathbf{A}$ is the vector potential, and $b>0$ is a Ginzburg-Landau parameter. The coefficient $a$ is parametrized as $a=\alpha\left(T-T_{c}\right)=\alpha T_{c}\epsilon$, where $\epsilon=\frac{T-T_{c}}{T_{c}}$ is the reduced temperature and $\alpha$ a positive constant. Near $T_{c}$, but above the temperature range where critical fluctuations become important, as set by the Ginzburg-Levanyuk parameter, one assumes that the order parameter is small and slowly-varying. As a result, the quartic term in Eq. (\ref{eq_GL}) can be neglected, and only Gaussian fluctuations are considered: \begin{equation} \Delta\mathcal{F}\left[\Psi\left(\mathbf{x}\right)\right]=\int{d^{d}x\left(a\left|\Psi\right|^{2}+\frac{1}{4m}\left|\left(\frac{\nabla}{i}-2e\mathbf{A}\right)\Psi\right|^{2}\right)} \end{equation} To obtain the Lawrence-Doniach (LD) free-energy expression, one assumes a layered superconductor and considers a magnetic field $H$ applied perpendicular to the layers. A detailed derivation can be found in standard textbooks and review papers, see for instance Refs. \citep{larkin2005book,mishonov2000}. For completeness, we only highlight the main steps of the derivation and quote the results from Ref. \citep{larkin2005book}. Because of the layered nature of the system, there is a difference between in-plane and out-of-plane kinetic terms. While the former assumes the same form as in Eq. (\ref{eq_GL}), the latter is described by $\delta_{z}\left|\Psi_{l+1}-\Psi_{l}\right|^{2}$, where $\delta_{z}$ is the inter-layer coupling constant and the subscript $l$ is a layer index. It is also convenient to introduce two dimensionless quantities, $h$ and $r$. By using the result $H_{c2}\left(0\right)=\frac{2m\alpha T_{c}}{e}$ for the zero-temperature critical field, we define the dimensionless applied field $h\equiv \frac{H}{H_{c2}(0)}$. Moreover, we define the dimensionless anisotropy parameter $r\equiv\frac{2\delta_{z}}{\alpha T_{c}}$, which can also be expressed in terms of the ratio between the correlation length along the $z$ direction, $\xi_{z}(0)$, and the inter-layer separation $s$, $r=\frac{4\xi_{z}^{2}\left(0\right)}{s^{2}}$. 
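As a quick numerical illustration of these dimensionless parameters, the following minimal sketch (with hypothetical, order-of-magnitude values for the coherence length, layer spacing, and fields; none of these numbers are taken from the experiments analyzed below) evaluates $r$ and $h$:
\begin{verbatim}
# Minimal sketch (hypothetical numbers): dimensionless LD parameters.
xi_z0 = 2.0e-9   # z-axis coherence length xi_z(0) in meters (assumed)
s     = 1.0e-9   # inter-layer separation in meters (assumed)
H     = 1.3      # applied field amplitude in Oe (assumed)
Hc2_0 = 750.0    # zero-temperature upper critical field in Oe (assumed)

r = (2.0 * xi_z0 / s) ** 2   # anisotropy parameter r = [2 xi_z(0)/s]^2
h = H / Hc2_0                # dimensionless field h = H / Hc2(0)
print(f"r = {r:.2f}, h = {h:.2e}")   # -> r = 16.00, h = 1.73e-03
\end{verbatim}
Strongly layered, weakly coupled materials correspond to small $r$, whereas nearly isotropic three-dimensional materials correspond to large $r$.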
Writing the order parameter as a product of in-plane Landau-level wave functions and plane waves propagating along the $z$ direction, one can evaluate the partition function $Z=\int{D\Psi D\Psi^{*}e^{-\frac{\Delta\mathcal{F}\left[\Psi\left(\mathbf{x}\right)\right]}{T}}}$ and then obtain the LD free-energy expression (up to a constant) \citep{larkin2005book,mishonov2000}: \begin{align} \frac{F\left(\epsilon\right)}{M_{\infty}H_{c2}(0)} &= -\frac{2\left(\epsilon+1\right)h}{\ln2}\left[\left(\epsilon+\frac{r}{2}\right)\frac{\ln h}{2h}-\frac{1}{2}\ln2\pi\right.\nonumber \\ & \left.+\int_{0}^{\pi/2}\frac{d\phi}{\pi/2}\,\ln\Gamma\left(\frac{1}{2}+\frac{\epsilon+r\sin^{2}\phi}{2h}\right)\right] \end{align} Here, $\Gamma(x)$ is the gamma function, the integration over the variable $\phi$ effectively sums over the layers, $v$ is the volume, and $M_{\infty}\equiv\frac{T_{c}}{\Phi_{0}s}\frac{\ln2}{2}$ is the absolute value of the saturation magnetization at $T_{c}$, with $\Phi_{0}$ denoting the flux quantum. Similarly, the LD expression for the magnetization is given by \citep{larkin2005book,mishonov2000}: \begin{equation} \begin{aligned}\frac{M\left(\epsilon\right)}{M_{\infty}}= & -\frac{2\left(\epsilon+1\right)}{\ln2}\int_{0}^{\pi/2}\frac{d\phi}{\pi/2}\left\{ \frac{\epsilon+r\sin^{2}{\phi}}{2h}\times\right.\\ & \left[\psi\left(\frac{\epsilon+r\sin^{2}{\phi}}{2h}+\frac{1}{2}\right)-1\right]\\ & \left.-\ln{\Gamma}\left(\frac{\epsilon+r\sin^{2}{\phi}}{2h}+\frac{1}{2}\right)+\frac{1}{2}\ln{2\pi}\right\} \end{aligned} \label{mag} \end{equation} where $\psi(x)=\frac{d\ln\Gamma(x)}{dx}$ is the digamma function. By taking $h\gg\epsilon,r$ in Eq. (\ref{mag}), the right-hand side gives $-1$ at $\epsilon=0$, confirming that $M_{\infty}$ is the saturation magnetization at $T_{c}$. Note that this expression is valid for $h>0$; in the case of $h<0$, symmetry implies $F\left(-h\right)=F\left(h\right)$ and $M\left(-h\right)=-M\left(h\right)$. For future reference, we list the three dimensionless parameters that will be employed throughout this work: \begin{equation} \begin{aligned} & \epsilon=\frac{T-T_{c}}{T_{c}}\\ & r=\left[\frac{2\xi_{z}\left(0\right)}{s}\right]^{2}\\ & h=\frac{H}{H_{c2}\left(0\right)} \end{aligned} \end{equation} While the anisotropy parameter $r$ is fixed, its impact on the magnetization depends on the temperature range probed. In a regime sufficiently far from $T_{c}$, $r\ll\epsilon$, the system essentially behaves as decoupled layers ($r\rightarrow0$) and Eq. (\ref{mag}) becomes \citep{larkin2005book,mishonov2000} \begin{equation} \begin{aligned}\frac{M\left(\epsilon\gg r\right)}{M_{\infty}}= & -\frac{2\left(\epsilon+1\right)}{\ln2}\left\{ \frac{\epsilon}{2h}\left[\psi\left(\frac{1}{2}+\frac{\epsilon}{2h}\right)-1\right]\right.\\ & \left.-\ln\frac{{\Gamma}\left(\frac{1}{2}+\frac{\epsilon}{2h}\right)}{\sqrt{2\pi}}\right\} . \end{aligned} \label{2dmag} \end{equation} On the other hand, as $T_{c}$ is approached, the system will eventually cross over to the regime $r\gg\epsilon$. 
Then, the three-dimensional nature of the system cannot be neglected, and the magnetization becomes \citep{larkin2005book,mishonov2000,kurkijarvi1972}: \begin{equation} \begin{aligned}\frac{M\left(\epsilon\ll r\right)}{M_{\infty}}= & -\frac{6\left(\epsilon+1\right)}{\ln2}\left(\frac{2}{r}\right)^{1/2}\sqrt{h}\left[\zeta\left(-\frac{1}{2},\frac{1}{2}+\frac{\epsilon}{2h}\right)\right.\\ & \left.-\frac{1}{3}\zeta\left(\frac{1}{2},\frac{1}{2}+\frac{\epsilon}{2h}\right)\frac{\epsilon}{2h}\right] \end{aligned} \label{3dmag} \end{equation} where $\zeta(\nu,x)$ is the Hurwitz zeta function. \begin{figure} \centering\includegraphics[width=0.5\textwidth]{1_new} \caption{Magnetization (red curve, in units of $M_{\infty}$) induced by superconducting fluctuations, in the presence of a dc field $h$, as a function of the reduced temperature $\epsilon$ according to Eq. (\ref{mag}). We also include for comparison the asymptotic expressions for $M(\epsilon\gg r)$ (green dashed curve) and $M(\epsilon\ll r)$ (blue dotted curve), Eqs. (\ref{2dmag}) and (\ref{3dmag}), respectively. A crossover clearly takes place when $\epsilon\sim r$. The dimensionless parameters chosen here were $h=0.01,\,r=0.5$. The insets zoom in on different temperature ranges. } \label{fig:flucmag} \end{figure} Therefore, as $T_{c}$ is approached from above, we expect a crossover of the temperature-dependent magnetization from 2D-like behavior to 3D-like behavior, with the crossover temperature corresponding to $\epsilon\sim r$. This general behavior is illustrated in Fig. \ref{fig:flucmag}, where $M$ given by Eq. (\ref{mag}) is plotted as a function of the reduced temperature $\epsilon$ together with the asymptotic expressions in Eqs. (\ref{2dmag})-(\ref{3dmag}) for a fixed field value. As expected, the contribution of the superconducting fluctuations to the magnetization is negative. It will be useful later to contrast the temperature dependence of the third-harmonic response $\overline{M_{3}}$ with that of the nonlinear magnetic susceptibility. To derive the latter, we consider the limit of small fields, \textit{i.e.} when the dimensionless magnetic field is the smallest parameter of the problem, $h\ll\epsilon,r$. Going back to the main expression for the magnetization in Eq. (\ref{mag}), it is convenient to define $y=\frac{\epsilon+r\sin^{2}\phi}{2h}$. Since $h\ll\epsilon,r$, it follows that $y\gg1$ and the integrand can be expanded as: \begin{align} y\left[\psi\left(y+\frac{1}{2}\right)-1\right]-\ln\Gamma\left(y+\frac{1}{2}\right)+\frac{1}{2}\ln(2\pi) & =\nonumber \\ \frac{1}{12y}-\frac{7}{720y^{3}}+\frac{31}{6720y^{5}}+\mathcal{O}\left(y^{-7}\right) \end{align} The integrals over $\phi$ can be analytically evaluated. Expanding the magnetization in odd powers of $h$, \begin{equation} \frac{M}{M_{\infty}}=\chi_{1}h+\chi_{3}h^{3}+\chi_{5}h^{5}+\mathcal{O}(h^{7})\label{M_suscept} \end{equation} we find the following expressions for the linear and nonlinear susceptibilities (see also Refs. 
\citep{tsuzuki1969,mishonov2000}): \begin{align} \chi_{1} & =-\frac{\left(1+\epsilon\right)}{3\ln2}\frac{1}{\epsilon^{1/2}\sqrt{\epsilon+r}}\\ \chi_{3} & =\frac{7\left(1+\epsilon\right)}{360\ln2}\frac{\left(3r^{2}+8r\epsilon+8\epsilon^{2}\right)}{\epsilon^{5/2}\left(\epsilon+r\right)^{5/2}}\label{chi3}\\ \chi_{5} & =-\frac{31\left(1+\epsilon\right)}{13440\ln2}\times\nonumber \\ & \frac{\left(35r^{4}+160r^{3}\epsilon+288r^{2}\epsilon^{2}+256r\epsilon^{3}+128\epsilon^{4}\right)}{\epsilon^{9/2}\left(\epsilon+r\right)^{9/2}} \end{align} Close enough to $T_{c}$, when $\epsilon\ll r$, we find the following power-law behaviors \begin{align} \chi_{1} & \sim-\frac{\epsilon^{-1/2}}{\sqrt{r}}\\ \chi_{3} & \sim\frac{\epsilon^{-5/2}}{\sqrt{r}}\label{chi3_power_law}\\ \chi_{5} & \sim-\frac{\epsilon^{-9/2}}{\sqrt{r}} \end{align} \subsection{The third-harmonic magnetic response $\overline{M_3}$: experimental setup and theory} One of the most common experimental probes of superconducting fluctuations is to apply a dc magnetic field and measure the magnetic response, see Eq. (\ref{M_suscept}). The key issue with measuring the linear susceptibility $\chi_{1}$ is that the diamagnetic contribution due to the superconducting fluctuations is typically much smaller than the paramagnetic contributions from other normal-state degrees of freedom. For the nonlinear susceptibility $\chi_{3}$, however, one generally expects that the intrinsic normal-state contribution is negligible in most cases, which could in principle allow one to assess the contribution from the superconducting fluctuations in a more unambiguous fashion. Note that, while in principle the susceptibilities $\chi_{1}$ and $\chi_{3}$ are tensor quantities, our experimental setup is designed in such a way that both the excitation and detection coils are along the same axis. We therefore only measure in-plane diagonal components, which are equivalent for a tetragonal or cubic system. Hereafter we refer only to a scalar $\chi_{3}$. Instead of applying a dc magnetic field, the experimental technique presented in Ref. \citep{pelc2019} and utilized here employs an ac field (of the form $H_{0}\cos{\omega}t$) and a system of coils to measure the oscillating sample magnetization. In order to determine the third-order response, a lock-in amplifier is used at the third harmonic of the fundamental frequency $\omega$, which is typically in the kHz range. If the fifth-order susceptibility is significantly smaller than the third-order one, the third-harmonic response is a good measure of $\chi_{3}$. This condition was experimentally verified by measuring at the fifth harmonic, where the signal was found to be vanishingly small except extremely close to $T_{c}$, where it was still an order of magnitude smaller than the third harmonic. We can thus safely ignore the higher-order contributions. Most of the data presented here were published in Ref. \citep{pelc2019}, and were obtained in two separate experimental setups. Low-temperature measurements on strontium ruthenate were performed in a $^{3}$He evaporation refrigerator with a custom-made set of coils. Samples of conventional superconductors were measured in a modified Quantum Design MPMS, where we used the built-in AC susceptibility coil to generate the excitation magnetic field, and a custom-made probe with small detection coils to maximize the filling factor. 
We estimate that the magnetization sensitivity of both setups is better than 1 nanoemu, an improvement of 1-2 orders of magnitude over standard SQUID-based instruments. This is made possible by lock-in detection, matching the impedance of the detection coils and lock-in amplifier inputs, and large filling factors of the detection coils \citep{Drobac2013}. \begin{figure} \centering\includegraphics[width=0.5\textwidth]{2_v2eq} \caption{Absolute value of the third-harmonic response, $|\overline{M_{3}}|$ in Eq. (\ref{numeric}), in units of $M_{\infty}$, as a function of the reduced temperature $\epsilon\equiv\frac{T-T_{c}}{T_{c}}$, plotted on a log-log scale (red curve). The dashed black line corresponds to the analytical approximation in Eq. (\ref{M3_analytics}), which gives a $\epsilon^{-5/2}$ power-law behavior. The dimensionless parameters used here are $h_{0}=10^{-3}$ and $r=1$.} \label{fig:I3log} \end{figure} Although we expect the third-harmonic response to exhibit behavior similar to the third-order nonlinear susceptibility $\chi_{3}$, there are important differences, since the amplitude of the oscillating field, albeit small ($H_{0}\sim1$ Oe), is nonzero. Thus, to provide a more direct comparison between the LD model and experiments, we directly compute the third-harmonic response, which we denote by $\overline{M_{3}}$. In our experimental setup, the signal corresponds to the Fourier transform of $\frac{\partial M}{\partial t}$ at $3\omega$, \begin{equation} \overline{M_{3}}\left(\epsilon\right)=\int_{-\frac{\pi}{\omega}}^{\frac{\pi}{\omega}}\frac{\partial M\left(\epsilon,h(t)\right)}{\partial t}\,e^{3i\omega t}dt,\label{response} \end{equation} where $M\left(\epsilon,h(t)\right)$ is obtained from Eq.(\ref{mag}) by substituting $h=h_{0}\cos{\omega}t$. Integration by parts gives $\overline{M_{3}}\left(\epsilon\right)=-3i\int_{-\pi}^{\pi}M(\epsilon,h_{0}\cos\theta)e^{3i\theta}d\theta$ with $\theta=\omega t$. Using the fact that $M\left(\epsilon,-h\right)=-M\left(\epsilon,h\right)$, we have $\int_{-\pi}^{-\pi/2}M(\epsilon,h_{0}\cos\theta)e^{i3\theta}d\theta=\int_{0}^{\pi/2}M(\epsilon,h_{0}\cos\theta)e^{i3\theta}d\theta$ and $\int_{\pi/2}^{\pi}M(\epsilon,h_{0}\cos\theta)e^{i3\theta}d\theta=\int_{-\pi/2}^{0}M(\epsilon,h_{0}\cos\theta)e^{i3\theta}d\theta$, which yields \begin{equation} \overline{M_{3}}\left(\epsilon\right)=-6i\int_{-\pi/2}^{\pi/2}d\theta\,M\left(\epsilon,h_{0}\cos\theta\right)\cos3\theta,\label{numeric} \end{equation} where the field $h_{0}\cos\theta$ remains positive between the integration limits. Experimentally, both the imaginary and real parts can be measured. However, due to issues with lock-in phase determination in third-harmonic measurements \citep{Drobac2013}, we simply use the absolute value of $\overline{M_{3}}$ for comparison between the experimental and theoretical results. \begin{figure} \begin{centering} \includegraphics[width=0.5\textwidth]{3and4_2} \par\end{centering} \caption{Absolute value of the third-harmonic response $|\overline{M_{3}}|$ (in units of $M_{\infty}$) as a function of the reduced temperature $\epsilon$ for varying $h_{0}$ values (fixed $r=1$, panel (a)) and varying $r$ values (fixed $h_{0}=10^{-3}$, panel (b)). The dashed lines mark the power-law behavior $\epsilon^{-5/2}$ displayed by the curves with larger values of $r$.} \label{fig:I3varyh} \end{figure} In the temperature range where $h_{0}\ll\epsilon$, we can substitute the series expansion (\ref{M_suscept}) in Eq. 
(\ref{numeric}) and find: \begin{equation} \frac{\left|\overline{M_{3}}\right|}{M_{\infty}}\approx\frac{3\pi}{4}\chi_{3}h_{0}^{3}+\frac{15\pi}{16}\chi_{5}h_{0}^{5}. \end{equation} Now, in the relevant regime $r\gg\epsilon$, according to Eqs. (\ref{chi3_power_law}), we have $\chi_{3}\sim\epsilon^{-5/2}$ and $\chi_{5}\sim\epsilon^{-9/2}$. Therefore, as long as we remain in the regime $h_{0}\ll\epsilon$, the contribution from the fifth-order nonlinear susceptibility $\chi_{5}$ can be neglected. Using Eq. (\ref{chi3}) we obtain: \begin{equation} \frac{\left|\overline{M_{3}}\right|}{M_{\infty}}\approx\left(\frac{7\pi}{160\ln2}\right)\frac{h_{0}^{3}\left(1+\epsilon\right)\epsilon^{-5/2}}{\sqrt{r}}\label{M3_analytics} \end{equation} Therefore, we expect that, in the temperature range $h_{0}\ll\epsilon\ll r$, the third-harmonic response $\left|\overline{M_{3}}\right|$ displays the power-law behavior $\left(T-T_{c}\right)^{-5/2}$ characteristic of the third-order nonlinear susceptibility $\chi_{3}$. To verify this behavior explicitly, in Fig. \ref{fig:I3log} we present the numerically calculated $|\overline{M_{3}}|$ for $h_{0}=10^{-3}$ and $r=1$, and compare it with the analytical approximation in Eq. (\ref{M3_analytics}). It is clear that the expected power-law behavior appears over a rather wide temperature range. As one approaches $T_{c}$ from above and reaches the temperature scale $\epsilon\sim h_{0}$, deviations from the power-law are observed, and $\left|\overline{M_{3}}\right|$ saturates to a constant value. This is a direct consequence of the fact that we are not computing the dc susceptibility, but the ac third-harmonic response at a fixed field amplitude $h_{0}$. Figs. \ref{fig:I3varyh}(a)-(b) depict how changing $h_{0}$ and $r$ affects the temperature window in which power-law behavior is observed. As expected, increasing $h_{0}$ significantly suppresses the window of power-law behavior, as the temperature scale $\epsilon\sim h_{0}$ is moved up. On the other hand, the anisotropy parameter $r$ has a rather minor impact on the temperature range in which $\epsilon^{-5/2}$ behavior is observed. \section{Comparison with experimental data \label{sec:Comparison}} \subsection{Conventional Superconductors (Pb, Nb, and V)} In order to validate the LD approach for the third-harmonic response, we first compare the theoretical results for $\overline{M_{3}}$ from Eq. (\ref{numeric}) with the experimental third-harmonic data for three conventional elemental superconductors: lead (Pb), niobium (Nb), and vanadium (V). Besides an overall pre-factor, there are three fitting parameters in our formalism: the upper critical field $H_{c2}$, the critical temperature $T_{c}$, and the anisotropy ratio $r$. The field $H_{0}$ is 1.3~Oe as generated by the excitation coil, but the true value could be modified by demagnetization factors (especially very close to and below $T_{c}$) by up to a factor of $\sim2$. Hereafter, for concreteness, we will use $H_{0}=1.3$ Oe for all cases. Since these materials are rather three-dimensional, we expect the $z$-axis correlation length $\xi_{z}$ to be larger than the layer distance $s$ in the LD model, \textit{i.e.} $r>4$. Thus, because the reduced temperatures probed are very small ($\epsilon_{\mathrm{max}}\sim10^{-2}$), the precise value of $r$ does not significantly affect the temperature dependence of $|\overline{M_{3}}|$ in the experimentally relevant temperature regime (as shown above in Fig. \ref{fig:I3varyh}(b)). 
Therefore, to minimize the number of fitting parameters, we set $r=10$ in all cases. This leaves only two free parameters, $H_{c2}$ and $T_{c}$. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{5_v2n} \caption{Comparison between the measured third-harmonic response $\left|\overline{M_{3}}\right|$ (circle and square black symbols, in arbitrary units) for Pb and the theoretical results obtained from Eq. (\ref{numeric}) (dashed and solid red lines). Panels (a) and (c) ((b) and (d)) show the data on a linear (logarithmic) scale. Fit parameters are shown in Table \ref{Table_fitting}. In panels (a)-(b), the fit parameter is the critical field $\tilde{H}_{c2}$ in Table \ref{Table_fitting}, whereas the critical temperature is set to its experimental value $T_{c}^{(\mathrm{exp})}$. In panels (c)-(d), the fit parameters are $H_{c2}$ and $T_{c}$. The anisotropy parameter is set to $r=10$.} \label{fig:I3Pb} \end{figure} \begin{figure} \centering\includegraphics[width=0.5\textwidth]{6_v2n} \caption{Comparison between the measured third-harmonic response $\left|\overline{M_{3}}\right|$ (circle and square blue symbols, in arbitrary units) for Nb and the theoretical results obtained from Eq. (\ref{numeric}) (dashed and solid red lines). Panels (a) and (c) ((b) and (d)) show the data on a linear (logarithmic) scale. Fit parameters are shown in Table \ref{Table_fitting}. In panels (a)-(b), the fit parameter is the critical field $\tilde{H}_{c2}$ in Table \ref{Table_fitting}, whereas the critical temperature is set to its experimental value $T_{c}^{(\mathrm{exp})}$. In panels (c)-(d), the fit parameters are $H_{c2}$ and $T_{c}$. The anisotropy parameter is set to $r=10$. } \label{fig:I3Nb} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{7_v3} \caption{Comparison between the measured third-harmonic response $\left|\overline{M_{3}}\right|$ (circle and square green symbols, in arbitrary units) for V and the theoretical results obtained from Eq. (\ref{numeric}) (dashed and solid red lines). Data from two different samples are presented (light green and dark green symbols). Panels (a) and (c) ((b) and (d)) show the data on a linear (logarithmic) scale. Fit parameters are shown in Table \ref{Table_fitting}. In panels (a)-(b), the fit parameter is the critical field $\tilde{H}_{c2}$ in Table \ref{Table_fitting}, whereas the critical temperature is set to its experimental value $T_{c}^{(\mathrm{exp})}$. In panels (c)-(d), the fit parameters are $H_{c2}$ and $T_{c}$. The anisotropy parameter is set to $r=10$.} \label{fig:I3V} \end{figure} The comparison between theoretical and experimental results is shown in Figs. \ref{fig:I3Pb}, \ref{fig:I3Nb}, and \ref{fig:I3V} for Pb, Nb, and V, respectively. In all figures, the circle and square symbols correspond to data, whereas dashed and solid lines correspond to theoretical results. Experimental measurements of $\left|\overline{M_{3}}\right|$ become challenging below $\epsilon\sim10^{-4}$ due to thermometry resolution issues, and the signal typically decays below the noise level around $\epsilon\sim10^{-2}$, indicating a small temperature regime of significant superconducting fluctuations. In the case of V, a kink is observed in one sample (light green symbols), which is possibly a spurious signal due to solder superconductivity or the result of a slight macroscopic sample inhomogeneity. For this reason, we also include results from a second sample (dark green symbols). 
Because the overall magnitude of the experimental $\left|\overline{M_{3}}\right|$ is arbitrary and changes with modifications of the set-up, we rescaled the $\left|\overline{M_{3}}\right|$ values of the second sample (dark green symbols) by an overall constant to better match the behavior of $\left|\overline{M_{3}}\right|$ of the first sample (light green symbols) at larger $\epsilon$ values. In order to obtain the best fit, we considered two slightly different procedures. In panels (a)-(b) of each figure (dashed lines), we fixed $T_{c}$ to be the temperature at which the third-harmonic response displays a maximum; we refer to this value as $T_{c}^{(\mathrm{exp})}$. It is important to note, however, that this value is not necessarily the exact temperature of zero resistance onset. For this reason, and given the intrinsic experimental uncertainties in the precise absolute determination of $T_{c}$, in panels (c)-(d) (solid lines) we allowed $T_{c}$ to vary from $T_{c}^{(\mathrm{exp})}$, but by no more than $0.5\%$. The fit parameters are shown in Table \ref{Table_fitting}, together with the experimental values for $T_{c}^{(\mathrm{exp})}$ and $H_{c2}^{(\mathrm{exp})}$, the latter taken from Ref. \citep{lide2004crc}. Note that, to distinguish between the two fitting procedures, we denote by $\tilde{H}_{c2}$ the value used in panels (a)-(b) of the figures. Moreover, since Pb is a type-I superconductor, $H_{c2}^{(\mathrm{exp})}$ was estimated through $\sqrt{2}\kappa H_{c}$ \citep{tinkham2004book}, with $\kappa=0.24$ \citep{smith1969PbkappaTcHc,Pbkappa1969} and $H_{c}=803$ Oe \citep{smith1969PbkappaTcHc,martienssen2006springer}. Panels (a)-(b) of Figs. \ref{fig:I3Pb}, \ref{fig:I3Nb}, and \ref{fig:I3V} show that the theoretical curves obtained by fixing $T_{c}=T_{c}^{(\mathrm{exp})}$ provide a reasonable description of the third-harmonic data in the region not too close to $T_{c}$ for Pb and V (Figs. \ref{fig:I3Pb} and \ref{fig:I3V}), and in the region close to $T_{c}$ for Nb (Fig. \ref{fig:I3Nb}). In particular, the latter does not seem to display the characteristic $\epsilon^{-5/2}$ power-law behavior observed in the former two in the regime of intermediate $\epsilon$ values. However, because of the definition of the reduced temperature, $\epsilon=\frac{T-T_{c}}{T_{c}}$, even small changes in $T_{c}$ within typical experimental uncertainty could account for these deviations between theory and experiment. As noted above, to address this issue we performed a second fit procedure allowing $T_{c}$ to be slightly different than $T_{c}^{(\mathrm{exp})}$. As shown in panels (c)-(d) of the same figures, we find a better agreement between the theoretical and experimental results over a wider temperature range, including in the case of Nb in the intermediate $\epsilon$ range. Comparing the theoretical $T_{c}$ values in Table \ref{Table_fitting} with the $T_{c}^{(\mathrm{exp})}$ values, we note that in all cases $T_{c}$ is slightly larger than $T_{c}^{(\mathrm{exp})}$. This is the reason why in panels (c)-(d) the theoretical curves stop at $\epsilon=0$ whereas the data extend to the region $\epsilon<0$. On the other hand, there is a more significant difference between $H_{c2}$ and the experimental value $H_{c2}^{(\mathrm{exp})}$ taken from the literature, with the former being a factor of approximately $2$ to $6$ smaller or larger than the latter. We note that the intrinsic uncertainty in the precise value of $H_{0}$ in our experiment may explain at least part of this discrepancy. 
Moreover, the value of $H_{c2}^{(\mathrm{exp})}$ strongly depends on material preparation details, especially for polycrystalline samples where significant internal strains can be present \citep{van1967NbVTcHc2}. In principle, the critical fields are lower in more pristine materials, and it is therefore meaningful to use the lowest known experimental values (from Ref. \citep{lide2004crc}) for our comparison. Finally, while the LD model employed here to calculate $\left|\overline{M_{3}}\right|$ assumes a layered system, the bulk elemental superconductors are cubic. On top of that, the LD approach of including only Gaussian fluctuations is expected to break down below a very small $\epsilon_{\mathrm{crit}}$, whose precise value is likely different for distinct materials. Despite these drawbacks, this comparison shows that the LD model for the third-harmonic response $\left|\overline{M_{3}}\right|$ due to contributions from superconducting fluctuations provides a satisfactory description of the experimental results. \begin{table} \begin{centering} \begin{tabular}{|c|c|c|c|c|c|} \hline & $T_{c}^{(\mathrm{exp})}$ (K) & $H_{c2}^{(\mathrm{exp})}$ (G) & $\tilde{H}_{c2}$ (G) & $H_{c2}$ (G) & $T_{c}^{(\mathrm{exp})}/T_{c}$\tabularnewline \hline \hline Pb & 7.18 & 273 & 2170 & 1083 & 0.9996\tabularnewline \hline Nb & 9.31 & 1710 & 166 & 286 & 0.9955\tabularnewline \hline V & 5.29 & 1200 & 1300 & 520 & 0.9980\tabularnewline \hline \end{tabular} \par\end{centering} \caption{Experimental critical temperature and critical field values, $T_{c}^{(\mathrm{exp})}$ and $H_{c2}^{(\mathrm{exp})}$, compared to the theoretical fitting parameters $T_{c}$, $\tilde{H}_{c2}$, and $H_{c2}$. $\tilde{H}_{c2}$ corresponds to the fits in panels (a)-(b) of Figs. \ref{fig:I3Pb}, \ref{fig:I3Nb}, and \ref{fig:I3V}, where $T_{c}$ is forced to be equal to the temperature where the experimental third-harmonic response displays a maximum (denoted here by $T_{c}^{(\mathrm{exp})}$). On the other hand, $H_{c2}$ corresponds to the fits in panels (c)-(d) of the same figures, where $T_{c}$ is allowed to be different from the experimental value. The $H_{c2}^{(\mathrm{exp})}$ values for Nb and V are the smallest ones reported in Ref. \citep{lide2004crc}, whereas $H_{c2}^{(\mathrm{exp})}$ for Pb was estimated as explained in the text. \label{Table_fitting}} \end{table} \subsection{Strontium Ruthenate (Sr$_{2}$RuO$_{4}$)} Having validated our theoretical approach to compute the third-harmonic response $\left|\overline{M_{3}}\right|$ by comparison with data for elemental superconductors, we now perform the same comparison with the lamellar perovskite-derived superconductor Sr$_{2}$RuO$_{4}$ (SRO). The main advantage of our LD calculation of $\left|\overline{M_{3}}\right|$ is that it is entirely phenomenological and independent of microscopic details. In fact, the main assumption is that the superconducting fluctuations can be described by a Gaussian approximation. Consequently, the calculation could in principle be applicable to unconventional superconductors as well. SRO is believed to host an unconventional superconducting state that breaks time-reversal symmetry \citep{Luke1998,Kapitulnik2006,Grinenko2021}. Whereas for a long time SRO was considered a promising candidate for $p$-wave triplet superconductivity \citep{Mackenzie2003,Kallin2012}, recent experiments have revealed problems with this interpretation \citep{mackenzie2017,Pustogow2019,Chronister2020}. 
This has motivated alternative proposals involving \textit{e.g.} $d$-wave and $g$-wave superconductivity \citep{Simon2019,Ramires2019,Romer2019,Agterberg2020,Kivelson2020,Willa2020}. As mentioned above, the data presented here are the same as in Ref. \citep{pelc2019}. As shown there, the third-harmonic response of other perovskite-based superconductors like strontium titanate and the cuprates displays a similar unusual temperature dependence. The data for SRO are shown by the orange symbols in Fig. \ref{fig:I3SRO} on a linear scale (panel (a)), a logarithmic scale (panel (b)), and a semi-logarithmic scale (panel (c)). The theoretical results for $\left|\overline{M_{3}}\right|$ are plotted in the same panels using the experimental critical temperature value, $T_{c}=1.51\:\mathrm{K}=T_{c}^{(\mathrm{exp})}$, and two different critical field values: $H_{c2}=750\:\mathrm{G}=H_{c2}^{(\mathrm{exp})}$ (dashed lines) and $H_{c2}=7.6\:\mathrm{G}\approx0.01H_{c2}^{(\mathrm{exp})}$ (dotted lines). Here, $T_{c}^{(\mathrm{exp})}$ corresponds to the temperature at which the third-harmonic response is maximum, and $H_{c2}^{(\mathrm{exp})}$ is the experimental value reported in the literature \citep{Mackenzie2003,kittaka2009}. The key observation is that the theoretical $\left|\overline{M_{3}}\right|$ curve with $H_{c2}=H_{c2}^{(\mathrm{exp})}$ grossly underestimates the data. It is necessary to reduce $H_{c2}$ by two orders of magnitude to obtain values that are comparable between theory and experiment. In contrast, for the elemental superconductors, the difference in the theoretical and experimental $H_{c2}$ values was at most a factor of $6$. More importantly, even by changing $H_{c2}$ by such a large amount, the temperature dependence of the data is not captured by the theoretical $\left|\overline{M_{3}}\right|$ curve, in contrast again to the case of conventional superconductors. Indeed, while the theoretical $\left|\overline{M_{3}}\right|$ curve shows a power law for intermediate reduced temperatures, the data display an accurately exponential temperature dependence, as discussed in Ref. \citep{pelc2019} and shown in panel (c) of Fig. \ref{fig:I3SRO}. We note that the experimental $H_{c2}$ value depends very strongly on the orientation of the field with respect to the crystalline $c$-axis, such that a small misalignment can lead to sizable variation \citep{kittaka2009}. However, the discrepancy between the theoretical and experimental results cannot be explained by sample misalignment, since the critical field \textit{increases} with increasing angle between the field direction and the crystalline $c$-axis, whereas our theoretical results require \textit{smaller} $H_{c2}$ values. \begin{figure} \begin{centering} \includegraphics[width=0.5\textwidth]{8_v3_2} \par\end{centering} \caption{Comparison between the experimentally measured third-harmonic response $\left|\overline{M_{3}}\right|$ (orange symbols, in arbitrary units) for SRO and the theoretical results obtained from Eq. (\ref{numeric}) (dashed and dotted red lines). Panels (a), (b), and (c) show the data on linear, logarithmic, and semi-logarithmic scale, respectively. For the theoretical curves, the critical temperature is set to its experimental value $T_{c}^{(\mathrm{exp})}$ whereas the critical field is set to $H_{c2}^{(\mathrm{exp})}$ (dashed lines) and to $0.01H_{c2}^{(\mathrm{exp})}$ (dotted lines). 
The anisotropy parameter is set to $r=10$.} \label{fig:I3SRO} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{9new} \caption{Comparison between the normalized third-harmonic response data and the theoretical $\left|\overline{M_{3}}\right|$ results for Pb, Nb, V, and SRO on a logarithmic scale. The solid lines correspond to the best fits in Figs. \ref{fig:I3Pb}, \ref{fig:I3Nb}, and \ref{fig:I3V}, which refer to the conventional superconductors, whereas the dashed and dotted lines correspond to the fits for SRO in Fig. \ref{fig:I3SRO}.} \label{fig:I3all} \end{figure} Fig. \ref{fig:I3all} summarizes the third-harmonic response $\left|\overline{M_{3}}\right|$ of the three conventional superconductors studied here (Pb, Nb, V), as well as of the unconventional superconductor SRO. The differences between SRO and the conventional superconductors lie not only in the temperature dependence of $\left|\overline{M_{3}}\right|$, but also in the fact that $\left|\overline{M_{3}}\right|$ is larger and extends over a much wider relative temperature range in SRO. Indeed, while superconducting fluctuations are detected up to $\epsilon\sim10^{-2}$ in conventional superconductors, they extend all the way up to $\epsilon\sim1$ in SRO. To attempt to address the discrepancy between the theoretical and experimental results for SRO, we revisit the assumptions behind the LD model, from which we derived the expression for $\left|\overline{M_{3}}\right|$. As discussed above, the LD model makes no reference to the microscopic pairing mechanism. However, it does assume a homogeneous system. In contrast, perovskites are known for their intrinsic inhomogeneity, arising from \textit{e.g.} oxygen vacancies and local structural distortions that deviate strongly from the average lattice structure (see \citep{pelc2021} and references therein). Indeed, the experiments of Ref. \citep{pelc2019} indicate that universal structural inhomogeneity is present in perovskite-based superconductors such as SRO. It has also been argued that dislocations can have a strong impact on the superconducting state properties of several perovskites \citep{Ying2013,Hameed2020,Willa2020}. In the particular case of SRO, muon spin-rotation measurements find a rather inhomogeneous signature of time-reversal symmetry-breaking below $T_{c}$ \citep{Grinenko2021}. It is also known that the $T_{c}$ of SRO is strongly dependent on stress \citep{steppke2017,Grinenko2021}, implying that inhomogeneous internal stresses would lead to regions with locally modified $T_{c}$. Simple point disorder also leads to a variation of the local critical temperature \citep{mackenzie1998}. Indeed, scanning SQUID measurements have directly detected $T_{c}$ inhomogeneity on the micron scale \citep{watson2018}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{10new} \caption{(a) Normalized probability distribution function of the critical temperature $t_{c}$ for different values of the parameter $\sigma$ in Eq. (\ref{P_tc}). Here, the parameter $\mu$ is fixed by the condition $v_{F}\left(T_{c}^{(\mathrm{exp})}\right)=0.3$, with $T_{c}^{(\mathrm{exp})}=1.51$ K (indicated by the dashed gray vertical line) and the temperature-dependent superconducting volume fraction $v_{F}$ defined by Eq. (\ref{volume_fraction}). (b) Averaged third-harmonic response $\left\langle \left|\overline{M_{3}}\right|\right\rangle $ calculated from the distribution functions of panel (a), compared to the data for SRO, as a function of $\epsilon=\frac{T}{T_{c}}-1$. 
In this calculation, we used the experimental values $T_{c}^{(\mathrm{exp})}=1.51\:\mathrm{K}$ and $H_{c2}^{(\mathrm{exp})}=750\:\mathrm{G}$, and set $r=10$.} \label{fig:ln} \end{figure} The impact of inhomogeneity on superconducting properties has been studied by a variety of approaches \citep{coleman1995,andersen2006,Trivedi2014,Pelc2018,dodaro2018}. Here, we consider a phenomenological approach that introduces a probability distribution of the local $T_{c}$ (see also Ref. \citep{mayoh2015}). Such an inhomogeneous $T_{c}$ distribution may explain why the superconducting fluctuations in SRO are stronger and extend to higher reduced temperatures as compared to conventional superconductors, since regions with a locally higher $T_{c}$ are expected to result in a much larger contribution to $\left|\overline{M_{3}}\right|$ than that arising from the rest of the sample. To test this idea, we incorporate a distribution function for $T_{c}$ into our LD-based phenomenological model. We denote the ``transition temperature variable'' as $t_{c}$, and reserve the notation $T_{c}$ for the actual transition temperature of the system to avoid confusion. The form of the distribution function $P(t_{c})$ depends on several sources of inhomogeneity in the system, see for instance Ref. \citep{dodaro2018}. A microscopic derivation is thus very challenging, and beyond the scope of this work. Instead, here we opt for a simple phenomenological modeling of $P(t_{c})$. In particular, we employ a normalized log-normal distribution: \begin{equation} P\left(t_{c}\right)=\frac{1}{t_{c}\sqrt{2\pi\sigma^{2}}}\exp\left[-\frac{\left(\ln\frac{t_{c}}{\mu}\right)^{2}}{2\sigma^{2}}\right]\label{P_tc} \end{equation} where $\mu$ and $\sigma$ are positive parameters that determine the mean value and variance of the distribution. The choice of this distribution is motivated by its properties of only allowing positive values of $t_{c}$ and of having long tails toward larger values of $t_{c}$. We note that a log-normal distribution for the local gap -- and consequently of the local $T_{c}$ -- was previously derived theoretically in Ref. \citep{mayoh2015} for disordered quasi-two-dimensional superconductors in the limit of weak multifractality, and observed experimentally in weakly disordered monolayer NbSe$_{2}$ \citep{Rubio2020}. The averaged fluctuation magnetization in Eq. (\ref{mag}) acquires the following form: \begin{equation} \left\langle M\right\rangle \left(\epsilon\right)=\int_{0}^{T}\,\frac{dt_{c}}{t_{c}\sqrt{2\pi\sigma^{2}}}\exp\left[-\frac{\left(\ln\frac{t_{c}}{\mu}\right)^{2}}{2\sigma^{2}}\right]M\left(\frac{T}{t_{c}}-1\right) \end{equation} with $M\left(\epsilon\right)$ given by Eq. (\ref{mag}). We can then compute the averaged third-harmonic response $\left\langle \left|\overline{M_{3}}\right|\right\rangle $ from Eq. (\ref{numeric}). We assume that $\left\langle \left|\overline{M_{3}}\right|\right\rangle $ is dominated by superconducting fluctuation contributions, which appear only in regions that are locally non-superconducting (\textit{i.e.} for which $\epsilon=\frac{T}{t_{c}}-1$ is positive). For this reason, the limits of the $t_{c}$ integration are such that $0<t_{c}<T$. 
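All of the quantities entering this construction can be evaluated straightforwardly by numerical quadrature. The following minimal sketch (our own illustrative implementation, not the analysis code used for the figures) evaluates Eq. (\ref{mag}), obtains $\left|\overline{M_{3}}\right|$ from Eq. (\ref{numeric}), checks the power law of Eq. (\ref{M3_analytics}), and performs the $t_{c}$ average with the log-normal distribution of Eq. (\ref{P_tc}); the value of $\mu$ in the demonstration call is assumed for illustration (it is fixed by a percolation condition below):
\begin{verbatim}
# Minimal illustrative implementation (ours, not the original analysis
# code): LD magnetization, Eq. (mag); third-harmonic response,
# Eq. (numeric); and the log-normal t_c average, Eq. (P_tc).
import numpy as np
from scipy.integrate import quad, trapezoid
from scipy.special import digamma, gammaln

def f(y):
    """Integrand of Eq. (mag). For large y, switch to the asymptotic
    expansion quoted in the text, avoiding catastrophic cancellation."""
    if y > 1e4:
        return 1.0 / (12.0 * y) - 7.0 / (720.0 * y**3)
    return (y * (digamma(y + 0.5) - 1.0)
            - gammaln(y + 0.5) + 0.5 * np.log(2.0 * np.pi))

def M(eps, h, r):
    """Fluctuation magnetization M/M_inf of Eq. (mag), valid for h > 0."""
    val, _ = quad(lambda phi: f((eps + r * np.sin(phi)**2) / (2.0 * h)),
                  0.0, 0.5 * np.pi)
    return -(2.0 * (eps + 1.0) / np.log(2.0)) * val / (0.5 * np.pi)

def M3(eps, h0, r, n=101):
    """|M3|/M_inf of Eq. (numeric): 6|int M(eps, h0 cos t) cos(3t) dt|."""
    theta = np.linspace(-0.5 * np.pi, 0.5 * np.pi, n)
    vals = np.array([M(eps, h0 * np.cos(t), r) for t in theta])
    return abs(6.0 * trapezoid(vals * np.cos(3.0 * theta), theta))

def M3_avg(T, mu, sigma, h0, r, n_tc=61):
    """<|M3|>/M_inf: Eq. (numeric) averaged over the log-normal P(t_c) of
    Eq. (P_tc), restricted to locally non-superconducting t_c < T."""
    tc = np.linspace(0.2 * T, T * (1.0 - 1e-8), n_tc)  # low-t_c tail ~ 0
    P = (np.exp(-np.log(tc / mu)**2 / (2.0 * sigma**2))
         / (tc * np.sqrt(2.0 * np.pi * sigma**2)))
    vals = np.array([M3(T / t - 1.0, h0, r, n=41) for t in tc])
    return trapezoid(P * vals, tc)

# Check the eps^{-5/2} law, Eq. (M3_analytics), where h0 << eps << r:
h0, r = 1e-3, 1.0
for eps in (1e-2, 3e-2):
    approx = (7 * np.pi / (160 * np.log(2))) * h0**3 * (1 + eps) \
             / (eps**2.5 * np.sqrt(r))
    print(f"eps={eps:.0e}: num={M3(eps, h0, r):.3e}, ana={approx:.3e}")

# T_c-averaged response; mu is assumed here (the text below fixes mu via
# a percolation condition), with T_c ~ 1.51 K and sigma = 0.08:
print(M3_avg(T=1.2 * 1.51, mu=0.96 * 1.51, sigma=0.08,
             h0=1.3 / 750.0, r=10.0))
\end{verbatim}
In the window $h_{0}\ll\epsilon\ll r$, the printed numerical and analytical values should agree to within a few percent, reproducing the $\epsilon^{-5/2}$ behavior of Fig. \ref{fig:I3log}.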
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{11new} \caption{Averaged third-harmonic response $\left\langle \left|\overline{M_{3}}\right|\right\rangle $ as a function of the reduced temperature $\epsilon=\frac{T}{T_{c}}-1$ calculated using the parameters $T_{c}=1.41\:\mathrm{K}\approx0.93T_{c}^{(\mathrm{exp})}$ and $\sigma=0.08$, while keeping $H_{c2}=H_{c2}^{(\mathrm{exp})}=750\:\mathrm{G}$ and $r=10$ (solid red line). The orange symbols are the experimental results, and the dashed red line reproduces the theoretical third-harmonic response $\left|\overline{M_{3}}\right|$ of the clean system with $T_{c}=T_{c}^{(\mathrm{exp})}$ and $H_{c2}=H_{c2}^{(\mathrm{exp})}$.} \label{fig:final} \end{figure} The two parameters characterizing the distribution function, $\mu$ and $\sigma$, are not independent, since they are related by the value of $T_{c}$. To see this, we first define the temperature-dependent superconducting volume fraction $v_{F}\left(T\right)$, which is given by \begin{equation} v_{F}\left(T\right)=1-\int_{0}^{T}{P\left(t_{c}\right)dt_{c}}=\frac{1}{2}-\frac{1}{2}\mathrm{erf}\left(\frac{\ln\frac{T}{\mu}}{\sqrt{2}\sigma}\right),\label{volume_fraction} \end{equation} since the integral on the right-hand side gives the non-superconducting volume fraction ($T>t_{c}$). When the volume fraction becomes larger than a threshold value $v_{F}^{*}$, the local superconducting regions are expected to percolate and the whole sample becomes superconducting. Note that a similar criterion was used in the analysis of Ref. \citep{mayoh2015}. $T_{c}$ is then obtained by solving the equation $v_{F}\left(T_{c}\right)=v_{F}^{*}$, \begin{equation} \frac{\mu}{T_{c}}=\exp\left[-\sqrt{2}\sigma\mathrm{erf}^{-1}\left(1-2v_{F}^{*}\right)\right], \end{equation} where $\mathrm{erf}^{-1}(x)$ is the inverse error function. For simplicity, we use for $v_{F}^{*}$ the site percolation threshold value for a cubic lattice, $v_{F}^{*}=0.3$. While $v_{F}^{*}$ itself could be considered a free parameter, we opt to fix it to avoid increasing the number of fitting parameters. As a result, the only additional parameter needed to compute $\left\langle \left|\overline{M_{3}}\right|\right\rangle $, as compared to the ``clean'' system $\left|\overline{M_{3}}\right|$, is the dimensionless $\sigma$, which determines the width of the distribution. In Fig. \ref{fig:ln}(a), we illustrate the profile of $P\left(t_{c}\right)$ for different values of $\sigma$ under the constraint $v_{F}\left(T_{c}^{(\mathrm{exp})}\right)=0.3$. The full expression for $\left\langle \left|\overline{M_{3}}\right|\right\rangle $ then becomes: \begin{widetext} \begin{equation} \frac{\left\langle \left|\overline{M_{3}}\right|\right\rangle \left(\epsilon\right)}{M_{\infty}}=\frac{24\left(\epsilon+1\right)}{\pi\ln2}\int_{0}^{1}\frac{dx}{x\sqrt{2\pi\sigma^{2}}}\exp\left\{ -\left[\frac{\ln\left(x\epsilon+x\right)}{\sqrt{2}\sigma}+\mathrm{erf}^{-1}\left(1-2v_{F}^{*}\right)\right]^{2}\right\} \int_{-\pi/2}^{\pi/2}\mathcal{M}\left(x,h_{0}\cos\theta\right)\cos3\theta\,d\theta \end{equation} with: \begin{equation} \mathcal{M}\left(x,h\right)=-\int_{0}^{\frac{\pi}{2}}d\phi\left\{ \frac{\frac{1}{x}-1+r\sin^{2}{\phi}}{2h}\left[\psi\left(\frac{\frac{1}{x}-1+r\sin^{2}{\phi}}{2h}+\frac{1}{2}\right)-1\right]-\ln{\Gamma}\left(\frac{\frac{1}{x}-1+r\sin^{2}{\phi}}{2h}+\frac{1}{2}\right)+\frac{1}{2}\ln{\left(2\pi\right)}\right\} \end{equation} \end{widetext} Using the distribution functions of Fig. \ref{fig:ln}(a), in Fig. 
\ref{fig:ln}(b) we present the calculated averaged third-harmonic response $\left\langle \left|\overline{M_{3}}\right|\right\rangle $ (solid red line) using the experimentally determined values for $T_{c}$ and $H_{c2}$. The comparison with the data shows that even a relatively mild width of the distribution of $t_{c}$ values, with $\sigma\lesssim0.1$, is capable of capturing the extended temperature window for which the third-harmonic response is sizable. As anticipated, this behavior is a consequence of the fact that regions with a locally higher $T_{c}$ value, although occupying a small volume, provide a sizable contribution to the third-harmonic response. The temperature dependence of the third-harmonic response data, however, is not very well captured by the theoretical curves in Fig. \ref{fig:ln}(b). To try to address this issue, we promote $T_{c}$ to a free parameter and allow it to deviate slightly from the experimental value $T_{c}^{(\mathrm{exp})}=1.51$ K. Fig. \ref{fig:final} shows the results for $\left\langle \left|\overline{M_{3}}\right|\right\rangle $ in the case of $T_{c}=1.41\:\mathrm{K}\approx0.93T_{c}^{(\mathrm{exp})}$ and $\sigma=0.08$. Clearly, the temperature dependence of the calculated $\left\langle \left|\overline{M_{3}}\right|\right\rangle $ becomes more similar to the experimentally measured one, but still fails to capture it completely. Thus, our conclusion is that while $T_{c}$ inhomogeneity may explain the extended temperature range where the third-harmonic response is sizable, it is unlikely to explain the exponential tail of $\left|\overline{M_{3}}\right|$ observed experimentally in Ref. \citep{pelc2019}. \section{Concluding remarks \label{sec:Conclusions}} In this work, we used the LD model to compute the third-harmonic magnetic response $\left|\overline{M_{3}}\right|$ due to Gaussian superconducting fluctuations. Due to its phenomenological nature, the LD model could in principle be applicable to both conventional and unconventional superconductors. Our detailed comparison with measurements of $\left|\overline{M_{3}}\right|$ found that the theoretical modeling provides a good description of the data in the case of Pb, Nb, and V -- provided that the critical field is properly modified from its experimental value -- but a rather poor account of the data for SRO. Inclusion of $T_{c}$ inhomogeneity, which is intrinsically present in SRO, significantly improved the agreement between the theoretical model and the experimental data, although the model could not properly capture the experimentally observed exponential temperature dependence of $\left|\overline{M_{3}}\right|$ (see Ref. \citep{pelc2019}). Further investigation is thus required to elucidate the origin of this exponential behavior of $\left|\overline{M_{3}}\right|$, which was also seen in other perovskite superconductors such as SrTiO$_{3}$ (STO) and the cuprates, and appears to be quite robust \citep{pelc2019}. One cannot completely discard simple $T_{c}$ inhomogeneity as the source of this effect, since here we only focused on a very specific and particularly simple distribution function for $T_{c}$. While this choice allowed us to argue on a more quantitative basis that $T_{c}$ inhomogeneity can explain why $\left|\overline{M_{3}}\right|$ remains large over a wide temperature window in SRO, the actual $T_{c}$ distribution is certainly more complicated and likely material-dependent. 
A phenomenological $T_{c}$ distribution will likely require fine-tuning to give an exponential temperature dependence of the third-harmonic response. Nevertheless, if rare regions are present, they might give rise to specific tails in the distribution function that may be common to different materials; these types of effects have been explored in more detail in Refs. \citep{dodaro2018,pelc2021}. We also note that, in the particular case of the cuprates, an exponential temperature-dependent behavior associated with superconducting fluctuations was also observed in other observables such as linear/nonlinear conductivity and specific heat, and described in terms of a Gaussian $T_{c}$ distribution \citep{popcevic2018,Pelc2018}. It would be interesting to investigate whether the exponential temperature dependence observed in the third-harmonic response of SRO is also manifested in these other observables. In fact, as shown in Ref. \citep{pelc2019}, prior specific heat data \citep{nishizaki1999} are consistent with this possibility. Besides inhomogeneity, another possibility, more specific to SRO, is that, if it is indeed a two-component superconductor, as proposed by different models \citep{Romer2019,Agterberg2020,Kivelson2020,Willa2020}, the superconducting fluctuation spectrum will likely be more complicated than that of the LD model. However, the fact that the same exponential temperature dependence of $\left|\overline{M_{3}}\right|$ is seen in STO and the cuprates, the latter being single-component superconductors, renders this scenario less likely. Finally, a crucial approximation of the LD model is that it solely focuses on Gaussian superconducting fluctuations. This raises the interesting question of whether non-Gaussian fluctuations may also play an important role in the fluctuation spectra of perovskite superconductors. \begin{acknowledgments} We thank Z. Anderson and S. Griffitt for assistance in ac susceptibility probe design and construction, and A. Mackenzie and C. Hicks for providing the SRO samples. This work was supported by the U. S. Department of Energy through the University of Minnesota Center for Quantum Materials, under Award No. DE-SC-0016371. \end{acknowledgments} \bibliographystyle{apsrev}
{ "timestamp": "2021-05-06T02:07:30", "yymm": "2105", "arxiv_id": "2105.01813", "language": "en", "url": "https://arxiv.org/abs/2105.01813" }
\section{Introduction\label{sec:intro}} Modeling the full spatio-temporal evolution of natural processes is often computationally expensive, motivating the use of reduced-order models (ROMs) that capture only the dominant behaviors of a system~\cite{noack2003hierarchy,Noack2011book,carlberg2011efficient,carlberg2013gnat,benner2015survey,Rowley2017arfm}. Projection-based model reduction is a common approach for generating such models; a high-dimensional system, such as a spatially discretized set of partial differential equations (PDEs), is projected onto a low-dimensional basis of modes~\cite{Taira2017aiaa,taira2020modal}. This projection leads to a computationally efficient system of ordinary differential equations (ODEs) that describes how the mode amplitudes evolve in time~\cite{holmes2012turbulence}. However, these models often suffer from stability issues, causing solutions to diverge in finite-time. To address this issue, Schlegel and Noack~\cite{Schlegel2015jfm} developed a ``trapping theorem'' with necessary and sufficient conditions for long-term model stability for systems that exhibit quadratic, energy-preserving nonlinearities. Quadratic nonlinearity is pervasive in nature, with common examples including convection in the Navier-Stokes equations and the Lorentz force in magnetohydrodynamics (MHD). The trapping theorem provides conditions for the existence of a global trapping region, towards which every system trajectory asymptotically and monotonically converges; once a trajectory enters this region, it remains inside for all time, guaranteeing that all trajectories are bounded. These types of guarantees are ideal for the application of real-time flow-control strategies. An example trapping region is illustrated by the blue sphere in Fig.~\ref{fig:overview} for the Lorenz system. For convenience, we will use the terms ``global stability'', ``long-term boundedness'', and ``monotonically trapping region'' interchangeably, although systems exhibiting trapping regions are a strict subset of globally stable systems (see Fig.~1 of Schlegel and Noack~\cite{Schlegel2015jfm} for a useful organizational diagram of these various notions of stability). In this work, we adapt the trapping theorem from projection-based modeling to promote global stability in data-driven machine learning models. \begin{figure}[t] \vspace{-.05in} \begin{center} \begin{overpic}[width=0.99\textwidth]{overview.pdf} \end{overpic} \end{center} \vspace{-.22in} \caption{Left: Decision diagram to determine global stability, modified from Schlegel and Noack~\cite{Schlegel2015jfm} and described in Section~\ref{sec:trapping_theorem}. Right: Illustration of a trapping region (blue sphere) for the Lorenz system; all outside trajectories monotonically approach this region, and after entering, remain inside. 
Trajectories inside the red ellipsoid experience positive energy growth, in this case precluding convergence to a fixed point.} \label{fig:overview} \vspace{-0.1in} \end{figure} Increasingly, reduced-order models of complex systems, such as fluids and plasmas, are discovered from data with modern machine learning algorithms~\cite{schmid_dynamic_2010,Rowley2009jfm,mezic_analysis_2013,Kaiser2014jfm,Brunton2016pnas,klus2018data,Rudy2017sciadv,Dam2017pf,loiseau2018constrained,Towne2018jfm,deng2020low,raissi2018hidden,pathak2018model,bar2019learning,Duraisamy2019arfm,rackauckas2020universal,alves2020data,pan2020physics,qian2020lift,kaptanoglu2020physics,li2020fourier,lee2020model,Herrmann2020arxiv,sanchez2020learning,kochkov2021machine,kaptanoglu2021structure}, rather than classical projection-based methods that are intrusive and require intricate knowledge of the governing equations. These data-driven approaches for modeling fluid dynamics~\cite{brenner2019perspective,brunton2020machine} range from generalized regression techniques~\cite{schmid_dynamic_2010,Brunton2016pnas,loiseau2018constrained} to deep learning~\cite{Duraisamy2019arfm,lee2020model,li2020fourier,sanchez2020learning,kochkov2021machine,raissi2020science}. It is often possible to improve the stability and performance of data-driven models by incorporating partially known physics, such as conservation laws and symmetries~\cite{loiseau2018constrained,kaptanoglu2020physics,kaptanoglu2021structure}, or known physical structure~\cite{cranmer2020lagrangian}. Thus, incorporating physics into machine learning and developing hybrid data-driven and operator-based approaches are rapidly growing fields of research~\cite{Majda2012nonlinearity,ballarin2015supremizer,peherstorfer2016data,loiseau2018constrained,yang2020physics,Raissi2019jcp,mohebujjaman2019physically,Noe2019science,lee2019deep,cranmer2020lagrangian}. Physics can be incorporated into machine learning algorithms through model structure, by augmenting training data with known symmetries, by adding constraints to the optimization, or by adding custom loss functions~\cite{brunton2020machine}. However, even physics-informed data-driven models often lack global stability guarantees, and the ability of these methods to find long-term bounded models deteriorates as the state dimension increases. In this work, we use the Schlegel and Noack~\cite{Schlegel2015jfm} trapping theorem to diagnose and promote global stability of data-driven models with quadratic nonlinearities. Even though their theorem was developed in the context of projection-based ROMs, we emphasize that it can be applied directly to analyze data-driven model stability \emph{post hoc}, and we examine conditions under which it holds. Next, we describe how to use this theorem to promote global stability in machine-learned models by modifying the optimization loss function. We illustrate this approach on the sparse identification of nonlinear dynamics (SINDy) algorithm~\cite{Brunton2016pnas,Rudy2017sciadv} by implementing a custom optimization loss term that promotes models that are globally stable by construction. A constrained version of the SINDy optimization was previously developed to enforce energy-preserving quadratic structure in incompressible fluids~\cite{loiseau2018constrained} and it has since been extended for arbitrary state size and global conservation laws in magnetohydrodynamic systems~\cite{kaptanoglu2020physics,kaptanoglu2021structure}. 
These constrained SINDy variants generally produce more stable models, and reflect a broader trend that stability issues in system identification can often be improved by building physical constraints into system identification methods~\cite{loiseau2018constrained,champion2020unified}. Our ``trapping SINDy'' algorithm generalizes previous stabilized or constrained reduced-order models for fluids by considering global rather than local stability, allowing for both transients and long-time attracting sets. Promoting global stability also improves robustness to noise over unconstrained or constrained SINDy. Recent work by Erichson et al.~\cite{erichson2019physics} promotes a more restrictive Lyapunov stable origin in fluid flows by adding a similar loss term to the optimization problem. Additionally, much of the literature has focused on the long-time energy properties of a dynamic attractor~\cite{balajewicz2013low} by either prescribing that the system be \textit{fully} energy-preserving (or Hamiltonian)~\cite{balajewicz2013lyapunov,carlberg2015preserving,peng2016symplectic,afkham2017structure, bhat2019learning,chu2020discovering} or applying real-time control~\cite{lasagna2016sum}. Mohebujjaman et al.~\cite{mohebujjaman2019physically} also used a simple version of the trapping theorem in order to constrain a hybrid projection-based and data-driven method. The present work builds on these studies, providing a framework for addressing the long-standing challenge of promoting global stability in data-driven models. The remainder of this paper is organized as follows: in Section~\ref{sec:roms}, we introduce the general class of systems with energy-preserving quadratic nonlinearities, investigate the circumstances under which the trapping theorem holds, and indicate connections with other stability descriptions in fluid mechanics. In Section~\ref{sec:methodology}, we define our ``trapping SINDy'' algorithm. Our trapping SINDy implementation is open-source and available through the PySINDy software package~\cite{silva2020pysindy}. This is a rather technical section on nonconvex optimization; the reader may skip this section and proceed to the results if the algorithmic details are not of interest. In Section~\ref{sec:results}, we demonstrate the effectiveness of this new system identification technique on a wide range of examples. Abridged versions of all of the results have been incorporated into a single PySINDy example notebook and can be reproduced in a few minutes on a laptop. In Section~\ref{sec:conclusion}, we conclude with suggestions for future work. Similar trapping theorems are promising for data-driven models in fields such as neuroscience, epidemiology, and population dynamics. \section{Reduced-order modeling and the trapping theorem \label{sec:roms}} Before describing how we incorporate the trapping theorem of Schlegel and Noack~\cite{Schlegel2015jfm} into data-driven models, here we briefly describe the family of projection-based ROMs for which the trapping theorem was introduced, and investigate the circumstances under which this theorem holds. It is helpful to first motivate this work by reviewing the many scenarios under which energy-preserving quadratic nonlinearities can arise. In fluid dynamics, the quadratic nonlinearity often represents the convective derivative $(\bm{u}\cdot\nabla)\bm{u}$ in the Navier-Stokes equations. This quadratic nonlinearity is energy-preserving for a large number of boundary conditions. 
Examples include no-slip conditions, periodic boundary conditions~\cite{mccomb1990physics,holmes2012turbulence}, mixed no-slip and periodic boundary conditions~\cite{rummler1998direct}, and open flows in which the velocity magnitude decreases faster than the relevant surface integrals expand (e.g., two-dimensional rigid body wake flows and three-dimensional round jets)~\cite{schlichting2016boundary}. In magnetohydrodynamics, where $\bm{u}$ is the fluid velocity, $\bm{J}$ is the electromagnetic current, and $\bm{B}$ is the magnetic field, there are additional quadratic nonlinearities through $\nabla\times(\bm{u}\times\bm{B})$ and $\bm{J}\times{\bm{B}}$, which are also energy-preserving with common experimental boundary conditions such as a conducting wall~\cite{freidberg2014ideal}, or a balance between dissipation and actuation in a steady-state plasma device~\cite{kaptanoglu2020two,kaptanoglu2021structure}. Notably, dissipationless Hall-MHD has four invariants: energy, cross-helicity, magnetic helicity, and generalized helicity~\cite{galtier2016introduction}, providing a wealth of potential model constraints for Hall-MHD ROMs. \subsection{Projection-based ROMs}\label{Sec:ProjROMS} In modern scientific computing, a set of governing partial differential equations is typically discretized into a high-dimensional system of coupled ordinary differential equations. In this work we will explicitly consider dynamics with linear plus quadratic structure, as found in many fluid and plasma systems: \begin{align} \dot{\bm{u}} = \bm{L}^0\bm{u} + \bm{Q}^0(\bm{u}). \end{align} Here we assume that the PDE has already been discretized for numerical computation, resulting in a coupled system of $n$ differential equations. The state of the system $\bm{u}(\bm{x},t)\in\mathbb{R}^n$ is a high-dimensional vector that represents the fluid velocity or other set of spatio-temporal fields, for example sampled on a high-resolution spatial grid. Thus, $\bm{L}^0$ and $\bm{Q}^0$ are high-dimensional operators used to perform the numerical simulation. The zero superscript distinguishes these operators from the Galerkin coefficients defined below in Eq.~\eqref{eq:Galerkin_model}. The goal of a projection-based ROM is to transform this high-dimensional system into a lower-dimensional system of size $r\ll n$ that captures the essential dynamics. One way to reduce the set of governing equations to a set of ordinary differential equations is by decomposition into a desired low-dimensional basis $\{\bm{\varphi}_i(\bm x)\}$ in a process commonly referred to as Galerkin expansion: \begin{align} \label{eq:galerkin_expansion} \bm{u}(\bm{x}, t) = \overline{\bm{u}}(\bm{x}) + \sum_{i=1}^r a_i(t) \bm{\varphi}_i(\bm{x}). \end{align} Here, $\overline{\bm{u}}(\bm{x})$ is the mean field, $\bm{\varphi}_i(\bm x)$ are spatial modes, and $a_i(t)$ describe how the amplitudes of these modes vary in time. The proper orthogonal decomposition (POD)~\cite{holmes2012turbulence,brunton2019data} is frequently used to obtain the basis, since the modes $\bm{\varphi}_i(\bm x)$ are orthogonal.
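For concreteness, a minimal Python/NumPy sketch of this decomposition is shown below; the snapshot data and variable names are purely illustrative stand-ins, not part of any released code. The POD modes are the left singular vectors of the mean-subtracted snapshot matrix, and the temporal coefficients follow from orthogonal projection onto those modes.
\begin{verbatim}
import numpy as np

# Snapshot matrix: n grid points by M time samples (synthetic stand-in).
n, M, r = 256, 1000, 5
U = np.random.randn(n, M)                # placeholder for simulation data
u_mean = U.mean(axis=1, keepdims=True)   # mean field
U_fluc = U - u_mean                      # fluctuations about the mean

# POD modes are the left singular vectors of the fluctuation matrix.
Phi, svals, _ = np.linalg.svd(U_fluc, full_matrices=False)
Phi_r = Phi[:, :r]                       # leading r spatial modes

# Temporal coefficients a_i(t_k) by orthogonal projection onto the modes.
A = Phi_r.T @ U_fluc                     # shape (r, M); row i is a_i(t)
\end{verbatim}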
Many other modal expansions and bases have been introduced for reduced-order fluid~\cite{Taira2017aiaa, taira2020modal} and plasma models~\cite{vskvara2020detection,ferreira2020deep,nayak2020dynamic,kaptanoglu2020}, including balanced POD~\cite{willcox2002balanced,rowley2005model}, spectral POD~\cite{Towne2018jfm}, dynamic mode decomposition (DMD)~\cite{schmid_dynamic_2010,Rowley2009jfm,Kutz2016book}, the Koopman decomposition~\cite{koopman_hamiltonian_1931,mezic_analysis_2013,pan2020sparsity}, resolvent analysis~\cite{mckeon2010critical,luhar2014opposition}, and autoencoders~\cite{lusch2018deep,champion2019data, lee2020model}. The governing equations are then Galerkin projected onto the basis $\{\bm{\varphi}_i(\bm x)\}$ by substituting Eq.~\eqref{eq:galerkin_expansion} into the PDE and using inner products to remove the spatial dependence. Orthogonal projection onto POD modes is the simplest and most common procedure, resulting in \emph{POD-Galerkin} models, although Petrov-Galerkin projection~\cite{carlberg2011efficient,carlberg2013gnat} has been shown to improve model performance in some cases. If the governing equations for $\bm{u}(\bm{x},t)$ are at most quadratic in nonlinearity, Galerkin projection produces the following system of ODEs for the set of temporal functions $a_i(t)$, \begin{align} \label{eq:Galerkin_model} \dot{a}_i(t) &= E_i+ \sum_{j=1}^rL_{ij}a_j + \sum_{j,k=1}^r Q_{ijk}a_ja_k. \end{align} $E_i$, $L_{ij}$, and $Q_{ijk}$ are tensors of static coefficients, obtained from spatial inner products between the $\bm{\varphi}_i(\bm x)$ and the operators $\bm{L}^0$ and $\bm{Q}^0$, that define the model dynamics. We consider the class of systems with energy-preserving quadratic nonlinearity, for which \begin{align} \label{eq:energy_preserving_nonlinearity_full} \sum_{i,j,k=1}^rQ_{ijk}a_ia_ja_k = 0, \end{align} or equivalently, for all $i,j,k \in \{1,...,r\}$, \begin{align} \label{eq:energy_preserving_nonlinearity} Q_{ijk} + Q_{jik} + Q_{kji}= 0. \end{align} Without loss of generality, $Q_{ijk}$ is taken to be symmetric in its last two indices $j$ and $k$. \subsection{Schlegel--Noack trapping theorem\label{sec:trapping_theorem}} The Schlegel and Noack~\cite{Schlegel2015jfm} theorem, summarized in Theorem~\ref{th:trapping_theorem} below, provides necessary and sufficient conditions for the projected ROM in Eq.~\eqref{eq:Galerkin_model} to be globally stable by admitting a trapping region. The theorem is necessary and sufficient for systems that exhibit \textit{effective nonlinearity}, i.e., systems with no invariant manifolds on which $Q_{ijk}a_ja_k = 0$ for some $i$ for all time; on such manifolds a linear stability analysis must be adopted instead. In other words, systems that start in purely linear model subspaces, and remain in those subspaces, do not exhibit effective nonlinearity. Fortunately, realistic fluid flows exhibit effective nonlinearity, although there are some subtleties we discuss in Section~\ref{sec:effective_nonlinearity}. In this case, we can always use the energy $K$ as a Lyapunov function for the trapping region. This is ideal, as finding a suitable Lyapunov function is often the most difficult task in stability analysis. A generic nonlinear system may exhibit multiple fixed points, limit cycles, and other equilibrium point behavior. However, any physical system should produce bounded trajectories, and the global stability property from the trapping theorem is agnostic to any \textit{local} stability properties.
This manuscript solely considers systems that are globally stable, or equivalently, long-term (ultimately) bounded, by virtue of exhibiting globally trapping regions. Long-term boundedness means that there exists some $T_0$ and $R_0$ such that $\|\bm{a}(t)\| < R_0$ for all $t > T_0$. A trapping region encompasses an attractor or attracting set, which is typically defined as a set of the system phase space that many trajectories converge towards; this can be an equilibrium point, periodic trajectory, Lorenz's ``strange attractor'', or some other chaotic trajectory. Whenever it does not detract from the discussion, we omit the qualifiers ``globally'', ``monotonically'' and ``long-term'', as this is the only characterization of stability considered in the present work. Examples of physical systems that are globally stable but do not exhibit a trapping region include Hamiltonian systems and systems that do not fit into the trapping theorem assumptions (examined further in Section~\ref{sec:effective_nonlinearity} and summarized in Fig.~\ref{fig:overview}). For instance, fully energy-preserving systems satisfy $\dot{K} = 0$, so trajectories represent shells of constant distance from the origin; these trajectories are globally bounded but no trapping region exists. \vspace{0.05in} \begin{mytheo}{Schlegel and Noack Trapping Theorem}{trapping_theorem} This theorem provides necessary and sufficient conditions for energy-preserving, effectively nonlinear, quadratic systems to exhibit a trapping region $B(\bm{m},R_m)$, a ball centered at $\bm{m}$ with radius $R_m$. Outside this region the rate of change of energy $K$ is negative everywhere, producing a Lyapunov function that renders this system globally stable. Recentering the origin by an arbitrary constant vector $\bm{m}$, the energy may be expressed in terms of the shifted state vector $\bm{y}(t)=\bm{a}(t)-\bm{m}$ as \begin{align} \label{eq:K} K = \frac{1}{2}\bm{y}^T\bm{y}. \end{align} Taking a derivative and substituting in Eq.~\eqref{eq:Galerkin_model} produces \begin{align} \label{eq:Kdot} \frac{d}{dt}K = \bm{y}^T\bm{A}^S\bm{y} + \bm{d}_m^T\bm{y}, \end{align} \begin{align}\label{eq:def_AS_LS_dm} \bm{A}^S &= \bm{L}^S - \bm{m}^T\bm{Q}, \qquad \bm{L}^S = \frac{1}{2}(\bm{L} + \bm{L}^T), \quad \text{and}\quad \bm{d}_m = \bm{E} + \bm{L}\bm{m} + \bm{Q}\bm{m}\bm{m}. \end{align} $\bm{m}^T\bm{Q}$ refers to $m_iQ_{ijk}$ and $\bm{Q}\bm{m}\bm{m}$ to $Q_{ijk}m_jm_k$. The trapping theorem may now be stated as: \textit{There exists a monotonically trapping region at least as small as the ball $B(\bm{m},R_m)$ if and only if the real, symmetric matrix $\bm{A}^S$ is negative definite$^*$\let\thefootnote\relax\footnotetext{$^*$ If a system is long-term bounded (not necessarily exhibiting a monotonically trapping region) and effectively nonlinear, only the existence of an $\bm{m}$ producing a negative \emph{semi}definite $\bm{A}^S$ is guaranteed.} with eigenvalues $\lambda_r \leq \cdots \leq \lambda_1 < 0$; the radius is then given by $R_m = \|\bm{d}_{m}\|/|\lambda_1|$. } \vspace{.15in} In practice, the goal is then to find an origin $\bm{m}$ so that the matrix $\bm{A}^S$ is negative definite, guaranteeing a trapping region and global stability. Without effective nonlinearity, described at the beginning of Section~\ref{sec:trapping_theorem}, only the backwards direction holds; if we can find an $\bm{m}$ so that $\bm{A}^S$ is negative definite, the system exhibits a trapping region. However, such systems can be globally stable without admitting such an $\bm{m}$. 
Accordingly, the goal of Section~\ref{sec:methodology} is to use this theorem to define a constrained machine learning optimization that identifies a reduced-order model with a guaranteed trapping region. Even when the requirements of the trapping theorem are not fully satisfied, the algorithm results in Section~\ref{sec:results} indicate that this approach tends to produce models with improved stability properties. \end{mytheo} To understand the $R_m$ bound in Thm.~\ref{th:trapping_theorem}, we transform into eigenvector coordinates $\bm{z} = \bm{T}\bm{y}$, $\bm{h} = \bm{d}_m\bm{T}^T$, where the columns of $\bm{T}$ are the eigenvectors of $\bm{A}^S$. Now Eq.~\eqref{eq:Kdot} becomes \begin{align} \label{eq:ellipsoid_details} \frac{d}{dt}K = \sum_{i=1}^r h_iz_i + \lambda_i z_i^2 = \sum_{i=1}^r \lambda_i \left(z_i + \frac{h_i}{2\lambda_i}\right)^2 - \frac{h_i^2}{4\lambda_i}. \end{align} We can see that the trapping region will be determined by the equation of the ellipsoid where $\dot{K} = 0$, \begin{align} \label{eq:ellipsoid} 1 = \sum_{i=1}^r \frac{1}{\alpha_i^2} \left(z_i + \frac{h_i}{2\lambda_i}\right)^2, \qquad \alpha_i = \frac{1}{2}\sqrt{\frac{1}{\lambda_i}\sum_{j=1}^r\frac{ h_j^2}{\lambda_j}} \leq \frac{1}{2|\lambda_1|}\|\bm{d}_m\|. \end{align} The origin at $\bm{y} = 0$ ($\bm{a} = \bm{m}$) lies on the ellipsoid, and in the worst-case scenario lies at the tip of the major axis. Thus, to guarantee that a ball centered at this origin entirely contains this region, we estimate $R_m$ as twice the largest possible value of the half-axes $\alpha_i$. Note that our definition of $\alpha_i$ differs from Schlegel and Noack~\cite{Schlegel2015jfm}; we believe that there is a minor typo in their Eq. 3.14. Fortunately, the only consequence is a change in the estimate of $R_m$. Lastly, recall that long-term bounded (not necessarily exhibiting a monotonically trapping region) and effectively nonlinear systems only guarantee that an $\bm{m}$ exists such that $\bm{A}^S$ is negative semidefinite. In the case of mixed zero and nonzero eigenvalues, the ellipsoid becomes a paraboloid. The infinite extent of the paraboloid precludes a monotonic trapping region but not other forms of global stability. We do not discuss this edge case further because, numerically, an eigenvalue of exactly zero essentially never arises in practice. \subsection{Model truncation, effective nonlinearity, and closure models \label{sec:effective_nonlinearity}} Before incorporating the trapping theorem into system identification, we investigate the circumstances under which truncated projection-based models will exhibit effective nonlinearity; the reader may skip this section if the subtleties of the trapping theorem are not of interest, although the discussion here is pertinent to Section~\ref{sec:results_burgers}. Effectively nonlinear dynamics are ideal because they can be decisively classified as globally stable or not, requiring no additional stability analysis.
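This classification is purely algebraic in the model coefficients, so it can be carried out \emph{post hoc} for any identified quadratic model. A minimal Python/NumPy sketch is given below, using the Lorenz system of Section~\ref{sec:results_Lorenz} as an example; the function and variable names are hypothetical and purely illustrative.
\begin{verbatim}
import numpy as np

def check_trapping(E, L, Q, m):
    # Evaluate Thm. 1 for da_i/dt = E_i + L_ij a_j + Q_ijk a_j a_k
    # at a candidate center m; Q is assumed energy-preserving.
    Ls = 0.5 * (L + L.T)                      # symmetric part of L
    As = Ls - np.einsum('i,ijk->jk', m, Q)    # A^S = L^S - m^T Q
    As = 0.5 * (As + As.T)                    # symmetrize against roundoff
    eigs = np.linalg.eigvalsh(As)             # lambda_r <= ... <= lambda_1
    d_m = E + L @ m + np.einsum('ijk,j,k->i', Q, m, m)
    if eigs.max() < 0:                        # negative definite: trapping region
        return True, np.linalg.norm(d_m) / abs(eigs.max())   # radius R_m
    return False, None

# Lorenz system with the analytical choice m = [0, 0, rho + sigma]:
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
L = np.array([[-sigma, sigma, 0], [rho, -1, 0], [0, 0, -beta]])
Q = np.zeros((3, 3, 3))
Q[1, 0, 2] = Q[1, 2, 0] = -0.5    # -xz term, symmetrized over (j, k)
Q[2, 0, 1] = Q[2, 1, 0] = 0.5     # +xy term, symmetrized over (j, k)
print(check_trapping(np.zeros(3), L, Q, np.array([0.0, 0.0, rho + sigma])))
\end{verbatim}
In general the theorem only asserts the existence of a suitable center, so a search or optimization over candidate $\bm{m}$ is required; this is precisely what the algorithm of Section~\ref{sec:methodology} automates.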
To proceed, consider a Fourier-Galerkin model of Burgers' equation derived from the Fourier expansion $u(x,t) = \sum a_k(t)e^{ikx}$, and further examined in Section~\ref{sec:results_burgers}, \medmuskip=0mu \thickmuskip=0mu \thinmuskip=0mu \begin{align} \label{eq:burgers_cascade} \dot{u} = -u\partial_x u + \nu \partial_{xx}u \quad \Longrightarrow \quad \dot{a}_k = -\nu k^2a_k - \sum_{\ell=-\infty}^{\infty} i \ell a_{\ell} a_{k - \ell} \quad \Longrightarrow \quad \dot{K} = -\nu\sum_{k=-\infty}^\infty k^2 a_k^2 - \sum_{k,\ell=-\infty}^{\infty} i \ell a_{\ell} a_{k - \ell}a_k. \end{align} \medmuskip=4mu \thickmuskip=4mu \thinmuskip=4mu The particular ``triadic'' structure of the nonlinear term in the spectral domain, where the only nonzero terms acting on $a_k$ are those whose wavenumbers sum to $k$, is identical to that arising in isotropic turbulence~\cite{Tennekes1972book}. The triadic term in $\dot{K}$ transfers energy between length scales. Since the viscous term scales with $k^2$, energy is most effectively dissipated at the smallest scales; the combination of the two terms leads to the traditional energy cascade in which energy flows ``downhill'' from larger to smaller scales. This description implies that heavily truncating the Galerkin system leads to under-resolving the dissipation rate and a closure scheme may be required to re-introduce the full dissipation. Towards this goal, modern sparse regression and deep learning methods have been used to produce new closures for fluid models~\cite{ling2016reynolds,san2018neural,pan2018data,Duraisamy2019arfm,Maulik2019jfm,beetham2020formulating}. While the traditional explanation for unstable Galerkin models derives from these truncated dissipative scales, increasingly there are alternate explanations including fundamental numerical issues with the Galerkin framework (potentially resolved in a Petrov-Galerkin framework)~\cite{grimberg2020stability} and the Kolmogorov width issues of linear subspaces more generally~\cite{lee2020model}. If true, this is probably good news for (incompressible, dissipationless) Hall-MHD, where the conservation of energy and the cross, magnetic, and generalized helicities leads to direct, inverse, and even bidirectional cascades~\cite{pouquet2019helicity}. Interestingly, the notion of effective nonlinearity appears to be another approach from which we can attempt to resolve these disagreements about the sources of ROM instability. To proceed with this theme, we show that the triadic structure of the model has repercussions for the presence of effective nonlinearity. Consider the truncated model \begin{align} \dot{a}_k = -\nu k^2a_k - \sum_{\ell=-r}^{r} i\ell a_{\ell} a_{k - \ell}, \qquad k \in \{1,...,r\} \end{align} with the initial condition $a_{j} = 1$ for any $j \in \{\pm(\frac{r}{2}+1), \pm(\frac{r}{2}+2),..., \pm r\}$, and $a_k = 0$, $k \neq j$. For simplicity we have assumed $r$ is divisible by two. In this case the system has $r$ invariant 1D subspaces for which \begin{align} \label{eq:subspaces} \dot{a}_{j} = -\nu j^2a_{j}. \end{align} These invariant linear subspaces exist because higher wavenumber modes that \emph{could} interact to transfer energy between coefficients have been discarded. In other words, \textit{Fourier-Galerkin models with finite truncation do not exhibit effective nonlinearity}. 
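This absence of effective nonlinearity is straightforward to verify numerically: integrating the truncated system from one of the initial conditions above, the quadratic term never activates and $a_j$ follows the purely linear decay $e^{-\nu j^2 t}$. A minimal sketch (Python/SciPy; parameter values illustrative) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

r, nu = 10, 0.025        # truncation level and viscosity (illustrative)
j = r // 2 + 1           # seed a single mode in the upper half of the spectrum

def rhs(t, y):
    a = y[:r] + 1j * y[r:]                             # a_k for k = 1..r
    full = np.concatenate([np.conj(a[::-1]), [0], a])  # a_k for k = -r..r
    dadt = np.empty(r, dtype=complex)
    for k in range(1, r + 1):
        quad = sum(1j * ell * full[r + ell] * full[r + k - ell]
                   for ell in range(-r, r + 1) if abs(k - ell) <= r)
        dadt[k - 1] = -nu * k**2 * a[k - 1] - quad
    return np.concatenate([dadt.real, dadt.imag])

y0 = np.zeros(2 * r)
y0[j - 1] = 1.0                                        # a_j = 1, others zero
sol = solve_ivp(rhs, (0, 10), y0, rtol=1e-10, atol=1e-12)
aj = sol.y[j - 1] + 1j * sol.y[r + j - 1]
# a_j matches the purely linear decay: the nonlinearity never activates
print(np.allclose(aj, np.exp(-nu * j**2 * sol.t), rtol=1e-6, atol=1e-8))
\end{verbatim}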
In contrast, POD-Galerkin models weakly break the triadic structure of the nonlinearity~\cite{couplet2003intermodal}, and therefore will in general weakly satisfy the effective nonlinearity criterion of the trapping theorem, to the extent that inhomogeneity in the system causes the POD modes to differ from Fourier modes. There are also modern ROMs which attempt to retain the full dissipation by utilizing bases that intentionally mix length scales~\cite{balajewicz2012stabilization} -- these types of models should be more likely to satisfy effective nonlinearity. Lastly, numerical errors appear to weakly restore effective nonlinearity, since errors break any triadic structure. Proceeding with this analysis is complicated because the numerical errors also weakly break our foundational assumption that $Q_{ijk}$ is energy-preserving. Future investigations should explore the relationships between effective nonlinearity, the energy cascade, and closure models that reintroduce stabilizing dissipation to truncated models. It is difficult to quantify ``how close'' a model is to exhibiting effective nonlinearity, since a lack of effective nonlinearity $Q_{ijk} a_j a_k = 0$ must hold for all time, for any $i$, and for any valid system trajectory. However, for an orthonormal set of temporal modes, and assuming there exists at least one index $i$ such that $Q_{ijj} \neq 0$, we propose quantifying the average strength of model effective nonlinearity through the metric \begin{align} \label{eq:effective_nonlinearity_strength} S_e = \frac{\min_i |Q_{ijk}\overline{a_ja_k}|}{\max_i |Q_{ijk}\overline{a_ja_k}|} = \frac{\min_i |Q_{ijj}|}{\max_i |Q_{ijj}|}. \end{align} The bar in $\overline{a_ja_k}$ denotes a temporal average. We will show in Section~\ref{sec:results_burgers} that in system identification a lack of effective nonlinearity is not a serious limitation. Our trapping SINDy algorithm in Section~\ref{sec:methodology} minimizes $\dot{K}$ whether or not a negative definite $\bm{A}^S$ can be realized. However, without additional stability analysis, such models are no longer provably stable for arbitrary initial conditions. Although Eq.~\eqref{eq:subspaces} is a linearly stable system, this is not guaranteed for more general fluid models than the simple Burgers' equation considered here. \subsection{Constraints in model reduction and system identification\label{sec:constraints}} Before moving on to system identification, it is worth noting that enforcing these types of existence-based stability conditions is subtle. There are modern techniques to implement implicit constraints of the form \begin{align} \label{eq:general_implicit_constraint} \mathcal{C}_i(\dot{\bm{a}},\bm{a},t,...) = 0,\,\,\,\,\,\,\,\,\, i \in \{1,2,...\} \end{align} into both model reduction~\cite{lee2019deep,schein2020preserving} and system identification~\cite{loiseau2018constrained,kolter2019learning,champion2020unified,srivastava2021generalizable}. Precisely in this way, the energy-preserving constraint in Eq.~\eqref{eq:energy_preserving_nonlinearity} is cast as an affine version of Eq.~\eqref{eq:general_implicit_constraint} in our optimization in Section~\ref{sec:methodology}. However, enforcing stability in quadratic energy-preserving models is more complicated than Eq.~\eqref{eq:general_implicit_constraint}. To see this, note that there are a few different circumstances under which we might want to promote stability.
If the true $\bm{A}^S$ \textit{and} the optimal $\bm{m}$ are known, we can simply constrain the coefficients in Eq.~\eqref{eq:Galerkin_model} to produce this known negative definite $\bm{A}^S$. This would imply that we already know the optimally-shifted eigenvalues of the system and an $\bm{m}$ that produces these negative eigenvalues; if this is the case, so much information about the system of ODEs is already known that machine learning methods are likely unnecessary. But far more interesting are the cases in which 1) the underlying system is known to be globally stable and effectively nonlinear, so we want to find the ``correct'' $\bm{m}$ and corresponding $\bm{A}^S$, or 2) it is not known if any $\bm{m}$ exists such that $\bm{A}^S$ is negative definite. In system identification, either of these cases can be addressed by searching for a model that both optimally fits the data and is globally stable. In this context, we adopt a mixed approach in the next section where we enforce the energy-preserving constraint and then separately bias the optimization towards models with a trapping region. This technique is a significant methodological extension because we can no longer rely on constraints of the form in Eq.~\eqref{eq:general_implicit_constraint}. \section{Trapping SINDy algorithm \label{sec:methodology}} We now describe how to incorporate the trapping theorem of Schlegel and Noack~\cite{Schlegel2015jfm} into data-driven model identification, specifically for the SINDy algorithm. Before describing the modified algorithm in Sec.~\ref{sec:sindy_trapping}, we first present the standard SINDy algorithm~\cite{Brunton2016pnas} along with a recent variant that incorporates explicit constraints~\cite{loiseau2018constrained,champion2020unified}. We then build on this framework to incorporate the Schlegel--Noack trapping theorem. \subsection{Standard and constrained SINDy algorithms\label{sec:sindy_vanilla}} The goal of system identification is to determine a system of ODEs or PDEs that describes how a given data set evolves dynamically in time. The SINDy algorithm~\cite{Brunton2016pnas} identifies sparse, parsimonious models that remain interpretable and avoid the overfitting issues that are common in this field. As in Loiseau et al.~\cite{loiseau2018constrained}, we develop SINDy models for the dynamics of $\bm{a}$, representing the coefficients or amplitudes of a modal Galerkin expansion in Eq.~\eqref{eq:galerkin_expansion}. We assume that the dynamics of $\bm{a}$ will be described as a sparse linear combination of elements from a library $\bm{\Theta}$ containing candidate terms such as: \begin{align} \label{eq:SINDyExpansion} \frac{d}{dt}{\bm{a}} \approx \bm{\Theta}(\bm{a})\bm{\Xi},\qquad \bm{\Theta}(\bm{a}) = \begin{bmatrix} ~~\vline&\vline & \vline\\ ~~\bm{1}&\hspace{-.025in}\bm{a} & \hspace{-.025in}\bm{a}\otimes\bm{a} & \hspace{-.025in}\\ ~~\vline &\vline & \vline \end{bmatrix}. \end{align} Here $\bm{a}\otimes\bm{a}$ contains all combinations of $a_ia_j$ without duplicates. The $\bm{\Theta}$ matrix may contain any desired candidate terms, but in this work we consider only terms up to quadratic polynomials in $\bm{a}$ because we are searching for energy-preserving quadratic models.
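For concreteness, a minimal NumPy sketch of assembling this quadratic library from trajectory data is shown below; the function and variable names are illustrative only.
\begin{verbatim}
import numpy as np

def quadratic_library(A):
    # Candidate library [1, a_i, a_i a_j (i <= j)] evaluated on data;
    # A has shape (M, r): M samples of the r-dimensional state.
    M, r = A.shape
    cols = [np.ones((M, 1)), A]
    cols += [(A[:, i] * A[:, j])[:, None]
             for i in range(r) for j in range(i, r)]
    return np.hstack(cols)   # shape (M, N), N = (r^2 + 3r)/2 + 1

Theta = quadratic_library(np.random.randn(100, 3))
print(Theta.shape)           # (100, 10) for r = 3
\end{verbatim}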
The expressions in Eq.~\eqref{eq:SINDyExpansion} are typically evaluated on a data matrix $\bm{X}$ obtained from time-series data of the state, $\bm{a}(t_1),\bm{a}(t_2),...,\bm{a}(t_M)$: \begin{eqnarray} \bm{X} = \overset{\text{\normalsize state}}{\left.\overrightarrow{\overset{~~}{\begin{bmatrix} a_1(t_1) & a_2(t_1) & \cdots & a_r(t_1)\\ a_1(t_2) & a_2(t_2) & \cdots & a_r(t_2)\\ \vdots & \vdots & \ddots & \vdots \\ a_1(t_M) & a_2(t_M) & \cdots & a_r(t_M) \end{bmatrix}}}\right\downarrow}\begin{rotate}{270}\hspace{-.125in}time~~\end{rotate} \hspace{.125in} \label{Eq:DataMatrix}. \end{eqnarray} A matrix of derivatives in time, $\dot{\bm{X}}$, is defined similarly and can be numerically computed from $\bm{X}$. In this case, Eq.~\eqref{eq:SINDyExpansion} becomes $\dot{\bm{X}} = \bm{\Theta}(\bm{X})\bm{\Xi}$. The goal of SINDy is to determine a sparse matrix of coefficients $\bm{\Xi} = \begin{bmatrix}\bm{\xi}_1 &\bm{\xi}_2 & \cdots & \bm{\xi}_r\end{bmatrix}$, also written in vectorized form as \begin{align} \label{eq:vectorized_xi} \bm{\Xi}[:] = \bm{\xi} = [\xi^{a_1}_1, \ldots, \xi^{a_r}_1,\xi^{a_1}_2 ,\ldots, \xi^{a_r}_2,\ldots, \xi^{a_1}_N, \ldots, \xi^{a_r}_N], \end{align} where $N$ is the number of candidate functions and $r$ is the state space size; nonzero elements in each column $\bm{\xi}_j$ indicate which terms are active in the dynamics of $\dot{a}_j(t)$. The matrix of coefficients $\bm{\Xi}$ is determined via the following sparse optimization problem: \begin{align} \label{eq:optimization_vanilla} \argmin_{\bm{\xi}}&\left[ \frac{1}{2}\|\bm{\Theta}\bm{\xi}-\dot{\bm{X}}\|^2 + \lambda \|\bm{\xi}\|_0\right]. \end{align} We deviate from the typical SINDy definitions by explicitly formulating the problem in terms of the vectorized $\bm{\xi} \in \mathbb{R}^{rN}$, $\bm{\Theta}(\bm{X}) \in \mathbb{R}^{rM\times rN}$, and $\dot{\bm{X}} \in \mathbb{R}^{rM}$. The first term in the SINDy optimization problem in Eq.~\eqref{eq:optimization_vanilla} fits a system of ODEs $\bm{\Theta}\bm{\xi}$ to the given data in $\dot{\bm{X}}$. The $\|\bm{\xi}\|_0$ term counts the number of nonzero elements of $\bm{\Xi}$; however, it is not technically a norm and leads to a non-convex optimization, so several convex relaxations have been proposed~\cite{Brunton2016pnas,Rudy2017sciadv,champion2020unified}. Following the original SINDy algorithm, Loiseau et al.~\cite{loiseau2018constrained} introduced an extension that directly enforces constraints on the coefficients in $\bm{\Xi}$. In particular, they enforced energy-preserving, skew-symmetry constraints on the quadratic terms for incompressible fluid flows, demonstrating improved model performance over standard Galerkin projection. The quadratic library in Eq.~\eqref{eq:SINDyExpansion} has ${N = \frac{1}{2}(r^2+3r)+1}$ terms. With the energy-preserving structure, it can be shown that the number of constraints is ${p = r(r+1)(r+2)/6}$ and therefore the number of free parameters is $rN - p = 2p$. This constraint is encoded as $\bm{C}\bm{\xi} = \bm{d}$, $\bm{C} \in \mathbb{R}^{p\times rN}$, $\bm{d} \in \mathbb{R}^{p}$, and the constrained SINDy algorithm solves the following minimization, \begin{align} \label{eq:optimization_constrained} \argmin_{\bm{\xi}}&\left[ \frac{1}{2}\|\bm{\Theta}\bm{\xi}-\dot{\bm{X}}\|^2_2 + \lambda \|\bm{\xi}\|_1+ \delta_0(\bm{C}\bm{\xi} - \bm{d})\right]. \end{align} In general we can use nonconvex regularizers that promote sparsity in $\bm{\xi}$, but the trapping SINDy modifications below require a convex regularizer such as the $L^1$ norm.
The third term $\delta_0$ is an indicator function that encodes the constraint $\bm{C}\bm{\xi} = \bm{d}$, guaranteeing the energy-preserving structure in the quadratic nonlinearity is retained in the identified model. There are also variants of the constrained SINDy objective function in Eq.~\eqref{eq:optimization_constrained} that utilize sparse relaxed regularized regression (SR3) in order to improve performance~\cite{zheng2019unified,champion2020unified}. \subsection{Proposed trapping SINDy algorithm}\label{sec:sindy_trapping} Model constraints in system identification, such as global conservation laws or other physical considerations, often result in improved models, but do not generally guarantee global stability. Here, we will additionally promote globally stable models that exhibit a monotonically trapping region. Recall from Thm.~\ref{th:trapping_theorem} that $\bm{m}$ is an arbitrary, constant vector, of the same state size as $\bm{a}$, that specifies the center of a possible trapping region. Stability promotion is then achieved by jointly determining the sparse model coefficients $\bm{\Xi}$ and state vector $\bm{m}$ such that $\bm{A}^S$ from Eq.~\eqref{eq:def_AS_LS_dm} is negative definite. To proceed with our trapping SINDy formulation, we must relate the model coefficients in $\bm{\xi}$ to the matrix $\bm{A}^S$ appearing in the trapping theorem. We first define the projection operators ${\bm{P}^L\in \mathbb{R}^{r \times r \times rN}}$, ${\bm{P}^Q\in \mathbb{R}^{r \times r \times r \times rN}}$, and ${\bm{P}\in \mathbb{R}^{r \times r \times rN}}$. The operator $\bm{P}^L$ projects out the symmetric part of the linear coefficients through $\bm{L}^S = \bm{P}^L\bm{\xi}$. The same is true for the quadratic coefficients, $\bm{Q} = \bm{P}^Q\bm{\xi}$. The operator ${\bm{P} = \bm{P}^L - \bm{m}^T\bm{P}^Q}$ provides a concise representation of $\bm{A}^S$ through the following equation: \begin{align} \label{eq:W_def} \bm{A}^S = \bm{L}^S - \bm{m}^T\bm{Q} = \bm{P}\bm{\xi} = (\bm{P}^L-\bm{m}^T\bm{P}^Q)\bm{\xi}. \end{align} We now phrase a tentative version of the trapping SINDy optimization problem, in analogy to the constrained SINDy optimization in Eq.~\eqref{eq:optimization_constrained}, that incorporates an additional loss term to reduce the maximal (most positive) eigenvalue $\lambda_1$ of the real, symmetric matrix $\bm{A}^S$: \begin{align} \label{eq:optimization_alternate} \argmin_{\bm{\xi},\bm{m}}&\left[ \frac{1}{2}\|\bm{\Theta}\bm{\xi}-\dot{\bm{X}}\|^2_2 + \lambda \|\bm{\xi}\|_1+ \delta_0(\bm{C}\bm{\xi} - \bm{d}) + \frac{1}{\eta}\lambda_1\right]. \end{align} Although $\lambda_1$ is a convex function of the matrix elements~\cite{overton1988minimizing}, $(\bm{P}^L - \bm{m}^T\bm{P}^Q)\bm{\xi}$ is not affine in $\bm{\xi}' = [\bm{\xi}, \bm{m}]$. The result is that this new term is not convex, but {\it convex composite}. It is possible to approximately solve this problem with a variable projection technique, where we essentially treat $\bm{\xi}$ and $\bm{m}$ as independent, solve the convex problem in $\bm{\xi}$, and then substitute $\bm{\xi}^*$, the solution at each iteration, into the optimization for $\bm{m}$. In practice this algorithm performs fairly well, although the convergence properties are unclear. Eq.~\eqref{eq:optimization_alternate} is also amenable to other approaches, such as Gauss-Newton~\cite{burke1995gauss} or the prox-linear algorithm~\cite{drusvyatskiy2019efficiency}, because $\lambda_1$ is a convex function and $\bm{P}\bm{\xi}$ is smooth in $\bm{m}$ and $\bm{\xi}$. 
Although we institute a modified algorithm below, these convex-composite approaches are a promising future direction for effectively solving this nonconvex optimization problem. In order to produce an algorithm with better performance and better understood convergence properties, we adopt a relax-and-split approach~\cite{zheng2020relax}, similar to the approach taken in Champion et al.~\cite{champion2020unified}. We introduce an auxiliary variable $\bm{A}$ that represents the projection of $\bm{A}^S=\bm{P}\bm{\xi}$ onto the space of negative definite matrices, and introduce two new terms in the optimization: \begin{align} \label{eq:optimization_wAm} \argmin_{\bm{\xi},\bm{m},\bm{A}}&\left[ \frac{1}{2}\|\bm{\Theta}\bm{\xi}-\dot{\bm{X}}\|^2_2 + \lambda \|\bm{\xi}\|_1+ \delta_0(\bm{C}\bm{\xi} - \bm{d}) + \frac{1}{2\eta}\|\bm{P}\bm{\xi}-\bm{A}\|^2_2 +\delta_{\mathcal{I}}(\bm{\Lambda})\right]. \end{align} The new least-squares term enforces a ``soft'' constraint (or bias) towards $\bm{A}^S=\bm{P}\bm{\xi}$ being negative definite by minimizing the difference between $\bm{P}\bm{\xi}$ and its projection into the space of negative definite matrices. The auxiliary variable $\bm{A}$ is updated to approximate $\bm{A}^S=\bm{P}\bm{\xi}$, and then, through the $\delta_{\mathcal{I}}$ term, enforced to be negative definite by requiring that the diagonalized matrix $\bm{\Lambda} = \bm{V}^{-1}\bm{A}\bm{V}$ lies in $\mathcal{I} = (-\infty, -\gamma\,]$, $\gamma > 0$. Directly enforcing $\bm{P}\bm{\xi}$ to be negative definite tends to wreck the model fit to the data. Instead, the auxiliary variable $\bm{A}$ in Eq.~\eqref{eq:optimization_wAm} allows the algorithm to accurately fit the data with $\bm{\xi}$ and then relax the coefficients towards a negative definite $\bm{A}^S$ to promote global stability. This flexible formulation also allows $\bm{A}$, our proxy for the projection of $\bm{P}\bm{\xi}$ onto the space of negative definite matrices, to vary, and therefore fit the particular eigenvalues of the system in question. In other words, the proposed approach pushes $\bm{A}^S$ into the space of negative definite matrices in $\mathbb{R}^{r\times r}$ with minimal assumptions about the eigenvalues, only assuming that they are negative. Contrast our algorithm to a more restrictive approach that prescribes an $\bm{A}$, meaning we already know a set of negative eigenvalues of $\bm{P\xi}$ that is compatible with the data. A description of each of the hyperparameters $\lambda$, $\eta$, and $\gamma$, is provided in Table~\ref{tab:hyperparams}. Note that Eq.~\eqref{eq:optimization_wAm} is not convex in $\bm{A}$, and this is the most challenging aspect of this formalism. Now that we have defined our problem in Eq.~\eqref{eq:optimization_wAm}, we need to solve it. If we denote the convex part of the optimization, \begin{align} \label{eq:convex_optimization_piece} F(\bm{\xi},\bm{m},\bm{A}) = \|\bm{\Theta}\bm{\xi}-\dot{\bm{X}}\|^2_2/2 + \lambda \|\bm{\xi}\|_1+ \delta_0(\bm{C}\bm{\xi} - \bm{d}) + \|\bm{P}\bm{\xi}-\bm{A}\|^2_2/2\eta, \end{align} and fix initial guesses for $\bm{m}$ and $\bm{A}$, then we can define the solution vector $\bm{\xi}^*$ through \begin{align} \label{eq:optimization_xi} \bm{\xi}^* = \argmin_{\bm{\xi}}\left[F(\bm{\xi}, \bm{m}, \bm{A})\right]. 
\end{align} \begin{table}[t] \centering \begin{tabular}{ |>{\columncolor[gray]{0.85}}p{0.15cm}|p{15cm}| } \hline \rowcolor{gray!30} \multicolumn{2}{|c|}{Trapping SINDy hyperparameters} \\ \hline $\lambda$ & Specifies the strength of sparsity-promotion through the regularizer $R(\bm{\xi})$. $\lambda=0$ already works well for low-dimensional systems because the $\|\bm{P}\bm{\xi} - \bm{A}\|^2_2$ term promotes stability. \\ \hline $\eta$ & Specifies how strongly to push the algorithm towards models with negative definite $\bm{A}^S$. If $\eta \gg 1$, $\bm{\xi}^*$ is unaffected by the minimization over $\bm{m}$. If $\eta \ll 1$, the problem is increasingly nonconvex. \\ \hline $\gamma$ & Determines how far to push the eigenvalues of $\bm{A}^S$ towards being negative definite. Typically $\gamma \lesssim 0.1$ works for a variety of problems regardless of the true eigenvalues of $\bm{A}^S$. \\ \hline \end{tabular} \caption{Description of the trapping SINDy hyperparameters. } \label{tab:hyperparams} \vspace{-0.2in} \end{table} If $\lambda = 0$, $\bm{\xi}^*$ is structurally identical to the $\bm{\xi}^*$ in Champion et al.~\cite{champion2020unified}: \begin{align} \label{eq:H} \bm{H} &= (\bm{\Theta}^T\bm{\Theta} + \frac{1}{\eta}\bm{P}^T\bm{P})^{-1}, \\ \label{eq:w_update} \bm{\xi}^* &= \bm{H}\left[\bm{I} - \bm{C}^T(\bm{CHC}^T)^{-1}\bm{CH}\right]\left[\bm{\Theta}^T\dot{\bm{X}} + \frac{1}{\eta}\bm{P}^T\bm{A}\right] + \bm{H}\bm{C}^T(\bm{CHC}^T)^{-1}\bm{d}. \end{align} $\bm{H}$ is positive definite, $\bm{I}$ is the identity matrix, and $\bm{C}\bm{\xi}^* = \bm{d}$ can be verified using Eq.~\eqref{eq:w_update}. The minimization over $\bm{\xi}$ with $\lambda \neq 0$ is still convex but not analytically tractable as in Eq.~\eqref{eq:w_update}. Since it is convex, it can be solved with standard convex optimization libraries such as CVXPY~\cite{diamond2016cvxpy}. It can also be shown to reduce to a constrained quadratic program over the unit box with a positive semidefinite cost matrix. A barrier to this route is that typical numerical solvers either assume that the quadratic cost matrix is sparse or positive definite. Neither assumption is true here. Now that we have solved the minimization over $\bm{\xi}$, we can use prox-gradient descent on $(\bm{m}, \bm{A})$; each algorithm iteration we alternate between solving for $\bm{\xi}^*$ and solving for $(\bm{m}^*, \bm{A}^*)$. Again, we can think about this problem as a variable projection~\cite{aravkin2016variable,zhang2020offline}, which is a value function optimization over the remaining variables $(\bm{m},\bm{A})$. To make this viewpoint more precise, we define \begin{align} \widetilde F(\bm{m},\bm{A}) = F(\bm{\xi}^*,\bm{m},\bm{A}), \end{align} The problem we want to solve is now written more simply as \[ \argmin_{\bm{m},\bm{A}} \left[\widetilde F(\bm{m},\bm{A}) + \delta_{ \mathcal{I}}(\bm{\Lambda})\right]. \] We apply prox-gradient descent to this nonconvex problem, so that \begin{align} \label{eq:mA_update} \bm{m}^* = \bm{m} - \alpha_m\nabla_{\bm m} \widetilde F(\bm{m},\bm{A}), \qquad \bm{A}^* = \text{proj}_{\mathcal{I}}\left[\bm{A} - \alpha_A\nabla_{\bm A} \widetilde F(\bm{m},\bm{A})\right], \end{align} where $\alpha_m$ and $\alpha_A$ are step sizes. All that remains is to compute the gradients of the value function $\widetilde F$, \begin{align} \nabla_{\bm A} \widetilde F(\bm{m},\bm{A}) = \frac{1}{\eta}(\bm{A}-\bm{P}\bm{\xi}^*), \qquad \nabla_{\bm m} \widetilde F(\bm{m},\bm{A}) = \frac{1}{\eta}\bm{P}^Q\bm{\xi}^*(\bm{A}-\bm{P}\bm{\xi}^*). 
\end{align} These are Lipschitz continuous functions with Lipschitz constants $L_A$ and $L_m$, and the step sizes must satisfy \begin{align} \label{eq:lipshitz_m} \alpha_A \leq \frac{1}{L_A} \leq \eta, \qquad \alpha_m \leq \frac{1}{L_m} \leq \frac{\eta}{\|(\bm{P}^Q\bm{\xi}^*)_{ijk}(\bm{P}^Q\bm{\xi}^*)_{ljk}\|_\text{F}} \end{align} to guarantee convergence of fixed-step-size prox-gradient descent~\cite{attouch2013convergence}. While the denominator in Eq.~\eqref{eq:lipshitz_m} varies with the update in $\bm{\xi}$, in practice one can reduce $\alpha_m$ until convergence is found. The full trapping SINDy optimization is illustrated in Algorithm~\ref{algo}. \begin{algorithm*}[t] \caption{Trapping SINDy} \label{algo} \begin{algorithmic}[1] \INPUT{Numerical data $\dot{\bm{X}}$ and optional initial guesses for $\bm{m}$ and $\bm{A}$.} \OUTPUT{Optimal model coefficients $\bm{\xi}^*$ and shift vector $\bm{m}^*$.} \Procedure{SINDy}{$\dot{\bm{X}}$, $\lambda$, $\eta$, $\gamma$} \State Construct matrices $\bm{\Theta}(\bm{X})$, $\bm{P}$, $\bm{C}$, and $\bm{d}$. \Comment{Initialize variables} \State $\textbf{while}\,\,\,\, |\bm{\xi}_k - \bm{\xi}_{k+1}| > \epsilon_\text{tol}^\xi \text{ and } |\bm{m}_k - \bm{m}_{k+1}| > \epsilon^m_\text{tol}$ \Comment{Begin iteration loop} \State \quad $\bm{\xi}_{k+1} \Longleftarrow \argmin_{\bm{\xi}_k}\left[F(\bm{\xi}_k,\bm{m}_k,\bm{A}_k)\right]$, \Comment{Convex minimization for $\bm{\xi}_{k+1}$} \State \quad $\bm{V}_{k+1}\bm{\Lambda}_{k+1}(\bm{V}_{k+1})^{-1} \Longleftarrow \bm{A}_k - \alpha_A\nabla_{\bm A} \widetilde F(\bm{m},\bm{A})|_{m_k,A_k}$, \Comment{Prox-gradient step for $\bm{A}$} \State \quad $\bm{A}_{k+1} \Longleftarrow \bm{V}_{k+1}\text{proj}_{\mathcal{I}}\left[\bm{\Lambda}_{k+1} \right](\bm{V}_{k+1})^{-1}$, \Comment{Project $\bm{A}$ into $\mathcal{I}$, rotate into $\bm{P\xi}$ basis} \State \quad $\bm{m}_{k+1} \Longleftarrow \bm{m}_k - \alpha_m\nabla_{\bm m} \widetilde F(\bm{m},\bm{A})|_{m_k,A_k}$, \Comment{Prox-gradient step for $\bm{m}$} \EndProcedure \end{algorithmic} Note that inequalities~\eqref{eq:lipshitz_m} should be satisfied, and there tends to be a sweet spot for $\eta$. It is often useful to start with $\eta \gg 1$ and then reduce $\eta$ until the model coefficients are significantly affected. \end{algorithm*} $\epsilon_\text{tol}^\xi$ and $\epsilon_\text{tol}^m$ are convergence tolerances. The $\bm{V}_{k+1}$ are the eigenvectors of $\bm{P}\bm{\xi}_{k+1}$ and are used to transform $\bm{A}$ into the same basis as $\bm{P}\bm{\xi}_{k+1}$. An example of the algorithm iterating on noisy data from the chaotic Lorenz system is shown in Fig.~\ref{fig:SINDy_progress}, demonstrating how the algorithm transitions from a poor initial guess that decays to a fixed point to a stable model converging to the correct attractor. We also implement an optional FISTA method~\cite{beck2009fast,nesterov2013gradient} for reducing the convergence time in the $(\bm{m},\bm{A})$ optimization. Algorithm~\ref{algo} is computationally intensive, but future work could parallelize it for speed, following other SINDy variants~\cite{kaheman2020sindy}. Initial guesses for $\bm{m}$ and $\bm{A}$ also facilitate continuation of previous optimization runs. Along with these methods, we also implement the $\lambda_1$ variant of the trapping algorithm in Eq.~\eqref{eq:optimization_alternate} in the open-source PySINDy code~\cite{silva2020pysindy}.
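In practice, fitting a trapping SINDy model through PySINDy looks roughly like the sketch below. The optimizer is exposed as \texttt{TrappingSR3} in recent releases, but the exact keyword arguments may differ across versions, so this should be read as an illustration of the workflow rather than a definitive interface.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
import pysindy as ps

# Lorenz training data (parameters as in the noisy Lorenz example below).
def lorenz(t, a, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = a
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt = 0.005
t = np.arange(0, 100, dt)
X = solve_ivp(lorenz, (t[0], t[-1]), [1, -1, 20], t_eval=t).y.T

# TrappingSR3 implements Algorithm 1; keyword names may vary by version.
opt = ps.TrappingSR3(eta=0.1)
model = ps.SINDy(optimizer=opt, feature_library=ps.PolynomialLibrary(degree=2))
model.fit(X, t=dt)
model.print()
\end{verbatim}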
\begin{figure}[t] \vspace{0.2in} \begin{flushright} \begin{overpic}[width=.97\linewidth]{SINDy_progress.jpg} \put(-1, 10){iteration:} \LARGE{ \put(9.75, 4){$\bm{\cdots}$} \put(85.75, 4){$\bm{\cdots}$}} \normalsize \put(8, 10){0} \put(18, 10){151} \put(27, 10){152} \put(35.5, 10){153} \put(44, 10){154} \put(53, 10){155} \put(62, 10){156} \put(70.5, 10){157} \put(79.5, 10){158} \put(94, 10){500} \end{overpic} \end{flushright} \vspace{-0.225in} \caption{Illustration of trapping SINDy progress on noisy Lorenz data. The minimization results in the transition from a poor initial guess to identification of the correct attractor dynamics.} \label{fig:SINDy_progress} \vspace{-0.2in} \end{figure} A key insight into the trapping algorithm is that the energy-preserving constraint $\bm{C}\bm{\xi} = \bm{d}$ is non-negotiable. Although in practice small errors in $\bm{C}\bm{\xi} = \bm{d}$ do not significantly affect the optimization problem, the $\|\bm{P}\bm{\xi} - \bm{A}\|^2_2$ term in the optimization loses its physical interpretation if the coefficients are not exactly energy-preserving. Thus, the goal is to satisfy $\bm{C}\bm{\xi} = \bm{d}$ \textit{exactly}, and then to push the model towards a more refined model that exhibits a trapping region, potentially at the expense of the fit to the data (this can also mitigate overfitting). There tends to be a ``sweet spot'' regime for the value of $\eta$. If $\eta^{-1}\|\bm{P}\bm{\xi}-\bm{A}\|^2_2 \ll \|\bm{\Theta}\bm{\xi} - \dot{\bm{X}}\|^2_2$, then $\bm{\xi}^*$ is essentially unaffected by the minimizations over $\bm{m}$ and $\bm{A}$. In practice, this means that poor initial guesses for $\bm{\xi}^*$ do not improve as the full optimization problem is solved. In the opposite extreme, $\eta^{-1}\|\bm{P}\bm{\xi}-\bm{A}\|^2_2 \gg \|\bm{\Theta}\bm{\xi} - \dot{\bm{X}}\|^2_2$, the optimization in Eq.~\eqref{eq:optimization_wAm} is increasingly nonconvex and potentially pulls $\bm{\xi}^*$ far away from the data. Finding the $\eta$ regime where updating $\bm{m}$ perturbs $\bm{\xi}^*$, instead of leaving $\bm{\xi}^*$ unaffected or badly distorted, requires scanning over $\eta$. \section{\label{sec:results}Results} We now investigate the utility of our trapping SINDy algorithm for identifying stable, sparse, nonlinear models of a number of canonical systems. These examples illustrate that it is possible to both effectively discover stable models that exhibit trapping regions and improve the discovery of systems that do not satisfy Thm.~\ref{th:trapping_theorem} or the requirement of effective nonlinearity. For each system, we train SINDy on a single trajectory with a random initial condition and evaluate the model on a different trajectory of the same temporal duration with a new random initial condition. It is difficult to quantify model performance for chaotic systems, such as the Lorenz system, where lobe-switching is extremely sensitive to initial conditions and the coefficients of the identified model, and for systems with transients, for which the precise timing of instability must be matched to achieve the correct phase. A reasonable definition for the model quality, for models with closed forms, is the relative Frobenius error in the model coefficients, \begin{align} \label{eq:model_error} \text{E}_\text{m} = \frac{\|\bm{\Xi}_\text{True}- \bm{\Xi}_\text{SINDy}\|_\text{F}}{\|\bm{\Xi}_\text{True}\|_\text{F}}.
\end{align} When appropriate, we also report a far more demanding relative prediction error, \begin{align} \label{eq:prediction_error} \text{E}_\text{pred} = \frac{\|\bm{X}_\text{True} - \bm{X}_\text{SINDy}\|_F}{ \|\bm{X}_\text{True}\|_F}. \end{align} Table~\ref{tab:results_summary} summarizes the sampling, hyperparameters, and identified trapping regions for each example discussed in Sections~\ref{sec:results_meanfield}--\ref{sec:results_vonKarman}; it is intended to be instructive rather than exhaustive. For clarity, the training and testing trajectories used to generate this table do not have added noise, although the Fourier modes from the Burgers' equation and the POD modes from the von K\`arm\`an street are obtained from direct numerical simulation (DNS), and consequently contain minor numerical noise; the performance on noisy data will be explored further in Sec.~\ref{sec:results_Lorenz}. To compare trapping region sizes $R_m$ across different examples, we also report $R_\text{eff} = R_m / \sqrt{\sum_{i=1}^r \overline{y}_i^2}$, which is normalized to the approximate radius of the training data. The denominator denotes the root-mean-square of the temporal average of each component of the trajectory. \begin{table}[h] \centering \begin{tabular}{ |>{\columncolor[gray]{0.85}}p{2.07cm}|p{0.35cm}|p{1.45cm}|p{1cm}|p{0.4cm}|p{0.5cm}|p{0.5cm}|p{2cm}|p{0.5cm}|p{0.9cm}|p{0.8cm}|p{0.9cm}| } \hline \rowcolor{gray!30} & r & $\Delta t$ & M & $\lambda$ & $\eta$ & $\gamma$ & $\bm{m}^*$ & $R_m$ & $R_\text{eff}$ & $\lambda_1$ & $\text{E}_\text{m} (\%)$ \\ \hline Mean field & 3 & $10^{-2}$ & $50000$ & 0 & $10^{10}$ & 1 & $[0, 0, 1.3]$ & 1.3 & 218 & -1 & $10^{-3}$ \\ \hline Oscillator & 3 & $5\times 10^{-3}$ & $50000$ & 0 & $10^{8}$ & 0.1 & $[0, -0.9, 0.4]$ & 300 & 597 & $-0.01$ & $10^{-2}$\\ \hline Lorenz & 3 & $5\times 10^{-3}$ & $50000$ & 0 & 0.1 & 1 & $[-1.2,0.1,38]$ & $106$ & 4.4 & $-1$ & $0.3$\\ \hline Triadic MHD & 6 & $10^{-3}$ & $50000$ & 0 & $10^3$ & 0.1 & $[0, ...]$ & $-$ & $-$ & $0$ & $10^{-4}$\\ \hline Burgers' Eq. & 10 & 0.1 & 30000 & 0 & 500 & 0.1 & $[-0.2,0,...]$ & $-$ & $-$ & 0.1 & $-$ \\ \hline Von K\`arm\`an & 5 & 0.1 & 30000 & 0.1 & 1 & 0.1 & $[-1.2, ..., 1.1]$ & $29$ & $17$ & $-0.1$ & $-$\\ \hline \end{tabular} \caption{Description of the sampling, trapping SINDy hyperparameters, and identified trapping region for the dynamical systems examined in Section~\ref{sec:results}. Trajectory data does not include any added noise, so $\lambda = 0$ works for most of the systems. The SINDy models are identified from a single trajectory. These parameters produce reasonable results for these systems, but a hyperparameter scan can lead to further improvements. } \label{tab:results_summary} \vspace{-0.15in} \end{table} \subsection{Mean field model\label{sec:results_meanfield}} Often the trajectories of a nonlinear dynamical system, whose linear part has some stable directions, will approach a slow manifold of reduced dimension with respect to the full state space.
As an example of this behavior, consider the following linear-quadratic system originally proposed by Noack et al.~\cite{noack2003hierarchy} as a simplified model of the von Karman vortex shedding problem explored further in Sec.~\ref{sec:results_vonKarman}: \begin{align} \label{eq:mean-field} \frac{d}{dt}\begin{bmatrix} x \\ y \\ z \\ \end{bmatrix} = \begin{bmatrix} \mu & -1 & 0 \\ 1 & \mu & 0 \\ 0 & 0 & -1 \\ \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} + \begin{bmatrix} - xz \\ - yz \\ x^2 + y^2 \end{bmatrix}. \end{align} Systems of this form commonly arise in PDEs with a pair of unstable eigenmodes represented by $x$ and $y$. The third variable $z$ models the effects of mean-field deformations due to nonlinear self-interactions of the instability modes. The system undergoes a supercritical Hopf bifurcation at $\mu = 0$; for $\mu \ll 1$ trajectories quickly approach the parabolic manifold defined by ${z = x^2 + y^2}$. All solutions asymptotically approach a stable limit cycle on which $z = x^2 + y^2 = \mu$. It is enough to notice that $\bm{m} = [0, 0, \mu+\epsilon]$, $\epsilon > 0$ produces \begin{align} \bm{A}^S = \bm{L}^S - \bm{m}^T\bm{Q} = \begin{bmatrix} -\epsilon & 0 & 0 \\ 0 & -\epsilon & 0 \\ 0 & 0 & -1 \end{bmatrix}, \end{align} so this system exhibits a trapping region. We illustrate a stable and accurate model identified by our trapping SINDy algorithm in Fig.~\ref{fig:meanfield_model}. This system is of particular interest because it is a prototypical example of how quadratic interactions in a multi-scale system can give rise to effective higher-order nonlinearities. If the dynamics are restricted to the slow manifold, the system reduces to the cubic Hopf normal form~\cite{noack2003hierarchy,guckenheimer_holmes} \begin{align} \label{eq:mean-field-slow} \frac{d}{dt}\begin{bmatrix} x \\ y \\ \end{bmatrix} = \begin{bmatrix} \mu - (x^2 + y^2) & -1 \\ 1 & \mu - (x^2 + y^2) \\ \end{bmatrix}\begin{bmatrix} x \\ y \\ \end{bmatrix}. \end{align} Systems of this type arise in weakly nonlinear pattern-forming systems and are often called Stuart-Landau equations. In this case, the nonlinear interactions are no longer energy-preserving, since the manifold restriction removes the fast, dissipative degree of freedom. We might intuitively expect that this type of manifold reduction would inherit the trapping properties of the underlying system, but to our knowledge a general theory of such situations has not yet been worked out, even for the quadratic energy-preserving case. \begin{figure}[t] \begin{subfigure}[b]{0.48\textwidth} \raggedleft \begin{overpic}[width=0.99\linewidth]{meanfield_summary.pdf} \end{overpic} \caption{Trapping SINDy model (black) of a mean field system trajectory (red) with $\mu = 0.01$ and initial condition $[\mu, \mu, 0]$. The trajectory is shown within the estimated trapping region and ellipsoid where $\dot{K} \geq 0$. The prediction error is $\text{E}_\text{pred} \approx 0.6\%$.} \label{fig:meanfield_model} \end{subfigure} \hspace{0.2in} \begin{subfigure}[b]{0.48\textwidth} \raggedright \begin{overpic}[width=0.99\linewidth]{oscillator_summary.pdf} \end{overpic} \caption{Same illustration for the atmospheric oscillator with random initial condition chosen from the unit ball. There is large scale separation in this system, so that $|\lambda_1| \ll |\lambda_2|, |\lambda_3|$. This leads to an overestimate of the trapping region size. 
The prediction error is $\text{E}_\text{pred} \approx 6\%$.} \label{fig:oscillator_model} \end{subfigure} \caption{Identified models and trapping regions for the mean field and atmospheric oscillator systems.} \vspace{-0.15in} \end{figure} \subsection{Atmospheric oscillator model\label{sec:results_oscillator}} Here we examine a more complicated Lorenz-like system of coupled oscillators that is motivated by atmospheric dynamics: \begin{align} \label{eq:oscillator} \frac{d}{dt}\begin{bmatrix} x \\ y \\ z \\ \end{bmatrix} = \begin{bmatrix} \mu_1 & 0 & 0 \\ 0 & \mu_2 & \omega \\ 0 & -\omega & \mu_2 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} + \begin{bmatrix} \sigma xy \\ \kappa yz + \beta z^2 - \sigma x^2 \\ - \kappa y^2 - \beta yz \end{bmatrix}. \end{align} For comparison, we use the parameters in Tuwankotta et al.~\cite{tuwankotta2006chaos}, $\mu_1 = 0.05$, $\mu_2 = -0.01$, $\omega = 3$, $\sigma = 1.1$, $\kappa = -2$, and $\beta = -6$, for which a limit cycle is known to exist. The trapping SINDy algorithm finds $\bm{m}$ such that $\bm{A}^S$ is negative definite for a wide range of parameter and hyperparameter choices, and accurate model results are illustrated in Fig.~\ref{fig:oscillator_model} alongside the mean-field model results. So far, we have illustrated that the trapping algorithm successfully produces accurate and provably stable models on simple systems that exhibit well-behaved attractors. In the next sections, we investigate progressively noisier (Section~\ref{sec:results_Lorenz}) and higher-dimensional (Sections~\ref{sec:results_mhd}--\ref{sec:results_vonKarman}) systems that typically provide significant challenges for model discovery algorithms. \subsection{Noisy Lorenz attractor}\label{sec:results_Lorenz} The Lorenz 1963 system~\cite{lorenz1963deterministic} is among the simplest systems exhibiting chaotic dynamics; Lorenz developed it to model thermal convection in the atmosphere, based on computer simulations by his graduate students Ellen Fetter and Margaret Hamilton: \begin{align} \frac{d}{dt}\begin{bmatrix} x \\ y \\ z \\ \end{bmatrix} &= \begin{bmatrix} -\sigma & \sigma & 0 \\ \rho & -1 & 0 \\ 0 & 0 & -\beta \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} + \begin{bmatrix} 0 \\ -xz \\ xy \end{bmatrix}. \end{align} For this system, it is possible to write $\bm{A}^S$ explicitly as \begin{align} \bm{A}^S = \begin{bmatrix} -\sigma & \frac{1}{2}(\rho+\sigma - m_3) & \frac{1}{2}m_2 \\ \frac{1}{2}(\rho+\sigma - m_3) & -1 & 0 \\ \frac{1}{2}m_2 & 0 & -\beta \end{bmatrix}. \end{align} For Lorenz's choice of parameters, $\sigma = 10$, $\rho = 28$, $\beta = 8/3$, this system is known to exhibit a stable attractor. For $\bm{m} = [0,m_2,\rho+\sigma]$ ($m_1$ does not contribute to $\bm{A}^S$ so we set it to zero), \begin{align} \bm{A}^S &= \begin{bmatrix} -\sigma & 0 & \frac{1}{2}m_2 \\ 0 & -1 & 0 \\ \frac{1}{2}m_2 & 0 & -\beta \end{bmatrix}, \qquad \lambda_1 = -1, \qquad \lambda_{\pm} = -\frac{1}{2}\left[\beta+\sigma \mp \sqrt{m_2^2 + (\beta-\sigma)^2}\right], \end{align} so that $\lambda_{\pm} < 0$ if and only if $-2\sqrt{\sigma\beta} < m_2 < 2\sqrt{\sigma\beta}$. Our algorithm successfully identifies the optimal $\bm{m}$ and recovers these inequality bounds on $m_2$ for stability. As this analysis is invariant to $m_1$, in principle the trapping region is given by a cylinder, extruded in the $m_1$ direction, rather than a sphere. A quick numerical check of this closed-form analysis is sketched below; we then demonstrate further improvements in model quality in the presence of noise.
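The following minimal NumPy sketch (values illustrative) confirms that $\bm{A}^S$ loses negative definiteness exactly as $|m_2|$ crosses $2\sqrt{\sigma\beta}$:
\begin{verbatim}
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
bound = 2 * np.sqrt(sigma * beta)            # predicted bound on |m_2|
for m2 in [0.9 * bound, 1.1 * bound]:        # just inside and outside
    As = np.array([[-sigma, 0.0, m2 / 2],
                   [0.0, -1.0, 0.0],
                   [m2 / 2, 0.0, -beta]])
    # max eigenvalue: negative inside the bound, positive outside
    print(m2, np.linalg.eigvalsh(As).max())
\end{verbatim}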
We train unconstrained, constrained, and trapping SINDy models four times; the data for each is a single Lorenz attractor with four different noise instantiations. Then we test the performance of the resulting models with a random initial condition in $[-10,10]\times[-10, 10] \times[-10, 10]$. For direct comparison, we use the $L^1$ regularizer for each method. Fig.~\ref{fig:lorenz_comparison} illustrates the improved performance of our trapping SINDy algorithm over the constrained SINDy algorithm on noisy Lorenz data for varying threshold levels $\lambda = \{0$, $0.01$, $0.1\}$. The unconstrained method is not pictured: at all values of $\lambda$ and most initial conditions, it overfits to the data and produces unstable, diverging models at these high noise levels. The traditional constrained SINDy variant mostly produces stable models, but its fits to the data become increasingly poor as $\lambda$ increases. In contrast, the trapping version continues to produce stable models that lie on the correct attractor. In this way, the additional optimization loss terms that promote stable models provide both a trapping region of known size and additional robustness to noise. This holds even when the models appear otherwise stable, as with the many constrained SINDy models that incorrectly decay to a fixed point. \begin{figure*}[] \centering \begin{overpic}[width=0.82\linewidth]{noisy_lorenz.pdf} \put(13,50){$\lambda = 0$} \put(46,50){$\lambda = 0.01$} \put(80,50){$\lambda = 0.1$} \put(-1, 30){\begin{rotate}{90}constrained\end{rotate}} \put(-1, 7){\begin{rotate}{90}trapping\end{rotate}} \end{overpic} \caption{Comparison between the constrained SINDy (magenta) and trapping SINDy (black) results for the Lorenz system using three different values of the sparsity-promotion strength $\lambda$. Unconstrained SINDy results are not pictured because most of the models diverge. Each model is trained on a single Lorenz attractor with noise sampled from $\mathcal{N}(0, 4)$ and an initial condition of $[1,-1,20]$ (blue). The illustrations depict the model performance on data evolved from four random initial conditions between $[-10,10]$ (this testing data is not shown but the attracting set is unchanged). Trapping SINDy produces stable models that follow the underlying attractor for all values of $\lambda$.} \label{fig:lorenz_comparison} \vspace{-0.1in} \end{figure*} \subsection{Triadic MHD model\label{sec:results_mhd}} Magnetohydrodynamic systems exhibit quadratic nonlinearities that are often energy-preserving with typical boundary conditions. We consider a simple model of the nonlinearity in two-dimensional incompressible MHD, which can be obtained from Fourier-Galerkin projection of the governing equations onto a single triad of wave vectors.
For the Fourier wave vectors $\bm{k}_1 = (1,1)$, $\bm{k}_2 = (2,-1)$, and $\bm{k}_3 = (3,0)$ and no background magnetic field, the Carbone and Veltri~\cite{carbone1992relaxation} system is \setlength{\arraycolsep}{-1pt} \begin{align} \label{eq:simpleMHD_model} \frac{d}{dt} \begin{bmatrix} {V}_1 \\ {V}_2 \\ {V}_3 \\ {B}_1 \\ {B}_2 \\ {B}_3 \\ \end{bmatrix} = \begin{bmatrix} -2 \nu & 0 & 0 & 0 & 0 & 0 \\ 0 & -5 \nu & 0 & 0 & 0 & 0 \\ 0 & 0 & -9 \nu & 0 & 0 & 0 \\ 0 & 0 & 0 & -2 \mu & 0 & 0 \\ 0 & 0 & 0 & 0 & -5 \mu & 0 \\ 0 & 0 & 0 & 0 & 0 & -9 \mu \\ \end{bmatrix}\begin{bmatrix} V_1 \\ V_2 \\ V_3 \\ B_1 \\ B_2 \\ B_3 \end{bmatrix} + \begin{bmatrix} 4(V_2V_3 - B_2B_3) \\ -7(V_1V_3 - B_1B_3) \\ 3(V_1V_2 - B_1B_2) \\ 2(B_3V_2 - V_3B_2) \\ 5(V_3B_1 - B_3V_1) \\ 9(V_1B_2 - B_1V_2) \\ \end{bmatrix}, \end{align} where $\nu \geq 0$ is the viscosity and $\mu \geq 0$ is the resistivity. Without external forcing, this system is stable, dissipating to zero, so we consider the inviscid limit $\nu = \mu = 0$. The system is now Hamiltonian and our algorithm correctly converges to $\bm{m} = 0$, $\bm{A}^S = 0$. The results in Fig.~\ref{fig:mhd_model} provide a useful illustration that trapping SINDy converges to stable energy-preserving models even when the trapping theorem is not satisfied. These results also provide a reminder that there are a large number of dynamical systems beyond fluids, such as MHD, which may benefit from these types of techniques. The reason our algorithm converges to the correct behavior is that it is still minimizing $\dot{K}$; in this case trapping SINDy converges to $\dot{K} \approx 0$ and can make no further improvement. \begin{figure} \centering \begin{overpic}[width=0.62\linewidth]{mhd_lissajou.pdf} \put(-5, 88){$V_1$} \put(-5, 72){$V_2$} \put(-5, 55.5){$V_3$} \put(-5, 40){$B_1$} \put(-5, 25){$B_2$} \put(-5, 9){$B_3$} \put(8, -3.5){$V_1$} \put(25, -3.5){$V_2$} \put(41.5, -3.5){$V_3$} \put(57.5, -3.5){$B_1$} \put(73.5, -3.5){$B_2$} \put(90.5, -3.5){$B_3$} \end{overpic} \vspace{0.1in} \caption{The triad model for 2D inviscid MHD training data (blue, upper triangle) and a trapping SINDy model (black) capturing Hamiltonian dynamics on testing data (red, lower triangle).} \label{fig:mhd_model} \vspace{-0.1in} \end{figure} \subsection{Forced Burgers' equation\label{sec:results_burgers}} The viscous Burgers' equation has long served as a simplified one-dimensional analogue to the Navier-Stokes equations~\cite{Burgers1948, Hopf1948}. The forced, viscous Burgers' equation on a periodic domain $x \in [0,2\pi)$ is: \begin{align} \label{eq:burgers} \frac{d}{dt}{u} &= -(U + u)\partial_x u + \nu \partial_{xx}u + g(x,t), \end{align} where $\nu$ is viscosity and the constant $U$ models mean-flow advection. We project this system onto a Fourier basis and assume forcing that acts only on the largest scale, i.e., $g(x, t) = \sigma \left( a_1(t) e^{ix} + a_{-1}(t) e^{-ix} \right)$, as in Noack et al.~\cite{noack2008finite}. After Fourier projection, the evolution of the coefficients $a_k(t)$ is given by the Galerkin dynamics \begin{equation} \label{eq:burgers_galerkin} \dot{a}_k = \left( \delta_{|k|1} \sigma - \nu k^2 - ikU \right) a_k - \sum_{\ell=-r}^{r} i \ell a_{\ell} a_{k - \ell}. \end{equation} In the subcritical case $\sigma < \nu$, the origin of this system is stable to all perturbations and all solutions decay for long times. However, in the supercritical case $\sigma > \nu$, the excess energy input from the forcing cascades to the smaller dissipative scales.
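This truncated Galerkin system is straightforward to integrate directly; the following minimal sketch (Python/SciPy, mirroring the earlier Burgers' sketch; parameter values illustrative, supercritical regime) exhibits the bounded, saturating dynamics described above:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

r, nu, sigma, U = 10, 0.025, 0.1, 1.0      # supercritical: sigma > nu

def rhs(t, y):
    a = y[:r] + 1j * y[r:]                             # a_k, k = 1..r
    full = np.concatenate([np.conj(a[::-1]), [0], a])  # a_{-k} = conj(a_k)
    dadt = np.empty(r, dtype=complex)
    for k in range(1, r + 1):
        lin = (sigma * (k == 1) - nu * k**2 - 1j * k * U) * a[k - 1]
        quad = sum(1j * ell * full[r + ell] * full[r + k - ell]
                   for ell in range(-r, r + 1) if abs(k - ell) <= r)
        dadt[k - 1] = lin - quad
    return np.concatenate([dadt.real, dadt.imag])

rng = np.random.default_rng(0)
y0 = 1e-3 * rng.standard_normal(2 * r)                 # small random seed
sol = solve_ivp(rhs, (0, 200), y0, rtol=1e-8)
K = 0.5 * (sol.y[:r]**2 + sol.y[r:]**2).sum(axis=0)    # modal energy, k >= 1
print(K[-1])          # saturates at a finite value: bounded dynamics
\end{verbatim}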
The ``absolute equilibrium'' limit $\sigma = \nu = 0$ has a Hamiltonian structure; for long times the coefficients approach thermodynamic equilibrium and equipartition of energy~\cite{majda2000remarkable}. This structure does not correspond to any physical behavior of the Navier-Stokes equations, although it does approximate some properties of the inviscid Euler equations~\cite{Kraichnan1989}. Due to its rich dynamics, this modified Burgers' equation has also been investigated in the context of closure schemes for Galerkin models~\cite{noack2008finite}. We simulate the PDE in Eq.~\eqref{eq:burgers} with a high-resolution Godunov-type finite volume method using a van Leer flux limiter, implemented in the open-source Clawpack solver~\cite{clawpack}. \begin{figure*}[] \begin{subfigure}[b]{0.99\textwidth} \centering \begin{overpic}[width=0.95\linewidth]{burgers_regimes.pdf} \put(6.5,16){$a_{10}$} \put(2,2){$a_{1}$} \put(13,5.5){$a_{2}$} \put(21, 22){absolute equilibrium} \put(51.5, 22){supercritical, $\sigma > \nu$} \put(79, 22){subcritical, $\sigma < \nu$} \end{overpic} \caption{Trapping SINDy model (black) for the modified Burgers' equation in the three dynamic regimes. For improved illustration, the ground truth data (blue) is generated from the 10D Noack et al.~\cite{noack2008finite} model rather than DNS. } \label{fig:burger_regimes} \vspace{0.15in} \end{subfigure} \begin{subfigure}[b]{0.96\textwidth} \hspace{.225in}\begin{overpic}[width=0.95\linewidth]{burgers_lissajou.pdf} \normalsize \put(-3, 92.5){$a_1$} \put(-3, 83){$a_2$} \put(-3, 73){$a_3$} \put(-3, 63){$a_4$} \put(-3, 53.5){$a_5$} \put(-3, 43.5){$a_6$} \put(-3, 34){$a_7$} \put(-3, 24){$a_8$} \put(-3, 14.5){$a_9$} \put(-4, 5){$a_{10}$} \put(5, -1.5){$a_1$} \put(14.5, -1.5){$a_2$} \put(24, -1.5){$a_3$} \put(34, -1.5){$a_4$} \put(43.5, -1.5){$a_5$} \put(53.5, -1.5){$a_6$} \put(63, -1.5){$a_7$} \put(73, -1.5){$a_8$} \put(83, -1.5){$a_9$} \put(93, -1.5){$a_{10}$} \end{overpic} \vspace{0.2in} \caption{Temporal evolutions of each $(a_i,a_j)$ pair for $i,j=1,...,10$ obtained from DNS training data (blue, upper triangle), DNS testing data (red, lower triangle), and trapping SINDy prediction on both DNS datasets (black). The trapping algorithm struggles a bit with the transients, but obtains the correct attractor behavior.} \label{fig:burger_DNS} \end{subfigure} \caption{Summary of trapping SINDy performance for the forced Burgers' equation.} \label{fig:burger_results} \end{figure*} We illustrate the model performance in Fig.~\ref{fig:burger_regimes} for the subcritical case with $\sigma = 0.01$ and $\nu = 0.025$, the supercritical case with $\sigma = 0.1$ and $\nu = 0.025$, and the absolute equilibrium. In all cases $U = 1$. For the subcritical condition, all the eigenvalues of $\bm{L}^S$ are negative, and thus the algorithm finds stable models. For the supercritical condition $\sigma > \nu$, there is some subtlety. The algorithm does not converge to a negative definite $\bm{A}^S$, although it finds a solution with $\dot{K} \leq 0$. As mentioned in Section~\ref{sec:effective_nonlinearity}, this system does not exhibit effective nonlinearity. This lack of effective nonlinearity was also true for the MHD example in Section~\ref{sec:results_mhd}, since the initial condition with no magnetic field perturbation, $B_1(0) = B_2(0) = B_3(0) = 0$, remains on the purely hydrodynamic manifold. 
In the inviscid limit, we did not need to consider this subspace because the system already does not satisfy the trapping theorem by virtue of being Hamiltonian. Lastly, in the absolute equilibrium regime the trapping SINDy algorithm correctly identifies vanishing eigenvalues of $\bm{A}^S$. In practice, we find excellent models for all of the aforementioned systems, and these models are typically stable, regardless of effective nonlinearity or Hamiltonian dynamics, because the trapping SINDy algorithm minimizes $\dot{K}$. However, without effective nonlinearity we are not guaranteed to produce a stable model for every possible initial condition. In Fig.~\ref{fig:burger_DNS} we illustrate the $r=10$ model built from the DNS data in the supercritical regime with $\sigma = 0.1$, $\nu = 0.025$. It struggles a bit with the transient, but otherwise the performance is accurate. Part of the reason for the poor fit to the transient is that $\lambda = 0$ is used here. The biasing towards stability appears to mitigate some of the need for sparsity-promotion; in other words, sparsity-promotion is not necessarily needed to produce a stable model, but may be needed for a more accurate or interpretable model, since the number of coefficients in $Q_{ijk}$ is $\mathcal{O}(r^3)$ despite the constraints. Using finite $\lambda$ may improve the model further, especially the transients, but instead of further investigating this example, we conclude the results by addressing the challenging von K\'arm\'an vortex shedding behind a circular cylinder. \subsection{Von K\'arm\'an vortex street} \label{sec:results_vonKarman} Here we investigate the fluid wake behind a bluff body, characterized by a periodic vortex shedding phenomenon known as a von K\'arm\'an street. The two-dimensional incompressible flow past a cylinder is a stereotypical example of such behavior and has been a benchmark problem for Galerkin models for decades~\cite{noack2003hierarchy}. The transition from a steady laminar solution to vortex shedding is given by a Hopf bifurcation, as a pair of eigenvalues of the linearized Navier-Stokes operator cross the imaginary axis. The transient energy growth and saturation amplitude of this instability mode are of particular interest and have historically posed a significant modeling challenge. Early Galerkin models of vortex shedding, based on a POD expansion about the mean flow, captured the oscillatory behavior but were structurally unstable~\cite{Deane1991pof}. This was later resolved by Noack et al.~\cite{noack2003hierarchy}, who recognized that the transient behavior could be explained by Stuart-Landau nonlinear stability theory, in which the unsteady symmetric flow is deformed to the neutrally stable mean flow via a nonlinear self-interaction of the instability mode. In that work, an 8-mode POD basis was augmented with a ninth ``shift mode'' parameterizing this mean flow deformation. This approach was later formalized with a perturbation analysis of the flow at the threshold of bifurcation~\cite{Sipp2007jfm}. This modification encodes the intuition that the dynamics take place on the parabolic manifold associated with the Hopf bifurcation; without it, the energy of quadratic models tends to overshoot and oscillate before approaching the post-transient limit cycle.
Nevertheless, the 9-mode quadratic Galerkin model does resolve the transient dynamics, nonlinear stability mechanism, and post-transient oscillation, accurately reproducing all of the key physical features of the vortex street. Moreover, Schlegel and Noack~\cite{Schlegel2015jfm} proved stability of the quadratic model with $m_9 = m_\text{shift} = \epsilon$, $\epsilon > 1$, and $m_i = 0$ for $i \in \{1,...,8\}$. Recall from the discussion in Section~\ref{sec:effective_nonlinearity} that POD-Galerkin models will generally only weakly satisfy the effective nonlinearity criteria, and it is unclear if the shift mode complicates this picture. Although the POD-Galerkin model is an accurate description of the flow past a cylinder, it is an intrusive model, in the sense that evaluating the projected dynamics requires evaluating individual terms in the governing equations, such as spatial gradients of the flow fields. POD-Galerkin models therefore tend to be highly sensitive to factors including mesh resolution, convergence of the POD modes, and treatment of the pressure and viscous terms. Recent work by Loiseau et al.~\cite{loiseau2018constrained,loiseau2018sparse,loiseau2019pod} has bypassed the Galerkin projection step by using the SINDy algorithm to directly identify the reduced-order dynamics. This approach has been shown to yield compact, accurate models for low-dimensional systems ($r=2$ or $3$), but preserving accuracy and stability for higher-dimensional systems remains challenging. Higher-dimensional regression problems often become ill-conditioned; for example, in the cylinder wake, the higher modes 3--8 are essentially harmonics of the driving modes 1--2, so it is difficult to distinguish between the various polynomials of these modes during regression. Because these higher harmonics are driven by modes 1--2, the 3D constrained quadratic SINDy model with modes 1--2 plus the shift mode from Loiseau et al.~\cite{loiseau2018constrained} already performs well enough to capture the energy evolution, with minor overshoot and correct long-time behavior. Details of the DNS and the POD-Galerkin technique used to reproduce the 9D shift-mode model can be found in Appendix~\ref{appendix:vonKarman_DNS}. With the trapping SINDy algorithm, we obtain new 5-dimensional and 9-dimensional models for the cylinder wake and compare their performance against the same-size analytic POD-Galerkin models. The 5D trapping SINDy model is provably stable, and we illustrate the identified trapping region in Fig.~\ref{fig:vonKarman_trappingRegion}. We also compare the 5D SINDy and 9D POD-Galerkin models in Fig.~\ref{fig:vonKarman_trajectories}. The 5D trapping SINDy model outperforms the 9D POD-Galerkin model, significantly improving both the transient and the identification of the long-term attractor. For the 9D trapping SINDy model, we managed to reduce the largest eigenvalue of $\bm{A}^S$ to $\mathcal{O}(10^{-2}-10^{-4})$ but were unable to produce accurate trapping SINDy models with a fully negative definite $\bm{A}^S$. In practice, these models are functionally stable; we tested a large set of random initial conditions and did not find unbounded trajectories. Further searching in the hyperparameter space, or more algorithm iterations for better convergence, could potentially produce fully stable models.
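Such a brute-force check of functional stability is easy to script: integrate the identified model from many random initial conditions and flag any trajectory whose norm exceeds a large bound. A minimal sketch is given below; \texttt{f} is a placeholder for the right-hand side of any identified model, and the box size, bound, and trial count are arbitrary choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def check_boundedness(f, r, n_trials=100, t_final=100.0,
                      box=10.0, bound=1e4, seed=0):
    """Integrate f from random initial conditions in [-box, box]^r
    and count trajectories whose norm exceeds `bound`."""
    rng = np.random.default_rng(seed)
    # terminate an integration early once the norm blows up
    event = lambda t, y: np.linalg.norm(y) - bound
    event.terminal = True
    n_unbounded = 0
    for _ in range(n_trials):
        y0 = rng.uniform(-box, box, size=r)
        sol = solve_ivp(f, (0.0, t_final), y0, events=event, rtol=1e-6)
        if sol.t_events[0].size > 0:
            n_unbounded += 1
    return n_unbounded
\end{verbatim}
Of course, a zero count is evidence of boundedness, not a proof; the trapping theorem provides the proof when $\bm{A}^S$ is negative definite.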
\begin{figure}[h] \begin{minipage}{0.48\textwidth} \begin{subfigure}[b]{0.99\textwidth} \centering \begin{overpic}[width=1.0\linewidth]{vonKarman_trappingRegion.pdf} \end{overpic} \caption{Trapping SINDy 5D model (black) of a von K\'arm\'an trajectory (red). The trajectory is shown within the estimated trapping region and the ellipsoid where $\dot{K} \geq 0$.} \label{fig:vonKarman_trappingRegion} \end{subfigure} \begin{subfigure}[b]{0.99\textwidth} \centering \vspace{0.5cm} \begin{overpic}[width=0.99\linewidth]{vonKarman_energies.pdf} \put(-5, 15){$K$} \put(50, -3.75){$t$} \end{overpic} \vspace{0.02in} \caption{Comparison of the energies for the DNS and for the 5- and 9-mode POD-Galerkin and trapping SINDy models.} \label{fig:vonKarman_energies} \end{subfigure} \end{minipage} \begin{minipage}{0.48\textwidth} \begin{subfigure}[b]{0.99\textwidth} \centering \begin{overpic}[width=0.82\linewidth]{vonKarman_trajectories.pdf} \put(0, 3.5){$a_2$} \put(17, 9){$a_3$} \put(5, 25){$a_\text{shift}$} \put(0, 37.5){$a_1$} \put(17, 43){$a_2$} \put(5, 59){$a_3$} \put(0, 71.5){$a_1$} \put(17, 77){$a_2$} \put(5, 93){$a_\text{shift}$} \end{overpic} \caption{5-mode trapping SINDy (black) and 9-mode POD-Galerkin (blue) models with a random initial condition, and the von K\'arm\'an trajectory used for training (red).} \label{fig:vonKarman_trajectories} \end{subfigure} \end{minipage} \vspace{0.4in} \begin{subfigure}[b]{0.99\textwidth} \centering \begin{overpic}[width=0.99\linewidth]{vonKarman_reconstructions.pdf} \put(-2.25, 28){\begin{sideways}DNS\end{sideways}} \put(-2.25, 14.5){\begin{sideways}POD-9\end{sideways}} \put(-2.25, 1){\begin{sideways}SINDy-9\end{sideways}} \put(9, 36.25){t = 65} \put(35, 36.25){t = 120} \put(59, 36.25){t = 200} \put(83, 36.25){t = 250} \end{overpic} \caption{Predictions of the vorticity field for the von K\'arm\'an street at four snapshots in time. The trapping SINDy model outperforms the 9D POD-Galerkin model, although an initial phase error in the trapping SINDy prediction (visible in the first snapshot) persists throughout the prediction.} \label{fig:vonKarman_recons} \end{subfigure} \caption{Summary of the differences between DNS, POD-Galerkin models, and trapping SINDy models.} \label{fig:vonKarman_results} \end{figure} Despite this setback, the 9D trapping SINDy model performs quite well. The Galerkin model and the trapping SINDy model exhibit comparable performance, and the SINDy model improves the transient prediction. The energies in Fig.~\ref{fig:vonKarman_energies} illustrate convergence to the true fluid flow energy for all the SINDy and POD-Galerkin models, with only the 9D trapping SINDy model capturing the precise timing of the transient. The flow reconstructions in Fig.~\ref{fig:vonKarman_recons} are quite accurate for both models. This is surprisingly strong performance with SINDy; recall that: 1) the Galerkin model is a far more intrusive procedure than SINDy, requiring computation of spatial derivatives and inner products from the DNS, 2) the Galerkin model can still be quite sensitive to the DNS data, boundary conditions, and mesh size, and 3) the 9D trapping SINDy model is far sparser, with far fewer ``active'' terms, than the 9D POD-Galerkin model. The difficulty in producing provably stable 9D trapping SINDy models here appears to reveal an interesting optimization tradeoff.
While sparsity-promotion tends to yield more accurate models and to reduce the complexity of the nonconvex optimization problem (since there are fewer active terms to manage), it also de-emphasizes our proposed metric for the strength of effective nonlinearity, $S_e$ from Eq.~\eqref{eq:effective_nonlinearity_strength}, by shrinking the values of unimportant model terms. For instance, the SINDy model here exhibits weak effective nonlinearity, $S_e \approx 10^{-5}$, compared with $S_e \approx 10^{-2}$ for the POD-Galerkin model. This small value of $S_e$ may indicate increased difficulty in obtaining a fully negative definite $\bm{A}^S$. SINDy models with weaker sparsity-promotion exhibit larger $S_e$, but then it becomes exceedingly difficult to obtain accurate models in the nonconvex optimization problem. Without any sparsity-promotion this is an ill-conditioned, nonconvex optimization in a $330$-dimensional space. In this way, there appears to be a tradeoff between sparsity-promotion and the strength of effective nonlinearity. Given these points, we consider the sparse 5-mode and 9-mode SINDy models to be promising first steps towards incorporating stability constraints into higher-dimensional data-driven models. Before concluding, we should note that the eight-mode (no shift mode) POD-Galerkin model from Noack et al.~\cite{noack2003hierarchy}, and all eight-mode models found by trapping SINDy, do not exhibit global stability. The problem fundamentally stems from the marginal stability of the mean flow and the very weak effective nonlinearity, both of which are somewhat addressed by the shift mode in the 9-mode model. This should be taken as a cautionary note: the success of these algorithms still relies on representations that capture the stability information of the underlying dynamics. This may require high-resolution data or the alternative dynamic bases mentioned in Section~\ref{Sec:ProjROMS}. \section{Conclusion\label{sec:conclusion}} The present work develops physics-constrained system identification by biasing models towards fulfilling global stability criteria, and it produces long-term bounded models with no extra assumptions about the stability properties of equilibrium points and equilibrium trajectories. In order to produce globally stable models, we have implemented a new trapping SINDy algorithm based on the Schlegel--Noack trapping theorem~\cite{Schlegel2015jfm}. Biasing models towards stability, and proving post-fit that identified models are globally stable, will likely become increasingly important for both projection-based and data-driven models of fluids and plasmas. Our approach, which relies on using the energy as a Lyapunov function for an entire class of models with fixed nonlinear structure, is challenging to apply to higher-order nonlinearities, for which generic Lyapunov functions are often unknown. Fortunately, data-driven methods are now increasingly used to discover Lyapunov functions and barrier functions for nonlinear control~\cite{neumann2013neural,khansari2014learning, richards2018lyapunov,kolter2019learning,jin2020neural,takeishi2020learning,boffi2020learning,chang2020neural,massaroli2020stable}. These methods build a heuristic Lyapunov function for a given dataset, rendering the search for a Lyapunov function tractable, but possibly at the cost of model generality.
We demonstrated the effectiveness of this optimization for identifying stable models and additionally improved the discovery of models that do not conform to the assumptions of the trapping theorem. Our trapping SINDy algorithm produced more accurate and stable models for a range of systems, including simple benchmark problems, noisy data from chaotic systems, and DNS from full spatio-temporal PDEs. In these examples, we found that our modified SINDy algorithm can effectively discover stable, accurate, and sparse models from significantly corrupted data. Even when an explicit stable trapping region was not found, improved stability was observed. Finally, we explored relatively high-dimensional reduced-order models, with $\mathcal{O}(10)$ degrees of freedom, which are typically challenging for unconstrained data-driven algorithms. There is considerable future work in biasing machine learning methods to discover models that satisfy existence-style proofs of stability, especially those that require nonconvex optimization; we find that the lack of convexity in the trapping SINDy algorithm degrades algorithm speed and tractability as the size of the problem increases. There are many fluid flows that have known stable and unstable projection-based and data-driven reduced-order models and that would benefit from a larger class of models with trapping region guarantees. Future work should apply this methodology to heavily researched systems such as the fluidic pinball~\cite{deng2020low,raibaudo2020machine} and the lid-cavity flow~\cite{terragni2011local, lorenzi2016pod}. Other promising future work includes adapting this structure to N-body coupled Stuart-Landau equations, for which stability theorems already exist~\cite{panteley2020practical}. However, the nonconvexity of this formulation may require adaptation to a deep learning approach for high-dimensional N-body problems that occur in fluids and modern neuronal models. For all of the examples in this work, we train our trapping SINDy algorithm on a single trajectory, although most data-driven methods can improve performance by processing data from multiple trajectories. Very large datasets can be addressed effectively with modern approaches, such as manifold Galerkin projection~\cite{loiseau2019pod} and autoencoder~\cite{baldi1989neural,Milano2002jcp,lusch2018deep,champion2019data,lee2020model} methods. These approaches may also address the significant Kolmogorov $n$-width limitations of linear transformations~\cite{pinkus2012n} and help ease the nonconvexity of our new optimization problem. There are also modern reduced-order modeling techniques, such as ``lift $\&$ learn''~\cite{qian2020lift}, which produce quadratic ROMs regardless of the nonlinear structure of the underlying governing equations. Similarly, Koopman analysis aims to produce a map from the original state-space, where the dynamics are nonlinear, to a new, typically infinite-dimensional coordinate system where the dynamics become linear~\cite{koopman_hamiltonian_1931,mezic_analysis_2013,klus2018data,lusch2018deep,li_extended_2017,yeung2019learning,Takeishi2017nips,otto2019linearly}. It will be interesting to further explore the connections between the trapping theorem and these related approaches.
For instance, Pan et al.~\cite{pan2020physics} build stable Koopman models by requiring that the real parts of the eigenvalues of the linear Koopman operator are non-positive, although the relationship between this linear stability and the trapping theorem is unclear. In related work, neural-network encoders are often used to reverse this mapping; encoders can take quadratically nonlinear fluid flow data and apply nonlinear transformations to find useful reduced-order models beyond what is possible with traditional projection-based methods~\cite{gonzalez2018deep}. A natural question that arises is: assuming the original energy-preserving, quadratically nonlinear fluid flow exhibits a trapping region, under what conditions can we conclude that global stability holds in a new coordinate system given by $\bm{b} = \bm{g}(\bm{y})$? The transformation could be an encoder, the reverse lifting map~\cite{qian2020lift}, or some other coordinate transform. Understanding how the stability properties manifest in the transformed system is a promising future direction for extending this stability theorem to ROMs with alternative dynamic bases. \section*{Acknowledgements} This work was supported by the Army Research Office ({ARO W}911{NF}-19-1-0045) and the Air Force Office of Scientific Research (AFOSR {FA}9550-18-1-0200). JLC acknowledges support from the National Defense Science and Engineering Fellowship. SLB would like to acknowledge valuable discussions with Tulga Ersal, Ronan Fablet, Alireza Goshtabi, Nathan Kutz, JC Loiseau, Bernd Noack, and Said Ouala related to SINDy, Galerkin models, and stability constraints.
{ "timestamp": "2021-05-06T02:08:38", "yymm": "2105", "arxiv_id": "2105.01843", "language": "en", "url": "https://arxiv.org/abs/2105.01843" }
\section{Introduction} \label{sec:intro} Capacitively coupled plasma sources (CCPs) driven by radio-frequency (RF) waveforms have been aiding the plasma processing industry for decades. As RF current can flow through dielectric substances as well, the electrode materials are not restricted to conducting ones, which greatly widens the range of applications. Bombardment by high-energy ions can, e.g., remove material from surfaces (plasma etching in microelectronics). At low bombarding energies, deposition from the plasma prevails (e.g. in the fabrication of photovoltaic devices). Other surface properties playing an important role in, e.g., biomedicine, such as wettability and biocompatibility, can also be changed by plasma processing.\cite{lieberman2005principles,makabe2014plasma,chabert_braithwaite_2011,Schulze_2016} As the efficiency and the rates of the processes at the surfaces depend on the flux and the flux-energy distribution of the impinging species (mostly ions and radicals, but also electrons in some cases), a lot of effort has been devoted to the understanding and the optimisation of these characteristics.\cite{heil2008possibility,kawamura1999ion,qin2010tailored,zhang2015control,bruneau2016controlling,kruger2019voltage,bogdanova2020virtual} The flux of the ions is mainly defined by the plasma density, whereas the flux-energy distribution is controlled by (i) the voltage drop over the sheath adjacent to the surface, (ii) the collisionality of the sheath, and (iii) the relation between the ion transit time and the period of the RF excitation.\cite{Wild1,Wild2,Donko_2012} At high pressures, the ions flying through the sheaths collide several times with the atoms/molecules of the background gas and, consequently, arrive at the electrode surfaces with low energy. In contrast, at low pressures, the ions have a long free path and can gain high energies while flying through the sheaths. When the ions cross the sheaths in a fraction of the RF period, their energy is determined by the instantaneous sheath voltage. When the ion transit time is much longer than the RF period, the ion energy is largely determined by the time-averaged sheath voltage. Besides the flux-energy distribution of the ions, the angular distribution of the ions at the surfaces may become very important, e.g. when high-aspect-ratio trenches are ``milled'' into semiconductor wafers.\cite{huang2019plasma,huang2020pattern,Hartmann_2020} During the past decades, a number of approaches have been developed to provide additional ``degrees of freedom'' to control ion properties at the electrodes. The first of these has been the introduction of {\it Dual-Frequency (DF) excitation}, i.e. the simultaneous application of two radio-frequency waveforms to the plasma.\cite{Kitajima,Boyle_2004,booth2009dual,o2008role,boyle2004electrostatic,georgieva2004numerical,voloshin2017modeling} The DF excitation utilises the ``functional separation'' of these two excitation signals: the high-frequency signal is responsible for the creation of the plasma, whereas the low-frequency voltage is responsible for the acceleration of the ions. In this way, the amplitude of the high-frequency component controls the plasma density and, consequently, the ion flux at the surfaces, while the amplitude of the low-frequency signal controls the energy of the ions.
The functional separation is most efficient when the two excitation frequencies are significantly different, but even in this case frequency coupling effects hinder the efficient control of the ion properties for this ``classical'' dual-frequency excitation.\cite{Gans,turner2006collisionless} Another major step has been the discovery of the {\it Electrical Asymmetry Effect} (EAE), which allows geometrically symmetric plasma sources to be made electrically asymmetric.\cite{heil2008possibility} This is achieved by applying a base RF waveform and its second harmonic for the excitation of the plasma, which leads to the development of a DC self-bias voltage. It was explained by theory\cite{Czarnetzki_2009} and subsequently confirmed by both simulations\cite{Donko_2008} and experiments\cite{Schulze_2009} that the self-bias voltage can be controlled by the phase angle between the driving voltage harmonics. As the self-bias voltage influences the voltage drops over the sheaths, the ion energy can readily be controlled, whereas the ion flux, as shown by subsequent studies, can be kept at a reasonably constant level. The EAE also develops when geometrically asymmetric discharges are driven with specific waveforms, and it allows controlling the discharge properties within a wide range.\cite{schulze2011making,zhang2012separate} Studies of the EAE were also extended to a higher number of harmonics ($N>2$), and various special waveforms, like {\it peaks}- and {\it valleys}-waveforms\cite{delattre2013radio} as well as {\it sawtooth} waveforms,\cite{bruneau2014ion} have been introduced and investigated both experimentally and computationally. These waveforms, known as ``{\it Tailored Voltage Waveforms}'' (TVW),\cite{Lafleur_2015} have been shown to provide large flexibility for controlling charged particle dynamics, the spatio-temporal distribution of the rates of elementary processes (e.g. ionization and excitation), the electron energy distribution function, as well as the ion properties. As peaks- and valleys-waveforms have markedly different positive and negative peak amplitudes, they cause an {\it amplitude asymmetry effect} in the discharge. Sawtooth-type waveforms, on the other hand, have equal positive and negative peak amplitudes but notably different rising and falling slopes. These result in different sheath expansion velocities and, consequently, different rates of excitation and ionization at the two sides of the discharge, which generates an asymmetry and a DC self-bias voltage, termed the {\it slope asymmetry effect}.\cite{bruneau2015strong} In electrically asymmetric discharges, the DC self-bias voltage ($\eta$) has a direct effect on the ion flux-energy distribution (IFED) at the electrodes. This makes such discharges attractive for surface processing applications.\cite{bruneau2014effect,bruneau2014growth,ries2019ion} Moreover, in the presence of $\eta \neq 0$, the IFEDs at the two electrodes will differ, which may be advantageous in applications where a high ion energy is required at one electrode while it is to be prevented at the other electrode.
The dependence of the self-bias voltage on the amplitudes and the phases of multi-frequency waveforms has been investigated thoroughly.\cite{Czarnetzki_2009,derzsi2015experimental,donko2018ion} A discharge asymmetry was also found to be induced by {\it differing materials} of the two electrodes, represented by, e.g., different electron reflection probabilities,\cite{Korolov_2016sticking} different secondary electron yields,\cite{lafleur2013secondary,korolov2013influence} or a combination of these.\cite{hartmann2020charged} More recently, discharge asymmetries induced by {\it inhomogeneous magnetic fields} have also been studied.\cite{yang2017magnetical,yang2018magnetical,oberberg2018experimental,sharma2018spatial} The investigation of the nonlinear coupling of these various asymmetry effects clearly warrants further studies. Note that most plasma reactors used in industrial applications are geometrically asymmetric. When such a discharge is driven by a multi-frequency waveform and has different electrode materials, three types of asymmetry effects are present simultaneously. If a magnetic field is applied as well, then four nonlinearly coupled asymmetry mechanisms are present. Moreover, many of these applications use an electronegative gas or gas mixture, in which the asymmetry effects may differ significantly\cite{schulze2011electron,brandt2019control,zhang2011numerical,skarphedinsson2020tailored,schungel2016tailored,gibson2017controlling} from those in the thoroughly investigated electropositive discharges. The formation of the DC self-bias voltage and its dependence on the properties of the applied waveform have been studied experimentally and via simulations in a number of studies. The primary computational tool for these investigations has been the Particle-in-Cell/Monte Carlo Collisions (PIC/MCC) simulation.\cite{Birdsall_2004,Verboncoeur_2005,matyash2007particle,Tskhakaya_2007} This particle-based approach is fully capable of capturing kinetic effects\cite{Basti_tutorial} and is therefore well suited for the description of plasma sources operated at low pressures, where non-local particle transport\cite{tsendin2010nonlocal,gallagher2012nonequilibrium,fu2020similarity,Wang2020asymmetries} appears. An analytical model\cite{Czarnetzki_2009} based on the voltage balance of the discharge has aided the understanding of the observations. While a lot of knowledge has accumulated in previous studies (part of which has been reviewed above), some details of the discharge dynamics and of the self-bias formation in CCPs require further study. Our aim here is to understand the effects of the number of harmonics used to construct the excitation waveform and to reveal how these vary as a function of the parameters of the surface processes: (i) the reflection coefficient of the electrons at the electrodes and (ii) the ion-induced secondary electron yield. Here, the same values of these parameters will be assumed for both electrodes. Following the introduction of the physical setting considered, Section \ref{sec:methods} describes the analytical model and the basics of the computational method in Sections \ref{sec:model} and \ref{sec:pic}, respectively. Section \ref{sec:results} is devoted to the presentation of the results, where we provide a detailed analysis that goes beyond the details covered in previous studies.
In particular, we examine the effects of the floating sheath potentials and the finite voltage drop over the plasma bulk on the discharge asymmetry and the self-bias voltage. Subsequently, the effects of the surface processes are discussed. A brief summary is given in Section \ref{sec:summary}. \section{Physical system and methods} \label{sec:methods} In this work, we consider a capacitively coupled plasma source that has two parallel, planar electrodes. The diameter of the electrodes is assumed to be much larger than the gap between them, allowing the use of a one-dimensional model. The discharge is excited by a voltage waveform \cite{derzsi2015experimental} \begin{equation} \phi(t) = \sum_{k=1}^N \phi_k \cos(k \omega_1 t + \theta_k), \label{eq:waveform} \end{equation} where $\omega_1 = 2 \pi f_1$, with $f_1$ being the ``base'' radio frequency, and $\phi_k$ and $\theta_k$ being, respectively, the amplitude and the phase of the $k$-th harmonic. The amplitudes of the individual harmonics are set according to \begin{equation} \phi_k = \frac{2(N-k+1)}{(N+1)^2} \phi^\ast. \label{eq:amplitudes} \end{equation} $\phi^\ast$ is usually called the peak-to-peak voltage of the waveform. Note, however, that this is true only if Eq.\,(\ref{eq:waveform}) generates peaks- or valleys-type waveforms. The first of these cases is realised by setting $\theta_k=0$, $\forall k$, while the second is realised by setting $\theta_k=0$ for odd values of $k$ and $\theta_k=180^\circ$ for even values of $k$. By keeping the phase angles of all odd harmonics at $0^\circ$ and varying the common value, denoted by $\theta$, of all even harmonics, various waveforms (which include both the peaks and valleys cases) can be realised, as shown in Figure \ref{fig:waveforms} for $N=2$ and $N=4$. One can note that for arbitrary values of $\theta$ the peak-to-peak amplitude of the waveform specified by Eq.\,(\ref{eq:waveform}) indeed varies; at $\theta = 90^\circ$, e.g., $\phi_{\rm pp} \approx 1.15 \phi^\ast$ for $N=2$ and $\phi_{\rm pp} \approx 1.23 \phi^\ast$ for $N=4$. Peaks-type waveforms ($\theta = 0^\circ$) have sharp positive peaks and nearly flat negative parts between these peaks. Correspondingly, the sheath at the powered electrode is expanded for a relatively long part of the fundamental RF period, whereas the sheath at the grounded electrode is expanded for a short time. For valleys-type waveforms ($\theta = 180^\circ$) the scenario is reversed.
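For reference, the waveform of Eq.\,(\ref{eq:waveform}) with the amplitudes of Eq.\,(\ref{eq:amplitudes}) is simple to evaluate numerically. The short Python sketch below is one way to do so; the function and variable names are illustrative.
\begin{verbatim}
import numpy as np

def waveform(t, N, theta_deg, phi_star=300.0, f1=13.56e6):
    """Evaluate the driving voltage waveform; odd harmonics have
    zero phase, even harmonics share the common phase theta."""
    w1 = 2 * np.pi * f1
    phi = np.zeros_like(t)
    for k in range(1, N + 1):
        phi_k = 2 * (N - k + 1) / (N + 1)**2 * phi_star
        theta_k = np.deg2rad(theta_deg) if k % 2 == 0 else 0.0
        phi += phi_k * np.cos(k * w1 * t + theta_k)
    return phi

t = np.linspace(0, 1 / 13.56e6, 1000)   # one fundamental RF period
peaks = waveform(t, N=4, theta_deg=0)     # peaks-type waveform
valleys = waveform(t, N=4, theta_deg=180) # valleys-type waveform
print(peaks.max() - peaks.min())          # peak-to-peak amplitude
\end{verbatim}
At $\theta = 0^\circ$ or $180^\circ$ the printed peak-to-peak value equals $\phi^\ast$, consistent with the statement above that $\phi_{\rm pp} = \phi^\ast$ only for the peaks- and valleys-type waveforms.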
\begin{figure} \includegraphics[width=0.45\textwidth]{waveform2.png} \includegraphics[width=0.45\textwidth]{waveform4.png} \footnotesize \caption{Voltage waveforms corresponding to Eq.\,(\ref{eq:waveform}) with $\phi^\ast$ = 300\,V, for $N=2$ (a) and $N=4$ (b) harmonics, for various values of $\theta$. The peaks- and valleys-waveforms are plotted with thick lines. $T$ is the period of the fundamental frequency ($f_1$).} \label{fig:waveforms} \end{figure} \subsection{Model for the DC self-bias voltage formation} \label{sec:model} Figure \ref{fig:circuit} shows the equivalent electrical circuit consisting of the RF generator (G), the blocking capacitor (C), as well as the plasma, which is represented by three circuit elements corresponding to the three regions of the discharge: the sheath at the powered side of the plasma (the ``powered sheath''), the plasma bulk, and the sheath at the grounded side of the plasma (the ``grounded sheath'').\cite{Czarnetzki_2009} Two of these elements, the sheaths, exhibit capacitive impedance, while the impedance of the bulk region consists of a resistive and an inductive part, originating, respectively, from the finite conductivity due to electron-atom collisions and from the inertia (finite mass) of the electrons.\cite{chabert2020foundations} The balance equation for the voltage components marked in Figure \ref{fig:circuit} is \begin{equation} \phi(t) = \phi_{\rm C} + \phi_{\rm sp} + \phi_{\rm b} + \phi_{\rm sg}. \label{eq:circuit1} \end{equation} Here, $\phi_{\rm C}$ is the DC voltage drop over the blocking capacitor; we assume that the AC voltage drop over this element is negligible due to its high capacitance. In this case, the DC voltage drop $\phi_{\rm C}$ is the opposite of the DC self-bias voltage $\eta$ that develops over the plasma due to the EAE.\cite{Czarnetzki_2009} As a consequence, Eq.\,(\ref{eq:circuit1}) can be rewritten as \begin{equation} \phi(t) + \eta = \phi_{\rm sp} + \phi_{\rm b} + \phi_{\rm sg}. \label{eq:circuit2} \end{equation} \begin{figure} \includegraphics[width=0.43\textwidth]{network.png} \caption{Equivalent electrical circuit of the system investigated. The shaded area marks the plasma region; the external circuit consists of the generator G and the coupling capacitor C.} \label{fig:circuit} \end{figure} The model of the EAE, which assumes that (i) the sheath is fully collapsed at one side of the plasma at the times of the extrema of the applied voltage waveform and that (ii) there is no voltage drop over the bulk region of the plasma, predicts the DC self-bias voltage, based on the voltage balance of the circuit, to be \begin{equation} \eta = - \frac{\phi_{\rm max}+ \varepsilon \phi_{\rm min}}{1+\varepsilon}, \label{eq:bias-simple} \end{equation} where $\phi_{\rm max}$ and $\phi_{\rm min}$ are, respectively, the maximum and the minimum of the applied voltage waveform, $\phi(t)$. The more general expression, which considers the nonzero sheath voltages upon sheath collapse (i.e. the floating potentials) and the finite voltage drop over the plasma bulk,\cite{schungel2016tailored} is \begin{equation} \eta = \underbrace{- \frac{\phi_{\rm max}+ \varepsilon \phi_{\rm min}}{1+\varepsilon}}_{\eta_{\rm w}} + \underbrace{\frac{\phi_{\rm sp}^{\rm f}+ \varepsilon \phi_{\rm sg}^{\rm f}}{1+\varepsilon}}_{\eta_{\rm f}} + \underbrace{ \frac{\phi_{\rm max}^{\rm b}+ \varepsilon \phi_{\rm min}^{\rm b}}{1+\varepsilon}}_{\eta_{\rm b}}. \label{eq:bias-precise} \end{equation} Here we have introduced notations for the contributions of different origin: due to the waveform, $\eta_{\rm w}$, to the floating potentials, $\eta_{\rm f}$, and to the bulk voltage drop, $\eta_{\rm b}$.
In the above expressions, $\varepsilon$ is the symmetry parameter, which is the magnitude of the ratio of the peak values of the sheath voltages at the two sides of the plasma\cite{Czarnetzki_2009}: \begin{equation} \varepsilon = \Bigg| \frac{\widehat{\phi}_{\rm sg}}{\widehat{\phi}_{\rm sp}} \Bigg|. \label{eq:epsilon_original} \end{equation} Calculations, which are not repeated here, express the extrema of the sheath voltages as\cite{heil2008possibility} \begin{eqnarray} \widehat{\phi}_{\rm sp} = -\frac{1}{2e\varepsilon_0}\biggl( \frac{Q_{\rm mp}}{A_{\rm p}} \biggr)^2 \frac{I_{\rm sp}}{\overline{n}_{\rm sp}}, \label{eq:phisp}\\ \widehat{\phi}_{\rm sg} = \frac{1}{2e\varepsilon_0}\biggl( \frac{Q_{\rm mg}}{A_{\rm g}} \biggr)^2 \frac{I_{\rm sg}}{\overline{n}_{\rm sg}}. \label{eq:phisg} \end{eqnarray} Here, $e$ is the elementary charge, $\varepsilon_0$ the permittivity of free space, $Q_{\rm mp/mg}$ the maximum charges within the sheaths, $A_{\rm p/g}$ the electrode surface areas, $I_{\rm sp/sg}$ the sheath integrals, and $\overline{n}_{\rm sp/sg}$ the mean charged particle densities within the sheaths at the powered (p) and grounded (g) sides of the system. Note that, in the original derivation of these expressions,\cite{heil2008possibility} the electron front was assumed to exhibit a step profile at the sheath edge and the ion density profile was taken to be static within the sheath regions. Here, however, the data obtained from the PIC/MCC simulations include the slight penetration of a finite electron density into the sheaths, i.e. $Q$ and $\overline{n}$ represent, respectively, the {\it net} charge and the {\it net} charged particle density. The ratio of the sheath integrals appearing in the above expressions is customarily approximated\cite{Czarnetzki_2009} by a value of 1.0. In the symmetric system considered here, $A_{\rm p} = A_{\rm g} = A$. With these two simplifications, Eq.\,(\ref{eq:epsilon_original}) becomes: \begin{equation} \varepsilon = \biggl( \frac{Q_{\rm mg}}{Q_{\rm mp}} \biggr)^2 ~ \frac{\overline{n}_{\rm sp}}{\overline{n}_{\rm sg}}. \label{eq:epsilon_model} \end{equation} Upon the presentation of the results, the most precise value of $\varepsilon$ will be computed from this equation; however, the validity of simplified approaches will also be tested, as follows: \begin{itemize} \item The simplest approximation for $\varepsilon$ is a value of $\varepsilon=1$, which neglects the differences between the magnitudes of the peak sheath voltages at the two electrodes. This approximation is termed `Model 1'. \item As a refinement, one may calculate $\varepsilon$ by neglecting the difference between $Q_{\rm mp}$ and $Q_{\rm mg}$ in Eq.\,(\ref{eq:epsilon_model}), i.e. taking \begin{equation} \varepsilon = \frac{\overline{n}_{\rm sp}}{\overline{n}_{\rm sg}}, \end{equation} which is the form used in the first model of the EAE.\cite{heil2008possibility} This is our `Model 2'. \item Calculating $\varepsilon$ from the ``full'' Eq.\,(\ref{eq:epsilon_model}), as done in several recent studies,\cite{Schulze_2010,schungel2016tailored,brandt2019control} is referred to as `Model 3'. \end{itemize}
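As a worked illustration of Models 1--3 and of the simple bias formula, Eq.\,(\ref{eq:bias-simple}), the Python sketch below computes the three approximations of $\varepsilon$ and the corresponding self-bias estimates. The numerical inputs are placeholder values for the sheath charges, densities, and waveform extrema, not results of our simulations.
\begin{verbatim}
# placeholder sheath quantities (illustrative values, not simulation data)
Q_mp, Q_mg = 1.05, 1.00            # maximum net charges in the sheaths
n_sp, n_sg = 1.10, 1.00            # mean net charged-particle densities
phi_max, phi_min = 160.0, -180.0   # extrema of the applied waveform [V]

eps1 = 1.0                         # Model 1
eps2 = n_sp / n_sg                 # Model 2
eps3 = (Q_mg / Q_mp)**2 * eps2     # Model 3, Eq. (epsilon_model)

for eps in (eps1, eps2, eps3):
    eta = -(phi_max + eps * phi_min) / (1 + eps)  # Eq. (bias-simple)
    print(f"epsilon = {eps:.3f}  ->  eta = {eta:.1f} V")
\end{verbatim}
Even with such modest differences in the inputs, the three models give noticeably different $\eta$ estimates, illustrating how sensitive the bias estimate is to the treatment of $\varepsilon$.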
The origin of any discharge asymmetry can also be approached from the most important elementary process in the plasma: ionization. This process is the primary source of the charged particles and, under the conditions studied here, is driven by high-energy electrons that represent a minor fraction of the electron population.\cite{Basti_tutorial} The two basic ways of gaining enough energy for ionization (relevant at our conditions) are the acceleration of the electrons (i) near the edges of the expanding sheaths (``$\alpha$-heating'')\cite{schulze2014effect} and (ii) within the sheaths, in the strong electric field, whenever (secondary) electrons are emitted from the electrodes due to, e.g., ion bombardment (``$\gamma$-heating'').\cite{belenguer1990transition} The causes of the asymmetry effects discussed above, like different positive vs. negative values or different rising vs. falling slopes of the driving voltage waveform, as well as different secondary electron yields at the two electrodes, can also be viewed as acting by establishing an imbalance of the ionization at the two sides of the plasma. A faster sheath expansion (controlled by the driving waveform), e.g., gives rise to a higher energy gain of the electrons and, generally, creates a higher charge density at the corresponding side of the plasma. In the presence of secondary electrons, the magnitude of the sheath voltages (accelerating these electrons) and the duration of the expanded phase of the sheath are the important factors. A higher sheath voltage and/or a longer expanded phase of the sheath gives rise to a higher ionization rate. All these effects can couple in a complicated nonlinear way in a CCP. \subsection{Computational method} \label{sec:pic} Our numerical results are obtained from one-dimensional (1D3V) bounded electrostatic PIC/MCC simulations. As this is a well-established method, the description of its details is omitted here; only some details specific to the current study are outlined below. More information about the approach can be found in the literature.\cite{donko2021edupic} Our code considers electrons and Ar$^+$ ions and follows their motion in an electric field that is defined by the potentials of the electrodes and the presence of the charged particles in the electrode gap. The powered electrode (situated at $x=0$) is at a potential $\phi(t) + \eta$, while the other electrode (situated at $x=L$) is grounded ($\phi(t)$ is defined by Eq.\,(\ref{eq:waveform})). The equation of motion of the charged particles is integrated using the leapfrog scheme, with a time step of $\Delta t = T/3000$. The computational grid for the potential, the electric field, and the charged particle densities (with a spatial resolution of $\Delta x$) comprises 500 points. These parameters fulfil the relevant stability criteria of the PIC/MCC method.\cite{kim2005particle,lymberopoulos1995two}
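These criteria amount to resolving the electron plasma frequency with the time step and the Debye length with the grid, and they can be checked with a few lines. In the sketch below, the plasma density and electron temperature are assumed, order-of-magnitude values chosen for illustration, not results of our simulations.
\begin{verbatim}
import numpy as np

e, eps0, m_e = 1.602e-19, 8.854e-12, 9.109e-31
f1, L, N_g = 13.56e6, 0.025, 500
dt = (1 / f1) / 3000                 # PIC time step
dx = L / N_g                         # grid spacing

n_e, Te_eV = 1e16, 3.0               # assumed density [m^-3], temperature [eV]
w_pe = np.sqrt(n_e * e**2 / (eps0 * m_e))  # electron plasma frequency
lam_D = np.sqrt(eps0 * Te_eV / (n_e * e))  # Debye length (Te in eV)

print(f"w_pe*dt  = {w_pe * dt:.3f}  (want < ~0.2)")
print(f"dx/lam_D = {dx / lam_D:.2f}  (want < ~1)")
\end{verbatim}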
At the electrode surfaces, as already mentioned in Section \ref{sec:intro}, two processes are considered. (i) Ar$^+$ ions arriving at the surface induce the emission of a secondary electron with a probability that is expressed by the secondary electron yield, $\gamma$. (ii) Electrons arriving at the electrode surfaces undergo an elastic reflection event with a probability $R$ (whose dependence on energy and angle of incidence is not taken into account). The DC self-bias voltage of the discharges driven by $N>1$ harmonics is determined in an iterative manner.\cite{Donko_2008} At the initialization of the simulation, $\eta = 0$\,V is set. After executing the simulation for a given number (typically 50) of RF cycles, the currents of the electrons and argon ions reaching each electrode are compared. Depending on the balance of these currents, the self-bias voltage is changed by a small quantity. This procedure is continued until $\eta$ reaches a converged value and the time-averaged charged particle currents to each of the two electrodes balance (within the noise level). For our studies, the identification of the position of the RF sheath edge as a function of time, $s(t)$, is a crucial task. It is carried out by computing the spatially and temporally resolved distributions of the electron and ion densities.\cite{brinkmann2007beyond} The position of the sheath edge, e.g., at the electrode at $x=0$ (i.e. $s = s_{\rm p}$) can be found from \begin{equation} \int_0^s n_{\rm e}(x) {\rm d}x = \int_s^h [n_{\rm i}(x)-n_{\rm e}(x)] {\rm d}x. \end{equation} Here, $h$ is a position where quasineutrality holds; we set this value as $h=L/2$. Solving the above equation for each time step within the RF cycle, the $s(t)$ function can be determined at both sides of the discharge. When $s_{\rm p}(t)$ and $s_{\rm g}(t)$ are known, the voltage drops over the sheaths ($\phi_{\rm sp}(t)$ and $\phi_{\rm sg}(t)$), the net space charges ($Q_{\rm p}(t)$ and $Q_{\rm g}(t)$), and the mean net charged particle densities ($\overline{n}_{\rm sp}(t)$ and $\overline{n}_{\rm sg}(t)$) within the sheaths can readily be determined.
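The sheath-edge criterion above reduces to locating the crossing of two cumulative integrals along the grid. A minimal Python sketch is given below; the density profiles are assumed inputs, e.g. taken from the PIC/MCC grid at one time step.
\begin{verbatim}
import numpy as np

def sheath_edge(x, n_e, n_i, h):
    """Find s with  int_0^s n_e dx = int_s^h (n_i - n_e) dx
    (powered-electrode side; x, n_e, n_i are 1D grid arrays)."""
    m = x <= h
    xs, ne, net = x[m], n_e[m], (n_i - n_e)[m]
    # cumulative trapezoid integrals measured from the electrode
    cum = lambda f: np.concatenate(
        [[0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(xs))])
    lhs = cum(ne)                   # electron charge in [0, s]
    rhs = cum(net)[-1] - cum(net)   # net positive charge in [s, h]
    return xs[np.argmin(np.abs(lhs - rhs))]
\end{verbatim}
Since the left-hand side grows monotonically from zero while the right-hand side decreases, the minimizer of their difference locates the unique crossing on the grid.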
\section{Results} \label{sec:results} The simulations are carried out for Ar gas, at fixed values of the pressure, $p$ = 10 Pa, the electrode gap, $L$ = 2.5 cm, and the base frequency, $f_1$ = 13.56 MHz. Driving voltage waveforms specified by Eq.\,(\ref{eq:waveform}) will be used with $\phi^\ast$ = 300\,V and with up to $N=4$ harmonics, with phase angles over the whole domain of interest ($0^\circ \leq \theta \leq 360^\circ$). For the surface coefficients we adopt the following values. (i) For the secondary electron yield we take $\gamma$ = 0, 0.2, and 0.4. For metal surfaces, $\gamma$ is expected to be rather small, $\lesssim 0.1$, while for dielectric surfaces the actual values may be well approximated by the higher $\gamma$ values adopted. (ii) For the elastic reflection coefficient of the electrons we take $R=0$ and 0.2. \begin{figure}[h] \begin{center} \includegraphics[width=0.42\textwidth]{voltage_example_N1.png}\\ \includegraphics[width=0.42\textwidth]{voltage_example_N2.png}\\ \end{center} \caption{Time dependence of the quantities involved in the voltage balance of the discharge, for single- (a) and dual-frequency (b) excitation. For $N=2$, the phase angle is $\theta=0^\circ$. Other discharge conditions: $f_1=$ 13.56\,MHz, $p=$ 10\,Pa, $L$ = 2.5\,cm, $R=$\,0, $\gamma=$\,0. The driving voltage waveform is defined by Eq.\,(\ref{eq:waveform}), with amplitudes given by Eq.\,(\ref{eq:amplitudes}), $\phi^\ast$ = 300\,V. $T$ is the period of the fundamental RF frequency, $f_1$. Note that the bulk voltage drop is multiplied by a factor of 10.} \label{fig:voltage_example} \end{figure} We start the presentation of the results by discussing the temporal behavior of the various quantities that appear in the voltage balance equation (\ref{eq:circuit2}), for the cases of single- ($N=1$) and dual-frequency ($N=2$) excitation. The $N=1$ case is displayed in Figure \ref{fig:voltage_example}(a), for the base conditions of $\phi_1$ = 150\,V, $f_1=$ 13.56\,MHz, $p=$ 10\,Pa, $L$ = 2.5\,cm, at zero values of the surface coefficients. For this electrically symmetric excitation waveform, the plasma is symmetric and no self-bias voltage develops. The magnitudes of the sheath voltages ($\phi_{\rm sp}$ and $\phi_{\rm sg}$) vary in opposite phase. Note that $\phi_{\rm sp} \leq 0$ and $\phi_{\rm sg} \geq 0$ (cf. Eqs.\,(\ref{eq:phisp}) and (\ref{eq:phisg})). The minima of $|\phi_{\rm sp}|$ and $|\phi_{\rm sg}|$ at the extrema of the applied voltage amount to a few volts. These are the so-called floating potentials, $\phi_{\rm sp}^{\rm f}$ and $\phi_{\rm sg}^{\rm f}$, at the powered and at the grounded electrodes, respectively, which limit the losses of the electrons to the electrode where the sheath momentarily collapses. The sum of the sheath voltages approximates the discharge voltage (Eq.\,(\ref{eq:circuit2})) quite well, as the voltage drop over the bulk of the plasma, $\phi_{\rm b}$, amounts to only a few volts, due to the high conductivity of the plasma. The results for the $N=2$ case (keeping all other parameters the same) are shown in Figure \ref{fig:voltage_example}(b), for the choice of $\theta=0^\circ$. The simulation reveals that for these conditions a self-bias voltage of $\eta \approx -54.4$\,V forms (indicated by the horizontal dashed line in Figure \ref{fig:voltage_example}(b)). The substantial contributions to the discharge voltage, which is the sum of $\eta$ and the generator voltage $\phi(t)$, are the sheath voltage drops, as above. A small additional contribution is provided by the nonzero voltage drop over the bulk region. As compared to the $N=1$ case, the behavior of the two sheaths is now quite different. Due to the specific applied voltage waveform, the sheath at the powered electrode collapses once within the ``principal'' RF cycle ($T=1/f_1$), at $t/T=0$, while at the grounded side the sheath collapses twice during this period. Moreover, the sheath stays collapsed in the latter case for a longer time. As a consequence, the floating potential has a higher value of $\phi_{\rm sg}^{f} \approx 4.6$\,V, as compared to the $\phi_{\rm sp}^{f} \approx -1.1$\,V found at the powered electrode. (Note that these values are not resolved in the figure.) These potentials ensure the compensation of electron and ion currents over an RF period by regulating the electron fluxes that reach the electrodes. \begin{figure}[h] \includegraphics[width=0.38\textwidth]{sheath_example_N2.png} \caption{Time dependence of the sheath lengths (chain lines, left scale) and net charge in the individual sheaths ($Q_{\rm p}$ and $Q_{\rm g}$) and their sum ($Q_{\rm tot} = Q_{\rm p}+ Q_{\rm g}$), per unit area of $A = 1$ cm$^2$ (thick solid lines, right scale), for the $N=2$, $\theta=0^\circ$ case. The other conditions are the same as in Figure \ref{fig:voltage_example}.} \label{fig:sheath_example} \end{figure} The time dependence of the length of the sheaths for this case is presented in Figure \ref{fig:sheath_example}. The sheath at the powered side has a minimum length of about 0.06\,cm, whereas at the grounded side the minimum of $s_{\rm g}$ is $\approx$\,0.1\,cm. This figure also shows the net charge contained within the sheaths, for a unit electrode area of 1\,cm$^2$. The temporal change of $Q_{\rm p}$ and $Q_{\rm g}$ closely follows the variation of the length of the corresponding sheath. The sum of the charges in the two sheaths, $Q_{\rm tot}$, is almost constant in time, as Figure \ref{fig:sheath_example} reveals.
The slow drop (small negative slope) of $Q_{\rm tot}(t)$ is due to the continuous ion flux to the electrodes, while the temporary increases at times of sheath collapse are due to the loss of electrons to the electrodes. After illustrating the time-dependent behavior of the relevant physical quantities for a few selected cases, we next address the behavior of the various voltages as a function of the phase angle $\theta$. Figure \ref{fig:phase1} shows the maximum and minimum values of the applied voltage ($\phi_{\rm max}$ and $\phi_{\rm min}$), the peak sheath voltages ($\widehat{\phi}_{\rm sp}$ and $\widehat{\phi}_{\rm sg}$), the floating potentials ($\phi_{\rm sp}^{\rm f}$ and $\phi_{\rm sg}^{\rm f}$), the bulk voltage drops at the times of the maximum and minimum of the applied voltage ($\phi_{\rm max}^{\rm b}$ and $\phi_{\rm min}^{\rm b}$), as well as the DC self-bias voltage, $\eta$, obtained directly from the simulation. (Recall that Eq.\,(\ref{eq:bias-precise}) formulates a connection between these quantities based on theory; the comparison of these $\eta$ values with the simulation results is presented later.) \begin{figure} \includegraphics[width=0.43\textwidth]{phasedep_N2.png}\\ \includegraphics[width=0.43\textwidth]{phasedep_N4.png}\\ \caption{Maximum and minimum values of the applied voltage ($\phi_{\rm max}$ and $\phi_{\rm min}$), peak sheath voltages ($\widehat{\phi}_{\rm sp}$ and $\widehat{\phi}_{\rm sg}$), floating potentials ($\phi_{\rm sp}^{\rm f}$ and $\phi_{\rm sg}^{\rm f}$), bulk voltage drops at the time of the maximum and minimum of the applied voltage ($\phi_{\rm max}^{\rm b}$ and $\phi_{\rm min}^{\rm b}$), as well as the self-bias voltage $\eta$ computed from the simulation. (a) $N=2$, (b) $N=4$. Note that some quantities are multiplied by 10. Discharge conditions: Ar at $p$ = 10 Pa, $L$ = 2.5 cm, $f_1$ = 13.56 MHz, $\phi^\ast$ = 300 V, $R = 0$, $\gamma = 0$.} \label{fig:phase1} \end{figure} The difference between $\phi_{\rm max}$ and $\phi_{\rm min}$ equals $\phi^\ast$ (fixed at a value of 300\,V) only at $\theta = 0^\circ$ and $180^\circ$; at other values $\phi_{\rm max}-\phi_{\rm min}>\phi^\ast$. This disparity between $\phi_{\rm max}$ and $|\phi_{\rm min}|$ is higher in the $N=4$ case than in the $N=2$ case, in accordance with the increasing asymmetry of the applied waveform (see Figure \ref{fig:waveforms}). The maxima of the magnitudes $|\widehat{\phi}_{\rm sp}|$ and $\widehat{\phi}_{\rm sg}$ as a function of $\theta$ are, on the other hand, very similar for the two numbers of harmonics ($N$). The floating sheath potentials also show very similar patterns in the $N=2$ and $N=4$ cases, while the voltage drop over the bulk plasma (although small) grows by almost a factor of two when the number of harmonics is doubled from $N=2$ to $N=4$. As to the DC self-bias voltage $\eta$, the highest values are obtained near, but not exactly at, $\theta = 0^\circ$ and $180^\circ$.\cite{Schulze_2010} At $N=2$, peak values of $|\widehat{\eta}| \cong 54$\,V are found, while for $N=4$, $|\widehat{\eta}| \cong 100$\,V. These values, as mentioned above, result from the simulations, where they are determined based on the balance between the electron and ion currents to the electrodes. Next, we address the question of how well the model of the EAE, outlined in Section \ref{sec:model}, reproduces these results for the self-bias voltage.
For this, (i) the importance of the different terms in the expression (\ref{eq:bias-precise}) is examined, and (ii) various approximations (``modeling levels'') for the calculation of the symmetry parameter $\varepsilon$ (see the end of Sec. \ref{sec:model}) are tested. This analysis is aided by Figure \ref{fig:bias1}. The findings for $N=2$ (panel (a)) and for $N=4$ (panel (b)) are very similar; only the magnitude of $\eta$ is higher in the $N=4$ case. When Model 1 is used, i.e. $\varepsilon$ is taken to be 1.0, a triangular shape for $\eta(\theta)$ is obtained, which approximates the simulation results reasonably well, although somewhat smaller $|\eta|$ values are found at the extrema of the self-bias voltage. A similar $\eta(\theta)$ dependence is found when Model 2 is adopted; however, the peak amplitudes of $\eta$ obtained this way are higher than those predicted by the PIC/MCC simulations. Finally, Model 3 provides a very good agreement with the simulation results. \begin{figure}[h] \includegraphics[width=0.45\textwidth]{bias_N2.png}\\ \includegraphics[width=0.45\textwidth]{bias_N4.png}\\ \caption{Self-bias voltage as a function of the phase angle ($\theta$) as obtained from the different models for the symmetry parameter (``$\eta$ Model 1,2,3''), as well as contributions of the terms of Eq.\,(\ref{eq:bias-precise}) to $\eta$ for Model 3 (Eq.\,(\ref{eq:epsilon_model})), in comparison with the values obtained directly from the PIC/MCC simulations (``$\eta$ PIC''). (a) $N=2$ and (b) $N=4$, for Ar at $p$ = 10 Pa, $L$ = 2.5 cm, $f_1$ = 13.56 MHz, $\phi^\ast$ = 300 V, $R = 0$, $\gamma = 0$.} \label{fig:bias1} \end{figure} For the latter, the different contributions to the self-bias voltage, as specified in Eq.\,(\ref{eq:bias-precise}), are also displayed in Figure \ref{fig:bias1}. The contributions of the floating potentials and the bulk voltage drop prove to be small and to act against each other over the whole range of the phase angle $\theta$. The dominant term, $\eta_{\rm w}$, is thus hardly distinguishable from the sum of the three terms that yields $\eta$. The above observations lead us to conclude that considering only the first term in Eq.\,(\ref{eq:bias-precise}) is sufficient; however, for the calculation of $\varepsilon$, the more precise form of Eq.\,(\ref{eq:epsilon_model}) is required. \begin{figure}[h] \includegraphics[width=0.45\textwidth]{epsilon_N2.png}\\ \includegraphics[width=0.45\textwidth]{epsilon_N4.png}\\ \caption{Symmetry parameter $\varepsilon$ obtained from the PIC/MCC simulation and from the different models, as well as the terms involved in the calculation of $\varepsilon$ from Eq.\,(\ref{eq:epsilon_model}). Note that ``$\varepsilon$ Model 2'' is equivalent to $\overline{n}_{\rm sp}/\overline{n}_{\rm sg}$, and that ``$\varepsilon$ Model 3'' is computed as $(Q_{\rm mg}/Q_{\rm mp})^2 (\overline{n}_{\rm sp}/\overline{n}_{\rm sg})$ (see Eq.\,(\ref{eq:epsilon_model})). Discharge conditions: Ar at $p$ = 10 Pa, $L$ = 2.5 cm, $f_1$ = 13.56 MHz, $\phi^\ast$ = 300 V.} \label{fig:epsilon1} \end{figure} At this point it is useful to analyze the $\varepsilon$ values obtained from the different models, together with the values taken directly from the simulation (via Eq.\,(\ref{eq:epsilon_original})). The corresponding data are shown in Figure \ref{fig:epsilon1} as a function of the phase angle $\theta$ (panel (a) for $N=2$ and (b) for $N=4$). Additionally, the terms involved in the calculation of $\varepsilon$ via Eq.\,(\ref{eq:epsilon_model}) are also displayed in Figure \ref{fig:epsilon1}.
As in the case of the self-bias voltage (see Figure \ref{fig:bias1}), the most accurate model (i.e. Model 3) reproduces the symmetry parameter as a function of $\theta$ resulting from the simulation very well. The $\varepsilon =1$ approximation of Model 1 is clearly a bad choice, as $\varepsilon$ varies with $\theta$ by about $\pm 10\%$ in the $N=2$ case and by more than $\pm 20\%$ in the case of $N=4$. Model 2 (which considers only the difference of the mean charge densities within the sheaths), on the other hand, largely (by a factor of $\approx 2$) overestimates the variation of $\varepsilon$ with $\theta$. These findings confirm that the charge dynamics, represented by the term $(Q_{\rm mg}/Q_{\rm mp})^2$, plays an important role.\cite{Schulze_2010} The inclusion of this term in the calculation of $\varepsilon$ ("$\varepsilon$ Model 3" in Figure \ref{fig:epsilon1}) yields very good agreement with the simulation data ("$\varepsilon$ PIC"). \begin{figure}[h] ~~~~~~~~~~~~~~\includegraphics[width=0.44\textwidth]{bias_comp_90.png} \includegraphics[width=0.38\textwidth]{bias_comp_180.png} \caption{Self-bias voltage obtained from the PIC/MCC simulations in the vicinity of $\theta = 90^\circ$ (a) and $\theta = 180^\circ$ (b), for different numbers of harmonics ($N$), secondary electron yields ($\gamma$), and electron reflection coefficients ($R$). The legend in (a) also holds for panel (b). In (b), the data corresponding to $N=2$ are given on the left axis, while data for $N=4$ are shown on the right axis. Discharge conditions: Ar at $p$ = 10 Pa, $L$ = 2.5 cm, $f_1$ = 13.56 MHz, $V_{\rm pp}$ = 300 V.} \label{fig:bias_details_pic} \end{figure} Next, we analyze the behavior of the discharge in the vicinity of phase angles where the self-bias voltage is (i) zero and (ii) maximal. The first domain corresponds to phase angles near $\theta = 90^\circ$, while for the second domain $\theta$ is around $180^\circ$. These two domains are shown in Figures \ref{fig:bias_details_pic}(a) and (b), respectively, for various values of the number of harmonics. Here we also include data obtained with different values of the secondary electron yield and the electron reflection coefficient. We recall that the effects of both of these parameters have already been analyzed in the context of the EAE.\cite{lafleur2013secondary,korolov2013influence,Korolov_2016sticking} These studies have, however, focused on {\it establishing} an asymmetry by using {\it unequal} coefficients at the two electrodes, while here we study the effects of these parameters with {\it equal} values at both electrodes, i.e., they are not the causes of the asymmetry, but may modify it by influencing the plasma behavior. Based on Figure \ref{fig:bias_details_pic} the following observations can be made: \begin{enumerate} \item The electron reflection has a marginal effect on $\eta$, at all values of $\theta$, $N$, and $\gamma$ considered. \item At $\gamma=0$, the zero crossing of $\eta$ for $N=2$ occurs at $\theta>90^\circ$, while for $N=4$ it occurs at $\theta<90^\circ$. \item At $N=2$, the angle where $\eta$ becomes zero changes with $\gamma$, while for $N=4$ no such dependence is observed. \item The increase of $\gamma$ decreases the maximum self-bias voltage at both $N$ values. \end{enumerate} \begin{figure}[] \includegraphics[width=0.45\textwidth]{slope.png}\\ ~~~~~~~\includegraphics[width=0.45\textwidth]{slope2.png} \caption{(a) Discharge voltage ($\phi+\eta$) and sheath voltages as a function of time, at $\gamma$ = 0.
Solid lines: $N=2$, chain lines: $N=4$. (b) Spatial distribution of the time-averaged ionization rate for the conditions indicated. Other conditions: Ar at $p$ = 10 Pa, $L$ = 2.5 cm, $f_1$ = 13.56 MHz, $\phi^\ast$ = 300 V, $R$ = 0. $\theta=90^\circ$ for all cases.} \label{fig:slope} \end{figure} In the following, we provide explanations for these observations. The first of these can be explained by the fact that although reflected electrons generally increase the plasma density, at the relatively low value of $R$ considered here this increase is small.\cite{Korolov_2016sticking} We observe only a slight effect of electron reflection at $\gamma=0$: a comparison of the pairs of data sets for $R=0$ vs. $R=0.2$ in Figure \ref{fig:bias_details_pic}(b) for $N=2$ and 4 reveals a small decrease of $\eta$ when $R$ is increased. Regarding the second observation, it is important to realise that at $\theta=90^\circ$ the maximum and minimum values of the driving voltage waveform have the same magnitudes ($\phi_{\rm max} = - \phi_{\rm min}= \phi_{\rm m}$), i.e. {\it no} amplitude asymmetry is present at this specific $\theta$. According to Eq. (\ref{eq:bias-simple}), any deviation of $\eta$ from zero can only be attributed to $\varepsilon \neq 1$, as $ \eta / \phi_{\rm m}= - (1-\varepsilon)/(1+\varepsilon)$. (Using Eq. (\ref{eq:bias-simple}) instead of the more precise Eq. (\ref{eq:bias-precise}) is justified by our earlier conclusion that the floating potential and the bulk voltage drop act against each other in the latter equation.) Having ruled out the effect of different (positive vs. negative) voltage amplitudes, the observed asymmetry of the discharge can only originate from the specific shape of the driving voltage waveform. Indeed, it turns out that it is the {\it slope} of the waveform, i.e., $d\phi(t)/dt$, that is responsible for the observed effect. At $N=2$, as shown (by the red solid line) in Figure \ref{fig:slope}(a), the driving waveform has a long falling slope and a short rising slope, i.e. it resembles a sawtooth-down waveform.\cite{bruneau2015strong} This shape is expected to result in a higher excitation/ionization rate at the grounded side of the discharge, and this is confirmed by the computed ionization rate function shown in Figure \ref{fig:slope}(b) for $N=2$ and $\gamma=0$. The theory of sawtooth waveforms\cite{Lafleur_2015} predicts a negative self-bias voltage, in accordance with our observation in Figure \ref{fig:bias_details_pic}(a). For $N=4$, we find the fastest sheath expansion around $t/T \approx 0.4$ at the powered side, and the ionization source (see Figure \ref{fig:slope}(b)) exhibits a peak at this side of the discharge. This explains the observation of a small positive self-bias voltage (in contrast with the $N=2$ case). \begin{table}[h!] \caption{Self-bias voltage ($\eta$) at $\theta=90^\circ$, at $N=2$ and 4, as a function of $\gamma$, the values of the symmetry parameter $\varepsilon$ and the terms involved in $\varepsilon$, as well as the peak sheath voltages.
\label{table:crossing90} } \begin{tabular}{cccccccc} \hline $N$ & $\gamma$ & $\eta$ [V] & $\overline{n}_{\rm sp} / \overline{n}_{\rm sg}$ & $(Q_{\rm mg} / Q_{\rm mp})^2$ & $\varepsilon$ (Model 3) & $\widehat{\phi}_{\rm sg}$ [V] & $|\widehat{\phi}_{\rm sp}|$ [V]\\ \hline 2 & 0 & $-$6.10 & 0.893 & 1.043 & 0.932 & 172.9 & 184.8\\ 2 & 0.2 & $-$2.71 & 0.920 & 1.054 & 0.969 & 175.3 & 180.7\\ 2 & 0.4 & $+$3.36 & 0.967 & 1.078 & 1.042 & 180.5 & 173.6\\ \hline 4 & 0 & $+$5.40 & 1.040 & 1.011 & 1.055 & 197.3 & 187.1\\ 4 & 0.2 & $+$5.21 & 1.029 & 1.023 & 1.054 & 196.4 & 186.3\\ 4 & 0.4 & $+$5.24 & 1.008 & 1.043 & 1.056 & 195.1 & 184.8\\ \hline \end{tabular} \end{table} The explanation of the third observation is aided by the tabulated values of the DC self-bias voltage, the symmetry parameter and its relevant terms, as well as the peak sheath voltages, for various values of $\gamma$ at $\theta=90^\circ$, included in Table \ref{table:crossing90}. The $\gamma=0$ case was discussed above, and the origin of $\eta<0$ at $N=2$ was clarified. It is important to realise that at $\gamma>0$ the ionization balance is also influenced by secondary electrons (emitted from the electrode surfaces). The energy gain and the multiplication of these electrons, and the consequent ionization, are sensitive functions of the accelerating voltage, i.e. {\it the peak sheath voltage drop}. For $N=2$, we find that $\widehat{\phi}_{\rm sg} < |\widehat{\phi}_{\rm sp}|$ at $\gamma=0$. Consequently, when secondary emission sets in at $\gamma>0$, ionization at the powered side of the discharge increases by a larger amount than at the grounded side. This is the reason why we find an increasing value of $\overline{n}_{\rm sp} / \overline{n}_{\rm sg}$ (seen in Table \ref{table:crossing90}), which, in turn, results in an increase of $\varepsilon$ with $\gamma$. The other factor involved in $\varepsilon$, $(Q_{\rm mg} / Q_{\rm mp})^2$, also increases with increasing $\gamma$, further contributing to the change from $\varepsilon<1$ to $\varepsilon>1$ as $\gamma$ reaches 0.4. This change of $\varepsilon$ results in a switch of the sign of $\eta$. To understand the behavior of $(Q_{\rm mg} / Q_{\rm mp})^2$, the time dependence of the total charge $Q_{\rm tot}(t)$ in the plasma (per unit area) is plotted in Figure \ref{fig:Qtot} for various values of $\theta$, $N$, and $\gamma$. All the curves seen in this figure exhibit slowly decaying parts that correspond to the continuous losses of ions at the electrodes and short, rapidly increasing segments where $Q_{\rm tot}$ increases because of the losses of electrons. Losses of electrons occur during the times of sheath collapse at either side of the plasma. Taking the case of $N=2$ as an example, according to Figure \ref{fig:slope}(a) the grounded sheath collapses at $t_{\rm g}/T \approx 0.58$, whereas the powered sheath collapses at $t_{\rm p}/T \approx 0.9$. At the time of collapse of the {\it grounded} sheath, $Q_{\rm tot}$ resides in the {\it powered} sheath, thus the peak of $Q_{\rm tot}$ at $t_{\rm g}$ in Figure \ref{fig:Qtot}(a) can be associated with $Q_{\rm mp}$ (illustrated for the $N=2$, $\gamma=0.4$ curve). Similarly, $Q_{\rm tot}$ at $t_{\rm p}$ peaks at a value of $Q_{\rm mg}$. Having understood this, we can now look at the changes of $Q_{\rm mp}$ and $Q_{\rm mg}$ resulting from an increase of $\gamma$.
When $\gamma$ is increased from zero, more ionization occurs near the powered electrode (because of the higher peak sheath voltage there, see above) and, consequently, the ion flux to the powered electrode increases more than the ion flux to the grounded electrode. As the fluxes of the ions and the electrons to either of the electrodes must compensate each other on time average, an increased ion flux at one electrode also requires an increased electron flux at the same electrode. Recall that the electron flux can flow only during the collapse of the corresponding sheath. Due to the higher ionization at the powered side of the plasma, the electron flux at the powered electrode is enhanced more than that at the grounded electrode. Consequently, the total charge in the plasma, $Q_{\rm tot}$, is increased more at the time of sheath collapse at the powered side than at the grounded side. According to the explanation of the behavior of $Q_{\rm tot}(t)$ provided above, $Q_{\rm mg}$ is thus enhanced more than $Q_{\rm mp}$ when $\gamma$ is increased, leading to the increase of $(Q_{\rm mg} / Q_{\rm mp})^2$, as observed. \begin{figure}[] \includegraphics[width=0.48\textwidth]{Q24_angle90.png}\\ \includegraphics[width=0.48\textwidth]{Q24_angle180.png} \caption{Total uncompensated charge per unit area (of 1 cm$^2$) as a function of time within a fundamental RF period for (a) $\theta=90^\circ$ and (b) $\theta=180^\circ$, for various values of $N$ and $\gamma$. Parts of the curves with a slow decay correspond to the continuous losses of positive ions, while parts with a steep rise correspond to sheath collapses when electrons are lost to the electrodes. The maximum charges in the two sheaths are illustrated for one of the cases in both panels. For more explanation see text. Discharge conditions: Ar at $p$ = 10 Pa, $L$ = 2.5 cm, $f_1$ = 13.56 MHz, $\phi^\ast$ = 300 V.} \label{fig:Qtot} \end{figure} The scenario at the higher number of harmonics, $N=4$, is somewhat different. In this case, a significantly faster sheath expansion is observed in Figure \ref{fig:slope}(a) after the sheath collapse at the powered side of the discharge as compared to the $N=2$ case (compare the falling slopes of the red solid vs. chain lines around $t/T=0$). This fast sheath expansion, in contrast with the $N=2$ case, gives rise to a nearly symmetrical ionization function, which is a result of a compensation between the faster sheath expansion at the powered side and the higher sheath voltage at the grounded side (see Table \ref{table:crossing90}). Therefore, in the $N=4$ case the arguments presented above do not apply to the changes of $Q_{\rm mg}$ and $Q_{\rm mp}$ with $\gamma$. Moreover, the explanation presented above clearly assumed a certain degree of locality of the electron kinetics, i.e., fluxes at a given electrode were assumed to be largely determined by the ionization rate near the same electrode. However, at the conditions considered, energy gain of the electrons at one side of the discharge can contribute to the ionization and particle fluxes at the other side of the discharge as well. This property of the discharge is clearly confirmed by the ``flat-top'' shape of the ionization source functions plotted in Figure \ref{fig:slope}(b), especially at $\gamma>0$. As a consequence of these effects, the small counteracting changes of $\overline{n}_{\rm sp} / \overline{n}_{\rm sg}$ and $(Q_{\rm mg} / Q_{\rm mp})^2$ with increasing $\gamma$ at $N=4$ are difficult to explain.
The interplay of these two terms results in a nearly constant $\varepsilon$ and DC self-bias in these cases (see Table \ref{table:crossing90}). \begin{table}[h!] \caption{Self-bias voltage ($\eta$) at $\theta=180^\circ$, at $N=2$ and 4, as a function of $\gamma$, as well as the values of the symmetry parameter $\varepsilon$ and the terms involved in $\varepsilon$.\label{table:crossing180} } \begin{tabular}{cccccc} \hline $N$ & $\gamma$ & $\eta$ [V] & $\overline{n}_{\rm sp} / \overline{n}_{\rm sg}$ & $(Q_{\rm mg} / Q_{\rm mp})^2$ & $\varepsilon$ (Model 3) \\ \hline 2 & 0 & 54.60 & 1.274 & 0.851 & 1.084\\ 2 & 0.2 & 51.6 & 1.239 & 0.843 & 1.043 \\ 2 & 0.4 & 47.7 & 1.194 & 0.825 & 0.985 \\ \hline 4 & 0 & 101.1 & 1.509 & 0.768 & 1.159 \\ 4 & 0.2 & 96.8 & 1.472 & 0.755 & 1.111 \\ 4 & 0.4 & 84.9 & 1.412 & 0.677 & 0.956 \\ \hline \end{tabular} \end{table} Finally, we address our fourth observation, i.e., the question: why does the self-bias voltage at its extremum (near $\theta=180^\circ$) decrease with increasing $\gamma$ (see Figure \ref{fig:bias_details_pic}(b))? Corresponding data for the DC self-bias voltage, the symmetry parameter, and its relevant terms for various values of $\gamma$ are given in Table \ref{table:crossing180}. At $\gamma=0$, ionization is maintained by electron energy gain during the phase of sheath expansion. At $\theta=180^\circ$ (see the thick black lines in Figure \ref{fig:waveforms}), the expansion of the sheath is much faster at the powered electrode, and therefore the ionization rate is also higher at that side of the discharge. This results in the high $\overline{n}_{\rm sp} / \overline{n}_{\rm sg}$ values seen in Table \ref{table:crossing180} for both $N=2$ and $N=4$. (Note that the value is higher for the higher $N$ due to the faster sheath expansion induced by the steeper voltage waveform.) As a consequence, a large $\eta$ is created. When $\gamma$ is increased from zero, ionization by the secondary electrons starts to play a role. For this contribution, the {\it duration} of the expanded phase of the sheaths is a key parameter. For a longer period of expansion, which is actually found at the grounded side of the discharge due to the specific applied waveform, a higher enhancement of the ionization rate is expected. In accordance with this, $\overline{n}_{\rm sp} / \overline{n}_{\rm sg}$ decreases with increasing $\gamma$. Using again the argument that a higher ionization rate at the grounded side of the discharge leads to an increase of the total uncompensated charge contained within the powered sheath upon the collapse of the grounded sheath, the higher increase of $Q_{\rm mp}$ with respect to that of $Q_{\rm mg}$ (confirmed in Figure \ref{fig:Qtot}(b)) and the consequent decrease of $(Q_{\rm mg} / Q_{\rm mp})^2$ can be understood. As both terms contributing to the symmetry parameter decrease, a strong suppression of the self-bias voltage appears. At the pressure and electrode gap values considered here, energetic electrons created at one side of the plasma also contribute to ionization at the other side of the plasma, as confirmed by the shape of the ionization source functions shown in Figure \ref{fig:slope}(b). Therefore, the discharge has a tendency to become symmetrical at high $\gamma$ values. \section{Summary} \label{sec:summary} In this work, we have examined the establishment of a discharge asymmetry and the concomitant formation of a DC self-bias voltage in capacitively coupled RF discharges driven by multi-frequency voltage waveforms.
Computations have been carried out with various values of the coefficients that characterize the electrode surfaces, i.e., (i) the coefficient of elastic electron reflection and (ii) the ion-induced secondary electron yield. The latter ranged between zero and a high value of $\gamma=0.4$, which can characterize high-electron-yield dielectric surfaces. The understanding of the computational results has been aided by an analytical model that is based on a voltage balance of the RF discharge. We have shown that this model, in its more complete form in which it also includes the charge dynamics (by accounting for the ratio of the net charges in the two sheaths), is able to successfully reproduce and explain the behavior of the DC self-bias voltage as a function of the phase angle between the harmonics of the driving voltage waveform. The investigations of the surface coefficients indicated that the elastic reflection of the electrons, as long as equal values are used at both electrodes, has a minor influence on the discharge asymmetry and the self-bias voltage. The secondary electron emission coefficient (for which the same value was also adopted at both electrodes) was found to influence the discharge asymmetry and the self-bias voltage in a complicated manner, depending on the phase angle and/or the number of harmonics ($N$). These effects were understood based on the differences of the maximum sheath voltages and the durations of the expanded phases of the sheaths at the two sides of the discharge, as well as on the charge dynamics that is influenced by the ion fluxes to the electrodes. For our choice of the gas pressure and the electrode gap, the ionization source function was found to be non-local, and a high secondary electron yield induced a tendency to restore the symmetry of the discharge at the conditions of the highest amplitude asymmetry (i.e., in the case of peaks- and valleys-type excitation waveforms). Further studies could examine such effects at lower and higher pressures and/or electrode gaps, where the nonlocality of the ionization could be enhanced or suppressed. \acknowledgements This work was supported by the National Office for Research, Development and Innovation (NKFIH) of Hungary via the grant K-134462, by the German Research Foundation within the framework of the project ``Electron heating in capacitive RF plasmas based on moments of the Boltzmann equation: from fundamental understanding to knowledge based process control'' (No. 428942393) and via SFB TR 87 (project C1), by the National Natural Science Foundation of China (Grant No. 12020101005), and by the grant AP09058005 of the Ministry of Education and Science of the Republic of Kazakhstan. \section*{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request. \nocite{*}
{ "timestamp": "2021-05-06T02:11:03", "yymm": "2105", "arxiv_id": "2105.01890", "language": "en", "url": "https://arxiv.org/abs/2105.01890" }
ABSTRACT. An experiment by Proietti {\it et al} purporting to instantiate the `Wigner's Friend' thought experiment is discussed. It is pointed out that the stated implications of the experiment regarding the alleged irreconcilability of facts attributed to different observers warrant critical review. In particular, violation of a Clauser-Horne-Shimony-Holt (CHSH) inequality by the experimental data actually shows that the attribution of measurement outcomes to the ``Friends'' (modeled by internal photons undergoing unitary interactions) is erroneous. An elementary but often overlooked result regarding improper mixtures is adduced in support of this assessment, and a basic logical error in the analysis leading to the authors' ontological claims that different observers are subject to irreconcilable `facts' is identified. A counterexample is provided which refutes the popular notion that quantum theory leads to `relative facts' that never manifest as empirical inconsistencies. It is further noted that under an assumption of unbroken unitarity, no measurement correlation can ever yield an outcome, since all systems remain in improper mixtures, and attributing a definite but unknown outcome contradicts their composite pure state. It is pointed out that there already exists a solution to this conundrum in the form of an alternative formulation of quantum theory, which accounts for the data showing that no outcomes occurred at the interior entangled photon level and also predicts that outcomes can and do occur at the exterior ``super-observer'' level in this type of experiment. \bigskip \section{Background.} This paper presents a critique of a recent attempt by Proietti {\it et al} (2019) to experimentally model the Wigner's Friend experiment (WF), a famous elaboration by Wigner of the Schr\"odinger's Cat experiment (SC). While the present work addresses the specific details of that experiment, the critique applies more broadly to all recent discussions of the WF experiment that assume that outcomes occur for subsystems of an entangled state; i.e., that outcomes can ``coexist'' for the ``friend,'' who is modeled as a subsystem of an entangled state, as well as for the external ``Wigner'' level. These two levels have been termed in the literature (as in the scenario of Frauchiger-Renner, 2018) the ``observer'' and ``super-observer'' levels, respectively. Such a treatment is also found, for example, in Bong {\it et al} (2020). Both SC and WF are thought experiments that amplify the measurement problem (MP) of standard quantum theory by showing that it leads to absurdities when entanglements are presumed to continue to the macroscopic level of everyday experience. This author has argued elsewhere (Kastner, 2020b) that these thought experiments are properly understood not as experiments proposed to be performed in the lab (since what they predict, such as cats or people in superposition, is already known to be falsified by experience), but rather as {\it reductio ad absurdum} arguments making explicit the arbitrariness of the so-called `shifty split' that is inevitable when quantum theory is assumed to have only unitary (linear) dynamics. In what follows, I will refer to this standard approach to quantum theory as `UOQM' for `Unitary-Only Quantum Mechanics.' UOQM consists of any approach that assumes that quantum theory has only unitary physical processes except perhaps upon `observation,' where that term is ambiguous.
In addition, UOQM assumes that any non-unitary `collapse' or `reduction' upon `observation' has no quantitative physical counterpart in the theory, making it an {\it ad hoc} postulate. This is the approach assumed, at least implicitly, in Proietti {\it et al} and in other works claiming that outcomes occurred at the ``friend'' level. Before dealing with the specifics of the experiment (which is quite an impressive technical accomplishment despite its misinterpretation), it is important to review an elementary but often-overlooked fact regarding the fraught notion of ``measurement'' in the context of UOQM. This fact is that, as a purely logical matter, there can be no well-defined measurement outcomes under UOQM. This was originally pointed out by Feyerabend (1962) and instructively reviewed by Hughes (1989), \textsection 5.8. We now review the basic argument. Under UOQM, a good (sharp) measurement of some observable A on a system S is modeled as an interaction that couples a measuring apparatus or ``pointer'' P, initially in a ready state $|\phi_0\rangle $, to S, such that states $|\phi_i\rangle$ of P point to the different eigenstates $|a_i\rangle$ of A, as follows: $$ |\phi_0\rangle \otimes |\psi\rangle \rightarrow c_1 |\phi_1\rangle |a_1\rangle + c_2 |\phi_2\rangle |a_2\rangle + c_3 |\phi_3\rangle |a_3\rangle + \ldots + c_N |\phi_N\rangle |a_N\rangle \eqno(1)$$ \medskip \noindent where the $c_i$ are the amplitudes associated with this decomposition of the system's prepared state $|\psi\rangle$. Thus, the pointer P enters into an entanglement with S such that states of P ``copy'' the eigenstates $|a_i\rangle$ pertaining to the system, with the appropriate amplitudes. However, according to standard UOQM, nothing beyond this unitary process ever physically happens--at least nothing that can be modeled in any quantitative way. Either (i) a ``projection postulate'' (PP) is invoked based on an undefined notion of ``observer,'' or (ii) the entanglement is simply assumed to continue. Under (i), one cannot define the physical circumstances constituting an ``observer'' or ``observation''; this is the ``Heisenberg Cut'' or ``shifty split'' in which certain systems are taken as described by quantum theory while others are not, for no principled reason. Under (ii), one cannot say that any outcome actually occurred, at least not one in which the system can be taken as represented by the corresponding eigenstate, even if we don't know which one it is. The reason is as follows. For simplicity, let us suppose that the observable A has only two eigenstates (like the observables in the Proietti {\it et al} experiment), i.e., the composite state $|\Psi\rangle$ is: $$|\Psi\rangle = c_1 |\phi_1 , a_1\rangle + c_2 |\phi_2 , a_2\rangle \eqno(2) $$ \smallskip \noindent where we use abbreviated notation for the direct product. The state of the system S, as a component of the entangled pure state (2), is given by the reduced density matrix $\rho_S$. This is obtained by tracing the composite density matrix $\rho$ over the pointer basis.
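As a concrete illustration (a minimal numerical sketch with illustrative amplitudes of my own choosing, previewing the symbolic computation below), the partial trace can be carried out explicitly:

\begin{verbatim}
import numpy as np

# Partial trace of the two-outcome pointer-system pure state of Eq. (2).
# Amplitudes are illustrative; any c1, c2 with |c1|^2 + |c2|^2 = 1 will do.
c1, c2 = np.sqrt(0.3), np.sqrt(0.7)

# Basis ordering: |phi1 a1>, |phi1 a2>, |phi2 a1>, |phi2 a2>
psi = np.array([c1, 0.0, 0.0, c2])       # entangled state of Eq. (2)
rho = np.outer(psi, psi.conj())          # composite density matrix, Eq. (3)

# Trace out the pointer P (the first tensor factor) to obtain rho_S, Eq. (4)
rho_S = np.einsum('isit->st', rho.reshape(2, 2, 2, 2))

print(np.round(rho_S, 3))   # diag(|c1|^2, |c2|^2): no off-diagonal coherences
print("purity of rho  :", np.trace(rho @ rho).real)       # 1.0: composite is pure
print("purity of rho_S:", np.trace(rho_S @ rho_S).real)   # < 1: improper mixture
\end{verbatim}

The point previewed here is that although $\rho_S$ is diagonal, the composite state remains pure, and it is this that blocks the ignorance interpretation of the diagonal weights.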
In this case, the composite pure state density matrix $\rho$ is: $$\rho = |c_1|^2 | \phi_1, a_1\rangle\langle \phi_1, a_1| + c_2^*c_1 | \phi_1, a_1\rangle\langle \phi_2, a_2| + c_1^*c_2 | \phi_2, a_2\rangle\langle \phi_1, a_1| + |c_2|^2 |\phi_2,a_2\rangle\langle \phi_2, a_2 | \eqno(3) $$ \medskip \noindent and the reduced density matrix for the system is $$\rho_S = |c_1|^2 |a_1\rangle\langle a_1| + |c_2|^2 |a_2\rangle\langle a_2| \eqno (4)$$ \medskip This looks like a mixed state in which outcomes occur with the relevant Born probabilities. However, it is an improper mixed state, meaning that it cannot represent a situation in which P and S really possess an outcome represented by the eigenstates $|\phi_k\rangle$ and $|a_k\rangle$ while the probabilities just represent our ignorance about which outcome has occurred. For were that the case, P and S would be in the mixed state $$\rho = |c_1|^2 | \phi_1, a_1\rangle\langle \phi_1, a_1| + |c_2|^2 |\phi_2,a_2\rangle\langle \phi_2, a_2 | \eqno(5) $$ \bigskip \noindent That is, the off-diagonal entries, reflecting the interference obtaining in the pure state (3), are missing. Since the assumption that the system and pointer are really in well-defined eigenstates corresponding to an outcome contradicts the actual composite state, that assumption is false: a unitary interaction yielding a correlation between degrees of freedom, in which one ``copies'' the other, does not constitute the occurrence of a well-defined outcome. With this point in mind, we now turn to the specifics of the experiment. \bigskip \section{The Experiment} The experiment uses a source $S_0$ of Bell-entangled photons $a$ and $b$ that are prepared in the state $$ {| \bar \Psi \rangle}_{ab} = \frac{1}{\sqrt2} [\cos \frac{\pi}{8} (|h_a, v_b\rangle + |v_a, h_b\rangle) + \sin \frac{\pi}{8} (|h_a, h_b\rangle - | v_a, v_b \rangle )] \eqno(6) $$ \smallskip \noindent where $|h_{a/b}\rangle$ stands for ``$a/b$ is horizontally polarized'' and $|v_{a/b}\rangle$ stands for ``$a/b$ is vertically polarized.'' The setup is schematically illustrated in Figure 2 of Proietti {\it et al} (2019). The photons $a$ and $b$ are sent in opposite directions, to be ultimately detected by Alice and Bob respectively. Alice and Bob have a choice of measuring either a Bell-type observable $A_1$ ($B_1$) or a ``which polarization'' observable $A_0$ ($B_0$) by leaving in or removing a final beamsplitter. Between the source and the Alice/Bob detectors on each side are two additional sources of entangled photon pairs, $S_A$ and $S_B$. We now describe the situation on the ``Alice'' side; the same interactions occur on the ``Bob'' side. Source $S_A$ produces an entangled pair $\{\alpha, \alpha^{\prime}\}$ in the state $$|\Psi ^- \rangle = \frac {1}{\sqrt 2} [ |h\rangle _{\alpha^\prime} |v\rangle _\alpha - |v\rangle _{\alpha^\prime} |h\rangle _{\alpha}] \eqno(7) $$ \smallskip \noindent while $S_B$ produces the same state for $\{\beta, \beta^{\prime}\}$. Here, the photons $\alpha$ and $\beta$ are supposed to play the role of ``Alice's Friend'' and ``Bob's Friend'' respectively, while $\alpha^\prime$ and $\beta^\prime$ are alleged to ``herald'' the measurement purported to take place. In what follows, we will refer to $\alpha$ and $\beta$ collectively as ``Photon Friends*,'' or PF*s for short, where the asterisk denotes some question as to whether they really qualify as ``Friends'' in the sense of the original Wigner's Friend experiment.
The putative ``measurement'' by the PF*s consists of the establishment of a correlation between $\alpha$ and $a$, and between $\beta$ and $b$, which will be discussed in detail below. When the auxiliary ``heralding'' photons $\alpha^\prime$ and $\beta^\prime$ are detected by a neutral detector, this signals that the correlation has been established. \bigskip \section{Critique of Interpretation of the Experiment} The first thing to note about the portrayal of the PF*s' ``measurement'' interactions with the source photons {\it a} and {\it b} is that the authors overlook the fact that these photons are in an entangled state, and that the interaction with the PF*s just increases the entanglement by correlating $\alpha$ with $a$ and $\beta$ with $b$, respectively. This means that all the individual photons are in improper mixed states. The ``Friends'' are nothing more than pointers P as discussed in the previous section, and as we have already seen, correlated pointers alone do not yield actual outcomes. Yet the authors seem to suggest that they are in proper mixed states. Specifically, the Appendix contains the following statement and characterization of the interaction: \bigskip ``{\it Depending on the state of the incoming photon}, the operation performed by Alice's friend transforms the overall state as follows: $$ |h_a, \Psi ^- \rangle_{\alpha^\prime , \alpha} = \frac {1}{\sqrt 2} [ |h\rangle _a |h\rangle _{\alpha^\prime} |v\rangle _\alpha - |h\rangle _a |v\rangle _{\alpha^\prime} |h\rangle _\alpha] \rightarrow \frac{1}{2} |h\rangle_a |v\rangle_\alpha, $$ $$ |v_a, \Psi ^- \rangle_{\alpha^\prime , \alpha} = \frac {1}{\sqrt 2} [ |v\rangle _a |h\rangle _{\alpha^\prime} |v\rangle _\alpha - |v\rangle _a |v\rangle _{\alpha^\prime} |h\rangle _\alpha] \rightarrow \frac{1}{2} |v\rangle_a |h\rangle_\alpha \eqno (S3) '' $$ \bigskip In the excerpt above, I have italicized the problematic phrase that seems to suggest that the incoming photon {\it a} or {\it b} really is in a particular (but unknown) pure state, when that is not the case. In their equations (S3), which omit the entire entangled state of {\it a} and {\it b}, these transformations are presented as yielding well-defined outcomes for the source photons {\it a} and {\it b}, when in fact they do not. The actual transformation taking place upon the interaction with $\alpha$ is as follows: $$ {| \bar \Psi \rangle}_{ab} |\Psi^-\rangle_{\alpha^\prime, \alpha} \rightarrow \frac{1}{2\sqrt2} [\cos \frac{\pi}{8} (|h_a, v_\alpha , v_b\rangle + |v_a, h_\alpha, h_b\rangle) + \sin \frac{\pi}{8} (|h_a, v_\alpha , h_b\rangle - | v_a, h_\alpha , v_b \rangle )] \eqno(8) $$ \bigskip This is clearly still an entangled state, so it is unambiguous that there has been no collapse or reduction as a result of the unitary interaction between $\alpha$ and the source state. Applying the transformation for Bob's ``Friend'' $\beta$ then yields the authors' equation (S7), an entangled state whose Bell-state properties can be confirmed via Alice and Bob's observable $A_1 = B_1$. ((S7) is just the final version of (8) including the correlation of the $\beta$ states with $b$.) \smallskip Despite the fact that there has been no collapse or reduction as a result of the interaction of the PF*s with the source photons, the authors maintain that an outcome occurred at that point.
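This can be checked directly. The following minimal numerical sketch (illustrative only; the renormalization step reflects the sub-unit prefactor that Eq. (8) carries over from the heralding step) confirms that the state of Eq. (8) is pure and still entangled:

\begin{verbatim}
import numpy as np

# Check that the state of Eq. (8) is pure (no collapse) yet entangled.
# Qubit encoding: h -> 0, v -> 1; tensor order (a, alpha, b).
c, s = np.cos(np.pi/8), np.sin(np.pi/8)

def ket(a, alpha, b):
    v = np.zeros(8)
    v[4*a + 2*alpha + b] = 1.0
    return v

psi = c*(ket(0, 1, 1) + ket(1, 0, 0)) + s*(ket(0, 1, 0) - ket(1, 0, 1))
psi /= np.linalg.norm(psi)    # renormalize the heralded branch

rho = np.outer(psi, psi)
rho_b = np.einsum('ibid->bd', rho.reshape(4, 2, 4, 2))  # trace out a, alpha

print("purity of full state:", np.trace(rho @ rho).real)      # 1.0: pure
print("purity of photon b  :", np.trace(rho_b @ rho_b).real)  # 0.5: improper mixture
\end{verbatim}

A reduced purity below 1 for photon $b$, with the full state pure, is exactly the improper-mixture situation reviewed in the previous section.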
This sort of ambiguity about whether there is an outcome, how there could be an outcome, and even ``for whom'' there might be an outcome, is perhaps inevitable under the general assumption that quantum theory ``really'' is unitary-only, according to which even human beings must be quantum subsystems of much larger composite entangled states. The authors reveal this as their background assumption in discussing the original Wigner's Friend thought experiment, which involves a human being as the Friend. They say: \smallskip ``Concurrently however, the friend does always record a definite outcome, which suggests that the original superposition was destroyed and Wigner should not observe any interference. The friend can even tell Wigner that she recorded a definite outcome (without revealing the result), yet Wigner and his friend's respective descriptions remain unchanged.'' \smallskip The above says that ``the original superposition was destroyed'' but also that Wigner correctly describes the situation with a superposition: clearly a self-contradictory account. This kind of inconsistency is a feature of the ``shifty split'' of UOQM, in which collapse can be nothing more than a wholly mysterious, non-modeled process assumed to (somehow) occur based on observation by an ``external conscious observer.'' (Indeed the fact that ``external conscious observer'' cannot be defined constitutes the {\it reductio} nature of the WF experiment.) But there need be no inconsistency between the descriptions: if there really {\it were} non-unitary collapse as a result of the Friend's measurement, then Wigner would {\it not} be able to correctly model his system as remaining in a pure state. That is, had a definite outcome occurred along with collapse, the Friend would {\it not} be able to ``tell Wigner that she recorded a definite outcome without revealing the result'' and still have Wigner correctly describe his system by a pure state. Instead, Wigner's correct and experimentally verifiable description of the Friend and her system would be a proper mixed state, regardless of whether the Friend revealed anything (however uninformative) to Wigner or not. Thus, the idea of the non-physical, mysterious collapse allegedly attending a ``measurement'' interaction leads the authors to the self-contradictory position of supposing that collapse ``must have occurred'' at the level of $\alpha$ and $\beta$ even though they themselves must deny collapse in order to obtain the correct final state (S7), and their own data show that no collapse occurred. The inability of UOQM to define what counts as a ``measurement'' or ``observation'' yielding an outcome -- because none can under UOQM -- is the ultimate source of the confusion here. It is perhaps worthwhile to note in this regard that Baumann and Wolf (2018) discuss the idea that an outcome only occurs ``relative to a subsystem'' (which they term ``subjective collapse''). This is an attempt to preserve the idea that a subsystem defined as an ``observer'' may obtain an outcome that would theoretically lead to predictions inconsistent with those of the Wigner (super-observer) level, as noted above, yet such inconsistencies allegedly can never be empirically observed. Such a formulation appeals to the absence of a ``classical record'' in order to argue for the hidden nature of such inconsistencies. Yet the notion of a ``classical record'' itself depends on the unambiguous existence of measurement outcomes that, again, are never obtainable in the unitary-only formulation.
In any case, however, the claim that such inconsistencies must always remain hidden is untenable. A specific counterexample is provided in Kastner (2020b): an entangled subsystem degree of freedom defined as an ``observer,'' and to ``whom,'' based on that characterization, outcomes are attributed, could in principle yield an observable inconsistency with the outcomes of a ``super-observer.'' This counterexample demonstrates that attributing outcomes to entangled subsystems, even if designated as only ``relative'' to the subsystem and involving only ``subjective collapse,'' leads to empirical inconsistency signaling theoretical breakdown. In view of the importance of this issue, the counterexample will be repeated here. Let W and F be modeled as complex molecules with several excitable degrees of freedom subject to 2D state spaces. Let the system ``measured'' by F be labeled A, a spin-$\frac{1}{2}$ system, while F comprises two degrees of freedom labeled B and C. Now, recall that according to UOQM, a ``measurement'' is nothing more than the establishment of a correlation between different degrees of freedom. In this account, the F-level ``measurement'' of A is represented by a correlation between A and F's degree of freedom B. A is prepared in an equal superposition of outcomes `up' and `down'. C, F's communication degree of freedom, remains in its initial unexcited state $|0\rangle$ at this stage. According to the idea of a ``subjective collapse'' or ``relative outcome'' for F, after the ``measurement'' of A by B, A and F are either in the state $$ |\Psi\uparrow\rangle_{FA} = |\uparrow\rangle_A \otimes |\uparrow\rangle_B \otimes |0\rangle_C\eqno(9a)$$ or $$ |\Psi\downarrow\rangle_{FA} = |\downarrow\rangle_A \otimes |\downarrow\rangle_B \otimes |0\rangle_C\eqno(9b)$$ \smallskip \noindent with equal probability--i.e., they are in a mixed state. On the other hand, according to W, A and F end up in a pure Bell state with F's communication degree of freedom C along for the ride: $$ |\Psi \rangle_{FA} = |\Phi^+ \rangle |0 \rangle_C = \frac{1}{\sqrt 2} \bigl ( |\uparrow\rangle_A |\uparrow\rangle_B + |\downarrow\rangle_A |\downarrow\rangle_B \bigr ) \otimes |0\rangle_C\eqno(10)$$ Now, let W subject the B+A system to a measurement of the Bell observable for which the state $|\Phi^+\rangle$ is an eigenstate. W's experiment is accompanied by a signal to F as follows. An outcome finding the state $|\Phi^+\rangle_{FA}$ results in a photon being emitted to F to excite his communication degree of freedom C. According to W, F should receive that photon for every run of the experiment. This makes the inconsistency manifest, since according to F, his probability of receiving the photon is only 1/2, given his description of the situation by the mixed state (9a,b). Of course the practical logistics of carrying out this experiment are nontrivial, but nothing prevents it, in principle, under UOQM. Thus, the claim that one can attribute an outcome, however ``relative,'' to an entangled subsystem by calling it an ``observer'' leads to an in-principle observable inconsistency. It cannot be maintained that the resulting inconsistency is benignly ``hidden.'' Instead, theoretical breakdown results. The above considerations lead us to the Clauser-Horne-Shimony-Holt (CHSH) inequality that is violated by the data obtained in this experiment.
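The probability gap at the heart of the counterexample can be made concrete with a minimal numerical sketch (illustrative only; the factored-out degree of freedom C is omitted):

\begin{verbatim}
import numpy as np

# W's and F's predictions for W's measurement of the Bell observable.
# Basis ordering for A (x) B: |uu>, |ud>, |du>, |dd>.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # Bell state, Eq. (10)
P_bell = np.outer(phi_plus, phi_plus)                   # projector onto |Phi+>

rho_W = np.outer(phi_plus, phi_plus)   # W's description: pure Bell state

up_up = np.zeros(4); up_up[0] = 1.0    # |up>_A |up>_B,     Eq. (9a)
dn_dn = np.zeros(4); dn_dn[3] = 1.0    # |down>_A |down>_B, Eq. (9b)
rho_F = 0.5*np.outer(up_up, up_up) + 0.5*np.outer(dn_dn, dn_dn)

print("P(Phi+ | W):", np.trace(P_bell @ rho_W).real)  # 1.0: photon every run
print("P(Phi+ | F):", np.trace(P_bell @ rho_F).real)  # 0.5: photon half the runs
\end{verbatim}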
Clauser, Horne, Shimony and Holt (1969) originally formulated their inequality in terms of hidden variables, which are indeed forms of ``observer-independent facts.'' But in the current context, which does not involve any hidden variables, the inequality refers to the assumption that there were determinate measurement outcomes by the ``Friends,'' even though those putative outcomes did not collapse the superposition obtaining in the prepared Bell state. That is, the assumption of a ``relative outcome'' involving ``subjective collapse'' as defined by Baumann and Wolf (discussed above) arguably plays the role of a hidden variable, since it is assumed to obtain even while the quantum state does not reflect that any outcome occurred (being still an entangled pure state). Since there are no other hidden variables assumed in this experiment, and the only nonstandard assumption is that the PF*s precipitated an outcome even without objective collapse or reduction of the quantum state, the violation of the inequality by the experimental statistics is properly taken as refuting that assumption. This is reinforced by the fact that the application of the quantum formalism required to match the experimentally observed results does not collapse the quantum state between the putative `measurement' by the PF*s and the detections by Alice and Bob. Thus, this experiment, rather than demonstrating an antirealist conclusion such as ``different observers irreconcilably disagree about what happened in an experiment,'' confirms what Feyerabend and Hughes pointed out decades ago: quantum theory (when taken as unitary-only) does not allow one to ascribe a fact of the matter, corresponding to an eigenvalue of a subsystem observable, to a subsystem of an entangled system. A subsystem of an entangled system is therefore not an entity that can yield a well-defined ``fact'' corresponding to an outcome of an observable pertaining to that system. Something more is needed for an outcome (``fact'') to occur; specifically, physical collapse of the entangled (nonseparable) state to a single separable state. That has not occurred at the ``Friend'' level in this experiment. However, if the heralding photon were made into a pointer to the $h, v$ states, the final state (the authors' S7) would not be tenable even according to UOQM, due to traditional decoherence arguments (i.e., putative entanglement with the degrees of freedom correlating the heralding photon to the relevant states). Of course, that account remains inadequate, since there is still no justification for any definite outcome to arise under unitary-only dynamics. For further discussion of this point, see Kastner (2014, 2020a, 2020b). There is perhaps a more direct way to see the basic logical error being committed in the Proietti {\it et al} account. The error is a violation of an element of the truth table for a conditional statement, i.e., of {\it modus tollens}: if a statement Q is false, and if P implies Q, then P must be false. Specifically, Proietti {\it et al} obtain a conditional statement ``If P then Q,'' where P is ``Outcomes occur for subsystems of entangled states'' and Q is ``the CHSH inequality is satisfied.'' The conditional statement is obtained by deduction (when the state remains entangled and outcomes corresponding to eigenvalues of observables are nevertheless assumed for the subsystems) and therefore must be true. What their experiment does is to test the truth of Q, and it finds that in fact, Q is false.
Thus, the very first step in the analysis of the implications of this experiment should be (at least) to question P. However, Proietti {\it et al} never question P. Instead, they insist on P and draw sweeping ontological conclusions from it (and the same basic error is repeated widely in the recent literature on this topic). This approach arguably leads, to paraphrase Adrian Kent, to ``scientific ruin,'' since it sets a precedent of refusing to even consider the possibility that the falsification of the consequent of a deductively true conditional statement might indicate that the antecedent is false.\footnote{Kent (2010) was concerned about the excessive freedom to ``conclude anything we please'' based on arbitrary theorizing about the relation between mind states and brain states. In the present case, refusing to reject the antecedent of a deductively true conditional statement whose consequent has been falsified could similarly lead to being able to conclude anything we please.} The bottom line: establishment of a correlating entanglement, and verification of that entanglement at the level of the ``Friend,'' are not equivalent to ``an outcome occurred'' or ``a fact was obtained'' at the Friend level. The inequality (represented by Q above) expressing this erroneous assumption (P above) is thus violated by the experiment, which verifies the probabilistic predictions of quantum theory for the entangled state. Had reduction and a real outcome for the polarization observable occurred at the ``friend'' level (or even correlation of the heralding photon with the $h$ and $v$ states, as mentioned above), there would be no ongoing entanglement at the level of Alice and Bob, and that would be reflected in the statistics, which would confirm the loss of their prepared Bell state and its replacement by a mixed state (even if only an improper one under UOQM). \section{So how do we get outcomes?} It is perfectly possible (beyond UOQM) to get a definite outcome, and of course we can all testify to the existence of definite outcomes in the lab. Rather than take the latter empirical fact as license to mis-attribute definite outcomes to systems in improper mixed states when that is logically contradictory (yet a practice that is endemic under the assumption of UOQM), a more consistent approach is to take our empirical experience as strong evidence that non-unitarity is real, and that this is why we get actual measurement results. The present author has been studying a formulation of quantum theory called the Relativistic Transactional Interpretation (RTI), which is based on the direct-action theory of fields. RTI contains quantitatively well-defined physical non-unitarity that does not involve any {\it ad hoc} change to quantum theory, and is empirically equivalent, at the level of the Born probabilities, to standard unitary-only quantum theory. While it is not the purpose of this paper to advocate any particular alternative formulation of quantum theory, the interested reader may consult Kastner (2018), (2020a) and (2021) for details. These references explain why RTI does not need to make any {\it ad hoc} change to the Schr\"odinger evolution as in the better known ``spontaneous collapse'' models such as that of Ghirardi, Rimini and Weber (1986). For present purposes, it may be noted that under RTI, the decay rate $\Gamma(t)$ actually gives the probability of collapse, involving photon emission, at any time $t$ for the usual situation in which an excited atom is surrounded by absorbers.
Thus, any time there is a radiative process, collapse with respect to some observable has occurred. Under macroscopic conditions, the decay rates are very high and collapse is quite frequent, which constitutes the correspondence principle for the quantum/classical threshold under RTI. In the Proietti {\it et al} experiment, the detection at the ``friend'' level involves only a photon whose state is uncorrelated with the observable of interest; thus, while there is collapse, it is not with respect to that observable, and no outcome is established for it. However, collapse occurs with respect to the relevant observable at the ``Alice and Bob'' level, which is why Alice and Bob find definite outcomes confirming their Bell state. In Wigner's original thought experiment, without restriction to UOQM and under a direct measurement by the macroscopic Friend of the system presented to him, collapse could easily occur at that stage. (What counts as ``macroscopic,'' and why that level generally precipitates collapse, is discussed in Kastner (2018).) Then, even if the Friend ``merely'' signaled to Wigner that he saw a result without revealing the result, the system would still have been collapsed, and Wigner would find his system in a proper mixed state, not a pure state. Thus, we have to be careful about assuming that such a signal (``I see a result but I'm not revealing which'') preserves entanglement. In fact it does not, if reduction actually occurred at that point. On the other hand, if entanglement is preserved beyond that point, then nobody really ``saw a result,'' since there could be no well-defined result (as Feyerabend and Hughes point out). Thus any portrayal of a ``Friend'' as really seeing a result but not disclosing it, and yet continuing on as a component of an entangled state, is erroneous, since no definite outcome can occur under those circumstances.\footnote{Zukowski and Markiewicz (2021) argue, using decoherence arguments, that there is no measurement outcome at the ``Friend'' level in this experiment. However, decoherence alone is not sufficient to establish a measurement outcome; this is implied by Hughes' argument and is discussed in detail in Kastner (2014) and references therein. See also Kastner (2020a).} \section{Conclusion} It has been argued that the observed violation of the relevant CHSH inequality, rather than supporting claims such as ``observers see irreconcilable facts,'' demonstrates that it is a mistake to attribute an outcome or `fact' of the matter concerning any specific observable to a system that is in an improper mixed state, such as the PF*s in the Proietti {\it et al} experiment. The authors do not consider this interpretation of the violation of the CHSH inequality, because they do not question their claim that the PF*s did find an outcome. Yet, as we reviewed above, it is unambiguously incorrect to assert that the PF*s found an outcome, since that assumption violates quantum theory itself: a degree of freedom in an entangled pure state, like a PF*, is in an improper mixture that cannot be described by a definite but unknown observable eigenstate. The observed violation of the CHSH inequality expressing the incorrect assumption that an outcome occurred at the PF* level confirms this elementary fact. Another way to see that this is an error is to consider one of the electrons in a two-electron Bell state.
Yes, the electron's state is correlated to its partner's, but that is not justification for attributing any measurement outcome, i.e., value of an observable, to the entangled electron. This mistake is made by the authors when they say: \bigskip ``Recalling from Eq. (S4) how the friends' measurement results are encoded in their polarisation states\ldots '' \bigskip This is no more correct than asserting that a Bell state like $$|\Psi\rangle = \frac{1}{\sqrt 2} [|z\uparrow\rangle|z\downarrow\rangle - |z\downarrow\rangle|z\uparrow\rangle] \eqno (11) $$ \bigskip \noindent expresses an `encoding of measurement results.' Yes, the electron's state is correlated to its partner's, but that does not mean that any `information has been extracted,' nor that there can be any value of Z-spin -- i.e., any outcome -- attributed to the electron. Finally, the point of this paper is not to quibble about whether the internal photons $\alpha$ and $\beta$ count as ``observers'' or not, since UOQM is utterly helpless to say anything about what an ``observer'' could possibly be. If an ``observer'' is a system that can yield an outcome, then no system, not even a human being, can give rise to a well-defined outcome under UOQM. As we reviewed in Section 1, the mere copying of a quantum number pertaining to one degree of freedom by another degree of freedom (the definition of ``observer'' used by the authors) does not qualify as yielding an outcome, despite the language about ``extracting information.'' (This is just the measurement problem facing UOQM again.) Thus, our routine experience of definite outcomes remains at odds with UOQM. Nevertheless, it is quite clear from the data correctly predicted by the final uncollapsed state (the authors' (S7)) that there is no collapse at the level of the PF*s. The fact that they copied quantum numbers in a superposition, i.e., acted as pointers, does not establish any fact of the matter; there is no warrant to claim that they ``observed an outcome,'' since no outcome is possible according to quantum theory itself. That is, quantum theory itself does not permit the interpretation of the improper mixed state of a subsystem of a composite pure state as a situation in which an outcome occurred, ``relative'' or not. Therefore, it is not tenable to claim that there was a measurement outcome at the level of $\alpha$ and $\beta$. The ``heralding'' photons $\alpha^{\prime}$ and $\beta^{\prime}$ merely signal that a unitary interaction has occurred at that point, not that there was any outcome. The observed violation of the relevant CHSH inequality, rather than indicating that there are irreconcilable facts among observers, demonstrates the falsity of the assumption that there was a ``fact'' (outcome) at the internal Friend level. Thus, rather than a test of local observer independence (as stated in the title), this experiment is a test of the assumption that there is a measurement outcome based only on a non-decohering unitary correlation among photons. That assumption fails the test. \newpage \section{References} Baumann, V. and Wolf, S. (2018). ``On Formalisms and Interpretations,'' {\it Quantum 2}, 99. Bong, K.-W. et al (2020). ``A strong no-go theorem on the Wigner's friend paradox,'' {\it Nature Physics 16}, 1199-1205. Clauser, J., Horne, M., Shimony, A. and Holt, R. (1969). {\it Phys. Rev. Lett. 23}, 880. Feyerabend, P. K. (1962). ``On the Quantum Theory of Measurement.'' In K\"orner, S. (ed.) (1962). {\it Observation and Interpretation in the Philosophy of Physics}. New York: Dover, pp.
121-130. Frauchiger, D. and Renner, R. (2018). ``Quantum theory cannot consistently describe the use of itself,'' {\it Nature Communications 9}, Article number: 3711. Ghirardi, G.C., Rimini, A. and Weber, T. (1986). {\it Phys. Rev. D 34}, 470. Hughes, R. I. G. (1989). {\it The Structure and Interpretation of Quantum Mechanics}. Cambridge: Harvard University Press. Kastner, R. E. (2012). {\it The Transactional Interpretation of Quantum Mechanics: The Reality of Possibility.} Cambridge: Cambridge University Press. Kastner, R. E. (2014). ``\,`Einselection' of pointer observables: the new H-theorem?'' {\it Stud. Hist. Philos. Mod. Phys. 48}, 56-58. Kastner, R. E. (2018). ``On the Status of the Measurement Problem: Recalling the Relativistic Transactional Interpretation,'' {\it Int'l Jour. Quan. Foundations 4}, 1:128-141. Kastner, R. E. (2020a). ``Decoherence in the Transactional Interpretation,'' {\it Int'l Jour. Quan. Foundations 6}, 2:24-39. Kastner, R. E. (2020b). ``Unitary-Only Quantum Theory Cannot Consistently Describe the Use of Itself: On the Frauchiger-Renner Paradox,'' {\it Foundations of Physics 50}, 441-456. Kastner, R. E. (2021). ``The Relativistic Transactional Interpretation and The Quantum Direct-Action Theory,'' preprint. https://arxiv.org/abs/2101.00712. (This material is excerpted from the forthcoming second edition of Kastner (2012).) Kent, A. (2010). ``One world versus many: the inadequacy of Everettian accounts of evolution, probability, and scientific confirmation,'' in {\it Many Worlds? Everett, Quantum Theory and Reality}, S. Saunders, J. Barrett, A. Kent and D. Wallace (eds), Oxford University Press. Proietti, M. et al (2019). ``Experimental test of local observer independence,'' {\it Science Advances 5}, no. 9. DOI: 10.1126/sciadv.aaw9832. Zukowski, M. and Markiewicz, M. (2021). ``Physics and Metaphysics of Wigner's Friends: Even Performed Premeasurements Have No Results,'' {\it Phys. Rev. Lett. 126} (13). \end{document}
{ "timestamp": "2021-08-19T02:10:07", "yymm": "2105", "arxiv_id": "2105.01773", "language": "en", "url": "https://arxiv.org/abs/2105.01773" }
\section{Introduction} Throughout this paper, all graphs are finite and simple. We denote the set of positive integers by $\mathbb{N}$, and for every $k\in \mathbb{N}$, we define $[k]=\{1,\ldots, k\}$. Let $G$ be a graph. We denote by $V(G)$ and $E(G)$ the vertex set and the edge set of $G$, respectively. By a \textit{clique} in $G$ we mean a set of pairwise adjacent vertices, and a \textit{stable set} in $G$ is a set of pairwise nonadjacent vertices. For every $d\in \mathbb{N}$ and every vertex $v\in V(G)$, we denote by $N^d_G(v)$ the set of all vertices in $G$ at distance $d$ from $v$, and by $N^d_G[v]$ the set of all vertices in $G$ at distance at most $d$ from $v$. In particular, we write $N_G(v)$ for $N_G^1(v)$, which is the set of neighbors of $v$ in $G$, and $N_G[v]$ for $N^1_G[v]=N_G(v)\cup \{v\}$. Also, for every $X\subseteq V(G)$, we define $N^d_G[X]=\cup_{x\in X}N^d_G[x]$ and $N^d_G(X)=N^d_G[X]\setminus X$. Again, we write $N_G(X)$ for $N_G^1(X)$ and $N_G[X]$ for $N^1_G[X]$. For every $Z$ which is either a vertex or a subset of vertices of $G$, we write $G-Z$ for the graph obtained from $G$ by removing $Z$. A graph $H$ is an \emph{induced subgraph} of a graph $G$ if $H$ is isomorphic to $G-X$ for some $X\subseteq V(G)$, and otherwise $G$ is \emph{$H$-free}. Also, for every graph $G$ and every $X\subseteq V(G)$, we denote \textit{the subgraph of $G$ induced on $X$}, that is, $G-(V(G)\setminus X)$, by $G|X$. For $r\in \mathbb{N}$ and graphs $G$ and $H$, we denote by $G+H$ the disjoint union of $G$ and $H$, and by $rH$ the union of $r$ pairwise disjoint copies of $H$. For all $t\in \mathbb{N}$, we use $P_t$ to denote the path on $t$ vertices. Let $G$ be a graph and $k\in \mathbb{N}$. By a $k$\textit{-coloring} of $G$, we mean a function $\phi: V(G) \rightarrow [k]$. A coloring $\phi$ of $G$ is said to be \textit{proper} if $\phi(u) \neq \phi(v)$ for every edge $uv \in E(G)$. In other words, $\phi$ is proper if and only if for every $i\in [k]$, $\phi^{-1}(i)$ is a stable set in $G$. We say $G$ is \emph{$k$-colorable} if $G$ has a proper $k$-coloring. For fixed $k\in \mathbb{N}$, the \textsc{$k$-Coloring Problem} asks, given a graph $G$, whether $G$ is $k$-colorable. A \emph{$k$-list-assignment} of $G$ is a map $L:V(G)\rightarrow 2^{[k]}$. For $v\in V(G)$, we refer to $L(v)$ as the \textit{list of} $v$. Also, for every $i\in [k]$, we define $L^{(i)}=\{v\in V(G):i\in L(v)\}$. An \emph{$L$-coloring} of $G$ is a proper $k$-coloring $\phi$ of $G$ with $\phi(v) \in L(v)$ for all $v \in V(G)$. For example, if $L(v)=\emptyset$ for some $v\in V(G)$, then $G$ admits no $L$-coloring. Also, if $V(G)=\emptyset$, then $G$ vacuously admits an $L$-coloring for every $k$-list-assignment $L$. For fixed $k\in \mathbb{N}$, the \textsc{List-$k$-Coloring Problem} is to decide, given an instance $(G,L)$ consisting of a graph $G$ and a $k$-list-assignment $L$ for $G$, whether $G$ admits an $L$-coloring. Note that the \textsc{$k$-Coloring Problem} is in fact the \textsc{List-$k$-Coloring Problem} restricted to instances $(G,L)$ where $L(v)=[k]$ for every $v\in V(G)$. The \textsc{$k$-Coloring Problem}, and hence also the \textsc{List-$k$-Coloring Problem}, is well known to be \textsf{NP}-complete for all $k\geq 3$ \cite{karp}. This motivates studying the complexity of these problems restricted to graphs with a fixed forbidden induced subgraph, that is, $H$-free graphs for some fixed graph $H$.
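To fix ideas, the decision problem can be stated operationally as a brute-force search over all list-respecting colorings; the point of the results surveyed below is precisely to beat this exponential baseline on restricted graph classes. The following sketch is illustrative only, with a hypothetical helper name of our own choosing:

\begin{verbatim}
from itertools import product

# Brute-force List-k-Coloring: G maps each vertex to its set of neighbors,
# L maps each vertex to its list of allowed colors.
def has_L_coloring(G, L):
    vertices = list(G)
    for choice in product(*(L[v] for v in vertices)):
        phi = dict(zip(vertices, choice))
        # phi is proper iff the endpoints of every edge get distinct colors
        if all(phi[u] != phi[v] for u in G for v in G[u]):
            return True
    return False   # reached when some list is empty or no choice is proper

# A triangle is not L-colorable from lists {1,2}, {1,2}, {1,2},
# but becomes L-colorable if one list is changed to {3}.
G = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
print(has_L_coloring(G, {'a': [1, 2], 'b': [1, 2], 'c': [1, 2]}))  # False
print(has_L_coloring(G, {'a': [1, 2], 'b': [1, 2], 'c': [3]}))     # True
\end{verbatim}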
As a first step in narrowing down the possibilities for $H$, the following two theorems show that there is virtually no hope for a polynomial-time algorithm unless $H$ is a disjoint union of paths. \begin{theorem}[Kami\'{n}ski and Lozin \cite{CycleFree}] \label{thm:cycle} For all $k\geq 3$, the \textsc{$k$-Coloring Problem} restricted to $H$-free graphs is \textsf{NP}-complete if $H$ contains a cycle. \end{theorem} \begin{theorem}[Holyer \cite{ClawFree}] \label{thm:claw} For all $k\geq 3$, the \textsc{$k$-Coloring Problem} restricted to $H$-free graphs is \textsf{NP}-complete if $H$ contains a `claw' (a vertex with three pairwise nonadjacent neighbors). \end{theorem} Accordingly, an extensive body of work has been devoted to showing that excluding short paths (or their disjoint unions) as an induced subgraph makes the \textsc{$k$-Coloring} and \textsc{List-$k$-Coloring Problems} easier. Here is a list of known results in this direction. \begin{theorem}[Ho{\`a}ng, Kami{\'n}ski, Lozin, Sawada and Shu \cite{kP5}] For all fixed $k$, the \textsc{List-$k$-Coloring Problem} restricted to $P_5$-free graphs can be solved in polynomial time. \end{theorem} \begin{theorem} [Chudnovsky, Spirkl and Zhong \cite{4P6}] The \textsc{$4$-Coloring Problem} restricted to $P_6$-free graphs can be solved in polynomial time. \end{theorem} \begin{theorem} [Bonomo, Chudnovsky, Maceli, Schaudt, Stein and Zhong \cite{3P7}] The \textsc{List-$3$-Coloring Problem} restricted to $P_7$-free graphs can be solved in polynomial time. \end{theorem} \begin{theorem} [Chudnovsky, Huang, Spirkl and Zhong \cite{L3P6+rP3}] For every $r\in \mathbb{N}$, the \textsc{List-$3$-Coloring Problem} restricted to $(P_6+rP_3)$-free graphs can be solved in polynomial time. \end{theorem} \begin{theorem} [Couturier, Golovach, Kratsch and Paulusma \cite{LkP5+rP1&L5P4+P2-NP}] \label{thm:p5rp1} For all $r, k\in \mathbb{N}$, the \textsc{List-$k$-Coloring Problem} restricted to $(P_5+rP_1)$-free graphs can be solved in polynomial time. \end{theorem} \begin{theorem} [Golovach, Johnson, Paulusma, Song \cite{golovachsurvey}] For all $k, r\in \mathbb{N}$, the \textsc{List-$k$-Coloring Problem} restricted to $rP_2$-free graphs can be solved in polynomial time. \end{theorem} \begin{theorem} [Broersma, Golovach, Paulusma and Song \cite{3*P2+P4&3*rP3}]\label{3colrP3} For every $r \in \mathbb{N}$, the \textsc{$3$-Coloring Problem} restricted to $rP_3$-free graphs can be solved in polynomial time. \end{theorem} On the other hand, the following hardness results are known. \begin{theorem} [Huang \cite{5P6&4P7-NP}] \label{thm:p6} The \textsc{$5$-Coloring Problem} restricted to $P_6$-free graphs and the \textsc{$4$-Coloring Problem} restricted to $P_7$-free graphs are both \textsf{NP}-complete. \end{theorem} \begin{theorem} [Golovach, Paulusma and Song \cite{L4P6-NP}] \label{thm:listp6} The \textsc{List-$4$-Coloring Problem} restricted to $P_6$-free graphs is \textsf{NP}-complete. \end{theorem} \begin{theorem} [Couturier, Golovach, Kratsch and Paulusma \cite{LkP5+rP1&L5P4+P2-NP}] \label{thm:p4p2} The \textsc{List-$5$-Coloring Problem} restricted to $(P_4+P_2)$-free graphs is \textsf{NP}-complete. \end{theorem} \begin{theorem} [Chudnovsky, Huang, Spirkl and Zhong \cite{L3P6+rP3}] \label{thm:p5p2} The \textsc{$5$-Coloring Problem} restricted to $(P_5+P_2)$-free graphs is \textsf{NP}-complete. \end{theorem} The complexity of the \textsc{$3$-Coloring Problem} restricted to $P_t$-free graphs for $t\geq 8$ is still open.
Indeed, for connected $H$, this is the only case where the complexity of the \textsc{$k$-Coloring Problem} restricted to $H$-free graphs is not known. In this paper, we prove the following. \begin{theorem} \label{thm:main} For every $r \in \mathbb{N}$, the \textsc{List-$5$-Coloring Problem} restricted to $rP_3$-free graphs can be solved in polynomial time. \end{theorem} In addition to extending Theorem \ref{3colrP3}, this completely classifies the complexity of the \textsc{List-$5$-Coloring Problem} restricted to $H$-free instances, as follows. \begin{theorem} \label{thm:dichotomy} Let $H$ be a graph. Assuming \textsf{P}$\neq$\textsf{NP}, the \textsc{List-$5$-Coloring Problem} restricted to $H$-free graphs can be solved in polynomial time if and only if $H$ is an induced subgraph of $rP_3$ or $P_5+rP_1$ for some $r \in \mathbb{N}$. \end{theorem} \begin{proof}[Proof of Theorem \ref{thm:dichotomy} assuming Theorem \ref{thm:main}.] If $H$ is an induced subgraph of $rP_3$, then the result follows from Theorem \ref{thm:main}, and if $H$ is an induced subgraph of $P_5+rP_1$, then the result follows from Theorem \ref{thm:p5rp1}. So we may assume that neither is the case. If $H$ is not a disjoint union of paths, then either $H$ is not a forest, in which case the result follows from Theorem \ref{thm:cycle}, or $H$ is a forest with a vertex of degree at least three, and so the result follows from Theorem \ref{thm:claw}. Therefore, we may assume that $H$ is a disjoint union of paths. Since $H$ is not an induced subgraph of $rP_3$ for any $r\in \mathbb{N}$, there is a connected component $C$ of $H$ which is isomorphic to $P_t$ for some $t\geq 4$. If $t\geq 6$, then the result follows from Theorem \ref{thm:p6}. So we may assume that $t\in \{4,5\}$. Now, since $H$ is not an induced subgraph of $P_5+rP_1$ for any $r\in \mathbb{N}$, it follows that $H-V(C)$ contains an edge. But now $H$ contains $P_4+P_2$ as an induced subgraph, and thus the result follows from Theorem \ref{thm:p4p2}. This completes the proof. \end{proof} As a hardness counterpart to Theorem \ref{thm:main}, using a reduction similar to the one in \cite{LkP5+rP1&L5P4+P2-NP}, we also show that: \begin{theorem} \label{thm:hardness} The \textsc{$k$-Coloring Problem} restricted to $rP_4$-free graphs is \textsf{NP}-complete for all $k\geq 5$ and $r\geq 2$. \end{theorem} In view of Theorem \ref{thm:p5p2}, this leaves open the complexity of the \textsc{$5$-Coloring Problem} restricted to $H$-free graphs only in the case where some connected component $C$ of $H$ is isomorphic to $P_4$ and $H-V(C)$ is an induced subgraph of $rP_3$ containing at least one edge. The remainder of this paper is organized as follows. In Sections \ref{frugsec}-\ref{23sec}, we prepare the tools required for the proof of Theorem \ref{thm:main}. In Section \ref{mainsec}, we prove Theorem \ref{thm:main}, and finally in Section \ref{sec:hardness}, we prove Theorem \ref{thm:hardness}. It is worth noting that the main results of Sections \ref{frugsec} and \ref{goodsec}, namely Theorems \ref{frugality} and \ref{goodp3theorem}, respectively, are in fact proved for the \textsc{List-$k$-Coloring Problem} restricted to $rP_3$-free graphs with arbitrary $k$. However, our results from Section \ref{23sec} fail to extend to this general setting for $k\geq 6$. On the other hand, we were not able to decide whether there exists $k\in \mathbb{N}$ for which the \textsc{List-$k$-Coloring Problem} restricted to $rP_3$-free graphs is \textsf{NP}-complete.
\section{Refinements, profiles and frugality}\label{frugsec} We begin by introducing the notions of a \textit{refinement} and a \textit{profile} as a unified terminology we employ pervasively in this paper. Let $k\in \mathbb{N}$ and $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem}. By a $(G,L)$\textit{-refinement} we mean an instance $(G',L')$ of the \textsc{List-$k$-Coloring Problem} where $G'$ is an induced subgraph of $G$ and $L'(v)\subseteq L(v)$ for all $v\in V(G')$. The $(G,L)$-refinement $(G',L')$ is \textit{spanning} if $G'=G$. Also, a $(G,L)$\textit{-profile} is a set of $(G,L)$-refinements. A $(G,L)$-profile is \textit{spanning} if all its elements are spanning. A large portion of this work deals with how the feasibility of an instance of the \textsc{List-$k$-Coloring Problem} is tied to the feasibility of certain refinements. For example, the following is easily observed. \begin{lemma}\label{spanspread} Let $k\in \mathbb{N}$ be fixed and $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem} and $(G,L')$ be a spanning $(G,L)$-refinement. If $G$ admits an $L'$-coloring, then $G$ admits an $L$-coloring. \end{lemma} Let $k\in \mathbb{N}$ and $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem}. An $L$-coloring $\phi$ of $G$ is said to be \textit{frugal} if for all $v\in V(G)$ and every $i\in L(v)$, $v$ has at most one neighbor in $\phi^{-1}(i)$. This could be viewed as a list-variant of the so-called \textit{frugal coloring} introduced by Hind et al.\ \cite{frugal}. Also, it is crucially different from another list-variant of frugal coloring studied in \cite{listfrugal}, where the restriction applies to all colors, not just those in the list of $v$. The following lemma is straightforward to verify. \begin{lemma}\label{frugalspread} Let $k\in \mathbb{N}$ be fixed and $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem}, $(G',L')$ be a $(G,L)$-refinement, and $\phi$ be a frugal $L$-coloring of $G$. If $\phi(v)\in L'(v)$ for every $v\in V(G')$ (that is, $\phi|_{V(G')}$ is an $L'$-coloring of $G'$), then it is a frugal $L'$-coloring of $G'$. \end{lemma} Note that for an instance $(G,L)$ of the \textsc{List-$k$-Coloring Problem}, if $|L(v)|=1$ for some $v\in V(G)$, then we may remove $v$ from $G$ and also remove the single color in $L(v)$ from the lists of all neighbors of $v$ in $G$, obtaining an instance with the same state of feasibility. To remain precise, let us state this simple observation formally, as follows. \begin{theorem}\label{kill1} Let $k\in \mathbb{N}$ be fixed and $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem}. Then there exists a $(G,L)$-refinement $(\hat{G},\hat{L})$ with the following specifications. \begin{itemize} \item $(\hat{G},\hat{L})$ can be computed from $(G,L)$ in time $\mathcal{O}(|V(G)|^2)$. \item $|\hat{L}(v)|\neq 1$ for all $v\in V(\hat{G})$. \item If $G$ admits a frugal $L$-coloring, then $\hat{G}$ admits a frugal $\hat{L}$-coloring. \item If $\hat{G}$ admits an $\hat{L}$-coloring, then $G$ admits an $L$-coloring. \end{itemize} \end{theorem} \begin{proof} The proof is easy, so we only give a sketch and leave it to the reader to check the details. One can find in time $\mathcal{O}(|V(G)|)$ a vertex $v\in V(G)$ with $|L(v)|=1$, or confirm that there is none. In the former case, we replace $G$ by $G-v$ and $L(w)$ by $L(w)\setminus L(v)$ for every vertex $w\in N_G(v)$. In the latter case, we output the current instance and stop.
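In code, the full iteration of this step reads as follows (a minimal Python sketch under our own representation of instances by adjacency sets and color sets; the names are illustrative and the sketch is not part of the formal argument):

\begin{verbatim}
def eliminate_singletons(adj, L):
    # Repeatedly find a vertex v whose list consists of a single color,
    # remove v from the graph, and delete its unique color from the
    # lists of its neighbors.  Here adj maps each vertex to its set of
    # neighbors and L maps each vertex to its set of allowed colors.
    # Lists may become empty; only lists of size one are eliminated.
    adj = {v: set(N) for v, N in adj.items()}  # work on copies
    L = {v: set(C) for v, C in L.items()}
    while True:
        v = next((u for u in adj if len(L[u]) == 1), None)
        if v is None:
            return adj, L  # no list of size one remains; output
        (c,) = L[v]
        for w in adj[v]:
            L[w].discard(c)   # neighbors of v can no longer use c
            adj[w].remove(v)
        del adj[v], L[v]      # remove v itself
\end{verbatim}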
Applying the same procedure iteratively, it is straightforward to check that we obtain a $(G,L)$-refinement $(\hat{G},\hat{L})$ satisfying Theorem \ref{kill1}. This completes the proof. \end{proof} The main goal of this section, though, is to establish a reduction from list-coloring to frugal list-coloring restricted to $rP_3$-free graphs. To achieve this, we need the main result of \cite{HGL}, the statement of which calls for a few definitions. A \textit{hypergraph} $H$ is an ordered pair $(V(H),E(H))$ where $V(H)$ is a finite set of vertices and $E(H)$ is a collection of nonempty subsets of $V(H)$, usually referred to as \textit{hyperedges}. A \textit{matching} in $H$ is a set of pairwise disjoint hyperedges, and a \textit{vertex-cover} in $H$ is a set of vertices meeting every hyperedge. We denote by $\nu(H)$ the maximum size of a matching in $H$, and by $\tau(H)$ the minimum size of a vertex-cover in $H$. Also, we denote by $\Lambda(H)$ the maximum $k\geq 2$ for which there exist hyperedges $e_1,\ldots,e_k\in E(H)$ with the following property. For all distinct $i,j\in [k]$, there exists a vertex $v_{i,j}\in e_i\cap e_j$ which belongs to no other hyperedge among $e_1,\ldots, e_k$. If there is no such $k$ (that is, if the elements of $E(H)$ are mutually disjoint), then we set $\Lambda(H)=2$. \begin{theorem}[Ding, Seymour and Winkler \cite{HGL}]\label{HGL} For every hypergraph $H$, we have \[\tau(H) \leq 11\Lambda(H)^2(\Lambda(H)+\nu(H)+3)\binom{\Lambda(H) + \nu(H)}{\nu(H)}^2.\] \end{theorem} We apply Theorem \ref{HGL} to prove the following. \begin{lemma}\label{biplem} Let $r\in \mathbb{N}$ and $\eta(r)=11(r+1)^2(2r+3)\binom{2r}{r-1}^2$. Let $G$ be an $rP_3$-free graph and $A,B$ be two disjoint stable sets in $G$, such that every vertex in $B$ has at least two neighbors in $A$. Then there exists $S\subseteq A$ with $|S|\leq \eta(r)$ such that every vertex in $B$ has a neighbor in $S$. \end{lemma} \begin{proof} For every vertex $b\in B$, let $A(b)=N_G(b)\cap A$. Let $H$ be the hypergraph with $V(H)=A$ and $E(H)=\{A(b): b\in B\}$. \sta{\label{nubndd}We have $\nu(H)\leq r-1$.} Suppose not. Let $b_1,\ldots, b_r\in B$ be distinct such that the hyperedges $A(b_1),\ldots, A(b_r)$ of $H$ are pairwise disjoint. By the assumption, for each $i\in [r]$, there exist two distinct vertices $a_i,c_i\in A(b_i)$. But then $G|\{a_i,b_i,c_i:i\in [r]\}$ is isomorphic to $rP_3$, a contradiction. This proves \eqref{nubndd}. \sta{\label{lambndd} We have $\Lambda(H)\leq r+1$.} For otherwise $\Lambda(H)\geq r+2\geq 3$, and so there exist distinct vertices $b_1,\ldots, b_{r+2}\in B$ with the following property. For all distinct $i,j\in [r+2]$, there exists a vertex $c_{i,j}\in A(b_i)\cap A(b_j)$ which belongs to no other set among $A(b_1), \ldots, A(b_{r+2})$. But then $G|\{b_i,c_{i,r+1},c_{i,r+2}:i\in [r]\}$ is isomorphic to $rP_3$, a contradiction. This proves \eqref{lambndd}.\vspace*{3mm} From \eqref{nubndd} and \eqref{lambndd} combined with Theorem \ref{HGL}, we obtain $\tau(H)\leq 11(r+1)^2(2r+3)\binom{2r}{r-1}^2=\eta(r)$, and so $H$ has a vertex-cover of size at most $\eta(r)$. In other words, there exists $S\subseteq A$ with $|S|\leq \eta(r)$ such that for every vertex $b\in B$, $S\cap A(b)\neq \emptyset$; that is, every vertex in $B$ has a neighbor in $S$. This completes the proof of Lemma \ref{biplem}. \end{proof} Now we can prove the main theorem of this section. \begin{theorem}\label{frugality} For all fixed $k,r\in \mathbb{N}$, there exists $\pi(k,r)\in \mathbb{N}$ with the following property.
Let $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem} where $G$ is $rP_3$-free. Then there exists a spanning $(G,L)$-profile $\Pi(G,L)$ with the following specifications. \begin{itemize} \item \sloppy $|\Pi(G,L)|\leq \mathcal{O}(|V(G)|^{\pi(k,r)})$ and $\Pi(G,L)$ can be computed from $(G,L)$ in time $\mathcal{O}(|V(G)|^{\pi(k,r)})$. \item If $G$ admits an $L$-coloring, then for some $(G,L')\in \Pi(G,L)$, $G$ admits a frugal $L'$-coloring. \end{itemize} \end{theorem} \begin{proof} Let $\mathcal{S}$ be the set of all $k$-tuples $(S_1,\ldots, S_k)$ of subsets of $V(G)$ where \begin{itemize} \item[(S1)] $S_i\subseteq L^{(i)}$ and $|S_i|\leq (k-1)\eta(r)$ for all $i\in [k]$; \item[(S2)] $S_i$ is a stable set for all $i\in [k]$; and \item[(S3)] $S_i\cap S_j=\emptyset$ for all distinct $i,j\in [k]$. \end{itemize} For each $S=(S_1,\ldots, S_k)\in \mathcal{S}$, we define a $k$-list-assignment $L_S$ of $G$ as follows. Let $v \in V(G)$. \begin{itemize} \item[(L1)] If $v\in S_i$ for some $i\in [k]$, then let $L_S(v)=\{i\}$. \item[(L2)] Otherwise, if $v\in V(G)\setminus (\bigcup_{i=1}^kS_i)$, then let $L_S(v)=L(v)\setminus \{i\in [k]: N_G(v)\cap S_i\neq \emptyset\}$. \end{itemize} This definition immediately yields the following.\\ \sta{\label{inotin}For all $S=(S_1,\ldots, S_k)\in \mathcal{S}$, $i\in [k]$ and $v\in V(G)\setminus S_i$ with a neighbor in $S_i$, we have $i\notin L_S(v)$.} Note that for every $S\in \mathcal{S}$, $(G,L_S)$ is a spanning $(G,L)$-refinement. Consider the spanning $(G,L)$-profile $\Pi(G,L)=\{(G,L_S):S\in \mathcal{S}\}$.\\ \sloppy\sta{\label{frugrun}$|\Pi(G,L)|\leq \mathcal{O}(|V(G)|^{k(k-1)\eta(r)})$ and $\Pi(G,L)$ can be computed from $(G,L)$ in time $\mathcal{O}(|V(G)|^{k(k-1)\eta(r)+2})$.} Let $\mathcal{T}$ be the set of all $k$-tuples $(S_1,\ldots, S_k)$ of subsets of $V(G)$ satisfying (S1). Then for each $i\in [k]$, there are at most $((k-1)\eta(r)+1)|V(G)|^{(k-1)\eta(r)}$ possibilities for $S_i$. As a result, we have $|\mathcal{T}|\leq ((k-1)\eta(r)+1)^k|V(G)|^{k(k-1)\eta(r)}$, which along with $|\Pi(G,L)|=|\mathcal{S}|$ and $\mathcal{S}\subseteq \mathcal{T}$ proves the first assertion. For the second, it is straightforward to observe that the elements of $\mathcal{T}$ can be enumerated in time $\mathcal{O}(|\mathcal{T}|)$. Then, for each $S=(S_1,\ldots, S_k)\in \mathcal{T}$, one can check in constant time whether $S$ satisfies (S2) and (S3). Thus, $\mathcal{S}$ can be computed in time $\mathcal{O}(|\mathcal{T}|)$. Also, for each $S\in \mathcal{S}$ and every $v\in V(G)$, it is readily seen from (L1) and (L2) that $L_S(v)$ can be computed in time $\mathcal{O}(|V(G)|)$. Therefore, $\Pi(G,L)$ can be computed from $(G,L)$ in time $\mathcal{O}(|\mathcal{T}||V(G)|^2)= \mathcal{O}(|V(G)|^{k(k-1)\eta(r)+2})$. This proves \eqref{frugrun}.\vspace*{3mm} Assuming $\eta(r)$ to be as in Lemma \ref{biplem}, the next statement follows directly from Lemma \ref{biplem} applied to $G$ and the stable sets $A=\phi^{-1}(i)$ and $B=B_{i,j}$ of $G$. \sta{\label{applybiplem}Let $\phi$ be an $L$-coloring of $G$ and $i,j\in [k]$ be distinct. Let $B_{i,j}$ be the set of all vertices in $\phi^{-1}(j)$ with at least two neighbors in $\phi^{-1}(i)$. Then there exists $S_{i,j}\subseteq \phi^{-1}(i)$ with $|S_{i,j}|\leq \eta(r)$ such that every vertex in $B_{i,j}$ has a neighbor in $S_{i,j}$.} \sta{\label{ordtofrug}If $G$ admits an $L$-coloring, then for some $S\in \mathcal{S}$, $G$ admits a frugal $L_S$-coloring.} Let $\phi$ be an $L$-coloring of $G$.
For distinct $i,j\in [k]$, let $B_{i,j}$ be the set of all vertices in $\phi^{-1}(j)$ with at least two neighbors in $\phi^{-1}(i)$. By \eqref{applybiplem}, there exists $S_{i,j}\subseteq \phi^{-1}(i)$ with $|S_{i,j}|\leq \eta(r)$ such that every vertex in $B_{i,j}$ has a neighbor in $S_{i,j}$. For each $i\in [k]$, let $S_i=\bigcup_{j\in [k],j\neq i}S_{i,j}$. Then from \eqref{applybiplem}, we have $S_i\subseteq \phi^{-1}(i)\subseteq L^{(i)}$, $|S_i|\leq (k-1)\eta(r)$ and $S_i\cap S_j\subseteq \phi^{-1}(i)\cap \phi^{-1}(j)= \emptyset$ for every $j\in [k]\setminus \{i\}$. It follows that $S=(S_1,\ldots,S_k)$ satisfies (S1), (S2) and (S3), and so $S\in \mathcal{S}$. Let $L_S$ be the corresponding list-assignment defined in (L1) and (L2). We claim that $\phi$ is a frugal $L_S$-coloring of $G$. To see this, note that being an $L$-coloring, $\phi$ is proper. In addition, for every $v\in V(G)$, if $v\in S_i\subseteq \phi^{-1}(i)$ for some $i\in [k]$, then by (L1), we have $L_S(v)=\{i\}=\{\phi(v)\}$. Otherwise, if $v\in V(G)\setminus (\bigcup_{i=1}^k S_i)$, then since $\phi^{-1}(\phi(v))$ is a stable set of $G$ containing $S_{\phi(v)}\cup \{v\}$, $v$ has no neighbor in $S_{\phi(v)}$, and so by (L2), we have $\phi(v)\in L_S(v)$. As a result, we have $\phi(v)\in L_S(v)$ for every $v\in V(G)$, and $\phi$ is in fact an $L_S$-coloring. It remains to argue the frugality of $\phi$. For each $i\in [k]$, let $B_i=\bigcup_{j\in [k], j\neq i}B_{i,j}$; that is, $B_i$ is the set of all vertices in $G$ with at least two neighbors in $\phi^{-1}(i)$. Note that $B_i\subseteq V(G)\setminus S_i$. By \eqref{applybiplem}, every vertex in $B_i$ has a neighbor in $S_i$. Therefore, by \eqref{inotin}, we have $i\notin L_S(v)$ for every $v\in B_i$. In other words, for all $v\in V(G)$ and every $i\in L_S(v)$, $v$ has at most one neighbor in $\phi^{-1}(i)$, and so $\phi$ is frugal. This proves \eqref{ordtofrug}.\vspace*{3mm} Finally, by setting $\pi(k,r)=k(k-1)\eta(r)+2$, from \eqref{frugrun} and \eqref{ordtofrug}, we conclude that $\Pi(G,L)$ satisfies Theorem \ref{frugality}. This completes the proof. \end{proof} \section{Good $P_3$'s}\label{goodsec} Let $G$ be a graph and $\{x_1,x_2,x_3\}\subseteq V(G)$ with $E(G|\{x_1,x_2,x_3\})=\{x_1x_2,x_2x_3\}$. Then $G|\{x_1,x_2,x_3\}$ is isomorphic to $P_3$, and we refer to it as an \textit{induced $P_3$} in $G$, denoting it by $x_1-x_2-x_3$. Also, for all $Z,W\subseteq V(G)$, we say $Z$ is \textit{complete} (\textit{anticomplete}) to $W$ if $Z\cap W=\emptyset$ and every vertex in $Z$ is adjacent (nonadjacent) to every vertex in $W$. If $Z=\{z\}$ and $Z$ is complete (anticomplete) to $W$, then we say $z$ is \textit{complete} (\textit{anticomplete}) to $W$. For two induced $P_3$'s $P$ and $Q$ in $G$, we say $P$ is \textit{anticomplete} to $Q$ (or \textit{$P$ and $Q$ are anticomplete}), if their vertex sets are anticomplete in $G$. Let $k\in \mathbb{N}$ and $\gamma=(I_1,I_2,I_3)$ be a triple of subsets of $[k]$. We say $\gamma$ is \textit{good} if $|I_1|,|I_2|,|I_3|\geq 2$ and $I_1\cap I_2$, $I_1\cap I_3$ and $I_2\cap I_3$ are all nonempty. Let $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem} and $x_1-x_2-x_3$ be an induced $P_3$ in $G$. We refer to $(L(x_1),L(x_2),L(x_3))$ as the $L$\textit{-type} of $x_1-x_2-x_3$ and to $|L(x_1)|+|L(x_2)|+|L(x_3)|$ as the $L$\textit{-weight} of $x_1-x_2-x_3$. Also, an $L$\textit{-good} $P_3$ in $G$ is an induced $P_3$ with good $L$-type.
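For instance, with $k=5$, the triple $(\{1,2\},\{2,3\},\{1,3\})$ is good, and so any induced $P_3$ whose three vertices carry these lists is an $L$-good $P_3$. On the other hand, $(\{1,2\},\{2,3\},\{4,5\})$ is not good, since $\{1,2\}\cap \{4,5\}=\emptyset$, and $(\{1\},\{1,2\},\{1,3\})$ is not good, since $|\{1\}|<2$.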
The following lemma, easy to check, asserts that excluding good $P_3$'s is inherited by refinements. \begin{lemma}\label{goodspread} Let $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem} and $(G',L')$ be a $(G,L)$-refinement. Suppose that $G$ has no $L$-good $P_3$. Then $G'$ has no $L'$-good $P_3$. \end{lemma} In this section, we show how to reduce an $rP_3$-free instance of the \textsc{List-$k$-Coloring Problem} to polynomially many instances with no good $P_3$ in polynomial time. This goal is attained in Theorem \ref{goodp3theorem}. First, we need two lemmas. \begin{lemma}\label{goodp3lem1} Let $k\in \mathbb{N}$ be fixed and $\gamma=(I_1,I_2,I_3)$ be a good triple of subsets of $[k]$. Let $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem} and $x_1-x_2-x_3$ be an induced $P_3$ in $G$ of $L$-type $(I_1,I_2,I_3)$. Then there exists a spanning $(G,L)$-profile $\Upsilon_1(G,L)$ with the following specifications. \begin{itemize} \item \sloppy $|\Upsilon_1(G,L)|\leq \mathcal{O}(|V(G)|^{3k-3})$ and $\Upsilon_1(G,L)$ can be computed from $(G,L)$ in time $ \mathcal{O}(|V(G)|^{3k-2})$. \item For every $(G,L_1)\in \Upsilon_1(G,L)$, every induced $P_3$ in $G$ of $L_1$-type $(I_1,I_2,I_3)$ is anticomplete to (and thus disjoint from) $x_1-x_2-x_3$. \item If $G$ admits a frugal $L$-coloring, then for some $(G,L_1)\in \Upsilon_1(G,L)$, $G$ admits a frugal $L_1$-coloring. \end{itemize} \end{lemma} \begin{proof} Let $\mathcal{S}$ be the set of all pairs $(S,\psi)$ where \begin{itemize} \item[(T1)] $S$ is a subset of $N_G[\{x_1,x_2,x_3\}]$ containing $\{x_1,x_2,x_3\}$ with $|S|\leq 3k$, and \item[(T2)] $\psi$ is an $L|_S$-coloring of $G|S$, where for every $i\in \{1,2,3\}$ and every $j\in L(x_i)$, $x_i$ has at most one neighbor $v$ in $S$ with $\psi(v)=j$. \end{itemize} We deduce: \sta{\label{goodlem1Srun}$|\mathcal{S}|\leq \mathcal{O}(|V(G)|^{3k-3})$ and $\mathcal{S}$ can be computed from $(G,L)$ in time $\mathcal{O}(|V(G)|^{3k-3})$.} Let $\mathcal{T}$ be the set of all pairs $(S,\psi)$, consisting of a set $S$ satisfying (T1) and a coloring $\psi:S\rightarrow[k]$ of $G|S$. Note that for each $(S,\psi)\in \mathcal{T}$, there are at most $(3k-2)|V(G)|^{3k-3}$ choices for $S$, and for each such choice, there are $k^{|S|}\leq k^{3k}$ possibilities for $\psi$. So we have $|\mathcal{T}|\leq (3k-2)k^{3k}|V(G)|^{3k-3}$, which along with $\mathcal{S}\subseteq \mathcal{T}$ proves the first assertion. To see the second, note that the elements of $\mathcal{T}$ can be enumerated in time $\mathcal{O}(|\mathcal{T}|)=\mathcal{O}(|V(G)|^{3k-3})$. Also, for every $(S,\psi)\in \mathcal{T}$, since $|S|\leq 3k$, it can be checked in constant time whether $\psi$ satisfies (T2), or equivalently $(S,\psi)\in \mathcal{S}$. Hence, $\mathcal{S}$ can be computed from $(G,L)$ in time $\mathcal{O}(|\mathcal{T}|)=\mathcal{O}(|V(G)|^{3k-3})$. This proves \eqref{goodlem1Srun}.\vspace*{3mm} For every $\sigma=(S,\psi)\in \mathcal{S}$, consider the $k$-list-assignment $L_{\sigma}$ of $G$, defined as follows. Let $v\in V(G)$. \begin{itemize} \item[(M1)] If $v\in S$, then $L_{\sigma}(v)=\{\psi(v)\}$. \item [(M2)] If $v\in N_G[\{x_1,x_2,x_3\}]\setminus S$, then $L_{\sigma}(v)=L(v)\setminus (\bigcup_{ j\in \{1,2,3\}, v\in N_G(x_j)} I_j)$. \item [(M3)] If $v\notin N_G[\{x_1,x_2,x_3\}]$, then $L_{\sigma}(v)=L(v)$. \end{itemize} Note that for every $\sigma\in \mathcal{S}$, $(G,L_{\sigma})$ is a spanning $(G,L)$-refinement. 
Consider the spanning $(G,L)$-profile $\Upsilon_1(G,L)=\{(G,L_{\sigma}):\sigma\in \mathcal{S}\}$.\\ \sta{\label{goodlem1run} $|\Upsilon_1(G,L)|\leq \mathcal{O}(|V(G)|^{3k-3})$ and $\Upsilon_1(G,L)$ can be computed from $(G,L)$ in time $\mathcal{O}(|V(G)|^{3k-2})$.} The first assertion follows from \eqref{goodlem1Srun} and the fact that $|\Upsilon_1(G,L)|=|\mathcal{S}|$. For the second, we need to compute $\mathcal{S}$, which by \eqref{goodlem1Srun} is attainable in time $\mathcal{O}(|V(G)|^{3k-3})$. Then, for every $\sigma\in \mathcal{S}$, it is easily seen from (M1), (M2) and (M3) that $L_{\sigma}$ can be computed in time $\mathcal{O}(|V(G)|)$. Therefore, $\Upsilon_1(G,L)$ can be computed in time $\mathcal{O}(|V(G)|^{3k-3}+|V(G)||\mathcal{S}|)=\mathcal{O}(|V(G)|^{3k-2})$, where the last equality follows from \eqref{goodlem1Srun}. This proves \eqref{goodlem1run}. \sta{\label{goodlem1anti}For every $\sigma=(S,\psi)\in \mathcal{S}$, every induced $P_3$ in $G$ of $L_{\sigma}$-type $\gamma$ is anticomplete to $x_1-x_2-x_3$.} Let $Q=y_1-y_2-y_3$ be an induced $P_3$ in $G$ with $(L_{\sigma}(y_1),L_{\sigma}(y_2),L_{\sigma}(y_3))=\gamma$. Since $\gamma$ is good, we have $|L_{\sigma}(y_i)|=|I_i|\geq 2$ for all $i\in \{1,2,3\}$, while $|L_{\sigma}(v)|=1$ for all $v\in S$. So $S\cap \{y_1,y_2,y_3\}=\emptyset$. Also, if $y_i\in N_G(x_j)\setminus S$ for some $i,j\in \{1,2,3\}$, then by (M2), we have $I_i=L_{\sigma}(y_i)\subseteq L(y_i)\setminus I_j$, and so $I_i\cap I_j=\emptyset$, which violates $\gamma$ being good. Hence, $N_G[\{x_1,x_2,x_3\}]\cap \{y_1,y_2,y_3\}=\emptyset$, and so $Q$ is anticomplete to $x_1-x_2-x_3$. This proves \eqref{goodlem1anti}. \sta{\label{goodlem1frugtofrug} If $G$ admits a frugal $L$-coloring, then for some $\sigma\in \mathcal{S}$, $G$ admits a frugal $L_{\sigma}$-coloring.} Let $\phi$ be a frugal $L$-coloring of $G$. For every $j\in \{1,2,3\}$, we define $A_j=\{v\in N_G(x_j): \phi(v)\in I_j\}$. Let $S=\{x_1,x_2,x_3\}\cup A_1\cup A_2\cup A_3$. Then we have $\{x_1,x_2,x_3\}\subseteq S\subseteq N_G[\{x_1,x_2,x_3\}]$, and since $\phi$ is a frugal $L$-coloring of $G$, we have $|A_j|\leq |I_j|-1\leq k-1$ for every $j\in \{1,2,3\}$, which in turn implies that $|S|\leq 3k$. Thus, $S$ satisfies (T1). We define $\psi=\phi|_S$. Then, again by the frugality of $\phi$, for every $i\in \{1,2,3\}$ and every $j\in L(x_i)$, $x_i$ has at most one neighbor $v$ in $S$ with $\psi(v)=j$. In other words, $\psi$ satisfies (T2), and so $\sigma=(S,\psi)\in \mathcal{S}$. Let $L_{\sigma}$ be the corresponding $k$-list-assignment defined in (M1), (M2) and (M3). We claim that $\phi$ is a frugal $L_{\sigma}$-coloring of $G$. Let $v\in V(G)$. If $v\in S$, then by (M1), we have $\phi(v)=\psi(v)\in \{\psi(v)\}=L_{\sigma}(v)$. Also, if $v\in N_G(\{x_1,x_2,x_3\})\setminus S=N_G(\{x_1,x_2,x_3\})\setminus (A_1\cup A_2\cup A_3)$, then for every $j\in \{1,2,3\}$ with $v\in N_G(x_j)$, we have $\phi(v)\notin I_j$, and so by (M2), we have $\phi(v)\in L(v)\setminus (\bigcup_{ j\in \{1,2,3\}, v\in N_G(x_j)} I_j)=L_{\sigma}(v)$. Finally, if $v\notin N_G(\{x_1,x_2,x_3\})$, then by (M3), we have $\phi(v)\in L(v)=L_{\sigma}(v)$. In summary, we have $\phi(v)\in L_{\sigma}(v)$ for every $v\in V(G)$. Therefore, by Lemma \ref{frugalspread}, $\phi$ is a frugal $L_{\sigma}$-coloring of $G$. This proves \eqref{goodlem1frugtofrug}.\vspace*{3mm} Finally, from \eqref{goodlem1run}, \eqref{goodlem1anti} and \eqref{goodlem1frugtofrug}, we conclude that $\Upsilon_1(G,L)$ satisfies Lemma \ref{goodp3lem1}. This completes the proof.
\end{proof} Let $k\in \mathbb{N}$ and $\gamma= (I_1,I_2,I_3)$ be a triple of subsets of $[k]$. Also, let $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem}. We denote by $f_{L,\gamma}(G)$ the maximum number of mutually anticomplete induced $P_3$'s in $G$ of $L$-type $\gamma$. \begin{lemma}\label{goodp3lem2} Let $k\in \mathbb{N}$ be fixed and $\gamma= (I_1,I_2,I_3)$ be a good triple of subsets of $[k]$. Let $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem}. Also, suppose that no $L$-good $P_3$ in $G$ is of $L$-weight strictly larger than $|I_1|+|I_2|+|I_3|$. Then there exists a spanning $(G,L)$-profile $\Upsilon_2(G,L)$ with the following specifications. \begin{itemize} \item \sloppy $|\Upsilon_2(G,L)|\leq \mathcal{O}(|V(G)|^{(3k-3)f_{L,\gamma}(G)})$ and $\Upsilon_2(G,L)$ can be computed from $(G,L)$ in time $\mathcal{O}(|V(G)|^{(3k-2)f_{L,\gamma}(G)})$. \item For every $(G,L_2)\in \Upsilon_2(G,L)$, $G$ has no induced $P_3$ of $L_2$-type $\gamma$ (and, of course, none of $L_2$-weight strictly larger than $|I_1|+|I_2|+|I_3|$). \item If $G$ admits a frugal $L$-coloring, then for some $(G,L_2)\in \Upsilon_2(G,L)$, $G$ admits a frugal $L_2$-coloring. \end{itemize} \end{lemma} \begin{proof} For fixed $\gamma$, we proceed by induction on $f_{L,\gamma}(G)$. If $f_{L,\gamma}(G)=0$, then $\Upsilon_2(G,L)=\{(G,L)\}$ satisfies Lemma \ref{goodp3lem2}. So we may assume that $f_{L,\gamma}(G)\geq 1$, and we may choose $x_1-x_2-x_3$ as an induced $P_3$ in $G$ with $(L(x_1),L(x_2),L(x_3))=\gamma$. Consequently, we may apply Lemma \ref{goodp3lem1} to $(G,L)$, $\gamma$ and $x_1-x_2-x_3$, obtaining a spanning $(G,L)$-profile $\Upsilon_1(G,L)$ satisfying Lemma \ref{goodp3lem1}. \sta{\label{fsmaller} For every $(G,L_1)\in \Upsilon_1(G,L)$, we have $f_{L_1,\gamma}(G)<f_{L,\gamma}(G)$.} Suppose for a contradiction that $f_{L_1,\gamma}(G)\geq f_{L,\gamma}(G)=t\geq 1$. We may consider a collection $\{q^i_1-q^i_2-q^i_3:i\in [t]\}$ of $t$ mutually anticomplete induced $P_3$'s in $G$, such that for every $i\in [t]$, $(L_1(q^i_1),L_1(q^i_2),L_1(q^i_3))=\gamma$. Now, for each $i\in [t]$, by the second bullet of Lemma \ref{goodp3lem1}, $q^i_1-q^i_2-q^i_3$ is anticomplete to $x_1-x_2-x_3$. Also, since $(G,L_1)$ is a $(G,L)$-refinement, for every $j\in \{1,2,3\}$, we have $I_j=L_1(q^i_j)\subseteq L(q^i_j)$. Therefore, $\gamma$ being good, $q^i_1-q^i_2-q^i_3$ is an $L$-good $P_3$ in $G$ of $L$-weight at least $|I_1|+|I_2|+|I_3|$. This, along with the assumption of Lemma \ref{goodp3lem2} that no $L$-good $P_3$ in $G$ is of $L$-weight strictly larger than $|I_1|+|I_2|+|I_3|$, implies that $L(q^i_j)=I_j$ for every $j\in \{1,2,3\}$. In other words, $q^i_1-q^i_2-q^i_3$ is an induced $P_3$ in $G$ of $L$-type $\gamma$ which is anticomplete to $x_1-x_2-x_3$. Hence, $\{q^i_1-q^i_2-q^i_3:i\in [t]\}\cup \{x_1-x_2-x_3\}$ comprises $t+1$ mutually anticomplete induced $P_3$'s in $G$ of $L$-type $\gamma$, which is impossible. This proves \eqref{fsmaller}. \sta{\label{goodlem2IH}Let $(G,L_1)\in\Upsilon_1(G,L)$. Then there exists a spanning $(G,L_1)$-profile $\Upsilon_2(G,L_1)$ with the following specifications. \begin{itemize} \item \sloppy $|\Upsilon_2(G,L_1)|\leq \mathcal{O}(|V(G)|^{(3k-3)(f_{L,\gamma}(G)-1)})$ and $\Upsilon_2(G,L_1)$ can be computed from $(G,L_1)$ in time $\mathcal{O}(|V(G)|^{(3k-2)(f_{L,\gamma}(G)-1)})$. \item For every $(G,L_2)\in \Upsilon_2(G,L_1)$, $G$ has no induced $P_3$ of $L_2$-type $\gamma$ (and, of course, none of $L_2$-weight strictly larger than $|I_1|+|I_2|+|I_3|$).
\item If $G$ admits a frugal $L_1$-coloring, then for some $(G,L_2)\in \Upsilon_2(G,L_1)$, $G$ admits a frugal $L_2$-coloring. \end{itemize}} By the assumption of Lemma \ref{goodp3lem2}, $G$ has no $L$-good $P_3$ of $L$-weight strictly larger than $|I_1|+|I_2|+|I_3|$. So since $(G,L_1)$ is a $(G,L)$-refinement, $G$ has no $L_1$-good $P_3$ of $L_1$-weight strictly larger than $|I_1|+|I_2|+|I_3|$. This, along with \eqref{fsmaller} and the induction hypothesis, proves \eqref{goodlem2IH}.\vspace*{3mm} \sloppy Finally, we define $\Upsilon_2(G,L)=\bigcup_{(G,L_1)\in \Upsilon_1(G,L)}\Upsilon_2(G,L_1)$, where for every $(G,L_1)\in \Upsilon_1(G,L)$, $\Upsilon_2(G,L_1)$ is as promised in \eqref{goodlem2IH}. By the first bullet of \eqref{goodlem2IH} and the first bullet of Lemma \ref{goodp3lem1}, we have $|\Upsilon_2(G,L)|\leq \mathcal{O}\left(|V(G)|^{(3k-3)f_{L,\gamma}(G)}\right)$ and $\Upsilon_2(G,L)$ can be computed from $(G,L)$ in time $\mathcal{O}\left(|V(G)|^{(3k-2)}\right)+\mathcal{O}\left(|V(G)|^{(3k-3)}\right)\mathcal{O}\left(|V(G)|^{(3k-2)(f_{L,\gamma}(G)-1)}\right)=\mathcal{O}\left(|V(G)|^{(3k-2)f_{L,\gamma}(G)}\right)$. So $\Upsilon_2(G,L)$ satisfies the first bullet of Lemma \ref{goodp3lem2}. Also, the second bullet of Lemma \ref{goodp3lem2} for $\Upsilon_2(G,L)$ follows from the second bullet of \eqref{goodlem2IH}, and the third bullet of Lemma \ref{goodp3lem2} for $\Upsilon_2(G,L)$ follows from the third bullet of \eqref{goodlem2IH} together with the third bullet of Lemma \ref{goodp3lem1}. Hence, $\Upsilon_2(G,L)$ satisfies Lemma \ref{goodp3lem2}. This completes the proof. \end{proof} Here is the main theorem of this section. \begin{theorem}\label{goodp3theorem} For all $k,r\in \mathbb{N}$, there exists $\upsilon(k,r)\in \mathbb{N}$ with the following property. Let $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem} where $G$ is $rP_3$-free. Then there exists a spanning $(G,L)$-profile $\Upsilon(G,L)$ with the following specifications. \begin{itemize} \item \sloppy $|\Upsilon(G,L)|\leq \mathcal{O}(|V(G)|^{\upsilon(k,r)})$, and $\Upsilon(G,L)$ can be computed from $(G,L)$ in time $\mathcal{O}(|V(G)|^{\upsilon(k,r)})$. \item For every $(G,L')\in \Upsilon(G,L)$, $G$ has no $L'$-good $P_3$. \item If $G$ admits a frugal $L$-coloring, then for some $(G,L')\in \Upsilon(G,L)$, $G$ admits a frugal $L'$-coloring. \end{itemize} \end{theorem} \begin{proof} If $k=1$, then by setting $\Upsilon(G,L)=\{(G,L)\}$, we are done. So we may assume that $k\geq 2$. Let $(\gamma_i=(I_1^i, I_2^i,I_3^i):i\in [m])$ be an enumeration of all good triples of subsets of $[k]$, such that for $i,j\in [m]$, $i>j$ implies $|I^i_1|+|I^i_2|+|I^i_3|\leq |I^j_1|+|I^j_2|+|I^j_3|$. Note that $m\leq 2^{3k}$, and so this enumeration can be computed in constant time. \sta{\label{f<r}For every list-assignment $K$ of $G$ and every $i\in [m]$, we have $f_{K,\gamma_i}(G)\leq r-1$.} For otherwise $G$ contains $r$ mutually anticomplete induced $P_3$'s, which violates the assumption of Theorem \ref{goodp3theorem} that $G$ is $rP_3$-free. This proves \eqref{f<r}. \sta{\label{goodthmseq}There exists a sequence $(\Lambda_0,\ldots, \Lambda_m)$ of spanning $(G,L)$-profiles, where $\Lambda_0=\{(G,L)\}$, and for each $i\in [m]$, the following hold. \begin{itemize} \item $|\Lambda_i|\leq \mathcal{O}(|V(G)|^{i(r-1)(3k-3)})$ and $\Lambda_i$ can be computed from $\Lambda_{i-1}$ in time $\mathcal{O}(|V(G)|^{i(r-1)(3k-2)})$. \item For every $(G,L')\in \Lambda_i$, $G$ has no induced $P_3$ of $L'$-type in $\{\gamma_j:j\in [i]\}$.
\item If $G$ admits a frugal $L''$-coloring for some $(G,L'')\in \Lambda_{i-1}$, then for some $(G,L')\in \Lambda_{i}$, $G$ admits a frugal $L'$-coloring. \end{itemize}} \sloppy We generate this sequence recursively. To initiate, note that $G$ has no $L$-good $P_3$ of $L$-weight larger than $|I^1_1|+|I^1_2|+|I^1_3|$. Thus, we may apply Lemma \ref{goodp3lem2} to $(G,L)$ and $\gamma_1$, obtaining a spanning $(G,L)$-profile $\Upsilon_2(G,L)$ which satisfies Lemma \ref{goodp3lem2}. As a result, defining $\Lambda_1=\Upsilon_2(G,L)$, by \eqref{f<r}, $\Lambda_1$ satisfies the bullet conditions of \eqref{goodthmseq} for $i=1$. Next, assume that for some $i\in \{2,\ldots, m\}$, the $(G,L)$-profile $\Lambda_{i-1}$, satisfying the bullet conditions of \eqref{goodthmseq}, is computed. In particular, for every $(G,L'')\in \Lambda_{i-1}$, $G$ has no induced $P_3$ of $L''$-type in $\{\gamma_j:j\in [i-1]\}$. As a result, $G$ has no $L''$-good $P_3$ of $L''$-weight larger than $|I^i_1|+|I^i_2|+|I^i_3|$. Thus, we may apply Lemma \ref{goodp3lem2} to $(G,L'')$ and $\gamma_i$, obtaining a spanning $(G,L)$-profile $\Upsilon_2(G,L'')$ which satisfies Lemma \ref{goodp3lem2}. Let $\Lambda_i=\bigcup_{(G,L'')\in \Lambda_{i-1}}\Upsilon_2(G,L'')$. We claim that $\Lambda_i$ satisfies the bullet conditions of \eqref{goodthmseq}. To see this, from \eqref{f<r} and the first bullet of Lemma \ref{goodp3lem2}, we deduce that for every $(G,L'')\in \Lambda_{i-1}$, $|\Upsilon_2(G,L'')|\leq \mathcal{O}(|V(G)|^{f_{L'',\gamma_i}(G)(3k-3)})=\mathcal{O}(|V(G)|^{(r-1)(3k-3)})$ and $\Upsilon_2(G,L'')$ can be computed from $(G,L'')$ in time $\mathcal{O}(|V(G)|^{f_{L'',\gamma_i}(G)(3k-2)})=\mathcal{O}(|V(G)|^{(r-1)(3k-2)})$. This, along with the fact that $\Lambda_{i-1}$ satisfies the first bullet of \eqref{goodthmseq}, implies that $|\Lambda_i|\leq \mathcal{O}(|V(G)|^{(i-1)(r-1)(3k-3)})\mathcal{O}(|V(G)|^{(r-1)(3k-3)})=\mathcal{O}(|V(G)|^{i(r-1)(3k-3)})$, and $\Lambda_i$ can be computed from $\Lambda_{i-1}$ in time $\mathcal{O}(|V(G)|^{(i-1)(r-1)(3k-3)})\mathcal{O}(|V(G)|^{(r-1)(3k-2)})=\mathcal{O}(|V(G)|^{i(r-1)(3k-2)})$. Therefore, $\Lambda_i$ satisfies the first bullet of \eqref{goodthmseq}. Moreover, for every $(G,L')\in \Lambda_i$, say $(G,L')\in \Upsilon_2(G,L'')$ for some $(G,L'')\in \Lambda_{i-1}$, by the second bullet of Lemma \ref{goodp3lem2}, $G$ has no induced $P_3$ of $L'$-type $\gamma_i$. Also, since $(G,L')$ is a $(G,L'')$-refinement, by the second bullet of \eqref{goodthmseq} for $\Lambda_{i-1}$, $G$ has no induced $P_3$ of $L'$-type in $\{\gamma_j:j\in [i-1]\}$. It follows that $G$ has no induced $P_3$ of $L'$-type in $\{\gamma_j:j\in [i]\}$. So $\Lambda_i$ satisfies the second bullet of \eqref{goodthmseq}. Finally, the third bullet of Lemma \ref{goodp3lem2} implies that, if $G$ admits a frugal $L''$-coloring for some $(G,L'')\in \Lambda_{i-1}$, then for some $(G,L')\in \Lambda_i$, $G$ admits a frugal $L'$-coloring. So $\Lambda_i$ satisfies the third bullet of \eqref{goodthmseq}. This proves \eqref{goodthmseq}.\vspace*{3mm} Now, let $(\Lambda_1,\ldots, \Lambda_m)$ be as in \eqref{goodthmseq}. Let $\Upsilon(G,L)=\Lambda_m$. Then, since $m\leq 2^{3k}$, by the first bullet of \eqref{goodthmseq} for $i=m$, we have $|\Upsilon(G,L)|\leq \mathcal{O}(|V(G)|^{(r-1)(3k-3)2^{3k}})$, and by the first bullet of \eqref{goodthmseq} for $i=0,1,\ldots, m$, $\Upsilon(G,L)$ can be computed in time $\mathcal{O}(|V(G)|^{(r-1)(3k-2)2^{3k}})$.
So by setting $\upsilon(k,r)=(r-1)(3k-2)2^{3k}$, $\Upsilon(G,L)$ satisfies the first bullet of Theorem \ref{goodp3theorem}. Also, by the second bullet of \eqref{goodthmseq} for $i=m$, for every $(G,L')\in \Upsilon(G,L)$, $G$ has no induced $P_3$ of $L'$-type in $\{\gamma_i:i\in [m]\}$, and so $G$ has no $L'$-good $P_3$. Therefore, $\Upsilon(G,L)$ satisfies the second bullet of Theorem \ref{goodp3theorem}. Finally, applying the third bullet of \eqref{goodthmseq} to $i=0,1,\ldots,m$ consecutively, it follows that if $G$ admits a frugal $L$-coloring, then for some $(G,L')\in \Upsilon(G,L)$, $G$ admits a frugal $L'$-coloring. Hence, $\Upsilon(G,L)$ satisfies the third bullet of Theorem \ref{goodp3theorem}. This completes the proof. \end{proof} \section{Five colors and vertices with large lists}\label{23sec} In this section, we take the last major step towards the proof of Theorem \ref{thm:main}: we show that essentially every instance of the \textsc{List-$5$-Coloring Problem} which has at least one vertex of list-size three or more and no good $P_3$'s can be reduced in polynomial time to a `smaller' instance. We prove this formally in Theorem \ref{23}, whose proof relies crucially on two lemmas, and in order to state them, we need another definition. Let $k\in \mathbb{N}$ and $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem}. We denote by $G^L$ the graph with $V(G^L)=V(G)$ and $E(G^L)=\{uv\in E(G):L(u)\cap L(v)\neq \emptyset\}$. Note that $(G,L)$ and $(G^L,L)$ have the same state of feasibility. But $G^L$ is not necessarily an induced subgraph of $G$, and so for our purposes, it seems dangerous to consider $(G^L,L)$ as a `simplified' instance to investigate. However, it turns out that we may still take advantage of certain properties of $G^L$. For example, the following lemma proposes a useful interaction between frugality and good $P_3$'s in terms of vertex degrees in $G^L$. \begin{lemma}\label{smallgooddeg} Let $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem} such that $|L(v)|\neq 1$ for every $v\in V(G)$, and $G$ has no $L$-good $P_3$. If $G$ admits a frugal $L$-coloring $\phi$, then for every vertex $v\in V(G)$, $\phi$ assigns mutually distinct colors to all vertices in $N_{G^L}[v]$, and in particular, we have $|N_{G^L}(v)|<k$. \end{lemma} \begin{proof} Suppose not. Then since $\phi$ is proper, there exist two vertices $u,w\in N_{G^L}(v)$ such that $u$ and $w$ are nonadjacent in $G$ and $\phi(u)=\phi(w)\in L(u)\cap L(w)$. Also, since $u,w\in N_{G^L}(v)$, both $L(u)\cap L(v)$ and $L(v)\cap L(w)$ are nonempty. But then $u-v-w$ is an $L$-good $P_3$ in $G$, a contradiction. This completes the proof. \end{proof} The following technical lemma also unravels the structural properties of the second neighborhood of certain vertices in $G^L$. We use this lemma extensively while proving Theorem \ref{23}. \begin{lemma} \label{ABhomo} Let $(G,L)$ be an instance of the \textsc{List-$5$-Coloring Problem} such that $|L(u)|\in \{0,2,3\}$ for all $u\in V(G)$ and $G$ has no $L$-good $P_3$. Moreover, suppose that there exists a vertex $u_0\in V(G)$ with $L(u_0)=\{1,2,3\}$. In addition, let $A=N_{G^L}(u_0)\cap L^{(4)}$, $B=N_{G^L}(u_0)\cap L^{(5)}$, $A'=\{w\in N^2_{G^L}(u_0): A\cap N_{G^L}(w)\neq\emptyset\}$ and $B'=\{w\in N^2_{G^L}(u_0): B\cap N_{G^L}(w)\neq\emptyset\}$. Then the following hold. \begin{itemize} \item $|L(u)|\in \{2,3\}$ for every $u\in N^2_{G^L}[u_0]$. \item $L(w)=\{4,5\}$ for every $w\in N^2_{G^L}(u_0)$. \item $N^2_{G^L}(u_0)=A'\cup B'$.
\item Both $A$ and $B$ are cliques of $G^L$. \item For every vertex $w\in N^2_{G^L}(u_0)$, $w$ is either complete or anticomplete to $A$ in $G^L$, and either complete or anticomplete to $B$ in $G^L$. In particular, $A'$ is complete to $A$ in $G^L$, and $B'$ is complete to $B$ in $G^L$. \item Both $A'$ and $B'$ are cliques of $G^L$. \item If, in addition, $|N_{G^L}^2(u_0)|\geq 2$ and $|A'|,|B'|\leq 1$, then \begin{itemize} \item[-] $|A'|=|B'|=1$ and $A'\cap B'=\emptyset$; \item[-] $A,B\neq \emptyset$ and $A\cap B=\emptyset$; \item[-] $A'$ is anticomplete to $B$ in $G$ and $B'$ is anticomplete to $A$ in $G$; and \item[-] for every $a\in A$ and every $b\in B$, we have $L(a)\cap L(b)=\emptyset$. \end{itemize} \end{itemize} \end{lemma} \begin{proof} The first bullet follows directly from $L(u_0)=\{1,2,3\}\neq \emptyset$, $u_0\in N^2_{G^L}[u_0]$ and $G^L|N^2_{G^L}[u_0]$ being connected. To see the second bullet, let $w\in N^2_{G^L}(u_0)$ and $v\in N_{G^L}(u_0)\cap N_{G^L}(w)$. If $L(u_0)\cap L(w)\neq \emptyset$, then $u_0w\notin E(G)$, and so from $u_0v,vw\in E(G^L)$ and the first bullet of Lemma \ref{ABhomo}, it follows that $u_0-v-w$ is an $L$-good $P_3$ in $G$, which is impossible. Consequently, we have $L(w)\subseteq [5]\setminus L(u_0)=\{4,5\}$, and so by the first bullet of Lemma \ref{ABhomo}, we have $L(w)=\{4,5\}$. This proves the second bullet of Lemma \ref{ABhomo}. To verify the third bullet, note that the inclusion $A'\cup B'\subseteq N^2_{G^L}(u_0)$ is clear. Now, let $w\in N^2_{G^L}(u_0)$ and $v\in N_{G^L}(u_0)\cap N_{G^L}(w)$. Then by the second bullet of Lemma \ref{ABhomo}, we have $L(v)\cap \{4,5\}=L(v)\cap L(w)\neq \emptyset$, and so $v\in A\cup B$, which in turn implies that $w\in A'\cup B'$. So $N^2_{G^L}(u_0)\subseteq A'\cup B'$, and the third bullet of Lemma \ref{ABhomo} follows. To see the fourth bullet, suppose for a contradiction that there exist $a_1,a_2\in A$ with $a_1a_2\notin E(G^L)$. Therefore, since $4\in L(a_1)\cap L(a_2)$, we have $a_1a_2\notin E(G)$. This, together with $a_1,a_2\in N_{G^L}(u_0)\subseteq N_G(u_0)$ and the first bullet of Lemma \ref{ABhomo}, implies that $a_1-u_0-a_2$ is an $L$-good $P_3$ in $G$, a contradiction. So $A$ is a clique of $G^L$. Similarly, one can show that $B$ is also a clique of $G^L$, and so the fourth bullet of Lemma \ref{ABhomo} follows. Now we argue the fifth bullet. Suppose for a contradiction that there exists $w\in N^2_{G^L}(u_0)$, such that in $G^L$, $w$ has both a neighbor $x$ and a non-neighbor $y$ in either $A$ or $B$, say the former. By the fourth bullet of Lemma \ref{ABhomo}, we have $xy\in E(G^L)\subseteq E(G)$. Also, by the second bullet of Lemma \ref{ABhomo}, we have $4\in L(w)\cap L(x)\cap L(y)$, which in turn implies that $wy\notin E(G)$; that is, $w-x-y$ is an induced $P_3$ in $G$. From this and the first two bullets of Lemma \ref{ABhomo}, it follows that $w-x-y$ is an $L$-good $P_3$ in $G$, a contradiction. The case $x,y\in B$ can be handled similarly. This proves the fifth bullet of Lemma \ref{ABhomo}. For the sixth bullet, suppose for a contradiction that $a'_1,a'_2\in A'$ are not adjacent in $G^L$. Then by the second bullet of Lemma \ref{ABhomo}, we have $L(a'_1)=L(a'_2)=\{4,5\}$, and so $a'_1$ and $a'_2$ are not adjacent in $G$. Note that since $A'\neq \emptyset$, by the definition of $A'$, we have $A\neq \emptyset$, and so we may pick a vertex $a_0\in A$.
Therefore, by the fifth bullet of Lemma \ref{ABhomo}, $a'_1-a_0-a'_2$ is an induced $P_3$ in $G$ with $4\in L(a'_1)\cap L(a_0)\cap L(a'_2)$, which along with the first bullet of Lemma \ref{ABhomo}, implies that $a'_1-a_0-a'_2$ is an $L$-good $P_3$ in $G$, a contradiction. So the sixth bullet of Lemma \ref{ABhomo} follows. The rest of the proof aims to verify the seventh bullet of Lemma \ref{ABhomo}. For the first dash, from $|N_{G^L}^2(u_0)|\geq 2$ and the third bullet of Lemma \ref{ABhomo}, we have $|A'\cup B'|\geq 2$, which along with $|A'|,|B'|\leq 1$, implies that $|A'|=|B'|=1$ and $A'\cap B'=\emptyset$, as desired. Henceforth, we assume $A'=\{a'\}$ and $B'=\{b'\}$ for distinct $a',b'$. Note that by the second bullet of Lemma \ref{ABhomo}, we have $L(a')=L(b')=\{4,5\}$. For the second dash, note that $|A'|=|B'|=1$, along with the definition of $A'$ and $B'$, implies that $A,B\neq \emptyset$. Next we show that $A\cap B=\emptyset$. Suppose not. Let $z\in A\cap B$. By the fifth bullet of Lemma \ref{ABhomo}, $a'$ and $b'$ are adjacent to $z$ in $G^L$. Thus, from $z\in A\cap B$ and again the fifth bullet of Lemma \ref{ABhomo}, we deduce that $A'\cup B'=\{a',b'\}$ is complete to $A\cup B$ in $G^L$. But then from the definition of $A'$ and $B'$, it follows that $A=B$, which in turn implies that $A'=B'$, contrary to the first dash. So $A\cap B=\emptyset$, as desired. To see the third dash, suppose for a contradiction that $a'$ has a neighbor $b\in B$ in $G$. Then, since $4\in L(a')\cap L(b)$, we have $a'b\in E(G^L)$. But then from the definition of $A'$, we have $b\in A\cap B$, which violates the second dash. Note that if $b'$ has a neighbor in $A$, then a contradiction can be derived similarly. It remains to argue the fourth dash. Suppose not, and let $a\in A$ and $b\in B$ be such that $L(a)\cap L(b)\neq \emptyset$. If $ab\notin E(G)$, then from $a,b\in N_{G^L}(u_0)$ and the first bullet of Lemma \ref{ABhomo}, we deduce that $a-u_0-b$ is an $L$-good $P_3$, which is impossible. As a result, we have $ab\in E(G)$, and so $ab\in E(G^L)$. By the fifth bullet of Lemma \ref{ABhomo}, $a'$ is adjacent to $a$ in $G^L$, and by the third dash, $a'$ is not adjacent to $b$ in $G$. So $a'-a-b$ is an induced $P_3$ in $G$. This, along with $4\in L(a')\cap L(a)$, $5\in L(a')\cap L(b)$, $L(a)\cap L(b)\neq \emptyset$ and the first bullet of Lemma \ref{ABhomo}, implies that $a'-a-b$ is an $L$-good $P_3$ in $G$, a contradiction. This proves the fourth dash of the seventh bullet of Lemma \ref{ABhomo}, and so concludes the proof. \end{proof} Let $k\in \mathbb{N}$ and $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem}. We define $p(G,L)=|V(G)|+\sum_{v\in V(G)}|L(v)|$. It is immediate from the definition that $p(G,L)\leq (k+1)|V(G)|$. For a $(G,L)$-refinement $(G',L')$, we say that $(G',L')$ \textit{represents} $(G,L)$ if the following hold. \begin{enumerate}[(R1)] \item\otherlabel{r1}{R1} $p(G',L')< p(G,L)$. \item\otherlabel{r2}{R2} If $G$ admits a frugal $L$-coloring, then $G'$ admits a frugal $L'$-coloring. \item\otherlabel{r3}{R3} If $G'$ admits an $L'$-coloring, then $G$ admits an $L$-coloring. \end{enumerate} \begin{theorem}\label{23} Let $(G,L)$ be an instance of the \textsc{List-$5$-Coloring Problem} such that $|L(v)|\neq 1$ for all $v\in V(G)$ and $G$ has no $L$-good $P_3$. Moreover, suppose that there exists a vertex $u_0\in V(G)$ with $|L(u_0)|\geq 3$.
Then there exists a $(G,L)$-refinement $(\tilde{G},\tilde{L})$ with the following specifications. \begin{itemize} \item $(\tilde{G},\tilde{L})$ can be computed from $(G,L)$ in time $\mathcal{O}(|V(G)|^2)$. \item $|\tilde{L}(v)|\neq 1$ for all $v\in V(\tilde{G})$. \item $(\tilde{G},\tilde{L})$ represents $(G,L)$. \end{itemize} \end{theorem} \begin{proof} \sloppy Without loss of generality, we may assume that $\{1,2,3\}\subseteq L(u_0)$. We define the four sets $A=N_{G^L}(u_0)\cap L^{(4)}$, $B=N_{G^L}(u_0)\cap L^{(5)}$, $A'=\{w\in N^2_{G^L}(u_0): A\cap N_{G^L}(w)\neq \emptyset\}$ and $B'=\{w\in N^2_{G^L}(u_0): B\cap N_{G^L}(w)\neq \emptyset\}$ as in Lemma \ref{ABhomo}. For every vertex $u\in V(G)$, let $\Phi_u$ be the set of all frugal $L|_{N^2_{G^L}[u]}$-colorings of $G|N^2_{G^L}[u]$. Consider the following algorithm, called \textit{algorithm} \textbf{\textsf{A}}, which, given $G$, $L$ and $u_0$, computes a $(G,L)$-refinement $(G^*,L^*)$. \begin{enumerate}[\textit{Step }1:] \item \label{s1} Using BFS, compute $N_{G^L}(v)$ and $N^2_{G^L}(v)$ for every $v\in V(G)$. Go to step \ref{s2}, and from each step, proceed to the one below unless instructed otherwise. \item \label{s2} Compute $A$, $B$, $A'$ and $B'$. \item \label{s3} If $|N_{G^L}(u)|\geq 5$ for some $u\in V(G)$, then compute $G^*=G$, $L^*(v)=\emptyset$ for every $v\in V(G^*)$. Return $(G^*,L^*)$. \item \label{s4} If $|N_{G^L}(u)|< |L(u)|$ for some $u\in V(G)$, then compute $G^*=G-u$, $L^*(v)=L(v)$ for every $v\in V(G^*)$. Return $(G^*,L^*)$. \item \label{s5} If $|N^2_{G^L}(u)|\leq 1$ for some $u\in V(G)$, then \begin{enumerate} \item \label{s5a} Compute $\Phi_u$ by brute-forcing. \item \label{s5b} If $\Phi_u=\emptyset$, then compute $G^*=G$ and $L^*(v)=\emptyset$ for every $v\in V(G^*)$. Return $(G^*,L^*)$. \item \label{s5c} Otherwise, compute $G^*=G-N_{G^L}[u]$, $L^*(v)=\{i\in L(v): \phi(v)=i\text{ for some }\phi\in \Phi_u\}$ for every $v\in N^2_{G^L}(u)$ and $L^*(v)=L(v)$ for every $v\in V(G^*)\setminus N^2_{G^L}(u)$. Return $(G^*,L^*)$. \end{enumerate} \item \label{s6} If $|A'|\geq 2$, then compute $G^*=G$, $L^*(a)=L(a)\setminus \{4,5\}$ for every $a\in A$ and $L^*(v)=L(v)$ for every $v\in V(G^*)\setminus A$. Return $(G^*,L^*)$. \item \label{s7} If $|B'|\geq 2$, then compute $G^*=G$, $L^*(b)=L(b)\setminus \{4,5\}$ for every $b\in B$ and $L^*(v)=L(v)$ for every $v\in V(G^*)\setminus B$. Return $(G^*,L^*)$. \item \label{s8} If there exist distinct vertices $a_1,a_2\in A$ with $L(a_1)=L(a_2)$ and $|L(a_1)|=|L(a_2)|=2$, then compute $G^*=G$, $L^*(a')=L(a')\setminus \{4\}$ for every $a'\in A'$ and $L^*(v)=L(v)$ for every $v\in V(G^*)\setminus A'$. Return $(G^*,L^*)$. \item \label{s9} If there exist distinct vertices $b_1,b_2\in B$ with $L(b_1)=L(b_2)$ and $|L(b_1)|=|L(b_2)|=2$, then compute $G^*=G$, $L^*(b')=L(b')\setminus \{5\}$ for every $b'\in B'$ and $L^*(v)=L(v)$ for every $v\in V(G^*)\setminus B'$. Return $(G^*,L^*)$. \item \label{s10} If $|N_{G^L}(u_0)|\geq 4$, then compute $G^*=G$, $L^*(a')=L(a')\setminus \{4\}$ for every $a'\in A'$, $L^*(b')=L(b')\setminus \{5\}$ for every $b'\in B'$ and $L^*(v)=L(v)$ for every $v\in V(G^*)\setminus (A'\cup B')$. Return $(G^*,L^*)$. \item \label{s11} Compute a minimal subset $M$ of $L(u_0)$ such that $M\cap L(a)\neq \emptyset$ for some $a\in A$ and $M\cap L(b)\neq \emptyset$ for some $b\in B$. Choose $i\in M\cap L(a)$ and $j\in M\cap L(b)$.
Compute $G^*=G|(\{u_0,a,b\}\cup (V(G)\setminus N_{G^L}[u_0]))$, $L^*(u_0)=M$, $L^*(a)=\{i,4\}$, $L^*(b)=\{j,5\}$ and $L^*(v)=L(v)$ for every $v\in V(G^*)\setminus \{u_0,a,b\}$. Return $(G^*,L^*)$. \end{enumerate} As a general property of algorithm \textbf{\textsf{A}}, note that for each step other than steps \ref{s1}, \ref{s2} and \ref{s5a}, if the corresponding `if condition' is satisfied, then the algorithm terminates at that step. In particular, \sta{\label{size4done} Suppose that $|L(u)|\geq 4$ for some $u\in V(G)$. Then algorithm \textbf{\textsf{A}} terminates at or before step \ref{s5}.} Let $i\in [5]$ be such that $[5]\setminus L(u)\subseteq \{i\}$. We claim that $N^2_{G^L}(u)=\emptyset$. Suppose not. Let $w\in N^2_{G^L}(u)$. Then $L(u)\cap L(w)=\emptyset$. Indeed, suppose not; since $w\in N^2_{G^L}(u)$, we have $uw\notin E(G^L)$, and so $uw\notin E(G)$. Choosing $v\in N_{G^L}(u)\cap N_{G^L}(w)$, the lists of $u$, $v$ and $w$ pairwise intersect, and so (as $|L(x)|\neq 1$ for all $x\in V(G)$) all three have size at least two; hence $u-v-w$ is an $L$-good $P_3$ in $G$, a contradiction. So $L(w)\subseteq \{i\}$. Also, there exists $v\in N_{G^L}(u)$ which is adjacent to $w$ in $G^L$, and so $L(v)\cap L(w)\neq\emptyset$. Thus, $L(w)=\{i\}$, and so $|L(w)|=1$, a contradiction. This proves the claim. But then the `if condition' in step \ref{s5} is satisfied, and so algorithm \textbf{\textsf{A}} terminates at or before step \ref{s5}. This proves \eqref{size4done}.\vspace*{3mm} We deduce: \sta{\label{getbullets} If algorithm \textbf{\textsf{A}} does not stop at steps \ref{s3}-\ref{s7}, then all seven bullets of Lemma \ref{ABhomo} hold.} Note that algorithm \textbf{\textsf{A}} does not terminate at step \ref{s5}. This has two consequences. First, by \eqref{size4done} and the assumption of Theorem \ref{23}, we have $|L(u)|\in \{0,2,3\}$ for every $u\in V(G)$. In particular, we have $L(u_0)=\{1,2,3\}$. Second, the `if condition' of step \ref{s5} is not satisfied, and in particular $|N_{G^L}^2(u_0)|\geq 2$. In addition, since algorithm \textbf{\textsf{A}} does not stop at steps \ref{s6} and \ref{s7}, the `if condition' in these two steps is not satisfied, and so $|A'|,|B'|\leq 1$. Therefore, all seven bullets of Lemma \ref{ABhomo} hold. This proves \eqref{getbullets}. \sta{\label{23runtime} Algorithm \textbf{\textsf{A}} terminates in finite time. Indeed, it runs in time $\mathcal{O}(|V(G)|^2)$.} Note that if the algorithm does not terminate in steps \ref{s3}-\ref{s10}, then it arrives at step \ref{s11}, and by \eqref{getbullets}, all seven bullets of Lemma \ref{ABhomo} hold. In particular, by the second dash of the seventh bullet of Lemma \ref{ABhomo}, we have $A,B\neq \emptyset$. As a result, the set $M$ mentioned in step \ref{s11} is well-defined. So algorithm \textbf{\textsf{A}} executes step \ref{s11}, and stops in finite time. We leave it to the reader to check the straightforward fact that the overall running time of algorithm \textbf{\textsf{A}} is $\mathcal{O}(|V(G)|^2)$. This proves \eqref{23runtime}.\vspace*{3mm} We need to show that the output $(G^*,L^*)$ of algorithm \textbf{\textsf{A}} represents $(G,L)$. The proof is broken into several statements, below. \sta{\label{geq5done} If algorithm \textbf{\textsf{A}} terminates at step \ref{s3}, then $(G^*,L^*)$ represents $(G,L)$.} Note that $|L^*(u_0)|=0<3\leq |L(u_0)|$, and so $(G^*,L^*)$ satisfies \eqref{r1}. Also, the `if condition' of step \ref{s3} is satisfied, and there exists a vertex $u\in V(G)$ with $|N_{G^L}(u)|\geq 5$. Thus, $L$ being a $5$-list-assignment, by Lemma \ref{smallgooddeg}, $G$ admits no frugal $L$-coloring, and so $(G^*,L^*)$ vacuously satisfies \eqref{r2}. Moreover, since $L^*(v)=\emptyset$ for every $v\in V(G^*)$, $G^*$ admits no $L^*$-coloring, and so $(G^*,L^*)$ vacuously satisfies \eqref{r3}, as well. This proves \eqref{geq5done}.
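Before analyzing the remaining steps, we record for concreteness a small Python sketch of steps \ref{s1} and \ref{s2} of algorithm \textbf{\textsf{A}} (the representation of instances by adjacency sets and color sets, as well as all names, are ours; this is an illustration, not part of the proof):

\begin{verbatim}
def neighborhood_data(adj, L, u0):
    # Build G^L, then compute the first and second neighborhoods of u0
    # in G^L together with the sets A, B, A' and B'.  Here adj maps
    # vertices to neighbor sets, L maps vertices to color sets, and
    # colors 4 and 5 are the two colors outside L(u0) = {1, 2, 3}.
    adjL = {v: {w for w in adj[v] if L[v] & L[w]} for v in adj}  # G^L
    N1 = adjL[u0]                                      # N_{G^L}(u0)
    N2 = {w for v in N1 for w in adjL[v]} - N1 - {u0}  # N^2_{G^L}(u0)
    A = {v for v in N1 if 4 in L[v]}
    B = {v for v in N1 if 5 in L[v]}
    Ap = {w for w in N2 if adjL[w] & A}                # A'
    Bp = {w for w in N2 if adjL[w] & B}                # B'
    return adjL, N1, N2, A, B, Ap, Bp
\end{verbatim}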
\sta{\label{smalldegdone} If algorithm \textbf{\textsf{A}} terminates at step \ref{s4}, then $(G^*,L^*)$ represents $(G,L)$.} The `if condition' of step \ref{s4} is satisfied, and so we have $|N_{G^L}(u)|< |L(u)|$ for some $u\in V(G)$, $G^*=G-u$ and $L^*(v)=L(v)$ for every $v\in V(G^*)$. As a result, $|V(G^*)|<|V(G)|$, and so $(G^*,L^*)$ satisfies \eqref{r1}. Moreover, if $G$ admits a frugal $L$-coloring $\phi$, then by Lemma \ref{frugalspread}, $\phi|_{V(G^*)}$ is a frugal $L^*$-coloring of $G^*$, and so $(G^*,L^*)$ satisfies \eqref{r2}. Now, suppose that $G^*$ admits an $L^*$-coloring $\psi$. Since $|\{\psi(v):v\in N_{G^L}(u)\}|\leq |N_{G^L}(u)|<|L(u)|$, there exists a color $j\in L(u)\setminus \{\psi(v):v\in N_{G^L}(u)\}$. Hence, extending $\psi$ to $G$ by defining $\psi(u)=j$, we obtain an $L$-coloring of $G$, and so $(G^*,L^*)$ satisfies \eqref{r3}. This proves \eqref{smalldegdone}. \pagebreak \sta{\label{C<1done2} If algorithm \textbf{\textsf{A}} terminates at step \ref{s5}, then $(G^*,L^*)$ represents $(G,L)$.} The `if condition' of step \ref{s5} is satisfied, and so we have $|N^2_{G^L}(u)|\leq 1$ for some $u\in V(G)$. Now, suppose that $\Phi_u=\emptyset$. Then algorithm \textbf{\textsf{A}} terminates at step \ref{s5b}, $G^*=G$ and $L^*(v)=\emptyset$ for every $v\in V(G)$. Note that $|L^*(u_0)|=0<3\leq |L(u_0)|$, and so $(G^*,L^*)$ satisfies \eqref{r1}. Also, from $\Phi_u=\emptyset$, it follows that $G|N^2_{G^L}[u]$ admits no frugal $L|_{N^2_{G^L}[u]}$-coloring, and so $G$ admits no frugal $L$-coloring. Thus, $(G^*,L^*)$ vacuously satisfies \eqref{r2}. Moreover, since $L^*(v)=\emptyset$ for every $v\in V(G^*)$, $G^*$ admits no $L^*$-coloring, and $(G^*,L^*)$ vacuously satisfies \eqref{r3}, as well. Therefore, we may assume that $\Phi_u\neq \emptyset$. Then algorithm \textbf{\textsf{A}} terminates at step \ref{s5c}, $G^*=G-N_{G^L}[u]$, $L^*(v)=\{i\in L(v): \phi(v)=i\text{ for some }\phi\in \Phi_u\}$ for every $v\in N^2_{G^L}(u)$ and $L^*(v)=L(v)$ for every $v\in V(G^*)\setminus N^2_{G^L}(u)$. It follows that $|V(G^*)|< |V(G)|$, and so $(G^*,L^*)$ satisfies \eqref{r1}. For \eqref{r2}, suppose that $G$ admits a frugal $L$-coloring $\psi$. Then $\phi=\psi|_{N^2_{G^L}[u]}$ is readily seen to be a frugal $L|_{N^2_{G^L}[u]}$-coloring of $G|N^2_{G^L}[u]$; that is, $\phi\in \Phi_u$. So for every $v\in N_{G^L}^2(u)$, $\psi(v)=\phi(v)\in L^*(v)$. Also, $\psi(v)\in L(v)=L^*(v)$ for all $v\in V(G^*)\setminus N_{G^L}^2(u)$. Thus, $\psi$ being a frugal $L$-coloring of $G$, it follows from Lemma \ref{frugalspread} that $\psi|_{V(G^*)}$ is a frugal $L^*$-coloring of $G^*$. This shows that $(G^*,L^*)$ satisfies \eqref{r2}. Now we need to argue that $(G^*,L^*)$ satisfies \eqref{r3}. Suppose that $G^*$ admits an $L^*$-coloring $\psi'$. Since $|N^2_{G^L}(u)|\leq 1$ and $\psi'(c)\in L^*(c)$ for every $c\in N^2_{G^L}(u)$, there exists $\phi'\in \Phi_u$ such that $\phi'(c)=\psi'(c)$ for all $c\in N^2_{G^L}(u)$. We define a coloring $\theta:V(G)\rightarrow [5]$ as follows. For every $v\in N_{G^L}[u]$, let $\theta(v)=\phi'(v)$. For all $c\in N^2_{G^L}(u)$, let $\theta(c)=\phi'(c)=\psi'(c)$. For every $v\in V(G)\setminus N_{G^L}^2[u]=V(G^*)\setminus N_{G^L}^2(u)$, let $\theta(v)=\psi'(v)$. We claim that $\theta$ is an $L$-coloring of $G$. To see this, note that since $\phi'\in \Phi_u$, we have $\theta(v)=\phi'(v)\in L(v)$ for every $v\in N^2_{G^L}[u]$. Also, $\theta(v)=\psi'(v)\in L^*(v)=L(v)$ for every $v\in V(G)\setminus N_{G^L}^2[u]$. So $\theta(v)\in L(v)$ for all $v\in V(G)$.
It remains to show that $\theta$ is proper. Let $xy\in E(G)$. If $x,y\in N^2_{G^L}[u]$, then since $\phi'$ is proper, we have $\theta(x)=\phi'(x)\neq \phi'(y)=\theta(y)$. If $x,y\in V(G)\setminus N_{G^L}[u]=V(G^*)$, then since $\psi'$ is proper, $\theta(x)=\psi'(x)\neq \psi'(y)=\theta(y)$. Finally, let $x\in N_{G^L}[u]$ and $y\in V(G)\setminus N_{G^L}^2[u]$. Then $xy\notin E(G^L)$, and so $L(x)\cap L(y)=\emptyset$. Hence $\theta(x)\neq\theta(y)$, and $\theta$ is proper. This proves \eqref{C<1done2}. \sta{\label{A'2} If algorithm \textbf{\textsf{A}} terminates at step \ref{s6}, then $(G^*,L^*)$ represents $(G,L)$.} The `if condition' of step \ref{s6} is satisfied, and so we have $|A'|\geq 2$, $G^*=G$, $L^*(a)=L(a)\setminus \{4,5\}$ for every $a\in A$ and $L^*(v)=L(v)$ for every $v\in V(G^*)\setminus A$. Note that from $|A'|\geq 2$ and the definition of $A'$, it follows that $A\neq \emptyset$, and so for every $a\in A$, we have $4\in L(a)\setminus L^*(a)$, which in turn implies that $|L^*(a)|<|L(a)|$. This shows that $(G^*,L^*)$ satisfies \eqref{r1}. For \eqref{r2}, suppose that $G$ admits a frugal $L$-coloring $\phi$. Let $a'_1,a'_2\in A'$ be distinct. Since the algorithm does not stop at step \ref{s5}, by \eqref{size4done} and the assumption of Theorem \ref{23}, we have $|L(u)|\in \{0,2,3\}$ for every $u\in V(G)$. In particular, we have $L(u_0)=\{1,2,3\}$. So by the second bullet of Lemma \ref{ABhomo}, we have $L(a'_1)=L(a'_2)=\{4,5\}$, by the fifth bullet of Lemma \ref{ABhomo}, $a'_1$ and $a'_2$ are complete to $A$ in $G^L$ (and so in $G$), and by the sixth bullet of Lemma \ref{ABhomo}, $a'_1$ and $a'_2$ are adjacent in $G^L$ (and so in $G$). As a result, we have $\{\phi(a'_1),\phi(a'_2)\}=\{4,5\}$, and for every $a\in A$, we have $\phi(a)\in L(a)\setminus \{\phi(a'_1),\phi(a'_2)\}=L(a)\setminus\{4,5\}=L^*(a)$. In addition, we have $\phi(v)\in L(v)=L^*(v)$ for every $v\in V(G^*)\setminus A$. Thus, $\phi$ being a frugal $L$-coloring of $G$, it follows from Lemma \ref{frugalspread} that $\phi|_{V(G^*)}$ is a frugal $L^*$-coloring of $G^*$, and so $(G^*,L^*)$ satisfies \eqref{r2}. Finally, note that $(G^*,L^*)$ is a spanning $(G,L)$-refinement, and by Lemma \ref{spanspread}, $(G^*,L^*)$ satisfies \eqref{r3}. This proves \eqref{A'2}.\vspace*{3mm} The reader may have noticed that steps \ref{s6} and \ref{s7} of algorithm \textbf{\textsf{A}} are symmetrical with respect to $A$ and $B$. As a result, the proof of \eqref{B'2} below is identical to that of \eqref{A'2}, and so we omit it. \pagebreak \sta{\label{B'2} If algorithm \textbf{\textsf{A}} terminates at step \ref{s7}, then $(G^*,L^*)$ represents $(G,L)$.} We then continue with the following. \sta{\label{bad2A} If algorithm \textbf{\textsf{A}} terminates at step \ref{s8}, then $(G^*,L^*)$ represents $(G,L)$.} The `if condition' of step \ref{s8} is satisfied, and so there exist distinct vertices $a_1, a_2\in A$ with $L(a_1)=L(a_2)$ and $|L(a_1)|=|L(a_2)|=2$. Also, we have $G^*=G$, $L^*(a')=L(a')\setminus \{4\}$ for every $a'\in A'$ and $L^*(v)=L(v)$ for every $v\in V(G)\setminus A'$. By \eqref{getbullets}, all seven bullets of Lemma \ref{ABhomo} hold. In particular, by the first dash of the seventh bullet of Lemma \ref{ABhomo}, we have $|A'|=1$, say $A'=\{a'_0\}$, and by the second bullet of Lemma \ref{ABhomo}, we have $L(a'_0)=\{4,5\}$. As a result, we have $4\in L(a'_0)\setminus L^*(a'_0)$, which in turn implies that $|L^*(a'_0)|<|L(a'_0)|$. So $(G^*,L^*)$ satisfies \eqref{r1}.
To argue the validity of \eqref{r2}, suppose that $G$ admits a frugal $L$-coloring $\phi$. Note that by the fourth bullet of Lemma \ref{ABhomo}, $a_1$ and $a_2$ are adjacent in $G$. So from $L(a_1)=L(a_2)$, $|L(a_1)|=|L(a_2)|=2$ and $4\in L(a_1)\cap L(a_2)$, we have $4\in \{\phi(a_1),\phi(a_2)\}$. Also, by the fifth bullet of Lemma \ref{ABhomo}, $a'_0$ is adjacent to both $a_1$ and $a_2$. Therefore, we have $\phi(a'_0)\in L(a'_0)\setminus \{\phi(a_1),\phi(a_2)\}\subseteq L(a'_0)\setminus \{4\}=L^*(a'_0)$. In other words, for every $v\in A'=\{a'_0\}$, we have $\phi(v)\in L^*(v)$. Moreover, for every $v\in V(G^*)\setminus A'$, we have $\phi(v)\in L(v)=L^*(v)$. Hence, $\phi$ being a frugal $L$-coloring of $G$, by Lemma \ref{frugalspread}, $\phi|_{V(G^*)}$ is a frugal $L^*$-coloring of $G^*$, and so $(G^*,L^*)$ satisfies \eqref{r2}. Finally, note that $(G^*,L^*)$ is a spanning $(G,L)$-refinement, and so by Lemma \ref{spanspread}, $(G^*,L^*)$ satisfies \eqref{r3}. This proves \eqref{bad2A}.\vspace*{3mm} Again, we observe that steps \ref{s8} and \ref{s9} of algorithm \textbf{\textsf{A}} are symmetrical with respect to $A$ and $B$. For this reason, the proof of \eqref{bad2B} below is identical to that of \eqref{bad2A}, and so we omit it. \sta{\label{bad2B} If algorithm \textbf{\textsf{A}} terminates at step \ref{s9}, then $(G^*,L^*)$ represents $(G,L)$.} \sta{\label{u_0deg4done} If algorithm \textbf{\textsf{A}} terminates at step \ref{s10}, then $(G^*,L^*)$ represents $(G,L)$.} The `if condition' of step \ref{s10} is satisfied, that is, $|N_{G^L}(u_0)|\geq 4$. Also, since algorithm \textbf{\textsf{A}} does not terminate at step \ref{s3}, the `if condition' of step \ref{s3} does not hold. In particular, we have $|N_{G^L}(u_0)|\leq 4$, and so $|N_{G^L}(u_0)|=4$. In addition, we have $G^*=G$, $L^*(a')=L(a')\setminus \{4\}$ for every $a'\in A'$, $L^*(b')=L(b')\setminus \{5\}$ for every $b'\in B'$ and $L^*(v)=L(v)$ for every $v\in V(G)\setminus (A'\cup B')$. By \eqref{getbullets}, all seven bullets of Lemma \ref{ABhomo} hold. In particular, by the first dash of the seventh bullet of Lemma \ref{ABhomo}, we have $A'\cap B'=\emptyset$ and $|A'|=|B'|=1$, say $A'=\{a'_0\}$ and $B'=\{b'_0\}$ for distinct $a'_0,b'_0$, and by the second bullet of Lemma \ref{ABhomo}, we have $L(a'_0)=L(b'_0)=\{4,5\}$. As a result, we have $4\in L(a'_0)\setminus L^*(a'_0)$, which in turn implies that $|L^*(a'_0)|<|L(a'_0)|$. Thus, $(G^*,L^*)$ satisfies \eqref{r1}. To see \eqref{r2}, suppose that $G$ admits a frugal $L$-coloring $\phi$. Since $|N_{G^L}(u_0)|=4$, by Lemma \ref{smallgooddeg}, we have $\phi(N_{G^L}[u_0])=[5]$. Thus, from $L(u_0)=\{1,2,3\}$ and the definition of $A$ and $B$, we deduce that there exists $a_0\in A$ with $\phi(a_0)=4$ and $b_0\in B$ with $\phi(b_0)=5$. On the other hand, by the fifth bullet of Lemma \ref{ABhomo}, $a'_0$ is adjacent to $a_0$ and $b'_0$ is adjacent to $b_0$ in $G^L$ (and so in $G$). Therefore, we have $\phi(a'_0)\in L(a'_0)\setminus \{\phi(a_0)\}=L(a'_0)\setminus \{4\}=L^*(a'_0)$ and $\phi(b'_0)\in L(b'_0)\setminus \{\phi(b_0)\}=L(b'_0)\setminus \{5\}=L^*(b'_0)$. In other words, for every $v\in \{a'_0,b'_0\}=A'\cup B'$, we have $\phi(v)\in L^*(v)$. Moreover, for every $v\in V(G^*)\setminus (A'\cup B')$, we have $\phi(v)\in L(v)=L^*(v)$. Hence, $\phi$ being a frugal $L$-coloring of $G$, by Lemma \ref{frugalspread}, $\phi|_{V(G^*)}$ is a frugal $L^*$-coloring of $G^*$, and so $(G^*,L^*)$ satisfies \eqref{r2}.
Finally, note that $(G^*,L^*)$ is a spanning $(G,L)$-refinement, and so by Lemma \ref{spanspread}, $(G^*,L^*)$ satisfies \eqref{r3}. This proves \eqref{u_0deg4done}.\vspace*{3mm} From \eqref{geq5done}-\eqref{u_0deg4done}, we deduce: \pagebreak \sta{\label{finalstep} The output $(G^*,L^*)$ of algorithm \textbf{\textsf{A}} represents $(G,L)$.} If algorithm \textbf{\textsf{A}} terminates at one of the steps \ref{s3}-\ref{s10}, then by \eqref{geq5done}-\eqref{u_0deg4done}, we are done. Therefore, we may assume that algorithm \textbf{\textsf{A}} stops at step \ref{s11}. Since algorithm \textbf{\textsf{A}} does not terminate at steps \ref{s4} and \ref{s10}, the `if condition' in these two steps is not satisfied, and so $3\leq |L(u_0)|\leq |N_{G^L}(u_0)|\leq 3$; that is, $|N_{G^L}(u_0)|=|L(u_0)|=3$ and $L(u_0)=\{1,2,3\}$. By \eqref{getbullets}, all seven bullets of Lemma \ref{ABhomo} hold. In particular, by the first and the second dash of the seventh bullet of Lemma \ref{ABhomo}, we have $A'\cap B'=\emptyset$ and $|A'|=|B'|=1$, say $A'=\{a'_0\}$ and $B'=\{b'_0\}$ for distinct $a'_0,b'_0$, $A,B\neq \emptyset$ and $A\cap B=\emptyset$. Also, by the fifth bullet and the third dash of the second bullet of Lemma \ref{ABhomo}, $a'_0$ is complete to $A$ in $G^L$ (and so in $G$) and anticomplete to $B$ in $G$ (and so in $G^L$), and $b'_0$ is complete to $B$ in $G^L$ (and so in $G$) and anticomplete to $A$ in $G$ (and so in $G^L$). Moreover, by the second bullet of Lemma \ref{ABhomo}, we have $L(a'_0)=L(b'_0)=\{4,5\}$. Let $M$, $i$ and $j$ be as in step \ref{s11} of algorithm \textbf{\textsf{A}}. Then we have $~{G^*=G|(\{u_0,a,b\}\cup (V(G)\setminus N_{G^L}[u_0]))}$, $L^*(u_0)=M$, $L^*(a)=\{i,4\}$, $L^*(b)=\{j,5\}$ and $L^*(v)=L(v)$ for every $v\in V(G^*)\setminus \{u_0,a,b\}$. Also, by the minimality of $M$, we have $M=\{i,j\}\subseteq \{1,2,3\}$. To verify the validity of \eqref{r1}, note that from $|N_{G^L}(u_0)|=3$, one may deduce $~{|V(G^*)|=3+|V(G)\setminus N_{G^L}[u_0]|<4+|V(G)\setminus N_{G^L}[u_0]|\leq |V(G)|}$. So $(G^*,L^*)$ satisfies \eqref{r1}. For \eqref{r2}, suppose that $G$ admits a frugal $L$-coloring $\phi$. From $L(u_0)=\{1,2,3\}$, $|N_{G^L}(u_0)|=3$ and Lemma \ref{smallgooddeg}, we observe that either there exists $a_0\in A$ with $\phi(a_0)=4$ or there exists $b_0\in B$ with $\phi(b_0)=5$. On the other hand, by the fifth bullet of Lemma \ref{ABhomo}, $a'_0$ is complete to $A$ in $G^L$ (and so in $G$) and $b'_0$ is complete to $B$ in $G^L$ (and so in $G$). Consequently, since $\phi$ is proper, either $\phi(a'_0)=5$ or $\phi(b'_0)=4$. In the former case, let $\psi(a)=4$, $\psi(b)=j$, $\psi(u_0)=i$ and $\psi(v)=\phi(v)$ for every $v\in V(G^*)\setminus \{u_0,a,b\}=V(G)\setminus N_{G^L}[u_0]$. In the latter case, let $\psi(a)=i$, $\psi(b)=5$, $\psi(u_0)=j$ and again $\psi(v)=\phi(v)$ for every $v\in V(G^*)\setminus \{u_0,a,b\}=V(G)\setminus N_{G^L}[u_0]$. We leave it to the reader to check that, from $\phi$ being a frugal $L$-coloring of $G$, it follows that $\psi$ is a frugal $L^*$-coloring of $G^*$. This verifies \eqref{r2} for $(G^*,L^*)$. It remains to argue the truth of \eqref{r3} for $(G^*,L^*)$. Suppose $\psi$ is an $L^*$-coloring of $G^*$. Due to $|N_{G^L}(u_0)|=3$, let $N_{G^L}(u_0)\setminus \{a,b\}=\{c\}$. By the first bullet of Lemma \ref{ABhomo}, we have $|L(a)|,|L(b)|,|L(c)|\geq 2$.
Note that either $\psi(a'_0)=5$ or $\psi(b'_0)=4$, for otherwise from $\psi(a'_0)=4$ and $\psi(b'_0)=5$, it follows that $\psi(a)=i$, $\psi(b)=j$, and so $\psi(u_0)\in M=\{i,j\}= \{\psi(a),\psi(b)\}$, which contradicts $\psi$ being proper. We deduce \eqref{r3} for the cases $\psi(a'_0)=\psi(b'_0)$ and $\psi(a'_0)\neq \psi(b'_0)$ separately, below. First, suppose that $\psi(a'_0)=\psi(b'_0)$, and by symmetry, let $\psi(a'_0)=\psi(b'_0)=4$. We define a coloring $\phi$ of $G$ as follows. Let $\phi(b)=5$ and $\phi(v)=\psi(v) $ for every $v\in V(G)\setminus N_{G^L}[u_0]$. In order to determine $\phi(a)$ and $\phi(c)$, we need to consider two cases. If $c\in A$, then from $A\cap B=\emptyset$, we have $5\notin L(a)\cup L(c)$, and so we may choose two distinct colors $k$ and $l$ with $~{k\in L(a)\setminus \{4,5\}=L(a)\setminus \{\phi(a'_0),\phi(b'_0),\phi(b)\}}$ and $l\in L(c)\setminus \{4\}=L(c)\setminus \{\phi(a'_0),\phi(b'_0),\phi(b)\}$, since otherwise $L(a)=L(c)$ and $|L(a)|=|L(c)|=2$, and so algorithm \textbf{\textsf{A}} should have terminated at step \ref{s8}, a contradiction. Otherwise, if $c\notin A$, then we have $4\notin L(c)$. So since $|L(a)|\geq 2$, there exists $k\in L(a)\setminus \{4,5\}=L(a)\setminus \{\phi(a'_0),\phi(b'_0),\phi(b)\}$, and since $|L(c)|\geq 2$, there exists $l\in L(c)\setminus \{k,5\}=L(c)\setminus \{k,4,5\}=L(c)\setminus \{k,\phi(a'_0),\phi(b'_0),\phi(b)\}$, as otherwise $c\in B$ and $L(a)\cap L(c)\neq \emptyset$, which violates the fourth dash of the seventh bullet of Lemma \ref{ABhomo}. We define $\phi(a)=k$ and $\phi(c)=l$. Finally, we choose $\phi(u_0)\in L(u_0) \setminus \{k,l,4\}=\{1,2,3\}\setminus \{k,l\}$. We leave it to the reader to check that, since $\psi$ is an $L^*$-coloring of $G^*$, $\phi$ is an $L$-coloring of $G$, and so \eqref{r3} follows in this case. Note that the argument for the case $\psi(a'_0)=\psi(b'_0)=5$ is analogous, with the additional caveat that this time we rely on the fact that algorithm \textbf{\textsf{A}} does not terminate at step \ref{s9}, instead. Next, suppose that $\psi(a'_0)\neq \psi(b'_0)$; that is, $\psi(a'_0)=5$ and $\psi(b'_0)=4$. We define a coloring $\phi$ of $G$ as follows. Let $\phi(a)=4$, $\phi(b)=5$, $\phi(v)=\psi(v) $ for every $v\in V(G)\setminus N_{G^L}[u_0]$. Also, since $A\cap B=\emptyset$, either $4\notin L(c)$ or $5\notin L(c)$, and from $|L(c)|\geq 2$, there exists $k\in L(c)\setminus \{4,5\}=L(c)\setminus \{\phi(a),\phi(b),\psi(a'_0),\psi(b'_0)\}$, and we set $\phi(c)=k$. Finally, we choose $\phi(u_0)\in L(u_0) \setminus \{k,4,5\}=\{1,2,3\}\setminus \{k\}$. Then it is easy to check that since $\psi$ is an $L^*$-coloring of $G^*$, $\phi$ is an $L$-coloring of $G$, and so $(G^*,L^*)$ satisfies \eqref{r3}. This proves \eqref{finalstep}. \sta{\label{23kill1} There exists a $(G^*,L^*)$-refinement $(\tilde{G},\tilde{L})$ with the following specifications. \begin{itemize} \item $(\tilde{G},\tilde{L})$ can be computed from $(G^*,L^*)$ in time $\mathcal{O}(|V(G^*)|^2)$. \item We have $|\tilde{L}(v)|\neq 1$ for all $v\in V(\tilde{G})$. \item If $G^*$ admits a frugal $L^*$-coloring, then $\tilde{G}$ admits a frugal $\tilde{L}$-coloring. \item If $\tilde{G}$ admits an $\tilde{L}$-coloring, then $G^*$ admits an $L^*$-coloring. \end{itemize}} We may apply Lemma \ref{kill1} to $(G^*,L^*)$, obtaining a $(G^*,L^*)$-refinement $(\hat{G^*},\hat{L^*})$ satisfying the bullet conditions of Lemma \ref{kill1}.
Then, defining $\tilde{G}=\hat{G^*}$ and $\tilde{L}=\hat{L^*}$, it follows that $(\tilde{G},\tilde{L})$ satisfies the bullet conditions of \eqref{23kill1}. This proves \eqref{23kill1}.\vspace*{3mm} To conclude the proof, let $(\tilde{G},\tilde{L})$ be as in \eqref{23kill1}. We show that $(\tilde{G},\tilde{L})$ satisfies Theorem \ref{23}. By \eqref{23runtime}, algorithm \textbf{\textsf{A}} computes $(G^*,L^*)$ from $(G,L)$ in time $\mathcal{O}(|V(G)|^2)$. Also, by the first bullet of \eqref{23kill1}, $(\tilde{G},\tilde{L})$ can be computed from $(G^*,L^*)$ in time $\mathcal{O}(|V(G^*)|^2)=\mathcal{O}(|V(G)|^2)$. So $(\tilde{G},\tilde{L})$ can be computed from $(G,L)$ in time $\mathcal{O}(|V(G)|^2)$; that is, $(\tilde{G},\tilde{L})$ satisfies the first bullet of Theorem \ref{23}. For the second bullet of Theorem \ref{23}, we argue the validity of \eqref{r1}, \eqref{r2} and \eqref{r3} for $(\tilde{G},\tilde{L})$ separately. By \eqref{finalstep}, $(G^*,L^*)$ satisfies \eqref{r1}, and so, since $(\tilde{G},\tilde{L})$ is a $(G^*,L^*)$-refinement, $(\tilde{G},\tilde{L})$ satisfies \eqref{r1} as well. For \eqref{r2}, suppose that $G$ admits a frugal $L$-coloring. Then by \eqref{finalstep}, $(G^*,L^*)$ satisfies \eqref{r2}, and so $G^*$ admits a frugal $L^*$-coloring. Therefore, by the third bullet of \eqref{23kill1}, $\tilde{G}$ admits a frugal $\tilde{L}$-coloring, and so $(\tilde{G},\tilde{L})$ satisfies \eqref{r2}. Finally, for \eqref{r3}, suppose that $\tilde{G}$ admits an $\tilde{L}$-coloring. Then by the fourth bullet of \eqref{23kill1}, $G^*$ admits an $L^*$-coloring. Also, by \eqref{finalstep}, $(G^*,L^*)$ satisfies \eqref{r3}, and so $G$ admits an $L$-coloring. Hence, $(\tilde{G},\tilde{L})$ satisfies \eqref{r3}. This completes the proof. \end{proof} \section{Proof of Theorem \ref{thm:main}}\label{mainsec} In this section, we combine Theorems \ref{frugality}, \ref{goodp3theorem} and \ref{23} to deduce Theorem \ref{thm:main}. First, Theorems \ref{frugality} and \ref{goodp3theorem} are applied to deduce the following. \begin{theorem}\label{allfromfrug+good} For all fixed $k,r\in \mathbb{N}$, there exists $\eta(k,r)\in \mathbb{N}$ with the following property. Let $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem} where $G$ is an $rP_3$-free graph. Then there exists a $(G,L)$-profile $\Xi(G,L)$ with the following specifications. \begin{itemize} \item $|\Xi(G,L)|\leq \mathcal{O}\left(|V(G)|^{\eta(k,r)}\right)$ and $\Xi(G,L)$ can be computed from $(G,L)$ in time $ \mathcal{O}\left(|V(G)|^{\eta(k,r)}\right)$. \item For every $(G',L')\in \Xi(G,L)$ and every $v\in V(G')$, we have $|L'(v)|\neq 1$. \item For every $(G',L')\in \Xi(G,L)$, $G'$ has no $L'$-good $P_3$. \item If $G$ admits an $L$-coloring, then for some $(G',L')\in \Xi(G,L)$, $G'$ admits a frugal $L'$-coloring. \item If $G'$ admits an $L'$-coloring for some $(G',L')\in \Xi(G,L)$, then $G$ admits an $L$-coloring. \end{itemize} \end{theorem} \begin{proof} Applying Theorem \ref{frugality} to $(G,L)$, we obtain a spanning $(G,L)$-profile $\Pi(G,L)$ satisfying the bullet conditions of Theorem \ref{frugality}. Also, for every $(G,K)\in \Pi(G,L)$, applying Theorem \ref{goodp3theorem} to $(G,K)$, we obtain a spanning $(G,K)$-profile $\Upsilon(G,K)$ satisfying the bullet conditions of Theorem \ref{goodp3theorem}. Let $\Theta(G,L)=\bigcup_{(G,K)\in \Pi(G,L)}\Upsilon(G,K)$.
Then, for every $(G,J)\in \Theta(G,L)$, we may apply Lemma \ref{kill1} to $(G,J)$, obtaining a $(G,J)$-refinement $(\hat{G},\hat{J})$ satisfying the bullet conditions of Lemma \ref{kill1}. Let $\Xi(G,L)=\{(\hat{G},\hat{J}): (G,J)\in \Theta(G,L)\}$. We claim that $\Xi(G,L)$ satisfies Theorem \ref{allfromfrug+good}. Clearly, $\Xi(G,L)$ is a $(G,L)$-profile. Also, let $\pi(k,r)$ be as in Theorem \ref{frugality} and $\upsilon(k,r)$ be as in Theorem \ref{goodp3theorem}. Now, setting $\eta(k,r)=\pi(k,r)+\upsilon(k,r)$, by the first bullets of Theorems \ref{frugality} and \ref{goodp3theorem} and Lemma \ref{kill1}, $\Xi(G,L)$ satisfies the first bullet of Theorem \ref{allfromfrug+good}. Also, by the second bullet of Lemma \ref{kill1}, $\Xi(G,L)$ satisfies the second bullet of Theorem \ref{allfromfrug+good}. Moreover, the second bullet of Theorem \ref{goodp3theorem} along with Lemma \ref{goodspread} implies that $\Xi(G,L)$ satisfies the third bullet of Theorem \ref{allfromfrug+good}. The fourth bullet of Theorem \ref{allfromfrug+good} for $\Xi(G,L)$ follows from the second bullet of Theorem \ref{frugality} and the third bullets of Theorem \ref{goodp3theorem} and Lemma \ref{kill1}. Finally, by Lemma \ref{spanspread} and the fourth bullet of Lemma \ref{kill1}, $\Xi(G,L)$ satisfies the fifth bullet of Theorem \ref{allfromfrug+good}. This completes the proof. \end{proof} \sloppy Next, we prove the following as an application of Theorem \ref{23}. Recall the definition $~{p(G,L)=|V(G)|+\sum_{v\in V(G)}|L(v)|}$ for every instance $(G,L)$ of the \textsc{List-$k$-Coloring Problem}, $k\in \mathbb{N}$. \begin{theorem}\label{23rep} Let $(G,L)$ be an instance of the \textsc{List-$5$-Coloring Problem} such that $|L(v)|\neq 1$ for all $v\in V(G)$ and $G$ has no $L$-good $P_3$. Then there exists a $(G,L)$-refinement $(G^{\flat},L^{\flat})$ with the following specifications. \begin{itemize} \item $(G^{\flat},L^{\flat})$ can be computed from $(G,L)$ in time $\mathcal{O}(p(G,L)|V(G)|^2)=\mathcal{O}(|V(G)|^3)$. \item $|L^{\flat}(v)|\in \{0,2\}$ for all $v\in V(G^{\flat})$. \item If $G$ admits a frugal $L$-coloring, then $G^{\flat}$ admits a frugal $L^{\flat}$-coloring. \item If $G^{\flat}$ admits an $L^{\flat}$-coloring, then $G$ admits an $L$-coloring. \end{itemize} \end{theorem} \begin{proof} Let $(G,L)$ be a counterexample with $p(G,L)$ as small as possible. If $|L(u)|\in \{0,2\}$ for every $u\in V(G)$, then we define $G^{\flat}=G$, $L^{\flat}=L$, and it is immediately seen that $(G^{\flat},L^{\flat})$ satisfies the bullet conditions of Theorem \ref{23rep}, a contradiction. So we may assume that there exists a vertex $u_0\in V(G)$ with $|L(u_0)|\geq 3$. Applying Theorem \ref{23} to $(G,L)$ and $u_0$, we obtain a $(G,L)$-refinement $(\tilde{G},\tilde{L})$ satisfying the bullet conditions of Theorem \ref{23}. In particular, by the second bullet of Theorem \ref{23}, we have $|\tilde{L}(v)|\neq 1$ for all $v\in V(\tilde{G})$. Also, since $G$ has no $L$-good $P_3$, by Lemma \ref{goodspread}, $\tilde{G}$ has no $\tilde{L}$-good $P_3$. Moreover, by the third bullet of Theorem \ref{23}, $(\tilde{G},\tilde{L})$ satisfies \eqref{r1}; that is, $p(\tilde{G},\tilde{L})<p(G,L)$. This, together with the minimality of $p(G,L)$, implies that there exists a $(\tilde{G},\tilde{L})$-refinement $(\tilde{G}^{\flat},\tilde{L}^{\flat})$ satisfying the bullet conditions of Theorem \ref{23rep}. Now, let $G^{\flat}=\tilde{G}^{\flat}$ and $L^{\flat}=\tilde{L}^{\flat}$.
Since $(\tilde{G}^{\flat},\tilde{L}^{\flat})$ is a $(\tilde{G},\tilde{L})$-refinement and $(\tilde{G},\tilde{L})$ is a $(G,L)$-refinement, it follows that $(G^{\flat},L^{\flat})=(\tilde{G}^{\flat},\tilde{L}^{\flat})$ is a $(G,L)$-refinement. Moreover, since $(G^{\flat},L^{\flat})=(\tilde{G}^{\flat},\tilde{L}^{\flat})$, it is easy to see that \begin{itemize} \item[-] the first bullet of Theorem \ref{23rep} for $(G,L)$ and $(G^{\flat},L^{\flat})$ follows from the first bullet of Theorem \ref{23} for $(G,L)$ and $(\tilde{G},\tilde{L})$ and the first bullet of Theorem \ref{23rep} for $(\tilde{G},\tilde{L})$ and $(\tilde{G}^{\flat},\tilde{L}^{\flat})$; \item[-] the second bullet of Theorem \ref{23rep} for $(G^{\flat},L^{\flat})$ follows from the second bullet of Theorem \ref{23rep} for $(\tilde{G}^{\flat},\tilde{L}^{\flat})$; \item[-] the third bullet of Theorem \ref{23rep} for $(G,L)$ and $(G^{\flat},L^{\flat})$ follows from the third bullet of Theorem \ref{23} (in particular, \eqref{r2}) for $(G,L)$ and $(\tilde{G},\tilde{L})$ and the third bullet of Theorem \ref{23rep} for $(\tilde{G},\tilde{L})$ and $(\tilde{G}^{\flat},\tilde{L}^{\flat})$; and \item[-] the fourth bullet of Theorem \ref{23rep} for $(G,L)$ and $(G^{\flat},L^{\flat})$ follows from the third bullet of Theorem \ref{23} (in particular, \eqref{r3}) for $(G,L)$ and $(\tilde{G},\tilde{L})$ and the fourth bullet of Theorem \ref{23rep} for $(\tilde{G},\tilde{L})$ and $(\tilde{G}^{\flat},\tilde{L}^{\flat})$. \end{itemize} But this violates $(G,L)$ being a counterexample to Theorem \ref{23rep}, and so completes the proof. \end{proof} As the last ingredient, we need the following, which is proved via a reduction to 2SAT and has been discovered independently by many authors \cite{2sat1,2sat2,2sat3}. \begin{theorem}[Edwards \cite{2sat1}]\label{2LC} Let $k\in \mathbb{N}$ be fixed and $(G,L)$ be an instance of the \textsc{List-$k$-Coloring Problem} with $|L(v)|\leq 2$ for every $v\in V(G)$. Then it can be decided in time $\mathcal{O}(|V(G)|^2)$ whether $G$ admits an $L$-coloring. \end{theorem} Now we are in a position to prove Theorem \ref{thm:main}, which we restate. \begin{theorem} Let $r\in \mathbb{N}$ be fixed. Then there exists a polynomial-time algorithm which solves the \textsc{List-$5$-Coloring Problem} restricted to $rP_3$-free instances. \end{theorem} \begin{proof} Given an $rP_3$-free instance $(G,L)$ of the \textsc{List-$5$-Coloring Problem}, let $\Xi(G,L)$ be as in Theorem \ref{allfromfrug+good}. Then, for every $(G',L')\in \Xi(G,L)$, by the second bullet of Theorem \ref{allfromfrug+good}, $|L'(v)|\neq 1$ for all $v\in V(G')$, and by the third bullet of Theorem \ref{allfromfrug+good}, $G'$ has no $L'$-good $P_3$. Therefore, we may apply Theorem \ref{23rep} to $(G',L')$, obtaining a $(G',L')$-refinement $(G'^{\flat},L'^{\flat})$ satisfying the bullet conditions of Theorem \ref{23rep}. Now, consider the $(G,L)$-profile $\Gamma(G,L)=\{(G'^{\flat},L'^{\flat}): (G',L')\in \Xi(G,L)\}$. For all $k,r\in \mathbb{N}$, let $\eta(k,r)$ be as in Theorem \ref{allfromfrug+good}. Then, statement \eqref{finalruntime} below follows immediately from the first bullet of Theorem \ref{allfromfrug+good} for $\Xi(G,L)$, and the first bullet of Theorem \ref{23rep} for every $(G'^{\flat},L'^{\flat})$, where $(G',L')\in \Xi(G,L)$.
\sta{\label{finalruntime} $|\Gamma(G,L)|\leq \mathcal{O}(|V(G)|^{\eta(k,r)})$, and $\Gamma(G,L)$ can be computed from $(G,L)$ in time $\mathcal{O}(|V(G)|^{\max\{\eta(k,r),3\}})$.} Also, we deduce: \sta{\label{ordtoord} $G$ admits an $L$-coloring if and only if $G'^{\flat}$ admits an $L'^{\flat}$-coloring for some $(G'^{\flat},L'^{\flat})\in \Gamma(G,L)$.} Suppose that $G$ admits an $L$-coloring. By the fourth bullet of Theorem \ref{allfromfrug+good}, for some $(G',L')\in \Xi(G,L)$, $G'$ admits a frugal $L'$-coloring. As a result, by the third bullet of Theorem \ref{23rep}, $G'^{\flat}$ admits a frugal $L'^{\flat}$-coloring, and so an $L'^{\flat}$-coloring, where $(G'^{\flat},L'^{\flat})\in \Gamma(G,L)$. Conversely, suppose that for some $(G',L')\in \Xi(G,L)$, $G'^{\flat}$ admits an $L'^{\flat}$-coloring. Then by the fourth bullet of Theorem \ref{23rep}, $G'$ admits an $L'$-coloring. Therefore, by the fifth bullet of Theorem \ref{allfromfrug+good}, $G$ admits an $L$-coloring. This proves \eqref{ordtoord}.\vspace*{3mm} Now, the algorithm is as follows. First, we compute $\Gamma(G,L)$. By \eqref{finalruntime}, this is doable in time $\mathcal{O}(|V(G)|^{\max\{\eta(k,r),3\}})$. Then, by the second bullet of Theorem \ref{23rep}, for each $(G'^{\flat},L'^{\flat})\in \Gamma(G,L)$, we have $|L'^{\flat}(v)|\in \{0,2\}$ for every $v\in V(G'^{\flat})$. Therefore, since $|\Gamma(G,L)|\leq \mathcal{O}(|V(G)|^{\eta(k,r)})$ by \eqref{finalruntime}, applying the algorithm from Theorem \ref{2LC}, we decide in polynomial time whether there exists $(G'^{\flat},L'^{\flat})\in \Gamma(G,L)$ such that $G'^{\flat}$ admits an $L'^{\flat}$-coloring. If the answer is yes, then by \eqref{ordtoord}, $G$ admits an $L$-coloring. If the answer is no, again by \eqref{ordtoord}, $G$ admits no $L$-coloring. This completes the proof. \end{proof} \section{Proof of Theorem \ref{thm:hardness}} \label{sec:hardness} In this section, we prove Theorem \ref{thm:hardness} via a reduction from \textit{monotone} \textsc{NAE3SAT}, defined as follows. The \textsc{Not-All-Equal-3-Satisfiability Problem (NAE3SAT)} is to decide, given an instance $I$ consisting of $n$ Boolean variables $x_1,\ldots,x_n$ and $m$ clauses $C_1,\ldots, C_m$, each containing three literals, whether there exists a true/false assignment for each variable such that each clause contains at least one true literal and one false literal. We say $I$ is \textit{satisfiable} if it admits such an assignment. By \emph{monotone} \textsc{NAE3SAT}, we mean \textsc{NAE3SAT} restricted to \textit{monotone} instances; that is, instances with no negated literals. \begin{theorem}[Garey and Johnson \cite{NAE-NP}]\label{NAENP} Monotone \textsc{NAE3SAT} is \textsf{NP}-complete. \end{theorem} Now we can prove Theorem \ref{thm:hardness}, which we restate. \begin{theorem} The \textsc{$k$-Coloring Problem} restricted to $rP_4$-free graphs is \textsf{NP}-complete for all $k\geq 5$ and $r\geq 2$. \end{theorem} \begin{proof} Clearly, the \textsc{$k$-Coloring Problem} restricted to $rP_4$-free graphs belongs to \textsf{NP}. For the hardness, it suffices to prove that the \textsc{$5$-Coloring Problem} restricted to $2P_4$-free graphs is \textsf{NP}-hard. Given a monotone \textsc{NAE3SAT} instance $I$ with variables $x_1,x_2,\ldots,x_n$ and clauses $C_1,C_2,\ldots,C_m$, we construct a graph $G$ as an instance of the \textsc{$5$-Coloring Problem}, as follows. Let $C=\{c_i: i\in [5]\}$, $X=\{x_i:i\in [n]\}$, $Y=\{y_j,z_j:j\in [m]\}$ and $U=\{u_j^k,w_j^k:j\in [m],k\in [3]\}$.
Then we let $V(G)=C\cup X\cup Y\cup U$, and the adjacency in $G$ is as follows. \begin{itemize} \item[-] $C$ is a clique of $G$. \item[-] For each $i\in [n]$, we have $c_3x_i,c_4x_i,c_5x_i\in E(G)$. \item[-] For each $j\in [m]$, we have $c_1y_j,c_2y_j,c_1z_j,c_2z_j\in E(G)$. \item[-] For each $j\in [m]$ and $k\in[3]$, we have $c_1u_j^k,c_2w_j^k\in E(G)$. \item[-] For each $j \in [m]$, we have $c_iu_j^k,c_iw_j^k\in E(G)$ for all pairs $(i, k)$ with $i \in \{3,4,5\}$, $k \in \{1,2,3\}$, and $i \neq k+2$. \item[-] For each $i\in[n]$ and $j\in [m]$, we have $x_iy_j,x_iz_j\in E(G)$. \item[-] For each $j\in [m]$ and all $k\in[3]$, we have $y_ju_j^k,z_jw_j^k\in E(G)$. \item[-] For each $j\in [m]$, if $C_j$ contains $x_{i_1}$, $x_{i_2}$, and $x_{i_3}$, then we have $x_{i_k}u_j^k,x_{i_k}w_j^k\in E(G)$ for all $k\in [3]$. \end{itemize} There are no edges in $E(G)$ other than those described above. It is easily seen that the construction is of polynomial size and can be computed in polynomial time (a programmatic sketch is given after the proof). \sta{\label{feas} $I$ is satisfiable if and only if $G$ is $5$-colorable.} First, let $\phi:V(G)\rightarrow [5]$ be a $5$-coloring of $G$. Since $C$ is a clique of $G$, we may assume without loss of generality that $\phi(c_i)=i$ for every $i\in[5]$. Thus, $\phi(x_i)\in \{1,2\}$ for every $i\in [n]$. Now, let $j\in [m]$ and let $x_{i_1}$, $x_{i_2}$, and $x_{i_3}$ be the literals in $C_j$. If $\phi(x_{i_1})=\phi(x_{i_2})=\phi(x_{i_3})=2$, then we have $\phi(u_j^1)=3$, $\phi(u_j^2)=4$ and $\phi(u_j^3)=5$. But then $y_j$ has a neighbour of each color in $[5]$, which is a contradiction. As a result, at least one of $\phi(x_{i_1})$, $\phi(x_{i_2})$ and $\phi(x_{i_3})$ is equal to $1$. Similarly, by considering the vertices $w_j^k$, $k\in \{1,2,3\}$, and $z_j$, we deduce that at least one of $\phi(x_{i_1})$, $\phi(x_{i_2})$ and $\phi(x_{i_3})$ is $2$. Thus, by setting $x_i$ to be True if $\phi(x_i)=1$ and False if $\phi(x_i)=2$, we conclude that $I$ is satisfiable. Next, suppose that $I$ is satisfiable. We define a coloring $\phi:V(G) \rightarrow [5]$ of $G$ as follows. Let $\phi(c_i)=i$ for every $i\in[5]$. Let $\phi(x_i)=1$ if $x_i$ is assigned True and $\phi(x_i)=2$ otherwise. For each $j\in [m]$ and $C_j$ with literals $x_{i_1}$, $x_{i_2}$, and $x_{i_3}$, and each $k\in [3]$, if $\phi(x_{i_k})=1$, then we set $\phi(u_j^k)=2$ and $\phi(w_j^k)=k+2$, and if $\phi(x_{i_k})=2$, we set $\phi(u_j^k)=k+2$ and $\phi(w_j^k)=1$. Since at least one of $x_{i_1}, x_{i_2}$ and $x_{i_3}$ is assigned True and at least one of them is assigned False, there exist $k_1,k_2\in [3]$ with $k_1\neq k_2$ such that $\phi(u_j^{k_1})=2$ and $\phi(w_j^{k_2})=1$. We set $\phi(y_j)=k_1+2$ and $\phi(z_j)=k_2+2$. We leave it to the reader to check that $\phi$ is a $5$-coloring of $G$. This proves \eqref{feas}. \sta{\label{P4type} The vertex set of every induced $P_4$ in $G$ intersects either $C$ or both $X$ and $Y$.} Let $P$ be an induced $P_4$ in $G$ with $V(P)=\{v_1,v_2,v_3,v_4\}$ and $E(P)=\{v_1v_2,v_2v_3,v_3v_4\}$ such that $V(P)\cap C=\emptyset$. If $v_2\in U$, then without loss of generality, we may assume that $v_1\in X$ and $v_3\in Y$. But then $v_1v_3\in E(G)$, a contradiction. So $v_2\notin U$, and similarly, $v_3\notin U$. It follows that $v_2,v_3\in X\cup Y$. Therefore, since $X$ and $Y$ are stable sets of $G$ and $v_2v_3\in E(G)$, one of $v_2$ and $v_3$ belongs to $X$ and the other one belongs to $Y$. This proves \eqref{P4type}. \sta{\label{2P4free} $G$ is $2P_4$-free.} Suppose not.
Let $P$ and $Q$ be two induced $P_4$'s in $G$ with $V(P)$ anticomplete to $V(Q)$. Since $C$ is a clique of $G$, at most one of $V(P)$ and $V(Q)$ intersects $C$, and so we may assume without loss of generality that $V(P)\cap C=\emptyset$. By \eqref{P4type}, we may choose vertices $x_1\in V(P)\cap X$ and $y_1\in V(P)\cap Y$. If there exists $c_i\in V(Q)\cap C$, then depending on whether $i\in \{1,2\}$ or not, either $c_iy_1\in E(G)$ or $c_ix_1\in E(G)$, which is impossible. Therefore, by \eqref{P4type}, we may choose a vertex $x_2\in V(Q)\cap X$. But then $x_2y_1\in E(G)$, a contradiction. This proves \eqref{2P4free}.\vspace*{3mm} From \eqref{feas}, \eqref{2P4free} and Theorem \ref{NAENP}, it follows that the \textsc{$5$-Coloring Problem} restricted to $2P_4$-free graphs is \textsf{NP}-hard. This completes the proof. \end{proof} \bibliographystyle{abbrv}
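As an addendum, the construction in the proof above is simple to instantiate programmatically. The following Python sketch is our own illustration (the encoding of vertices as tuples and the representation of a monotone clause as a triple of variable indices are our assumptions, not part of the paper); it builds the vertex and edge sets of $G$ from a monotone \textsc{NAE3SAT} instance by following the adjacency rules listed in the proof.

\begin{verbatim}
from itertools import combinations

def build_reduction_graph(n, clauses):
    """Build the 5-coloring instance G from a monotone NAE3SAT
    instance with variables x_1..x_n; each clause is a triple
    (i_1, i_2, i_3) of variable indices in [1, n]."""
    m = len(clauses)
    C = [("c", i) for i in range(1, 6)]
    X = [("x", i) for i in range(1, n + 1)]
    Y = [("y", j) for j in range(1, m + 1)] + \
        [("z", j) for j in range(1, m + 1)]
    U = [(s, j, k) for s in ("u", "w")
         for j in range(1, m + 1) for k in (1, 2, 3)]
    V = C + X + Y + U
    E = set()
    E.update(combinations(C, 2))                    # C is a clique
    E.update((("c", i), x) for i in (3, 4, 5) for x in X)
    E.update((("c", i), v) for i in (1, 2) for v in Y)
    E.update((x, v) for x in X for v in Y)          # X complete to Y
    for j in range(1, m + 1):
        for k in (1, 2, 3):
            E.add((("c", 1), ("u", j, k)))
            E.add((("c", 2), ("w", j, k)))
            for i in (3, 4, 5):                     # skip i == k + 2
                if i != k + 2:
                    E.add((("c", i), ("u", j, k)))
                    E.add((("c", i), ("w", j, k)))
            E.add((("y", j), ("u", j, k)))
            E.add((("z", j), ("w", j, k)))
        # clause edges: x_{i_k} u_j^k and x_{i_k} w_j^k
        for k, i_k in enumerate(clauses[j - 1], start=1):
            E.add((("x", i_k), ("u", j, k)))
            E.add((("x", i_k), ("w", j, k)))
    return V, E
\end{verbatim}

For instance, a single clause on three variables yields $|V(G)|=5+3+2+6=16$, matching the counts $|C|=5$, $|X|=n$, $|Y|=2m$ and $|U|=6m$ from the construction.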
{ "timestamp": "2021-09-09T02:10:05", "yymm": "2105", "arxiv_id": "2105.01787", "language": "en", "url": "https://arxiv.org/abs/2105.01787" }
\section{Introduction} Recognizing a table image as LaTeX code is challenging due to the complexity and diversity of table structures, and due to the long-sequence problem compared to traditional OCR. The challenge aims at assessing the ability of state-of-the-art methods to recognize scientific tables into LaTeX code. In this competition, there are two sub-tasks with different levels of difficulty.\\\\ \textbf{Subtask I Table Structure Reconstruction} is to reconstruct the structure of a table image in the form of LaTeX code while ignoring the content of the table.\\\\ \textbf{Subtask II Table Content Reconstruction} is to reconstruct the structure and the content of a table image simultaneously in the form of LaTeX code.\\\\ The Table Image Recognition to LaTeX data set~\cite{Pratik2021ICDAR} is a scientific table recognition data set which consists of 43,138 training samples, 800 validation samples, and 2,203 test samples for the TSR task, and 35,500 training samples, 500 validation samples, and 1,917 test samples for the TCR task. We treat both tasks as image-to-sequence recognition tasks, as in scene text recognition. Our model is based on our previous work MASTER~\cite{lu2019master}; it performs well on OCR tasks and can be easily adapted to other similar tasks, such as curved text prediction, multi-line text prediction, vertical text prediction, and multilingual text prediction. The rest of the paper is organized as follows. We analyze the competition data set in Section~\ref{sec:datasec}, introduce the method and the tricks used for the TSR and TCR tasks in Section~\ref{sec:methodsection}, present our experimental results in Section~\ref{sec:experiments}, and finally conclude the paper in Section~\ref{sec:conclusion}. \section{Data} \label{sec:datasec} In this section, we conduct some statistical analysis of the provided data. Our choices of some model parameters are based on the analysis in this section. \subsection{Statistics of Data} The sequence-length distributions of the TSR and TCR data sets are shown in Figure~\ref{fig:sequencelength}. As shown in Figure~\ref{fig:sequencelength}, tables have fewer than 250 tokens in the TSR task and fewer than 500 tokens in the TCR task. The distribution of sequence lengths appears relatively uniform; the average sequence lengths are 76.09 for the TSR task and 213.95 for the TCR task, respectively. Thus, in our MASTER model, the maximum sequence length is set to 500 for the TCR task and 250 for the TSR task. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{images/sequence_length.png} \caption{Distribution of sequence length. The maximum sequence lengths are 500 for the TCR task and 250 for the TSR task.} \label{fig:sequencelength} \end{figure} The token-count distributions of the TSR and TCR data are shown in Figure~\ref{fig:tokennumber}. There are only 27 token types in the TSR task, and their distribution is extremely imbalanced. The most frequent token is ``CELL'', which appears 1,217,487 times, while the least frequent tokens appear only a few dozen times. A similar category imbalance also appears in the TCR task. Specifically, all special mathematical characters are represented by ``LATEX\_TOKEN'', so there are only 236 token types in the TCR task. The tokens ``2c'', ``2ex'', ``564'' and ``constant'' appear only once, which we believe indicates mislabeling; we therefore replace them with ``LATEX\_TOKEN''.
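As a minimal illustration of the preprocessing just described, the following Python sketch (our own; it assumes each label has already been tokenized into a list of strings, which is an assumption about the data format rather than a detail given by the competition) computes the sequence-length statistics and replaces tokens that occur only once with ``LATEX\_TOKEN''.

\begin{verbatim}
from collections import Counter

def analyze_and_clean(labels, rare_threshold=1):
    """labels: list of token sequences (each a list of strings).
    Returns average/maximum sequence length, token counts, and
    cleaned labels in which tokens occurring at most
    rare_threshold times are mapped to LATEX_TOKEN."""
    lengths = [len(seq) for seq in labels]
    avg_len = sum(lengths) / len(lengths)
    max_len = max(lengths)
    counts = Counter(tok for seq in labels for tok in seq)
    rare = {tok for tok, c in counts.items() if c <= rare_threshold}
    cleaned = [["LATEX_TOKEN" if tok in rare else tok for tok in seq]
               for seq in labels]
    return avg_len, max_len, counts, cleaned
\end{verbatim}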
\begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{images/tokens2.png} \caption{Distribution of token counts. For readability, we apply a logarithmic transform to the counts. The distribution of tokens varies widely.} \label{fig:tokennumber} \end{figure} \section{Method} \label{sec:methodsection} Overall, our model is based on our previous work MASTER~\cite{lu2019master}. We refer the reader to the original manuscript for detailed information. In this section, we mainly describe some useful attempts we made in this competition. \subsection{Task1: Table Structure Reconstruction} In the TSR task, we mainly use the following strategies to improve accuracy.\\\\ \textbf{Ranger Optimizer.} Ranger~\cite{lessw2019ranger} integrates RAdam (Rectified Adam)~\cite{liu2019radam}, LookAhead~\cite{zhang2019lookahead}, and GC (gradient centralization)~\cite{yong2020gradient} into one optimizer. LookAhead can be considered an extension of Stochastic Weight Averaging (SWA)~\cite{izmailov2018averaging} to the training stage.\\\\ \textbf{Data Augmentation.} Our data augmentation method is based on~\cite{le2019pattern}. Shear, affine transformation, perspective transformation, contrast, brightness, and saturation augmentations are used in our tasks. The training loss with data augmentation is two orders of magnitude higher than without it, but the single-model accuracy on the validation set is similar. However, voting over models trained with different data augmentations achieves large performance gains compared to a single model.\\\\ \textbf{Multiple Resolutions.} The provided images in the training set are resized to $400\times 400$ by the organizer, so our baseline model uses the original $400\times 400$ images. In addition, we have tried different resolutions, e.g., $300 \times 200, 320 \times 240, 320 \times 320, 360 \times 200, 360 \times 240, 400 \times 200, 400 \times 240, 400 \times 300, 400 \times 400, 480 \times 240, 500 \times 400, 600 \times 300, 600 \times 400, 800 \times 400$. \\\\ \textbf{Synchronized Batch Normalization (SyncBN).} Compared to traditional OCR tasks, table recognition often requires longer decoding sequences and larger input images, which cause large GPU memory usage and thus lead to insufficient batch sizes. Synchronized Batch Normalization (SyncBN) is an effective type of batch normalization for multi-GPU training. Standard batch normalization normalizes the data only within each device (GPU), whereas SyncBN normalizes the input over the whole mini-batch across devices.\\\\ \textbf{Feature Concatenation of Layers in Transformer Decoder (FeaC).} Different from the original MASTER, we concatenate the outputs of the last two transformer decoder layers and apply a linear projection to the concatenated feature to obtain the last-layer feature, which is used for the final classification (see the sketch below).\\\\ \textbf{Model Ensemble.} Model ensemble is a widely used technique to improve performance on many deep learning tasks. Voting and bagging are the two main ensemble strategies. Voting: models trained with different resolutions and data augmentation methods vote on the final prediction. Bagging: the training set is sampled with replacement, sub-models are trained on the sampled subsets, and the sub-models are finally fused. \subsection{Task2: Table Content Reconstruction} In the TCR task, we also use the Ranger optimizer, SyncBN, data augmentation, feature concatenation, and model ensemble.
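To make the feature concatenation (FeaC) trick concrete, the following PyTorch sketch shows one plausible implementation; the module name and the exact placement of the projection are our illustrative choices, not the competition code.

\begin{verbatim}
import torch
import torch.nn as nn

class FeaCHead(nn.Module):
    """Concatenate the outputs of the last two transformer decoder
    layers, project back to the model dimension, and classify
    (a sketch of the FeaC trick described above)."""
    def __init__(self, d_model, vocab_size):
        super().__init__()
        self.proj = nn.Linear(2 * d_model, d_model)
        self.classifier = nn.Linear(d_model, vocab_size)

    def forward(self, layer_outputs):
        # layer_outputs: list of (batch, seq_len, d_model) tensors,
        # one per decoder layer, in order of depth.
        feat = torch.cat(layer_outputs[-2:], dim=-1)  # (B, T, 2d)
        feat = self.proj(feat)                        # (B, T, d)
        return self.classifier(feat)                  # (B, T, vocab)
\end{verbatim}

In this sketch, the projected feature simply replaces the usual last-layer decoder output before the final classification layer.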
Different from the TSR task, we need to reconstruct the structure and the content of a table image simultaneously in the form of LaTeX code. We try several different strategies in this sub-task.\\\\ \textbf{Multiple Resolutions.} We have used the following input sizes in the TCR task, e.g., $400\times 400, 440\times 400, 480\times 400, 512\times 400, 544\times 400, 600\times 400$. We discuss the influence of resolution on the final performance in the experimental section.\\\\ \textbf{Pre-train Model.} We find that TABLE2LATEX-450K~\cite{deng2019challenges} is a large-scale data set sharing the same target (table recognition to LaTeX) as this competition. However, the number of token categories in TABLE2LATEX-450K is much larger than in the competition data, and its labeling of the table structure differs slightly from this competition. Therefore, we carefully extract from TABLE2LATEX-450K only the samples containing the competition's target categories. In the end, we obtain a 58k-sample subset of TABLE2LATEX-450K as the data for our pre-trained model. \section{Experiments} \label{sec:experiments} \subsection{Metric} This competition offers several metrics for evaluation. \textbf{Exact Match Accuracy:} Fraction of predictions which exactly match the ground truth. \textbf{Exact Match Accuracy @95\% similarity:} Fraction of predictions with at least 95\% similarity to the ground truth. \textbf{Row Prediction Accuracy:} Fraction of predictions with a count of rows equal to the count of rows in the ground truth. \textbf{Column Prediction Accuracy:} Fraction of predictions with a count of cell alignment (``c'', ``r'', ``l'') tokens equal to the count of cell alignment tokens in the ground truth. \textbf{Alpha-Numeric Characters Prediction Accuracy:} Fraction of predictions which have the same alphanumeric characters as the ground truth. \textbf{LaTeX Token Accuracy:} Fraction of predictions which have the same LaTeX tokens as the ground truth. \textbf{LaTeX Symbol Accuracy:} Fraction of predictions which have the same LaTeX symbols as the ground truth. \textbf{Non-LaTeX Symbol Prediction Accuracy:} Fraction of predictions which have the same non-LaTeX symbols as the ground truth. The exact match accuracy is used for the final ranking. \subsection{Implementation Details} In the TCR training, 4 Tesla V100 GPUs are used with a batch size of 8 per GPU when the input image size is $400\times 400$; the batch size differs accordingly for other resolutions. The maximum sequence length is 500. By default, Synchronized BN~\cite{zhang2018context} and the Ranger optimizer~\cite{lessw2019ranger} are used in our experiments. The initial learning rate is 0.001 with step learning rate decay. The maximum sequence length is 250 for the TSR task; the other hyper-parameter settings for TSR training are the same as for TCR training. All models are trained with our own FastOCR toolbox, a fast and powerful framework for text detection, text recognition, and key information extraction. Motivated by MMDetection~\cite{chen2019mmdetection}, FastOCR also leverages the mechanisms of configuration and registry. We refer the reader to another of our reports~\cite{ye2021ICDAR}. \subsection{TCR} The method and tricks of our table recognition system are described above. We display some experimental results below.
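The two headline metrics defined above are simple to compute once predictions and ground truths are available as strings. The sketch below is our own illustration; in particular, the string-similarity function used for Exact Match @95\% is assumed (we use Python's difflib ratio as a stand-in, which may differ from the official measure).

\begin{verbatim}
from difflib import SequenceMatcher

def exact_match(preds, gts):
    """Fraction of predictions identical to the ground truth."""
    return sum(p == g for p, g in zip(preds, gts)) / len(gts)

def exact_match_at(preds, gts, threshold=0.95):
    """Fraction of predictions whose similarity to the ground
    truth is at least `threshold`. The similarity measure is an
    assumption; difflib's ratio serves as a stand-in here."""
    sim = lambda a, b: SequenceMatcher(None, a, b).ratio()
    return sum(sim(p, g) >= threshold
               for p, g in zip(preds, gts)) / len(gts)
\end{verbatim}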
\begin{table}[h] \centering \small \begin{tabular}{|@{\ }c@{\ }|@{\ }c@{\ }|@{\ }c@{\ }|@{\ }c@{\ }|@{\ }c@{\ }|@{\ }c@{\ }|c|c|c|c|c|c|} \hline \textbf{Ranger} & \textbf{Pre-train} & \textbf{Augment} & \textbf{SyncBN} & \textbf{FeaC} & \textbf{Ensemble} & \multicolumn{6}{c|}{\textbf{TCR metrics}} \\ \cline{7-12} & & & & & & AA & LTA & LSA & SA & EM@95\% & EM \\ \hline & & & & & & 0.8116 & 0.7026 & 0.9499 & 0.5446 & 0.6823 & 0.4710 \\ \hline \checkmark & & & & & & 0.8064 & 0.6958 & 0.9504 & 0.5378 & 0.6807 & 0.4731 \\ \hline \checkmark & \checkmark & & & & & 0.8200 & 0.7157 & 0.9535 & 0.5670 & 0.7078 & 0.4960 \\ \hline \checkmark & \checkmark & \checkmark & & & & 0.8283 & 0.7214 & 0.9540 & 0.5696 & 0.7047 & 0.5018 \\ \hline \checkmark & \checkmark & \checkmark & \checkmark & & & 0.8356 & 0.7183 & 0.9572 & 0.5602 & 0.6995 & 0.4903 \\ \hline \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & & 0.8158 & 0.7099 & 0.9561 & 0.5670 & 0.7031 & 0.4966 \\ \hline \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \textbf{0.8690} & \textbf{0.7511} & \textbf{0.9577} & \textbf{0.6207} & \textbf{0.7386} & \textbf{0.5586} \\ \hline \end{tabular} \caption{End-to-end evaluation on the TCR test data set under six indicators. AA: Alpha-Numeric Characters Prediction Accuracy, LTA: LaTeX Token Accuracy, LSA: LaTeX Symbol Accuracy, SA: Non-LaTeX Symbol Prediction Accuracy, EM: Exact Match Accuracy, EM@95\%: Exact Match Accuracy @95\% similarity.} \label{tab:endtoendEval} \end{table} \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{Resolution} & \multicolumn{6}{c|}{\textbf{TCR metrics}} \\ \cline{2-7} & AA & LTA & LSA & SA & EM@95\% & EM \\ \hline 400 $\times$ 400 & 0.8200 & 0.7157 & 0.9535 & 0.5670 & 0.7078 & 0.4960 \\ \hline 440 $\times$ 400 & 0.8356 & 0.7209 & 0.9514 & 0.5618 & 0.6979 & 0.4997 \\ \hline 480 $\times$ 400 & 0.8335 & 0.7172 & 0.9514 & 0.5712 & 0.7047 & 0.5054 \\ \hline 512 $\times$ 400 & 0.8367 & 0.7282 & 0.9530 & 0.5665 & 0.7157 & 0.5080 \\ \hline 544 $\times$ 400 & 0.8231 & 0.7271 & 0.9535 & 0.5701 & \textbf{0.7198} & 0.5096 \\ \hline 600 $\times$ 400 & \textbf{0.8377} & \textbf{0.7297} & \textbf{0.9561} & \textbf{0.5837} & 0.7146 & \textbf{0.5153} \\ \hline \end{tabular} \caption{Comparison of different resolutions on the TCR test data set under six indicators. AA: Alpha-Numeric Characters Prediction Accuracy, LTA: LaTeX Token Accuracy, LSA: LaTeX Symbol Accuracy, SA: Non-LaTeX Symbol Prediction Accuracy, EM: Exact Match Accuracy, EM@95\%: Exact Match Accuracy @95\% similarity.} \label{tab:endtoendEvalResolution} \end{table} We have conducted extensive evaluations in this competition and recorded the results. Our model is tuned on the validation set. In Table~\ref{tab:endtoendEval}, we show the results of different settings on the final test data set; the results are collected from the official website. According to the results, we have the following observations: \begin{itemize}[leftmargin=.1in] \item the pre-trained model significantly improves performance. We speculate that this is due to the relatively limited data set provided in this competition. We want to point out that the model trained on TABLE2LATEX-450K has 0\% accuracy when evaluated on the validation set of this competition, which means that the labeling formats of the two data sets are slightly different. Regardless of the 0\% accuracy, model fine-tuning helps a lot. \item data augmentation improves the performance by around 0.5\%, and feature concatenation boosts it by 0.6\%.
\item model ensemble largely improves over the single model. We observe that models trained under different settings (e.g., resolution, data augmentation) can provide complementary information. \item there is some inconsistency between the results on the validation set and on the test set. Ranger shows much better performance than Adam~\cite{kingma2014adam} on the validation set, but performs only slightly better on the test set. SyncBN shows better performance on the validation set, but seems to make the performance worse on the test set. \end{itemize} We also evaluate the influence of different resolutions on the performance. The results are shown in Table~\ref{tab:endtoendEvalResolution}. We find that larger resolutions show better performance. In the TCR task, a large resolution is important for the algorithm to recognize each character. \subsection{TSR} Similar to TCR, we evaluate different settings. The results are shown in Table~\ref{tab:tsreval}. \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \textbf{Ranger} & \textbf{SyncBN} & \textbf{Augment} & \textbf{FeaC} & \textbf{Ensemble} & \multicolumn{4}{c|}{\textbf{TSR metrics}} \\ \cline{6-9} & & & & & EM@95\% & RA & CA & EM \\ \hline \checkmark & \checkmark & & & & 0.8488 & 0.9369 & 0.8651 & 0.6922 \\ \hline \checkmark & \checkmark & \checkmark & & & 0.8365 & 0.9332 & 0.8656 & 0.6890 \\ \hline \checkmark & \checkmark & \checkmark & \checkmark & & 0.8483 & 0.9414 & 0.8674 & 0.6958 \\ \hline \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \textbf{0.8765} & \textbf{0.9496} & \textbf{0.8874} & \textbf{0.7444} \\ \hline \end{tabular} \caption{End-to-end evaluation on the TSR test data set under four indicators. EM: Exact Match Accuracy, EM@95\%: Exact Match Accuracy @95\% similarity, RA: Row Prediction Accuracy, CA: Column Prediction Accuracy.} \label{tab:tsreval} \end{table} From Table~\ref{tab:tsreval}, we observe that \begin{itemize}[leftmargin=.1in] \item feature concatenation improves the performance by around 0.7\%. \item model ensemble is also very effective for the TSR task. \end{itemize} \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Resolution} & \multicolumn{4}{c|}{\textbf{TSR metrics}} \\ \cline{2-5} & EM@95\% & RA & CA & EM \\ \hline 300 $\times$ 200 & \textbf{0.8615} & 0.9359 & \textbf{0.8806} & 0.7058 \\ \hline 320 $\times$ 320 & 0.8502 & 0.9337 & 0.8715 & \textbf{0.7063} \\ \hline 360 $\times$ 240 & 0.8556 & 0.9364 & 0.8710 & 0.7044 \\ \hline 400 $\times$ 200 & 0.8502 & 0.9314 & 0.8697 & 0.6985 \\ \hline 400 $\times$ 300 & 0.8597 & 0.9400 & 0.8724 & 0.7035 \\ \hline 400 $\times$ 400 & 0.8483 & \textbf{0.9414} & 0.8674 & 0.6958 \\ \hline 480 $\times$ 240 & 0.8511 & 0.9355 & 0.8669 & 0.6922 \\ \hline 500 $\times$ 400 & 0.8352 & 0.9291 & 0.8669 & 0.6804 \\ \hline 600 $\times$ 400 & 0.8361 & 0.9278 & 0.8633 & 0.6849 \\ \hline 800 $\times$ 400 & 0.8025 & 0.9187 & 0.8647 & 0.6463 \\ \hline \end{tabular} \caption{Comparison of different resolutions on the TSR test data set under four indicators. EM: Exact Match Accuracy, EM@95\%: Exact Match Accuracy @95\% similarity, RA: Row Prediction Accuracy, CA: Column Prediction Accuracy.} \label{tab:tsrevalResolution} \end{table} In Table~\ref{tab:tsrevalResolution}, we report the results of TSR under different resolutions. We can see that, different from the TCR task, smaller resolutions perform better than larger ones for the TSR task.
This is because a large resolution allows the model to clearly distinguish the text content, while a small resolution lets the model focus on the structure of the table. \textbf{Discussion.} We offer some reflections on this competition: \begin{itemize}[leftmargin=.1in] \item the data set of this competition is relatively small compared to those of some similar tasks~\cite{zhong2019image,Antonio2021ICDAR}. We believe that with an increase in data scale, the performance of the algorithm will improve dramatically. \item as shown in Figure~\ref{fig:wrongTCR}, some errors are caused by inconsistent data annotations. Classical inconsistencies include the presence or absence of ``\{'' and ``\}'', the presence or absence of ``\textbackslash small'', confusion between ``\textbackslash textbf'' and ``\textbackslash mathbf'', confusion between ``\textbackslash em'' and ``\textbackslash emph'', and so on. \item as shown in Figure~\ref{fig:wrongTSR}, the most difficult part of the TSR task is to predict the head part (the structure part) of the table. \item the official images are resized to $400 \times 400$, which may cause heavy distortion and thus seriously affect the performance of table content recognition. We believe that a padding-based pre-processing method is more suitable for these two sub-tasks. \end{itemize} \section{Conclusion} \label{sec:conclusion} In this paper, we present our solution for the ICDAR 2021 Competition on Scientific Table Image Recognition to LaTeX. Our model is based on MASTER. We optimize the MASTER model from several perspectives: network structure, optimizer, normalization method, pre-trained model, resolution of the input image, data augmentation, and model ensemble. Our method achieves 0.7444 Exact Match and 0.8765 Exact Match @95\% on the TSR task, and obtains 0.5586 Exact Match and 0.7386 Exact Match @95\% on the TCR task. \begin{figure}[t] \centering \includegraphics[width=0.75\textwidth]{images/wrongTCR.png} \caption{An example of wrong table content prediction.} \label{fig:wrongTCR} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.75\textwidth]{images/wrongTSR.png} \caption{An example of wrong table structure prediction.} \label{fig:wrongTSR} \end{figure} \bibliographystyle{unsrt}
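As a final addendum, the voting ensemble described in Section~\ref{sec:methodsection} can be sketched as follows. The paper does not specify the voting granularity or the tie-breaking rule, so this sketch assumes whole-sequence voting with ties resolved in favor of the first model (presumed to be the strongest single model); it is an illustration, not the competition code.

\begin{verbatim}
from collections import Counter

def ensemble_vote(model_outputs):
    """model_outputs: list over models, each a list of predicted
    LaTeX strings (one per test image). Returns, for each image,
    the most common prediction; when no prediction is repeated,
    the first model's output is kept (tie-breaking assumed)."""
    n_models = len(model_outputs)
    n_samples = len(model_outputs[0])
    voted = []
    for i in range(n_samples):
        candidates = [model_outputs[m][i] for m in range(n_models)]
        pred, count = Counter(candidates).most_common(1)[0]
        voted.append(pred if count > 1 else candidates[0])
    return voted
\end{verbatim}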
{ "timestamp": "2021-05-06T02:09:00", "yymm": "2105", "arxiv_id": "2105.01846", "language": "en", "url": "https://arxiv.org/abs/2105.01846" }
\section{Introduction} Artificial intelligence (AI) for social good, hereafter AI4SG, has received growing attention across academia and industry. Countless research groups, workshops, initiatives, and industry efforts tout programs to advance computing and AI ``for social good.'' Work in domains from healthcare to conservation has been brought into this category \cite{shi2020artificial,wang2019ai,kwok2019ai}. We, the authors, have ourselves pursued AI4SG work as computer science and philosophy researchers. Despite the rapidly growing popularity of AI4SG, social good has a nebulous definition in the computing world and elsewhere \cite{green2019good}, making it unclear at times what work ought to be considered social good. For example, can a COVID-19 contact tracing app be considered to fall within this lauded category, even with privacy risks? Recent work has begun to dive into this question of defining AI4SG \cite{floridi2020design,madaio2020co,floridi2018ai4people}; we will explore these efforts in more depth in Section~\ref{sec:critique}. However, we point out that context is critical, so no single tractable set of rules can determine whether a project is ``for social good.'' Instead, whether a project may bring about social good must be determined by those who live within the context of the system itself; that is, the community that it will affect. This point echoes recent calls for decolonial and power-shifting approaches to AI that focus on elevating traditionally marginalized populations \cite{mohamed2020decolonial,kalluri2020don,lewis2020indigenous,mhlambi2020from,birhane2020towards,whittaker2019disability}. \begin{figure} \centering \includegraphics[width=.7\columnwidth]{figures/pact.pdf} \caption{The framework we propose, Participatory Approach to enable Capabilities in communiTies (PACT), melds the capabilities approach (see Figure~\ref{fig:capabilities}) with a participatory approach (see Figures \ref{fig:participatory} and \ref{fig:guiding-principles}) to center the needs of communities in AI research projects.} \label{fig:pact} \end{figure} The community-centered, context-specific conception of social good that we propose raises its own questions, such as how to reconcile multiple viewpoints. We therefore address these concerns with an integrated framework, called the \textbf{P}articipatory \textbf{A}pproach to enable \textbf{C}apabilities in communi\textbf{T}ies, or PACT, that allows researchers to assess ``goodness'' across different stakeholder groups and different projects. We illustrate PACT~in Figure~\ref{fig:pact}. As part of this framework, we first suggest ethical guidelines rooted in capability theory to guide such evaluations \cite{sen1999development,nussbaum2000feminism}. We reject a view that favors courses of action solely for their aggregate net benefits, regardless of how they are distributed and what resulting injustices arise. Such an additive accounting may easily err toward favoring the values and interests of majorities, excluding traditionally underrepresented community members from the design process altogether. Instead, we employ the \emph{capabilities approach}, designed to measure human development by focusing on the substantive liberties that individuals and communities enjoy to lead the kind of lives they have reason to value \cite{sen1999development}.
A capability-focused approach to social good is aimed at increasing opportunities for people to achieve combinations of ``functionings''---that is, combinations of things they may find valuable doing or being. While common assessment methods might overlook new or existing social inequalities as long as some measure of ``utility'' increases enough in aggregate, our approach would only define an endeavor as contributing to social good if it takes concrete steps toward empowering all members of the affected community to each enjoy the substantive liberties to function in the ways they have reason to value. We then propose to enact this conception of social good with a \emph{participatory approach} that involves community members in ``a process of investigating, understanding, reflecting upon, establishing, developing, and supporting mutual learning between multiple participants'' \cite{simonsen2012routledge}, in order for the community itself to define what those substantive liberties and functionings should be. In other words, communities define social good in their context. Our contributions are therefore (i)~arguing that the capabilities approach is a worthy candidate for conceptualizing social good, especially in diverse-stakeholder settings (Section~\ref{sec:capabilities}), (ii)~highlighting the role that AI can play in expanding and equalizing capabilities (Section~\ref{sec:AIandCapabilities}), (iii)~explaining how a participatory approach is best suited to identifying desired capabilities (Section~\ref{sec:participatory}), and (iv)~presenting and discussing our proposed guiding principles of a participatory approach (Section~\ref{sec:guiding-principles}). These contributions come together to form PACT. \section{Growing Criticisms of AI for Social Good} \label{sec:critique} As a technical field whose interactions with social-facing problems are young but monumental in impact, the field of AI has yet to fully develop a moral compass. On the whole, the subfield of AI for social good is not an exception. We highlight criticisms that have arisen against AI4SG research, which serve as a call to action to reform the field. Later, we argue that a participatory approach rooted in enabling capabilities will provide needed direction to the field by letting the affected communities---particularly those who are most vulnerable---be the guide. Recent years have seen calls for AI and computational researchers to more closely engage with the ethical implications of their work. \citet{green2018data} implores researchers to view themselves and their work through a political lens, asking not just how the systems they build will impact society, but also how even the problems and methods they choose to explore (and \textit{not} explore) serve to normalize the types of research that ought to be done. \citet{latonero2019} offers a critical view of technosolutionism as it has recently manifested in AI4SG efforts emerging from industry, such as Intel's TrailGuard AI, which detects poachers in camera trap images that have the potential to individually identify a person. \citeauthor{latonero2019} argues that while companies may have good intentions, they often lack the ability to gain the expertise and local context required to tackle complex social issues. In a related spirit, \citet{blumenstock2018don} urges researchers not to forget ``the people behind the numbers'' when developing data-driven solutions, especially in development contexts.
\citet{de2018machine} focus on a specific subset of AI4SG dubbed machine learning for development (ML4D) and similarly express the importance of considering local context to ensure that researcher and stakeholder goals are aligned. Also manifesting recently are meta-critiques of AI4SG specifically, which contend that the subfield is vaguely defined, with troubling implications. \citet{moore2019ai} focuses specifically on how the choice of the word ``good'' can serve to distract from potentially negative consequences of the development of certain technologies, arguing that AI4SG should be re-branded as AI for ``not bad''. \citet{whatisErin2019} argues that AI4SG's imprecise definition hurts its ability to progress as a discipline, since the lack of clarity around what values are held or what progress is being made hinders the ability of the field to establish specific expertise. \citet{green2019good} points out that AI4SG is sufficiently vague to encompass projects aimed at police reform as well as predictive policing, and therefore simply lacks meaning. \citeauthor{green2019good} also argues that AI4SG's orientation toward ``good'' biases researchers toward incremental technological improvements to existing systems and away from larger reformative efforts that could accomplish more. Others have gone further, arguing that ``good'' AI should seek to shift power to the traditionally disadvantaged. \citet{mohamed2020decolonial} put forth a decolonial view of AI that suggests that AI systems should be built specifically to dismantle traditional forms of colonial oppression. They provide examples of how AI can perpetuate colonialism in the digital age, such as through algorithmic exploitation via Mechanical Turk--style ``ghost work'' \cite{gray2019ghost} or through algorithmic dispossession, in which disadvantaged communities are designed \emph{for} without being allowed a seat at the design table. They also offer three ``tactics'' for moving toward decolonial AI, namely: (1)~a critical technical practice to analyze whether systems promote fairness, diversity, safety, and mechanisms of anticolonial resistance; (2)~reciprocal engagements that engender co-design between affected communities and researchers; and (3)~a change in attitude from benevolence to solidarity, which again necessitates active engagement with communities and grassroots organizations. In a similar spirit, \citet{kalluri2020don} calls for researchers to critically analyze the power structures of the systems they design for and to consider pursuing projects that empower the people they are intended to help. For example, researchers may seek to empower those represented in the data that enables the system, rather than the decision-maker who is privileged with a wealth of data. This could be accomplished by designing systems that allow users to audit or demand recourse based on an AI system's decision \cite{kalluri2020don}. In tandem with these criticisms, there have been corresponding efforts to define AI4SG. \citet{floridi2018ai4people} provide a report of an early initiative to develop guidelines in support of AI for good. Therein, they highlight risks and opportunities for such systems, outline core ethical principles to be considered, and offer several recommendations for how AI efforts can be given a ``firm foundation'' in support of social good.
\citet{floridi2020design} later expand on this work by proposing a three-part account, which includes a definition, a set of guiding principles, and a set of ``essential factors for success.'' They define AI4SG as ``the design, development, and deployment of AI systems in ways that (i)~prevent, mitigate, or resolve problems adversely affecting human life and/or the well-being of the natural world, and/or (ii)~enable socially preferable and/or environmentally sustainable developments.'' Note that this disjunctive definition captures a broad spectrum of projects with widely diverging outcomes, still leaving open the critiques discussed above. However, their principles and essential factors contribute to establishing rough guidelines for projects in the field, as others have done \cite{tomavsev2020ai,madaio2020co} in lieu of seeking a definition. \section{The Capabilities Approach}\label{sec:capabilities} Clearly, AI for social good is in need of a stricter set of ethical criteria and stronger guidance. To understand what steps take us in the direction of a more equitable, just, and fair society, we must find the right conceptual tools and frameworks. Utilitarian ethics, a widely referenced framework, adopts the aggregation of utility (broadly understood) as the sole standard to determine the moral value of an action \cite{timmons2013moral,robichaud2005great}. According to classic utilitarianism, the right action to take in any given context is whichever action maximizes utility for society. This brand of utilitarianism seems particularly attractive in science-oriented circles, due to its claim that a moral decision can be made by simply maximizing an objective function. For example, in the social choice literature, \citet{bogomolnaia2005collective} argue for the use of utilitarianism as ``it is efficient, strategyproof and treats equally agents and outcomes.'' However, the apparent transparency of the utilitarian argument obscures its major shortcomings. First, like the problem of class imbalance in machine learning, a maximizing strategy will bias its objective towards the majority class. Hence, effects on minority and marginalized groups may be easily overlooked. Second, which course of action will maximize utility for society overall cannot be determined by simple cost--benefit analysis. Any such procedure involves serious trade-offs, and a moral decision requires that we acknowledge tensions and navigate them responsibly. To take a recent example, the development of digital contact tracing apps in the wake of the COVID-19 pandemic represented a massive potential benefit for public health, yet posed a serious threat to privacy rights and all the values that privacy protects---including freedom of thought and expression, personal autonomy, and democratic legitimacy. Which course of action will maximize utility? The answer is not merely controversial: what verdict is reached will necessarily depend on what fundamental choices and liberties individuals value most, and which they are willing to forgo. Furthermore, for some groups the losses may be more significant than for others, and this fact requires due consideration. A related, though distinct, standard might put aside the ambition of maximizing good and adopt some threshold condition instead. As long as it generates a net improvement over the \emph{status quo}, someone might say, an action can be said to promote social and public good.
This strategy raises three additional problems: first, utility aggregates are insensitive to allocation and distribution; second, they are not sensitive to individual liberties and fundamental rights; and third, to the extent that they consider stakeholders' preferences, they do not take into account the idea that individuals' preferences depend on their circumstances, and that in better circumstances individuals might have different preferences \cite{sen1999development}. Why should this matter? \emph{Because attempts to increase utility may be too easily signed off as progress while inflicting serious harm and contributing to the entrenchment of existing inequality.} For this reason, we believe that respect for liberty, fairness of distribution, and sensitivity to interpersonal variations in utility functions should serve as moral constraints, to channel utility increments towards more valuable social outcomes. Operating within these constraints suggests shifting the focal point from utility to a different standard. In the spirit of enabling capabilities, we suggest that such a standard should be based on an understanding of the kinds of lives individuals have reason to value, an understanding of how the distribution of resources in combination with environmental factors may create (or eliminate) opportunities for them to actualize these lives, and the design of projects that create fertile conditions for these opportunities to be secured. This reorientation of the moral assessment framework is in line with a view of AI as a tool to shift power to individuals and groups in situations of disadvantage, as advocated by \citet{kalluri2020don}. What this shift means, in operational terms, is not yet fully elucidated. We therefore hope to contribute to this endeavor by exploring conceptual tools that may guide developers in the process, providing intermediate landmarks. We consider ``capabilities'' to be suitable candidates for this task, as they constitute a step in the direction of empowering disadvantaged individuals to decide what, in their view, that shift should consist of. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/hdi_map.png} \caption{The United Nations Human Development Index (HDI) was inspired by the capabilities approach, which serves as the foundation of the participatory approach that we propose. The above map from Wikimedia Commons shows the HDI rankings of countries around the world, where darker color indicates a higher index.} \label{fig:hdi} \end{figure} \begin{figure*} \centering \includegraphics[width=.9\textwidth]{figures/capabilities.pdf} \caption{AI for social good research projects can expand and equalize capabilities by fostering individuals' internal characteristics, providing external resources, or altering the wider material and social environment, thus creating capability sets to improve social welfare and fight injustice.} \label{fig:capabilities} \end{figure*} The capabilities approach, which stands at the intersection of ethics and development economics, was originally proposed by Amartya Sen and Martha Nussbaum \cite{sen1999development,nussbaum2011creating}. The approach focuses on the notion that all human beings should have a set of substantive liberties that allow them to function in society in the ways they choose \cite{nussbaum2011creating}. These substantive liberties include both freedom from external obstruction and the external conditions conducive to such functioning.
The \textit{capability set} includes all of the foundational abilities a human being ought to have to flourish in society, such as freedom of speech, thought, and association; bodily health and integrity; and control over one's political and material environment; among others. The capabilities approach has inspired the creation of the United Nations Human Development Index (Figure~\ref{fig:hdi}), marking a shift away from utilitarian evaluations such as gross domestic product to people-centered policies that prioritize substantive capabilities \cite{ul2003birth}. A capability-oriented AI does not rest content with increasing aggregate utility and respecting legal rights. It asks what it is doing, and how it interacts with existing social realities, to enhance or reduce the opportunities of the most vulnerable members of society to pursue the lives they have reason to value. It asks whether the social, political, and economic environment that it contributes to creating deprives individuals of those capabilities or fosters the conditions in which they may enjoy equal substantive liberties to flourish. In this framework, a measure of social good is how much a given project contributes to bolstering the enjoyment of substantive liberties, especially by members of marginalized groups. This kind of measure is respectful of individual liberties, sensitive to questions of distribution, and responsive to interpersonal variations in utility. Before discussing the relation between AI and capabilities, we would like to dispel a potential objection to our framework. The capabilities approach has been criticized for not paying sufficient attention to groups, institutions, and social structures, thus rendering itself unable to account for power imbalances and dynamics \cite{hill2003development,koggel2003globalization}. This characteristic would make the approach unappealing for a field that seeks to address injustices in the distribution of power. However, the approach does indeed place substantial emphasis on conceptualizing the factors that affect (particularly those which may enhance) individuals' power. Its emphasis on substantive opportunities is itself a way of conceptualizing what individuals have the power to do or to be. These powers are determined in large part by the broader social environment, which includes group membership, inter-group relations, institutions, and social practices. Furthermore, the approach explicitly focuses on enhancing the power of the most vulnerable members of society. In fact, the creation and enhancement of capabilities constitute intermediate steps on the way towards shifting power, promoting the welfare, and broadening the liberties of those who have been historically deprived. For this reason, it provides a fruitful framework to assess genuine moral progress and a promising tool for AI projects seeking to promote social good. Discussion of the capabilities approach has only recently begun inside the AI community. \citet{moore2019ai} references the capabilities approach to call for greater accountability and individual control of private data. \citet{coeckelbergh2010health} proposes taking a capabilities approach to health care, particularly when using AI assistance to replace human care, but focuses on Nussbaum's proposed set of capabilities rather than eliciting desired capabilities from affected patients.
We build significantly on these claims by discussing the potential of AI to enhance capabilities and by arguing that the capabilities approach goes hand-in-hand with a participatory approach. Two papers specifically ground their work in the capabilities approach; we highlight these as exemplars for future work in AI4SG. \citet{thinyane2019apprise} invoke the capabilities approach, specifically to empower the agency of marginalized groups, to motivate their development of a mobile app to identify victims of human trafficking in Thai fisheries. \citet{kumar2020taking} conduct an empirical study of women's health in India through the lens of Nussbaum's central human capabilities. \section{AI and Capabilities}\label{sec:AIandCapabilities} The capabilities approach can help AI researchers and developers assess the potential impact of projects and interventions along two dimensions of social progress: capability expansion and distribution. This section argues that, when they interact productively with other social factors, AI projects can contribute to equalizing and enhancing capabilities---and that \emph{therein} lies their potential to bring about social good. For instance, an AI-based Visual Question Answering system can enhance the capabilities of the visually impaired to access relevant information about their environment \cite{kim2019korean}. When considering equalizing and enhancing capabilities, it is important to notice that whether an individual or group enjoys a set of capabilities is not solely a matter of having certain personal characteristics or being free from external obstruction. External conditions must also be conducive to enabling individuals' choices among the set of alternatives that constitutes the capability set \cite{nussbaum2011creating}. This is why these liberties are described as \textit{substantive}, as opposed to formal or negative. Hence, capabilities, or substantive liberties, are composed of (1)~individuals' personal characteristics (including skills, conditions, and traits), (2)~external resources they have access to, and (3)~the configuration of their wider material and social environment \cite{johnstone2007technology}. Prior work at the intersection of technology and capabilities has addressed the potential of technology to empower users by enhancing their capabilities and choice sets. \citet{johnstone2007technology} proposes the capabilities approach as a key framework for computer ethics, suggesting that technological objects may serve as tools or external resources that enhance individuals' range of potential action and choice \cite{cohen2012configuring,kleine2013technologies}. This is an important role that AI-based technologies may play in enhancing capabilities: providing tools for geographic navigation, efficient communications, accurate health diagnoses, and climate forecasts. Technology may also foster the development of new skills, abilities, and traits that broaden an individual's choice sets. We diagram in Figure~\ref{fig:capabilities} the effects of an AI4SG project on an individual's characteristics, resources, and environment, and thus the potential of AI to alter (both positively and negatively) capability sets. Technological objects may also become part of social structures, forming networks of interdependencies with people, groups, and other artifacts \cite{oosterlaken2015technology,winner1980artifacts}.
When focusing on AI-based technologies, it is crucial to also acknowledge their potential to affect the social and material environment, which may render it more or less conducive to securing capability sets for individuals and communities. For example, AI-based predictive policing may increase the presence of law enforcement agents in specific areas, which may in turn affect the capability sets of community members residing in those areas. If this presence increases security from, say, armed robbery, community members may enjoy greater liberties than they previously did. And yet, if this acts to reinforce the disparate impact of law enforcement on members of vulnerable communities, along with other inequalities, then it may not only diminish the substantive liberties of those individuals impacted, but also alter the way in which such liberties are distributed across the population. Assessing this kind of trade-off ought to be a crucial step in evaluating the ability of a particular project to promote social good. This assessment must be aligned with the kinds of choices and opportunities community members have reason to pursue \citep{sen2017collective}. A project should not be granted the moral standing of being ``for social good'' if it leads to localized utility increments at the expense of reducing the ability of members of the larger community to choose the lives they value. Most importantly, this assessment must be made by community members themselves, as the following section argues. \section{Community Participation} \label{sec:participatory} As \citet{azra2021ai} contend, communities should ultimately be the ones to decide whether and how they would like to use AI. If the former condition is met and the community agrees that an AI solution may be relevant and useful, the latter requires the inclusive design of an AI system through a close and continued relationship between the AI researcher and those impacted. This close partnership is particularly important as the gross effects of AI-based social interventions on communities' capabilities are unlikely to be exclusively positive. Tradeoffs may be forced on designers and stakeholders; some are likely to be intergroup, and some, intragroup. For this reason, it is only consistent with the proposed approach that designers consult stakeholders from all groups on their preferred ways to navigate such tradeoffs. In other words, if PACT~is focused on creating capabilities, it must enable impacted individuals to have a say on what alternatives are opened (or closed) to them. Moreover, AI projects that incorporate stakeholders' choices into their design process may help create what \citet{wolff2007disadvantage} refer to as ``fertile functionings.'' That is, functionings that, when secured, are likely to secure other functionings. Fertile functionings include, though are not limited to, the ability to work and the ability to have social affiliations. These are the kinds of functionings that either enable other functionings (e.g., control over one's environment) or reduce the risk associated with them. Projects in AI that create propitious environments and enable individuals to make decisions over the capability sets they value in turn give those individuals the capability to function in a way that leads to the creation of other capabilities. If this kind of participatory space is offered to those who are the most vulnerable, AI could plausibly act against disadvantage and contribute to shifting the distribution of power.
In this way, the PACT~framework is aligned with the principles of Design Justice in prioritizing the voices of affected communities and viewing positive change as the result of an accountable collaborative process \cite{2020Introduction}. The PACT~framework is also committed to the notion that, when applying capabilities to AI, we must include all stakeholders, especially vulnerable members of society. But how do we accomplish this concretely, and how does this affect the different steps in an AI4SG research project? In the following section, we will first introduce suggested mechanisms of a participatory approach rooted in capabilities, and then discuss how these come into play in (1)~determining which capability sets to pursue at the beginning of a project, and (2)~evaluating the success of an AI4SG system in terms of its impact, particularly on groups' sets of substantive liberties. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/diagram.pdf} \caption{Our proposed approach to AI4SG projects, where the key is that stakeholder participation is centered throughout. We do not explicitly comment on the design and development of the project other than calling for the inclusion of community participation and iterative evaluation of success, in terms of whether the project realized the desired capability sets.} \label{fig:participatory} \end{figure} The approach we propose is diagrammed in Figure~\ref{fig:participatory}, which embeds community participation into each stage. By beginning with defining capability sets, we resist the temptation to immediately apply established tools to address ingrained social issues, which would only restrict the possibilities for change \cite{lorde1984master}. These participatory approaches constitute bottom-up approaches for embedding values into AI systems: values are learned from human interaction and engagement rather than decided through the centralized and non-representative lens of AI researchers \cite{liao2019enabling}. \section{Guiding Principles for a Participatory Approach} \label{sec:guiding-principles} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/guiding_principles.pdf} \caption{Guiding principles for the participatory approach we propose. The key message is to center the community's desired capabilities throughout, and to ensure that the goals of the project are properly aligned to do so without negatively impacting other capabilities.} \label{fig:guiding-principles} \end{figure*} To direct the participatory approach we propose, we provide the following set of guiding questions, outlined in Figure~\ref{fig:guiding-principles} and elaborated on below. These questions are specifically worded to avoid binary answers of yes or no. When vaguely stated requirements are put forth as list items to check off, there is a risk of offering a cheap seal of approval on projects that are not sufficiently assessed in terms of their ethical and societal implications. Instead, we expect that in nearly all cases, none of these questions will be fully resolved. Thus, these questions are meant to serve as a constant navigation guide to help researchers regularly and critically reassess their intended work. We begin our discussion of how a participatory approach to AI4SG would look with our first guiding question: \smallskip \noindent \textit{How are impacted communities identified and how are they represented in this process?
Who represents historically marginalized groups?} \noindent This question is one of the most important and potentially difficult. As such, we encourage readers to research their domain of interest and seek partners as an important first step, but we offer some advice from our experience. We often consider seeking partners at non-profits, community-based organizations, and/or \mbox{(non-)}governmental organizations (NGOs), and have even had some experience with non-profits or NGOs finding us. Finding these groups as an AI researcher may be done with the help of an institution's experts on partnerships (e.g., university or company officials focused on forming external partnerships), prior collaborators or acquaintances, discussions with peers within an institution in different departments (e.g., public health or ecology researchers), or ``cold emails.'' \citet{young2019toward} provide further guiding questions for finding these partners, particularly in their companion guide \cite{lassanaGuide}. In any case, some vetting and research is an important step. AI researchers may also co-organize workshops, such as a pair of ``AI vs.~tuberculosis'' workshops held by some of the authors in Mumbai, India, which brought together domain experts, non-profits, state and local health officials, industry experts, and researchers, with participants identified by Mumbai-based collaborators who specialize in building relationships across health-focused organizations in India. The varied backgrounds of the stakeholders at these workshops were conducive to quickly identifying areas of greatest need across the spectrum of tuberculosis care in India, many of which would not have been considered by AI researchers alone. Further, these group forums sparked conversations between stakeholders who would often not otherwise communicate, conversations that led to initial ideas for solutions. This highlights that such approaches are often most successful when AI researchers move out of their usual spheres and into the venues of the domains that their projects impact. At the inception of a project, the designers and initial partners should together identify all relevant stakeholders that will be impacted by the proposed project, bringing them on as partners or members of a (possibly informal) advisory panel. Every effort should be made to include at least one member from each impacted group. If stakeholders were initially omitted and are later identified, they should then be added. Initial consultation with panel members and partners should be carried out in a way that facilitates open and candid discussion to ensure all voices are represented and accounted for during the project design phase \cite{young2019toward}. We additionally propose that during the lifetime of the project, the panel and partners should, if possible, be available for ongoing consultation at mutually agreed upon checkpoints to ensure the project continues to align with stakeholder values. Similar practices have been suggested by \citet{perrault2020ai}, who advocate for close, long-term collaborations with domain experts. Likewise, the Design Justice network pursues non-exploitative and community-led design processes, in which designers serve more as facilitators than experts. This framework is modeled on the idea of a horizontal relation in which affected communities are given the role of domain experts, given their lived experience and direct understanding of projects' impact \cite{2020Introduction}.
\smallskip \noindent \textit{How are plans laid out for maintaining and sustaining this work in the long-term, and how would the partnership be ended?} \noindent We believe partnership norms should be established at the beginning of the partnership, including ensuring that the community representatives and experiential experts have the power to end the project and partnership, in addition to the designers and other stakeholders. As noted by \citet{madaio2020co}, this should also necessitate an internal discussion amongst the research team to decide what, if any, criteria would be grounds for such a cessation, e.g., if it becomes clear that the problem at hand requires methods in which the research team does not have sufficient expertise. Discussions with the broader partners should also lay out the expected time scale and potential benefits and risks for all involved. Once the relationship is underway, it should be maintained as discussed, with regular communication. When a project reaches the point where a potential deployment is possible, we advocate for an incremental deployment, such as the example of Germany's \textit{Teststrecken} for self-driving cars, which set constraints for autonomy and then expand these constraints once certain criteria are met \cite{floridi2020design}. \smallskip \noindent \textit{What kind of compensation are stakeholders receiving for their time and input? Does that compensation respect them as partners?} \noindent Stakeholders must be respected as experiential or domain experts and be considered as partners. They must be compensated in some way, whether monetarily or otherwise. We advocate for compensation to avoid establishing an approach made under the guise of participatory AI that ends up being extractive \cite{pain2003reflections}. One notable drawback to a participatory approach is the potential for these community interactions to become exploitative if not done with intention, compensation, or long-term engagement \cite{sloane2020participation}. Collaborators who have significantly influenced the design or implementation of a project ought to be recognized as coauthors or otherwise acknowledged in resulting publications, presentations, media coverage, etc. \smallskip \noindent \textit{How can we understand and incorporate viewpoints from many groups of stakeholders?} \noindent Surveys or voting may seem a natural choice. However, a simple voting mechanism is risky, as it may find that a majority of the community favors, for example, building a heavily polluting factory near a river, while the impacted community living at the proposed site would object that this factory would severely degrade their quality of life. These concerns from the marginalized group must be given due weight. This emphasis on the welfare of marginalized groups is based on the premise that the evaluation of human capabilities must consider \emph{individual capabilities} \cite{nussbaum2000feminism}. In short, all individual capabilities are valuable as ends in their own right; they should never be considered means to someone else's rights or welfare. We must, therefore, guard against depriving individuals of basic entitlements as a means to enhancing overall welfare. Hence, we endorse a deliberative approach, which aims to uncover ``overlapping consensus'' \cite{rawls1971theory}.
This approach is based on the expectation that, by allowing diverse worldviews, value systems, and preference sets to engage in conversation, with appropriate moderation and technical tools, social groups may find a core set of decisions that all participants can reasonably agree with. Deliberative approaches to democracy have been operationalized by programs such as vTaiwan, which uses digital technology to inform policy by building consensus through civic engagement and dialogue \cite{hsiao2018vtaiwan}. vTaiwan uses the consensus-oriented voting platform Pol.is, which seeks to identify a set of common values upon which to shape legislation, rather than arbitrating between polarized sides \cite{tang2020inside}. Similarly, OPPi serves as a platform for consensus building through bottom-up crowd-sourcing \cite{oppi}. This tool is tailored for opinion sharing and seeking, oriented towards finding common ground among stakeholders. An alternative to fully public deliberative processes is offered by targeted efforts such as Diverse Voices panels \cite{young2019toward}, which focus on including traditionally marginalized groups in existing policy-making processes. Specifically, they advocate for informal elicitation sessions with partners, asking questions such as, ``What do you do currently? What would support your work?'' \cite{katell2020toward}. They also suggest considering whether tech should be used in the first place and highlight that a key challenge of participatory design is to determine a course of action if multiple participants disagree. We add that it remains a challenge to assemble these panels. \citet{simonsen2012routledge} provide multiple strategies as well, particularly with the goal of coming to a common language and fostering communication between groups of experts from different backgrounds. They suggest strategies to invite discussion, such as games, acting out design proposals, storytelling, group brainstorms of what a perfect utopian solution would look like, participatory prototyping, and probing. In the case of probing, for example, one strategy was to provide participants with a cultural probe kit, consisting of items such as a diary and a camera, in order to understand people's reactions to their environments \cite{gaver1999design}. Further, several fields in artificial intelligence have devoted a great deal of thought to learning and aggregating preferences: preference elicitation \cite{chajewska2000making}, which learns agent utility functions; computational social choice \cite{brandt2016handbook}, which deals with truthful agents; and mechanism design \cite{shoham2008multiagent}, which deals with strategic agents. As an example, \citet{kahng2019statistical} form what they call a ``virtual democracy'' for a food rescue program. They collect data about preferences on which food pantry should receive food, then use these data to create virtual voters whose preferences are aggregated. The goal is to make ethical decisions without needing to reach out to stakeholders each time a food distribution decision needs to be made. These fields have highlighted theoretical limitations in preference aggregation, such as Arrow's impossibility theorem \cite{arrow1950difficulty}, which reveals fundamental limitations in aggregating ranked preferences over as few as three alternatives, but these results do not necessarily inhibit us from designing good systems in practice \cite{sen1999possibility,maskin2014arrow}.
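To make the aggregation step concrete, we sketch one classical rule from computational social choice, the Borda count, applied to stakeholder rankings of candidate project priorities. This is a minimal illustration of our own, not the mechanism used by any of the systems cited above, and the options and rankings in it are invented for the example. \begin{verbatim}
# A minimal Borda count sketch (illustrative only; the options and
# rankings below are hypothetical, not data from any cited system).
from collections import defaultdict

def borda_count(rankings):
    """An option ranked at position k among m options earns m - 1 - k
    points from that stakeholder; options are sorted by total score."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += m - 1 - position
    return sorted(scores.items(), key=lambda kv: -kv[1])

stakeholder_rankings = [
    ["clean water", "clinic access", "internet access"],
    ["clinic access", "clean water", "internet access"],
    ["internet access", "clean water", "clinic access"],
]
print(borda_count(stakeholder_rankings))
# [('clean water', 4), ('clinic access', 3), ('internet access', 2)]
\end{verbatim} Even a simple positional rule like this aggregates away the intensity of minority preferences, which is precisely why the deliberative and panel-based mechanisms discussed above remain necessary complements to any automated aggregation.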
Such areas provide rich directions for future research, particularly at the intersection of participatory methods and social science research. \smallskip \noindent \textit{What specific concerns are raised during the deliberative process, and how are these addressed?} \noindent Diverse Voices panels \cite{young2019toward} with experiential experts (both from non-profits and from within the community) may also be used to identify potential issues by asking questions such as ``What mistakes could be made by decision makers because of how this proposal is currently worded?'' and ``What does the proposal not say that you wish it said?'' Other strategies we discussed for understanding multiple viewpoints may also have a role to play here when tailored towards sharing concerns. However, we also stress the importance of ongoing partnerships with impacted community members beyond just an initial session. By ensuring that the community has a voice throughout the project's lifetime, their values are always kept front-and-center. As others have argued, it is not sufficient to interview impacted communities once simply to ``check the participatory box'' \cite{sloane2020participation}. \subsection{Determining Capability Sets} Now, equipped with some guiding questions and concrete examples of a participatory approach, we will apply these principles directly to selecting capability sets at the outset of an AI4SG project. To identify what capabilities to work towards, some scholars such as \citet{nussbaum2000feminism} have proposed well-defined sets of capabilities and argued that society ought to focus on enabling those capabilities. However, we endorse the view that the selection of an optimal set of capabilities should be based on community consensus \cite{sen1999development}. We argue that an AI project can only be for social good if it is responsive to the values of the communities affected by the AI system. \smallskip \noindent \textit{What functionings do members of the various stakeholder groups wish they could achieve through the implementation of the project?} \noindent In the fair machine learning literature, \citet{martin2020participatory} propose a participatory method called community-based system dynamics (CBSD) to bring stakeholders in to help formulate a machine learning problem and mitigate bias. Their method is designed to understand causal relationships, specifically feedback loops, particularly those in high-stakes environments such as health care or criminal justice, that may disadvantage marginalized and vulnerable groups. This process is intended to bring in relevant stakeholders and recognizes that their lived experience makes them better qualified to anticipate the effects of these interventions. Using visual diagrams designed by impacted community members, the CBSD method can help identify levers that may enable or inhibit functionings, particularly for those who are most vulnerable. Similarly, \citet{simonsen2012routledge} suggest the use of mock-ups and prototypes to facilitate communication between experiential experts and developers. Other strategies discussed for understanding multiple viewpoints may also apply if tailored towards determining capabilities. \smallskip \noindent \textit{What functionings are the priority for those most vulnerable? Is there an overlap between their priorities and the goals of other stakeholders?} \noindent We need to pay attention to those who are most vulnerable.
The capabilities approach may be leveraged to fight inequality by thinking in terms of equalizing capabilities. To identify capabilities that are not yet available to marginalized members of a community, we must listen to their concerns and ensure those concerns are prioritized, for example via the strategies proposed in our discussions on finding those impacted by AI systems and including multiple viewpoints. As an example of the consequences of failing to include those most vulnerable throughout the lifetime of an AI system, consider one of the initial steps of data collection. Women are often ignored in datasets and therefore their needs are underreported \cite{dignazio2020data}. For example, crash-test dummies were designed to resemble the average male, and vehicles were evaluated to be safe based on these male dummies---leaving women 47\% more likely to be seriously injured in a crash \cite{perez2019invisible}. These imbalances are often also intersectional, as \citet{buolamwini2018gender} demonstrate by revealing stark racial and gender-based disparities in facial recognition algorithms. Beyond the inclusion of all groups in datasets, data must be properly stratified to expose disparities. In December 2020, during the COVID-19 pandemic, the Bureau of Labor Statistics reported a net loss of 140,000 jobs. The stratified data reveal that all losses were women's: women lost 156,000 jobs while men gained 16,000, and unemployment was most severe for Black, Latinx, and Asian women \cite{ewing2021all}. Without accounting for the capabilities of all people affected by such systems, it is difficult to claim that these technologies were for social good. On the other hand, prioritizing the needs of the most marginalized groups may at times offer an accelerated path towards achieving collective goals. Project Drawdown identified and ranked 100 practical solutions for stopping climate change \cite{hawken2017drawdown}. Number 6 on its list was the education of girls, recognizing that women with higher levels of education marry later, have fewer children, and manage agricultural plots with greater yields. Another solution advocates for securing the land tenure of indigenous peoples, whose stewardship of the land fights deforestation, resource extraction, and monocropping. \smallskip \noindent \textit{Are any of these capability sets fertile, in the sense of securing other capabilities?} \noindent To maximize the capabilities of various communities, we may wish to focus on capabilities that produce fertile functionings, as discussed in Section~\ref{sec:participatory}. Specifically, many functionings are necessary inputs to produce others; for example, achieving physical fitness from playing sports requires as input good health and nourishment \cite{clark2005sen}. Some of Nussbaum's 10 central capabilities---including bodily integrity (security against violence and freedom of mobility) and control over one's environment (the right to political participation, to hold property, and to dignified work)---may be viewed as fertile \cite{nussbaum2000feminism}. Reading, for instance, may secure the capability to work, associate with others, and have control over one's environment. AI has the potential to help achieve many of these capabilities. For example, accessible crowdwork, done thoughtfully, offers the opportunity for people with disabilities to find flexible work without the need for transit \cite{zyskowski2015accessible}.
\smallskip \noindent \textit{How closely do the values of the project match those of the community as opposed to the designers? How does the focus of the project respond to their expressed needs and concerns? Does the project have the capacity to respond to those needs?} \noindent Consider the scenario where a community finds it acceptable to hunt elephants, while the designers are trying to prevent poaching. There could be agreement on high-level values, such as public health, but disagreement on whether to prioritize specific interventions to promote public health. There could even be a complete lack of interest in the proposed AI system on the part of the community. At an early stage of a project, AI researchers need to facilitate a consultation method to understand communities' values and choices. Note that this process could allow stark differences in priorities between stakeholders to surface before the project starts, preventing over-investment in a project that would later be terminated because of a difference in values. We may consider several of the strategies we discussed previously in this case, such as deliberative democratic processes, Diverse Voices panels, or computational social choice methods. It may subsequently be necessary to end the project if these strategies do not work. \subsection{Evaluating AI for Social Good} Once AI researchers and practitioners have a system tailored to these capabilities, we believe that communities should be the ones to judge the success of this new system. This stage of the process may pose additional challenges, given the difficulty of measuring capabilities \cite{johnstone2007technology}. Though we do not endorse any particular measurement methodology here, various attempts to operationalize the capabilities approach give us confidence that such methodologies are feasible and may be implemented in the course of evaluating AI projects \cite{anand2009development}. First and foremost, we maintain that the evaluation of success should be done throughout the lifecycle of the AI4SG project (and beyond) as discussed above, especially via community feedback. However, we wish to emphasize that we as AI researchers need to keep capabilities in mind as we evaluate the success of AI4SG projects to avoid ``unintended consequences'' and a short-sighted focus on improved performance on metrics such as accuracy, precision, or recall. \smallskip \noindent \textit{How does the new AI4SG system affect all stakeholders' capabilities, particularly those selected at the start?} \noindent This question is related to the literature on measuring capabilities \cite{johnstone2007technology}, and is therefore difficult to answer. We aim to provide a few examples; these may not apply to every AI4SG system, nor do they form an exhaustive list of valid techniques. First, based on the idea of AI as a diagnostic to measure social problems \cite{abebe2020roles}, we may be able to (partially) probe this question using data. \citet{sweeney2013discrimination} shows through ad data that advertisements related to arrest records were more likely to be displayed when searching for ``Black-sounding'' names, which may affect a candidate's employment capability, for example. \citet{obermeyer2019dissecting} analyze predictions, model inputs and parameters, and true health outcome data to show that the capability of health is violated for Black communities, as they are included in certain health programs less frequently than white communities due to a biased algorithm.
There could additionally be feedback mechanisms when an intervention is deployed, whether via the AI system itself, or possibly by collaborating with local non-profits and NGOs. This may be especially useful in cases where these organizations are already engaged in tracking and improving key capabilities such as health outcomes, e.g., World Health Partners \cite{chavali2011world} or CARE International \cite{care2017annual}. Again, these examples will likely not apply to all cases, which opens the door for further interdisciplinary research. However, no matter what strategy is taken, it is imperative that we continue to center communities in all attempts to measure the effects on stakeholders' capabilities. \smallskip \noindent \textit{Are other valued capabilities or functionings negatively affected as a result of the project? Are stakeholders' values and priority rankings in line with such tradeoffs?} \noindent We have a responsibility to \emph{actively} think about possible outcomes; it is negligent to dismiss negative possibilities as ``unintended consequences'' \cite{parvin2020unintended}. These participatory mechanisms should thus ensure that the perspective of the most vulnerable and most impacted stakeholders is given due consideration. We recognize that this can be especially challenging, as discussed in our first guiding question for identifying impacted communities. Therefore, we further suggest that the evaluation of an AI4SG project should employ consultation mechanisms that are open to all community members throughout the implementation process, such as the feedback mechanisms suggested previously. \smallskip \noindent\textit{What should my role be as an AI researcher? As a student?} \noindent We believe that AI researchers at all levels should participate in this work. This work involves all of the above points, including learning about the domain to understand who the stakeholders are, engaging in discussion with stakeholders, and evaluating performance. We also acknowledge that we AI researchers may not always be the best suited to lead participatory efforts, and so we encourage interdisciplinary collaborations between computer science and other disciplines, such as the social sciences. A strong example would be the Center for Analytical Approaches to Social Innovation (CAASI) at the University of Pittsburgh, which brings together interdisciplinary teams from policy, computing, and social work \cite{caasi}. However, AI researchers should not completely offload these important responsibilities to social science or non-profit colleagues. It should be a team effort, which we believe will bring fruitful research in social science, computer science, and other disciplines. AI researchers and students can also advocate for systemic change from within, which we discuss in more depth in the conclusion. Although student researchers are limited by constraints such as research opportunities and funding, they may establish a set of moral aspirations for their work and set aside time for people-centered activities, such as mentoring and community-building \cite{chan2020approaching}. \section{Conclusion: Thoughts on AI for Social Good as a Field} In this paper, we lay out a community-centered approach to defining AI for social good research that focuses on elevating the capabilities of those members who are most marginalized.
This focus on capabilities, we argue, is best enacted through a participatory approach that includes those affected throughout the design, development, and deployment process, and gives them the power to choose their desired capability sets as well as to influence how they wish to see those capabilities realized. We recognize that the participatory approach we lay out requires a significant investment of time, energy, and resources beyond what is typical in AI or even in much existing AI4SG research. \emph{We highlight this discrepancy to urge a reformation within the AI research community to reconsider existing incentives so as to encourage researchers to pursue more socially impactful work.} Institutions have the power to catalyze change by (1) establishing requirements for community engagement in research related to public-facing AI systems; and (2) increasing incentives for researchers to meaningfully engage impacted communities while simultaneously producing more impactful research \cite{black2020call}. While engaging in collaborative work with communities can give rise to some technical directions of independent interest to the AI community \cite{de2018machine}, such a shift to encourage community-focused work will in part require reconsidering the evaluation criteria used when reviewing papers at top AI conferences. Greater value must be placed on papers with positive social outcomes, including those with potential for impact if the work has not yet been deployed. Such new criteria are necessary since long-term, successful AI4SG partnerships often also lead to non-technical contributions, as well as situated programs which do not necessarily focus on generalizability \cite{perrault2020ai}. We encourage conferences to additionally consider rewarding socially beneficial work with awards analogous to Best Paper awards, such as the ``New Horizons'' award from the MD4SG 2020 Workshop, and institutions to recognize impactful work with awards such as the Social Impact Award at the Berkeley School of Information \cite{berkeleyIschool}. In the meantime, we suggest that researchers look to nontraditional or interdisciplinary venues for publishing their impactful community-focused work. These venues often gather researchers from a variety of disciplines outside computer science, opening the door for future collaborations. For example, researchers could consider COMPASS, IAAI, MD4SG/EAAMO \cite{abebe2018mechanism}, the LIMITS workshop \cite{nardi2018computing}, and special tracks at AAAI and IJCAI, among others. Venues such as the Computational Sustainability Doctoral Consortium and the CRCS Rising Stars workshop bring students together from multiple disciplines to build relationships with each other. Researchers could also consider domain workshops and conferences, such as those in ecology or public health. The incentive structure in AI research is often stacked against thoughtful deployment. Whereas a traditional experimental section may take as little as a week to prepare, a deployment in the field may take months or years, yet it is rarely afforded corresponding weight by reviewers and committee members. This extended timeline weighs most heavily on PhD students and untenured faculty, who are evaluated on shorter timescales. We should thus reward both incremental and long-term deployment, freeing researchers from the pressure to rush to deployment before an approach is validated and ready.
In addition to the need for bringing stakeholders into the design process of AI research, we must ensure that all communities are welcomed as AI researchers as well. Such an effort could counteract existing disparities and inequities within the field. For example, as is the case in other academic disciplines, systemic anti-Blackness is ingrained in the AI community, with racial discrepancies in physical resources such as access to a secure environment in which to focus, social resources such as access to project collaborators or referrals for internships, and measures such as the GRE or teacher evaluations \cite{guillory2020combating}. Further, as of 2018, only around 20\% of tenure-track computer science faculty were women \cite{roy2019engineering}. To combat these inequities, people across the academic ladder may actively work to change whom they hire, with whom they collaborate (including collaborating with minority-serving institutions \cite{kuhlman2020no}), and how much time they spend on service activities to improve diversity efforts \cite{guillory2020combating}. The above reformations could contribute greatly to making the use of participatory approaches the norm in AI4SG research, rather than the exception. PACT, we argue, is a meaningful new way to answer the question: ``what is AI for social good?'' We, as AI researchers dedicated to the advancement of social good, must make a PACT~with communities to find our way forward together. \begin{acks} We would like to thank Jamelle Watson-Daniels and Milind Tambe for valuable discussions. Thank you to all those we have collaborated with throughout our AI4SG journey for their partnership, wisdom, guidance, and insights. This work was supported by the Center for Research on Computation and Society (CRCS) at the Harvard John A. Paulson School of Engineering and Applied Sciences. \end{acks} \bibliographystyle{ACM-Reference-Format}
{ "timestamp": "2021-06-21T02:20:25", "yymm": "2105", "arxiv_id": "2105.01774", "language": "en", "url": "https://arxiv.org/abs/2105.01774" }
\section{Introduction}\label{sec:intro} Matrix completion is a framework that has gained popularity in a wide range of machine learning applications, including recommender systems \citep{Koren2009}, system identification \citep{Liu2010}, global positioning \citep{singer2010uniqueness}, and natural language processing \citep{Wijaya2017}. It is a useful framework for complex prediction problems, where each observation comes with a heterogeneous collection of observed features. In particular, matrix completion is applied to problems where the object of inference or prediction is a matrix whose rows correspond to observations and columns to variables/features. In many applications, only a subset of entries in this matrix are observed (often with noise), and the goal is to ``complete'' the matrix, filling in estimates of the unobserved entries. This ``completion'' is done by leveraging the known structure in the matrix. The most famous example, which brought matrix completion to prominence, is the Netflix Challenge \citep{Koren2009}, where a small sample of observed ratings for each customer was used to successfully predict future/unobserved movie ratings for Netflix customers. More formally, suppose we have an underlying unobserved matrix $M\in\mathbb{R}^{n\times p}$: we then observe a subset of the entries from the noise-contaminated matrix $Y= M + E$, where $E$ is a matrix of i.i.d.\ mean-zero, finite-variance noise variables. Our goal is to recover the matrix $M$ from this partially observed, noisy $Y$. This is known as matrix completion. Without any structure on the matrix $M$, recovering the values of $M$ corresponding to unobserved entries is impossible \citep{Laurent2001}. Matrix completion becomes possible if one imposes some constraints on the structure of the underlying matrix: it is most common to assume that $M$ is low rank. Directly employing this assumption by, e.g., finding the minimum-rank completion of $Y$ (or solving the corresponding rank-constrained regression) is unfortunately NP-hard and becomes computationally infeasible for problems involving large matrices \citep{CandesTao2010, Chistov1984ComplexityOQ}. Over the last decades, computationally efficient methods using convex optimization have been developed for recovering a low-rank matrix from a small number of observations with near-optimal statistical guarantees, primarily in noiseless problems \citep{Srebro2004, Recht2011, CandesTao2010, Recht2010}, and when the observed entries are contaminated with noise \citep{CandesPlan2009, koltchinskii2011}. These methods rely on using the nuclear norm of the matrix \citep{fazel2002, jaggi2010simple}, i.e., the sum of its singular values, as a convex surrogate for the matrix rank. The low-rank structure leveraged in matrix completion can be thought of as learning a linear embedding of the data in a low-dimensional space. In practice, the underlying matrix $M$ may not be low rank. However, we often believe it may still have useful low-dimensional structure. It has thus become popular to learn a low-dimensional non-linear embedding of the data. This idea is used both in matrix completion and more generally for low-dimensional summaries of data. It has been applied in motion recovery \citep{Xia2018}, epigenomics \citep{Schreiber2018}, and health data analytics \citep{wang2015}, among other areas.
To recover these embeddings, Reproducing Kernel Hilbert Space (RKHS) methods \citep{Fan2018a}, nearest neighbor methods \citep{li2019nearest}, and deep learning methods like autoencoders and neural-network-based variational frameworks \citep{Fan2018b, yu2013embedding, jiang2016variational} have been used. Additionally, there has been strong empirical evidence that matrix completion methods based on nuclear norm penalization perform well even in scenarios where any low dimensional structure is likely non-linear. As these methods were developed for linear low rank structure, this is, at first glance, a bit surprising. There has been some work giving theoretical justification for these empirical results \citep{chatterjee2015matrix, udell2019big}. In particular, these works note that in the presence of some types of non-linear low-dimensional structure in $M$, nuclear norm-based matrix completion methods can still consistently estimate $M$; they additionally give some non-stochastic approximation error results. However, to the best of our knowledge, the statistical optimality of nuclear-norm-based matrix completion has not been considered in this setting. In this manuscript, we delve further into the performance of matrix completion for $M$ with low-dimensional, non-linear structure. In particular, we consider $M$ with rows that can be embedded in a low-dimensional smooth manifold. We then (i) show that nuclear norm-based matrix completion can consistently estimate $M$; (ii) characterize the rate at which the reconstruction error converges to $0$ as a function of the size of the matrix, number of observed entries, and smoothness and dimension of the underlying manifold; and (iii) prove that, up to a log term, this rate cannot be improved upon by any method; that is, our upper bound is in fact minimax rate optimal for reconstruction error in this problem. Furthermore, our error bounds (and our techniques) also relate the matrix completion problem clearly to more classical non-parametric estimation: Our reconstruction error bounds parallel the minimax rate of mean squared error (MSE) in the nonparametric regression setting. Results (ii) and (iii), we believe, are novel. Our experiments on synthetic data corroborate our theoretical findings. In particular, they suggest that the finite sample empirical performance of matrix completion under non-linear low-dimensional embeddings is consistent with the asymptotic theoretical error bounds. These empirical results also corroborate the claim that better performance is achieved when the embedding of the underlying matrix $M$ lies in a smoother manifold. \section{Methods}\label{sec:setup} \subsection{Problem setup} We start by giving some notation. We use upper case letters to represent matrices and lower case letters to represent scalars. The trace inner product of any two matrices, $M, B\in \mathbb R^{n\times p},\ n,p \in \mathbb{Z}^+$, is $\langle M, B\rangle = \operatorname{tr}(M^TB)$. The element-wise infinity norm of $M\in \mathbb R^{n\times p}$ is defined by $\|M\|_\infty= \max_{1\le i \le n, 1\le j \le p}|m_{ij}|$, where $m_{ij}$ denotes the $(i,j)$-th entry of $M$. We also denote the Frobenius norm of a matrix $M$ as $\|M\|_F = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{p}m_{ij}^2}$. In the general matrix completion problem, we randomly observe some of the entries from a matrix $M\in\mathbb{R}^{n\times p}$; the observed entries may also be contaminated with error.
To support our later theoretical derivations, we will describe this process in terms of a set of mask matrices $X_t\in \mathbb{R}^{n\times p}$ and observed values $y_t \in\mathbb{R}$. Each $X_t$ is a matrix with a single $1$, whose position is indexed by $t$, and all other entries equal to $0$: \begin{equation}\label{eq::mask_mat} X_t = \begin{pmatrix} 0 & 0& \cdots & 0 & \cdots & 0 \\ \vdots & & \vdots & & \vdots\\ 0 &0 & \cdots & 1 &\cdots & 0\\ \vdots & & \vdots & & \vdots\\ 0 &0 & \cdots & 0 & \cdots & 0 \\ \end{pmatrix}_{n\times p}. \end{equation} The matrices $X_t$ fall in the set $\mathcal{X} = \{e_n(i)e_p(j)^T, \textrm{ for all }i=1,\ldots,n \textrm{ and }j=1,\ldots,p\}$, where $e_n(i)\in \mathbb R^n$ is the basis vector consisting of all zeros except for a single 1 at the $i$th entry. In this formulation, $X_t$ indicates the location in $M$ from which $y_t$ is drawn. That is, for $X_t = e_n(i)e_p(j)^T \in \mathcal{X}$, $\langle X_t, M\rangle = m_{ij}$. Now, we can frame the matrix completion problem as follows: Suppose we have $N$ pairs of observations $(X_t,y_t)$, $t=1,\ldots, N$, that satisfy \begin{equation}\label{eq::data_model} y_t = \langle X_t, M\rangle + \xi_t, \end{equation} where $\xi_t$ are i.i.d. random errors distributed $N(0,\sigma^2)$, $M \in \mathbb{R}^{n\times p}$ is the underlying true matrix to be recovered, and $y_t\in \mathbb{R}$ are observed values. The observed matrix can be written as $Y = \sum_{t=1}^{N}y_tX_t$, where $N$ is the number of observed entries. We assume that $X_t$ is uniformly sampled at random from $\mathcal{X}$ \citep{koltchinskii2011}, i.e., $X_t\sim \Pi$, and the probability that the $(i,j)$th entry of $X_t$ equals 1 is $\pi_{ij} = \operatorname{P}(X_t = e_n(i)e_p(j)^T) = \frac{1}{np}$ for $1 \le i \le n, 1 \le j \le p$. This is essentially a missing completely at random (MCAR) assumption. The goal is to recover $M$ given the pairs $(X_t, y_t)$, $t = 1,\ldots,N$, and we are generally interested in the setting where $N\ll np$. To solve this problem, existing methods often assume that $M$ has low rank (or approximately low rank), i.e., $M \simeq UV^T$ with $U \in \mathbb{R}^{n\times r}$ and $V \in \mathbb{R}^{p \times r}$ for some integer $r \ll \min(n,p)$. In contrast to this low rank assumption, this paper studies the problem where $M$ is not necessarily low-rank but generated from a low-dimensional non-linear manifold. This notion is formalized in the next section. \subsection{Non-linearly Embeddable Matrices} We begin by formalizing what we mean by ``low-dimensional non-linear structure''. Consider a matrix $M$, a positive integer $K$, and a function class $\mathcal{F} \subset \mathcal{L}^2\left(\mathbb{R}^K\right)$. We say $M$ is $\mathcal{F}$-embeddable if there exist functions $f_j\in\mathcal{F}$, $f_j: \mathbb{R}^K \to \mathbb{R}, \ j=1,\ldots,p$, and a matrix $\bm\Theta\in\mathbb{R}^{n\times K}$ such that \begin{equation}\label{eq::mat_gen} m_{ij} = f_j\left(\bm\theta_{i,\cdot}\right), i=1,\ldots,n, j=1,\ldots,p, \end{equation} where $m_{ij}$ is the $(i,j)$ entry of $M$ and $\bm\theta_{i,\cdot}$ denotes the $i$th row vector of $\bm\Theta$. Here, $\bm\Theta$ gives an embedding of our observations from their original $p$-dimensional space into a $K$-dimensional space ($K \le p$). The set of functions $\{f_j\}_{j=1}^p \subset \mathcal{F}$ identifies how to map our embedding in $\mathbb{R}^K$ back to $\mathbb{R}^p$.
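To make this setup concrete, the following is a minimal Python sketch of constructing an $\mathcal{F}$-embeddable matrix as in \eqref{eq::mat_gen} and drawing uniformly sampled noisy observations as in \eqref{eq::data_model}. The specific maps $f_j$, all constants, and all variable names are illustrative choices, not taken from the paper; storing the index pair $(i,j)$ is equivalent to storing the one-hot mask $X_t$, since $\langle X_t, M\rangle = m_{ij}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p, K, N, sigma = 200, 100, 1, 2000, 0.1

# Latent embedding Theta in [0,1]^{n x K}: one row theta_i per row of M.
Theta = rng.uniform(0.0, 1.0, size=(n, K))

# Illustrative smooth coordinate maps f_j(theta) = cos(2 pi w_j theta).
w = rng.uniform(0.5, 2.0, size=p)
M = np.cos(2.0 * np.pi * Theta[:, :1] * w[None, :])  # m_ij = f_j(theta_i)

# Despite the K = 1 embedding, M typically has high (often full) rank.
print(np.linalg.matrix_rank(M))

# Masks X_t drawn i.i.d. uniformly from the set of one-hot matrices:
# store the index pair (i, j) instead of the matrix itself.
rows = rng.integers(0, n, size=N)
cols = rng.integers(0, p, size=N)
y = M[rows, cols] + sigma * rng.normal(size=N)       # y_t = <X_t, M> + xi_t
\end{verbatim}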
In the classical matrix completion setting, where we assume $M$ is low-rank, nuclear norm penalized empirical risk minimization is often used to estimate $M$ \citep{argyriou2008convex, candes2010matrix, negahban2011estimation}; more specifically, the estimator is obtained by \begin{equation}\label{eq::est_proc_old} \arg\min_{M}\left\{N^{-1}\sum_{t=1}^N(y_t - \langle X_t, M\rangle)^2 + \lambda \|M\|_*\right\}, \end{equation} where $\lambda$ is a regularization parameter used to balance the trade-off between fitting the unknown matrix using least squares and minimizing the nuclear norm $\|M\|_*$. This ``matrix lasso'' is known to have strong theoretical properties when $M$ is low rank \citep{argyriou2008convex,candes2010matrix,negahban2011estimation,cai2016matrix}. However, in our scenario, $M$ likely does not have low rank, and previous work does not fully explain the effectiveness of the estimate from \eqref{eq::est_proc_old} in this setting. While the estimator in \eqref{eq::est_proc_old} is simple and quite well known, it fails to exploit knowledge of the sampling scheme (which is often known or at least assumed to be known). To use the assumption that the mask matrices $\{X_t\}_{t=1}^N$ are i.i.d. uniformly sampled from $\mathcal{X}$, we study a slight modification of \eqref{eq::est_proc_old} described in \citet{koltchinskii2011}: \begin{equation}\label{eq::est_proc} \begin{aligned} \widehat M & \leftarrow \arg\min_{M} \left\{\frac{1}{np}\|M\|_F^2 - \left\langle \frac{2}{N} \sum_{t=1}^{N} y_t X_t, M \right\rangle + \lambda \|M\|_*\right\} \end{aligned} \end{equation} After some simple manipulation, \eqref{eq::est_proc} can be further reduced to minimizing \[ \frac{1}{np}\|M-R\|_F^2 +\lambda \|M\|_*, \] where $R = \frac{np}{N}\sum_{t=1}^{N}y_tX_t = \frac{np}{N}Y$. Thus, $\widehat{M}$, the solution to \eqref{eq::est_proc}, is merely a singular-value soft-thresholding estimator: \begin{equation}\label{eq::hat_M} \widehat M = \sum_{j=1}^{\operatorname{rank}(R)}(\Lambda_j(R)-\lambda np/2)_+u_j(R)v_j(R)^T, \end{equation} where $\Lambda_j(R)$ are the singular values and $u_j(R)$, $v_j(R)$ are the left and right singular vectors of $R$, such that $R = \sum_{j=1}^{\operatorname{rank}(R)}\Lambda_j(R)u_j(R)v_j(R)^T$. \citet{koltchinskii2011} established the rate optimality of this estimator with respect to Frobenius-norm loss when $M$ is low rank. In this paper, we ultimately aim to show that $\widehat M$ in~\eqref{eq::est_proc} is still a consistent and rate optimal estimator of $M$ in the case that $M$ is non-linearly embeddable, as long as $K$ is small and the function class $\mathcal F$ is sufficiently smooth. \subsection{Approximation of Embeddable Matrices}\label{sec::approximation} Our goal is to show that the estimator obtained by \eqref{eq::est_proc} is consistent for the true underlying matrix $M$ with respect to Frobenius-norm loss (and to characterize the convergence rate) when $M$ is non-linearly embeddable. To this end, we first show that $M$ can be well approximated by a sequence of matrices with low (and only slowly growing) rank as long as the function class $\mathcal F$ is sufficiently smooth. More specifically, we will need the following condition on the function class $\mathcal{F}$. \begin{condition}\label{cond::approx} Given a function class $\mathcal{F}$, let $C_0$ denote a fixed positive number.
Suppose that for any $\epsilon > 0$, there exists a finite set of functions $\mathcal{F}_{\epsilon} = \left\{\psi_1, \psi_2, \ldots, \psi_{J(\epsilon)}\right\} \subset \mathcal{F}$, such that \begin{equation}\label{eq:bound} \left\|\psi\right\|_{\infty} \leq C_0,\quad\text{for all }\psi\in\mathcal{F}_{\epsilon}, \end{equation} and \begin{equation}\label{eq:appx} \max_{f\in\mathcal{F}}\min_{\left\|\beta\right\|_2^2 \leq C_0} \left\|f - \sum_{l=1}^{J(\epsilon)}\beta_l \psi_l\right\|_{\infty} \leq \epsilon. \end{equation} For each $\epsilon$, we denote by $\mathcal{F}^{*}_{\epsilon}$ a set of minimal cardinality such that \eqref{eq:bound} and \eqref{eq:appx} hold. We let $J^{*}(\epsilon)$ denote the cardinality of $\mathcal{F}^{*}_{\epsilon}$. \end{condition} For a function class $\mathcal{F}$, Condition~\ref{cond::approx} characterizes the minimal number of basis functions needed to uniformly approximate functions in $\mathcal{F}$ up to precision $\epsilon$. In Section~\ref{sec::theory}, we shall apply this condition to $K$-dimensional, $L$-th order differentiable functions, and show how this number scales as a function of $\epsilon$. Based on the above condition, we can establish the existence of an approximation matrix which is sufficiently close to the true matrix $M$ and has a bounded nuclear norm. \begin{lemma} \label{lem::approx} Suppose the matrix $M\in\mathbb{R}^{n\times p}$ is $\mathcal{F}$-embeddable, and $\mathcal{F}$ satisfies Condition~\ref{cond::approx}. Then, for any $\epsilon>0$, there exists a matrix $M^\epsilon$ satisfying $\operatorname{rank}(M^\epsilon) = J^*(\epsilon) \le \min(n,p)$ such that \begin{equation}\label{eq::approx_mat} \left\|M^{\epsilon} - M\right\|_{\infty} \leq \epsilon. \end{equation} Furthermore, the nuclear norm of $M^\epsilon$ is bounded: There exists $C_1>0$ (independent of $\epsilon$) such that \begin{equation}\label{eq::bound_nuclear} \frac{1}{\sqrt{np}}\left\|M^{\epsilon}\right\|_{*} \leq C_1 J^*(\epsilon). \end{equation} \end{lemma} The proof is given in Appendix~\ref{proof_lem_approx}. Note that for the $\mathcal{F}$ we consider later (restricted to smooth functions), we will show that $J^{*}(\epsilon) \ll \min(n,p)$. This parallels results in classical non-parametric regression, where many of the function spaces considered can be approximated uniformly with small error by linear combinations of relatively few basis functions \citep{tsybakov2009introduction}. \section{Consistency}\label{sec::theory} Using Lemma~\ref{lem::approx}, it is relatively straightforward to evaluate the performance of our estimator $\widehat M$ in \eqref{eq::est_proc}. The performance metric simplest to theoretically analyze is $N^{-1}\sum_{i=1}^{N}\left\langle X_i, \widehat{M} - M\right\rangle^2$. However, this criterion only evaluates the prediction error on the \emph{observed} entries. This is unsatisfying as our ultimate goal is to recover the entire matrix. Thus, we instead aim to evaluate the performance of $\widehat{M}$ based on the metric $\frac{1}{np} \|\widehat M - M\|_F^2$. The following result gives an upper bound for the performance of our estimator $\widehat{M}$ in this metric. \begin{theorem}\label{thm::upper_bound} Suppose we observe $N$ pairs $\{(y_t, X_t)\}_{t=1}^N$ satisfying the data generating model \eqref{eq::data_model}, where $X_t$ are i.i.d. uniformly sampled from $\mathcal{X}$. Assume the true matrix $M \in \mathbb{R}^{n\times p}$ is $\mathcal{F}$-embeddable, where $\mathcal{F}$ satisfies Condition \ref{cond::approx}.
Further suppose that $N \ge (n\wedge p)\log^2(n+p)$. Then there exists a constant $C_2 >0$ (that only depends on $\sigma$ and $\|M\|_\infty$) such that if we define the regularization parameter $\lambda$ by \[\lambda = C_2\sqrt{\frac{\log (n+p)}{N(n\wedge p)}}, \] then, with probability at least $1-2(n+p)^{-1}$, the completion error of $\widehat M$ in \eqref{eq::hat_M} is bounded by \begin{equation}\label{eq::generalbound} \frac{1}{np}\left\|\widehat M-M\right\|_F^2 \le C_2^2\left(\frac{1+\sqrt{2}}{2}\right)^2\frac{(n\vee p)\log(n+p)}{N}J^*(\epsilon) + \epsilon^2, \end{equation} for any $\epsilon>0$. Here, $J^*(\epsilon)$ is the rank of the approximation matrix $M^\epsilon$ with $\|M - M^\epsilon\|_\infty \le \epsilon$, which corresponds to the minimal cardinality of $\mathcal{F}^{*}_{\epsilon} \subset \mathcal{F}$ satisfying Condition~\ref{cond::approx}. \end{theorem} The upper bound in Theorem~\ref{thm::upper_bound} can be established by extending the results of \citet{koltchinskii2011}. The details of the proof are given in Appendix~\ref{proof_upper_bound}. The two terms on the right-hand side of \eqref{eq::generalbound} clarify the trade-off between the approximation error, $\epsilon$, and the cardinality of the minimal linear approximation set $\mathcal{F}^{*}_{\epsilon}$, namely $J^*(\epsilon)$. Our upper bound is consistent with the results in \citet{koltchinskii2011}, where the error is decomposed into a misspecification error ($\epsilon^2$) and a prediction error. Usually, when there is no misspecification, i.e., the true matrix $M$ is low rank, the prediction error is linearly related to the rank of $M$ \citep{candes2011tight, klopp2014noisy, cai2016matrix}. In our scenario, where the low-rank assumption is violated, the prediction error in \eqref{eq::generalbound} is linearly related to the rank of the approximation matrix. Similar ideas occur in more traditional non-parametric estimation problems. For example, when using projection estimators in H\"older and Sobolev spaces, one of the main rate-optimal estimation approaches requires selecting a truncated basis for projection that grows with the sample size $N$ \citep{tsybakov2008introduction}. However, in those examples, the number of basis functions is a tuning parameter in the algorithm, and the set of basis functions must be selected in advance. Here, both the set of basis functions and the truncation level are instead just theoretical tools for analyzing the algorithm's performance. In employing matrix completion, the analyst only needs to select $\lambda$. We note that $N \ge (n\wedge p)\log^2(n+p)$ in Theorem~\ref{thm::upper_bound} is a quite weak condition on the number of observations: $N$ could satisfy this and still be far less than $np$. The latent space model results in \cite{chatterjee2015matrix} require at least $O\left( n^{\frac{2(K+1)}{K+2}}\right)$ of the $n^2$ entries to be observed to guarantee consistency when recovering an $n\times n$ matrix. This implies that one needs to observe $O\left( n^{\frac{K}{K+2}}\right)$ out of the $n$ entries in each row, as compared to our much weaker requirement of $O\left(\log^2(n)\right)$ per row. We now specialize our results to matrices that are $\mathcal{F}$-embeddable for $\mathcal{F}$ containing functions with bounded derivatives. This is a natural class of functions to work with (though one could alternatively work in a multivariate Sobolev or H\"older space).
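For concreteness, the following is a minimal Python sketch of the soft-thresholding estimator $\widehat M$ from \eqref{eq::hat_M}: it forms $R = \frac{np}{N}\sum_t y_t X_t$ from observed index/value triples and soft-thresholds the singular values of $R$ at $\lambda np/2$. The function name is illustrative, and the constant in the suggested $\lambda$ is unknown in practice and would have to be tuned (e.g., on held-out entries).
\begin{verbatim}
import numpy as np

def soft_threshold_complete(rows, cols, y, n, p, lam):
    """Singular-value soft-thresholding estimator (a sketch)."""
    N = len(y)
    R = np.zeros((n, p))
    np.add.at(R, (rows, cols), y)   # accumulates duplicates: sum_t y_t X_t
    R *= n * p / N                  # R = (np / N) * sum_t y_t X_t
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    s = np.maximum(s - lam * n * p / 2.0, 0.0)  # (Lambda_j(R) - lam*np/2)_+
    return (U * s) @ Vt

# Theorem 1 suggests lam = C * sqrt(log(n + p) / (N * min(n, p))) for a
# constant C depending on sigma and ||M||_inf.
n, p, N = 200, 100, 2000
lam = 1.0 * np.sqrt(np.log(n + p) / (N * min(n, p)))
\end{verbatim}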
\begin{condition}\label{cond::bound_derivative} $M$ is $\mathcal{F}$-embeddable, where $\mathcal{F}$ contains functions with uniformly bounded $L$-th order mixed partial derivatives (for some fixed $L>0$). More formally, define $\mathcal{F}(L,\gamma,K)$, for $L,K \geq 1$, as the set of $L$-th order differentiable functions from $\mathbb{R}_{[0,1]}^K$ to $\mathbb{R}$ satisfying \begin{equation}\label{eq::bound_deriv} \left|\frac{\partial^L}{\partial x_1^{L_1}\cdots \partial x_K^{L_K}} f(\mathbf{x}) \bigg\vert_{\mathbf{x}=\mathbf{x}^0}\right| \leq \gamma, \end{equation} for all $\mathbf x^0 = (x_1^0,\ldots,x_K^0)\in \mathbb{R}_{[0,1]}^K \subset \mathbb{R}^K$ and all non-negative integers $L_1,\ldots,L_K$ satisfying $L_1+\cdots+L_K=L$. Now, additionally define the set \begin{equation}\label{eq::mat_class} \begin{aligned} \mathcal{M}(L,\gamma,K) &= \{ M \in \mathbb{R}^{n\times p}\,\mid \, m_{ij} = f_j(\bm\theta_{i,\cdot}),\\ &\textrm{ with } f_j\in\mathcal{F}(L,\gamma,K),\ j\leq p,\text{ and }\bm\theta_{i,\cdot}\in\mathbb{R}_{[0,1]}^K,\ i \leq n\} \end{aligned} \end{equation} This is the set of $\mathcal{F}(L,\gamma,K)$-embeddable matrices, where the embedding lives in a compact space (for convenience we use the $\ell_{\infty}$ ball). Our formal condition here is that $M\in \mathcal{M}(L,\gamma,K)$. \end{condition} {\bf Remark.} In the above condition, we will often suppress the dependence on $\gamma$ and write $\mathcal{M}(L,K)$ and $\mathcal{F}(L,K)$. This is because $\gamma$ does not affect the convergence rate of our estimator. Additionally, we specify the domain of the embeddings to be $[0,1]^K$ for ease of exposition. This is without loss of generality, as we could rescale any compactly supported embedding to live in this set. Condition~\ref{cond::bound_derivative} imposes an additional constraint on our embedding: The underlying manifold on which our matrix lives should be smooth. Here smoothness is characterized by the number of bounded derivatives. As we will see, this function class engages well with Condition~\ref{cond::approx} in the sense that we are able to characterize $J^{*}(\epsilon)$ for the function class $\mathcal{F}(L,K)$. This is essentially a multivariate H\"older class, which has been widely used in the area of non-parametric estimation \citep{tsybakov2008introduction}. One could alternatively view this as a multivariate Sobolev class under the sup-norm, $W^{L,\infty}(\mathbb{R}^K)$. The following lemma gives the number of basis elements needed to linearly approximate a matrix satisfying the above condition with approximation error bounded by $\epsilon$. \begin{lemma}\label{lem::J_star_bound} For the function class $\mathcal{F}(L,K)$ described in Condition~\ref{cond::bound_derivative}, Condition~\ref{cond::approx} is satisfied with $J^*(\epsilon) = O\left(\epsilon^{-K/L}\right)$. \end{lemma} The proof of this lemma is given in Appendix~\ref{proof_J_star_bound}. Now, we can establish the final convergence result for smoothly embeddable matrices. \begin{theorem}\label{thm::upper_bound2} Under the same scenario and assumptions as in Theorem \ref{thm::upper_bound}, assume further that the $\mathcal{F}(L,K)$-embeddable matrix $M$ satisfies Condition~\ref{cond::bound_derivative} for a given $L$ and $K$.
Then, the upper bound \eqref{eq::generalbound} is optimized at $\epsilon = \left(\frac{(n\vee p)\log(n+p)}{N}\right)^{\frac{L}{2L+K}}$, resulting in \begin{equation}\label{eq::upper_bound} \begin{aligned} \frac{1}{np}\left\|\widehat M-M\right\|_F^2 &=O_P\left(\left[\frac{(n\vee p)\log(n+p)}{N}\right]^{\frac{2L}{2L+K}}\right). \end{aligned} \end{equation} \end{theorem} The proof is given in Appendix~\ref{proof_upper_bound2}. This upper bound on the convergence rate of the MSE of $\widehat M$ depends only on the dimensions $n$ and $p$ of the matrix $M$, the total number of observations $N$, the degree of smoothness $L$, and the dimension of the embedding $K$. Previous work that assumed $M$ was low-rank generally gave a rate of the form $N^{-1}(n\vee p)\operatorname{rank}(M)\log(n+p)$ \citep{bach2008consistency, klopp2014noisy, van2016estimation}. In contrast, our upper bound does not rely on the rank of $M$. Instead, the role of $\operatorname{rank}(M)$ is played by $L$ and $K$. This result reaffirms that the standard matrix completion estimator based on nuclear norm minimization is consistent for matrices with low-dimensional non-linear structure. Perhaps more importantly, it also shows how the convergence rate depends on the degree of smoothness and the dimension of the manifold. This can be seen in the exponent on the right-hand side of \eqref{eq::upper_bound}: $2L/(2L+K)$. Increasing the degree of smoothness moves this exponent towards $1$; increasing the dimension moves it towards $0$. This parallels more standard non-parametric regression problems in smooth hypothesis spaces, where the minimax convergence rate for MSE takes the same form \citep{tsybakov2008introduction}. \section{Minimax Lower Bound}\label{sec::minimax} In this section, we use information-theoretic methods to establish a lower bound on the estimation error for completing \emph{non-linearly embeddable} matrices with entries sampled uniformly at random, when the latent embedding $\bm\Theta$ is $K$-dimensional and satisfies Condition~\ref{cond::bound_derivative}. The rate we find in the lower bound matches the rate obtained by nuclear norm penalization in Theorem~\ref{thm::upper_bound2} up to a log term. Thus our upper bound is sharp (up to a logarithmic factor), and the nuclear-norm-penalization-based estimator given in \eqref{eq::est_proc} is rate-optimal (up to polylog factors) for this problem. To derive the lower bound, we consider underlying matrices $M \in \mathcal{M}(L,\gamma,K)$ as defined in \eqref{eq::mat_class}, i.e., matrices that live in $L$-th order smooth, $K$-dimensional manifolds. Let $\mathbb{P}_M$ denote the probability distribution of the observations $\{(y_t,X_t)\}_{t=1}^N$ generated by model \eqref{eq::data_model} with $\operatorname{E}(y_t|X_t) = \langle X_t, M\rangle$. We give a minimax lower bound on the $\|\cdot\|_F^2$-risk for estimating $M$ in the following result. \begin{theorem}\label{thm::lower_bound} For any given $L\geq 1$, $\gamma>0$ and $K\geq 1$, let $\kappa:=n/p$.
Then, for some constant $A>0$ that depends on $K, L, \gamma, \sigma^2$ and $\kappa$, the minimax risk for estimating $M$ satisfies \begin{equation}\label{eq::lower_bound} \inf_{\hat M}\sup_{M \in \mathcal{M}(L,\gamma,K)} \mathbb{P}_M\left(\frac{1}{np}\left\|\widehat M - M\right\|_F^2 > A \left(\frac{n\vee p}{N}\right)^{\frac{2L}{2L+K}} \right) \ge 1/2, \end{equation} when $c_0^{-\frac{2L+K}{K}}(n\vee p) \le N \le c_0^{-\frac{2L+K}{K}} 0.48^{2L+K} (n\vee p)n^{\frac{2L+K}{K}}$, for some constant $c_0$ that depends on $K, L, \gamma, \sigma^2$ and $\kappa$. \end{theorem} The proof is given in Appendix~\ref{proof_lower_bound}. Comparing Theorem \ref{thm::lower_bound} to Theorem \ref{thm::upper_bound2}, we see that the lower bound matches the upper bound \eqref{eq::upper_bound} up to a logarithmic factor. This shows that the estimator given by \eqref{eq::est_proc} is in fact an optimal estimator (up to a log term) for this non-linear low-dimensional matrix completion regime. We note that the requirement $N = O\left((n\vee p)n^{\frac{2L+K}{K}}\right)$ in Theorem \ref{thm::lower_bound} is a bit unusual. It comes from a technical constraint in our proof, required to construct a suitably large packing set. This may just be an artifact of our proof technique, and not innate to the problem. Recall that the upper bound holds as long as $N \ge (n\vee p) \log^2(n+p)$, so there is a large regime in which the assumptions required for our upper and lower bounds overlap. \section{Simulation Study}\label{sec::sims} In this section, we empirically evaluate the effectiveness of matrix completion using the soft-thresholding estimator $\widehat M$ in \eqref{eq::hat_M} for noisy, incomplete matrices generated from low-dimensional non-linear embeddings (these matrices are full rank, even though they are generated from low-dimensional non-linear embeddings). Here, we only consider the case of a univariate embedding ($K=1$) and aim to empirically evaluate how the Frobenius error $\frac{1}{np}\left\|\widehat M-M\right\|_F^2$ changes with the dimension $n$ when $n=p$. We examine scenarios where the non-linear embeddings have different orders of smoothness. The underlying matrices are generated as described in \eqref{eq::mat_gen}: $m_{ij} = f_j(\bm\theta_{i,\cdot})$ for $i=1,\ldots,n$ and $j = 1,\ldots,p$. In particular, to make sure that Conditions \ref{cond::approx} and \ref{cond::bound_derivative} are satisfied, we generate $f_j$ as \[ f_j(x) = \sum_{b=1}^{\infty}\beta_b\psi_b(x), \] where $\psi_b(x)$ are orthonormal bases in $L_2[0,1]$ defined by: \[ \begin{aligned} \psi_1(x) & = 1,\\ \psi_{2b}(x) & = \sqrt{2}\cos(2\pi b x),\\ \psi_{2b+1}(x) & = \sqrt{2}\sin(2\pi b x). \end{aligned} \] Meanwhile, to set the order of smoothness $L$ and make sure that $\beta_b\psi_b(x)$ vanishes with $b$, we sample the coefficients $\beta_b$ from a uniform distribution: \[ \beta_b \sim_{i.i.d} U\left[-b^{-(L+1)}, b^{-(L+1)}\right], \quad b=1,2,\ldots . \] In this way, we can guarantee that $\sum_{b=1}^\infty b^{2L}\beta_b^2 < \infty$. Thus, $f_j$ is a function whose $L$-th order derivative is $O_p(1)$. In this simulation, for computational reasons, we actually use only the first $100$ basis functions: $f_j(x) = \sum_{b=1}^{100}\beta_b\psi_b(x)$. The underlying embeddings $\bm\theta_{i,\cdot} \in \mathbb{R}$ are also i.i.d. sampled from a uniform distribution $U(0,1)$ for $i=1,\ldots,n$. We set the missingness rate to $\nu = 0.3$: The total number of observed entries is $N=(1-\nu) np$.
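The following Python sketch reproduces this data-generating recipe. The dimensions, seed, and variable names are illustrative; in addition, each column $j$ here receives its own independent coefficient draw (reading the displayed $\beta_b$ as $\beta_{b,j}$), since otherwise all columns of $M$ would coincide.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = p = 500
L, B, nu = 2, 100, 0.3          # smoothness, basis truncation, missingness

def fourier_basis(x, B):
    # psi_1 = 1, psi_{2b} = sqrt(2) cos(2 pi b x),
    # psi_{2b+1} = sqrt(2) sin(2 pi b x)
    cols = [np.ones_like(x)]
    for b in range(1, B // 2 + 1):
        cols.append(np.sqrt(2) * np.cos(2 * np.pi * b * x))
        cols.append(np.sqrt(2) * np.sin(2 * np.pi * b * x))
    return np.column_stack(cols)[:, :B]

theta = rng.uniform(0, 1, size=n)            # theta_i ~ U(0, 1)
Psi = fourier_basis(theta, B)                # n x B matrix of basis values

scale = np.arange(1, B + 1) ** (-(L + 1.0))  # beta_b ~ U[-b^-(L+1), b^-(L+1)]
Beta = rng.uniform(-scale[:, None], scale[:, None], size=(B, p))

M = Psi @ Beta                               # m_ij = f_j(theta_i)
N = int((1 - nu) * n * p)                    # number of observed entries
\end{verbatim}
Combining this with the soft-thresholding sketch above and regressing $\log(\mathrm{MSE})$ on $\log(n)$ over a grid of $n$ recovers the empirical slopes compared in Figure~\ref{fig::simulation}.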
The observed entries are $y_t = \langle X_t, M\rangle + \xi_t$, where $X_t$ are uniformly sampled from $\mathcal{X}$ and the error terms are independent Gaussians, $\xi_t \sim_{i.i.d.} N(0,1)$. We generate random data sets $\{(y_t,X_t)\}_{t=1}^N$ for matrix dimensions $n \in \{500, 1000, 2000, 3000, 5000\}$ and estimate $M$. We run 100 simulations for each size. To select $\lambda$, instead of using cross-validation, we consider an oracle procedure: For each simulation, we estimate the MSE for a set of $\lambda$ values and select the $\lambda$ that minimizes the MSE. We report this MSE of the estimated matrix $\widehat{M}$ and the corresponding $\lambda$. \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{figure/case2_new.pdf} \caption{Theoretical rate vs. empirical rate (in log scale) of the mean squared errors as a function of sample size. The underlying matrices $M$ are generated by $f$ with different orders ($L$) of smoothness. The low-rank embedding is one-dimensional ($K=1$). We regress $\log(\mathrm{MSE})$ on $\log(n)$, and compare the theoretical slopes (left) with the empirical slopes (right). For each smoothness level $L$, we also obtain the 95\% confidence regions using the bootstrap (dashed lines). }\label{fig::simulation} \end{figure} Figure \ref{fig::simulation} shows the results of estimating $M$ generated by non-linear embeddings with different orders of smoothness, $L$. Since $N = (1-\nu)np$, the convergence rate in \eqref{eq::upper_bound} reduces to $O_P\left(\left[\log(2n)/n\right]^{\frac{2L}{2L+K}}\right)$. The log term inside is negligible as $n$ increases. Hence, if we regress $\log(\mathrm{MSE})$ on $\log(n)$, the absolute value of the slope should be roughly $2L/(2L+1)$ ($K=1$ in this simulation). We increase the order of smoothness of $f$ from $L=1$ to $L=5$. For these values of $L$, the expected absolute values of the slope are 0.67, 0.80, 0.86, 0.89, and 0.91. The rates from our simulations are respectively 0.67, 0.78, 0.80, 0.88, and 0.91. There is generally strong agreement between the theoretical and empirical results, except for the setting of $L=3$. We hypothesize that this is due to finite sample issues. \section{Discussion}\label{sec::discussion} Nuclear-norm based matrix completion methods were originally developed for scenarios where the underlying mean matrix has low rank. In this manuscript, we present theoretical results that explain the effectiveness of matrix completion in applications where the underlying mean matrix is not low rank, but instead lives in a low-dimensional smooth manifold. Our results show that, in such scenarios, nuclear-norm regularization can still yield a procedure that is minimax rate optimal (up to a log factor) for recovering the underlying mean matrix. In particular, we give upper bounds on the rate of convergence as a function of the number of rows, columns, and observed entries in the matrix, as well as the smoothness and dimension of the embeddings. We additionally give matching minimax lower bounds (up to a logarithmic factor) for this problem. These bounds are analogous to the minimax rate in standard non-parametric regression. Our theoretical results relate the error bounds to the smoothness and dimension of the non-linear embedding; however, the technical proof does not provide a way to recover the explicit form of the hidden embeddings, which may be of interest in practice, e.g., for dimension reduction.
Modifying the original matrix completion method in order to estimate the hidden embeddings may be an important direction of future research. \newpage
{ "timestamp": "2021-05-06T02:10:00", "yymm": "2105", "arxiv_id": "2105.01874", "language": "en", "url": "https://arxiv.org/abs/2105.01874" }
\section{Introduction} In the last decade, the quality of text-to-speech (TTS) has been greatly improved by the introduction of neural vocoders in combination with end-to-end TTS models like Tacotron \cite{shen2018, wang2017}. More recently, models have been proposed that are able to generate expressive speech, such as Tacotron with Global Style Tokens (GST Tacotron) \cite{wang2018}, Mellotron \cite{valle2020} and Flowtron \cite{valle2020a}. While these models can produce prosodically varied and realistic human-like speech, it is unclear how the prosody can be changed in a meaningful way such that it fulfills paralinguistic functions, like the communication of attitudes, intentions or emotions. In order to find these prosodic representations, one needs to efficiently search the model's latent space. This is an increasingly difficult task for high-dimensional spaces, since not all combinations can be tried within a reasonable time. There are several psychological paradigms that can sample from such spaces using human participants, such as reverse correlation \cite{mangini2004}, Markov Chain Monte Carlo with People (MCMCP) \cite{sanborn2008} and Gibbs Sampling with People (GSP) \cite{harrison2020a}. GSP is a recent paradigm that uses a continuous-sampling task instead of the binary choice task used by the other methods. This greatly increases the information per trial and thus speeds up the parameter search. Here we use GSP to search the prosodic latent space of a trained GST Tacotron model to explore prototypes of emotional prosody. \section{Background} There are two main challenges involved in synthesizing prototypes of emotional prosody. First, one must define a stimulus space, comprising parametric manipulations applied to the sound. Second, one must find an effective way to identify regions of this stimulus space associated with particular emotional prototypes. One way to define the stimulus space is to construct a set of hand-crafted features that capture important aspects of prosody perception, for example pitch slope, jitter, and mean intensity. Previous work \cite{harrison2020a} showed that a simple hand-crafted feature space was sufficient for generating distinctive, well-recognized prosodic prototypes of emotions. However, this approach is fundamentally limited because (i) it makes strong assumptions about which acoustic manipulations are relevant, (ii) not all potentially relevant manipulations to the sound can be made, because changing a single feature (e.g., pitch contour) regardless of other features it is correlated with (e.g., spectral properties of the sound) may create unnatural and distorted speech, and (iii) when changing existing speech recordings, we are essentially changing continuous time series, like pitch or intensity over time. Traditional hand-crafted features such as pitch slope and pitch range struggle to capture the full expressivity of the underlying pitch or intensity contours. Alternatively, the stimulus space may be created in a data-driven fashion. One solution is to use TTS models that factorize audio into separate text and prosody representations. GST Tacotron \cite{wang2018} is one of the most prominent examples of such TTS systems and is an extension of Tacotron, a sequence-to-sequence model that learns the TTS task solely from pairs of recordings and transcripts (see supplementary Figure S1 for the architecture). In GST Tacotron, a few components are added to Tacotron.
A reference encoder \cite{skerry-ryan2018} is added, which compresses the Mel spectrogram to a fixed-length embedding. This embedding is then passed to the so-called `style token layer'. This layer consists of a multi-head attention mechanism, in which the attention given by each head is a similarity measure between the reference embedding and a bank of global style tokens. Based on the attention weights, a weighted average over all global style tokens is computed, which is called the style embedding. Together with the text, the style embedding is passed to the Tacotron model, which creates the predicted spectrogram. While this architecture can create varied speech, the control over prosody is relatively coarse, because the global style tokens are of a fixed length. Newer developments like Mellotron \cite{valle2020} and Flowtron \cite{valle2020a} aim to enhance prosodic control, which is a requirement for speech and song transfer. The second challenge is to identify regions of this space associated with particular emotional prototypes. A naive approach is to manipulate single dimensions independently, and assess the consequences for emotion perception. However, this assumes that the underlying dimensions contribute independently to emotion judgments, which cannot typically be justified. GSP provides a way to overcome this independence assumption: it leverages a well-established algorithm from computational statistics (Gibbs sampling) to identify regions of stimulus spaces associated with given semantic labels, while avoiding any independence assumptions \cite{harrison2020a}. \begin{figure} \centering \includegraphics[width=80mm]{figures/figure_1.pdf} \caption{(A) Example slider in which a user is prompted to move the slider such that the speaker sounds as sad as possible. Moving the slider plays the sound with the selected attention weight (in this case for style H). (B) Schematic depiction of a GSP chain. The chain consists of iterations. At every iteration only one dimension is changed. The colored dots represent choices by single participants. (C) Every slider is visited by 5 different participants. The median answer from all participants is passed to the next iteration (see also B). (D) Example of the GSP process. For simplicity, only dimensions H and I are shown. (E) GSP sliders control the attention weights that are passed to GST Tacotron, which creates the stimulus for this configuration.} \label{fig1} \end{figure} \section{Methods} \subsection{GSP} GSP is an adaptive procedure whereby many participants collaborate to explore a high-dimensional sample space (Figure \ref{fig1}). The participants' responses are organized into sequences of iterations called ``chains'' (Figure \ref{fig1}B). A given iteration in a given chain has fixed values for all but one of the space's dimensions, and leaves the remaining dimension to be manipulated by the participants. In each trial, the participant is assigned to a particular iteration in a particular chain, and presented with a randomly initialized slider that manipulates the free dimension with real-time audio feedback. The participant is instructed to adjust the stimulus until it maximally resembles the target concept (e.g., \textit{sad}; Figure \ref{fig1}A). In our implementation, 5 different participants contribute trials for a given iteration in a given chain, and their responses are aggregated by taking the median (Figure \ref{fig1}C).
This aggregated value is then propagated to the next iteration, where a different dimension is manipulated. This procedure is repeated multiple times, cycling through each of the dimensions of the sample space (Figure \ref{fig1}D). The resulting process can be interpreted as a Gibbs sampler, a well-known algorithm from computational statistics for sampling from high-dimensional probability distributions \cite{harrison2020a}. In the current experiment, participants change the attention weights of one of the 10 global style tokens.\footnote{We found that the attention weights of the four heads correlate with each other (average correlation of \textit{r} = .65). We therefore decided to reduce the dimensionality of the sample space by fixing each head to receive the same attention weight.} The participants are prompted to adjust a slider to make the speaker sound like a given emotion (see Figure \ref{fig1}A). The range of all dimensions is constrained to [-0.24, 0.38], corresponding to a 94\% confidence interval of the attention weights given by the model on the training data, so as to minimize distortions. Every slider contains 32 equally-spaced slider positions. Since the synthesis of the stimuli must happen in real time during the experiment, we used a Griffin-Lim vocoder for synthesis, finding that it achieved a good compromise between quality and speed (sound examples in the supplemental material). Every chain is initialized at 0 for every dimension, because extreme slider values can cause distortions to the signal. \subsection{Synthesis model} We train the model\footnote{https://github.com/syang1993/gst-tacotron} for 380,000 epochs using the same corpus (Blizzard Challenge 2013) and hyperparameters as the original paper \cite{wang2018}. When synthesizing from the model, we set the attention weights (Figure \ref{fig1}E) directly from the current location of the relevant GSP chain in the sample space (Figure \ref{fig1}B), generating one output for each of the 32 possible slider positions. The participant would then select from these different outputs using the slider (Figure \ref{fig1}A). \subsection{Material} We use three phonologically balanced and semantically neutral sentences from the Harvard sentence text corpus \cite{harvardsentences}, and study three emotions: \textit{anger}, \textit{happiness} and \textit{sadness}. During the initialization of the experiment, a single sentence and emotion is assigned to every chain, such that every sentence and every emotion occurs equally often and the assignment is balanced across chains. \begin{figure*}[h!] \centering \includegraphics[width=175mm]{figures/figure_2.pdf} \caption{(A) Example validation trial. Audio plays automatically and the user is prompted to answer. (B) Average ratings for the initial sample (iteration 0), binned iterations 1–4, 5–8, 9–12, 13–16, 17–20, and the ratings for the transfer and random samples (95\% confidence intervals). (C) Contrast between ratings (95\% confidence intervals). (D) Principal Component Analysis on style embeddings of 39 chains at iterations 9–20. (E) Development over iterations in the PC style embedding space at iterations 0–5. (F) Comparison between the previous study (changing specific acoustic features with Praat) and the current study (using GST Tacotron).} \label{fig2} \end{figure*} \subsection{Experiments} 130 US participants (61 female, 1 prefer not to say, 68 male) engaged in the experiment. The age ranged from 18 to 59 years (\textit{M} = 36, \textit{SD} = 10).
Before the experiment, the participant completes three practice trials to get acquainted with the task. We terminated the experiment after 48 hours, at which point 39 of the 45 chains were full (20 iterations). In a separate validation experiment, participants (\textit{N} = 82) rated how well samples matched each emotion on a four-point scale (see Figure \ref{fig2}A). The validation included stimuli generated in the 39 full chains of the first experiment (i.e., the chains at different iterations of the experiment) as well as 18 random samples. We created 156 transfer stimuli by applying the median attention weights of the final GSP iteration to four novel sentences from the Harvard sentence corpus. These stimuli were also rated in the validation. On average, every stimulus was rated 4.5 times for every emotion. \subsection{Participants} All participants were recruited from Amazon Mechanical Turk (AMT) and provided informed consent in accordance with the Max Planck Society Ethics Council approved protocol (application 2020\_05). Participants were paid \$9/hour. Requirements for participation included a minimum age of 18 years, a 95\% or higher approval rate on previous tasks on AMT (which helps to recruit reliable participants), residency in the US and wearing headphones (participants had to pass a pre-screening headphone check \cite{woods2017headphone}). Participant recruitment was managed by PsyNet \cite{harrison2020a}, a framework under development for implementing complex experiment paradigms such as GSP and MCMCP. This framework builds on the Dallinger platform\footnote{https://github.com/Dallinger/Dallinger} for experiment hosting and deployment. \subsection{Acoustic analysis} In order to compare the current results with our findings from previous work \cite{harrison2020a}, we computed a similar set of acoustic features to those manipulated in the previous experiment. Duration- and pitch-related slider positions were well recovered from the acoustic signal, whereas this was not the case for the applied jitter and tremolo effects. We extracted duration, F\textsubscript{0} slope, mean and range\footnote{To make the range more robust to octave jumps, we do not use the min-max range, but compute the standard deviation of the mean-centered pitch points.}, as well as shimmer (local) and jitter (ddp). All features were extracted with Praat \cite{praat} through a Python wrapper \cite{parselmouth}. To complement this hand-crafted feature set, we also computed a larger standard feature set (eGeMAPS) developed for detecting emotions from speech \cite{eyben2013, eyben2016}. \section{Results and discussion} \subsection{Validation} As illustrated in Figure \ref{fig2}B, the ratings for the intended emotion steadily increase over the course of the iterations, whereas the ratings for non-intended emotions plateau or drop. Moreover, there seem to be imbalances in the ratings of the initial and the random samples, representing some perceptual biases (e.g., iteration 0 sounds more happy than sad). To control for these imbalances, we compute the ``contrast'' between the ratings, corresponding to the mean rating for the intended emotion minus the mean rating for the non-intended emotions (Figure \ref{fig2}C). The contrast shows that the intended emotion reliably achieves higher ratings than the non-intended emotions. Consistent with the previous results, the contrast steadily increases over the iterations, but is close to 0 for the random samples and the initial sample (iteration 0).
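As a sketch, the contrast for a single stimulus can be computed directly from its per-emotion mean ratings; the data layout and names below are illustrative, not the study's actual analysis code.
\begin{verbatim}
import numpy as np

def contrast(ratings, intended):
    """Mean rating for the intended emotion minus the mean of the
    per-emotion mean ratings for the non-intended emotions."""
    on_target = np.mean(ratings[intended])
    off_target = np.mean(
        [np.mean(v) for k, v in ratings.items() if k != intended]
    )
    return on_target - off_target

# Toy example: 4-point-scale ratings for one stimulus intended as sad.
example = {"sad": [4, 3, 4], "happy": [1, 2, 1], "angry": [2, 1, 2]}
print(contrast(example, "sad"))  # positive => recognized as intended
\end{verbatim}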
\subsection{Transfer} Figures \ref{fig2}B and \ref{fig2}C also show average ratings for stimuli created by applying the derived attention weights to new sentences. These stimuli obtain high ratings, indicating that this transfer process worked remarkably well (see supplementary materials for audio examples). \subsection{Structure of the latent space} To investigate whether the emotional sentences group together in the TTS latent space, we performed a Principal Component Analysis on the style embeddings of all stimuli in the experiment. Figure \ref{fig2}D depicts the first two principal components for iterations 9–20. The figure shows that the three emotions separate moderately well on these two components. This grouping emerges relatively early on, providing additional support for early convergence of the GSP process (Figure \ref{fig2}E). \subsection{Comparison to Harrison et al. (2020)} In a previous study \cite{harrison2020a}, we used GSP to sample prototypes of emotional prosody while explicitly manipulating duration-, loudness- and pitch-related features. Stimuli in the later iterations of the chains in both experiments are well recognized, as shown by validation experiments. However, Figure \ref{fig2}F shows that profiles computed on the stimuli from Harrison et al. (2020) look rather different from the profiles in the current study (\textit{r}(16) = .27, \textit{p} = \textit{ns}). Since these features only cover a very constrained acoustic space, we also compute the correlation between both studies on the larger feature set eGeMAPS. Again, the correlation between both experiments is low (\textit{r}(256) = .31, \textit{p} $<$ .001), indicating that our GST experiment identifies different regions of the prosodic space than the experiment described in Harrison et al. (2020). To further address this question, we perform a 4-fold classification on both stimulus sets (linear kernel, C values: 1e-5, \dots, 1e-1, 1). We include the last two iterations from Harrison et al. (2020) and iterations 9–20 from the current experiment to have a similar number of stimuli per experiment, and made sure every emotion occurs equally often in every fold. We observed that the Unweighted Average Recall (UAR) is high for both experiments: Harrison et al. (2020) obtains 75.0\% UAR and the current experiment 79.4\% UAR (chance: 33.3\% UAR). However, when predicting Harrison et al. (2020) with the current results or vice versa, we obtain a lower UAR (49.1\% and 48.6\%, respectively). These results suggest that both GSP methods generate samples with emotional states that occupy distinct parts of the feature space. However, there is only partial overlap between the features generated by the two methods. There are multiple potential explanations for this finding. First, the constrained feature set used in Harrison et al. (2020) might have forced participants to rely heavily on particular prosodic features that otherwise might be treated only as secondary emotional cues. Second, the two experiments rely on different speakers, and differences in their voices and accents may contribute to differences in emotional prototypes. Likewise, differences in the spoken texts may have contributed to differences in the resulting prototypes. These possibilities deserve further exploration. \subsection{Limitations and outlook} One clear limitation of the present study is that the prototypes might be stereotypical and might not fully represent how emotions are communicated in real life \cite{anikin2017a, barrett2019}.
Future research could address this issue by replacing the discrete emotion labels in the GSP paradigm with descriptions of real-life emotional situations. Future research could further improve the parametrization of the latent space, for example by relaxing the constraint that each head receives the same attention weight, using different TTS models, and training on different datasets. Most importantly, future research will need to test more heterogeneous populations and also train on non-western and non-English corpora in order to make valid claims about emotional prosody and to develop robust applications \cite{henrich2010}. \section{Conclusion} In this paper, we used Gibbs Sampling with People together with a trained GST Tacotron model in order to explore prototypes of emotional prosody. Our results show that (1) particular regions of the model's latent space are reliably associated with particular emotions, (2) the emotional prototypes are well-recognized by human raters, and (3) the emotional prototypes can be transferred to new sentences. We showed that the emotional prototypes occupy different positions in the TTS latent space and do so from early stages of the experiment, indicating early convergence. Finally, we found interesting acoustic differences between the current study and Harrison et al. (2020), which should be explored in future research by carefully comparing emotional prototypes created with hand-crafted acoustic manipulations versus those created by TTS models. All in all, GSP in combination with GST Tacotron seems to be a useful and efficient tool for studying emotional prototypes, for exploring speaking styles in existing TTS systems, and for generating new emotional sentences based on pre-existing speech recordings. \section{Acknowledgments} This work has partially been funded by the European Union Horizon 2020 research and innovation programme, grant agreement 856879. \bibliographystyle{IEEEtran}
{ "timestamp": "2021-05-06T02:11:08", "yymm": "2105", "arxiv_id": "2105.01891", "language": "en", "url": "https://arxiv.org/abs/2105.01891" }
\section*{Acknowledgments} \medskip \noindent\textbf{Disclosures.} The authors declare no conflicts of interest.
{ "timestamp": "2021-05-06T02:09:00", "yymm": "2105", "arxiv_id": "2105.01847", "language": "en", "url": "https://arxiv.org/abs/2105.01847" }